In the Linux kernel, the following vulnerability has been resolved:
mm: fix a UAF when vma->mm is freed after vma->vm_refcnt got dropped
By inducing delays in the right places, Jann Horn created a reproducer for a hard-to-hit UAF issue that became possible after VMAs were allowed to be recycled by adding SLAB_TYPESAFE_BY_RCU to their cache.
Race description is borrowed from Jann's discovery report: lock_vma_under_rcu() looks up a VMA locklessly with mas_walk() under rcu_read_lock(). At that point, the VMA may be concurrently freed, and it can be recycled by another process. vma_start_read() then increments the vma->vm_refcnt (if it is in an acceptable range), and if this succeeds, vma_start_read() can return a recycled VMA.
In this scenario where the VMA has been recycled, lock_vma_under_rcu() will then detect the mismatching ->vm_mm pointer and drop the VMA through vma_end_read(), which calls vma_refcount_put(). vma_refcount_put() drops the refcount and then calls rcuwait_wake_up() using a copy of vma->vm_mm. This is wrong: it implicitly assumes that the caller is keeping the VMA's mm alive, but in this scenario the caller has no relation to the VMA's mm, so the rcuwait_wake_up() can cause UAF.
The diagram depicting the race:

T1                                         T2                           T3
==                                         ==                           ==
lock_vma_under_rcu
  mas_walk
                                           <VMA gets removed from mm>
                                                                        mmap
                                                                        <the same VMA is reallocated>
  vma_start_read
    __refcount_inc_not_zero_limited_acquire
                                                                        munmap
                                                                          __vma_enter_locked
                                                                            refcount_add_not_zero
  vma_end_read
    vma_refcount_put
      __refcount_dec_and_test
                                                                        rcuwait_wait_event
                                                                        <finish operation>
      rcuwait_wake_up [UAF]
Note that rcuwait_wait_event() in T3 does not block because the refcount was already dropped by T1. At this point T3 can exit and free the mm, causing UAF in T1.
To avoid this we move the vma->vm_mm verification into vma_start_read() and grab vma->vm_mm to stabilize it before the vma_refcount_put() operation.
[surenb@google.com: v3]
{
"cna_assigner": "Linux",
"osv_generated_from": "https://github.com/CVEProject/cvelistV5/tree/main/cves/2025/38xxx/CVE-2025-38554.json"
}