In the Linux kernel, the following vulnerability has been resolved:
binder: fix UAF of alloc->vma in race with munmap()
[ cmllamas: clean forward port from commit 015ac18be7de ("binder: fix UAF of alloc->vma in race with munmap()") in 5.10 stable. It is needed in mainline after the revert of commit a43cfc87caaf ("android: binder: stop saving a pointer to the VMA") as pointed out by Liam. The commit log and tags have been tweaked to reflect this. ]
In commit 720c24192404 ("ANDROID: binder: change down_write to down_read") binder assumed the mmap read lock is sufficient to protect alloc->vma inside binder_update_page_range(). This used to be accurate until commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap"), which now downgrades the mmap_lock after detaching the vma from the rbtree in munmap(). Then it proceeds to tear down and free the vma with only the read lock held.
This means that accesses to alloc->vma in binder_update_page_range() will now race with vm_area_free() in munmap() and can cause a UAF as shown in the following KASAN trace:
==================================================================
BUG: KASAN: use-after-free in vm_insert_page+0x7c/0x1f0
Read of size 8 at addr ffff16204ad00600 by task server/558
CPU: 3 PID: 558 Comm: server Not tainted 5.10.150-00001-gdc8dcf942daa #1
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x0/0x2a0
 show_stack+0x18/0x2c
 dump_stack+0xf8/0x164
 print_address_description.constprop.0+0x9c/0x538
 kasan_report+0x120/0x200
 __asan_load8+0xa0/0xc4
 vm_insert_page+0x7c/0x1f0
 binder_update_page_range+0x278/0x50c
 binder_alloc_new_buf+0x3f0/0xba0
 binder_transaction+0x64c/0x3040
 binder_thread_write+0x924/0x2020
 binder_ioctl+0x1610/0x2e5c
 __arm64_sys_ioctl+0xd4/0x120
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0
Allocated by task 559:
 kasan_save_stack+0x38/0x6c
 __kasan_kmalloc.constprop.0+0xe4/0xf0
 kasan_slab_alloc+0x18/0x2c
 kmem_cache_alloc+0x1b0/0x2d0
 vm_area_alloc+0x28/0x94
 mmap_region+0x378/0x920
 do_mmap+0x3f0/0x600
 vm_mmap_pgoff+0x150/0x17c
 ksys_mmap_pgoff+0x284/0x2dc
 __arm64_sys_mmap+0x84/0xa4
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0
Freed by task 560:
 kasan_save_stack+0x38/0x6c
 kasan_set_track+0x28/0x40
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0x100/0x164
 kasan_slab_free+0x14/0x20
 kmem_cache_free+0xc4/0x34c
 vm_area_free+0x1c/0x2c
 remove_vma+0x7c/0x94
 __do_munmap+0x358/0x710
 __vm_munmap+0xbc/0x130
 __arm64_sys_munmap+0x4c/0x64
 el0_svc_common.constprop.0+0xac/0x270
 do_el0_svc+0x38/0xa0
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xe8/0x114
 el0_sync+0x180/0x1c0
[...]
==================================================================
To prevent the race above, revert to taking the mmap write lock inside binder_update_page_range(). One might expect an increase in mmap lock contention. However, binder already serializes these calls via the top-level alloc->mutex. Also, no performance impact was observed when running the binder benchmark tests.
{
"osv_generated_from": "https://github.com/CVEProject/cvelistV5/tree/main/cves/2023/54xxx/CVE-2023-54157.json",
"cna_assigner": "Linux"
}