In the Linux kernel, the following vulnerability has been resolved:

mm/vmalloc: fix page mapping if vm_area_alloc_pages() with high order fallback to order 0

__vmap_pages_range_noflush() assumes its argument pages** contains pages of the same page shift. However, since commit e9c3cda4d86e ("mm, vmalloc: fix high order __GFP_NOFAIL allocations"), if gfp_flags includes __GFP_NOFAIL with high order in vm_area_alloc_pages() and the high-order allocation fails, pages** may contain pages of two different page shifts (high order and order-0). This could lead __vmap_pages_range_noflush() to perform incorrect mappings, potentially resulting in memory corruption.

Users might encounter this as follows (vmap_allow_huge = true, 2M is for PMD_SIZE):

  kvmalloc(2M, __GFP_NOFAIL|GFP_X)
    __vmalloc_node_range_noprof(vm_flags=VM_ALLOW_HUGE_VMAP)
      vm_area_alloc_pages(order=9)
        ---> order-9 allocation failed and fallback to order-0
      vmap_pages_range()
        vmap_pages_range_noflush()
          __vmap_pages_range_noflush(page_shift = 21) ----> wrong mapping happens

We can remove the fallback code because if a high-order allocation fails, __vmalloc_node_range_noprof() will retry with order-0. The fallback to order-0 inside vm_area_alloc_pages() is therefore unnecessary; fix this by removing it.