In the Linux kernel, the following vulnerability has been resolved:

mm/huge_memory: fix use of NULL folio in move_pages_huge_pmd()

move_pages_huge_pmd() handles UFFDIO_MOVE for both normal THPs and huge zero pages. For the huge zero page path, src_folio is explicitly set to NULL and is used as a sentinel to skip folio operations like lock and rmap.

In the huge zero page branch, src_folio is NULL, so folio_mk_pmd(NULL, pgprot) passes NULL through folio_pfn() and page_to_pfn(). With SPARSEMEM_VMEMMAP this silently produces a bogus PFN, installing a PMD pointing to non-existent physical memory. On other memory models it is a NULL dereference.

Use page_folio(src_page) to obtain the valid huge zero folio from the page, which was obtained from pmd_page() and remains valid throughout.

After commit d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special"), moved huge zero PMDs must remain special so vm_normal_page_pmd() continues to treat them as special mappings. move_pages_huge_pmd() currently reconstructs the destination PMD in the huge zero page branch, which drops PMD state such as pmd_special() on architectures with CONFIG_ARCH_HAS_PTE_SPECIAL. As a result, vm_normal_page_pmd() can treat the moved huge zero PMD as a normal page and corrupt its refcount.

Instead of reconstructing the PMD from the folio, derive the destination entry from src_pmdval after pmdp_huge_clear_flush(), then handle the PMD metadata the same way move_huge_pmd() does for moved entries by marking it soft-dirty and clearing uffd-wp.