CVE-2024-50066

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-50066
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2024-50066.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-50066
Related
Published
2024-10-23T06:15:10Z
Modified
2024-11-05T22:49:46.037081Z
Severity
  • 7.0 (High) CVSS_V3 - CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

mm/mremap: fix move_normal_pmd/retract_page_tables race

In mremap(), move_page_tables() looks at the type of the PMD entry and the specified address range to figure out by which method the next chunk of page table entries should be moved.
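
To make the decision point concrete, the dispatch looks roughly like this (a simplified, non-authoritative sketch based on the structure of upstream mm/mremap.c; error handling and the huge-PUD/devmap cases are elided):

    /*
     * Sketch of the per-chunk dispatch in move_page_tables().
     * The PMD entry is read and classified here, *before* any
     * rmap locks are taken.
     */
    if (pmd_trans_huge(*old_pmd)) {
            /* The PMD itself maps a huge page: move it as one entry. */
            moved = move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr,
                                   old_pmd, new_pmd);
    } else if (extent == PMD_SIZE) {
            /*
             * The PMD points to a page table and the source range
             * covers it completely: move the whole page table at once.
             */
            moved = move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
                                   old_pmd, new_pmd);
    }
    if (!moved)
            /* Fall back to copying individual PTEs. */
            move_ptes(vma, old_pmd, old_addr, old_addr + extent,
                      new_vma, new_pmd, new_addr, need_rmap_locks);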

At that point, the mmap_lock is held in write mode, but no rmap locks are held yet. For PMD entries that point to page tables and are fully covered by the source address range, move_pgt_entry(NORMAL_PMD, ...) is called, which first takes rmap locks, then does move_normal_pmd(). move_normal_pmd() takes the necessary page table locks at source and destination, then moves an entire page table from the source to the destination.
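
The locking order described above, sketched out (again simplified; in the real code the rmap locking inside move_pgt_entry() is conditional on a need_rmap_locks flag, and other entry types are handled too):

    static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
                               unsigned long old_addr, unsigned long new_addr,
                               void *old_entry, void *new_entry)
    {
            bool moved = false;

            /*
             * Rmap locks are taken only here -- after the caller has
             * already read and classified the PMD entry.
             */
            take_rmap_locks(vma);
            if (entry == NORMAL_PMD)
                    moved = move_normal_pmd(vma, old_addr, new_addr,
                                            old_entry, new_entry);
            drop_rmap_locks(vma);

            return moved;
    }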

The problem is: The rmap locks, which protect against concurrent page table removal by retract_page_tables() in the THP code, are only taken after the PMD entry has been read and it has been decided how to move it. So we can race as follows (with two processes that have mappings of the same tmpfs file that is stored on a tmpfs mount with huge=advise); note that process A accesses page tables through the MM while process B does it through the file rmap:

    process A                      process B
    =========                      =========
    mremap
      mremap_to
        move_vma
          move_page_tables
            get_old_pmd
            alloc_new_pmd
                          *PREEMPT*
                                   madvise(MADV_COLLAPSE)
                                     do_madvise
                                       madvise_walk_vmas
                                         madvise_vma_behavior
                                           madvise_collapse
                                             hpage_collapse_scan_file
                                               collapse_file
                                                 retract_page_tables
                                                   i_mmap_lock_read(mapping)
                                                   pmdp_collapse_flush
                                                   i_mmap_unlock_read(mapping)
            move_pgt_entry(NORMAL_PMD, ...)
              take_rmap_locks
              move_normal_pmd
              drop_rmap_locks

When this happens, move_normal_pmd() can end up creating bogus PMD entries in the line pmd_populate(mm, new_pmd, pmd_pgtable(pmd)). The effect depends on arch-specific and machine-specific details; on x86, you can end up with physical page 0 mapped as a page table, which is likely exploitable for user->kernel privilege escalation.
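
In code terms, the vulnerable window looks roughly like this (a sketch of the pre-fix move_normal_pmd() critical section, abbreviated):

    old_ptl = pmd_lock(mm, old_pmd);
    new_ptl = pmd_lockptr(mm, new_pmd);
    if (new_ptl != old_ptl)
            spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

    /*
     * If a concurrent retract_page_tables() ran pmdp_collapse_flush()
     * between the caller's PMD read and take_rmap_locks(), this reads
     * a stale/none value...
     */
    pmd = *old_pmd;
    pmd_clear(old_pmd);

    /*
     * ...and pmd_pgtable(pmd) is then garbage; on x86 this can install
     * physical page 0 as a page table at the destination.
     */
    pmd_populate(mm, new_pmd, pmd_pgtable(pmd));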

Fix the race by making the mremap path (process A) recheck that the PMD still points to a page table after the rmap locks have been taken. Otherwise, we bail and let the caller fall back to the PTE-level copying path, which will then bail immediately at the pmd_none() check.
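
The fix amounts to a recheck under the page table locks, roughly as follows (a sketch of the added logic, not the verbatim upstream diff):

    pmd = *old_pmd;

    /*
     * Racing with retract_page_tables()/MADV_COLLAPSE: bail if the PMD
     * no longer points to a page table. The caller then falls back to
     * move_ptes(), which returns immediately on its pmd_none() check.
     */
    if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd)))
            goto out_unlock;

    pmd_clear(old_pmd);
    pmd_populate(mm, new_pmd, pmd_pgtable(pmd));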

Bug reachability: Reaching this bug requires that you can create shmem/file THP mappings - anonymous THP uses different code that doesn't zap stuff under rmap locks. File THP is gated on an experimental config flag (CONFIG_READ_ONLY_THP_FOR_FS), so on normal distro kernels you need shmem THP to hit this bug. As far as I know, getting shmem THP normally requires that you can mount your own tmpfs with the right mount flags, which would require creating your own user+mount namespace; though some distros may enable shmem THP by default.
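
As a rough illustration of those preconditions, an unprivileged attacker would first have to get a tmpfs mount with huge=advise and a collapsible shmem mapping. A hypothetical sketch (standard Linux APIs only; uid_map setup, alignment handling, and the actual race with mremap() are omitted; this is not a working exploit):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/mount.h>
    #include <unistd.h>

    #ifndef MADV_COLLAPSE
    #define MADV_COLLAPSE 25        /* available since Linux 6.1 */
    #endif

    int main(void)
    {
            /* Unprivileged path: a user namespace grants CAP_SYS_ADMIN
             * over a fresh mount namespace (uid_map setup omitted). */
            if (unshare(CLONE_NEWUSER | CLONE_NEWNS))
                    return 1;
            /* tmpfs with huge=advise, so MADV_COLLAPSE can create shmem THP. */
            if (mount("tmpfs", "/mnt", "tmpfs", 0, "huge=advise"))
                    return 1;

            int fd = open("/mnt/f", O_RDWR | O_CREAT, 0600);
            if (fd < 0 || ftruncate(fd, 2UL << 20))
                    return 1;
            /* The mapping must be PMD-aligned for collapse to succeed;
             * a real program would align this explicitly. */
            void *p = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;
            for (size_t i = 0; i < (2UL << 20); i += 4096)
                    ((volatile char *)p)[i] = 1;    /* populate the pages */

            /* Process B's step from the trace above: collapse the file
             * pages to a huge page, which goes through
             * retract_page_tables() for other mappings of the file. */
            return madvise(p, 2UL << 20, MADV_COLLAPSE) ? 1 : 0;
    }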

Bug impact: This issue can likely be used for user->kernel privilege escalation when it is reachable.

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version; all previous versions are affected)
Fixed
6.11.5-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1
6.11.2-1
6.11.4-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}