CVE-2024-38306

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-38306
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2024-38306.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-38306
Related
Published
2024-06-25T15:15:13Z
Modified
2024-09-18T03:26:25.816649Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

btrfs: protect folio::private when attaching extent buffer folios

[BUG] Since v6.8 there have been rare kernel crashes reported by various people; the common factor is bad page state error messages like this:

  BUG: Bad page state in process kswapd0  pfn:d6e840
  page: refcount:0 mapcount:0 mapping:000000007512f4f2 index:0x2796c2c7c pfn:0xd6e840
  aops:btree_aops ino:1
  flags: 0x17ffffe0000008(uptodate|node=0|zone=2|lastcpupid=0x3fffff)
  page_type: 0xffffffff()
  raw: 0017ffffe0000008 dead000000000100 dead000000000122 ffff88826d0be4c0
  raw: 00000002796c2c7c 0000000000000000 00000000ffffffff 0000000000000000
  page dumped because: non-NULL mapping

[CAUSE] Commit 09e6cef19c9f ("btrfs: refactor alloc_extent_buffer() to allocate-then-attach method") changes the sequence when allocating a new extent buffer.

Previously we always called grab_extent_buffer() under mapping->i_private_lock, to ensure the safety of modifications to folio::private (which is a pointer to the extent buffer for regular sectorsize).
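
Schematically, that pre-refactor attach path looked roughly like the sketch below (illustrative kernel-style C, not verbatim btrfs code; everything except the lock and the generic folio helpers is simplified, and the grab_extent_buffer() signature shown here is an assumption). After commit 09e6cef19c9f the equivalent grab/attach step can run without the lock held.

    /*
     * Sketch of the original pattern (illustrative, not verbatim btrfs code):
     * folio::private is only read and attached while i_private_lock is held.
     */
    spin_lock(&mapping->i_private_lock);
    existing_eb = grab_extent_buffer(fs_info, folio);   /* reuse an eb already in folio::private; signature illustrative */
    if (!existing_eb)
        folio_attach_private(folio, eb);                /* otherwise attach the new eb */
    spin_unlock(&mapping->i_private_lock);
    /* After the allocate-then-attach refactor, this step no longer runs under the lock. */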

This can lead to the following race:

Thread A is trying to allocate an extent buffer at bytenr X, with 4 4K pages, while thread B is trying to release the page at X + 4K (the second page of the extent buffer at X).

           Thread A                 |                 Thread B
-----------------------------------+-------------------------------------
                                    | btree_release_folio()
                                    | |- This is for the page at X + 4K,
                                    | |  Not page X.
                                    | |
alloc_extent_buffer()               | |- release_extent_buffer()
|- filemap_add_folio() for the      | |  |- atomic_dec_and_test(eb->refs)
|  page at bytenr X (the first      | |  |
|  page).                           | |  |
|  Which returned -EEXIST.          | |  |
|                                   | |  |
|- filemap_lock_folio()             | |  |
|  Returned the first page locked.  | |  |
|                                   | |  |
|- grab_extent_buffer()             | |  |
|  |- atomic_inc_not_zero()         | |  |
|  |  Returned false                | |  |
|  |- folio_detach_private()        | |  |- folio_detach_private() for X
|     |- folio_test_private()       | |     |- folio_test_private()
|     |  Returned true              | |     |  Returned true
|     |- folio_put()                | |     |- folio_put()

Now there are two puts on the same folio X, leading to a refcount underflow of folio X and eventually triggering the BUG_ON() on page->mapping.

The condition is not that easy to hit:

  • The release must be triggered for a middle page of an eb. If the release is on the first page of an eb, the page lock would kick in and prevent the race.

  • folio_detach_private() has a very small race window: it sits between folio_test_private() and folio_clear_private() (see the simplified sketch after this list).
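
For reference, the check-then-clear shape that creates this window looks roughly like the following (a simplified sketch of folio_detach_private(), not the exact kernel source):

    /* Simplified sketch of folio_detach_private(); not the exact kernel source. */
    static inline void *folio_detach_private(struct folio *folio)
    {
        void *data = folio_get_private(folio);

        if (!folio_test_private(folio))  /* both racing threads can pass this test ...   */
            return NULL;
        folio_clear_private(folio);      /* ... before either of them clears the flag,   */
        folio->private = NULL;
        folio_put(folio);                /* so the put runs twice -> refcount underflow  */

        return data;
    }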

That is exactly the race that mapping->i_private_lock is used to prevent, and commit 09e6cef19c9f ("btrfs: refactor alloc_extent_buffer() to allocate-then-attach method") screwed that up.

At that time, I thought the page lock would kick in because filemap_release_folio() also requires the page to be locked, but forgot that filemap_release_folio() only locks one page, not all pages of an extent buffer.

[FIX] Move all the code requiring i_private_lock into attach_eb_folio_to_filemap(), so that everything is done with proper lock protection.

Furthermore, to prevent future problems, add an extra lockdep_assert_locked() to ensure we're holding the proper lock.
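
The resulting pattern is roughly as sketched below (illustrative only, not the exact patch: the helper name attach_eb_folio_to_filemap() comes from the commit text, but its signature, the inner helper, and the surrounding names are assumptions, and lockdep_assert_held() is used here as the standard lockdep annotation for "caller must hold this lock").

    /*
     * Sketch of the fixed pattern (illustrative, not the exact patch): the code
     * that touches folio::private runs entirely under mapping->i_private_lock,
     * and a lockdep assertion catches any future unlocked caller.
     */
    static struct extent_buffer *eb_from_folio_private(struct folio *folio)   /* hypothetical helper */
    {
        lockdep_assert_held(&folio->mapping->i_private_lock);

        if (!folio_test_private(folio))
            return NULL;
        return folio_get_private(folio);
    }

    static int attach_eb_folio_to_filemap(struct extent_buffer *eb,           /* name from the commit;  */
                                          struct folio *folio,                /* signature illustrative */
                                          struct extent_buffer **existing_eb)
    {
        struct address_space *mapping = folio->mapping;
        struct extent_buffer *existing;

        spin_lock(&mapping->i_private_lock);
        existing = eb_from_folio_private(folio);
        if (existing) {
            /* Another eb already owns this folio: reuse it, never detach/put here. */
            *existing_eb = existing;
            spin_unlock(&mapping->i_private_lock);
            return -EEXIST;
        }
        folio_attach_private(folio, eb);
        spin_unlock(&mapping->i_private_lock);
        return 0;
    }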

A reproducer that is able to hit the race (it takes a few minutes with instrumented code inserting delays into alloc_extent_buffer()):

  #!/bin/sh

  drop_caches () {
          while(true); do
                  echo 3 > /proc/sys/vm/drop_caches
                  echo 1 > /proc/sys/vm/compact_memory
          done
  }

  run_tar () {
          while(true); do
                  for x in `seq 1 80` ; do
                          tar cf /dev/zero /mnt > /dev/null &
                  done
                  wait
          done
  }

  mkfs.btrfs -f -d single -m single ---truncated---

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.9.7-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1

Ecosystem specific

{
    "urgency": "not yet assigned"
}