CVE-2024-47741

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-47741
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2024-47741.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-47741
Related
Published
2024-10-21T13:15:04Z
Modified
2024-10-23T22:46:14.687562Z
Severity
  • 7.0 (High) CVSS_V3 - CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

btrfs: fix race setting file private on concurrent lseek using same fd

When doing concurrent lseek(2) system calls against the same file descriptor, using multiple threads belonging to the same process, there is a short time window where a race can happen and result in a memory leak.

The race happens like this:

1) A program opens a file descriptor for a file and then spawns two threads (with the pthreads library, for example); let's call them task A and task B;

2) Task A calls lseek with SEEK_DATA or SEEK_HOLE and ends up at file.c:find_desired_extent() while holding a read lock on the inode;

3) At the start of find_desired_extent(), it extracts the file's private_data pointer into a local variable named 'private', which has a value of NULL;

4) Task B also calls lseek with SEEK_DATA or SEEK_HOLE, locks the inode in shared mode and enters file.c:find_desired_extent(), where it also extracts file->private_data into its local variable 'private', which has a NULL value;

5) Because it saw a NULL file private, task A allocates a private structure and assigns it to the file structure;

6) Task B also saw a NULL file private so it also allocates its own file private and then assigns it to the same file structure, since both tasks are using the same file descriptor.

At this point we leak the private structure allocated by task A.
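The scenario above can be driven from userspace with a small program such as the following sketch. It is illustrative only: the file path is a placeholder, the file must live on a btrfs filesystem, and actually hitting the race window is timing-dependent.

#define _GNU_SOURCE   /* for SEEK_DATA */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int fd;

/* Each thread issues lseek(2) with SEEK_DATA against the shared fd,
 * which on btrfs reaches file.c:find_desired_extent(). */
static void *do_lseek(void *arg)
{
    (void)arg;
    lseek(fd, 0, SEEK_DATA);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    fd = open("/mnt/btrfs/testfile", O_RDONLY);   /* placeholder path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Tasks A and B from the description above. */
    pthread_create(&a, NULL, do_lseek, NULL);
    pthread_create(&b, NULL, do_lseek, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    close(fd);
    return 0;
}

Build with "gcc -pthread"; one run rarely hits the window, so a real reproducer would loop this many times.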

Besides the memory leak, there's also the detail that both tasks end up using the same cached state record in the private structure (struct btrfs_file_private::llseek_cached_state), which can result in a use-after-free problem, since one task can free it while the other is still using it (only one task took a reference count on it). Also, sharing the cached state is not a good idea since it could produce incorrect results in the future; right now it should not be a problem because it ends up being used only in extent-io-tree.c:count_range_bits(), where we do range validation before using the cached state.
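A minimal userspace simulation of the racy pattern described above (not the kernel code itself; the names file_private and llseek_cached_state here merely mirror the kernel structures): an unsynchronized check-then-allocate lets both threads observe a NULL pointer, so the later assignment overwrites and leaks the earlier allocation, and both threads then share the survivor's cached state.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct btrfs_file_private and its cached state. */
struct file_private { void *llseek_cached_state; };

static struct file_private *private_data;   /* stands in for file->private_data */

/* The buggy pattern: read, test, allocate and assign with no
 * synchronization. Both threads can see NULL; the second assignment
 * overwrites the first allocation, which becomes unreachable (leaked),
 * and both threads go on to use the surviving private's cached state. */
static void *lseek_path(void *arg)
{
    struct file_private *private = private_data;

    (void)arg;
    if (!private) {
        private = calloc(1, sizeof(*private));
        private_data = private;   /* deliberate data race, as in the bug */
    }
    /* ... both tasks would now use private_data->llseek_cached_state ... */
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, lseek_path, NULL);
    pthread_create(&b, NULL, lseek_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("surviving private: %p\n", (void *)private_data);
    return 0;
}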

Fix this by protecting the check and assignment of the file's private while holding the inode's spinlock, and keep track of the task that allocated the private, so that it's used only by that task. This prevents use-after-free issues with the cached state record as well as potentially incorrect use of it in the future.
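A minimal userspace sketch of that locking pattern, with a pthread spinlock standing in for the inode's spinlock (the upstream fix additionally records the allocating task so that only it uses the cached state, which is not modeled here): the pointer is re-checked under the lock, exactly one allocation is installed, and the losing task frees its copy instead of leaking it.

#include <pthread.h>
#include <stdlib.h>

struct file_private { void *llseek_cached_state; };

static struct file_private *private_data;   /* stands in for file->private_data */
static pthread_spinlock_t inode_lock;       /* stands in for the inode spinlock */

/* Allocate optimistically outside the lock, then re-check and assign
 * under the lock: only one task's allocation wins the race, and the
 * other task releases its allocation rather than leaking it. */
static struct file_private *get_private(void)
{
    struct file_private *private = private_data;
    struct file_private *tmp;

    if (private)
        return private;

    tmp = calloc(1, sizeof(*tmp));
    pthread_spin_lock(&inode_lock);
    if (!private_data) {
        private_data = tmp;   /* this task's allocation is installed */
        tmp = NULL;
    }
    private = private_data;   /* re-read under the lock */
    pthread_spin_unlock(&inode_lock);
    free(tmp);                /* losing allocation is freed, not leaked */
    return private;
}

int main(void)
{
    pthread_spin_init(&inode_lock, PTHREAD_PROCESS_PRIVATE);
    get_private();
    free(private_data);
    pthread_spin_destroy(&inode_lock);
    return 0;
}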

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.11.2-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1

Ecosystem specific

{
    "urgency": "not yet assigned"
}