CVE-2024-35970

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-35970
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2024-35970.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-35970
Published
2024-05-20T10:15:11Z
Modified
2024-09-18T03:26:21.745948Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

af_unix: Clear stale u->oob_skb.

syzkaller started to report deadlock of unix_gc_lock after commit 4090fa373f0e ("af_unix: Replace garbage collection algorithm."), but it just uncovers the bug that has been there since commit 314001f0bf92 ("af_unix: Add OOB support").

The repro basically does the following.

from socket import *
from array import array

c1, c2 = socketpair(AF_UNIX, SOCK_STREAM)
c1.sendmsg([b'a'], [(SOL_SOCKET, SCM_RIGHTS, array("i", [c2.fileno()]))], MSG_OOB)
c2.recv(1)  # blocked as no normal data in recv queue

c2.close()  # done async and unblock recv()
c1.close()  # done async and trigger GC

A socket sends its own file descriptor to itself as OOB data and tries to receive normal data, but recv() eventually fails due to the asynchronous close().

The problem here is wrong handling of the OOB skb in manage_oob(). When recvmsg() is called without MSG_OOB, manage_oob() is called to check if the peeked skb is the OOB skb. In such a case, manage_oob() pops it out of the receive queue but does not clear unix_sk(sk)->oob_skb. This is wrong in terms of uAPI.

Let's say we send "hello" with MSG_OOB, and "world" without MSG_OOB. The 'o' is handled as OOB data. When recv() is called twice without MSG_OOB, the OOB data should be lost.

>>> from socket import *
>>> c1, c2 = socketpair(AF_UNIX, SOCK_STREAM, 0)
>>> c1.send(b'hello', MSG_OOB)  # 'o' is OOB data
5
>>> c1.send(b'world')
5
>>> c2.recv(5)  # OOB data is not received
b'hell'
>>> c2.recv(5)  # OOB data is skipped
b'world'
>>> c2.recv(5, MSG_OOB)  # This should return an error
b'o'

In the same situation, TCP actually returns -EINVAL for the last recv().

Also, if we do not clear unix_sk(sk)->oob_skb, unix_poll() always sets EPOLLPRI even though the data has already been passed by a previous recv().
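The EPOLLPRI signalling described here is observable from user space with select.poll(). The following minimal sketch is not part of the advisory; it assumes a Linux kernel with AF_UNIX MSG_OOB support (5.15 or later) and shows that POLLPRI is reported while an OOB byte is pending:

```python
import select
from socket import socketpair, AF_UNIX, SOCK_STREAM, MSG_OOB

a, b = socketpair(AF_UNIX, SOCK_STREAM)
a.send(b'x', MSG_OOB)                    # one pending OOB byte on b's side

p = select.poll()
p.register(b.fileno(), select.POLLPRI)
events = p.poll(100)                     # oob_skb set -> kernel reports POLLPRI
pri_pending = bool(events) and bool(events[0][1] & select.POLLPRI)
```

On a kernel with the bug, the same POLLPRI indication would persist even after the OOB byte has been skipped by plain recv() calls, because the stale oob_skb pointer is still set.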

To avoid these issues, we must clear unix_sk(sk)->oob_skb when dequeuing it from the receive queue.
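Whether a given kernel carries the fix can be probed from user space by replaying the sequence above. This hypothetical check is not from the advisory; it assumes AF_UNIX MSG_OOB support and accepts both the fixed and the unfixed outcome for the final recv():

```python
from socket import socketpair, AF_UNIX, SOCK_STREAM, MSG_OOB

c1, c2 = socketpair(AF_UNIX, SOCK_STREAM)
c1.send(b'hello', MSG_OOB)        # 'o' becomes the OOB byte
c1.send(b'world')

first = c2.recv(5)                # b'hell': a normal read stops at the OOB mark
second = c2.recv(5)               # b'world': the OOB byte is skipped

try:
    stale = c2.recv(5, MSG_OOB)   # unfixed kernel: returns the stale b'o'
except OSError:
    stale = None                  # fixed kernel: fails with EINVAL, matching TCP
```

On a patched kernel the last call raises OSError (EINVAL), mirroring the TCP behaviour noted above; on an unpatched one it returns the leftover b'o'.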

The old GC did not trigger the deadlock because it relied on the receive queue to detect the loop.

When it is triggered, the socket with OOB data is marked as a GC candidate because its file refcount equals its inflight count (1). However, after traversing all inflight sockets, the socket still has a positive inflight count (1), so it is excluded from the candidates. The old GC thus loses the chance to garbage-collect the socket.

With the old GC, the repro continues to create true garbage that will never be freed nor detected by kmemleak as it's linked to the global inflight list. That's why we couldn't even notice the issue.

References

Affected packages

Debian:12 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.1.90-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.8.9-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}