CVE-2024-43855

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-43855
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2024-43855.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-43855
Published
2024-08-17T10:15:10Z
Modified
2024-09-18T03:26:35.792693Z
Severity
  • 5.5 (Medium) CVSS_V3 - CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

md: fix deadlock between mddev_suspend and flush bio

Deadlock occurs when mddev is being suspended while some flush bio is in progress. It is a complex issue.

T1. The first flush is at its ending stage: it clears 'mddev->flush_bio' and tries to submit data, but is blocked because mddev is suspended by T4.
T2. The second flush sets 'mddev->flush_bio' and attempts to queue md_submit_flush_data(), which is already running (T1) and won't execute again if queued on the same CPU as T1.
T3. The third flush increments active_io and tries to flush, but is blocked because 'mddev->flush_bio' is not NULL (set by T2).
T4. mddev_suspend() is called and waits for active_io to drop to 0, which was incremented by T3.

T1 (flush 1)                T2 (flush 2)              T3 (flush 3)              T4 (suspend)
md_submit_flush_data
 mddev->flush_bio = NULL;
 .                          md_flush_request
 .                           mddev->flush_bio = bio
 .                           queue submit_flushes
 .                          .
 .                          .                         md_handle_request
 .                          .                          active_io + 1
 .                          .                          md_flush_request
 .                          .                           wait !mddev->flush_bio
 .                          .                         .
 .                          .                         .                         mddev_suspend
 .                          .                         .                          wait !active_io
 .                          .                         .
 .                          submit_flushes
 .                           queue_work md_submit_flush_data
 .                           // md_submit_flush_data is already running (T1)
 md_handle_request
  wait resume

The root issue is the non-atomic inc/dec of active_io during the flush process: active_io is decremented before md_submit_flush_data() is queued, and incremented again soon after md_submit_flush_data() runs.

md_flush_request
 active_io + 1
 submit_flushes
  active_io - 1
md_submit_flush_data
 md_handle_request
  active_io + 1
   make_request
  active_io - 1

If active_io is decremented after md_handle_request() instead of within submit_flushes(), make_request() can be called directly instead of md_handle_request() in md_submit_flush_data(), and active_io is incremented and decremented only once across the whole flush process. This fixes the deadlock.

Additionally, the only behavioral difference after the fix is that there is no error-return handling for make_request(). But after the previous patch cleaned up md_write_start(), make_request() only returns an error in raid5_make_request() via dm-raid; see commit 41425f96d7aa ("dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io concurrent with reshape"). Since dm always splits data and flush operations into two separate ios, the io size of a flush submitted by dm is always 0, so make_request() will not be called in md_submit_flush_data(). To prevent future modifications from reintroducing the issue, add a WARN_ON to ensure make_request() does not return an error in this context.

References

Affected packages

Debian:12 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (Unknown introduced version / All previous versions are affected)
Fixed
6.1.106-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (Unknown introduced version / All previous versions are affected)
Fixed
6.10.3-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1

Ecosystem specific

{
    "urgency": "not yet assigned"
}