CVE-2024-36003

Source
https://nvd.nist.gov/vuln/detail/CVE-2024-36003
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2024-36003.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2024-36003
Published
2024-05-20T10:15:14Z
Modified
2024-09-18T03:26:22.486481Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

ice: fix LAG and VF lock dependency in ice_reset_vf()

Since commit 9f74a3dfcf83 ("ice: Fix VF Reset paths when interface in a failed over aggregate"), the ice driver has acquired the LAG mutex in ice_reset_vf(). The commit placed this lock acquisition just prior to the acquisition of the VF configuration lock.

If ice_reset_vf() acquires the configuration lock via the ICE_VF_RESET_LOCK flag, this could deadlock with ice_vc_cfg_qs_msg() because it always acquires the locks in the order of the VF configuration lock and then the LAG mutex.
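
To make the ordering conflict concrete, here is a minimal, hypothetical userspace C sketch: pthread mutexes stand in for the kernel's pf->lag_mutex and vf->cfg_lock, and the function names reset_path() and cfg_qs_path() are invented for illustration, not driver code. One thread takes the locks in the ice_reset_vf() order, the other in the ice_vc_cfg_qs_msg() order; run concurrently, they can deadlock in exactly the ABBA pattern lockdep reports below.

/* Illustrative userspace model only (not kernel code); build with: cc -pthread abba.c */
#include <pthread.h>
#include <stdio.h>

/* Stand-ins for pf->lag_mutex and vf->cfg_lock. */
static pthread_mutex_t lag_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cfg_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the ice_reset_vf() order: LAG mutex first, then the VF configuration lock. */
static void *reset_path(void *arg)
{
    pthread_mutex_lock(&lag_mutex);
    pthread_mutex_lock(&cfg_lock);   /* blocks forever if the other thread already holds cfg_lock */
    /* ... reset work ... */
    pthread_mutex_unlock(&cfg_lock);
    pthread_mutex_unlock(&lag_mutex);
    return NULL;
}

/* Mirrors the ice_vc_cfg_qs_msg() order: VF configuration lock first, then the LAG mutex. */
static void *cfg_qs_path(void *arg)
{
    pthread_mutex_lock(&cfg_lock);
    pthread_mutex_lock(&lag_mutex);  /* blocks forever if the other thread already holds lag_mutex */
    /* ... queue configuration work ... */
    pthread_mutex_unlock(&lag_mutex);
    pthread_mutex_unlock(&cfg_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, reset_path, NULL);
    pthread_create(&b, NULL, cfg_qs_path, NULL);
    pthread_join(a, NULL);           /* with unlucky timing this never returns: classic ABBA deadlock */
    pthread_join(b, NULL);
    puts("no deadlock this run");
    return 0;
}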

Lockdep reports this violation almost immediately on creating and then removing 2 VFs:

======================================================
WARNING: possible circular locking dependency detected

6.8.0-rc6 #54 Tainted: G W O

kworker/60:3/6771 is trying to acquire lock:
ff40d43e099380a0 (&vf->cfg_lock){+.+.}-{3:3}, at: ice_reset_vf+0x22f/0x4d0 [ice]

but task is already holding lock:
ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&pf->lag_mutex){+.+.}-{3:3}:
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_vc_cfg_qs_msg+0x45/0x690 [ice]
       ice_vc_process_vf_msg+0x4f5/0x870 [ice]
       __ice_clean_ctrlq+0x2b5/0x600 [ice]
       ice_service_task+0x2c9/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

-> #0 (&vf->cfg_lock){+.+.}-{3:3}:
       check_prev_add+0xe2/0xc50
       validate_chain+0x558/0x800
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_reset_vf+0x22f/0x4d0 [ice]
       ice_process_vflr_event+0x98/0xd0 [ice]
       ice_service_task+0x1cc/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pf->lag_mutex);
                               lock(&vf->cfg_lock);
                               lock(&pf->lag_mutex);
  lock(&vf->cfg_lock);

 *** DEADLOCK ***

4 locks held by kworker/60:3/6771:
 #0: ff40d43e05428b38 ((wq_completion)ice){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #1: ff50d06e05197e58 ((work_completion)(&pf->serv_task)){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #2: ff40d43ea1960e50 (&pf->vfs.table_lock){+.+.}-{3:3}, at: ice_process_vflr_event+0x48/0xd0 [ice]
 #3: ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

stack backtrace:
CPU: 60 PID: 6771 Comm: kworker/60:3 Tainted: G W O 6.8.0-rc6 #54
Hardware name:
Workqueue: ice ice_service_task [ice]
Call Trace:
 <TASK>
 dump_stack_lvl+0x4a/0x80
 check_noncircular+0x12d/0x150
 check_prev_add+0xe2/0xc50
 ? save_trace+0x59/0x230
 ? add_chain_cache+0x109/0x450
 validate_chain+0x558/0x800
 __lock_acquire+0x4f8/0xb40
 ? lockdep_hardirqs_on+0x7d/0x100
 lock_acquire+0xd4/0x2d0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? lock_is_held_type+0xc7/0x120
 __mutex_lock+0x9b/0xbf0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? rcu_is_watching+0x11/0x50
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ice_reset_vf+0x22f/0x4d0 [ice]
 ? process_one_work+0x176/0x4d0
 ice_process_vflr_event+0x98/0xd0 [ice]
 ice_service_task+0x1cc/0x480 [ice]
 process_one_work+0x1e9/0x4d0
 worker_thread+0x1e1/0x3d0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x104/0x140
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x31/0x50
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1b/0x30
 </TASK>

To avoid deadlock, we must acquire the LAG ---truncated---
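
The description is truncated at this point. As a general remediation pattern only (not a claim about the exact upstream patch), an ABBA deadlock of this kind is removed by making every path acquire the two locks in one agreed order. A short sketch continuing the hypothetical userspace model from above, with both paths now taking cfg_lock before lag_mutex:

/* Illustrative userspace model only (not the kernel fix): both paths use one lock order. */
#include <pthread.h>

static pthread_mutex_t lag_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cfg_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Reset path: cfg_lock first, then lag_mutex, matching cfg_qs_path() below. */
static void *reset_path(void *arg)
{
    pthread_mutex_lock(&cfg_lock);
    pthread_mutex_lock(&lag_mutex);
    /* ... reset work ... */
    pthread_mutex_unlock(&lag_mutex);
    pthread_mutex_unlock(&cfg_lock);
    return NULL;
}

/* Queue-configuration path: same order, so no circular wait can form. */
static void *cfg_qs_path(void *arg)
{
    pthread_mutex_lock(&cfg_lock);
    pthread_mutex_lock(&lag_mutex);
    /* ... queue configuration work ... */
    pthread_mutex_unlock(&lag_mutex);
    pthread_mutex_unlock(&cfg_lock);
    return NULL;
}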

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.8.9-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}
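
For completeness, a small sketch of fetching this record from the OSV API endpoint listed above (https://api.osv.dev/v1/vulns/CVE-2024-36003), assuming libcurl is available and the program is linked with -lcurl; libcurl's default write handler prints the raw JSON to stdout.

/* Minimal libcurl fetch of the OSV record; build with: cc osv_fetch.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    /* The API URL given in the "JSON Data" field of this record. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.osv.dev/v1/vulns/CVE-2024-36003");

    /* No write callback set, so the response body goes to stdout. */
    res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}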