CVE-2025-37772

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-37772
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2025-37772.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2025-37772
Published
2025-05-01T14:15:40Z
Modified
2025-05-05T17:54:09.636231Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

RDMA/cma: Fix workqueue crash in cma_netevent_work_handler

struct rdma_cm_id has member "struct work_struct net_work" that is reused for enqueuing cma_netevent_work_handler()s onto cma_wq.

Below crash [1] can occur if more than one call to cma_netevent_callback() occurs in quick succession, which further enqueues cma_netevent_work_handler()s for the same rdma_cm_id, overwriting any previously queued work item(s) that was just scheduled to run: there is no guarantee that the queued work item runs between two successive calls to cma_netevent_callback(), so the 2nd INIT_WORK overwrites the 1st work item (for the same rdma_cm_id), despite grabbing id_table_lock during enqueue.
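To make the failure mode concrete, the following is a minimal userspace model of the pattern; it is not kernel code: the type and function names only mirror those in the report, and the work_struct internals are reduced to a single flag word plus a pwq pointer. Re-running the init step on an item that is already queued wipes its queued state, so the worker later finds an item with no pool_workqueue attached:

#include <stdio.h>
#include <string.h>

struct pwq { const char *name; };

struct work {
	unsigned long data;              /* flag word; 0x4 = "has a pwq" */
	struct pwq *pwq;                 /* set when the item is queued */
	void (*func)(struct work *);
};

#define WORK_STRUCT_PWQ 0x4UL

static void init_work(struct work *w, void (*fn)(struct work *))
{
	memset(w, 0, sizeof(*w));        /* wipes any queued state */
	w->func = fn;
}

static void queue_work(struct work *w, struct pwq *p)
{
	w->data |= WORK_STRUCT_PWQ;
	w->pwq = p;
}

static void process_one_work(struct work *w)
{
	/* mirrors get_work_pwq(): no PWQ flag -> NULL */
	struct pwq *p = (w->data & WORK_STRUCT_PWQ) ? w->pwq : NULL;
	printf("worker sees pwq: %s\n", p ? p->name : "(NULL -> would oops)");
}

static void handler(struct work *w) { (void)w; }

int main(void)
{
	struct pwq cma_wq = { "cma_wq" };
	struct work net_work;

	init_work(&net_work, handler);    /* 1st netevent */
	queue_work(&net_work, &cma_wq);
	init_work(&net_work, handler);    /* 2nd netevent before the worker
					     ran: clobbers the queued item */
	process_one_work(&net_work);      /* prints the NULL branch */
	return 0;
}

Compiled and run, this takes the NULL branch, which corresponds to process_one_work() dereferencing pwq->wq->name through a NULL pwq in the trimmed crash stack below.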

Also drgn analysis [2] indicates the work item was likely overwritten.

Fix this by moving the INIT_WORK() to __rdma_create_id(), so that it doesn't race with any existing queue_work() or its worker thread.
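The same model with the fix pattern applied shows why moving the initialization helps: the work item is set up exactly once at creation time, and the event path only ever queues it. This is a sketch of the pattern, not the literal patch; in the real kernel, queue_work() on an already-pending item additionally bails out early via the pending bit, which the model omits:

#include <stdio.h>
#include <string.h>

struct pwq { const char *name; };

struct work {
	unsigned long data;
	struct pwq *pwq;
	void (*func)(struct work *);
};

#define WORK_STRUCT_PWQ 0x4UL

static void init_work(struct work *w, void (*fn)(struct work *))
{
	memset(w, 0, sizeof(*w));
	w->func = fn;
}

static void queue_work(struct work *w, struct pwq *p)
{
	w->data |= WORK_STRUCT_PWQ;      /* re-queueing preserves state */
	w->pwq = p;
}

static void process_one_work(struct work *w)
{
	struct pwq *p = (w->data & WORK_STRUCT_PWQ) ? w->pwq : NULL;
	printf("worker sees pwq: %s\n", p ? p->name : "(NULL)");
}

static void handler(struct work *w) { (void)w; }

int main(void)
{
	struct pwq cma_wq = { "cma_wq" };
	struct work net_work;

	init_work(&net_work, handler);    /* once, at creation time, as
					     __rdma_create_id() now does */
	queue_work(&net_work, &cma_wq);   /* 1st netevent */
	queue_work(&net_work, &cma_wq);   /* 2nd netevent: queued state is
					     preserved, not clobbered */
	process_one_work(&net_work);      /* prints "cma_wq": no crash */
	return 0;
}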

[1] Trimmed crash stack:

BUG: kernel NULL pointer dereference, address: 0000000000000008
kworker/u256:6 ... 6.12.0-0...
Workqueue: cma_netevent_work_handler [rdma_cm] (rdma_cm)
RIP: 0010:process_one_work+0xba/0x31a
Call Trace:
 worker_thread+0x266/0x3a0
 kthread+0xcf/0x100
 ret_from_fork+0x31/0x50
 ret_from_fork_asm+0x1a/0x30

[2] drgn crash analysis:

>>> trace = prog.crashed_thread().stack_trace()
>>> trace
(0)  crash_setup_regs (./arch/x86/include/asm/kexec.h:111:15)
(1)  __crash_kexec (kernel/crash_core.c:122:4)
(2)  panic (kernel/panic.c:399:3)
(3)  oops_end (arch/x86/kernel/dumpstack.c:382:3)
...
(8)  process_one_work (kernel/workqueue.c:3168:2)
(9)  process_scheduled_works (kernel/workqueue.c:3310:3)
(10) worker_thread (kernel/workqueue.c:3391:4)
(11) kthread (kernel/kthread.c:389:9)

Line workqueue.c:3168 for this kernel version is in process_one_work():

3168	strscpy(worker->desc, pwq->wq->name, WORKER_DESC_LEN);

trace[8]["work"] *(struct workstruct *)0xffff92577d0a21d8 = { .data = (atomiclongt){ .counter = (s64)536870912, <=== Note }, .entry = (struct listhead){ .next = (struct listhead *)0xffff924d075924c0, .prev = (struct listhead *)0xffff924d075924c0, }, .func = (workfunct)cmaneteventwork_handler+0x0 = 0xffffffffc2cec280, }

Suspicion is that pwq is NULL:

trace[8]["pwq"] (struct pool_workqueue *)<absent>

In process_one_work(), pwq is assigned from:

struct pool_workqueue *pwq = get_work_pwq(work);

and get_work_pwq() is:

static struct pool_workqueue *get_work_pwq(struct work_struct *work)
{
	unsigned long data = atomic_long_read(&work->data);

	if (data & WORK_STRUCT_PWQ)
		return work_struct_pwq(data);
	else
		return NULL;
}

WORK_STRUCT_PWQ is 0x4:

>>> print(repr(prog['WORK_STRUCT_PWQ']))
Object(prog, 'enum work_flags', value=4)

But work->data is 536870912, which is 0x20000000: the WORK_STRUCT_PWQ bit is not set (0x20000000 & 0x4 == 0). So, get_work_pwq() returns NULL and we crash in process_one_work():

3168	strscpy(worker->desc, pwq->wq->name, WORKER_DESC_LEN);
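As a sanity check, the failing flag test can be replayed in isolation using only the two values quoted above (WORK_STRUCT_PWQ == 0x4 per drgn, and the observed work->data == 536870912):

#include <stdio.h>

int main(void)
{
	unsigned long data = 536870912UL;   /* observed work->data, 0x20000000 */
	unsigned long pwq_flag = 0x4UL;     /* WORK_STRUCT_PWQ, per drgn */

	/* Same test as get_work_pwq(): 0x20000000 & 0x4 == 0, so the
	 * function returns NULL and process_one_work() dereferences
	 * pwq->wq->name through a NULL pointer. */
	printf("data & WORK_STRUCT_PWQ = %#lx => %s\n",
	       data & pwq_flag,
	       (data & pwq_flag) ? "valid pwq" : "NULL pwq");
	return 0;
}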


Affected packages

Debian:12 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.1.135-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.1.119-1
6.1.123-1
6.1.124-1
6.1.128-1
6.1.129-1
6.1.133-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0 (unknown introduced version / all previous versions are affected)
Fixed
6.12.25-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.1.119-1
6.1.123-1
6.1.124-1
6.1.128-1
6.1.129-1
6.1.133-1
6.1.135-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1
6.11.2-1
6.11.4-1
6.11.5-1~bpo12+1
6.11.5-1
6.11.6-1
6.11.7-1
6.11.9-1
6.11.10-1~bpo12+1
6.11.10-1
6.12~rc6-1~exp1
6.12.3-1
6.12.5-1
6.12.6-1
6.12.8-1
6.12.9-1~bpo12+1
6.12.9-1
6.12.9-1+alpha
6.12.10-1
6.12.11-1
6.12.11-1+alpha
6.12.11-1+alpha.1
6.12.12-1~bpo12+1
6.12.12-1
6.12.13-1
6.12.15-1
6.12.16-1
6.12.17-1
6.12.19-1
6.12.20-1
6.12.21-1
6.12.22-1~bpo12+1
6.12.22-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}