CVE-2025-21983

Source
https://nvd.nist.gov/vuln/detail/CVE-2025-21983
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2025-21983.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2025-21983
Published
2025-04-01T16:15:29Z
Modified
2025-04-01T22:45:35.238502Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

mm/slab/kvfree_rcu: Switch to WQ_MEM_RECLAIM wq

Currently the kvfree_rcu() APIs use a system workqueue, "system_unbound_wq", to drive the RCU machinery that reclaims memory.

Recently, it has been noted that the following kernel warning can be observed:

<snip>
workqueue: WQ_MEM_RECLAIM nvme-wq:nvme_scan_work is flushing !WQ_MEM_RECLAIM events_unbound:kfree_rcu_work
WARNING: CPU: 21 PID: 330 at kernel/workqueue.c:3719 check_flush_dependency+0x112/0x120
Modules linked in: intel_uncore_frequency(E) intel_uncore_frequency_common(E) skx_edac(E) ...
CPU: 21 UID: 0 PID: 330 Comm: kworker/u144:6 Tainted: G E 6.13.2-0_g925d379822da #1
Hardware name: Wiwynn Twin Lakes MP/Twin Lakes Passive MP, BIOS YMM20 02/01/2023
Workqueue: nvme-wq nvme_scan_work
RIP: 0010:check_flush_dependency+0x112/0x120
Code: 05 9a 40 14 02 01 48 81 c6 c0 00 00 00 48 8b 50 18 48 81 c7 c0 00 00 00 48 89 f9 48 ...
RSP: 0018:ffffc90000df7bd8 EFLAGS: 00010082
RAX: 000000000000006a RBX: ffffffff81622390 RCX: 0000000000000027
RDX: 00000000fffeffff RSI: 000000000057ffa8 RDI: ffff88907f960c88
RBP: 0000000000000000 R08: ffffffff83068e50 R09: 000000000002fffd
R10: 0000000000000004 R11: 0000000000000000 R12: ffff8881001a4400
R13: 0000000000000000 R14: ffff88907f420fb8 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88907f940000(0000) knlGS:0000000000000000
CR2: 00007f60c3001000 CR3: 000000107d010005 CR4: 00000000007726f0
PKRU: 55555554
Call Trace:
 <TASK>
 ? __warn+0xa4/0x140
 ? check_flush_dependency+0x112/0x120
 ? report_bug+0xe1/0x140
 ? check_flush_dependency+0x112/0x120
 ? handle_bug+0x5e/0x90
 ? exc_invalid_op+0x16/0x40
 ? asm_exc_invalid_op+0x16/0x20
 ? timer_recalc_next_expiry+0x190/0x190
 ? check_flush_dependency+0x112/0x120
 ? check_flush_dependency+0x112/0x120
 __flush_work.llvm.1643880146586177030+0x174/0x2c0
 flush_rcu_work+0x28/0x30
 kvfree_rcu_barrier+0x12f/0x160
 kmem_cache_destroy+0x18/0x120
 bioset_exit+0x10c/0x150
 disk_release.llvm.6740012984264378178+0x61/0xd0
 device_release+0x4f/0x90
 kobject_put+0x95/0x180
 nvme_put_ns+0x23/0xc0
 nvme_remove_invalid_namespaces+0xb3/0xd0
 nvme_scan_work+0x342/0x490
 process_scheduled_works+0x1a2/0x370
 worker_thread+0x2ff/0x390
 ? pwq_release_workfn+0x1e0/0x1e0
 kthread+0xb1/0xe0
 ? __kthread_parkme+0x70/0x70
 ret_from_fork+0x30/0x40
 ? __kthread_parkme+0x70/0x70
 ret_from_fork_asm+0x11/0x20
 </TASK>
---[ end trace 0000000000000000 ]---
<snip>
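The warning comes from check_flush_dependency() in kernel/workqueue.c, which enforces a forward-progress rule: a work item running on a WQ_MEM_RECLAIM workqueue (here nvme-wq) may only flush work that is itself queued on a WQ_MEM_RECLAIM workqueue, since only such queues have a rescuer thread and are guaranteed to make progress during memory reclaim. The minimal module below is an illustrative sketch of the offending pattern, not the kernel patch itself; all names in it are hypothetical.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

/* Work queued on the !WQ_MEM_RECLAIM system_unbound_wq. */
static void unbound_fn(struct work_struct *w) { }
static DECLARE_WORK(unbound_work, unbound_fn);

/*
 * Work running on a WQ_MEM_RECLAIM queue that flushes the unbound work.
 * This is exactly the dependency check_flush_dependency() warns about:
 * the flusher may be running as part of memory reclaim, while the work
 * it waits on has no forward-progress guarantee under memory pressure.
 */
static void reclaim_fn(struct work_struct *w)
{
	flush_work(&unbound_work);
}
static DECLARE_WORK(reclaim_work, reclaim_fn);

static struct workqueue_struct *reclaim_wq;

static int __init demo_init(void)
{
	reclaim_wq = alloc_workqueue("demo_reclaim_wq", WQ_MEM_RECLAIM, 0);
	if (!reclaim_wq)
		return -ENOMEM;
	queue_work(system_unbound_wq, &unbound_work);
	queue_work(reclaim_wq, &reclaim_work); /* trips the WARNING above */
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(reclaim_wq);
	flush_work(&unbound_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");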

To address this, switch to an independent WQ_MEM_RECLAIM workqueue, so that the rules are not violated from the workqueue framework's point of view.

Apart from that, since kvfree_rcu() does reclaim memory, it is worth using a WQ_MEM_RECLAIM workqueue anyway, because that type of workqueue is designed for exactly this purpose.
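The shape of the fix can be sketched as follows; the names here are hypothetical stand-ins, not the literal patch. The idea is to allocate one dedicated WQ_MEM_RECLAIM workqueue at init time and queue the kfree_rcu work there instead of on system_unbound_wq, so that reclaim-side callers such as kvfree_rcu_barrier() may legally flush it.

#include <linux/init.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for the real per-subsystem queue. */
static struct workqueue_struct *kfree_rcu_wq;

static int __init kfree_rcu_wq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM gives the queue a rescuer thread, so it keeps
	 * making forward progress even when new workers cannot be spawned
	 * under memory pressure, which is the situation a reclaim path
	 * must assume.
	 */
	kfree_rcu_wq = alloc_workqueue("kfree_rcu_wq", WQ_MEM_RECLAIM, 0);
	return kfree_rcu_wq ? 0 : -ENOMEM;
}

/*
 * Before: queue_work(system_unbound_wq, work);
 * After:  queue the same work on the reclaim-safe queue, so flushing it
 * from another WQ_MEM_RECLAIM workqueue (e.g. nvme-wq) is permitted by
 * check_flush_dependency().
 */
static bool schedule_kfree_rcu_work(struct work_struct *work)
{
	return queue_work(kfree_rcu_wq, work);
}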

References

Affected packages

Debian:13 / linux

Package

Name
linux
Purl
pkg:deb/debian/linux?arch=source

Affected ranges

Type
ECOSYSTEM
Events
Introduced
0
Unknown introduced version / All previous versions are affected
Fixed
6.12.20-1

Affected versions

6.*

6.1.27-1
6.1.37-1
6.1.38-1
6.1.38-2~bpo11+1
6.1.38-2
6.1.38-3
6.1.38-4~bpo11+1
6.1.38-4
6.1.52-1
6.1.55-1~bpo11+1
6.1.55-1
6.1.64-1
6.1.66-1
6.1.67-1
6.1.69-1~bpo11+1
6.1.69-1
6.1.76-1~bpo11+1
6.1.76-1
6.1.82-1
6.1.85-1
6.1.90-1~bpo11+1
6.1.90-1
6.1.94-1~bpo11+1
6.1.94-1
6.1.98-1
6.1.99-1
6.1.106-1
6.1.106-2
6.1.106-3
6.1.112-1
6.1.115-1
6.1.119-1
6.1.123-1
6.1.124-1
6.1.128-1
6.1.129-1
6.3.1-1~exp1
6.3.2-1~exp1
6.3.4-1~exp1
6.3.5-1~exp1
6.3.7-1~bpo12+1
6.3.7-1
6.3.11-1
6.4~rc6-1~exp1
6.4~rc7-1~exp1
6.4.1-1~exp1
6.4.4-1~bpo12+1
6.4.4-1
6.4.4-2
6.4.4-3~bpo12+1
6.4.4-3
6.4.11-1
6.4.13-1
6.5~rc4-1~exp1
6.5~rc6-1~exp1
6.5~rc7-1~exp1
6.5.1-1~exp1
6.5.3-1~bpo12+1
6.5.3-1
6.5.6-1
6.5.8-1
6.5.10-1~bpo12+1
6.5.10-1
6.5.13-1
6.6.3-1~exp1
6.6.4-1~exp1
6.6.7-1~exp1
6.6.8-1
6.6.9-1
6.6.11-1
6.6.13-1~bpo12+1
6.6.13-1
6.6.15-1
6.6.15-2
6.7-1~exp1
6.7.1-1~exp1
6.7.4-1~exp1
6.7.7-1
6.7.9-1
6.7.9-2
6.7.12-1~bpo12+1
6.7.12-1
6.8.9-1
6.8.11-1
6.8.12-1~bpo12+1
6.8.12-1
6.9.2-1~exp1
6.9.7-1~bpo12+1
6.9.7-1
6.9.8-1
6.9.9-1
6.9.10-1~bpo12+1
6.9.10-1
6.9.11-1
6.9.12-1
6.10-1~exp1
6.10.1-1~exp1
6.10.3-1
6.10.4-1
6.10.6-1~bpo12+1
6.10.6-1
6.10.7-1
6.10.9-1
6.10.11-1~bpo12+1
6.10.11-1
6.10.12-1
6.11~rc4-1~exp1
6.11~rc5-1~exp1
6.11-1~exp1
6.11.2-1
6.11.4-1
6.11.5-1~bpo12+1
6.11.5-1
6.11.6-1
6.11.7-1
6.11.9-1
6.11.10-1~bpo12+1
6.11.10-1
6.12~rc6-1~exp1
6.12.3-1
6.12.5-1
6.12.6-1
6.12.8-1
6.12.9-1~bpo12+1
6.12.9-1
6.12.9-1+alpha
6.12.10-1
6.12.11-1
6.12.11-1+alpha
6.12.11-1+alpha.1
6.12.12-1~bpo12+1
6.12.12-1
6.12.13-1
6.12.15-1
6.12.16-1
6.12.17-1
6.12.19-1

Ecosystem specific

{
    "urgency": "not yet assigned"
}