CVE-2023-53151

Source
https://nvd.nist.gov/vuln/detail/CVE-2023-53151
Import Source
https://storage.googleapis.com/cve-osv-conversion/osv-output/CVE-2023-53151.json
JSON Data
https://api.osv.dev/v1/vulns/CVE-2023-53151
Downstream
Published
2025-09-15T14:15:37Z
Modified
2025-09-15T19:00:20Z
Summary
[none]
Details

In the Linux kernel, the following vulnerability has been resolved:

md/raid10: prevent soft lockup while flush writes

Currently, there is no limit on plugged bio for raid1/raid10. While flushing writes, raid1 calls cond_resched() but raid10 doesn't, and too many writes can cause a soft lockup.

The following soft lockup can be triggered easily with a writeback test for raid10 with ramdisks:

watchdog: BUG: soft lockup - CPU#10 stuck for 27s! [md0_raid10:1293]
Call Trace:
 <TASK>
 call_rcu+0x16/0x20
 put_object+0x41/0x80
 __delete_object+0x50/0x90
 delete_object_full+0x2b/0x40
 kmemleak_free+0x46/0xa0
 slab_free_freelist_hook.constprop.0+0xed/0x1a0
 kmem_cache_free+0xfd/0x300
 mempool_free_slab+0x1f/0x30
 mempool_free+0x3a/0x100
 bio_free+0x59/0x80
 bio_put+0xcf/0x2c0
 free_r10bio+0xbf/0xf0
 raid_end_bio_io+0x78/0xb0
 one_write_done+0x8a/0xa0
 raid10_end_write_request+0x1b4/0x430
 bio_endio+0x175/0x320
 brd_submit_bio+0x3b9/0x9b7 [brd]
 __submit_bio+0x69/0xe0
 submit_bio_noacct_nocheck+0x1e6/0x5a0
 submit_bio_noacct+0x38c/0x7e0
 flush_pending_writes+0xf0/0x240
 raid10d+0xac/0x1ed0

Fix the problem by adding cond_resched() to raid10, as raid1 already does.
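The shape of the fix can be sketched as follows. This is an illustrative kernel-C fragment modeled on raid1's existing write-flushing loop, not the literal upstream patch; the loop context and variable names are assumptions:

```c
/* Sketch (illustrative): flushing plugged writes in drivers/md/raid10.c.
 * Each queued bio is submitted in turn; the added cond_resched() yields
 * the CPU between submissions so an unbounded chain of plugged bios
 * cannot hog the CPU long enough to trip the soft-lockup watchdog.
 */
while (bio) {
	struct bio *next = bio->bi_next;

	bio->bi_next = NULL;
	submit_bio_noacct(bio);
	bio = next;
	cond_resched();	/* the added call: allow rescheduling per bio */
}
```

Because flush_pending_writes() runs in the raid10d kernel thread rather than in atomic context, inserting a voluntary reschedule point per bio is safe and matches what raid1 already does in its equivalent loop.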

Note that the unlimited plugged bio still needs to be optimized: for example, when lots of dirty pages are written back, this consumes lots of memory and I/O spends a long time in the plug, hence I/O latency is bad.

References

Affected packages