In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix race in cpumap on PREEMPT_RT
On PREEMPT_RT kernels, the per-CPU xdp_bulk_queue (bq) can be accessed concurrently by multiple preemptible tasks on the same CPU.
The original code assumes bq_enqueue() and __cpu_map_flush() run atomically with respect to each other on the same CPU, relying on local_bh_disable() to prevent preemption. However, on PREEMPT_RT, local_bh_disable() only calls migrate_disable() (when PREEMPT_RT_NEEDS_BH_LOCK is not set) and does not disable preemption. CFS scheduling can therefore preempt a task in the middle of bq_flush_to_queue(), allowing another task on the same CPU to enter bq_enqueue() and operate on the same per-CPU bq concurrently.
This leads to several races:
Double __list_del_clearprev(): after bq->count is reset in bq_flush_to_queue(), a preempting task can call bq_enqueue() -> bq_flush_to_queue() on the same bq once bq->count reaches CPU_MAP_BULK_SIZE. Both tasks then call __list_del_clearprev() on the same bq->flush_node; the second call dereferences the prev pointer that the first call already set to NULL.
bq->count and bq->q[] races: concurrent bq_enqueue() calls can corrupt the packet queue while bq_flush_to_queue() is processing it.
The race between task A (__cpu_map_flush -> bq_flush_to_queue) and task B (bq_enqueue -> bq_flush_to_queue) on the same CPU:
Task A (xdp_do_flush)                Task B (cpu_map_enqueue)
----------------------               ------------------------
bq_flush_to_queue(bq)
  spin_lock(&q->producer_lock)
  /* flush bq->q[] to ptr_ring */
  bq->count = 0
  spin_unlock(&q->producer_lock)
        <-- CFS preempts Task A -->
                                     bq_enqueue(rcpu, xdpf)
                                       bq->q[bq->count++] = xdpf
                                       /* ... more enqueues until full ... */
                                       bq_flush_to_queue(bq)
                                         spin_lock(&q->producer_lock)
                                         /* flush to ptr_ring */
                                         spin_unlock(&q->producer_lock)
                                         __list_del_clearprev(flush_node)
                                         /* sets flush_node.prev = NULL */
        <-- Task A resumes -->
  __list_del_clearprev(flush_node)
    /* flush_node.prev->next = ... */
    /* prev is NULL -> kernel oops */
Fix this by adding a local_lock_t to xdp_bulk_queue and acquiring it in bq_enqueue() and __cpu_map_flush(). These paths already run under local_bh_disable(), so use local_lock_nested_bh(), which on non-RT kernels is a pure annotation with no overhead, and on PREEMPT_RT provides a per-CPU sleeping lock that serializes access to the bq.
To reproduce, insert an mdelay(100) between bq->count = 0 and __list_del_clearprev() in bq_flush_to_queue(), then run the reproducer provided by syzkaller.
{
"cna_assigner": "Linux",
"osv_generated_from": "https://github.com/CVEProject/cvelistV5/tree/main/cves/2026/23xxx/CVE-2026-23342.json"
}