In the Linux kernel, the following vulnerability has been resolved:
xsk: Fix race condition in AF_XDP generic RX path
Move rx_lock from xsk_socket to xsk_buff_pool. Fix synchronization for shared umem mode in the generic RX path, where multiple sockets share a single xsk_buff_pool.
The RX queue is exclusive to the xsk_socket, while the FILL queue can be shared between multiple sockets. This could result in a race condition where two CPU cores access the RX path of two different sockets sharing the same umem.
Protect both queues by acquiring the spinlock in the shared xsk_buff_pool.
Lock contention may be minimized in the future by some per-thread FQ buffering.
It's safe and necessary to move spin_lock_bh(rx_lock) after xsk_rcv_check():
* xs->pool and spin_lock_init() are synchronized by xsk_bind() -> xsk_is_bound() memory barriers.
* xsk_rcv_check() may return true at the moment of xsk_release() or xsk_unbind_dev(); however, this will not cause any data races or race conditions. xsk_unbind_dev() removes the xdp socket from all maps and waits for completion of all outstanding rx operations. Packets in the RX path will either complete safely or drop.