In the Linux kernel, the following vulnerability has been resolved:

net/smc: fix deadlock triggered by cancel_delayed_work_sync()

The following LOCKDEP was detected:

Workqueue: events smc_lgr_free_work [smc]
WARNING: possible circular locking dependency detected
6.1.0-20221027.rc2.git8.56bc5b569087.300.fc36.s390x+debug #1 Not tainted
------------------------------------------------------
kworker/3:0/176251 is trying to acquire lock:
00000000f1467148 ((wq_completion)smc_tx_wq-00000000#2){+.+.}-{0:0},
        at: __flush_workqueue+0x7a/0x4f0

but task is already holding lock:
0000037fffe97dc8 ((work_completion)(&(&lgr->free_work)->work)){+.+.}-{0:0},
        at: process_one_work+0x232/0x730

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #4 ((work_completion)(&(&lgr->free_work)->work)){+.+.}-{0:0}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        __flush_work+0x76/0xf0
        __cancel_work_timer+0x170/0x220
        __smc_lgr_terminate.part.0+0x34/0x1c0 [smc]
        smc_connect_rdma+0x15e/0x418 [smc]
        __smc_connect+0x234/0x480 [smc]
        smc_connect+0x1d6/0x230 [smc]
        __sys_connect+0x90/0xc0
        __do_sys_socketcall+0x186/0x370
        __do_syscall+0x1da/0x208
        system_call+0x82/0xb0

-> #3 (smc_client_lgr_pending){+.+.}-{3:3}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        __mutex_lock+0x96/0x8e8
        mutex_lock_nested+0x32/0x40
        smc_connect_rdma+0xa4/0x418 [smc]
        __smc_connect+0x234/0x480 [smc]
        smc_connect+0x1d6/0x230 [smc]
        __sys_connect+0x90/0xc0
        __do_sys_socketcall+0x186/0x370
        __do_syscall+0x1da/0x208
        system_call+0x82/0xb0

-> #2 (sk_lock-AF_SMC){+.+.}-{0:0}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        lock_sock_nested+0x46/0xa8
        smc_tx_work+0x34/0x50 [smc]
        process_one_work+0x30c/0x730
        worker_thread+0x62/0x420
        kthread+0x138/0x150
        __ret_from_fork+0x3c/0x58
        ret_from_fork+0xa/0x40

-> #1 ((work_completion)(&(&smc->conn.tx_work)->work)){+.+.}-{0:0}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        process_one_work+0x2bc/0x730
        worker_thread+0x62/0x420
        kthread+0x138/0x150
        __ret_from_fork+0x3c/0x58
        ret_from_fork+0xa/0x40

-> #0 ((wq_completion)smc_tx_wq-00000000#2){+.+.}-{0:0}:
        check_prev_add+0xd8/0xe88
        validate_chain+0x70c/0xb20
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        __flush_workqueue+0xaa/0x4f0
        drain_workqueue+0xaa/0x158
        destroy_workqueue+0x44/0x2d8
        smc_lgr_free+0x9e/0xf8 [smc]
        process_one_work+0x30c/0x730
        worker_thread+0x62/0x420
        kthread+0x138/0x150
        __ret_from_fork+0x3c/0x58
        ret_from_fork+0xa/0x40

other info that might help us debug this:

Chain exists of:
  (wq_completion)smc_tx_wq-00000000#2
    --> smc_client_lgr_pending
    --> (work_completion)(&(&lgr->free_work)->work)

Possible unsafe locking scenario:

      CPU0                            CPU1
      ----                            ----
 lock((work_completion)(&(&lgr->free_work)->work));
                                      lock(smc_client_lgr_pending);
                                      lock((work_completion)
                                           (&(&lgr->free_work)->work));
 lock((wq_completion)smc_tx_wq-00000000#2);

*** DEADLOCK ***

2 locks held by kworker/3:0/176251:
 #0: 0000000080183548 ((wq_completion)events){+.+.}-{0:0},
        at: process_one_work+0x232/0x730
 #1: 0000037fffe97dc8 ((work_completion)(&(&lgr->free_work)->work)){+.+.}-{0:0},
        at: process_one_work+0x232/0x730

stack backtr ---truncated---
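The unsafe locking scenario above is a classic circular wait, and it can be modeled in userspace with two locks. This is a minimal sketch, not kernel code: the lock names are stand-ins for the kernel objects, and the intermediate sk_lock-AF_SMC / tx_work edges of the chain are collapsed into a single dependency. Timeouts substitute for the indefinite blocking that lockdep warns about, so the script terminates instead of hanging.

```python
import threading

# Stand-ins for the two ends of the lockdep chain (names are illustrative):
# lock_free_work   models (work_completion)(&(&lgr->free_work)->work)
# lock_lgr_pending models the smc_client_lgr_pending mutex
lock_free_work = threading.Lock()
lock_lgr_pending = threading.Lock()

cpu1_holds_pending = threading.Event()
blocked = {}  # records which side timed out waiting

def cpu1():
    # CPU1 (connect path): takes smc_client_lgr_pending, then the
    # terminate path waits for the free_work instance CPU0 is running.
    with lock_lgr_pending:
        cpu1_holds_pending.set()
        # Times out: CPU0 holds lock_free_work for the whole test.
        blocked["cpu1"] = not lock_free_work.acquire(timeout=0.5)

# CPU0 (free_work): already "running" the work item ...
lock_free_work.acquire()
t = threading.Thread(target=cpu1)
t.start()
cpu1_holds_pending.wait()
# ... and the workqueue teardown ultimately needs the lock CPU1 holds.
# Shorter timeout than cpu1's, so cpu1 still holds the lock while we wait.
blocked["cpu0"] = not lock_lgr_pending.acquire(timeout=0.1)
t.join()

print(blocked)  # both sides time out: neither lock order can make progress
```

Each thread blocks on the lock the other holds, which is exactly the cycle in the "Chain exists of" summary; in the kernel, where the waits have no timeout, this is a hard deadlock.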