In the Linux kernel, the following vulnerability has been resolved:

net/smc: fix deadlock triggered by cancel_delayed_work_sync()

The following LOCKDEP was detected:

Workqueue: events smc_lgr_free_work [smc]
WARNING: possible circular locking dependency detected
6.1.0-20221027.rc2.git8.56bc5b569087.300.fc36.s390x+debug #1 Not tainted
------------------------------------------------------
kworker/3:0/176251 is trying to acquire lock:
00000000f1467148 ((wq_completion)smc_tx_wq-00000000#2){+.+.}-{0:0},
        at: __flush_workqueue+0x7a/0x4f0

but task is already holding lock:
0000037fffe97dc8 ((work_completion)(&(&lgr->free_work)->work)){+.+.}-{0:0},
        at: process_one_work+0x232/0x730

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #4 ((work_completion)(&(&lgr->free_work)->work)){+.+.}-{0:0}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        __flush_work+0x76/0xf0
        __cancel_work_timer+0x170/0x220
        __smc_lgr_terminate.part.0+0x34/0x1c0 [smc]
        smc_connect_rdma+0x15e/0x418 [smc]
        __smc_connect+0x234/0x480 [smc]
        smc_connect+0x1d6/0x230 [smc]
        __sys_connect+0x90/0xc0
        __do_sys_socketcall+0x186/0x370
        __do_syscall+0x1da/0x208
        system_call+0x82/0xb0

-> #3 (smc_client_lgr_pending){+.+.}-{3:3}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        __mutex_lock+0x96/0x8e8
        mutex_lock_nested+0x32/0x40
        smc_connect_rdma+0xa4/0x418 [smc]
        __smc_connect+0x234/0x480 [smc]
        smc_connect+0x1d6/0x230 [smc]
        __sys_connect+0x90/0xc0
        __do_sys_socketcall+0x186/0x370
        __do_syscall+0x1da/0x208
        system_call+0x82/0xb0

-> #2 (sk_lock-AF_SMC){+.+.}-{0:0}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        lock_sock_nested+0x46/0xa8
        smc_tx_work+0x34/0x50 [smc]
        process_one_work+0x30c/0x730
        worker_thread+0x62/0x420
        kthread+0x138/0x150
        __ret_from_fork+0x3c/0x58
        ret_from_fork+0xa/0x40

-> #1 ((work_completion)(&(&smc->conn.tx_work)->work)){+.+.}-{0:0}:
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        process_one_work+0x2bc/0x730
        worker_thread+0x62/0x420
        kthread+0x138/0x150
        __ret_from_fork+0x3c/0x58
        ret_from_fork+0xa/0x40

-> #0 ((wq_completion)smc_tx_wq-00000000#2){+.+.}-{0:0}:
        check_prev_add+0xd8/0xe88
        validate_chain+0x70c/0xb20
        __lock_acquire+0x58e/0xbd8
        lock_acquire.part.0+0xe2/0x248
        lock_acquire+0xac/0x1c8
        __flush_workqueue+0xaa/0x4f0
        drain_workqueue+0xaa/0x158
        destroy_workqueue+0x44/0x2d8
        smc_lgr_free+0x9e/0xf8 [smc]
        process_one_work+0x30c/0x730
        worker_thread+0x62/0x420
        kthread+0x138/0x150
        __ret_from_fork+0x3c/0x58
        ret_from_fork+0xa/0x40

other info that might help us debug this:

Chain exists of:
  (wq_completion)smc_tx_wq-00000000#2
    --> smc_client_lgr_pending
    --> (work_completion)(&(&lgr->free_work)->work)

Possible unsafe locking scenario:

        CPU0                            CPU1
        ----                            ----
   lock((work_completion)(&(&lgr->free_work)->work));
                                   lock(smc_client_lgr_pending);
                                   lock((work_completion)
                                        (&(&lgr->free_work)->work));
   lock((wq_completion)smc_tx_wq-00000000#2);

  *** DEADLOCK ***

2 locks held by kworker/3:0/176251:
 #0: 0000000080183548 ((wq_completion)events){+.+.}-{0:0},
        at: process_one_work+0x232/0x730
 #1: 0000037fffe97dc8 ((work_completion)(&(&lgr->free_work)->work)){+.+.}-{0:0},
        at: process_one_work+0x232/0x730

stack backtr ---truncated---