In the Linux kernel, the following vulnerability has been resolved:
ocfs2: dlmfs: fix error handling of user_dlm_destroy_lock
When user_dlm_destroy_lock() failed, it did not clean up the flags it had set before returning. For USER_LOCK_IN_TEARDOWN: if the function fails because the lock is still in use, the next time unlink invokes it, it will return success; unlink then removes the inode and dentry once the lock is no longer in use (the file is closed), but the DLM lock is still linked into the DLM lock resource, so when a BAST later arrives it triggers a panic due to a use-after-free. See the panic call trace below. To fix this, USER_LOCK_IN_TEARDOWN should be reverted on failure, and an error should be returned when USER_LOCK_IN_TEARDOWN is already set, so that the user knows the unlink failed.
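The failure-path cleanup described above can be sketched in plain C. Everything below is a simplified, hypothetical illustration (struct fake_lockres, destroy_lock() and the dlm_unlock callback are stand-ins, and a pthread mutex replaces the kernel spinlock), not the actual fs/ocfs2/dlmfs/userdlm.c code:

#include <errno.h>
#include <pthread.h>

/* Hypothetical, simplified stand-in for struct user_lock_res. */
#define USER_LOCK_IN_TEARDOWN 0x01
#define USER_LOCK_BUSY        0x02

struct fake_lockres {
	pthread_mutex_t l_lock;                    /* stands in for the kernel spinlock */
	unsigned int l_flags;
	int l_holders;                             /* readers/writers still using the lock */
	int (*dlm_unlock)(struct fake_lockres *);  /* stands in for ocfs2_dlm_unlock() */
};

/* Teardown with the corrected error handling: every flag set on entry is
 * reverted on the failure paths, and a teardown that is already pending is
 * reported as an error instead of being treated as success. */
int destroy_lock(struct fake_lockres *res)
{
	int status;

	pthread_mutex_lock(&res->l_lock);
	if (res->l_flags & USER_LOCK_IN_TEARDOWN) {
		/* A previous teardown attempt failed and is still pending:
		 * tell the caller so unlink does not free the inode. */
		pthread_mutex_unlock(&res->l_lock);
		return -EBUSY;
	}
	res->l_flags |= USER_LOCK_IN_TEARDOWN;

	if (res->l_holders) {
		/* Lock still in use: revert the flag so a later unlink
		 * can retry the teardown from scratch. */
		res->l_flags &= ~USER_LOCK_IN_TEARDOWN;
		pthread_mutex_unlock(&res->l_lock);
		return -EBUSY;
	}

	res->l_flags |= USER_LOCK_BUSY;
	pthread_mutex_unlock(&res->l_lock);

	status = res->dlm_unlock(res);
	if (status) {
		/* Unlock failed: undo both flags set by this function. */
		pthread_mutex_lock(&res->l_lock);
		res->l_flags &= ~(USER_LOCK_IN_TEARDOWN | USER_LOCK_BUSY);
		pthread_mutex_unlock(&res->l_lock);
		return status;
	}

	return 0;
}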
For the ocfs2_dlm_unlock() failure case, USER_LOCK_BUSY also needs to be cleared in addition to USER_LOCK_IN_TEARDOWN. Even though the spinlock is dropped in between, USER_LOCK_IN_TEARDOWN remains set; so if every place that waits on USER_LOCK_BUSY first checks USER_LOCK_IN_TEARDOWN and bails out, no flow can end up waiting on the busy flag set by user_dlm_destroy_lock(), and we can simply revert USER_LOCK_BUSY when ocfs2_dlm_unlock() fails. Fix user_dlm_cluster_lock(), which is the only function not following this.
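Under the same assumptions, a hypothetical caller-side helper mirroring what this paragraph says about user_dlm_cluster_lock() could look like the sketch below. It reuses struct fake_lockres and the flag defines from the previous example and checks the teardown flag before (and while) waiting on USER_LOCK_BUSY, so it can never block on a busy flag that a failed user_dlm_destroy_lock() is about to revert:

#include <sched.h>

/* Hypothetical sketch only; see struct fake_lockres above. */
int cluster_lock(struct fake_lockres *res)
{
	pthread_mutex_lock(&res->l_lock);

	for (;;) {
		if (res->l_flags & USER_LOCK_IN_TEARDOWN) {
			/* unlink is tearing this lock down: bail out instead
			 * of queueing behind the busy flag. */
			pthread_mutex_unlock(&res->l_lock);
			return -EAGAIN;
		}
		if (!(res->l_flags & USER_LOCK_BUSY))
			break;
		/* The kernel sleeps on a waitqueue here; a yield loop
		 * keeps the sketch short. */
		pthread_mutex_unlock(&res->l_lock);
		sched_yield();
		pthread_mutex_lock(&res->l_lock);
	}

	res->l_holders++;	/* lock granted to this caller */
	pthread_mutex_unlock(&res->l_lock);
	return 0;
}

With such a check in place, the only path that can still be waiting on USER_LOCK_BUSY during teardown is user_dlm_destroy_lock() itself, which is why clearing the flag on ocfs2_dlm_unlock() failure is safe.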
[ 941.336392] (python,26174,16):dlmfs_unlink:562 ERROR: unlink 004fb0000060000b5a90b8c847b72e1, error -16 from destroy
[ 989.757536] ------------[ cut here ]------------
[ 989.757709] kernel BUG at fs/ocfs2/dlmfs/userdlm.c:173!
[ 989.757876] invalid opcode: 0000 [#1] SMP
[ 989.758027] Modules linked in: ksplice_2zhuk2jr_ib_ipoib_new(O) ksplice_2zhuk2jr(O) mptctl mptbase xen_netback xen_blkback xen_gntalloc xen_gntdev xen_evtchn cdc_ether usbnet mii ocfs2 jbd2 rpcsec_gss_krb5 auth_rpcgss nfsv4 nfsv3 nfs_acl nfs fscache lockd grace ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs bnx2fc fcoe libfcoe libfc scsi_transport_fc sunrpc ipmi_devintf bridge stp llc rds_rdma rds bonding ib_sdp ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm falcon_lsm_serviceable(PE) falcon_nf_netcontain(PE) mlx4_vnic falcon_kal(E) falcon_lsm_pinned_13402(E) mlx4_ib ib_sa ib_mad ib_core ib_addr xenfs xen_privcmd dm_multipath iTCO_wdt iTCO_vendor_support pcspkr sb_edac edac_core i2c_i801 lpc_ich mfd_core ipmi_ssif i2c_core ipmi_si ipmi_msghandler
[ 989.760686] ioatdma sg ext3 jbd mbcache sd_mod ahci libahci ixgbe dca ptp pps_core vxlan udp_tunnel ip6_udp_tunnel megaraid_sas mlx4_core crc32c_intel be2iscsi bnx2i cnic uio cxgb4i cxgb4 cxgb3i libcxgbi ipv6 cxgb3 mdio libiscsi_tcp qla4xxx iscsi_boot_sysfs libiscsi scsi_transport_iscsi wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded: ksplice_2zhuk2jr_ib_ipoib_old]
[ 989.761987] CPU: 10 PID: 19102 Comm: dlm_thread Tainted: P OE 4.1.12-124.57.1.el6uek.x86_64 #2
[ 989.762290] Hardware name: Oracle Corporation ORACLE SERVER X5-2/ASM,MOTHERBOARD,1U, BIOS 30350100 06/17/2021
[ 989.762599] task: ffff880178af6200 ti: ffff88017f7c8000 task.ti: ffff88017f7c8000
[ 989.762848] RIP: e030:[<ffffffffc07d4316>] [<ffffffffc07d4316>] __user_dlm_queue_lockres.part.4+0x76/0x80 [ocfs2_dlmfs]
[ 989.763185] RSP: e02b:ffff88017f7cbcb8 EFLAGS: 00010246
[ 989.763353] RAX: 0000000000000000 RBX: ffff880174d48008 RCX: 0000000000000003
[ 989.763565] RDX: 0000000000120012 RSI: 0000000000000003 RDI: ffff880174d48170
[ 989.763778] RBP: ffff88017f7cbcc8 R08: ffff88021f4293b0 R09: 0000000000000000
[ 989.763991] R10: ffff880179c8c000 R11: 0000000000000003 R12: ffff880174d48008
[ 989.764204] R13: 0000000000000003 R14: ffff880179c8c000 R15: ffff88021db7a000
[ 989.764422] FS: 0000000000000000(0000) GS:ffff880247480000(0000) knlGS:ffff880247480000
[ 989.764685] CS: e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 989.764865] CR2: ffff8000007f6800 CR3: 0000000001ae0000 CR4: 0000000000042660
[ 989.765081] Stack:
[ 989.765167] 00000000000 ---truncated---