In the Linux kernel, the following vulnerability has been resolved:

netfs: Fix unbuffered write error handling

If all the subrequests in an unbuffered write stream fail, the subrequest
collector doesn't update the stream->transferred value and it retains its
initial LONG_MAX value.  Unfortunately, if all active streams fail, then we
take the smallest value of { LONG_MAX, LONG_MAX, ... } as the value to set
in wreq->transferred - which is then returned from ->write_iter().

LONG_MAX was chosen as the initial value so that all the streams can be
quickly assessed by taking the smallest value of all stream->transferred -
but this only works if we've set any of them.

Fix this by adding a flag to indicate whether the value in
stream->transferred is valid and checking that when we integrate the
values.  stream->transferred can then be initialised to zero.

This was found by running the generic/750 xfstest against cifs with
cache=none.  It splices data to the target file.  Once (if) it has used up
all the available scratch space, the writes start failing with ENOSPC.
This causes ->write_iter() to fail.  However, it was returning
wreq->transferred, i.e. LONG_MAX, rather than an error (because it thought
the amount transferred was non-zero) and iter_file_splice_write() would
then try to clean up that amount of pipe bufferage - leading to an oops
when it overran.

The kernel log showed:

    CIFS: VFS: Send error in write = -28

followed by:

    BUG: kernel NULL pointer dereference, address: 0000000000000008

with:

    RIP: 0010:iter_file_splice_write+0x3a4/0x520
    do_splice+0x197/0x4e0

or:

    RIP: 0010:pipe_buf_release (include/linux/pipe_fs_i.h:282)
    iter_file_splice_write (fs/splice.c:755)

Also put a warning check into splice to announce if ->write_iter() returned
that it had written more than it was asked to.
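The failure mode and the fix can be sketched as a toy model in C.  This is
a simplified illustration of the min-of-streams integration described
above, not the actual netfs code; the struct layout and the
collect_buggy()/collect_fixed() helpers are hypothetical:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a write stream; field names are illustrative. */
struct stream {
	bool active;            /* stream participates in this write */
	bool transferred_valid; /* the fix: set only once a subrequest succeeds */
	long transferred;       /* bytes written */
};

/* Buggy integration: each stream's transferred starts at LONG_MAX and the
 * minimum is taken unconditionally, so if every subrequest failed the
 * result is LONG_MAX rather than 0 or an error. */
static long collect_buggy(const struct stream *s, size_t n)
{
	long min = LONG_MAX;

	for (size_t i = 0; i < n; i++)
		if (s[i].active && s[i].transferred < min)
			min = s[i].transferred;
	return min;
}

/* Fixed integration: only streams whose transferred value was actually set
 * contribute to the minimum; transferred can then be initialised to zero. */
static long collect_fixed(const struct stream *s, size_t n)
{
	long min = LONG_MAX;
	bool any_valid = false;

	for (size_t i = 0; i < n; i++) {
		if (!s[i].active || !s[i].transferred_valid)
			continue;
		any_valid = true;
		if (s[i].transferred < min)
			min = s[i].transferred;
	}
	return any_valid ? min : 0; /* nothing was transferred at all */
}
```

The validity flag avoids overloading LONG_MAX as both "not yet set" and a
real byte count, which is what let the sentinel leak out to the caller and
into iter_file_splice_write()'s pipe-buffer cleanup.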