In the Linux kernel, the following vulnerability has been resolved:

RDMA/siw: Fix the sendmsg byte count in siw_tcp_sendpages

Ever since commit c2ff29e99a76 ("siw: Inline do_tcp_sendpages()"),
we have been doing this:

static int siw_tcp_sendpages(struct socket *s, struct page **page,
                             int offset, size_t size)
[...]
        /* Calculate the number of bytes we need to push, for this page
         * specifically */
        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
        /* If we can't splice it, then copy it in, as normal */
        if (!sendpage_ok(page[i]))
                msg.msg_flags &= ~MSG_SPLICE_PAGES;
        /* Set the bvec pointing to the page, with len $bytes */
        bvec_set_page(&bvec, page[i], bytes, offset);
        /* Set the iter to $size, aka the size of the whole sendpages (!!!) */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
try_page_again:
        lock_sock(sk);
        /* Sendmsg with $size size (!!!) */
        rv = tcp_sendmsg_locked(sk, &msg, size);

This means we have been sending oversized iov_iters and tcp_sendmsg()
calls for a while. This has been a benign bug because sendpage_ok()
always returned true. With the recent slab allocator changes being
slowly introduced into next (which disallow sendpage on large kmalloc
allocations), we have recently hit out-of-bounds crashes, due to slight
differences in iov_iter behavior between the MSG_SPLICE_PAGES and
"regular" copy paths:

(MSG_SPLICE_PAGES)
skb_splice_from_iter
  iov_iter_extract_pages
    iov_iter_extract_bvec_pages
      uses i->nr_segs to correctly stop in its tracks before OoB'ing
      everywhere
  skb_splice_from_iter gets a "short" read

(!MSG_SPLICE_PAGES)
skb_copy_to_page_nocache copy=iov_iter_count
[...]
  copy_from_iter
    /* this doesn't help */
    if (unlikely(iter->count < len))
            len = iter->count;
    iterate_bvec
      ... and we run off the bvecs

Fix this by properly setting the iov_iter's byte count, plus sending
the correct byte count to tcp_sendmsg_locked.
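
For reference, a minimal sketch of what the corrected per-page sequence
looks like: the per-page length ($bytes) is fed to both the iterator
setup and the send call, so the iov_iter's count matches the single
bvec it describes. Names mirror the excerpt above; this is illustrative
of the fix described, not the verbatim upstream diff:

        /* Push at most one page's worth of data per iteration */
        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);

        if (!sendpage_ok(page[i]))
                msg.msg_flags &= ~MSG_SPLICE_PAGES;

        bvec_set_page(&bvec, page[i], bytes, offset);
        /* Iterator byte count now covers exactly the one bvec set up above */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);
try_page_again:
        lock_sock(sk);
        /* Ask TCP to send only this page's worth of data */
        rv = tcp_sendmsg_locked(sk, &msg, bytes);

With this, both paths behave: the MSG_SPLICE_PAGES path no longer needs
i->nr_segs to bail it out, and the copy path's iterate_bvec never walks
past the end of the bvec array.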