In the Linux kernel, the following vulnerability has been resolved:
RDMA/siw: Fix the sendmsg byte count in siw_tcp_sendpages
Ever since commit c2ff29e99a76 ("siw: Inline do_tcp_sendpages()"), we have been doing this:
static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
                             size_t size)
[...]
        /* Calculate the number of bytes we need to push, for this page
         * specifically */
        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
        /* If we can't splice it, then copy it in, as normal */
        if (!sendpage_ok(page[i]))
                msg.msg_flags &= ~MSG_SPLICE_PAGES;
        /* Set the bvec pointing to the page, with len $bytes */
        bvec_set_page(&bvec, page[i], bytes, offset);
        /* Set the iter to $size, aka the size of the whole sendpages (!!!) */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
try_page_again:
        lock_sock(sk);
        /* Sendmsg with $size size (!!!) */
        rv = tcp_sendmsg_locked(sk, &msg, size);
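To make the mismatch concrete, here is a sketch with purely illustrative values (not taken from a real trace), assuming a page-aligned, three-page send with offset == 0 going through the loop quoted above:

        /* Illustrative values only: a page-aligned, three-page send */
        size_t size  = 3 * PAGE_SIZE;                           /* 12288 */
        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);  /* 4096 for page[0] */

        bvec_set_page(&bvec, page[0], bytes, offset);             /* bvec covers 4096 bytes */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size); /* iter claims 12288 */
        rv = tcp_sendmsg_locked(sk, &msg, size);                   /* asked for 12288 from a 4096-byte bvec */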
This means we've been sending oversized iov_iters and tcp_sendmsg calls for a while. This has been a benign bug because sendpage_ok() always returned true. With the recent slab allocator changes being slowly introduced into next (that disallow sendpage on large kmalloc allocations), we have recently hit out-of-bounds crashes, due to slight differences in iov_iter behavior between the MSG_SPLICE_PAGES and "regular" copy paths:
(MSG_SPLICE_PAGES)
  skb_splice_from_iter
    iov_iter_extract_pages
      iov_iter_extract_bvec_pages
        uses i->nr_segs to correctly stop in its tracks before OoB'ing everywhere
  skb_splice_from_iter gets a "short" read
(!MSG_SPLICE_PAGES)
  skb_copy_to_page_nocache copy=iov_iter_count
    [...]
      copy_from_iter
        /* this doesn't help */
        if (unlikely(iter->count < len))
                len = iter->count;
        iterate_bvec
          ... and we run off the bvecs
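The gate that kept this benign is sendpage_ok(). The helper below is paraphrased from include/linux/net.h for reference and may differ in detail from the tree you are reading; the point is that the slab allocator changes mentioned above make large kmalloc pages fail this check, which routes the send down the (!MSG_SPLICE_PAGES) copy path where the oversized byte count is no longer harmless:

        /* Paraphrased from include/linux/net.h: splicing a page reference is
         * only safe for pages that are not slab-backed and that carry a real
         * reference count.
         */
        static inline bool sendpage_ok(struct page *page)
        {
                return !PageSlab(page) && page_count(page) >= 1;
        }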
Fix this by properly setting the iov_iter's byte count, plus sending the correct byte count to tcp_sendmsg_locked.
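A minimal sketch of that fix, assuming the siw_tcp_sendpages() loop stays as quoted above (the actual patch may differ in detail):

        size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);

        bvec_set_page(&bvec, page[i], bytes, offset);
        /* Describe only this page's bytes to the iterator... */
        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);
        ...
        /* ...and pass the same per-page byte count to the TCP layer */
        rv = tcp_sendmsg_locked(sk, &msg, bytes);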