In the Linux kernel, the following vulnerability has been resolved:
bpf: support non-r10 register spill/fill to/from stack in precision tracking
Use instruction (jump) history to record instructions that performed register spill/fill to/from stack, regardless of whether this was done through the read-only r10 register or through any other register after copying r10 into it and potentially adjusting the offset.
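For illustration, a minimal sketch of the pattern this now covers, written with the kernel's BPF instruction macros (the register choices and the -8 offset are hypothetical, not taken from a real program):

  #include <linux/filter.h>

  /* r2 is spilled and later filled through r1, a copy of the frame pointer
   * r10 with an adjusted offset; precision tracking now records this stack
   * access just as if r10 had been used directly.
   */
  static const struct bpf_insn spill_via_copy[] = {
          BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),          /* r1 = r10 (frame pointer copy) */
          BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),         /* r1 += -8 (adjust offset)      */
          BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, 0),  /* *(u64 *)(r1 + 0) = r2 (spill) */
          BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, 0),  /* r3 = *(u64 *)(r1 + 0) (fill)  */
  };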
To make this work reliably, we push extra per-instruction flags into the instruction history, encoding the stack slot index (spi) and stack frame number in 10 extra flag bits that we take away from prev_idx in the instruction history entry. We don't touch the idx field for maximum performance, as it's checked most frequently during backtracking.
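As a rough sketch of that layout (the names and exact bit widths below are illustrative assumptions, not the kernel's verbatim definitions):

  #include <stdint.h>

  /* Pack the stack slot index (spi) and frame number into the 10 flag bits
   * carved out of prev_idx in a jump/instruction history entry.
   */
  #define HIST_FRAMENO_BITS  3   /* frame number: up to 8 call frames     */
  #define HIST_SPI_BITS      6   /* stack slot index: up to 64 slots      */
  #define HIST_STACK_ACCESS  (1U << (HIST_FRAMENO_BITS + HIST_SPI_BITS))

  struct hist_entry {
          uint32_t idx;             /* untouched: hottest field during backtracking */
          uint32_t prev_idx : 22;   /* insn index still fits comfortably            */
          uint32_t flags    : 10;   /* stack-access bit + spi + frame number        */
  };

  static inline uint32_t hist_stack_flags(int spi, int frameno)
  {
          return HIST_STACK_ACCESS |
                 ((uint32_t)spi << HIST_FRAMENO_BITS) |
                 (uint32_t)frameno;
  }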
This change removes basically the last remaining practical limitation of the precision backtracking logic in the BPF verifier. It fixes known deficiencies, but also opens up new opportunities to reduce the number of verified states, explored in subsequent patches.
There are only three differences in selftests' BPF object files according to veristat, all in the positive direction (fewer states).
File                                    Program        Insns (A)  Insns (B)  Insns (DIFF)   States (A)  States (B)  States (DIFF)
--------------------------------------  -------------  ---------  ---------  -------------  ----------  ----------  -------------
test_cls_redirect_dynptr.bpf.linked3.o  cls_redirect        2987       2864  -123 (-4.12%)         240         231    -9 (-3.75%)
xdp_synproxy_kern.bpf.linked3.o         syncookie_tc       82848      82661  -187 (-0.23%)        5107        5073   -34 (-0.67%)
xdp_synproxy_kern.bpf.linked3.o         syncookie_xdp      85116      84964  -152 (-0.18%)        5162        5130   -32 (-0.62%)
Note, I avoided renaming jmp_history to the more generic insn_hist to minimize the number of lines changed and potential merge conflicts between the bpf and bpf-next trees.
Notice also the cur_hist_entry pointer reset to NULL at the beginning of the instruction verification loop. This pointer avoids the problem of relying on the last jump history entry's insn_idx to determine whether we already have an entry for the current instruction or not. It can happen that we added a jump history entry because the current instruction is_jmp_point(), but we also need to add instruction flags for stack access. In this case, we don't want two entries, so we need to reuse the last added entry, if it is present.
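A minimal sketch of that get-or-reuse step, with hypothetical context, field, and helper names (the real verifier state and entry types differ):

  struct hist_entry;                           /* as sketched above             */

  struct verifier_ctx {
          struct hist_entry *cur_hist_entry;   /* reset to NULL per instruction */
  };

  /* assumed helper that appends a fresh entry for the current instruction */
  struct hist_entry *push_hist_entry(struct verifier_ctx *ctx);

  static struct hist_entry *get_hist_entry(struct verifier_ctx *ctx)
  {
          if (ctx->cur_hist_entry)             /* insn already pushed an entry  */
                  return ctx->cur_hist_entry;  /* (e.g. it is a jump point)     */
          ctx->cur_hist_entry = push_hist_entry(ctx);
          return ctx->cur_hist_entry;
  }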
Relying on insn_idx comparison has the same ambiguity problem as the one that was fixed recently in [0], so we avoid that.
[0] https://patchwork.kernel.org/project/netdevbpf/patch/20231110002638.4168352-3-andrii@kernel.org/