On 04/07/17 23:28, Daniel Borkmann wrote:
> Have you tried with cilium's BPF code? The kernel selftests are quite small,
> so not really pushing processed insns too far. I can send you a BPF obj file
> if that's easier for testing.

Results from the next (in-progress) version of the patch series, with the
'id' bugfix I mentioned in my other mail, and rebased onto an updated
net-next (0e72582). Numbers were collected with:
# tc filter add dev lo egress bpf da obj /path/to/bpf_object.o sec $section verb 2>&1 | grep "processed" | awk -e 'BEGIN { N = 0; }' -e '{ N += $2; }' -e 'END { print N; }'
Program                    net-next   short    full
bpf_lb_opt_-DLB_L3.o           4707    5872    6515
bpf_lb_opt_-DLB_L4.o           7662    8652    8976
bpf_lb_opt_-DUNKNOWN.o          727    2972    2960
bpf_lxc_opt_-DDROP_ALL.o      57725   85750   95412
bpf_lxc_opt_-DUNKNOWN.o       93676  134043  141706
bpf_netdev.o                  14702   24665   24251
bpf_overlay.o                  7303   10939   10999
Conclusion: the ptr&const and full-range min/max tracking patches make
little difference (at most a 10% increase, and sometimes a decrease); most
of the increase in processed insns comes from the basic "replace imm and
aux_off/align with tnums" patch.
So, based on what Alexei was saying earlier, it sounds like the answer for
now is to raise the limit (say, to a round 128k), get this series merged,
and then start work on pruning optimisation so we can hopefully bring the
limit back down again later. Does that sound reasonable?
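If memory serves, the limit in question is BPF_COMPLEXITY_LIMIT_INSNS in
kernel/bpf/verifier.c, so the bump would be a one-line change along these
lines (a sketch against then-current net-next, not a submitted patch):

```c
/* kernel/bpf/verifier.c (sketch): raise the processed-insns cap */
#define BPF_COMPLEXITY_LIMIT_INSNS	131072	/* was 65536 */
```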