Yeah, I still remember that "Who needs cpumap nowadays" (c), but anyway.
__xdp_build_skb_from_frame() missed the moment when the networking stack
became able to recycle skb pages backed by a Page Pool. This was making
e.g. cpumap redirect even less effective than simple %XDP_PASS. veth was
also affected in some scenarios.
A lot of drivers already use skb_mark_for_recycle(); it's been almost
two years and it seems there are no issues with using it in the generic
code as well. {__,}xdp_release_frame() can then be removed, as it loses
its last user.
Page Pool then becomes zero-alloc (or almost) in the abovementioned
cases, too. Other memory type models (who needs them at this point?)
see no changes.
Some numbers on 1 Xeon Platinum core bombed with 27 Mpps of 64-byte
IPv6 UDP:

Plain %XDP_PASS on baseline, Page Pool driver:

    src cpu Rx     drops      dst cpu Rx
      2.1 Mpps       N/A        2.1 Mpps

cpumap redirect (w/o leaving its node) on baseline:

      6.8 Mpps  5.0 Mpps        1.8 Mpps

cpumap redirect with skb PP recycling:

      7.9 Mpps  5.7 Mpps        2.2 Mpps    +22%
Alexander Lobakin (2):
  xdp: recycle Page Pool backed skbs built from XDP frames
  xdp: remove unused {__,}xdp_release_frame()
 include/net/xdp.h | 29 -----------------------------
 net/core/xdp.c    | 19 ++-----------------
 2 files changed, 2 insertions(+), 46 deletions(-)