Re: [PATCH RFC bpf-next 32/52] bpf, cpumap: switch to GRO from netif_receive_skb_list()
From: Daniel Xu
Date: Wed Aug 07 2024 - 16:38:41 EST
Hi Alexander,
On Tue, Jun 28, 2022, at 12:47 PM, Alexander Lobakin wrote:
> cpumap has its own BH context based on a kthread. It has a sane batch
> size of 8 frames per cycle.
> GRO can be used on its own; adjust the cpumap calls into the upper
> stack to use the GRO API instead of netif_receive_skb_list(), which
> processes skbs in batches but doesn't involve the GRO layer at all.
> It is most beneficial when the NIC the frames come from is XDP
> generic metadata-enabled, but in plenty of tests GRO performs better
> than list-ified receive even though it has to calculate full frame
> checksums on the CPU.
> As GRO passes the skbs to the upper stack in batches of
> @gro_normal_batch, i.e. 8 by default, and @skb->dev points to the
> device the frame comes from, it is enough to disable the GRO netdev
> feature on that device to completely restore the original behaviour:
> untouched frames will be bulked and passed to the upper stack in
> batches of 8, as with netif_receive_skb_list().
>
> Signed-off-by: Alexander Lobakin <alexandr.lobakin@xxxxxxxxx>
> ---
> kernel/bpf/cpumap.c | 43 ++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 38 insertions(+), 5 deletions(-)
>
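To make sure I follow, here's my rough mental model of the change. The
rcpu->napi field and the exact flush points below are my guesses for
illustration, not necessarily what your patch does; the point is just
feeding skbs through the GRO layer instead of list-ified receive:

	/* Today: the cpumap kthread hands each batch of skbs straight
	 * to the stack via list-ified receive, bypassing GRO entirely.
	 */
	list_add_tail(&skb->list, &list);
	netif_receive_skb_list(&list);

	/* With the patch (sketch): run the batch through GRO instead,
	 * assuming the cpumap entry carries its own NAPI context and
	 * the kthread is in a BH-disabled section.
	 */
	local_bh_disable();
	for (i = 0; i < n; i++)
		napi_gro_receive(&rcpu->napi, skbs[i]);
	/* Flush whatever GRO is still holding so frames don't linger
	 * past this batch; merged or untouched skbs then go up in
	 * batches of @gro_normal_batch.
	 */
	napi_gro_flush(&rcpu->napi, false);
	gro_normal_list(&rcpu->napi);
	local_bh_enable();
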
AFAICT cpumap + GRO is a good standalone improvement. I think
cpumap is still missing this.
I have a production use case for this now. We want to do some intelligent
RX steering, and I think GRO would help over list-ified receive in some cases.
We would prefer to steer in HW (and thus get the existing GRO support), but not
all of our NICs support it, so we need a software fallback.
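
Concretely, the fallback I'm picturing is the standard XDP -> CPUMAP
redirect, along these lines. The map sizing and the CPU-selection policy
below are made up for illustration; a real program would hash the flow or
consult a steering table:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	/* One CPUMAP entry per destination CPU; userspace fills in the
	 * qsize (and optionally a second-stage XDP prog) per slot.
	 */
	struct {
		__uint(type, BPF_MAP_TYPE_CPUMAP);
		__uint(max_entries, 64);
		__type(key, __u32);
		__type(value, struct bpf_cpumap_val);
	} cpu_map SEC(".maps");

	SEC("xdp")
	int steer_rx(struct xdp_md *ctx)
	{
		/* Placeholder policy: spread by RX queue index. */
		__u32 cpu = ctx->rx_queue_index % 64;

		/* Redirected frames are built into skbs on the target
		 * CPU's cpumap kthread, which is where GRO (rather than
		 * netif_receive_skb_list()) would kick in.
		 */
		return bpf_redirect_map(&cpu_map, cpu, 0);
	}

	char _license[] SEC("license") = "GPL";

With GRO in that kthread we'd get aggregation on the steered CPU even on
NICs that can't do the steering in hardware.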
Are you still interested in merging the cpumap + GRO patches?
Thanks,
Daniel