Re: [RFC PATCH net-next V2 0/6] XDP rx handler
From: Alexei Starovoitov
Date: Wed Aug 15 2018 - 22:49:22 EST
On Wed, Aug 15, 2018 at 03:04:35PM +0800, Jason Wang wrote:
>
> > > 3 Deliver XDP buff to userspace through macvtap.
> > I think I'm getting what you're trying to achieve.
> > You actually don't want any bpf programs in there at all.
> > You want macvlan builtin logic to act on raw packet frames.
>
> The built-in logic is just used to find the destination macvlan device. It
> could be done through another bpf program. Instead of inventing lots of
> generic infrastructure in the kernel with a specific userspace API, the
> built-in logic has its own advantages:
>
> - support hundreds or even thousands of macvlans
are you saying an xdp bpf program cannot handle thousands of macvlans?
> - using existing tools to configure the network
> - immunity to topology changes
what do you mean specifically?
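For context, the kind of map-based dispatch Alexei is alluding to can be sketched as a single XDP program that steers frames to thousands of devices by destination MAC. This is an illustrative sketch, not code from the patchset; the map name, size, and section layout are assumptions:

```c
/* Sketch (untested): XDP dispatch to many macvlan-like devices via a
 * hash map keyed by destination MAC. Illustrative only. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 4096);           /* scales to thousands of devices */
	__type(key, unsigned char[ETH_ALEN]); /* destination MAC */
	__type(value, __u32);                 /* target ifindex */
} mac_to_ifindex SEC(".maps");

SEC("xdp")
int macvlan_dispatch(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	__u32 *ifindex;

	if ((void *)(eth + 1) > data_end)
		return XDP_DROP;

	ifindex = bpf_map_lookup_elem(&mac_to_ifindex, eth->h_dest);
	if (!ifindex)
		return XDP_PASS;	/* unknown MAC: fall back to the stack */

	return bpf_redirect(*ifindex, 0);
}

char _license[] SEC("license") = "GPL";
```

Userspace would populate mac_to_ifindex as macvlans come and go, which is where the topology-change concern in the quoted list comes in.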
>
> Besides the usage for containers, we can implement a macvtap RX handler
> which allows fast packet forwarding to userspace.
and try to reinvent af_xdp? the motivation for the patchset still escapes me.
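For comparison, the established af_xdp path to userspace is a kernel-side XDP program that redirects frames into an XSKMAP, with AF_XDP sockets consuming them on the other end. A minimal sketch of that kernel side (map name is illustrative):

```c
/* Sketch (untested): redirect frames to AF_XDP sockets keyed by
 * rx queue index. Illustrative only. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_XSKMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} xsks SEC(".maps");

SEC("xdp")
int to_af_xdp(struct xdp_md *ctx)
{
	/* The AF_XDP socket bound to this rx queue receives the frame. */
	return bpf_redirect_map(&xsks, ctx->rx_queue_index, 0);
}

char _license[] SEC("license") = "GPL";
```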
> Actually, the idea is not limited to macvlan but applies to all devices
> that are based on rx handlers. Consider the case of bonding: this allows
> setting a very simple XDP program on the slaves and keeping a single
> main-logic XDP program on the bond instead of duplicating it in all slaves.
I think such a mixed environment of hardcoded in-kernel things like bond
mixed together with xdp programs will be difficult to manage and debug.
How is an admin supposed to debug it? Say something in the chain of
nic -> native xdp -> bond with your xdp rx -> veth -> xdp prog -> consumer
is dropping a packet. If all forwarding decisions are done by bpf progs,
the progs will have a packet tracing facility (like cilium does) to
show packet flow end-to-end. It works brilliantly, like traceroute within a host.
But when you have things like macvlan, bond, bridge in the middle
that can also act on packet, the admin will have a hard time.
Essentially what you're proposing is to make all kernel builtin packet
steering/forwarding facilities understand raw xdp frames. That's a lot of code,
and at the end of the chain you'd need a fast xdp frame consumer, otherwise
the perf benefits are lost. If that consumer is an xdp bpf program,
why bother with an xdp-fied macvlan or bond? If that consumer is the tcp stack,
then forwarding via an xdp-fied bond is no faster than via a skb-based bond.