Re: [PATCH] bpf: provide map key to BPF program after redirect
From: Florian Kauer
Date: Mon Jul 08 2024 - 11:29:42 EST
On 7/5/24 15:18, Toke Høiland-Jørgensen wrote:
> Florian Kauer <florian.kauer@xxxxxxxxxxxxx> writes:
>
>> On 7/5/24 13:01, Toke Høiland-Jørgensen wrote:
>>> Florian Kauer <florian.kauer@xxxxxxxxxxxxx> writes:
>>>
>>>> Both DEVMAP as well as CPUMAP provide the possibility
>>>> to attach BPF programs to their entries that will be
>>>> executed after a redirect was performed.
>>>>
>>>> With BPF_F_BROADCAST it is also possible to execute
>>>> BPF programs for multiple clones of the same XDP frame,
>>>> which is useful, for example, for establishing redundant
>>>> traffic paths by setting different VLAN tags for the
>>>> replicated XDP frames.
>>>>
>>>> Currently, the program itself has no information about
>>>> the map entry that led to its execution. While egress_ifindex
>>>> provides this information indirectly and allows
>>>> path-dependent processing of the replicated frames,
>>>> it does not work if multiple entries share the same egress_ifindex.
>>>>
>>>> Therefore, extend the xdp_md struct with a map_key
>>>> that contains the key of the associated map entry
>>>> after performing a redirect.
>>>>
>>>> See
>>>> https://lore.kernel.org/xdp-newbies/5eb6070c-a12e-4d4c-a9f0-a6a6fafa41d1@xxxxxxxxxxxxx/T/#u
>>>> for the discussion that led to this patch.
>>>>
>>>> Signed-off-by: Florian Kauer <florian.kauer@xxxxxxxxxxxxx>
>>>> ---
>>>> include/net/xdp.h | 3 +++
>>>> include/uapi/linux/bpf.h | 2 ++
>>>> kernel/bpf/devmap.c | 6 +++++-
>>>> net/core/filter.c | 18 ++++++++++++++++++
>>>> 4 files changed, 28 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/include/net/xdp.h b/include/net/xdp.h
>>>> index e6770dd40c91..e70f4dfea1a2 100644
>>>> --- a/include/net/xdp.h
>>>> +++ b/include/net/xdp.h
>>>> @@ -86,6 +86,7 @@ struct xdp_buff {
>>>> struct xdp_txq_info *txq;
>>>> u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
>>>> u32 flags; /* supported values defined in xdp_buff_flags */
>>>> + u64 map_key; /* set during redirect via a map */
>>>> };
>>>>
>>>> static __always_inline bool xdp_buff_has_frags(struct xdp_buff *xdp)
>>>> @@ -175,6 +176,7 @@ struct xdp_frame {
>>>> struct net_device *dev_rx; /* used by cpumap */
>>>> u32 frame_sz;
>>>> u32 flags; /* supported values defined in xdp_buff_flags */
>>>> + u64 map_key; /* set during redirect via a map */
>>>> };
>>>
>>> struct xdp_frame is size constrained, so we shouldn't be using precious
>>> space on this. Besides, it's not information that should be carried
>>> along with the packet after transmission. So let's put it into struct
>>> xdp_txq_info and read it from there the same way we do for egress_ifindex :)
>>
>> Very reasonable, but do you really mean struct xdp_frame or xdp_buff?
>> Only the latter has the xdp_txq_info.
>
> Well, we should have the field in neither, but xdp_frame is the one that
> is size constrained. Whenever a cpumap/devmap program is run (in
> xdp_bq_bpf_prog_run() and dev_map_bpf_prog_run_skb()), a struct
> xdp_txq_info is prepared on the stack, so you'll just need to add
> setting of the new value to that...
Ok, that makes sense, thanks!
However, I still need to pass the key via the xdp_dev_bulk_queue from
dev_map_enqueue_clone (and other places).
Just a single map_key per xdp_dev_bulk_queue won't work because the
same queue can be fed from different map entries if the tx interfaces
are the same. So I would tend towards a
u64 map_key[DEV_MAP_BULK_SIZE];
That being said, isn't there already a problem with two entries that
share the same interface but have different xdp_progs? Looking at the
code, I would expect the first xdp_prog to be executed for both entries.
There is already a long comment in devmap.c's bq_enqueue,
but does it really cover this situation?
Thanks,
Florian
> -Toke