On Sat, 24 Feb 2018 11:32:25 +0800
Jason Wang <jasowang@xxxxxxxxxx> wrote:
> Except for tuntap, all other drivers' XDP was implemented at NAPI
> poll() routine in a bh. This guarantees all XDP operation were done at
> the same CPU which is required by e.g BFP_MAP_TYPE_PERCPU_ARRAY. But

There is a typo in the defined name "BFP_MAP_TYPE_PERCPU_ARRAY".
Besides, it is NOT a requirement that comes from the map type
BPF_MAP_TYPE_PERCPU_ARRAY.

The requirement comes from the bpf_redirect_map helper (and only partly
the devmap + cpumap types), as the BPF helper/program stores information
in the per-CPU redirect_info struct (see filter.c), which is used by
xdp_do_redirect() and xdp_do_flush_map():
struct redirect_info {
	u32 ifindex;
	u32 flags;
	struct bpf_map *map;
	struct bpf_map *map_to_flush;
	unsigned long map_owner;
};
static DEFINE_PER_CPU(struct redirect_info, redirect_info);
[...]
void xdp_do_flush_map(void)
{
	struct redirect_info *ri = this_cpu_ptr(&redirect_info);
	struct bpf_map *map = ri->map_to_flush;
[...]
Notice the same redirect_info is used by the TC cls_bpf system...
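
For reference, a minimal sketch of the BPF-program side (hand-written
here, not taken from the patch; the "tx_port" map and the section names
are illustrative): the bpf_redirect_map() call only records the target
in the per-CPU redirect_info, and the driver acts on it later in
xdp_do_redirect()/xdp_do_flush_map().

/* Illustrative sketch, samples/bpf style; map name "tx_port" is made up. */
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") tx_port = {
	.type = BPF_MAP_TYPE_DEVMAP,
	.key_size = sizeof(int),
	.value_size = sizeof(int),
	.max_entries = 64,
};

SEC("xdp_redirect_map")
int xdp_redirect_map_prog(struct xdp_md *ctx)
{
	/* Redirect every frame to the ifindex stored at key 0. */
	return bpf_redirect_map(&tx_port, 0, 0);
}

char _license[] SEC("license") = "GPL";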
> for tuntap, we do it in process context and we try to protect XDP
> processing by RCU reader lock. This is insufficient since
> CONFIG_PREEMPT_RCU can preempt the RCU reader critical section which
> breaks the assumption that all XDP were processed in the same CPU.
>
> Fixing this by simply disabling preemption during XDP processing.

I guess this could paper over the problem...
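
A rough sketch of the pattern the patch describes (hand-written, not the
actual diff; the tun_run_xdp() function and its signature are
illustrative, only loosely modeled on the XDP path in drivers/net/tun.c):
preemption must stay disabled from running the program until the per-CPU
redirect state has been flushed.

/* Illustrative only, not the actual patch: keep the task on one CPU
 * from bpf_prog_run_xdp() until the per-CPU redirect_info has been
 * flushed, since this runs in process context rather than NAPI/BH.
 */
static u32 tun_run_xdp(struct net_device *dev, struct bpf_prog *xdp_prog,
		       struct xdp_buff *xdp)
{
	u32 act;

	preempt_disable();
	rcu_read_lock();

	act = bpf_prog_run_xdp(xdp_prog, xdp);
	if (act == XDP_REDIRECT) {
		/* Consumes the per-CPU redirect_info set by the helper. */
		xdp_do_redirect(dev, xdp, xdp_prog);
		xdp_do_flush_map();
	}

	rcu_read_unlock();
	preempt_enable();

	return act;
}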
But I generally find it problematic that tuntap does not invoke XDP
from the NAPI poll() routine in BH context, as that context provides us
with some protection that allows certain kinds of optimizations (like
this flush API). I hope it will not limit us in the future that the
tuntap driver violates the XDP call context.
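
To illustrate the optimization referred to, here is a hand-written
sketch of the usual NAPI-driver pattern (every "mydrv_*" name is made
up): because the whole poll budget runs in one softirq on one CPU, a
single xdp_do_flush_map() at the end of poll() flushes every
XDP_REDIRECT queued via the per-CPU redirect_info.

/* Hand-written illustration, not from any real driver: all "mydrv_*"
 * names are made up.  The point is the single xdp_do_flush_map() after
 * the loop, which is safe only because the whole bulk is processed on
 * one CPU in BH context.
 */
struct mydrv_rx_ring {
	struct napi_struct napi;
	struct net_device *netdev;
	struct bpf_prog *xdp_prog;
	/* hardware descriptor ring state elided */
};

/* Made-up helper: fill *xdp from the next RX descriptor, if any. */
static bool mydrv_next_rx_frame(struct mydrv_rx_ring *ring,
				struct xdp_buff *xdp);

static int mydrv_napi_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_rx_ring *ring = container_of(napi, struct mydrv_rx_ring, napi);
	bool flush_needed = false;
	int done = 0;

	while (done < budget) {
		struct xdp_buff xdp;

		if (!mydrv_next_rx_frame(ring, &xdp))
			break;

		switch (bpf_prog_run_xdp(ring->xdp_prog, &xdp)) {
		case XDP_REDIRECT:
			if (!xdp_do_redirect(ring->netdev, &xdp, ring->xdp_prog))
				flush_needed = true;
			break;
		/* XDP_PASS / XDP_TX / XDP_DROP handling elided */
		}
		done++;
	}

	if (flush_needed)
		xdp_do_flush_map();	/* one flush for the whole bulk */

	if (done < budget)
		napi_complete_done(napi, done);

	return done;
}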
> Fixes: 761876c857cb ("tap: XDP support")

$ git describe --contains 761876c857cb
v4.14-rc1~130^2~270^2