Re: [PATCH v9 net-next 15/15] net: Move per-CPU flush-lists to bpf_net_context on PREEMPT_RT.
From: Jakub Kicinski
Date: Fri Jun 21 2024 - 22:06:08 EST
On Thu, 20 Jun 2024 15:22:05 +0200 Sebastian Andrzej Siewior wrote:
> void __cpu_map_flush(void)
> {
> - struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
> + struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list();
> struct xdp_bulk_queue *bq, *tmp;
>
> list_for_each_entry_safe(bq, tmp, flush_list, flush_node) {
Most of the time we'll init the flush list just to walk it while it's
empty. It feels really tempting to check the init flag inside
xdp_do_flush() already. Since the various sub-flush handlers may not
get inlined - we could save ourselves not only the pointless init, but
also the function calls. So the code could end up faster than
before the changes?
Can be a follow up, obviously.