Re: [PATCH v6 net-next 1/5] xdp: allow same allocator usage

From: Ivan Khoronzhuk
Date: Thu Jul 04 2019 - 13:12:04 EST

On Thu, Jul 04, 2019 at 02:41:44PM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 4 Jul 2019 13:22:40 +0300
> Ivan Khoronzhuk <ivan.khoronzhuk@xxxxxxxxxx> wrote:

>> On Wed, Jul 03, 2019 at 07:40:13PM +0200, Jesper Dangaard Brouer wrote:
>>> On Wed, 3 Jul 2019 13:18:59 +0300
>>> Ivan Khoronzhuk <ivan.khoronzhuk@xxxxxxxxxx> wrote:
>>>> First of all, it is an absolute requirement that each RX-queue has
>>>> its own page_pool object/allocator. And this change is intended
>>>> to handle a special case, where a single RX-queue can receive packets
>>>> from two different net_devices.
>>>> In order to protect against using the same allocator for 2 different
>>>> rx queues, add queue_index to xdp_mem_allocator to catch the obvious
>>>> mistake where the queue_index mismatches, as proposed by Jesper
>>>> Dangaard Brouer.
>>>> Adding this on the xdp allocator level allows drivers with such a
>>>> dependency to change allocators w/o modifications.
>>>> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@xxxxxxxxxx>
>>>> ---
>>>>  include/net/xdp_priv.h |  2 ++
>>>>  net/core/xdp.c         | 55 ++++++++++++++++++++++++++++++++++++++++++
>>>>  2 files changed, 57 insertions(+)
>>>> diff --git a/include/net/xdp_priv.h b/include/net/xdp_priv.h
>>>> index 6a8cba6ea79a..9858a4057842 100644
>>>> --- a/include/net/xdp_priv.h
>>>> +++ b/include/net/xdp_priv.h
>>>> @@ -18,6 +18,8 @@ struct xdp_mem_allocator {
>>>>  	struct rcu_head rcu;
>>>>  	struct delayed_work defer_wq;
>>>>  	unsigned long defer_warn;
>>>> +	unsigned long refcnt;
>>>> +	u32 queue_index;
>>>>  };
>>> I don't like this approach, because I think we need to extend struct
>>> xdp_mem_allocator with a net_device pointer, for doing dev_hold(), to
>>> correctly handle lifetime issues. (As I tried to explain previously.)
>>> This will be much harder after this change, which is why I proposed the
>>> other patch.
>> My concern doesn't come out of nowhere either.
>> It's partly a continuation of unanswered questions from here:
>>
>> "For me it's important to know only if it means that alloc.count is
>> freed at the first call of __mem_id_disconnect() during shutdown.
>> The workqueue for the rest is connected only with the ring cache,
>> protected by the ring lock, and alloc.count is not supposed to change
>> while the workqueue tries to shut down the pool."

> Yes. The alloc.count is only freed on the first call. I considered
> changing the shutdown API to have two shutdown calls, where the call
> used from the work-queue will not have the loop emptying alloc.count,
> but will instead have a WARN_ON(alloc.count), as it MUST be empty once
> code is running from the work-queue.

>> So the patch you propose works only because of luck: the fast
>> cache is cleared before the workqueue is scheduled, and there are no
>> races between the two workqueues for the fast cache later. I'm not
>> really against this patch, but I have to try something better.

> It is not "luck". It does the correct thing, as we never enter the
> while loop in __page_pool_request_shutdown() from a work-queue, but it
> is not obvious from the code. The not-so-nice thing is that two
> work-queue shutdowns will be racing with each other in the multi
> netdev use-case, but access to the ptr_ring is safe/locked.

So, given this, and being prudent about generic code changes, let's roll
back to the idea from v4:
but use the changes from the following patch, reintroducing page destroy:
with appropriate small modifications for cpsw.

In case some issue connected with it shows up (not expected), or two or
more allocators are used by cpsw, or one more driver with such multi-ndev
capabilities appears (expected), it would be nice to use this link as a
reference, and it can be the base for similar modifications.

Unless Jesper disagrees with this, of course.

I will send v7 soon after verification is completed.

Ivan Khoronzhuk