Re: [net-next v5 PATCH] page_pool: handle page recycle for NUMA_NO_NODE condition
From: Jesper Dangaard Brouer
Date: Fri Dec 20 2019 - 05:41:29 EST
On Fri, 20 Dec 2019 12:23:14 +0200
Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx> wrote:
> Hi Jesper,
>
> I like the overall approach since this moves the check out of the hotpath.
> @Saeed, since I have no hardware to test this on, would it be possible to check
> that it still works fine for mlx5?
>
> [...]
> > + struct ptr_ring *r = &pool->ring;
> > + struct page *page;
> > + int pref_nid; /* preferred NUMA node */
> > +
> > + /* Quicker fallback, avoid locks when ring is empty */
> > + if (__ptr_ring_empty(r))
> > + return NULL;
> > +
> > + /* Softirq guarantees the CPU, and thus the NUMA node, is stable. This
> > + * assumes the CPU refilling the driver RX-ring will also run RX-NAPI.
> > + */
> > + pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
>
> One of the use cases for this is that during the allocation we are not
> guaranteed to pick up the correct NUMA node.
> This will get automatically fixed once the driver starts recycling packets.
>
> I don't feel strongly about this, since I don't usually like hiding value
> changes from the user, but would it make sense to move this into
> __page_pool_alloc_pages_slow() and change pool->p.nid?
>
> Since alloc_pages_node() will replace NUMA_NO_NODE with numa_mem_id()
> regardless, why not store the actual node in our page pool information?
> You can then skip this and check pool->p.nid == numa_mem_id(), regardless of
> what's configured.
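
If I read the suggestion correctly, in the slow path that would be roughly
the below (just an untested sketch to make sure we are talking about the
same thing; names as in the current page_pool.c):

	/* In __page_pool_alloc_pages_slow(), before the existing
	 * alloc_pages_node() call: resolve NUMA_NO_NODE once and store the
	 * actual node, so later checks can compare against a concrete node id.
	 */
	if (pool->p.nid == NUMA_NO_NODE)
		pool->p.nid = numa_mem_id();

	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);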
That single pref_nid line helps support drivers that control the nid
themselves. AFAIK mlx5 is the only driver using that feature today.
I do think it is useful to allow the driver to "control" the nid, as
pinning/preferring the pages to come from the NUMA node that the PCIe
controller hardware is attached to does have benefits.
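
For completeness, the driver pins the pool via the nid member of
page_pool_params at create time. A minimal sketch (the dev_to_node()
lookup and the variable names are just an example, not copied from mlx5):

	/* Driver-side sketch: prefer pages from the NUMA node the PCIe NIC
	 * is attached to.  'pdev' and 'rx_ring_size' are placeholder names.
	 */
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= rx_ring_size,
		.nid		= dev_to_node(&pdev->dev), /* NIC's NUMA node */
		.dev		= &pdev->dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool);

A driver that instead passes NUMA_NO_NODE gets the new pref_nid fallback
from the patch, which resolves to the local node of the CPU running RX-NAPI.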
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer