Re: [RFC PATCH net-next v6 02/15] net: page_pool: create hooks for custom page providers
From: Mina Almasry
Date: Fri Mar 22 2024 - 13:41:11 EST
Hi Christoph,
Sorry for the late reply, I've been out for a few days.
On Mon, Mar 18, 2024 at 4:22 PM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
>
> On Sun, Mar 17, 2024 at 07:49:43PM -0700, David Wei wrote:
> > I'm working on a similar proposal for zero copy Rx but to host memory
> > and depend on this memory provider API.
>
> How do you need a different provider for that vs just udmabuf?
>
This was discussed in one of the earliest io_uring ZC RFCs. Here is a
link to Pavel's response:
https://patchwork.kernel.org/project/netdevbpf/patch/20231106024413.2801438-6-almasrymina@xxxxxxxxxx/#25589471
To summarize: wrapping io_uring memory into a udmabuf just to use it
with devmem TCP, only for the user to have to unwrap it again, is a
UAPI he finds undesirable.
> > Jakub also designed this API for hugepages too IIRC. Basically there's
> > going to be at least three fancy ways of providing pages (one of which
> > isn't actually pages, hence the merged netmem_t series) to drivers.
>
> How do hugepages differ from a normal page allocation? They should
> just be a different order passed to the page allocator.
>
Yes, that's more or less what the hugepage memory provider Jakub
proposed does. The memory provider would allocate a hugepage and hold
a reference to it. Then, when the page_pool needs a page, the provider
would allocate a PAGE_SIZE page from said hugepage region and hand it
to the page_pool, which in turn hands it to the driver. This allows
hugepages to work without the page_pool or the driver needing to be
hugepage-aware or to insert hugepage-specific processing.
Other designs for this hugepage use case are possible; I'm just
describing Jakub's idea as a potential use case for these hooks. For
example, the page_pool does technically support non-0 order
allocations today, but most drivers set the order to 0 and use the
page_pool only for PAGE_SIZE allocations. An alternative design could
be to use that existing support, but it would require every driver to
adopt it individually, rather than a core networking change that
applies transparently (to a large extent) to all page_pool drivers.
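For reference, that existing knob is just the order field in the
page_pool params. The surrounding values below are made up for
illustration (including the hypothetical pdev), but the fields
themselves are the real page_pool API:

	struct page_pool_params pp_params = {
		.order		= HPAGE_PMD_ORDER, /* most drivers hardcode 0 */
		.pool_size	= 256,
		.nid		= NUMA_NO_NODE,
		.dev		= &pdev->dev,	/* hypothetical driver's device */
		.dma_dir	= DMA_FROM_DEVICE,
		.flags		= PP_FLAG_DMA_MAP,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

Note this is per-driver opt-in, which is exactly the adoption cost
described above.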
--
Thanks,
Mina