Re: [RFC PATCH v3 06/12] memory-provider: dmabuf devmem memory provider
From: Mina Almasry
Date: Sun Nov 12 2023 - 23:54:36 EST
On Fri, Nov 10, 2023 at 3:16 PM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Sun, 5 Nov 2023 18:44:05 -0800 Mina Almasry wrote:
> > +static int mp_dmabuf_devmem_init(struct page_pool *pool)
> > +{
> > + struct netdev_dmabuf_binding *binding = pool->mp_priv;
> > +
> > + if (!binding)
> > + return -EINVAL;
> > +
> > + if (pool->p.flags & PP_FLAG_DMA_MAP ||
> > + pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > + return -EOPNOTSUPP;
>
> This looks backwards, we should _force_ the driver to use the dma
> mapping built into the page pool APIs, to isolate the driver from
> how the DMA addr actually gets obtained. Right?
>
> Maybe seeing driver patches would illuminate.
The full tree with driver patches is here:
https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3
This is probably the most relevant patch; it implements proof-of-concept
page pool support in GVE, plus devmem support:
https://github.com/torvalds/linux/commit/3c27aa21eb3374f2f1677ece6258f046da234443
But to answer your question: yes, this is a mistake. devmem doesn't
need to be mapped, which is why I rejected the flag. What should
happen instead is what you said: we should enforce that
PP_FLAG_DMA_MAP is set and make the mapping a no-op, so the driver
never tries to map the devmem on its own.
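For illustration, a minimal userspace sketch of the corrected check,
inverting the flag test so PP_FLAG_DMA_MAP is required rather than
rejected. The struct definitions here are simplified stand-ins (the
real ones live in include/net/page_pool/types.h and the devmem
patches), not the actual kernel types:

```c
#include <errno.h>

/* Simplified stand-ins for the real kernel definitions. */
#define PP_FLAG_DMA_MAP      (1U << 0)
#define PP_FLAG_DMA_SYNC_DEV (1U << 1)

struct netdev_dmabuf_binding { int placeholder; };

struct page_pool_params { unsigned int flags; };

struct page_pool {
	struct page_pool_params p;
	void *mp_priv;
};

/* Sketch of the corrected init: require PP_FLAG_DMA_MAP so the driver
 * goes through the page pool's DMA-mapping path (which the devmem
 * provider turns into a no-op, since the dma-buf is mapped when the
 * binding is created), instead of forbidding the flag. */
static int mp_dmabuf_devmem_init(struct page_pool *pool)
{
	struct netdev_dmabuf_binding *binding = pool->mp_priv;

	if (!binding)
		return -EINVAL;

	/* The driver must delegate DMA mapping to the pool. */
	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
		return -EOPNOTSUPP;

	/* Syncing for device doesn't apply to device memory. */
	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		return -EOPNOTSUPP;

	return 0;
}
```

With this shape, a driver that sets PP_FLAG_DMA_MAP (and nothing DMA-sync
related) initializes successfully, while one that tries to handle mapping
itself is rejected up front.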
--
Thanks,
Mina