Re: [PATCH net-next 3/7] page_pool: avoid calling compound_head() for skb frag page

From: Ilias Apalodimas
Date: Thu Sep 23 2021 - 07:47:41 EST


On Thu, 23 Sept 2021 at 14:24, Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
>
> On 2021/9/23 16:33, Ilias Apalodimas wrote:
> > On Wed, Sep 22, 2021 at 05:41:27PM +0800, Yunsheng Lin wrote:
> >> Since the pp page for a skb frag is always a head page, make
> >> sure skb_pp_recycle() passes a head page to avoid calling
> >> compound_head() for the skb frag page case.
> >
> > Doesn't that mostly rely on the driver (i.e. what's passed in skb_frag_set_page())?
> > None of the current netstack code assumes bv_page is the head page of a
> > compound page. Since our page_pool allocator will allocate compound
> > pages for order > 0, why should we rely on it?
>
> The page pool alloc function returns a 'struct page *' to the caller, which
> is the head page of a compound page for order > 0, so I assume the caller
> will pass that to skb_frag_set_page().

Yea, that's exactly the assumption I was afraid of.
Sure, not passing the head page might seem weird atm and the assumption
holds today, but the point is we shouldn't blow up the entire network
stack if someone eventually does that.

>
> For a non-pp page, I assume it is ok whether the page is a head page or a
> tail page, as pp_magic for both of them is not set to PP_SIGNATURE.

Yea, that's true. Although we removed the check when coalescing
recyclable and non-recyclable SKBs, the next patch first checks the
signature before trying to do anything with the skb.

>
> Or should we play safe here, and do the trick as skb_free_head() does in
> patch 6?

I don't think the '&1' will even be measurable, so I'd suggest just
dropping this and playing it safe?

Cheers
/Ilias
>
> >
> > Thanks
> > /Ilias
> >>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@xxxxxxxxxx>
> >> ---
> >> include/linux/skbuff.h | 2 +-
> >> net/core/page_pool.c | 2 --
> >> 2 files changed, 1 insertion(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> >> index 6bdb0db3e825..35eebc2310a5 100644
> >> --- a/include/linux/skbuff.h
> >> +++ b/include/linux/skbuff.h
> >> @@ -4722,7 +4722,7 @@ static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
> >> {
> >> if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
> >> return false;
> >> - return page_pool_return_skb_page(virt_to_page(data));
> >> + return page_pool_return_skb_page(virt_to_head_page(data));
> >> }
> >>
> >> #endif /* __KERNEL__ */
> >> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> >> index f7e71dcb6a2e..357fb53343a0 100644
> >> --- a/net/core/page_pool.c
> >> +++ b/net/core/page_pool.c
> >> @@ -742,8 +742,6 @@ bool page_pool_return_skb_page(struct page *page)
> >> {
> >> struct page_pool *pp;
> >>
> >> - page = compound_head(page);
> >> -
> >> /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
> >> * in order to preserve any existing bits, such as bit 0 for the
> >> * head page of compound page and bit 1 for pfmemalloc page, so
> >> --
> >> 2.33.0
> >>