Re: [RFC net-next 0/2] mvpp2: page_pool support

From: Ilias Apalodimas
Date: Tue Dec 24 2019 - 04:52:37 EST


On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> This patchset changes the memory allocator of mvpp2 from the frag allocator to
> the page_pool API. This change is needed to later add XDP support to mvpp2.
>
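For context, the conversion essentially replaces the per-CPU frag allocator
with page_pool allocations along these lines (a minimal sketch; the helper
name, pool size and flags below are my guesses, not taken from the patches):

#include <net/page_pool.h>

/* One pool per CPU and per packet size, as the cover letter describes.
 * All parameters here are illustrative guesses.
 */
static struct page_pool *mvpp2_create_page_pool(struct device *dev,
						unsigned int size)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* single pages */
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps the buffers */
		.pool_size	= size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);
}

/* The rx refill path then calls page_pool_alloc_pages(pool, GFP_ATOMIC)
 * instead of the frag allocator.
 */
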
> The reason I'm sending it as an RFC is that with this changeset, mvpp2 performs
> much slower. This is the tc drop rate measured with a single flow:
>
> stock net-next with frag allocator:
> rx: 900.7 Mbps 1877 Kpps
>
> this patchset with page_pool:
> rx: 423.5 Mbps 882.3 Kpps
>
> This is the perf top when receiving traffic:
>
> 27.68%  [kernel]  [k] __page_pool_clean_page

This seems extremely high on the list. __page_pool_clean_page does the DMA
unmapping when a page leaves the pool, so seeing it on top of the profile
suggests the buffers are not being recycled back to the pool.

>  9.79%  [kernel]  [k] get_page_from_freelist
>  7.18%  [kernel]  [k] free_unref_page
>  4.64%  [kernel]  [k] build_skb
>  4.63%  [kernel]  [k] __netif_receive_skb_core
>  3.83%  [mvpp2]   [k] mvpp2_poll
>  3.64%  [kernel]  [k] eth_type_trans
>  3.61%  [kernel]  [k] kmem_cache_free
>  3.03%  [kernel]  [k] kmem_cache_alloc
>  2.76%  [kernel]  [k] dev_gro_receive
>  2.69%  [mvpp2]   [k] mvpp2_bm_pool_put
>  2.68%  [kernel]  [k] page_frag_free
>  1.83%  [kernel]  [k] inet_gro_receive
>  1.74%  [kernel]  [k] page_pool_alloc_pages
>  1.70%  [kernel]  [k] __build_skb
>  1.47%  [kernel]  [k] __alloc_pages_nodemask
>  1.36%  [mvpp2]   [k] mvpp2_buf_alloc.isra.0
>  1.29%  [kernel]  [k] tcf_action_exec
>
> I tried Ilias' patches for page_pool recycling, and I get an improvement
> to ~1100 Kpps, but I'm still far from the original allocator.

Can you post the recycling perf for comparison?

>
> Any idea on why I get such bad numbers?

Nope, but it's indeed strange.

>
> Another reason to send it as an RFC is that I'm not fully convinced about how
> to use the page_pool given the HW limitations of the BM.

I'll have a look right after the holidays.

>
> The driver currently uses, for every CPU, one page_pool for short packets and
> another for long ones. The driver also has 4 rx queues per port, so RXQ #1 of
> every port shares the short and long page pools of CPU #1.
>

I am not sure I am following the hardware config here.
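Just to check my reading, is the layout roughly this (pseudo-code, all names
invented)?

/* Per-CPU pair of pools; RXQ #n of every port maps to the pools of CPU #n. */
struct mvpp2_cpu_pools {
	struct page_pool *pool_short;	/* small packets */
	struct page_pool *pool_long;	/* large packets */
};

static struct mvpp2_cpu_pools cpu_pools[NR_CPUS];

/* 4 rx queues per port: RXQ #n of any port uses cpu_pools[n] */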

> This means that for every RX queue I call xdp_rxq_info_reg_mem_model() twice,
> on two different page_pools. Can this be a problem?
>
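If I read the description right, each rxq registration then ends up looking
roughly like this (sketch only, reusing the invented names from above):

/* Register the rxq once, then register two mem models against it */
err = xdp_rxq_info_reg(&rxq->xdp_rxq, port->dev, rxq->id);
if (err)
	return err;

err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
				 cpu_pools[cpu].pool_short);
if (!err)
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
					 cpu_pools[cpu].pool_long);

/* struct xdp_rxq_info holds a single xdp_mem_info, so unless I am
 * mistaken the second call simply overwrites the mem model set by
 * the first one.
 */
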
> As usual, ideas are welcome.
>
> Matteo Croce (2):
>   mvpp2: use page_pool allocator
>   mvpp2: memory accounting
>
> drivers/net/ethernet/marvell/Kconfig         |   1 +
> drivers/net/ethernet/marvell/mvpp2/mvpp2.h   |   7 +
> .../net/ethernet/marvell/mvpp2/mvpp2_main.c  | 142 +++++++++++++++---
> 3 files changed, 125 insertions(+), 25 deletions(-)
>
> --
> 2.24.1
>
Cheers
/Ilias