Re: [PATCH 1/2] mm/page_alloc: free order-0 pages through PCP in page_frag_free()
From: Ilias Apalodimas
Date: Mon Nov 05 2018 - 05:47:08 EST
Hi Aaron,
> page_frag_free() calls __free_pages_ok() to free the page back to
> Buddy. This is OK for high-order pages, but for order-0 pages it
> misses the optimization opportunity of using Per-Cpu-Pages and can
> cause zone lock contention when called frequently.
>
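For context, the pre-patch page_frag_free() in mm/page_alloc.c looks
roughly like the snippet below (paraphrased, not a verbatim quote of the
tree), which is why every order-0 frag free ends up in __free_pages_ok()
and, under load, on the zone lock:

void page_frag_free(void *addr)
{
	struct page *page = virt_to_head_page(addr);

	/* On the last reference the page always goes straight back to
	 * Buddy, even at order-0, bypassing the per-cpu page (PCP) lists. */
	if (unlikely(put_page_testzero(page)))
		__free_pages_ok(page, compound_order(page));
}
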
> Paweł Staszewski recently shared his result of 'how Linux kernel
> handles normal traffic'[1] and from perf data, Jesper Dangaard Brouer
> found the lock contention comes from page allocator:
>
> mlx5e_poll_tx_cq
> |
> --16.34%--napi_consume_skb
> |
> |--12.65%--__free_pages_ok
> | |
> | --11.86%--free_one_page
> | |
> | |--10.10%--queued_spin_lock_slowpath
> | |
> | --0.65%--_raw_spin_lock
> |
> |--1.55%--page_frag_free
> |
> --1.44%--skb_release_data
>
> Jesper explained how it happened: mlx5 driver RX-page recycle
> mechanism is not effective in this workload and pages have to go
> through the page allocator. The lock contention happens during
> mlx5 DMA TX completion cycle. And the page allocator cannot keep
> up at these speeds.[2]
>
> I thought that __free_pages_ok() was mostly freeing high-order
> pages and that this was lock contention on high-order pages, but
> Jesper explained in detail that __free_pages_ok() here is actually
> freeing order-0 pages, because mlx5 uses order-0 pages to satisfy
> its page pool allocation requests.[3]
>
> The free path as pointed out by Jesper is:
> skb_free_head()
> -> skb_free_frag()
> -> page_frag_free()
> And the pages being freed on this path are order-0 pages.
>
> Fix this by doing the same thing as __page_frag_cache_drain():
> send the page being freed to the PCP if it is an order-0 page, or
> directly to Buddy if it is a high-order page.
>
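Not quoting the patch itself here, but mirroring __page_frag_cache_drain()
would look roughly like the sketch below; free_unref_page() as the order-0
PCP path is my assumption, taken from what __page_frag_cache_drain() does:

void page_frag_free(void *addr)
{
	struct page *page = virt_to_head_page(addr);

	if (unlikely(put_page_testzero(page))) {
		unsigned int order = compound_order(page);

		/* Order-0 pages go to the per-cpu lists and avoid the zone
		 * lock; high-order pages keep the __free_pages_ok() path. */
		if (order == 0)
			free_unref_page(page);
		else
			__free_pages_ok(page, order);
	}
}
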
> With this change, Paweł has not noticed lock contention so far in
> his workload, and Jesper has seen a 7% performance improvement
> using a micro-benchmark, with the lock contention gone.
I did the same tests on a 'low-speed' 1Gbit interface on a Cortex-A53.
I used Socionext's netsec driver and switched buffer allocation from the
current scheme to the page_pool API (which by default allocates order-0
pages), roughly as sketched below.
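The conversion sets up a page_pool per RX ring and pulls order-0 pages
from it; something along these lines (illustrative sketch only, the sizes
and the 'priv' struct are placeholders, not the actual netsec code):

	struct page_pool_params pp_params = {
		.order		= 0,			/* order-0 pages */
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
		.pool_size	= 256,			/* placeholder: ~RX ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= priv->dev,		/* driver's struct device */
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	/* RX refill: order-0 pages, recycled by the pool when possible */
	struct page *page = page_pool_dev_alloc_pages(pool);
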
Running 'perf top' before and after the patch gave me the same results as
reported above: __free_pages_ok() disappeared from the profile and I got
an ~11% performance boost testing with 64-byte packets.
Acked-by: Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx>
Tested-by: Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx>