On 2025/1/14 22:31, Jesper Dangaard Brouer wrote:
On 10/01/2025 14.06, Yunsheng Lin wrote:
This patchset fixes a possible time-window problem for page_pool and
the dma API misuse problem mentioned in [1], and tries to avoid the
overhead of the fix through some optimizations.
From the performance data below, the overhead is not obvious for
time_bench_page_pool01_fast_path() and time_bench_page_pool02_ptr_ring()
due to run-to-run variation, and there is about 20ns of overhead for
time_bench_page_pool03_slow() from fixing the bug.
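For reference, a minimal sketch of the problematic pattern (hypothetical
code paraphrasing the page return path, not the actual kernel
implementation):

	/* Hypothetical sketch of the time-window problem, not the actual
	 * kernel code: an inflight page can be returned to the pool after
	 * the driver has unbound, so the unmap below may run against a
	 * DMA device that is already gone -- the dma API misuse in [1].
	 */
	static void sketch_return_page(struct page_pool *pool,
				       struct page *page)
	{
		dma_addr_t dma = page_pool_get_dma_addr(page);

		/* Time window: pool->p.dev may be unbound between the page
		 * leaving the pool and this unmap being reached.
		 */
		dma_unmap_page_attrs(pool->p.dev, dma,
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
		page_pool_set_dma_addr(page, 0);
		put_page(page);
	}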
My benchmarking on x86_64 CPUs looks significantly different.
- CPU: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
Benchmark (bench_page_pool_simple) results from before and after the patchset:
| Test name   | Cycles |       |      | Nanosec |        |        | %      |
| (tasklet_*) | Before | After | diff | Before  | After  | diff   | change |
|-------------+--------+-------+------+---------+--------+--------+--------|
| fast_path   |     19 |    24 |    5 |   5.399 |  6.928 |  1.529 |   28.3 |
| ptr_ring    |     54 |    79 |   25 |  15.090 | 21.976 |  6.886 |   45.6 |
| slow        |    238 |   299 |   61 |  66.134 | 83.298 | 17.164 |   26.0 |
#+TBLFM: $4=$3-$2::$7=$6-$5::$8=(($7/$5)*100);%.1f
My testing above shows clear performance regressions across the three
different page_pool operating modes.
I retested it on an arm64 server patch by patch, with the raw
performance data in the attachment; the results seem similar to before.
Before this patchset:
       fast_path    ptr_ring     slow
1.     31.171 ns    60.980 ns    164.917 ns
2.     28.824 ns    60.891 ns    170.241 ns
3.     14.236 ns    60.583 ns    164.355 ns

With patches 1-4:
4.     31.443 ns    53.242 ns    210.148 ns
5.     31.406 ns    53.270 ns    210.189 ns

With patches 1-5:
6.     26.163 ns    53.781 ns    189.450 ns
7.     26.189 ns    53.798 ns    189.466 ns

With patches 1-8:
8.     28.108 ns    68.199 ns    202.516 ns
9.     16.128 ns    55.904 ns    202.711 ns
I am not able to get hold of an x86 server yet; I might be able
to get one during the weekend.
Theoretically, patches 1-4 or 1-5 should not have much performance
impact on fast_path and ptr_ring except for the rcu_lock mentioned
in page_pool_napi_local(), so it would be good if patches 1-5 were
also tested in your testlab with the rcu_lock removed in
page_pool_napi_local(), as sketched below.
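For clarity, a minimal sketch of the locking I mean (hypothetical and
simplified, not the exact patched page_pool_napi_local()):

	/* Hypothetical, simplified sketch of page_pool_napi_local() with
	 * the extra RCU read-side section added by patches 1-5; this is
	 * the rcu_lock suspected of costing fast_path/ptr_ring time.
	 */
	static bool napi_local_sketch(const struct page_pool *pool)
	{
		const struct napi_struct *napi;
		bool local;

		if (unlikely(!in_softirq()))
			return false;

		/* The added lock: keeps pool->p.napi from going away under
		 * us, but is taken on every recycle in the fast path.
		 */
		rcu_read_lock();
		napi = READ_ONCE(pool->p.napi);
		local = napi &&
			READ_ONCE(napi->list_owner) == smp_processor_id();
		rcu_read_unlock();

		return local;
	}

Testing with the rcu_lock removed would mean dropping the
rcu_read_lock()/rcu_read_unlock() pair above and rerunning the
fast_path and ptr_ring benchmarks.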
Data also available in:
- https://github.com/xdp-project/xdp-project/blob/main/areas/mem/page_pool07_bench_DMA_fix.org