RE: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support

From: Shenwei Wang
Date: Fri Oct 07 2022 - 15:18:46 EST


Hi Jesper and Ilias,

The driver has a macro that configures the RX ring size. After testing
with different RX ring sizes, I found the strange result may have
something to do with the ring size.

I just tested with the xdpsock application.
-- Native here means running the command "xdpsock -i eth0"
-- SKB-Mode means running the command "xdpsock -S -i eth0"

RX Ring Size    16      32      64      128
Native          230K    227K    196K    160K
SKB-Mode        207K    208K    203K    204K

It seems that the smaller the ring size, the better the performance.
This result is also strange to me.
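
For reference, the ring size in this driver is a compile-time constant,
so each column above required rebuilding the driver. A minimal sketch of
the kind of macro involved (illustrative only; the exact macro names and
values in drivers/net/ethernet/freescale/fec.h may differ):

  /* Hypothetical sketch, not the literal fec.h contents.
   * With 4K pages and 2K frame buffers, FEC_ENET_RX_FRPPG is 2
   * buffers per page, so 8 pages give a 16-entry RX ring.
   */
  #define FEC_ENET_RX_PAGES   8
  #define FEC_ENET_RX_FRSIZE  2048
  #define FEC_ENET_RX_FRPPG   (PAGE_SIZE / FEC_ENET_RX_FRSIZE)
  #define RX_RING_SIZE        (FEC_ENET_RX_FRPPG * FEC_ENET_RX_PAGES)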

The following are the iperf test results.

RX Ring Size    16         64         128
iperf           300Mbps    830Mbps    933Mbps
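
For anyone skimming the thread below: the two solutions being compared
differ only in how the RX page is handed to the network stack. A
minimal sketch of the two paths (simplified; "rxq->page_pool" and the
build_skb() usage here are placeholders, not the actual fec code):

  /* Option A: skb_mark_for_recycle -- the page stays owned by the
   * page_pool, and the skb destructor returns it to the pool, so the
   * hot path avoids a fresh page allocation and DMA mapping.
   */
  skb = build_skb(page_address(page), PAGE_SIZE);
  skb_mark_for_recycle(skb);

  /* Option B: page_pool_release_page -- the page is unmapped and
   * disconnected from the pool before the skb goes up the stack, so
   * every packet later costs a page allocation plus a DMA map.
   */
  page_pool_release_page(rxq->page_pool, page);
  skb = build_skb(page_address(page), PAGE_SIZE);

That difference is what shows up as the roughly 2x gap in the SKB-mode
XDP_DROP numbers below (220K vs 102K pps).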

Thanks,
Shenwei

> -----Original Message-----
> From: Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx>
> Sent: Friday, October 7, 2022 3:08 AM
> To: Jesper Dangaard Brouer <jbrouer@xxxxxxxxxx>
> Cc: Shenwei Wang <shenwei.wang@xxxxxxx>; Andrew Lunn
> <andrew@xxxxxxx>; brouer@xxxxxxxxxx; David S. Miller
> <davem@xxxxxxxxxxxxx>; Eric Dumazet <edumazet@xxxxxxxxxx>; Jakub
> Kicinski <kuba@xxxxxxxxxx>; Paolo Abeni <pabeni@xxxxxxxxxx>; Alexei
> Starovoitov <ast@xxxxxxxxxx>; Daniel Borkmann <daniel@xxxxxxxxxxxxx>;
> Jesper Dangaard Brouer <hawk@xxxxxxxxxx>; John Fastabend
> <john.fastabend@xxxxxxxxx>; netdev@xxxxxxxxxxxxxxx; linux-
> kernel@xxxxxxxxxxxxxxx; imx@xxxxxxxxxxxxxxx; Magnus Karlsson
> <magnus.karlsson@xxxxxxxxx>; Björn Töpel <bjorn@xxxxxxxxxx>
> Subject: Re: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support
>
> Hi Jesper,
>
> On Thu, 6 Oct 2022 at 11:37, Jesper Dangaard Brouer <jbrouer@xxxxxxxxxx>
> wrote:
> >
> >
> >
> > On 05/10/2022 14.40, Shenwei Wang wrote:
> > > Hi Jesper,
> > >
> > > Here is the summary of "xdp_rxq_info" testing.
> > >
> > >             skb_mark_for_recycle    page_pool_release_page
> > >             Native    SKB-Mode      Native    SKB-Mode
> > > XDP_DROP    460K      220K          460K      102K
> > > XDP_PASS    80K       113K          60K       62K
> > >
> >
> > It is very pleasing to see the *huge* performance benefit that
> > page_pool provides when recycling pages for SKBs (via skb_mark_for_recycle).
> > I did expect a performance boost, but not around a 2x boost.
>
> Indeed that's a pleasant surprise. Keep in mind that if we convert more
> drivers, we can also get rid of the copy_break code sprinkled around in
> drivers.
>
> Thanks
> /Ilias
> >
> > I guess this platform has a larger overhead for DMA-mapping and
> > page-allocation.
> >
> > IMHO it would be valuable to include this result as part of the patch
> > description when you post the XDP patch again.
> >
> > The only strange result is that XDP_PASS 'Native' is slower than
> > 'SKB-mode'. I cannot explain why, as XDP_PASS essentially does nothing
> > and just follows the normal driver code path to the netstack.
> >
> > Thanks a lot for doing these tests.
> > --Jesper
> >
> > > The following are the testing logs.
> > >
> > > Thanks,
> > > Shenwei
> > >
> > > ### skb_mark_for_recycle solution ###
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        466,553      0
> > > XDP-RX CPU      total    466,553
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        226,272      0
> > > XDP-RX CPU      total    226,272
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        80,518       0
> > > XDP-RX CPU      total    80,518
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        113,681      0
> > > XDP-RX CPU      total    113,681
> > >
> > >
> > > ### page_pool_release_page solution ###
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        463,145      0
> > > XDP-RX CPU      total    463,145
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        104,443      0
> > > XDP-RX CPU      total    104,443
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        60,539       0
> > > XDP-RX CPU      total    60,539
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU      pps          issue-pps
> > > XDP-RX CPU      0        62,566       0
> > > XDP-RX CPU      total    62,566
> > >
> > >> -----Original Message-----
> > >> From: Shenwei Wang
> > >> Sent: Tuesday, October 4, 2022 8:34 AM
> > >> To: Jesper Dangaard Brouer <jbrouer@xxxxxxxxxx>; Andrew Lunn
> > >> <andrew@xxxxxxx>
> > >> Cc: brouer@xxxxxxxxxx; David S. Miller <davem@xxxxxxxxxxxxx>; Eric
> > >> Dumazet <edumazet@xxxxxxxxxx>; Jakub Kicinski <kuba@xxxxxxxxxx>;
> > >> Paolo Abeni <pabeni@xxxxxxxxxx>; Alexei Starovoitov
> > >> <ast@xxxxxxxxxx>; Daniel Borkmann <daniel@xxxxxxxxxxxxx>; Jesper
> > >> Dangaard Brouer <hawk@xxxxxxxxxx>; John Fastabend
> > >> <john.fastabend@xxxxxxxxx>; netdev@xxxxxxxxxxxxxxx; linux-
> > >> kernel@xxxxxxxxxxxxxxx; imx@xxxxxxxxxxxxxxx; Magnus Karlsson
> > >> <magnus.karlsson@xxxxxxxxx>; Björn Töpel <bjorn@xxxxxxxxxx>; Ilias
> > >> Apalodimas <ilias.apalodimas@xxxxxxxxxx>
> > >> Subject: RE: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP
> > >> support
> > >>
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Shenwei Wang
> > >>> Sent: Tuesday, October 4, 2022 8:13 AM
> > >>> To: Jesper Dangaard Brouer <jbrouer@xxxxxxxxxx>; Andrew Lunn
> > >> ...
> > >>> I haven't tested xdp_rxq_info yet, and will give it a try sometime later today.
> > >>> However, for the XDP_DROP test, I did try the xdp2 test case, and the
> > >>> testing result looks reasonable. The performance of native mode is
> > >>> much higher than skb-mode.
> > >>>
> > >>> # xdp2 eth0
> > >>> proto 0: 475362 pkt/s
> > >>>
> > >>> # xdp2 -S eth0 (page_pool_release_page solution)
> > >>> proto 17: 71999 pkt/s
> > >>>
> > >>> # xdp2 -S eth0 (skb_mark_for_recycle solution)
> > >>> proto 17: 72228 pkt/s
> > >>>
> > >>
> > >> Correction for "xdp2 -S eth0" (skb_mark_for_recycle solution):
> > >> proto 0: 0 pkt/s
> > >> proto 17: 122473 pkt/s
> > >>
> > >> Thanks,
> > >> Shenwei
> > >
> >