Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics

From: Jakub Kicinski
Date: Wed Aug 04 2021 - 08:36:56 EST


On Tue, 03 Aug 2021 16:57:22 -0700 Saeed Mahameed wrote:
> On Tue, 2021-08-03 at 13:49 -0700, Jakub Kicinski wrote:
> > On Tue,  3 Aug 2021 18:36:23 +0200 Alexander Lobakin wrote:
> > > Most of the driver-side XDP enabled drivers provide some statistics
> > > on XDP program runs and the different actions taken (number of
> > > passes, drops, redirects, etc.).
> >
> > Could you please share the statistics to back that statement up?
> > Having uAPI for XDP stats is pretty much making the recommendation
> > that drivers should implement such stats. The recommendation from
> > Alexei and others back in the day (IIRC) was that XDP programs should
> > implement stats, not the drivers, to avoid duplication.
>
> There are stats (mainly errors) that are not even visible or reported
> to the user prog,

Fair point, exceptions should not be performance critical.

> for that I had an idea in the past to attach an
> exception_bpf_prog provided by the user, where the driver/stack will
> report errors to this special exception_prog.

Or maybe we should turn trace_xdp_exception() into a call which
unconditionally collects exception stats? I think we can reasonably
expect the exception_bpf_prog to always be attached, right?

> > > Given that it's pretty much the same across all the drivers
> > > (which is obvious), we can implement some sort of "standardized"
> > > statistics using the Ethtool standard stats infra to eliminate a
> > > lot of code and stringset duplication, different approaches to
> > > counting these stats, and so on.
> >
> > I'm not 100% sold on the fact that these should be ethtool stats.
> > Why not rtnl_fill_statsinfo() stats? Current ethtool std stats are
> > all pretty Ethernet specific, and all HW stats. Mixing HW and SW
> > stats is what we're trying to get away from.
>
> XDP is always going to be eBPF based! Why not just report such stats
> to a special BPF_MAP? The BPF stack can collect the stats from the
> driver and report them to this special MAP upon user request.

Do you mean replacing the ethtool-netlink / rtnetlink etc. with
a new BPF_MAP? I don't think adding another category of uAPI thru
which netdevice stats are exposed would do much good :( Plus it
doesn't address the "yet another cacheline" concern.

To my understanding the need for stats recognizes the fact that (in
large organizations) fleet monitoring is done by different teams than
XDP development. So XDP team may have all the stats they need, but the
team doing fleet monitoring has no idea how to get to them.

To bridge the two worlds we need a way for the infra team to ask the
XDP for well-defined stats. Maybe we should take a page from the BPF
iterators book and create a program type for bridging the two worlds?
Called by networking core when dumping stats to extract from the
existing BPF maps all the relevant stats and render them into a well
known struct? Users' XDP design can still use a single per-cpu map with
all the stats if they so choose, but there's a way to implement more
optimal designs and still expose well-defined stats.

Maybe that's too complex, IDK.