Re: [PATCH net-next 03/21] ethtool, stats: introduce standard XDP statistics

From: Jakub Kicinski
Date: Wed Aug 04 2021 - 12:44:39 EST


On Wed, 4 Aug 2021 10:17:56 -0600 David Ahern wrote:
> On 8/4/21 6:36 AM, Jakub Kicinski wrote:
> >> XDP is always going to be eBPF based! Why not just report such stats
> >> to a special BPF_MAP? The BPF stack can collect the stats from the driver
> >> and report them to this special MAP upon user request.
> > Do you mean replacing the ethtool-netlink / rtnetlink etc. with
> > a new BPF_MAP? I don't think adding another category of uAPI thru
> > which netdevice stats are exposed would do much good :( Plus it
> > doesn't address the "yet another cacheline" concern.
> >
> > To my understanding the need for stats recognizes the fact that (in
> > large organizations) fleet monitoring is done by different teams than
> > XDP development. So XDP team may have all the stats they need, but the
> > team doing fleet monitoring has no idea how to get to them.
> >
> > To bridge the two worlds we need a way for the infra team to ask the
> > XDP side for well-defined stats. Maybe we should take a page from the
> > BPF iterators book and create a program type for bridging the two worlds?
> > Called by the networking core when dumping stats, to extract all the
> > relevant stats from the existing BPF maps and render them into a
> > well-known struct? Users' XDP design can still use a single per-cpu map with
> > all the stats if they so choose, but there's a way to implement more
> > optimal designs and still expose well-defined stats.
> >
> > Maybe that's too complex, IDK.
>
> I was just explaining to someone internally how to get stats at all of
> the different points in the stack to track down reasons for dropped packets:
>
> ethtool -S for h/w and driver
> tc -s for drops by the qdisc
> /proc/net/softnet_stat for drops at the backlog layer
> netstat -s for network and transport layer
>
> yet another command and API just adds to the nightmare of explaining and
> understanding these stats.

Are you referring to RTM_GETSTATS when you say "yet another command"?
RTM_GETSTATS exists and is used by offloads today.

I'd expect ip -s (-s) to be extended to run GETSTATS and display the XDP
stats. (Not sure why ip -s was left out of your list :))
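
For completeness, a rough sketch of what a userspace RTM_GETSTATS query
looks like (untested here, error handling trimmed, and the ifindex is
hard-coded purely as an example), just to show it's plain rtnetlink:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>
    #include <linux/rtnetlink.h>
    #include <linux/if_link.h>

    int main(void)
    {
            struct {
                    struct nlmsghdr nlh;
                    struct if_stats_msg ifsm;
            } req = {
                    .nlh.nlmsg_len    = NLMSG_LENGTH(sizeof(struct if_stats_msg)),
                    .nlh.nlmsg_type   = RTM_GETSTATS,
                    .nlh.nlmsg_flags  = NLM_F_REQUEST,
                    .ifsm.ifindex     = 2, /* example ifindex only */
                    .ifsm.filter_mask = IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_64),
            };
            char buf[8192];
            int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

            send(fd, &req, req.nlh.nlmsg_len, 0);
            ssize_t len = recv(fd, buf, sizeof(buf), 0);

            for (struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
                 NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                    if (nlh->nlmsg_type != RTM_NEWSTATS)
                            continue;

                    struct if_stats_msg *ifsm = NLMSG_DATA(nlh);
                    int alen = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifsm));
                    struct rtattr *rta = (struct rtattr *)
                            ((char *)ifsm + NLMSG_ALIGN(sizeof(*ifsm)));

                    for (; RTA_OK(rta, alen); rta = RTA_NEXT(rta, alen)) {
                            if (rta->rta_type != IFLA_STATS_LINK_64)
                                    continue;
                            struct rtnl_link_stats64 *s = RTA_DATA(rta);
                            printf("rx_packets=%llu tx_packets=%llu\n",
                                   (unsigned long long)s->rx_packets,
                                   (unsigned long long)s->tx_packets);
                    }
            }
            close(fd);
            return 0;
    }

That's the same NETLINK_ROUTE socket iproute2 already talks to, so
teaching ip to request and print new attributes shouldn't need any new
plumbing on the userspace side.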

> There is real value in continuing to use ethtool API for XDP stats. Not
> saying this reorg of the XDP stats is the right thing to do, only that
> the existing API has real user benefits.

RTM_GETSTATS is an existing API. New ethtool stats are intended to be HW
stats. I don't want to go back to ethtool being a dumping ground for all
stats because that's what the old interface encouraged.

> Does anyone have data that shows bumping a properly implemented counter
> causes a noticeable performance degradation, and if so, by how much? You
> mention 'yet another cacheline', but collecting stats on the stack and
> incrementing the driver structs at the end of the NAPI loop should not
> have a huge impact versus the value the stats provide.

Not sure, maybe Jesper has some numbers. Maybe Intel folks do?
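
FWIW the pattern David describes is roughly the below (a generic sketch,
not lifted from any real driver, all names made up): counters live on the
stack for the duration of the poll, and the shared per-ring structure is
written once at the end.

    #include <linux/types.h>
    #include <linux/u64_stats_sync.h>

    /* hypothetical per-ring counters, written only from napi context */
    struct ring_xdp_stats {
            u64 packets;
            u64 drops;
            struct u64_stats_sync syncp;
    };

    /* lives on the stack of the napi poll function */
    struct local_xdp_stats {
            unsigned int packets;
            unsigned int drops;
    };

    /* called once at the end of the poll loop: a single touch of the
     * shared cacheline per napi poll rather than one per packet */
    static void ring_xdp_stats_flush(struct ring_xdp_stats *rs,
                                     const struct local_xdp_stats *ls)
    {
            u64_stats_update_begin(&rs->syncp);
            rs->packets += ls->packets;
            rs->drops += ls->drops;
            u64_stats_update_end(&rs->syncp);
    }

The read side then uses the usual u64_stats_fetch_begin() /
u64_stats_fetch_retry() pairing, so the fast-path cost is essentially a
couple of adds per poll, not per packet.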

I'm just allergic to situations where a decision is made and then,
months later, patches are posted disregarding that decision, without
any analysis of why it was wrong. And while the maintainer who made
the decision is on vacation.
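
For reference, the "single per-cpu map with all the stats" approach
quoted above is usually something like this on the program side (purely
illustrative, names made up):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct xdp_stats {
            __u64 packets;
            __u64 bytes;
    };

    struct {
            __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, struct xdp_stats);
    } xdp_stats_map SEC(".maps");

    SEC("xdp")
    int xdp_count(struct xdp_md *ctx)
    {
            void *data_end = (void *)(long)ctx->data_end;
            void *data = (void *)(long)ctx->data;
            __u32 key = 0;
            struct xdp_stats *st;

            /* one per-cpu slot holding all the program's counters */
            st = bpf_map_lookup_elem(&xdp_stats_map, &key);
            if (st) {
                    st->packets++;
                    st->bytes += data_end - data;
            }
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

Which is exactly the problem for the fleet monitoring team: the layout
of that struct is private to whoever wrote the program, hence the idea
of a program type that renders it into something well-known.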