Re: [PATCH net-next v4 5/6] page_pool: add a lockdep check for recycling in hardirq
From: Alexander Duyck
Date: Tue Aug 08 2023 - 14:24:33 EST
On Tue, Aug 8, 2023 at 8:06 AM Alexander Lobakin
<aleksander.lobakin@xxxxxxxxx> wrote:
>
> From: Alexander Duyck <alexander.duyck@xxxxxxxxx>
> Date: Tue, 8 Aug 2023 07:52:32 -0700
>
> > On Tue, Aug 8, 2023 at 6:59 AM Alexander Lobakin
> > <aleksander.lobakin@xxxxxxxxx> wrote:
> >>
> >> From: Alexander Duyck <alexander.duyck@xxxxxxxxx>
> >> Date: Tue, 8 Aug 2023 06:45:26 -0700
>
> [...]
>
> >>>>> Secondly, rather than returning an error, is there any reason why we
> >>>>> couldn't just look at not returning the page and instead drop into the
> >>>>> release path, which wouldn't take the locks in the first place? Either
> >>>>
> >>>> That is an exception path to quickly catch broken drivers and fix
> >>>> them, so why bother? It's not something we have to live with.
> >>>
> >>> My concern is that the current "fix" consists of stalling a Tx ring.
> >>> We need to have a way to allow forward progress when somebody mixes
> >>> xdp_frame and skb traffic as I suspect we will end up with a number of
> >>> devices doing this since they cannot handle recycling the pages in
> >>> hardirq context.
> >>
> >> You may have seen that several vendors have already disabled recycling
> >> of XDP buffers in hardirq (= netpoll) in their drivers. hardirq is in
> >> general not meant for networking-related operations.
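(The driver pattern being referred to here is roughly a Tx-completion
guard like the below. This is a sketch, not lifted from any one driver;
a zero NAPI budget is how netpoll's hardirq-context cleanup shows up in
a driver's poll routine.)

	if (unlikely(!napi_budget))	/* netpoll: hardirq context */
		/* skip recycling, the pool's locks are off limits here */
		put_page(virt_to_head_page(xdpf->data));
	else
		xdp_return_frame(xdpf);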
> >
> > The whole idea behind the netpoll cleanup is to get the Tx buffers out
> > of the way so that we can transmit even after the system has crashed.
> > The idea isn't to transmit XDP buffers, but to get the buffers out of
> > the way in the cases where somebody is combining both xdp_frame and
> > sk_buff on the same queue due to a limited number of rings being
> > present on the device.
>
> I see now, thanks a lot!
>
> >
> > My concern is that at some point in the near future somebody is going
> > to have a system crash and instead of being able to get the crash log
> > message out via their netconsole it is going to get cut off because
> > the driver stopped cleaning the Tx ring because somebody was also
> > using it as an XDP redirect destination.
> >
> >>>
> >>> The only reason why the skbs don't have the problem is that they are
> >>> queued and then cleaned up in the net_tx_action. That is why I wonder
> >>> if we shouldn't look at adding some sort of support for doing
> >>> something like that with xdp_frame as well. Something like a
> >>> dev_kfree_pp_page_any to go along with the dev_kfree_skb_any.
> >>
> >> I still don't get why we would need to clean XDP buffers in hardirq;
> >> maybe someone could give me some links explaining why we need this and
> >> how it happens? netpoll is a very specific thing for some debug
> >> operations, isn't it? XDP shouldn't in general be enabled when this
> >> happens, should it?
> >
> > I think I kind of explained it above. It isn't so much about cleaning
> > the XDP buffers as getting them off of the ring and out of the way. If
> > we block a Tx queue because of an XDP buffer then we cannot use that
> > Tx queue. I would be good with us just deferring the cleanup like we
> > do with an sk_buff in dev_kfree_skb_irq; the only issue is that we
> > don't have the ability to put them on a queue since they don't have
> > prev/next pointers.
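(For reference, the skb-side deferral mentioned above is essentially
this, trimmed down from __dev_kfree_skb_irq() in net/core/dev.c: the
skb is chained onto a per-CPU completion queue via skb->next and freed
later from net_tx_action(). xdp_frame has no equivalent of skb->next,
which is the missing piece.)

	local_irq_save(flags);
	skb->next = __this_cpu_read(softnet_data.completion_queue);
	__this_cpu_write(softnet_data.completion_queue, skb);
	raise_softirq_irqoff(NET_TX_SOFTIRQ);
	local_irq_restore(flags);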
> >
> > I suppose an alternative to cleaning them might be to make it a
> > mandatory requirement that you cannot support netpoll and mix xdp_frame
> > and sk_buff on the same queue. If we enforced that, then my concern
> > about them blocking a queue would be addressed.
>
> I'm leaning more towards this one TBH. I don't see netpoll alone as a
> solid argument for introducing deferred queues for XDP frames :s
That was kind of my line of thought as well. That is why I was
thinking that instead of bothering with a queue, it might work just as
well to throw recycling out the window and call put_page() when we are
dealing with XDP in netpoll, forcing it into the free path. Then it
becomes more of an "_any" type handler.
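
Something along these lines is what I have in mind (untested sketch;
xdp_return_frame_any() is a made-up name, mirroring dev_kfree_skb_any()):

	/* Recycle through the page_pool in normal context; from hardirq
	 * (netpoll) skip recycling entirely and hand the page straight
	 * back to the allocator, since put_page() is safe there while
	 * the pool's ptr_ring lock is not.
	 */
	static void xdp_return_frame_any(struct xdp_frame *xdpf)
	{
		if (!in_hardirq() && !irqs_disabled()) {
			xdp_return_frame(xdpf);
			return;
		}

		put_page(virt_to_head_page(xdpf->data));
	}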