RE: [EXT] Re: [4.20 PATCH] Revert "mwifiex: restructure rx_reorder_tbl_lock usage"

From: Ganapathi Bhat
Date: Mon Jun 03 2019 - 23:08:03 EST

Hi Brian,

> > netif_rx_ni+0xe8/0x120
> > mwifiex_recv_packet+0xfc/0x10c [mwifiex]
> > mwifiex_process_rx_packet+0x1d4/0x238 [mwifiex]
> > mwifiex_11n_dispatch_pkt+0x190/0x1ac [mwifiex]
> > mwifiex_11n_rx_reorder_pkt+0x28c/0x354 [mwifiex]
> TL;DR: the problem was right here ^^^
> where you started running mwifiex_11n_dispatch_pkt() (via
> mwifiex_11n_scan_and_dispatch()) while holding a spinlock.
> When you do that, you eventually call netif_rx_ni(), which specifically defers
> to softirq contexts. Then, if you happen to have your flush timer expiring just
> before that, you end up in mwifiex_flush_data(), which also needs that
> spinlock.

Understood; thanks for the detailed explanation.

> There are a few possible ways to handle this:
> (a) prevent processing softirqs in that context; e.g., with
> local_bh_disable(). This seems somewhat of a hack.
> (Side note: I think most of the locks in this driver really could be
> spin_lock_bh(), not spin_lock_irqsave() -- we don't really care
> about hardirq context for 99% of these locks.)
> (b) restructure so that packet processing (e.g., netif_rx_ni()) is done
> outside of the spinlock.
> It's actually not that hard to do (b). You can just queue your skb's up in a
> temporary sk_buff_head list and process them all at once after you've
> finished processing the reorder table. I have a local patch to do this, and I
> might send it your way if I can give it a bit more testing.

OK, that would be good; we will run a complete test once we have the patch. (Or we can work on this ourselves and share it for review.)