RE: [RFC 0/7] Add support to process rx packets in thread

From: Rakesh Pillai
Date: Tue Jul 28 2020 - 12:59:31 EST




> -----Original Message-----
> From: David Laight <David.Laight@xxxxxxxxxx>
> Sent: Sunday, July 26, 2020 4:46 PM
> To: 'Sebastian Gottschall' <s.gottschall@xxxxxxxxxx>; Hillf Danton
> <hdanton@xxxxxxxx>
> Cc: Andrew Lunn <andrew@xxxxxxx>; Rakesh Pillai <pillair@xxxxxxxxxxxxxx>;
> netdev@xxxxxxxxxxxxxxx; linux-wireless@xxxxxxxxxxxxxxx; linux-
> kernel@xxxxxxxxxxxxxxx; ath10k@xxxxxxxxxxxxxxxxxxx;
> dianders@xxxxxxxxxxxx; Markus Elfring <Markus.Elfring@xxxxxx>;
> evgreen@xxxxxxxxxxxx; kuba@xxxxxxxxxx; johannes@xxxxxxxxxxxxxxxx;
> davem@xxxxxxxxxxxxx; kvalo@xxxxxxxxxxxxxx
> Subject: RE: [RFC 0/7] Add support to process rx packets in thread
>
> From: Sebastian Gottschall <s.gottschall@xxxxxxxxxx>
> > Sent: 25 July 2020 16:42
> > >> I agree. I can only say that I tested this patch recently because of
> > >> this discussion. It can be toggled via sysfs, but that does not work
> > >> for wifi drivers, which mostly use dummy netdev devices. For those I
> > >> made a small patch to get them working by calling napi_set_threaded
> > >> hardcoded in the drivers (see patch below).
>
> > > With CONFIG_THREADED_NAPI there is no need to handle what you did
> > > here in the NAPI core, because device drivers know better and are
> > > responsible for it before calling napi_schedule(n).
>
> > Yeah, but that approach will not work in some cases. Some drivers
> > take locks in their NAPI poll function, and in that case performance
> > degrades badly. I ran into this with the mvneta ethernet driver
> > (Marvell) and with mt76 tx polling (rx works). For mvneta it causes
> > very high latencies and packet drops; for mt76 it stalls packet
> > processing entirely. It simply doesn't work (though in no case did it
> > crash). So threading will only work for drivers that are compatible
> > with this approach; it cannot be used as a drop-in replacement, in my
> > view. It all comes down to the driver design.
>
> Why should it make (much) difference whether the NAPI callbacks (etc.)
> are run in the context of the interrupted process or in that of a
> specific kernel thread?
> The process flags (or whatever) can even be set so that it appears
> to be the expected 'softirq' context.
>
> In any case, running NAPI from a thread will just expose the next
> piece of code that runs for ages in softirq context.
> I think I've seen the tail end of memory being freed under RCU
> finally happening in softirq context and taking absolutely ages.
>
> David
>

Hi All,

Has the threaded NAPI change been posted upstream?
Is the conclusion of this discussion that "we cannot use threads for processing packets"?
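For context on what "threaded NAPI" looks like from a driver's point of view: the per-device opt-in that was eventually merged upstream (v5.12) exposes a dev_set_threaded() helper and a per-netdev `threaded` sysfs attribute. The sketch below is illustrative only; it is not taken from any patch in this thread, uses the mainline helper name rather than the napi_set_threaded name mentioned above, and `example_probe_napi` is a hypothetical driver function.

```c
/* Illustrative sketch: how a driver might opt into threaded NAPI using
 * the dev_set_threaded() helper merged upstream in v5.12. Not available
 * in the tree this thread discusses; shown only to make the discussion
 * concrete. example_probe_napi() is a hypothetical probe-path function.
 */
#include <linux/netdevice.h>

static int example_probe_napi(struct net_device *ndev,
			      struct napi_struct *napi,
			      int (*poll)(struct napi_struct *, int))
{
	/* Register the poll function as usual. */
	netif_napi_add(ndev, napi, poll, NAPI_POLL_WEIGHT);

	/* Ask the core to run this device's NAPI poll in a dedicated
	 * kernel thread instead of softirq context. Userspace can
	 * toggle the same setting via /sys/class/net/<dev>/threaded.
	 */
	return dev_set_threaded(ndev, true);
}
```

The same switch can be flipped at runtime without driver changes, e.g. `echo 1 > /sys/class/net/eth0/threaded`, which is the sysfs path referred to earlier in the thread.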

