Re: [PATCH net-next 1/3] net: stmmac: Fix NAPI poll in TX path when in multi-queue
From: Lendacky, Thomas
Date: Sat Feb 16 2019 - 12:22:09 EST
On 2/15/19 6:35 PM, Florian Fainelli wrote:
> On 2/15/19 5:42 AM, Jose Abreu wrote:
>> Commit 8fce33317023 introduced the concept of NAPI per channel and
>> independent cleaning of the TX path.
>>
>> This is currently breaking performance in some cases. The scenario
>> happens when all packets are being received on Queue 0 but TX is
>> performed on a Queue != 0.
>>
>> Fix this by using a different NAPI instance for each TX and RX queue,
>> as suggested by Florian.
>>
>> Signed-off-by: Jose Abreu <joabreu@xxxxxxxxxxxx>
>> Cc: Florian Fainelli <f.fainelli@xxxxxxxxx>
>> Cc: Joao Pinto <jpinto@xxxxxxxxxxxx>
>> Cc: David S. Miller <davem@xxxxxxxxxxxxx>
>> Cc: Giuseppe Cavallaro <peppe.cavallaro@xxxxxx>
>> Cc: Alexandre Torgue <alexandre.torgue@xxxxxx>
>> ---
>
> [snip]
>
>> - if (work_done < budget && napi_complete_done(napi, work_done)) {
>> - int stat;
>> + priv->xstats.napi_poll++;
>>
>> + work_done = stmmac_tx_clean(priv, budget, chan);
>> + if (work_done < budget && napi_complete_done(napi, work_done))
>
> You should not be bounding your TX queue cleanup by the NAPI budget; it
> should run unbounded and clean as much as it can, which could be the
> entire ring size if that is how many packets you pushed between
> interrupts. That could be the cause of the poor performance as well.
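For context, the pattern being described looks roughly like the sketch
below. It is a minimal illustration only, with placeholder example_*
names, not the actual stmmac code: TX completion runs to exhaustion,
and only the RX path honours the NAPI budget.

    /* Illustrative sketch, not the stmmac driver itself. */
    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
            struct example_channel *ch =
                    container_of(napi, struct example_channel, napi);
            int rx_done;

            /* Reclaim completed TX descriptors with no budget: keep
             * going until no completions remain, potentially the whole
             * ring if that many packets were sent between interrupts. */
            example_clean_tx_ring(ch);

            /* RX processing is the part bounded by the NAPI budget. */
            rx_done = example_clean_rx_ring(ch, budget);

            if (rx_done < budget && napi_complete_done(napi, rx_done))
                    example_enable_channel_irq(ch);

            return rx_done;
    }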
Won't returning the budget value cause this napi_poll routine to be called
again, so the driver can continue to clean TX packets? I thought this
just gives other drivers the opportunity to run their napi_poll routines
in between, so that they are not starved.
Thanks,
Tom
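The rescheduling behaviour in question, paraphrased very loosely from
the core NAPI softirq loop (simplified and illustrative, not the literal
kernel source):

    /* Loose paraphrase of the core poll loop: a routine that returns
     * its full weight stays scheduled and is rotated to the tail of
     * the list, so other pending NAPI instances get to run in between. */
    static void example_poll_loop(struct list_head *poll_list)
    {
            struct napi_struct *n, *tmp;

            list_for_each_entry_safe(n, tmp, poll_list, poll_list) {
                    int work = n->poll(n, n->weight);

                    if (work == n->weight)
                            list_move_tail(&n->poll_list, poll_list);
                    /* work < weight means the poll routine called
                     * napi_complete_done() and removed itself. */
            }
    }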