Re: [PATCH 1/3] sfc: revert "reduce the number of requested xdp ev queues"

From: Íñigo Huguet
Date: Wed Jul 07 2021 - 07:50:00 EST


On Wed, Jul 7, 2021 at 1:23 PM Edward Cree <ecree.xilinx@xxxxxxxxx> wrote:
> Should we then be using min(tx_per_ev, EFX_MAX_TXQ_PER_CHANNEL) in the
> DIV_ROUND_UP?

It could be another possibility, but currently that min() will always
evaluate to EFX_MAX_TXQ_PER_CHANNEL, because tx_per_ev will be 4 or 8
depending on the model. Anyway, I will add this change to v2, in case
any of these constants changes in the future.

> And on line 184 probably we need to set efx->xdp_tx_per_channel to the
> same thing, rather than blindly to EFX_MAX_TXQ_PER_CHANNEL as at
> present — I suspect the issue you mention in patch #2 stemmed from
> that.
> Note that if we are in fact hitting this limitation (i.e. if
> tx_per_ev > EFX_MAX_TXQ_PER_CHANNEL), we could readily increase
> EFX_MAX_TXQ_PER_CHANNEL at the cost of a little host memory, enabling
> us to make more efficient use of our EVQs and thus retain XDP TX
> support up to a higher number of CPUs.

Yes, that was a possibility I was thinking of as a long-term solution,
or even allocating the queues dynamically. Would that be a problem?
What's the reason for them being statically allocated? Also, what's
the reason for the channels being limited to 32? The hardware can be
configured to provide more than that, but the driver has this constant
limit.

Another question I have, thinking about the long-term solution: would
it be a problem to use the standard TX queues for XDP_TX/REDIRECT? At
least when we're hitting the resource limits, the XDP frames could be
enqueued to those queues. I think that taking netif_tx_lock, or a
per-queue lock, would avoid race conditions.

In any case, these are 2 different things: one is fixing this bug as
soon as possible, and the other is designing and implementing the
long-term solution to the resource-shortage problem.

Regards
--
Íñigo Huguet