Re: [PATCH 4/5] netdev: implement infrastructure for threadable napi irq

From: Eric Dumazet
Date: Thu Jun 16 2016 - 07:19:44 EST


On Thu, Jun 16, 2016 at 3:39 AM, Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
> We used a different setup to explicitly avoid the (guest) userspace
> starvation issue. Using a guest with 2 vCPUs (or more) and a single
> queue avoids the starvation issue, because the scheduler moves the
> user space processes to a different vCPU than the ksoftirqd thread.
>
> In the hypervisor, with a vanilla kernel, the qemu process receives a
> fair share of the CPU time, but considerably less than 100%, and its
> performance is bounded to a considerably lower throughput than the
> theoretical one.
>
>

Completely different setup than last time. I am kind of lost.

Are you trying to find the optimal way to demonstrate your patch can be useful?

In a case with 2 vCPUs, the _standard_ kernel will migrate the user
thread to the CPU not used by the IRQ, once the process scheduler can
see two threads competing on one CPU (ksoftirqd and the user thread)
while the other CPU is idle.
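
To illustrate the load-balancing behaviour this relies on: a toy
userspace analogue (only a sketch, it does not model ksoftirqd, just
two CPU-bound threads). On a machine with two or more CPUs the
scheduler quickly spreads them onto different CPUs, which
sched_getcpu() makes visible.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Two busy threads: with >= 2 CPUs the load balancer runs them on
 * different CPUs rather than leaving one CPU idle.
 */
static void *spin(void *name)
{
	for (;;) {
		for (volatile long i = 0; i < 100000000L; i++)
			; /* burn CPU */
		printf("%s on cpu %d\n", (char *)name, sched_getcpu());
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, spin, "producer");
	pthread_create(&b, NULL, spin, "consumer");
	pthread_join(a, NULL);	/* spin() never returns */
	return 0;
}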

Trying to shift the IRQ 'thread' is not nice, since the hardware IRQ
will still be delivered on the wrong CPU.

Unless user space forces CPU pinning? Then tell the user it should not.
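
(For reference, such forced pinning is typically just a
sched_setaffinity(2) call. A minimal sketch, assuming the NIC IRQ is
serviced on CPU 0; this is exactly what the user should not do:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* same CPU as the NIC IRQ: bad idea */

	if (sched_setaffinity(0, sizeof(set), &set) == -1) {
		perror("sched_setaffinity");
		return 1;
	}

	/* The receive loop now competes with ksoftirqd on CPU 0,
	 * and the scheduler is no longer allowed to separate them.
	 */
	return 0;
}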

The natural choice is to put both producer and consumer on the same
CPU for cache locality reasons (wake affine), but under stress to
allow the consumer to run on another CPU if one is available.

If the process scheduler fails to migrate the consumer, then there is
a bug that needs to be fixed.

Trying to migrate the producer, while hardware IRQs generally stick
to one CPU, is counter-intuitive and a source of reorders.

(Think of tunneling processing, re-injecting packets into the stack
with netif_rx().)
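
To make the reorder point concrete, a kernel-side sketch (hypothetical
decap helper, not taken from any driver): netif_rx() enqueues the skb
on the backlog of the CPU it is called from, so if the producer thread
migrates between two packets of the same flow, those packets land on
different per-CPU backlogs and can be delivered out of order.

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>

/* Hypothetical 8-byte outer header, for illustration only. */
struct my_tunnel_hdr {
	__be32 id;
	__be32 flags;
};

static int my_tunnel_decap(struct sk_buff *skb)
{
	/* Strip the outer header. */
	skb_pull(skb, sizeof(struct my_tunnel_hdr));
	skb->protocol = htons(ETH_P_IP);

	/*
	 * Re-inject into the stack: the skb is queued on the
	 * backlog of the *current* CPU. If the producer migrated
	 * since the previous packet of this flow, we just
	 * introduced a reorder.
	 */
	return netif_rx(skb);
}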