Re: Virtio-scsi multiqueue irq affinity
From: xuyihang
Date: Tue May 11 2021 - 08:38:54 EST
Hi Thomas,
The previous experiment requires a device driver that enables managed IRQs,
which I could not easily install on a recent mainline kernel.
Actually, what I was asking is whether we could change the managed-IRQ
behaviour a little, rather than reporting a bug.
So, to better illustrate the problem, I did another test to simulate this
scenario.
This time I wrote a kernel module. Its module_init function calls
request_irq() to register an IRQ; the IRQ handler queues a work item on the
system workqueue, and the work handler prints "work handler called".
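The module source is not included in this mail; below is a minimal sketch of
what it does. IRQ number 7 matches the transcript that follows; the
identifiers (test_irq_handler and so on) are illustrative, not the actual
ones.

#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

#define TEST_IRQ 7      /* matches /proc/irq/7 below */

static void test_work_handler(struct work_struct *work)
{
        pr_info("work handler called\n");
}

static DECLARE_WORK(test_work, test_work_handler);

static irqreturn_t test_irq_handler(int irq, void *dev_id)
{
        /* Queued on the per-CPU pool of the CPU that handles the IRQ */
        schedule_work(&test_work);
        return IRQ_HANDLED;
}

static int __init test_init(void)
{
        /* IRQF_SHARED requires a non-NULL dev_id cookie */
        return request_irq(TEST_IRQ, test_irq_handler, IRQF_SHARED,
                           "request_irq_test", &test_work);
}

static void __exit test_exit(void)
{
        free_irq(TEST_IRQ, &test_work);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");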
1. Register an IRQ for a fake new device, and queue the work handler
when the IRQ arrives.
/ # insmod request_irq.ko
2. Bind the IRQ to CPU3 (affinity mask 0x8).
/ # echo 8 > /proc/irq/7/smp_affinity
3. Start an RT process that consumes 100% CPU and bind it to CPU3.
/ # ./test.sh &
/ # taskset -p 8 100
pid 100's current affinity mask: f
pid 100's new affinity mask: 8
/ # chrt -f -p 1 100
pid 100's current scheduling policy: SCHED_OTHER
pid 100's current scheduling priority: 0
pid 100's new scheduling policy: SCHED_FIFO
pid 100's new scheduling priority: 1
/ # echo -1 >/proc/sys/kernel/sched_rt_runtime_us
/ # echo -1 >/proc/sys/kernel/sched_rt_period_us
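test.sh itself is not shown here; any pure busy loop serves, for example this
hypothetical script:

#!/bin/sh
# spin forever so the SCHED_FIFO task occupies 100% of its CPU
while :; do :; done

Note that the sched_rt_runtime_us = -1 write above disables RT throttling, so
the SCHED_FIFO task can monopolize CPU3.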
/ # top
Mem: 27376K used, 73224K free, 0K shrd, 0K buff, 8368K cached
CPU0: 0.0% usr 0.0% sys 0.0% nic 100% idle 0.0% io 0.0% irq 0.0% sirq
CPU1: 0.0% usr 0.0% sys 0.0% nic 100% idle 0.0% io 0.0% irq 0.0% sirq
CPU2: 0.0% usr 0.0% sys 0.0% nic 100% idle 0.0% io 0.0% irq 0.0% sirq
CPU3: 100% usr 0.0% sys 0.0% nic 0.0% idle 0.0% io 0.0% irq 0.0% sirq
Load average: 4.00 4.00 4.00 5/62 126
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
100 1 0 R 3252 3.2 3 26.3 {exe} ash ./test.sh
126 1 0 R 3252 3.2 1 0.8 top
...
4. Trigger the IRQ from debugfs.
/ # echo -n trigger > /sys/kernel/debug/irq/irqs/7
From dmesg we can tell the queued work handler is never called: the work item
is executed by a per-CPU kworker, which is a SCHED_OTHER task and is
therefore starved by the SCHED_FIFO process on CPU3.
I understand this behaviour is expected, but in practice the people working
on the RT application may be on a totally different team from the
device-driver developers. It would be nice to have a feature that excludes
some CPUs from managed IRQs.
On 2021/5/11 3:56, Thomas Gleixner wrote:
> Again. Please provide reports against the most recent mainline version
> and not against some randomly picked kernel variant.
This time I tried it on the current master branch:
Linux (none) 5.12.0-next-20210506+ #3 SMP Tue May 11 14:53:58 HKT 2021
x86_64 GNU/Linux
If we make some changes to this experiment:
1. If the RT application uses less than 100% of the CPU time, the problem
disappears.
2. If we change rq_affinity to 2, so that the block-layer completion softirq
is not handled on the same core as the RT thread, the problem also
disappears. However, this approach results in roughly a 10%-30% random-write
performance drop compared to rq_affinity = 1, presumably because
rq_affinity = 1 has better cache utilization.
echo 2 > /sys/block/sda/queue/rq_affinity
>> Therefore, I want to exclude some CPU from managed irq on boot
>> parameter,
> Why has this realtime thread to run on CPU0 and cannot move to some
> other CPU?
Yes, this realtime thread could move to another CPU, but I don't think it is
ideal to make it dodge the CPUs that handle managed IRQs. The OS also gives
little hint that an RT thread should not run on such a CPU. I think the
kernel should still be able to schedule the IRQ workqueue handler somehow,
since the RT thread is more like a user application, while the driver works
within kernel space.
>> which has a similar approach to 11ea68f553e2 ("genirq, sched/isolation:
>> Isolate from handling managed interrupts").
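(For reference, the mechanism that commit added is the managed_irq flag of
the isolcpus= boot parameter, used along the lines of:

isolcpus=managed_irq,3

with the CPU list here purely illustrative.)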
> Why can't you use the existing isolation mechanisms?
Full CPU isolation forbids other processes from utilizing that CPU. Sometimes
the RT thread does not use up all of the CPU time, in which case other
processes could be scheduled onto this CPU and run for a little while.
Thanks for your time,
Yihang