RE: [PATCH 1/2] block: default to rq_affinity=2 for blk-mq

From: Elliott, Robert (Server Storage)
Date: Wed Sep 10 2014 - 15:36:59 EST




> -----Original Message-----
> From: Jens Axboe [mailto:axboe@xxxxxxxxx]
> Sent: Wednesday, 10 September, 2014 1:15 PM
> To: Robert Elliott; Elliott, Robert (Server Storage); hch@xxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH 1/2] block: default to rq_affinity=2 for blk-mq
>
> On 09/09/2014 06:18 PM, Robert Elliott wrote:
> > From: Robert Elliott <elliott@xxxxxx>
> >
> > One change introduced by blk-mq is that it does all
> > the completion work in hard irq context rather than
> > soft irq context.
> >
> > On a 6 core system, if all interrupts are routed to
> > one CPU, then you can easily run into this:
> > * 5 CPUs submitting IOs
> > * 1 CPU spending 100% of its time in hard irq context
> > processing IO completions, not able to submit anything
> > itself
> >
> > Example with CPU5 receiving all interrupts:
> > CPU usage:       CPU0   CPU1   CPU2   CPU3   CPU4   CPU5
> > %usr:            0.00   3.03   1.01   2.02   2.00   0.00
> > %sys:           14.58  75.76  14.14   4.04  78.00   0.00
> > %irq:            0.00   0.00   0.00   1.01   0.00 100.00
> > %soft:           0.00   0.00   0.00   0.00   0.00   0.00
> > %iowait idle:   85.42  21.21  84.85  92.93  20.00   0.00
> > %idle:           0.00   0.00   0.00   0.00   0.00   0.00
> >
> > When the submitting CPUs are forced to process their own
> > completion interrupts, this steals time from new
> > submissions and self-throttles them.
> >
> > Without that, there is no direct feedback to the
> > submitters to slow down. The only feedback is:
> > * reaching max queue depth
> > * lots of timeouts, resulting in aborts, resets, soft
> > lockups and self-detected stalls on CPU5, bogus
> > clocksource tsc unstable reports, network
> > drop-offs, etc.
> >
> > The SCSI LLD can set affinity_hint for each of its
> > interrupts to request that a program like irqbalance
> > route the interrupts back to the submitting CPU.
> > The latest version of irqbalance ignores those hints,
> > though, instead offering an option to run a policy
> > script that could honor them. Otherwise, it balances
> > them based on its own algorithms. So, we cannot rely
> > on this.
> >
> > Hardware might perform interrupt coalescing to help,
> > but it cannot help 1 CPU keep up with the work
> > generated by many other CPUs.
> >
> > rq_affinity=2 helps by pushing most of the block layer
> > and SCSI midlayer completion work back to the submitting
> > CPU (via an IPI).
> >
> > Change the default rq_affinity=2 under blk-mq
> > so there's at least some feedback to slow down the
> > submitters.
>
> I don't think we should do this generically. For "sane" devices with
> multiple completion queues, and with proper affinity setting in the
> driver, this is going to be a loss.
>
> So let's not add it to QUEUE_FLAG_MQ_DEFAULT, but we can make it the
> default for nr_hw_queues == 1. I think that would be way saner.
>
> --
> Jens Axboe

If the interrupt does arrive on the submitting CPU, then it
meets the criteria for all the cases:
* 1: complete on any CPU
* 2: complete on submitting CPU's node (QUEUE_FLAG_SAME_COMP)
* 3: complete on submitting CPU (QUEUE_FLAG_SAME_FORCE)

and __blk_complete_request handles it locally rather
than sending an IPI:

        if (req->cpu != -1) {
                ccpu = req->cpu;
                if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags))
                        shared = cpus_share_cache(cpu, ccpu);
        } else
                ccpu = cpu;
        ...
        if (ccpu == cpu || shared) {
                struct list_head *list;
do_local:
                ...
        } else if (raise_blk_irq(ccpu, req))
                goto do_local;
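
If I'm reading block/blk-sysfs.c right, the rq_affinity sysfs store
maps its value onto those two flags roughly like this (paraphrased
from memory, locking elided, so treat it as a sketch rather than the
exact code):

        /* paraphrase of queue_rq_affinity_store(); queue_lock elided */
        if (val == 2) {
                /* rq_affinity=2: force completion onto the submitting CPU */
                queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
                queue_flag_set(QUEUE_FLAG_SAME_FORCE, q);
        } else if (val == 1) {
                /* rq_affinity=1: complete within the submitting CPU's group */
                queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
                queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
        } else {
                /* rq_affinity=0: complete wherever the interrupt lands */
                queue_flag_clear(QUEUE_FLAG_SAME_COMP, q);
                queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
        }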


Are you saying you want the blk_queue_bio submission path to
not even set the req->cpu field (which defaults to -1):

        if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags))
                req->cpu = raw_smp_processor_id();

when you expect the interrupt routing to be good, so that
__blk_complete_request can avoid the test_bit and
cpus_share_cache calls?
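
On your nr_hw_queues == 1 idea, I picture something along these lines
in blk_mq_init_queue() -- untested sketch, and the exact spot where
the flags get set is a guess on my part:

        /*
         * Sketch only: force completions back onto the submitting CPU
         * when the driver exposes a single hardware queue.  Devices
         * with per-CPU completion queues and proper irq affinity would
         * keep the current behavior.
         */
        if (set->nr_hw_queues == 1)
                q->queue_flags |= (1 << QUEUE_FLAG_SAME_COMP) |
                                  (1 << QUEUE_FLAG_SAME_FORCE);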

With irqbalance no longer honoring affinity_hint
by default, I'm worried that most LLDs will not find
their interrupts routed that way anymore. That's
how we ran into this: scsi-mq on kernel 3.17 on an
up-to-date RHEL 6.5 distro (which now carries the
new irqbalance).

We plan to create a policy script for the new irqbalance
to cover hpsa devices, but other high-IOPS drivers will hit
the same problem.
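
In case it helps others reading along, the driver side of the hint
looks roughly like the sketch below.  The structure and function names
here (example_host, example_set_irq_hints) are made up for
illustration and are not the actual hpsa code; only
irq_set_affinity_hint() and the cpumask helpers are real kernel
interfaces:

#include <linux/interrupt.h>
#include <linux/cpumask.h>
#include <linux/pci.h>

/* Hypothetical per-host state; the real driver's structures differ. */
struct example_host {
        int nvectors;
        struct msix_entry *msix_entries;
};

static void example_set_irq_hints(struct example_host *h)
{
        int i;
        int cpu = cpumask_first(cpu_online_mask);

        for (i = 0; i < h->nvectors; i++) {
                /*
                 * Publish /proc/irq/<n>/affinity_hint for each vector.
                 * It is only a hint: the new irqbalance ignores it
                 * unless a policy script tells it to honor the hint.
                 */
                irq_set_affinity_hint(h->msix_entries[i].vector,
                                      cpumask_of(cpu));

                /* spread the vectors round-robin across online CPUs */
                cpu = cpumask_next(cpu, cpu_online_mask);
                if (cpu >= nr_cpu_ids)
                        cpu = cpumask_first(cpu_online_mask);
        }
}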

---
Rob Elliott HP Server Storage