Re: [PATCH v7] NVMe: conversion to blk-mq

From: Jens Axboe
Date: Fri Jun 13 2014 - 16:57:26 EST


On 2014-06-13 13:29, Jens Axboe wrote:
On 06/13/2014 01:22 PM, Keith Busch wrote:
On Fri, 13 Jun 2014, Jens Axboe wrote:
OK, same setup as mine. The affinity hint is really screwing us over, no
question about it. We just need a:

irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector,
		      hctx->cpumask);

in the ->init_hctx() methods to fix that up.

That brings us to roughly the same performance, except for the cases
where the dd is run on the thread sibling of the core handling the
interrupt. And granted, with the 16 queues used, that'll happen on
blk-mq. But since you have 32 threads and just 31 IO queues, the non
blk-mq driver must end up sharing for some cases, too.

So what do we care most about here? Consistency, or using all queues at
all costs?

I think we want to use all h/w queues regardless of mismatched sharing. A
24 thread server shouldn't use more of the hardware than a 32 thread one.

You're right, the current driver shares queues on anything with 32 or
more cpus with this NVMe controller, but we wrote an algorithm that
allocates as many queues as possible and tries to group the sharing cpus
with their nearest neighbors.
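
Roughly the idea (an illustrative sketch only, with a made-up helper, not
the driver's actual code): spread the cpus across however many queues the
controller gives us, so neighboring cpus end up sharing a queue when there
are fewer queues than cpus:

unsigned int cpu_to_queue(unsigned int cpu, unsigned int nr_cpus,
			  unsigned int nr_queues)
{
	/*
	 * Consecutive cpus (e.g. hyperthread siblings) map to the same
	 * queue when nr_queues < nr_cpus, and every queue gets used.
	 */
	return (cpu * nr_queues) / nr_cpus;
}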

One performance oddity we observe is that servicing the interrupt on the
thread sibling of the core that submitted the I/O is the worst performing
cpu you can choose; it's actually better to use a different core on the
same node. At least that's true as long as you're not utilizing the cpus
for other work, so YMMV.

I played around with the mappings, and stumbled upon some pretty ugly
results. The back story is that on this test box, I limit max C state to
C1 to avoid having too much of a bad time with power management. Running
the dd on a specific core yields somewhere around 52MB/sec for me.
That's with the right CPU affinity for the irq. If I purposely put it
somewhere else, I end up at 380-390MB/sec. Or if I leave it on the right
CPU but simply do:

perf record -o /dev/null dd if= ...

and run the same thing just traced, I get the high performance as well.

Indeed... So I went to take a look at what is going on. For the slow
case, turbostat tells me I'm spending 80% in C1. For the fast case,
we're down to 20% in C1.

I then turn off C1, but lo and behold, it's still slow and sucky even
if turbostat now verifies that it's spending 0% time in C1.

Now, this smells like scheduling artifacts. I'm going to turn off all
power junk and see what happens. Because with an 8x difference between fast
and slow, irq mappings don't really matter at all here. In fact it shows
results contrary to what you'd like to see.

OK, so I think I know what is going on here. If we slow down the next issue just a little bit, the device will have cached the next read. Essentially we get some parallelism out of a sync read, since it is sequential. For random 4k reads, it behaves as expected.
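
For anyone who wants to poke at this themselves, here's a rough, hypothetical
user-space sketch (not part of the patch, device path and delay are just
placeholders): sequential 4k O_DIRECT reads with an optional gap between
submissions. If the device prefetches the next sequential block, adding a
small per-read delay can paradoxically raise the measured wall-clock
throughput, which is the effect described above.

/* seqread.c - hypothetical repro sketch, not from the patch */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";	/* placeholder */
	long delay_us = argc > 2 ? atol(argv[2]) : 0;		/* gap between reads */
	const size_t bs = 4096;
	const size_t total = 256UL << 20;	/* read 256MB worth of 4k blocks */
	struct timespec t0, t1;
	void *buf;
	off_t off;
	int fd;

	if (posix_memalign(&buf, 4096, bs))
		return 1;
	fd = open(dev, O_RDONLY | O_DIRECT);	/* bypass page cache/readahead */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (off = 0; off < (off_t)total; off += bs) {
		if (pread(fd, buf, bs, off) != (ssize_t)bs) {
			perror("pread");
			return 1;
		}
		if (delay_us)
			usleep(delay_us);	/* "slow down the next issue" */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.1f MB/sec\n", total / secs / 1e6);
	close(fd);
	free(buf);
	return 0;
}

Compare something like "./seqread /dev/nvme0n1 0" against "./seqread
/dev/nvme0n1 50" and see whether the delayed run comes out ahead.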

For reference, the attached patch brings back the affinity to what we want it to be.

We can always diddle with how many of the hardware queues we utilize later; I don't see that as a huge issue at all.

--
Jens Axboe

diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index ee48ac5..8dc5d36 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -178,6 +178,9 @@ static int nvme_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
 		nvmeq->hctx = hctx;
 	else
 		WARN_ON(nvmeq->hctx->tags != hctx->tags);
+
+	irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector,
+				hctx->cpumask);
 	hctx->driver_data = nvmeq;
 	return 0;
 }
@@ -581,6 +584,7 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
 	enum dma_data_direction dma_dir;
 	int psegs = req->nr_phys_segments;
 	int result = BLK_MQ_RQ_QUEUE_BUSY;
+
 	/*
 	 * Requeued IO has already been prepped
 	 */
@@ -1788,6 +1792,7 @@ static struct nvme_ns *nvme_alloc_ns(struct nvme_dev *dev, unsigned nsid,
 	queue_flag_set_unlocked(QUEUE_FLAG_DEFAULT, ns->queue);
 	queue_flag_set_unlocked(QUEUE_FLAG_NOMERGES, ns->queue);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, ns->queue);
+	queue_flag_set_unlocked(QUEUE_FLAG_VIRT_HOLE, ns->queue);
 	queue_flag_clear_unlocked(QUEUE_FLAG_IO_STAT, ns->queue);
 	ns->dev = dev;
 	ns->queue->queuedata = ns;
@@ -1801,7 +1806,6 @@ static struct nvme_ns *nvme_alloc_ns(struct nvme_dev *dev, unsigned nsid,
 	lbaf = id->flbas & 0xf;
 	ns->lba_shift = id->lbaf[lbaf].ds;
 	ns->ms = le16_to_cpu(id->lbaf[lbaf].ms);
-	blk_queue_max_segments(ns->queue, 1);
 	blk_queue_logical_block_size(ns->queue, 1 << ns->lba_shift);
 	if (dev->max_hw_sectors)
 		blk_queue_max_hw_sectors(ns->queue, dev->max_hw_sectors);