Re: [PATCH 3/3] virtio-blk: Add bio-based IO path for virtio-blk
From: Rusty Russell
Date: Mon Jul 02 2012 - 18:52:38 EST
On Mon, 02 Jul 2012 10:45:05 +0800, Asias He <asias@xxxxxxxxxx> wrote:
> On 07/02/2012 07:54 AM, Rusty Russell wrote:
> > Confused. So, without merging we get 6K exits (per second?). How many
> > do we get when we use the request-based IO path?
> Sorry for the confusion. The numbers were collected from the
> request-based IO path, where the guest block layer can merge requests.
> With the same workload in the guest, the guest fires 200K requests to
> the host with merges enabled (echo 0 > /sys/block/vdb/queue/nomerges),
> while it fires 4000K requests to the host with merges disabled
> (echo 2 > /sys/block/vdb/queue/nomerges). This shows that merging in
> the block layer greatly reduces the total number of requests fired to
> the host (4000K / 200K = 20).
> With merges enabled in the guest (echo 0 >
> /sys/block/vdb/queue/nomerges), the guest fires 200K requests to the
> host, and the host fires only 6K interrupts in total for those 200K
> requests. This shows the interrupt coalescing ratio (200K / 6K ≈ 33).
OK, got it! Guest merging cuts requests by a factor of 20. EVENT_IDX
cuts interrupts by a factor of 33.
> > If your device is slow, then you won't be able to make many requests per
> > second: why worry about exit costs?
> If a device is slow, merging combines more requests and reduces the
> total number of requests sent to the host. That saves exit costs, no?
Sure, our guest merging might save us 100x as many exits as no merging.
But since we're not doing many requests, does it matter?
Ideally we'd merge requests only if the device queue is full.
> > If your device is fast (eg. ram),
> > you've already shown that your patch is a win, right?
> Yes. Both on a ramdisk and on fast SSD devices (e.g. FusionIO).