Re: [PATCH 02/20] blkio: Change CFQ to use CFS like queue time stamps
From: Corrado Zoccolo
Date: Thu Nov 05 2009 - 03:36:42 EST
Hi Vivek,
On Wed, Nov 4, 2009 at 11:25 PM, Vivek Goyal <vgoyal@xxxxxxxxxx> wrote:
> Thanks. I am looking at your patches right now. Got one question about
> following commit.
>
> ****************************************************************
> commit a6d44e982d3734583b3b4e1d36921af8cfd61fc0
> Author: Corrado Zoccolo <czoccolo@xxxxxxxxx>
> Date:   Mon Oct 26 22:45:11 2009 +0100
>
>     cfq-iosched: enable idling for last queue on priority class
>
>     cfq can disable idling for queues in various circumstances.
>     When workloads of different priorities are competing, if the higher
>     priority queue has idling disabled, lower priority queues may steal
>     its disk share. For example, in a scenario with an RT process
>     performing seeky reads vs a BE process performing sequential reads,
>     on an NCQ enabled hardware, with low_latency unset,
>     the RT process will dispatch only the few pending requests every full
>     slice of service for the BE process.
>
>     The patch solves this issue by always performing idle on the last
>     queue at a given priority class > idle. If the same process, or one
>     that can pre-empt it (so at the same priority or higher), submits a
>     new request within the idle window, the lower priority queue won't
>     dispatch, saving the disk bandwidth for higher priority ones.
>
>     Note: this doesn't touch the non_rotational + NCQ case (no hardware
>     to test if this is a benefit in that case).
> *************************************************************************
>
[snipping questions I answered in the combo mail]
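To make the rule in the commit message concrete, here is a rough
standalone sketch of the decision (the names and types are invented for
illustration; this is not the actual cfq-iosched.c code):

    /*
     * Rough sketch of "idle on the last queue of a priority class above
     * idle".  Illustrative only: types and names are invented, this is
     * not the kernel code.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum prio_class { CLASS_RT, CLASS_BE, CLASS_IDLE };

    struct queue {
            enum prio_class pclass;
            bool idle_window;       /* per-queue idling already enabled? */
    };

    /* busy[class] = number of queues of that class with pending work */
    static bool should_idle(const struct queue *q, const int busy[3])
    {
            if (q->pclass == CLASS_IDLE)
                    return false;   /* never idle for the idle class */
            if (q->idle_window)
                    return true;    /* usual per-queue idling */
            /*
             * New rule: even if idling was disabled for this queue, idle
             * anyway when it is the last busy queue of its class, so a
             * lower class cannot steal the disk in the meantime.
             */
            return busy[q->pclass] == 1;
    }

    int main(void)
    {
            struct queue rt_seeky = { CLASS_RT, false };
            int busy[3] = { 1, 1, 0 };  /* one busy RT queue, one busy BE queue */

            printf("idle for the last RT queue: %s\n",
                   should_idle(&rt_seeky, busy) ? "yes" : "no");
            return 0;
    }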
> On top of that, even if we don't idle for the RT reader, we will always
> preempt the BE reader immediately and get the disk. The only side effect
> is that on rotational media the disk head might have moved, bringing the
> overall throughput down.
That brings down throughput and also increases latency, not only on
rotational media, so you may not want to enable it on servers.
Without low_latency, I saw this bug in the current 'fairness' policy in
CFQ, and this patch fixes it.
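For context, the preemption mentioned above is class based: a new request
for a higher class queue takes over the disk right away, which is why the
only cost is the head movement. A minimal sketch of that check, again with
invented names rather than the real CFQ code:

    /*
     * Rough sketch of class-based preemption: a new request for a higher
     * class queue preempts the currently active queue.  Illustrative
     * only, not the kernel code.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum prio_class { CLASS_RT, CLASS_BE, CLASS_IDLE }; /* lower value = higher class */

    struct queue {
            enum prio_class pclass;
    };

    static bool should_preempt(const struct queue *active, const struct queue *newq)
    {
            return newq->pclass < active->pclass;
    }

    int main(void)
    {
            struct queue be_seq  = { CLASS_BE };    /* sequential BE reader */
            struct queue rt_rand = { CLASS_RT };    /* seeky RT reader */

            printf("new RT request preempts the active BE queue: %s\n",
                   should_preempt(&be_seq, &rt_rand) ? "yes" : "no");
            return 0;
    }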
>
> So my concern is that with this idling on the last queue, we are targeting
> the fairness issue for random seeky readers with think time within 8ms.
> That can easily be solved by setting low_latency=1. Why are we going to
> this length then?
Maybe on the servers where you want to run RT tasks you don't want the
aforementioned drawbacks of low_latency.
Since I was going to change the implications of low_latency in the
following patches, I fixed the 'bug' here first, so that I could change
the implementation later without reintroducing it (the bug had been
present for a long time before being fixed by the introduction of
low_latency).
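For reference, low_latency changes the per-queue idle window decision
roughly as in the simplified sketch below (illustrative only, not the real
cfq-iosched.c logic; 8 ms is the default slice_idle):

    /*
     * Simplified sketch of how low_latency affects the idle window
     * decision for a seeky queue.  Illustrative only, not the real
     * cfq-iosched.c logic.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define SLICE_IDLE_MS 8             /* default slice_idle */

    static bool idle_window_enabled(bool seeky, unsigned int think_time_ms,
                                    bool low_latency)
    {
            /* Without low_latency, a seeky queue never gets the idle window. */
            if (seeky && !low_latency)
                    return false;
            /* Otherwise idle only if the queue thinks for less than the idle slice. */
            return think_time_ms <= SLICE_IDLE_MS;
    }

    int main(void)
    {
            printf("seeky, 5 ms think time, low_latency=0: %s\n",
                   idle_window_enabled(true, 5, false) ? "idle" : "no idle");
            printf("seeky, 5 ms think time, low_latency=1: %s\n",
                   idle_window_enabled(true, 5, true) ? "idle" : "no idle");
            return 0;
    }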
Thanks
Corrado
>
> Thanks
> Vivek
>