Re: [RFC]cfq-iosched: quantum check tweak

From: Shaohua Li
Date: Sun Dec 27 2009 - 22:36:11 EST


On Fri, Dec 25, 2009 at 05:44:40PM +0800, Corrado Zoccolo wrote:
> On Fri, Dec 25, 2009 at 10:10 AM, Shaohua Li <shaohua.li@xxxxxxxxx> wrote:
> > Currently a queue can only dispatch up to 4 requests if there are other queues.
> > This isn't optimal; the device can handle more requests, for example AHCI can
> > handle 31 requests. I can understand the limit is for fairness, but we could
> > do some tweaks:
> > 1. if the queue still has a lot of slice left, it sounds like we could ignore the limit
> ok. You can even scale the limit proportionally to the remaining slice
> (see below).
I can't follow the scaling you suggest below. cfq_slice_used_soon() means the
dispatched requests can finish before the slice is used up, so other queues will
not be impacted. I thought/hoped one cfq_slice_idle period would be enough to
finish the dispatched requests.
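
To spell out what I mean by cfq_slice_used_soon(), here is an untested sketch;
the field names (dispatched, slice_end, cfq_slice_idle) come from the existing
cfq code, and the exact estimate is open for discussion:

/*
 * Untested sketch: guess whether the requests this queue has already
 * dispatched will eat up the rest of its time slice.  Each in-flight
 * request is assumed to need roughly one cfq_slice_idle to complete.
 */
static inline bool cfq_slice_used_soon(struct cfq_data *cfqd,
                                       struct cfq_queue *cfqq)
{
        /* the queue hasn't finished any request yet, nothing to estimate */
        if (cfq_cfqq_slice_new(cfqq))
                return true;

        if (time_after(jiffies + cfqd->cfq_slice_idle * cfqq->dispatched,
                       cfqq->slice_end))
                return true;

        return false;
}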

> > 2. we could keep the check only when cfq_latency is on. Users who don't care
> > about latency should be happy to have the device pipeline fully used.
> I wouldn't overload low_latency with this meaning. You can obtain the
> same by setting the quantum to 32.
Since this impacts fairness, I naturally thought we could use low_latency. I'll remove
the check in the next post.
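
For reference, a user who wants the device fully pipelined can already raise the
tunable at runtime through sysfs, e.g. (sda is just an example device name):

        echo 32 > /sys/block/sda/queue/iosched/quantum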

> > I have a test with random direct I/O from two threads, each issuing 32 requests at a time:
> > without patch: 78MB/s
> > with tweak 1: 138MB/s
> > with both tweaks and latency disabled: 156MB/s
>
> Please, test also with competing seq/random(depth1)/async workloads,
> and measure also introduced latencies.
depth1 should be okay: if the device can only take one request at a time, it won't
pull more requests from the I/O scheduler.
I'll do more checks. The time threshold is hard to choose (I chose cfq_slice_idle here)
to balance throughput and latency. Do we have criteria to measure this? If the patch
passes such tests, we can consider its latency acceptable.
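
To make the intended behaviour concrete, here is a rough, untested sketch of how
the quantum check could be relaxed; this is not the final patch, and
cfq_may_dispatch(), cfq_quantum and busy_queues are simply the existing cfq names:

/*
 * Untested sketch: let a queue run past the usual dispatch limit as
 * long as its remaining slice is big enough that other queues will
 * not be delayed; otherwise fall back to the strict quantum.
 */
static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
        unsigned int max_dispatch = cfqd->cfq_quantum;

        if (cfq_class_idle(cfqq))
                max_dispatch = 1;

        if (cfqq->dispatched < max_dispatch)
                return true;

        /* alone on the device, no fairness concern */
        if (cfqd->busy_queues == 1)
                return true;

        /* over the limit: only continue while the slice is not nearly gone */
        return !cfq_slice_used_soon(cfqd, cfqq);
}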

Thanks,
Shaohua