Re: Time sliced CFQ io scheduler

From: Jens Axboe
Date: Fri Dec 03 2004 - 05:50:25 EST


On Fri, Dec 03 2004, Prakash K. Cheemplavam wrote:
> Jens Axboe wrote:
> >On Fri, Dec 03 2004, Jens Axboe wrote:
> >
> >>Funky. It looks like another case of the io scheduler being at the wrong
> >>place - if raid sends dependent reads to different drives, it screws up
> >>the io scheduling. The right way to fix that would be to do the io
> >>scheduling before raid (the reverse of what we do now), but that is a
> >>lot of work. A
> >>hack would be to try and tie processes to one md component for periods
> >>of time, sort of like cfq slicing.
> >
> >
> >It makes sense to split the slice period for sync and async requests,
> >since async io usually gets a lot of requests queued in a short
> >period of time. Might even make sense to introduce a slice_rq value as
> >well, limiting the number of requests queued in a given slice.
> >
> >But at least this patch lets you set slice_sync and slice_async
> >separately, if you want to experiment.
>
> Any idea which values I should try?

Just see if the default ones work (or how they work :-)
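
Something like the below should let you tweak them, assuming the patch
exposes the tunables under /sys/block/<dev>/queue/iosched/ like the
other cfq knobs (hda and the values here are only placeholders, not
recommended settings):

    # assumed sysfs locations; pick your own values to experiment
    echo 100 > /sys/block/hda/queue/iosched/slice_sync
    echo 40 > /sys/block/hda/queue/iosched/slice_async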

> In general, I rather have the impression that the problem I am
> experiencing is not caused by the io scheduler alone, otherwise why
> would all of them show the same problem?

It is not, but some io schedulers perform better than others.

> BTW, I just did my little test on the ide drive and it shows the same
> problem, so it is not sata / libata related.

The single reader/writer case works fine here for me, with each getting
about half the bandwidth. Please show some vmstats for this case, too.
Right now I'm not terribly interested in problems with raid alone, as I
can poke holes in that. If the single drive case is correct, then we can
focus on raid.
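
If it helps, here is a rough way to reproduce that case and capture the
vmstat output. Just a sketch, assuming hda is the drive under test and
/mnt lives on it (device, mount point, and sizes are placeholders):

    # log vmstat while one reader and one writer run concurrently
    vmstat 1 > vmstat.log &
    vmstat_pid=$!
    # reader hits the raw device, writer goes through the fs
    dd if=/dev/hda of=/dev/null bs=1M count=1024 &
    dd if=/dev/zero of=/mnt/testfile bs=1M count=1024
    wait $!                 # wait for the background reader
    kill $vmstat_pid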

--
Jens Axboe
