Re: Time sliced CFQ io scheduler
From: Nick Piggin
Date: Wed Dec 08 2004 - 02:11:14 EST
On Wed, 2004-12-08 at 07:55 +0100, Jens Axboe wrote:
> On Tue, Dec 07 2004, Andrew Morton wrote:
> > Andrea Arcangeli <andrea@xxxxxxx> wrote:
> > >
> > > The desktop is ok with "as" simply because it's
> > > normally optimal to stop writes completely
> >
> > AS doesn't "stop writes completely". With the current settings it
> > apportions about 1/3 of the disk's bandwidth to writes.
> >
> > This thing Jens has found is for direct-io writes only. It's a bug.
>
> Indeed. It's a special case one, but nasty for that case.
>
> > The other problem with AS is that it basically doesn't work at all with a
> > TCQ depth greater than four or so, and lots of people blindly look at
> > untuned SCSI benchmark results without realising that. If a distro is
>
> That's pretty easy to fix. I added something like that to cfq, and it's
> not a lot of lines of code (grep for rq_in_driver and cfq_max_depth).
>
> > always selecting CFQ then they've probably gone and deoptimised all their
> > IDE users.
>
> Andrew, AS has other issues, it's not a case of AS always being faster
> at everything.
>
> > AS needs another iteration of development to fix these things. Right now
> > it's probably the case that we need CFQ or deadline for servers and AS for
> > desktops. That's awkward.
>
> Currently I think the time sliced cfq is the best all around. There's
> still a few kinks to be shaken out, but generally I think the concept is
> sounder than AS.
>
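As an aside, the rq_in_driver/cfq_max_depth depth cap Jens points at above
boils down to something like the sketch below. This is only an illustration:
apart from those two names, everything here is made up, and it is not the
actual cfq code.

/*
 * Hedged sketch of the depth-capping idea: stop handing requests to the
 * driver once a fixed number are already in flight, so a TCQ-capable
 * drive cannot swallow the whole queue and defeat the scheduler's
 * ordering.  Only rq_in_driver and cfq_max_depth come from the mail
 * above; the rest is hypothetical.
 */
#include <stdbool.h>

struct sched_data {
	int rq_in_driver;	/* requests issued to the driver, not yet completed */
	int cfq_max_depth;	/* tunable cap, e.g. 2-4 for TCQ-capable disks */
};

/* Checked by the dispatch path before issuing another request. */
static bool may_dispatch(struct sched_data *sd)
{
	return sd->rq_in_driver < sd->cfq_max_depth;
}

/* Bookkeeping on issue and completion keeps the in-flight count honest. */
static void on_issue(struct sched_data *sd)    { sd->rq_in_driver++; }
static void on_complete(struct sched_data *sd) { sd->rq_in_driver--; }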
But aren't you basically unconditionally allowing a 4ms idle time after
reads? The complexity of AS (other than all the work we had to do to get
the block layer to cope with it) is getting it to turn off at (mostly)
the right times. Other than that, it is basically the deadline
scheduler.
I could be wrong, but it looks like you'll just run into the same sorts
of performance problems as AS initially had.
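To make the concern concrete, the decision AS has to get right after every
read looks roughly like the sketch below. Again, this is just an
illustration with made-up names, not the real AS or CFQ code.

/*
 * Hedged sketch of the trade-off: after a read completes, the scheduler
 * may idle a few milliseconds hoping the same task issues a nearby read,
 * but it has to turn that idling off whenever history says it will not
 * pay.  All names here are hypothetical.
 */
#include <stdbool.h>

#define IDLE_WINDOW_MS	4	/* the "4ms idle time after reads" in question */

struct task_io_stats {
	bool exited;		/* task is gone, nothing further to wait for */
	unsigned mean_think_ms;	/* typical gap before this task's next read */
	unsigned mean_seek_kb;	/* typical distance of its reads from the last one */
};

/*
 * The hard part is not idling, it is knowing when NOT to: if the task has
 * exited, is too slow to submit, or seeks all over the disk anyway,
 * waiting only wastes bandwidth.
 */
static bool should_idle_after_read(const struct task_io_stats *t)
{
	if (t->exited)
		return false;
	if (t->mean_think_ms > IDLE_WINDOW_MS)
		return false;	/* the next read would arrive after we give up */
	if (t->mean_seek_kb > 1024)
		return false;	/* next read is probably far away; seeking now is no worse */
	return true;
}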