Re: [PATCH] block/elevator updates + deadline i/o scheduler

From: Adam Kropelin (akropel1@rochester.rr.com)
Date: Fri Jul 26 2002 - 20:22:31 EST


On Fri, Jul 26, 2002 at 02:02:48PM +0200, Jens Axboe wrote:
> The 2nd version of the deadline i/o scheduler is ready for some testing.
...
> Finally, I've done some testing on it. No testing on whether this really
> works well in real life (that's what I want testers to do), and no
> testing on benchmark performance changes etc. What I have done is
> beat-up testing, making sure it works without corrupting your data. I'm
> fairly confident that it does. Most testing was on SCSI (naturally),
> however IDE has also been tested briefly.

Hi, Jens,

I'm interested in i/o performance (though more in throughput than latency),
so I tried out the patches. I needed one extra bit (patch below) to get a
compile on 2.5.28; I assume the old elv_queue_empty() macro in elevator.h
now clashes with the function version your patch introduces.

My performance testing showed essentially no change with the deadline i/o
scheduler, but there are many possible reasons for that, including user
stupidity. Here's an outline of what I tried and what I observed:

The concept of deadline scheduling makes me think "latency" rather than
"throughput", so I tried to run tests involving small i/os in the presence
of large streaming reads. I expected to see an improvement in the time taken
to service the small i/os. (Obviously, let me know if my assumptions are all
washed up ;) At the same time I wanted to see whether there was any impact,
negative or positive, on the streaming workload.
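
In case it helps, a follow-up run could time each small read individually
instead of just the overall wall clock. A rough sketch of what I have in
mind (the device and paths are placeholders, not my real setup):

    #!/bin/sh
    # Time individual small reads while a large streaming read
    # runs in the background.
    dd if=/dev/hdc of=/dev/null bs=1024k count=2048 &
    streamer=$!
    for f in /usr/src/linux/fs/*.c; do
        time cat "$f" > /dev/null    # per-file service time (on stderr)
    done
    kill $streamer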

Test #1: Simultaneously untarring two kernel trees (untar only, no gzip). The
idea here was that reading the source tarball is essentially a streaming read,
while writing the output is a large number of relatively small writes. There
was less than 3 seconds' difference in overall time between stock 2.5.28 and
-deadline.
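
For reference, Test #1 was along these lines (a sketch; the tarball name and
target paths are illustrative, not my actual ones):

    #!/bin/sh
    # Two concurrent untars of an already-gunzipped kernel
    # tarball into separate directories, timed as a whole.
    time sh -c '
        tar xf linux-2.5.28.tar -C /test/a &
        tar xf linux-2.5.28.tar -C /test/b &
        wait'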

Test #2: Same as Test #1, with the addition of reading each *.c file in another
kernel tree while the untarring goes on in the background. The idea here was
to see if the large set of small reads in the read-only workload would benefit.
Again, the difference was only a few seconds over several minutes.
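
The read-only part was roughly the following (again, the path is
illustrative), run while the untars from Test #1 were going:

    #!/bin/sh
    # Read every .c file in a third, pre-existing kernel tree.
    time find /test/c -name '*.c' | xargs cat > /dev/null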

Part of the issue here may be my test setup: all i/o was to the same (rather
slow) IDE disk. (The good news is that 2.5 IDE did not blow up in my face
under all this stress.) The machine is a 2-CPU SMP PPro.

If these results are totally bogus, feel free to ignore them. If you have ideas
for other tests I should run, let me know and I'll oblige.

--Adam

--- elevator.h.orig Fri Jul 26 20:59:06 2002
+++ elevator.h Fri Jul 26 20:59:15 2002
@@ -93,10 +93,4 @@
 #define ELEVATOR_FRONT_MERGE 1
 #define ELEVATOR_BACK_MERGE 2
 
-/*
- * will change once we move to a more complex data structure than a simple
- * list for pending requests
- */
-#define elv_queue_empty(q) list_empty(&(q)->queue_head)
-
 #endif
