Re: latency scheduling benchmarks of audio playing tasks during

Andrea Arcangeli (andrea@suse.de)
Mon, 21 Jun 1999 13:16:36 +0200 (CEST)


On Mon, 21 Jun 1999, Benno Senoner wrote:

>--------------+-------------+---------------+-------------+--------------+
>buffer size   |proc (top)   | disk write    | disk copy   | disk read    |
>--------------+-------------+---------------+-------------+--------------+
>2x1024(11.6ms)| 15.1ms (307)| 9610.0ms (221)| 20.5ms (  2)| 867.3ms ( 10)|
                               ^^^^^^   ^^^^^
Could you explain the meaning of the two fields? Which benchmark are you
using to generate the numbers?
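
For reference, numbers like the ones in the table can be collected with a
loop that sleeps for one audio-fragment period and records how late the
task gets rescheduled. A minimal sketch (my own illustration, assuming
"2x1024" means two 1024-byte fragments, i.e. ~11.6ms of 44.1kHz/16-bit
stereo audio; this is not necessarily the actual harness used):

/*
 * Sketch of a wakeup-latency probe (illustration only, not necessarily
 * the benchmark that produced the table above): sleep for one fragment
 * period and record how much later than requested the task actually
 * gets rescheduled.  With 2x1024 bytes of 44.1kHz/16-bit stereo audio
 * the player has ~11.6ms of slack, so any wakeup delay beyond that
 * means an audible dropout.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

static double now_ms(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(void)
{
	const double period_ms = 11.6;	/* one 2x1024-byte buffer */
	double t0, late, max_late = 0.0;
	int i;

	for (i = 0; i < 5000; i++) {	/* ~1 minute of fake playback */
		t0 = now_ms();
		usleep(11600);		/* stand-in for one fragment write */
		late = now_ms() - t0 - period_ms;
		if (late > max_late)
			max_late = late;
	}
	printf("max wakeup latency: %.1fms\n", max_late);
	return 0;
}

Run concurrently with the disk load (cp, a big write, a big read) it
would produce one worst-case number per column of the table.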

>the andrea patch behaves very well during disk copy (cp file1 file2)
>operations, but on write-only or read-only operations it gives extremely
>high latencies.

This sounds a bit weird to me. A copy always implies a read and a
write... so it should be the slowest of the three. This makes me think
the numbers got fooled by a far different working set across the
benchmark runs (for example, if the file was still cached from an
earlier run). Is this possible?

If not, I would like you to try without the wait_for_IO hack.

To do that, first apply this patch:

ftp://ftp.suse.com/pub/people/andrea/kernel-patches/2.2.10_andrea-VM8.gz

Then apply the diff below on top of 2.2.10_andrea-VM8:

Index: linux//fs/buffer.c
===================================================================
RCS file: /var/cvs/linux/fs/buffer.c,v
retrieving revision 1.1.1.13.2.9
diff -u -r1.1.1.13.2.9 buffer.c
--- linux//fs/buffer.c 1999/06/19 13:42:05 1.1.1.13.2.9
+++ linux//fs/buffer.c 1999/06/21 11:01:56
@@ -126,8 +126,6 @@
 	(size_buffers_type[BUF_DIRTY] >> PAGE_SHIFT > \
 	 nr_free_pages+(buffermem>>(PAGE_SHIFT+1)))
 
-atomic_t wait_for_IO = ATOMIC_INIT(0);
-
 /*
  * Rewrote the wait-routines to use the "new" wait-queue functionality,
  * and getting rid of the cli-sti pairs. The wait-queue routines still
@@ -144,7 +142,6 @@
 
 	bh->b_count++;
 	wait.task = tsk;
-	atomic_inc(&wait_for_IO);
 	add_wait_queue(&bh->b_wait, &wait);
 	do {
 		tsk->state = TASK_UNINTERRUPTIBLE;
@@ -155,7 +152,6 @@
 	} while (buffer_locked(bh));
 	tsk->state = TASK_RUNNING;
 	remove_wait_queue(&bh->b_wait, &wait);
-	atomic_dec(&wait_for_IO);
 	bh->b_count--;
 }
 
@@ -1701,8 +1697,6 @@
 		bh->b_count++;
 		ll_rw_block(WRITE, 1, &bh);
 		bh->b_count--;
-		if (atomic_read(&wait_for_IO))
-			wait_on_buffer(bh);
 		if (bdflush_tsk->need_resched)
 			schedule();
 		next->b_count--;
Index: linux//include/linux/fs.h
===================================================================
RCS file: /var/cvs/linux/include/linux/fs.h,v
retrieving revision 1.1.1.10.2.2
diff -u -r1.1.1.10.2.2 fs.h
--- linux//include/linux/fs.h 1999/06/09 19:36:59 1.1.1.10.2.2
+++ linux//include/linux/fs.h 1999/06/21 11:02:04
@@ -25,8 +25,6 @@
 
 struct poll_table_struct;
 
-extern atomic_t wait_for_IO;
-
 /*
  * It's silly to have NR_OPEN bigger than NR_FILE, but I'll fix
  * that later. Anyway, now the file code is no longer dependent
Index: linux//mm/filemap.c
===================================================================
RCS file: /var/cvs/linux/mm/filemap.c,v
retrieving revision 1.1.1.12.2.7
diff -u -r1.1.1.12.2.7 filemap.c
--- linux//mm/filemap.c 1999/06/20 14:19:14 1.1.1.12.2.7
+++ linux//mm/filemap.c 1999/06/21 11:02:12
@@ -362,7 +362,6 @@
 	struct wait_queue wait;
 
 	wait.task = tsk;
-	atomic_inc(&wait_for_IO);
 	add_wait_queue(&page->wait, &wait);
 	do {
 		tsk->state = TASK_UNINTERRUPTIBLE;
@@ -373,7 +372,6 @@
 	} while (PageLocked(page));
 	tsk->state = TASK_RUNNING;
 	remove_wait_queue(&page->wait, &wait);
-	atomic_dec(&wait_for_IO);
 }
 
 #if 0
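
For the record, assuming the diff above is saved to a file (the name
below is only an example), the sequence from the top of a clean 2.2.10
tree should be something like:

	cd /usr/src/linux
	zcat 2.2.10_andrea-VM8.gz | patch -p1
	patch -p1 < remove-wait_for_IO.diff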

The wait_for_IO hack currently decreases write throughput, but it also
keeps you from stalling for very long times waiting for I/O completion.
It's an ugly band-aid of course, but it's better than nothing right now.
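
To spell out the mechanism: wait_for_IO counts the tasks currently
sleeping in the buffer and page wait loops that the diff touches, and
while it is nonzero bdflush writes each dirty buffer synchronously
instead of just queueing it. In essence (these are the lines the diff
removes, with my comments added):

	ll_rw_block(WRITE, 1, &bh);	/* queue the buffer for writing */
	if (atomic_read(&wait_for_IO))	/* somebody sleeping on I/O? */
		wait_on_buffer(bh);	/* then wait for this write too */

The idea is presumably that a throttled bdflush cannot flood the
request queue ahead of the sleeper, which is why removing the hack
should restore write throughput but may bring back the long stalls.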

Andrea Arcangeli
