[yet another attempt] Re: 2.0.31-pre5: Couldn't get a free page..... (fwd)

Gadi Oxman (gadio@netvision.net.il)
Fri, 15 Aug 1997 19:05:27 +0400 (IDT)


Here is another attempt to address the "couldn't get a free page" problem.
I think that the basic problem during "bonnie" is:

- We are being flooded with dirty buffers.

- The buffer cache can't recycle unused pages from the page cache
directly. Instead, grow_buffers() can only reclaim the small
"completely free pages" pool, and wait for kswapd() to provide
the dynamic "refill free pages pool" response.

==> we should sleep in grow_buffers(), to allow kswapd() to
refill the free pages pool.
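
To make the second point concrete, here is a condensed sketch of the
allocation step inside grow_buffers() (illustrative only -- the
grow_buffers_sketch name and the exact body are mine, not a quote of
fs/buffer.c):

/*
 * Condensed, illustrative sketch -- not the real grow_buffers() body.
 */
static int grow_buffers_sketch(int pri, int size)
{
	unsigned long page;

	/* With pri == GFP_ATOMIC this never sleeps and never reclaims
	 * unused page-cache pages; it can only hand back a page that is
	 * already sitting on the free pages pool. */
	page = get_free_page(pri);
	if (!page)
		return 0;	/* pool exhausted -- caller must back off */

	/* ... carve the page into "size"-byte buffers, link them into
	 * the free buffer list, account for the memory, ... */
	return 1;
}

Nothing in that path waits for kswapd(), so under a dirty-buffer flood
the GFP_ATOMIC attempts keep failing; the only way to make progress is
for the caller to back off and sleep, which is what the patch below does.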

The following patch will:

- avoid the GFP_ATOMIC call to grow_buffers(). Instead, we attempt to
  get clean buffers by sleeping and waiting for kflushd() to flush the
  dirty buffers.

- call wakeup_bdflush(1) more often. This serves two purposes:

  - the above-mentioned "sleep and wait for kswapd()" in place of the
    GFP_ATOMIC grow_buffers() attempt.
  - waiting for the dirty buffers to be flushed.

- avoid calling wake_up(&bdflush_done) prematurely. Instead of calling
  it on each kflushd() cycle, we wake the sleeping process only when
  kflushd() itself decides to go to sleep (dirty buffers < 60%).
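
For context, wakeup_bdflush() is the caller side of that handshake; the
sketch below is my recollection of its 2.0.x shape (simplified, and the
guard that keeps kflushd() from sleeping on itself is omitted), not a
verbatim quote of buffer.c:

/*
 * Approximate sketch of wakeup_bdflush() -- simplified, not verbatim.
 * The point is that a wait == 1 caller goes to sleep on bdflush_done
 * and stays asleep until kflushd() wakes it.
 */
void wakeup_bdflush(int wait)
{
	wake_up(&bdflush_wait);			/* kick kflushd() */
	if (wait) {
		run_task_queue(&tq_disk);	/* start the queued disk I/O */
		sleep_on(&bdflush_done);	/* block until kflushd() signals */
	}
}

With the old code, wake_up(&bdflush_done) fired on every kflushd()
cycle, so a wait == 1 caller could be released while plenty of dirty
buffers were still pending; moving the wake_up() into the branch where
kflushd() is about to sleep means the caller is only released once the
dirty-buffer ratio has dropped below nfract.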

Gadi

--- vpre-2.0.31-6/linux/fs/buffer.c Fri Aug 15 10:22:41 1997
+++ linux/fs/buffer.c Fri Aug 15 11:07:34 1997
@@ -672,6 +672,7 @@
};
}

+#if 0
/*
* In order to protect our reserved pages,
* return now if we got any buffers.
@@ -682,6 +683,8 @@
/* and repeat until we find something good */
if (!grow_buffers(GFP_ATOMIC, size))
wakeup_bdflush(1);
+#endif
+ wakeup_bdflush(1);

/* decrease needed even if there is no success */
needed -= PAGE_SIZE;
@@ -1719,11 +1722,11 @@
continue;
}
run_task_queue(&tq_disk);
- wake_up(&bdflush_done);

/* If there are still a lot of dirty buffers around, skip the sleep
and flush some more */
if(nr_buffers_type[BUF_DIRTY] <= nr_buffers * bdf_prm.b_un.nfract/100) {
+ wake_up(&bdflush_done);
current->signal = 0;
interruptible_sleep_on(&bdflush_wait);
}