[PATCH] keep lru_list_lock around osync_buffers_list

From: Christoph Hellwig (hch@sgi.com)
Date: Mon Dec 02 2002 - 23:47:54 EST


osync_buffers_list has exactly one caller (and it's static to
buffer.c), so we can keep lru_list_lock held across the call instead
of having the caller drop the lock only for osync_buffers_list to
take it straight back.

--- 1.77/fs/buffer.c Sun Aug 25 21:28:55 2002
+++ edited/fs/buffer.c Mon Dec 2 21:48:11 2002
@@ -869,8 +869,8 @@
                 spin_lock(&lru_list_lock);
         }
         
-        spin_unlock(&lru_list_lock);
         err2 = osync_buffers_list(list);
+        spin_unlock(&lru_list_lock);
 
         if (err)
                 return err;
@@ -887,6 +887,8 @@
  * you dirty the buffers, and then use osync_buffers_list to wait for
  * completion. Any other dirty buffers which are not yet queued for
  * write will not be flushed to disk by the osync.
+ *
+ * Expects lru_list_lock to be held at entry.
  */
 static int osync_buffers_list(struct list_head *list)
 {
@@ -894,8 +896,6 @@
         struct list_head *p;
         int err = 0;
 
-        spin_lock(&lru_list_lock);
-
  repeat:
         list_for_each_prev(p, list) {
                 bh = BH_ENTRY(p);
@@ -911,7 +911,6 @@
                 }
         }
 
-        spin_unlock(&lru_list_lock);
         return err;
 }
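
[Not part of the patch -- illustration only.]  Below is a minimal,
compilable userspace analogue of the convention the patch establishes,
with a pthread mutex standing in for lru_list_lock.  The names
(sync_list, wait_one, items_pending) are made up for the sketch and do
not come from buffer.c.  The callee may still drop and re-take the
lock internally around a sleep, much as osync_buffers_list does around
wait_on_buffer(); only the unconditional lock/unlock at its entry and
exit goes away.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int items_pending = 3;   /* stand-in for buffers still in flight */

/* Expects list_lock to be held at entry; returns with it held. */
static int wait_one(void)
{
        while (items_pending > 0) {
                /* Drop the lock only around the "wait", the way the
                 * real code drops lru_list_lock while sleeping on a
                 * locked buffer. */
                pthread_mutex_unlock(&list_lock);
                usleep(1000);                   /* pretend to wait for I/O */
                pthread_mutex_lock(&list_lock);
                items_pending--;
        }
        return 0;
}

/* The single caller: takes the lock once, keeps it held across the
 * helper call, and releases it exactly once afterwards. */
static int sync_list(void)
{
        int err2;

        pthread_mutex_lock(&list_lock);
        /* ... queue writes and wait for them here ... */
        err2 = wait_one();              /* lock stays held across the call */
        pthread_mutex_unlock(&list_lock);
        return err2;
}

int main(void)
{
        printf("sync_list() returned %d\n", sync_list());
        return 0;
}

Build with something like: cc -pthread -o sketch sketch.c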
 


