[PATCH] mm: Avoid resetting wb_start after each writeback round

From: Jan Kara
Date: Tue Aug 03 2010 - 06:35:02 EST


WB_SYNC_NONE writeback is done in rounds of 1024 pages so that we don't write
out some huge inode for too long while starving writeout of other inodes. To
avoid livelocks, we record the time we started writeback in wbc->wb_start and
do not write out inodes that were dirtied after this time. But currently,
writeback_inodes_wb() resets wb_start each time it is called, thus effectively
invalidating this logic and making any WB_SYNC_NONE writeback prone to
livelocks.

This patch makes sure wb_start is set only once, when writeback starts.

Signed-off-by: Jan Kara <jack@xxxxxxx>
---
fs/fs-writeback.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 6bdc924..aa59394 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -530,7 +530,8 @@ void writeback_inodes_wb(struct bdi_writeback *wb,
{
int ret = 0;

- wbc->wb_start = jiffies; /* livelock avoidance */
+ if (!wbc->wb_start)
+ wbc->wb_start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
if (!wbc->for_kupdate || list_empty(&wb->b_io))
queue_io(wb, wbc->older_than_this);
@@ -559,7 +560,6 @@ static void __writeback_inodes_sb(struct super_block *sb,
{
WARN_ON(!rwsem_is_locked(&sb->s_umount));

- wbc->wb_start = jiffies; /* livelock avoidance */
spin_lock(&inode_lock);
if (!wbc->for_kupdate || list_empty(&wb->b_io))
queue_io(wb, wbc->older_than_this);
@@ -625,6 +625,7 @@ static long wb_writeback(struct bdi_writeback *wb,
wbc.range_end = LLONG_MAX;
}

+ wbc.wb_start = jiffies; /* livelock avoidance */
for (;;) {
/*
* Stop writeback when nr_pages has been consumed
--
1.6.0.2

