[PATCH 3/4] fs: Avoid data corruption with blocksize < pagesize
From: Jan Kara
Date: Tue Mar 17 2009 - 14:41:51 EST
Assume the following situation:
A filesystem with blocksize < pagesize - say blocksize = 1024,
pagesize = 4096. File 'f' already has its first four blocks allocated.
(Lines starting with "state:" show the state of the buffers in the page -
m = mapped, u = uptodate, d = dirty.)
process 1:                          process 2:
write to 'f' bytes 0 - 1024
  state: |mud,-,-,-|, page dirty
                                    write to 'f' bytes 1024 - 4096:
                                      __block_prepare_write() maps blocks
                                      state: |mud,m,m,m|, page dirty
                                      we fail to copy data -> copied = 0
                                      block_write_end() does nothing
                                      page gets unlocked
writepage() is called on the page
block_write_full_page() writes buffers with garbage
This patch fixes the problem by skipping !uptodate buffers in
block_write_full_page().
CC: Nick Piggin <npiggin@xxxxxxx>
Signed-off-by: Jan Kara <jack@xxxxxxx>
---
fs/buffer.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 9f69741..22c0144 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1774,7 +1774,12 @@ static int __block_write_full_page(struct inode *inode, struct page *page,
} while (bh != head);
do {
- if (!buffer_mapped(bh))
+ /*
+ * Parallel write could have already mapped the buffers but
+ * it then had to restart before copying in new data. We
+ * must avoid writing garbage so just skip the buffer.
+ */
+ if (!buffer_mapped(bh) || !buffer_uptodate(bh))
continue;
/*
* If it's a fully non-blocking write attempt and we cannot
--
1.6.0.2