[PATCH 12/12] xfs: eagerly remove vmap mappings to avoid upsetting Xen

From: Jeremy Fitzhardinge
Date: Mon Oct 15 2007 - 17:50:51 EST


XFS leaves stray mappings around when it vmaps memory to make it
virtually contiguous. This upsets Xen if one of those pages is later
recycled into a pagetable, since Xen then finds an extra writable
mapping of the page.

This patch solves the problem in a brute-force way, by making XFS
always eagerly unmap its mappings. David Chinner says this shouldn't
have any performance impact on filesystems with default block sizes;
it will only affect filesystems with large block sizes.
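
For context, free_address() normally batches these unmaps: it queues
the vmap address on a small free list and leaves the actual vunmap()
to purge_addresses(), which runs later from the buffer allocation
path. Roughly (a simplified sketch of the existing deferred path in
fs/xfs/linux-2.6/xfs_buf.c; names and locking are approximate, not a
verbatim copy):

typedef struct a_list {
	void		*vm_addr;
	struct a_list	*next;
} a_list_t;

static a_list_t	*as_free_head;
static DEFINE_SPINLOCK(as_lock);

/* Queue the address for a later, batched vunmap(). */
static void
free_address(
	void		*addr)
{
	a_list_t	*aentry;

	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
	if (likely(aentry)) {
		spin_lock(&as_lock);
		aentry->next = as_free_head;
		aentry->vm_addr = addr;
		as_free_head = aentry;
		spin_unlock(&as_lock);
	} else {
		/* No memory for a list node: unmap immediately. */
		vunmap(addr);
	}
}

/* Drain the list, tearing down the deferred mappings. */
static void
purge_addresses(void)
{
	a_list_t	*aentry, *old;

	if (as_free_head == NULL)
		return;

	spin_lock(&as_lock);
	aentry = as_free_head;
	as_free_head = NULL;
	spin_unlock(&as_lock);

	while ((old = aentry) != NULL) {
		vunmap(aentry->vm_addr);
		aentry = aentry->next;
		kfree(old);
	}
}

Until purge_addresses() runs, the kernel still holds a live writable
mapping of pages XFS is already done with, which is exactly the window
that trips up Xen's pagetable pinning. The CONFIG_XEN hunk below
simply short-circuits the batching and calls vunmap() right away.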

Signed-off-by: Jeremy Fitzhardinge <jeremy@xxxxxxxxxxxxx>
Cc: David Chinner <dgc@xxxxxxx>
Cc: Nick Piggin <nickpiggin@xxxxxxxxxxxx>
Cc: XFS masters <xfs-masters@xxxxxxxxxxx>
Cc: Stable kernel <stable@xxxxxxxxxx>
Cc: Morten Bøgeskov <xen-users@xxxxxxxxxxxxxxxxxx>
Cc: Mark Williamson <mark.williamson@xxxxxxxxxxxx>

---
fs/xfs/linux-2.6/xfs_buf.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

===================================================================
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -186,6 +186,19 @@ free_address(
 	void		*addr)
 {
 	a_list_t	*aentry;
+
+#ifdef CONFIG_XEN
+	/*
+	 * Xen needs to be able to make sure it can get an exclusive
+	 * RO mapping of pages it wants to turn into a pagetable. If
+	 * a newly allocated page is also still being vmap()ed by xfs,
+	 * it will cause pagetable construction to fail. This is a
+	 * quick workaround to always eagerly unmap pages so that Xen
+	 * is happy.
+	 */
+	vunmap(addr);
+	return;
+#endif
 
 	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
 	if (likely(aentry)) {

--
