[PATCH v8 2/9] dax: fix conversion of holes to PMDs

From: Ross Zwisler
Date: Fri Jan 08 2016 - 00:30:51 EST


When we get a DAX PMD fault for a write, there may already be a number of
4k zero pages present for the same range, inserted earlier to service reads
from a hole. These 4k zero pages need to be unmapped from the VMAs and
removed from the struct address_space radix tree before the real DAX PMD
entry can be inserted.
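
For reference, those zero pages get into the radix tree via the read fault
path: dax_load_hole() installs a zeroed page cache page, roughly like the
following simplified sketch (illustrative only, not the exact kernel code):

	/*
	 * Illustrative sketch: a read fault over a hole installs a zeroed
	 * page cache page, which also adds a radix tree entry for that
	 * offset in the address_space.
	 */
	static int dax_load_hole(struct address_space *mapping,
				 struct page *page, struct vm_fault *vmf)
	{
		if (!page)
			page = find_or_create_page(mapping, vmf->pgoff,
						   GFP_KERNEL | __GFP_ZERO);
		if (!page)
			return VM_FAULT_OOM;

		vmf->page = page;
		return VM_FAULT_LOCKED;
	}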

The same situation exists for PTE faults, where it is handled by a
combination of unmap_mapping_range() to unmap the VMAs and
delete_from_page_cache() to remove the page from the address_space radix
tree, as sketched below.
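
Roughly (again a simplified sketch of the PTE fault path rather than the
verbatim code):

	/*
	 * Illustrative sketch: tear down a previously installed zero page
	 * before inserting the real DAX PTE, removing both the VMA mappings
	 * and the radix tree entry.
	 */
	if (page) {
		unmap_mapping_range(mapping, vmf->pgoff << PAGE_SHIFT,
				    PAGE_SIZE, 0);
		delete_from_page_cache(page);
		unlock_page(page);
		page_cache_release(page);
	}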

For PMD faults we do have a call to unmap_mapping_range() (protected by a
buffer_new() check), but nothing clears out the radix tree entry. The
buffer_new() check is also incorrect because the current ext4 and XFS
filesystem code will never return a buffer_head with BH_New set, even when
allocating new blocks over a hole. Instead, the filesystem zeroes the
blocks manually and returns a buffer_head with only BH_Mapped set.

Fix this situation by removing the buffer_new() check and adding a call to
truncate_inode_pages_range() to clear out the radix tree entries before we
insert the DAX PMD.

Signed-off-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
Reported-by: Dan Williams <dan.j.williams@xxxxxxxxx>
Tested-by: Dan Williams <dan.j.williams@xxxxxxxxx>
---
fs/dax.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 513bba5..5b84a46 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -589,6 +589,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
bool write = flags & FAULT_FLAG_WRITE;
struct block_device *bdev;
pgoff_t size, pgoff;
+ loff_t lstart, lend;
sector_t block;
int result = 0;

@@ -643,15 +644,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
goto fallback;
}

- /*
- * If we allocated new storage, make sure no process has any
- * zero pages covering this hole
- */
- if (buffer_new(&bh)) {
- i_mmap_unlock_read(mapping);
- unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
- i_mmap_lock_read(mapping);
- }
+ /* make sure no process has any zero pages covering this hole */
+ lstart = pgoff << PAGE_SHIFT;
+ lend = lstart + PMD_SIZE - 1; /* inclusive */
+ i_mmap_unlock_read(mapping);
+ unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
+ truncate_inode_pages_range(mapping, lstart, lend);
+ i_mmap_lock_read(mapping);

/*
* If a truncate happened while we were allocating blocks, we may
@@ -665,7 +664,8 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
goto out;
}
if ((pgoff | PG_PMD_COLOUR) >= size) {
- dax_pmd_dbg(&bh, address, "pgoff unaligned");
+ dax_pmd_dbg(&bh, address,
+ "offset + huge page size > file size");
goto fallback;
}

--
2.5.0