Re: [PATCH v8 2/9] dax: fix conversion of holes to PMDs

From: Jan Kara
Date: Tue Jan 12 2016 - 04:44:50 EST


On Thu 07-01-16 22:27:52, Ross Zwisler wrote:
> When we get a DAX PMD fault for a write, some number of 4k zero pages may
> already be present for the same range, inserted earlier to service reads
> from a hole. These 4k zero pages need to be unmapped from the VMAs and
> removed from the struct address_space radix tree before the real DAX PMD
> entry can be inserted.
>
> The same situation exists for PTE faults, where it is handled by a
> combination of unmap_mapping_range() to unmap the VMAs and
> delete_from_page_cache() to remove the page from the address_space radix
> tree.
>
> For PMD faults we do have a call to unmap_mapping_range() (protected by a
> buffer_new() check), but nothing clears out the radix tree entry. The
> buffer_new() check is also incorrect as the current ext4 and XFS filesystem
> code will never return a buffer_head with BH_New set, even when allocating
> new blocks over a hole. Instead, the filesystem will zero the blocks
> manually and return a buffer_head with only BH_Mapped set.
>
> Fix this situation by removing the buffer_new() check and adding a call to
> truncate_inode_pages_range() to clear out the radix tree entries before we
> insert the DAX PMD.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
> Reported-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> Tested-by: Dan Williams <dan.j.williams@xxxxxxxxx>

Just two nits below. Nothing serious, so you can add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

> ---
> fs/dax.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 513bba5..5b84a46 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -589,6 +589,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
> bool write = flags & FAULT_FLAG_WRITE;
> struct block_device *bdev;
> pgoff_t size, pgoff;
> + loff_t lstart, lend;
> sector_t block;
> int result = 0;
>
> @@ -643,15 +644,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
> goto fallback;
> }
>
> - /*
> - * If we allocated new storage, make sure no process has any
> - * zero pages covering this hole
> - */
> - if (buffer_new(&bh)) {
> - i_mmap_unlock_read(mapping);
> - unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
> - i_mmap_lock_read(mapping);
> - }
> + /* make sure no process has any zero pages covering this hole */
> + lstart = pgoff << PAGE_SHIFT;
> + lend = lstart + PMD_SIZE - 1; /* inclusive */
> + i_mmap_unlock_read(mapping);

Just a nit, but is there a reason why we grab i_mmap_lock_read(mapping)
only to release it a few lines below? The bh checks inside the locked
region don't seem to rely on i_mmap_lock...
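
Just to illustrate (untested, and assuming nothing between the current
i_mmap_lock_read() and this point actually needs the lock), the
unlock/relock dance could be avoided by deferring the lock until after
the cleanup:

	lstart = pgoff << PAGE_SHIFT;
	lend = lstart + PMD_SIZE - 1;	/* inclusive */
	/* drop the zero pages before taking i_mmap_lock_read() at all */
	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
	truncate_inode_pages_range(mapping, lstart, lend);
	i_mmap_lock_read(mapping);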

> + unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
> + truncate_inode_pages_range(mapping, lstart, lend);

These two calls can be shortened to:

truncate_pagecache_range(inode, lstart, lend);
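
For reference, the helper is essentially this pair already (condensed
from mm/truncate.c, comments mine):

	void truncate_pagecache_range(struct inode *inode,
				      loff_t lstart, loff_t lend)
	{
		struct address_space *mapping = inode->i_mapping;
		/* unmap only whole pages inside [lstart, lend] */
		loff_t unmap_start = round_up(lstart, PAGE_SIZE);
		loff_t unmap_end = round_down(1 + lend, PAGE_SIZE) - 1;

		unmap_mapping_range(mapping, unmap_start,
				    1 + unmap_end - unmap_start, 0);
		truncate_inode_pages_range(mapping, lstart, lend);
	}

Since lstart and lend + 1 are PMD-aligned here, the page rounding is a
no-op, and 'inode' is the mapping->host you already have in
__dax_pmd_fault().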


Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR