[PATCHv4 14/43] filemap: handle huge pages in do_generic_file_read()
From: Kirill A. Shutemov
Date: Mon Oct 24 2016 - 20:14:28 EST
Most of the work happens on the head page. Only when we need to copy
data to userspace do we find the relevant subpage.
We are still limited by PAGE_SIZE per iteration. Lifting this limitation
would require some more work.
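
For illustration, the subpage arithmetic used in the copy path below
boils down to the following sketch (not part of the patch; the helper
name find_subpage() is made up here):

	/* Map a page cache index to the subpage of a compound page. */
	static struct page *find_subpage(struct page *page, pgoff_t index)
	{
		struct page *head = compound_head(page);

		/* head->index is the page cache index of the first subpage */
		return head + (index - head->index);
	}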
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
---
mm/filemap.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index f8387488636f..ca4536f2035e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1906,6 +1906,7 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
if (unlikely(page == NULL))
goto no_cached_page;
}
+ page = compound_head(page);
if (PageReadahead(page)) {
page_cache_async_readahead(mapping,
ra, filp, page,
@@ -1984,7 +1985,8 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
* now we can copy it to user space...
*/
- ret = copy_page_to_iter(page, offset, nr, iter);
+ ret = copy_page_to_iter(page + index - page->index, offset,
+ nr, iter);
offset += ret;
index += offset >> PAGE_SHIFT;
offset &= ~PAGE_MASK;
@@ -2402,6 +2404,7 @@ int filemap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
* because there really aren't any performance issues here
* and we need to check for errors.
*/
+ page = compound_head(page);
ClearPageError(page);
error = mapping->a_ops->readpage(file, page);
if (!error) {
--
2.9.3