Re: [PATCHv3 13/41] truncate: make sure invalidate_mapping_pages() can discard huge pages

From: Kirill A. Shutemov
Date: Tue Oct 11 2016 - 18:07:34 EST


On Tue, Oct 11, 2016 at 05:58:15PM +0200, Jan Kara wrote:
> On Thu 15-09-16 14:54:55, Kirill A. Shutemov wrote:
> > invalidate_inode_page() has an expectation about the page_count() of the
> > page -- if it's not 2 (one for the caller, one for the radix tree), the
> > page will not be dropped. That condition is almost never met for THPs --
> > tail pages are pinned by the pagevec.
> >
> > Let's drop them, before calling invalidate_inode_page().
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > ---
> > mm/truncate.c | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> > diff --git a/mm/truncate.c b/mm/truncate.c
> > index a01cce450a26..ce904e4b1708 100644
> > --- a/mm/truncate.c
> > +++ b/mm/truncate.c
> > @@ -504,10 +504,21 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
> > /* 'end' is in the middle of THP */
> > if (index == round_down(end, HPAGE_PMD_NR))
> > continue;
> > + /*
> > + * invalidate_inode_page() expects
> > + * page_count(page) == 2 to drop page from page
> > + * cache -- drop tail pages references.
> > + */
> > + get_page(page);
> > + pagevec_release(&pvec);
>
> I'm not quite sure why this is needed. When you have multiorder entry in
> the radix tree for your huge page, then you should not get more entries in
> the pagevec for your huge page. What do I miss?

For compatibility reasons, find_get_entries() (which is called by
pagevec_lookup_entries()) collects all subpages of a huge page in the
range (head and tails). See patch [07/41].

So a huge page that lies fully within the range will be pinned up to
PAGEVEC_SIZE times.

--
Kirill A. Shutemov