Re: [PATCH v2 05/12] mm: Add and use find_lock_entries

From: Jan Kara
Date: Wed Sep 30 2020 - 06:40:23 EST


On Tue 29-09-20 13:48:06, Matthew Wilcox wrote:
> On Tue, Sep 29, 2020 at 10:58:55AM +0200, Jan Kara wrote:
> > On Mon 14-09-20 14:00:35, Matthew Wilcox (Oracle) wrote:
> > > We have three functions (shmem_undo_range(), truncate_inode_pages_range()
> > > and invalidate_mapping_pages()) which want exactly this function, so
> > > add it to filemap.c.
> > >
> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> > > diff --git a/mm/shmem.c b/mm/shmem.c
> > ...
> > > index b65263d9bb67..a73ce8ce28e3 100644
> > > --- a/mm/shmem.c
> > > +++ b/mm/shmem.c
> > > @@ -905,12 +905,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
> > >
> > >  	pagevec_init(&pvec);
> > >  	index = start;
> > > -	while (index < end) {
> > > -		pvec.nr = find_get_entries(mapping, index,
> > > -			min(end - index, (pgoff_t)PAGEVEC_SIZE),
> > > -			pvec.pages, indices);
> > > -		if (!pvec.nr)
> > > -			break;
> > > +	while (index < end && find_lock_entries(mapping, index, end - 1,
> > > +			&pvec, indices)) {
> > >  		for (i = 0; i < pagevec_count(&pvec); i++) {
> > >  			struct page *page = pvec.pages[i];
> > >
> > > @@ -925,18 +921,10 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
> > >  								index, page);
> > >  				continue;
> > >  			}
> > > +			index += thp_nr_pages(page) - 1;
> > >
> > > -			VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page);
> > > -
> > > -			if (!trylock_page(page))
> > > -				continue;
> > > -
> > > -			if ((!unfalloc || !PageUptodate(page)) &&
> > > -			    page_mapping(page) == mapping) {
> > > -				VM_BUG_ON_PAGE(PageWriteback(page), page);
> > > -				if (shmem_punch_compound(page, start, end))
> > > -					truncate_inode_page(mapping, page);
> > > -			}
> > > +			if (!unfalloc || !PageUptodate(page))
> > > +				truncate_inode_page(mapping, page);
> >
> > Is dropping shmem_punch_compound() really safe? AFAICS it can also call
> > split_huge_page() which will try to split THP to be able to truncate it.
> > That being said there's another loop in shmem_undo_range() which will try
> > again so what you did might make a difference with performance but not much
> > else. But still it would be good to at least comment about this in the
> > changelog...
>
> OK, I need to provide better argumentation in the changelog.
>
> shmem_punch_compound() handles partial THPs. By the end of this series,
> we handle the partial pages in the next part of the function ... the
> part where we're handling partial PAGE_SIZE pages. At this point in
> the series, it's safe to remove the shmem_punch_compound() call because
> the new find_lock_entries() loop will only return THPs that lie entirely
> within the range.
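
In code terms, the guarantee being relied on is roughly the following (just a
sketch of the semantics described above, with a made-up helper name -- not
code from the patch):

	/*
	 * Sketch only: every regular page the new loop gets back from
	 * find_lock_entries(mapping, start, end, ...) is already locked
	 * and satisfies this predicate ('end' being the inclusive last
	 * index), so the first loop never sees a THP that straddles the
	 * range boundaries.
	 */
	static inline bool fully_inside_range(struct page *page,
					      pgoff_t start, pgoff_t end)
	{
		return page->index >= start &&
		       page->index + thp_nr_pages(page) - 1 <= end;
	}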

Yes, plus transitioning the first loop in shmem_undo_range() to
find_lock_entries(), which skips partial THPs, is safe at this point in the
series because the second loop in shmem_undo_range() still uses
find_get_entries() and shmem_punch_compound() and so handles partial THPs
properly.
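
To spell out the division of labour, the shape of shmem_undo_range() at this
point in the series is roughly the following (heavily condensed sketch, not
the literal code):

	/*
	 * Loop 1: only whole pages and fully-contained THPs show up here,
	 * already locked by find_lock_entries(), so they can be truncated
	 * directly without any splitting.
	 */
	while (index < end &&
	       find_lock_entries(mapping, index, end - 1, &pvec, indices)) {
		/* ... truncate_inode_page() on each page ... */
	}

	/* ... partial PAGE_SIZE pages at each end of the range ... */

	/*
	 * Loop 2: still find_get_entries(), so it does see partial THPs;
	 * the shmem_punch_compound() path splits/truncates them as before.
	 */
	while (index < end) {
		/* ... find_get_entries() + shmem_punch_compound() ... */
	}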

Anyway, I'm now convinced the patch is fine so feel free to add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

after expanding the changelog.

Honza

--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR