Re: [PATCHv2 1/1] mm: fix improper folio_put by changing API in read_pages

From: Zhaoyang Huang
Date: Wed Apr 03 2024 - 01:51:13 EST


On Tue, Apr 2, 2024 at 8:58 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 01.04.24 10:17, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > A VM_BUG_ON in step 10 of [1] can be hit because a refcount is
> > dropped improperly in read_pages()->readahead_folio()->folio_put().
> > This was introduced by commit 9fd472af84ab ("mm: improve cleanup when
> > ->readpages doesn't process all pages").
> >
> > Key steps of [1] in brief:
> > 2'. Thread_truncate gets the folio into its local fbatch via
> > find_get_entries in step 2
> > 7'. After step 7, the last remaining refcount is not the alloc_pages
> > one that read_pages() expects, but the one held by thread_truncate's
> > local fbatch
> > 8'. Thread_reclaim succeeds in isolating the folio in step 8 because
> > the refcount is wrong in meaning, not in value (it matches what
> > reclaim expects, but its owner is not who reclaim thinks it is)
> > 10'. Thread_truncate hits the VM_BUG_ON in step 10
> >
> > [1]
> > Thread_readahead:
> > 0. folio = filemap_alloc_folio(gfp_mask, 0);
> > (refcount 1: alloc_pages)
> > 1. ret = filemap_add_folio(mapping, folio, index + i, gfp_mask);
> > (refcount 2: alloc_pages, page_cache)
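> >
> > (Steps 0-1 correspond roughly to the allocation loop in
> > page_cache_ra_unbounded(); a simplified sketch, error paths trimmed,
> > details vary by kernel version:)
> >
> > folio = filemap_alloc_folio(gfp_mask, 0);       /* refcount 1: alloc */
> > if (!folio)
> >     break;
> > if (filemap_add_folio(mapping, folio, index + i, gfp_mask) < 0) {
> >     folio_put(folio);       /* drop the alloc reference on failure */
> >     break;
> > }
> > /* success: refcount 2 (alloc_pages + page_cache), folio is locked */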
> >
> > Thread_truncate:
> > 2. folio = find_get_entries(&fbatch_truncate);
> > (refcount 3: alloc_pages, page_cache, fbatch_truncate)
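> >
> > (Each folio stored in fbatch_truncate carries one extra reference,
> > taken in find_get_entry(); a simplified sketch, details vary by
> > kernel version:)
> >
> > folio = xas_load(&xas);
> > if (!folio_try_get_rcu(folio))  /* this ref is owned by the batch */
> >     goto retry;
> > folio_batch_add(&fbatch_truncate, folio);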
> >
> > Thread_readahead:
> > 3. Then we call read_pages()
> > First we call ->readahead() which for some reason stops early.
> > 4. Then we call readahead_folio() which calls folio_put()
> > (refcount 2: page_cache, fbatch_truncate)
> > 5. Then we call folio_get()
> > (refcount 3: page_cache, fbatch_truncate, read_pages_temp)
> > 6. Then we call filemap_remove_folio()
> > (refcount 2: fbatch_truncate, read_pages_temp)
> > 7. Then we call folio_unlock() and folio_put()
> > (refcount 1: fbatch_truncate)
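> >
> > (Steps 4-7 are the cleanup loop in read_pages() introduced by
> > 9fd472af84ab; a simplified sketch:)
> >
> > while ((folio = readahead_folio(ractl)) != NULL) {  /* step 4: folio_put */
> >     unsigned long nr = folio_nr_pages(folio);
> >
> >     folio_get(folio);                       /* step 5 */
> >     ractl->ra->size -= nr;
> >     if (ractl->ra->async_size >= nr) {
> >         ractl->ra->async_size -= nr;
> >         filemap_remove_folio(folio);        /* step 6 */
> >     }
> >     folio_unlock(folio);                    /* step 7 */
> >     folio_put(folio);                       /* step 7 */
> > }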
> >
> > Thread_reclaim:
> > 8. Then reclaim collects the folio from the LRU and calls
> > shrink_inactive_list()->isolate_lru_folios()
> > shrink_inactive_list()
> > {
> >     isolate_lru_folios()
> >     {
> >         if (!folio_test_lru(folio))     // false, so no bail out
> >             bail out;
> >         if (!folio_try_get(folio))      // false, so no bail out
> >             bail out;
> >     }
> > }
> > (refcount 2: fbatch_truncate, reclaim_isolate)
> >
> > 9. Then it calls shrink_folio_list()->__remove_mapping()
> > shrink_folio_list()
> > {
> >     folio_trylock(folio);
> >     __remove_mapping()
> >     {
> >         if (!folio_ref_freeze(folio, 2))    // false: the freeze succeeds
> >             bail out;
> >     }
> >     folio_unlock(folio);
> >     list_add(&folio->lru, &free_folios);
> > }
> > (folio has refcount 0)
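> >
> > (__remove_mapping() freezes the refcount assuming the two remaining
> > references are {reclaim_isolate, page_cache}; here they are actually
> > {reclaim_isolate, fbatch_truncate}, so the freeze to zero succeeds
> > against a reference reclaim does not own. The freeze itself is a
> > cmpxchg, roughly:)
> >
> > /* succeeds, and sets the refcount to 0, only if it is exactly @count */
> > static inline bool folio_ref_freeze(struct folio *folio, int count)
> > {
> >     return atomic_cmpxchg(&folio->_refcount, count, 0) == count;
> > }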
> >
> > Thread_truncate:
> > 10. Thread_truncate hits the VM_BUG_ON (refcount already 0) in
> > release_pages()->folio_put_testzero()
> > truncate_inode_pages_range()
> > {
> >     folio = find_get_entries(&fbatch_truncate);
> >     truncate_cleanup_folio(folio);      /* for each folio in the batch */
> >     folio_batch_release(&fbatch_truncate);
> >     {
> >         folio_put_testzero(folio);      // VM_BUG_ON here
> >     }
> > }
> >
> > Fixes: 9fd472af84ab ("mm: improve cleanup when ->readpages doesn't process all pages")
>
> Something that would help here is an actual reproducer that triggers
> this issue.
>
> To me, it's unclear at this point if we are talking about an actual
> issue or a theoretical issue?
Thanks for the feedback. The callstack above is a theoretical issue so
far; it arose from an ongoing analysis of a practical livelock issue
caused by folio_try_get_rcu(), which is related to an abnormal folio
refcount state. Do you think this callstack makes sense?
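
For context, the livelock side we are debugging spins in the page cache
lookup retry loop, roughly as in filemap_get_entry() (a simplified
sketch of the pattern we suspect, not a confirmed root cause):

repeat:
	folio = xas_load(&xas);
	...
	if (!folio_try_get_rcu(folio))
		/* retries while the refcount stays 0 */
		goto repeat;

A folio left with refcount 0 while still reachable from the xarray
would keep the lookup spinning here, which is how we ended up at the
refcount accounting above.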

>
> --
> Cheers,
>
> David / dhildenb
>