Re: summarize all information again at bottom//reply: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru

From: Zhaoyang Huang
Date: Mon Mar 18 2024 - 20:49:06 EST


On Mon, Mar 18, 2024 at 8:32 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
>
> Stop creating new threads. You're really annoying.
>
> On Mon, Mar 18, 2024 at 09:32:32AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> > Summarize all information below to make it more clear(remove thread2 which is not mandatory and make the scenario complex)
>
> You've gone back to over-indenting. STOP IT.
>
> > #thread 0(madivise_cold_and_pageout) #thread1(truncate_inode_pages_range)
>
> This is still an impossible race, and it's the third time I've told you
> this. And madivise_cold_and_pageout does not exist, it's
> madvise_cold_or_pageout_pte_range(). I'm going to stop responding to
> your emails if you keep on uselessly repeating the same mistakes.
>
> So, once again,
>
> For madvise_cold_or_pageout_pte_range() to find a page, it must have
> a PTE pointing to the page. That means there's a mapcount on the page.
> That means there's a refcount on the page.
>
> truncate_inode_pages_range() will indeed attempt to remove a page from
> the page cache. BUT before it does that, it has to shoot down TLB
> entries that refer to the affected folios. That happens like this:
>
> for (i = 0; i < folio_batch_count(&fbatch); i++)
> truncate_cleanup_folio(fbatch.folios[i]);
> truncate_cleanup_folio() -> unmap_mapping_folio ->
> unmap_mapping_range_tree() -> unmap_mapping_range_vma() ->
> zap_page_range_single() -> unmap_single_vma -> unmap_page_range ->
> zap_p4d_range -> zap_pud_range -> zap_pmd_range -> zap_pte_range ->
> pte_offset_map_lock()
Sorry, and thanks for the reminder. I wonder whether
madvise_cold_or_pageout_pte_range could still join this race before
truncate_inode_pages_range has finished the PTE cleanup via
truncate_cleanup_folio, which would seem to keep the race timing below
possible. BTW, damon_pa_pageout is a potential risk for this race as
well.
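
To make that last point concrete, here is an abridged paraphrase of the
damon_pa_pageout() path as I read it (mm/damon/paddr.c, around v6.8;
damos filters, the unevictable case and cond_resched() are omitted).
The folio reference is taken from the PFN via damon_get_folio(), so no
PTE and no mapcount is required before folio_isolate_lru() is called:

static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
{
	unsigned long addr, applied;
	LIST_HEAD(folio_list);

	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
		/* reference comes from the PFN, not from any PTE */
		struct folio *folio = damon_get_folio(PHYS_PFN(addr));

		if (!folio)
			continue;

		folio_clear_referenced(folio);
		folio_test_clear_young(folio);
		if (!folio_isolate_lru(folio))	/* the call in question */
			goto put_folio;
		list_add(&folio->lru, &folio_list);
put_folio:
		folio_put(folio);
	}
	applied = reclaim_pages(&folio_list);
	return applied * PAGE_SIZE;
}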

>
> > #thread 0                                  #thread 1
> > pte_offset_map_lock takes NO lock
> >                                            truncate_inode_folio(refcnt == 2)
> >                                            <decrease the refcnt of page cache>
> > folio_isolate_lru(refcnt == 1)
> >                                            release_pages(refcnt == 1)
> > folio_test_clear_lru
> > <remove folio's PG_lru>
> >                                            folio_put_testzero == true
> > folio_get(refer to isolation)
> >                                            folio_test_lru == false
> >                                            <No lruvec_del_folio>
> >                                            list_add(folio->lru, pages_to_free)
> > ****current folio will break LRU's integrity since it has not been deleted****
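
For reference, this is roughly how the two sides of the diagram look to
me in current sources (abridged paraphrase of mm/vmscan.c and the
release_pages() loop in mm/swap.c, around v6.8; lruvec locking details
trimmed). The point is that only the side which wins the PG_lru test
performs lruvec_del_folio():

/* thread 0 side, mm/vmscan.c */
bool folio_isolate_lru(struct folio *folio)
{
	bool ret = false;

	if (folio_test_clear_lru(folio)) {	/* clears PG_lru */
		struct lruvec *lruvec;

		folio_get(folio);		/* extra ref for the isolation */
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);	/* off the LRU list */
		unlock_page_lruvec_irq(lruvec);
		ret = true;
	}
	return ret;
}

/* thread 1 side, loop body of release_pages() in mm/swap.c */
	if (!folio_put_testzero(folio))		/* refcnt 1 -> 0 */
		continue;

	if (folio_test_lru(folio)) {		/* false if PG_lru already cleared */
		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
		lruvec_del_folio(lruvec, folio);
		__folio_clear_lru_flags(folio);
	}

	list_add(&folio->lru, &pages_to_free);	/* in the claimed race, the folio
						   was never deleted from the LRU */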
> >
> > 0. The folio's refcnt drops from 2 to 1 via filemap_remove_folio
> > 1. thread 0 calls folio_isolate_lru with refcnt == 1; the folio is found via the vma's PTE
> > 2. thread 1 calls release_pages with refcnt == 1; the folio comes from the address_space
> > (refcnt == 1 is legitimate for both folio_isolate_lru and release_pages)
> > 3. thread 0 clears the folio's PG_lru via folio_test_clear_lru
> > 4. thread 1 drops the folio's refcnt from 1 to 0 (folio_put_testzero) and gets permission to proceed
> > 5. thread 1 sees folio_test_lru == false and therefore does no lruvec_del_folio/list_del
> > 6. thread 1 wrongly adds the folio to pages_to_free, which breaks the LRU list
> > 7. the next folio handled by thread 1 then hits list_del corruption when lruvec_del_folio is called
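
To illustrate steps 6 and 7 above: a folio that was never removed from
the LRU still has its lru list_head linked there, so re-using that
list_head for pages_to_free corrupts both lists. A toy userspace model
(plain C with a minimal list_head, NOT kernel code) of that failure:

#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

int main(void)
{
	struct list_head lru, pages_to_free, folio_a, folio_b;

	list_init(&lru);
	list_init(&pages_to_free);
	list_add_tail(&folio_a, &lru);	/* folio A sits on the LRU */
	list_add_tail(&folio_b, &lru);	/* folio B is its LRU neighbour */

	/* step 6: folio A is added to pages_to_free WITHOUT a list_del */
	list_add_tail(&folio_a, &pages_to_free);

	/* the LRU still points at folio A, but A's links now describe
	 * pages_to_free */
	printf("lru.next still == &folio_a: %d\n", lru.next == &folio_a);
	printf("folio_a.next == &pages_to_free: %d\n",
	       folio_a.next == &pages_to_free);

	/* step 7: deleting folio B needs folio_b.prev->next == &folio_b,
	 * i.e. folio_a.next == &folio_b -- exactly the condition that
	 * CONFIG_DEBUG_LIST reports as list_del corruption */
	printf("folio_b.prev->next == &folio_b: %d\n",
	       folio_b.prev->next == &folio_b);
	return 0;
}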