Re: [PATCH v2] rmap: fix theoretical race between do_wp_page and shrink_active_list

From: Minchan Kim
Date: Tue May 12 2015 - 21:14:33 EST

On Tue, May 12, 2015 at 01:18:39PM +0300, Vladimir Davydov wrote:
> As noted by Paul, the compiler is free to store a temporary result in a
> variable on the stack, heap, or a global unless it is explicitly marked
> volatile, see:
> This can result in a race between do_wp_page() and shrink_active_list()
> as follows.
> In do_wp_page() we can call page_move_anon_rmap(), which sets
> page->mapping as follows:
> anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
> page->mapping = (struct address_space *) anon_vma;
> The page in question may be on an LRU list, because nowhere in
> do_wp_page() do we remove it from the list, nor do we take any
> LRU-related locks. Although the page is locked, shrink_active_list()
> can still call page_referenced() on it concurrently, because the
> latter does not require an anonymous page to be locked:
> do_wp_page                      shrink_active_list
> ----------                      ------------------
> lock_page
>                                 page_referenced
>                                  PageAnon->yes, so skip trylock_page
> page_move_anon_rmap
>  page->mapping = anon_vma
>                                 rmap_walk
>                                  PageAnon->no
>                                  rmap_walk_file
>  page->mapping += PAGE_MAPPING_ANON
> This patch fixes this race by explicitly forbidding the compiler from
> splitting the page->mapping store in page_move_anon_rmap() with the aid of
> Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
> Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
> Cc: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> ---
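For illustration, the split store and the single-store fix can be sketched in plain C. This is a simplified model, not the kernel code: the struct definitions are stand-ins, and while the macro name is truncated in the quote above, WRITE_ONCE() is the usual tool for forbidding such a split.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_MAPPING_ANON 0x1UL

/* Minimal stand-ins for the kernel structures; just enough to show
 * the pointer-tagging trick (assumption: not the real definitions). */
struct address_space;
struct anon_vma { int dummy; };
struct page { struct address_space *mapping; };

/* As described in the patch, the compiler may emit the tagged-pointer
 * assignment as two stores: first the bare anon_vma pointer, then the
 * +PAGE_MAPPING_ANON.  Between the two stores a concurrent PageAnon()
 * check observes a cleared PAGE_MAPPING_ANON bit. */
static void page_move_anon_rmap_tearable(struct page *page,
					 struct anon_vma *av)
{
	page->mapping = (struct address_space *)av;
	page->mapping = (struct address_space *)
			((uintptr_t)page->mapping + PAGE_MAPPING_ANON);
}

/* The fix: tag the pointer first, then publish it with a single store
 * (wrapped in WRITE_ONCE() in the kernel so it cannot be split). */
static void page_move_anon_rmap_fixed(struct page *page,
				      struct anon_vma *av)
{
	av = (struct anon_vma *)((uintptr_t)av + PAGE_MAPPING_ANON);
	page->mapping = (struct address_space *)av;
}

static int page_is_anon(const struct page *page)
{
	return ((uintptr_t)page->mapping & PAGE_MAPPING_ANON) != 0;
}
```

Both helpers leave the page in the same final state; the difference is only in the intermediate state a concurrent reader may observe.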

The paper says "This requires escape analysis: blah blah for this optimization
to be valid". So I'm not sure this is the case here, but I admit we cannot
guarantee anything about every compiler optimization technique, so I am in
favor of the patch as future-proofing against surprising compiler techniques.
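Since compiler freedom is hard to bound, the kernel's answer is to route the store through a volatile access. A simplified sketch of what WRITE_ONCE()/READ_ONCE() boil down to (assumption: this is only the essence; the real kernel macros additionally dispatch on the access size):

```c
#include <stdint.h>

/* The essence of the kernel's WRITE_ONCE()/READ_ONCE(): access through
 * a volatile-qualified lvalue.  The compiler must perform a volatile
 * access exactly once, in order, and may not split or fuse it.
 * Simplified sketch, not the real macros. */
#define WRITE_ONCE_SKETCH(x, val) \
	(*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE_SKETCH(x) \
	(*(volatile __typeof__(x) *)&(x))

static uintptr_t shared;

static void publish(uintptr_t tagged)
{
	/* One store of the already-tagged value; the compiler may not
	 * emit it as two partial stores. */
	WRITE_ONCE_SKETCH(shared, tagged);
}

static uintptr_t peek(void)
{
	return READ_ONCE_SKETCH(shared);
}
```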

Another review point I had is whether a lockless page in shrink_active_list
could be turned into a PageKsm page in the middle of page_referenced. IOW:

PageAnon && !PageKsm -> true, so avoid trylock_page
<... stall starts ...>
Another CPU turns the page into a PageKsm page
<... stall ends ...>
PageKsm -> true
-> bang, because KSM expects the passed page to be locked

However, we increased page->count in isolate_lru_page before passing
the page to page_referenced, so KSM cannot make the page a KsmPage, so
it's safe.
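A toy model of why that extra reference blocks KSM (names and the check are my simplification, loosely modeled on KSM's page_count vs. mapcount test in write_protect_page(); not the kernel's actual code):

```c
/* Toy page: just the counters that matter for this argument. */
struct toy_page {
	int count;    /* total references held on the page */
	int mapcount; /* user mappings of the page */
	int ksm;      /* set once the page has become a KSM page */
};

/* KSM only proceeds when the reference count is fully explained by the
 * mappings it knows about (mapcount + 1 for the page itself).  An extra
 * reference, such as the one taken by isolate_lru_page, makes the
 * counts mismatch and KSM bails out. */
static int toy_try_make_ksm(struct toy_page *p)
{
	if (p->count != p->mapcount + 1)
		return 0; /* someone else holds a reference: refuse */
	p->ksm = 1;
	return 1;
}
```

So as long as shrink_active_list holds its reference, the page cannot flip to PageKsm under page_referenced.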

Acked-by: Minchan Kim <minchan@xxxxxxxxxx>

Kind regards,
Minchan Kim