Re: mm: compaction: buffer overflow in isolate_migratepages_range
From: Andrey Ryabinin
Date: Thu Aug 14 2014 - 14:07:48 EST
2014-08-14 19:13 GMT+04:00 Rafael Aquini <aquini@xxxxxxxxxx>:
>> Yeah, it happens because I failed to anticipate a race window opening where
>> balloon_page_movable() can stumble across an anon page being released --
>> somewhere midway between __page_cache_release() and free_pages_prepare()
>> on the put_page() codepath -- while isolate_migratepages_range() performs
>> its loop in the (lru-)unlocked case.
> Giving it a second thought, I see my first analysis (above) isn't accurate:
> had we raced against a page being released at the point I mentioned,
> balloon_page_movable() would have bailed out at its
> page_flags_cleared() checkpoint.
> But I can now see where this occurrence is actually coming from.
> The real race window for this issue opens when the balloon_page_movable()
> checkpoint in isolate_migratepages_range() stumbles across a (new)
> page under migration at:
>
> static int move_to_new_page(struct page *newpage, struct page *page, ...
>         newpage->mapping = page->mapping;
>
> At this point, *newpage points to a fresh page coming out of the allocator
> (just like any other potential ballooned page), but it gets its ->mapping
> pointer set, which creates the conditions for the access (done for
> mapping-flag checking purposes only) that KASAN is complaining about,
> if *page happens to be an anon page.
>> Although harmless, IMO, as we only go for the isolation step if we hold the
>> lru lock (and the check is re-done under lock safety) this is an
>> annoying thing we have to get rid of to not defeat the purpose of having
>> the kasan in place.
> It is still a harmless condition, as before, but considering the above
> I'm now convinced & confident the patch proposed by Andrey is the real fix
> for such occurrences.
I don't think it's harmless, because we could cross a page boundary here and
try to read from a memory hole.
This code also has further potential problems, such as use-after-free: since
we don't hold the proper locks here,
page->mapping could point to a freed struct address_space.
We discussed this with Konstantin and he suggested a better solution for this.
If I understood him correctly, the main idea is to store a bit
identifying a balloon page
in struct page itself (a special value in _mapcount), so we won't need to
look at page->mapping at all during the unlocked scan.