Re: [PATCH 06/10] mm: vmscan: demote anon DRAM pages to PMEM node

From: Yang Shi
Date: Tue Mar 26 2019 - 23:45:28 EST




On 3/26/19 5:35 PM, Keith Busch wrote:
> On Mon, Mar 25, 2019 at 12:49:21PM -0700, Yang Shi wrote:
>> On 3/24/19 3:20 PM, Keith Busch wrote:
>>> How do these pages eventually get to swap when migration fails? Looks
>>> like that's skipped.
>> Yes, they will just be put back on the LRU. Actually, I don't expect
>> migration to fail very often at this stage (though I have no test data
>> to support this hypothesis), since the pages have been isolated from
>> the LRU, so other reclaim paths should not find them anymore.
>>
>> If a page is locked by someone else right before migration, it was
>> likely referenced again, so putting it back on the LRU sounds fine.
>>
>> A potential improvement is to use sync migration for kswapd.
> Well, it's not that migration fails only if the page was recently
> referenced. Migration would fail if there isn't available memory on
> the migration node, so this implementation carries an expectation that
> migration nodes have higher free capacity than source nodes. And since
> you're attempting THPs without ever splitting them, that also requires
> lower fragmentation for a successful migration.

Yes, it is possible. However, migrate_pages() already has logic to handle such a case. If the target node does not have enough space to migrate a THP as a whole, it will split the THP and then retry with base pages.

Swapping out a THP has been optimized to swap it out as a whole too. It first tries to add the THP to the swap cache as a whole, splits the THP if that attempt fails, then adds the base pages to the swap cache.

So, I think we can leave this to migrate_pages() rather than splitting in advance all the time.

Thanks,
Yang


> Applications, however, may allocate and pin pages directly out of that
> migration node to the point that it does not have much free capacity or
> physical contiguity, so we probably shouldn't assume migration is the
> only way to reclaim pages.