Re: [PATCH 06/10] mm: vmscan: demote anon DRAM pages to PMEM node
From: Keith Busch
Date: Sun Mar 24 2019 - 18:19:38 EST
On Sat, Mar 23, 2019 at 12:44:31PM +0800, Yang Shi wrote:
> /*
> + * Demote DRAM pages regardless of the mempolicy.
> + * Demote anonymous pages only for now and skip
> + * MADV_FREE pages.
> + */
> + if (PageAnon(page) && !PageSwapCache(page) &&
> + (node_isset(page_to_nid(page), def_alloc_nodemask)) &&
> + PageSwapBacked(page)) {
> +
> + if (has_nonram_online()) {
> + list_add(&page->lru, &demote_pages);
> + unlock_page(page);
> + continue;
> + }
> + }
> +
> + /*
> * Anonymous process memory has backing store?
> * Try to allocate it some swap space here.
> * Lazyfree page could be freed directly
> @@ -1477,6 +1507,25 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
> }
>
> + /* Demote pages to PMEM */
> + if (!list_empty(&demote_pages)) {
> + int err, target_nid;
> + nodemask_t used_mask;
> +
> + nodes_clear(used_mask);
> + target_nid = find_next_best_node(pgdat->node_id, &used_mask,
> + true);
> +
> + err = migrate_pages(&demote_pages, alloc_new_node_page, NULL,
> + target_nid, MIGRATE_ASYNC, MR_DEMOTE);
> +
> + if (err) {
> + putback_movable_pages(&demote_pages);
> +
> + list_splice(&ret_pages, &demote_pages);
> + }
> + }
> +
> mem_cgroup_uncharge_list(&free_pages);
> try_to_unmap_flush();
> free_unref_page_list(&free_pages);
How do these pages eventually get to swap when migration fails? It looks
like that path is skipped: the failed pages are put back and spliced with
ret_pages, so they never reach the swap allocation below.
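As a rough sketch of what I'd expect instead (untested, and assuming
migrate_pages() leaves the failed pages on the passed-in list as usual),
the failures could rejoin the normal reclaim path so they still get a
shot at swap:

```c
	err = migrate_pages(&demote_pages, alloc_new_node_page, NULL,
			    target_nid, MIGRATE_ASYNC, MR_DEMOTE);
	if (err) {
		/*
		 * Sketch only: pages that could not be demoted are
		 * still on demote_pages. Splice them back onto
		 * page_list so the caller's retry can attempt the
		 * regular swap-out path instead of stranding them
		 * on ret_pages.
		 */
		list_splice_init(&demote_pages, page_list);
	}
```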
And page cache demotion is useful too; we shouldn't restrict this
feature to anonymous pages.
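For example, the gating check could drop the PageAnon() requirement and
admit clean file pages as well. An untested sketch, reusing the patch's
own has_nonram_online() and def_alloc_nodemask (dirty file pages would
still need writeback handling first):

```c
	/*
	 * Sketch: demote anonymous pages (skipping MADV_FREE and
	 * swap cache pages) and clean page cache pages alike.
	 */
	if (node_isset(page_to_nid(page), def_alloc_nodemask) &&
	    (PageSwapBacked(page) ? !PageSwapCache(page)
				  : !PageDirty(page))) {
		if (has_nonram_online()) {
			list_add(&page->lru, &demote_pages);
			unlock_page(page);
			continue;
		}
	}
```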