Re: [PATCH v2 4/5] mm: introduce MADV_PAGEOUT

From: Michal Hocko
Date: Thu Jun 20 2019 - 03:10:02 EST


On Thu 20-06-19 13:16:20, Minchan Kim wrote:
> On Wed, Jun 19, 2019 at 03:24:50PM +0200, Michal Hocko wrote:
> > On Mon 10-06-19 20:12:51, Minchan Kim wrote:
> > [...]
> > > +static int madvise_pageout_pte_range(pmd_t *pmd, unsigned long addr,
> > > +				unsigned long end, struct mm_walk *walk)
> >
> > Again the same question about potential code reuse...
> > [...]
> > > +regular_page:
> > > +	tlb_change_page_size(tlb, PAGE_SIZE);
> > > +	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > +	flush_tlb_batched_pending(mm);
> > > +	arch_enter_lazy_mmu_mode();
> > > +	for (; addr < end; pte++, addr += PAGE_SIZE) {
> > > +		ptent = *pte;
> > > +		if (!pte_present(ptent))
> > > +			continue;
> > > +
> > > +		page = vm_normal_page(vma, addr, ptent);
> > > +		if (!page)
> > > +			continue;
> > > +
> > > +		if (isolate_lru_page(page))
> > > +			continue;
> > > +
> > > +		isolated++;
> > > +		if (pte_young(ptent)) {
> > > +			ptent = ptep_get_and_clear_full(mm, addr, pte,
> > > +							tlb->fullmm);
> > > +			ptent = pte_mkold(ptent);
> > > +			set_pte_at(mm, addr, pte, ptent);
> > > +			tlb_remove_tlb_entry(tlb, pte, addr);
> > > +		}
> > > +		ClearPageReferenced(page);
> > > +		test_and_clear_page_young(page);
> > > +		list_add(&page->lru, &page_list);
> > > +		if (isolated >= SWAP_CLUSTER_MAX) {
> >
> > Why do we need SWAP_CLUSTER_MAX batching? Especially when we need ...
> > [...]
>
> It aims to prevent an early OOM kill that could happen if we isolated
> too many LRU pages concurrently.

This is a good point. For some reason I thought that we still considered
isolated pages in should_reclaim_retry, but we do not anymore (since we
moved from zone to node LRUs, I guess). Please stick a comment there
explaining this.
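Something along these lines is what I have in mind - just a sketch; the
drain-and-relock details are whatever the elided code after the check
actually does, and reclaim_pages() is the helper quoted below:

	if (isolated >= SWAP_CLUSTER_MAX) {
		/*
		 * Isolated pages are no longer accounted by
		 * should_reclaim_retry, so isolating too many of them
		 * at once could trigger a premature OOM kill. Drain
		 * the batch before isolating more.
		 */
		pte_unmap_unlock(orig_pte, ptl);
		reclaim_pages(&page_list);
		isolated = 0;
		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		orig_pte = pte;
	}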

> > > +unsigned long reclaim_pages(struct list_head *page_list)
> > > +{
> > > +	int nid = -1;
> > > +	unsigned long nr_reclaimed = 0;
> > > +	LIST_HEAD(node_page_list);
> > > +	struct reclaim_stat dummy_stat;
> > > +	struct scan_control sc = {
> > > +		.gfp_mask = GFP_KERNEL,
> > > +		.priority = DEF_PRIORITY,
> > > +		.may_writepage = 1,
> > > +		.may_unmap = 1,
> > > +		.may_swap = 1,
> > > +	};
> > > +
> > > +	while (!list_empty(page_list)) {
> > > +		struct page *page;
> > > +
> > > +		page = lru_to_page(page_list);
> > > +		if (nid == -1) {
> > > +			nid = page_to_nid(page);
> > > +			INIT_LIST_HEAD(&node_page_list);
> > > +		}
> > > +
> > > +		if (nid == page_to_nid(page)) {
> > > +			list_move(&page->lru, &node_page_list);
> > > +			continue;
> > > +		}
> > > +
> > > +		nr_reclaimed += shrink_page_list(&node_page_list,
> > > +						NODE_DATA(nid),
> > > +						&sc, 0,
> > > +						&dummy_stat, false);
> >
> > Per-node batching, in fact. Other than that nothing really jumped out
> > at me, except for the shared page cache side channel timing aspect not
> > being considered AFAICS. To be more specific: pushing out shared page
> > cache is possible even now, but this interface gives a much easier tool
> > to evict shared state and perform all sorts of timing attacks. Unless I
> > am missing something, we should be doing something similar to mincore
> > and ignore shared pages without writable access, or at least document
> > why we do not care.
>
> I'm not sure I understand the side channel attack. As you mentioned,
> even without this syscall,
> 1. they can already do that simply by memory hogging

This is much harder to turn into a practical attack because the reclaim
logic is not fully under the attacker's control. Having a direct tool to
reclaim memory opens the door to measuring the other consumers of that
memory and to all sorts of side channel attacks.
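Concretely, I would expect a gate along the lines of the recent mincore
fix - only a sketch, the helper name and the exact permission check here
are made up:

	static bool can_do_pageout(struct vm_area_struct *vma)
	{
		/* Private anonymous memory is not observable by others. */
		if (vma_is_anonymous(vma))
			return true;
		if (!vma->vm_file)
			return false;
		/*
		 * Paging out a shared file page is observable by every
		 * other process mapping it, so only allow it when the
		 * caller could legitimately write the file anyway (same
		 * reasoning as the mincore side channel fix).
		 */
		return inode_owner_or_capable(file_inode(vma->vm_file)) ||
			inode_permission(file_inode(vma->vm_file),
					 MAY_WRITE) == 0;
	}

and then bail out of the pageout path early when it returns false.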

> 2. If we need to fix MADV_PAGEOUT, does that mean we need to fix
> MADV_DONTNEED, too?

Nope, because MADV_DONTNEED doesn't unmap pages from other processes; it
only drops the caller's own mapping.
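To illustrate from the userspace side (hypothetical snippet; assume addr
maps a file that another process has mapped as well):

	/* Zaps only the caller's page tables; the page itself stays
	 * in the page cache and other mappers are unaffected. */
	madvise(addr, len, MADV_DONTNEED);

	/* The proposed interface reclaims the page itself, so every
	 * other process mapping it will refault - and that refault
	 * latency is what a side channel can measure. */
	madvise(addr, len, MADV_PAGEOUT);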
--
Michal Hocko
SUSE Labs