Re: [PATCH] mm: avoid blocking lock_page() in kcompactd

From: Michal Hocko
Date: Thu Feb 13 2020 - 02:48:53 EST


On Tue 28-01-20 12:39:55, Michal Hocko wrote:
> On Tue 28-01-20 02:48:57, Matthew Wilcox wrote:
> > On Tue, Jan 28, 2020 at 10:13:52AM +0100, Michal Hocko wrote:
> > > On Tue 28-01-20 00:30:44, Matthew Wilcox wrote:
> > > > On Tue, Jan 28, 2020 at 09:17:12AM +0100, Michal Hocko wrote:
> > > > > On Mon 27-01-20 11:06:53, Matthew Wilcox wrote:
> > > > > > On Mon, Jan 27, 2020 at 04:00:24PM +0100, Michal Hocko wrote:
> > > > > > > On Sun 26-01-20 15:39:35, Matthew Wilcox wrote:
> > > > > > > > On Sun, Jan 26, 2020 at 11:53:55AM -0800, Cong Wang wrote:
> > > > > > > > > I suspect the process gets stuck in the retry loop in try_charge(), as
> > > > > > > > > the _shortest_ stacktrace of the perf samples indicated:
> > > > > > > > >
> > > > > > > > > cycles:ppp:
> > > > > > > > > ffffffffa72963db mem_cgroup_iter
> > > > > > > > > ffffffffa72980ca mem_cgroup_oom_unlock
> > > > > > > > > ffffffffa7298c15 try_charge
> > > > > > > > > ffffffffa729a886 mem_cgroup_try_charge
> > > > > > > > > ffffffffa720ec03 __add_to_page_cache_locked
> > > > > > > > > ffffffffa720ee3a add_to_page_cache_lru
> > > > > > > > > ffffffffa7312ddb iomap_readpages_actor
> > > > > > > > > ffffffffa73133f7 iomap_apply
> > > > > > > > > ffffffffa73135da iomap_readpages
> > > > > > > > > ffffffffa722062e read_pages
> > > > > > > > > ffffffffa7220b3f __do_page_cache_readahead
> > > > > > > > > ffffffffa7210554 filemap_fault
> > > > > > > > > ffffffffc039e41f __xfs_filemap_fault
> > > > > > > > > ffffffffa724f5e7 __do_fault
> > > > > > > > > ffffffffa724c5f2 __handle_mm_fault
> > > > > > > > > ffffffffa724cbc6 handle_mm_fault
> > > > > > > > > ffffffffa70a313e __do_page_fault
> > > > > > > > > ffffffffa7a00dfe page_fault
> > > > >
> > > > > I am not deeply familiar with the readahead code. But is there really a
> > > > > high order allocation (order > 1) that would trigger compaction in the
> > > > > phase when pages are locked?
> > > >
> > > > Thanks to sl*b, yes:
> > > >
> > > > radix_tree_node 80890 102536 584 28 4 : tunables 0 0 0 : slabdata 3662 3662 0
> > > >
> > > > so it's allocating 4 pages for a 576-byte node.
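> > > >
> > > > Sanity-checking the numbers: 28 objects * 584 bytes = 16352 bytes,
> > > > which only fits in a 4-page (order-2) slab, so a plain page cache
> > > > insertion can issue an order > 1 allocation for the radix tree node.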
> > >
> > > I am not really sure that we do sync migration for costly orders.
> >
> > Doesn't the stack trace above indicate that we're doing migration as
> > the result of an allocation in add_to_page_cache_lru()?
>
> Which stack trace do you refer to? The one above doesn't show much
> beyond mem_cgroup_iter, and the same goes for the others in this email
> thread. I do not really remember any stack with lock_page in the trace.
> >
> > > > > Btw. compaction refuses to consider file-backed pages when __GFP_FS
> > > > > is not present, AFAIR.
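> > > > >
> > > > > IIRC that is this check in isolate_migratepages_block() (quoting
> > > > > from memory, so the exact form may differ):
> > > > >
> > > > > 	/*
> > > > > 	 * Only allow to migrate anonymous pages in GFP_NOFS context
> > > > > 	 * because those do not depend on fs locks.
> > > > > 	 */
> > > > > 	if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
> > > > > 		goto isolate_fail;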
> > > >
> > > > Ah, that would save us.
> > >
> > > So the NOFS comes from the mapping GFP mask, right? That is something I
> > > was hoping to get rid of eventually :/ Anyway, it would be better to
> > > have an explicit NOFS with a comment explaining why we need it, if for
> > > nothing else than for documentation.
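> > >
> > > FWIW, IIRC the readahead path picks up the mapping mask via this
> > > helper in include/linux/pagemap.h:
> > >
> > > 	#define readahead_gfp_mask(x) \
> > > 		(mapping_gfp_mask(x) | __GFP_NORETRY | __GFP_NOWARN)
> > >
> > > so the NOFS protection is only there when the filesystem clears
> > > __GFP_FS in the mapping mask.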
> >
> > I'd also like to see the mapping GFP mask go away, but instead of an
> > explicit GFP_NOFS here, I'd rather see the memalloc_nofs API used.
>
> Completely agree here. The proper place for the scope would be the
> place where pages are locked, along with an explanation that other
> allocations down the line might invoke sync migration, which would be
> dangerous. Having that explicitly documented is clearly an
> improvement.

Can we pursue this, please? An explicit NOFS scope annotation with a
comment explaining that compaction could otherwise block on pages locked
for readahead would be a great start.
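
Something along these lines, a completely untested sketch using the
read_pages() loop just to illustrate where the scope and the comment
would sit, is what I have in mind:

	unsigned int nofs;

	/*
	 * Pages in this batch stay locked until the IO completes and
	 * the allocations below (radix tree nodes, memcg charges, ...)
	 * may trigger compaction.  Sync compaction could block on the
	 * very pages we have locked, so keep the whole batch inside a
	 * NOFS scope.
	 */
	nofs = memalloc_nofs_save();
	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
		struct page *page = lru_to_page(pages);

		list_del(&page->lru);
		if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
			mapping->a_ops->readpage(filp, page);
		put_page(page);
	}
	memalloc_nofs_restore(nofs);
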
--
Michal Hocko
SUSE Labs