Re: [RESEND][PATCH v2] mm: don't call lru draining in the nested lru_cache_disable

From: Michal Hocko
Date: Mon Jan 24 2022 - 04:57:45 EST


On Fri 21-01-22 13:56:31, Minchan Kim wrote:
> On Fri, Jan 21, 2022 at 10:59:32AM +0100, Michal Hocko wrote:
> > On Thu 20-01-22 13:07:55, Minchan Kim wrote:
> > > On Thu, Jan 20, 2022 at 09:24:22AM +0100, Michal Hocko wrote:
> > > > On Wed 19-01-22 20:25:54, Minchan Kim wrote:
> > > > > On Wed, Jan 19, 2022 at 10:20:22AM +0100, Michal Hocko wrote:
> > > > [...]
> > > > > > What prevents you from calling lru_cache_{disable,enable} this way
> > > > > > with the existing implementation? AFAICS calls can be nested just fine.
> > > > > > Or am I missing something?
> > > > >
> > > > > It just issues more IPIs since we drain the lru cache at both
> > > > > the upper layer and the lower layer. That's what I'd like to
> > > > > avoid in this patch: disable the lru cache just once for the
> > > > > entire allocation path.
> > > >
> > > > I do not follow. Once you call lru_cache_disable at the higher level
> > > > then no new pages are going to be added to the pcp caches. At the same
> > > > time existing caches are flushed so the inner lru_cache_disable will not
> > > > trigger any new IPIs.
> > >
> > > lru_cache_disable calls __lru_add_drain_all with force_all_cpus
> > > unconditionally, so it keeps issuing the IPIs.
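
For context, a condensed sketch of the behaviour being described
(paraphrased, not the exact mm/swap.c source; details may differ):

        /*
         * Sketch: the drain is forced on every call, so a nested
         * lru_cache_disable() schedules the per-CPU draining work (and
         * hence the IPIs) again even though the caches are already
         * disabled.
         */
        void lru_cache_disable(void)
        {
                atomic_inc(&lru_disable_count);
        #ifdef CONFIG_SMP
                /* force_all_cpus == true unconditionally */
                __lru_add_drain_all(true);
        #else
                lru_add_drain();
        #endif
        }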
> >
> > OK, this is something I have missed. Why can't we remove the
> > force_all mode for lru_disable_count > 0 when there are no pcp
> > caches populated?
>
> We can't guarantee that the IPI has finished with only the atomic counter.
>
> CPU 0                                   CPU 1
>
> lru_cache_disable                       lru_cache_disable
>   ret = atomic_inc_return
>                                           ret = atomic_inc_return
>   lru_add_drain_all(ret == 1);            lru_add_drain_all(ret == 1)
>     IPI ongoing                             skip IPI
>                                           alloc_contig_range
>                                             fail
>                                           ..
>                                           ..
>
>     IPI done
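
If I read the scenario right, it assumes a change along these lines
(hypothetical sketch only, not an actual patch):

        /*
         * Hypothetical variant taken from the scenario above: only the
         * first disabler forces the drain.  The diagram argues this is
         * not sufficient: CPU 1 sees ret != 1, skips the IPI, and can
         * reach alloc_contig_range() while CPU 0's draining work is
         * still in flight.
         */
        void lru_cache_disable(void)
        {
                int ret = atomic_inc_return(&lru_disable_count);

        #ifdef CONFIG_SMP
                __lru_add_drain_all(ret == 1);
        #else
                lru_add_drain();
        #endif
        }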

But __lru_add_drain_all holds a local mutex while the IPI flushing is
done, so a racing lru_cache_disable would block until
flush_work(&per_cpu(lru_add_drain_work, cpu)) completes and all IPIs are
handled. Or am I missing something?
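
For reference, the relevant structure is roughly the following
(condensed sketch; the drain-generation handling is left out and
cpu_needs_drain() below merely stands in for the open-coded
pagevec_count() checks):

        static void __lru_add_drain_all(bool force_all_cpus)
        {
                static DEFINE_MUTEX(lock);
                static struct cpumask has_work;
                int cpu;

                mutex_lock(&lock);      /* serializes racing drainers */
                cpumask_clear(&has_work);

                for_each_online_cpu(cpu) {
                        struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

                        if (force_all_cpus || cpu_needs_drain(cpu)) {
                                INIT_WORK(work, lru_add_drain_per_cpu);
                                queue_work_on(cpu, mm_percpu_wq, work);
                                __cpumask_set_cpu(cpu, &has_work);
                        }
                }

                /* nobody returns before all queued draining work has run */
                for_each_cpu(cpu, &has_work)
                        flush_work(&per_cpu(lru_add_drain_work, cpu));

                mutex_unlock(&lock);
        }

The mutex is held across the flush_work() loop, which is what makes the
racing caller wait.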

--
Michal Hocko
SUSE Labs