Re: [RFC 2/2] mm: alloc/free depth based PCP high auto-tuning
From: Michal Hocko
Date: Fri Jul 14 2023 - 07:42:33 EST
On Wed 12-07-23 10:05:26, Mel Gorman wrote:
> On Tue, Jul 11, 2023 at 01:19:46PM +0200, Michal Hocko wrote:
> > On Mon 10-07-23 14:53:25, Huang Ying wrote:
> > > To auto-tune PCP high for each CPU automatically, an
> > > allocation/freeing depth based PCP high auto-tuning algorithm is
> > > implemented in this patch.
> > >
> > > The basic idea behind the algorithm is to detect the repetitive
> > > allocation and freeing pattern with a short enough period (about 1
> > > second). The period needs to be short to respond to allocation and
> > > freeing pattern changes quickly and control the memory wasted by
> > > unnecessary caching.
> >
> > 1s is an eternity from the allocation POV. Is time based sampling
> > really a good choice? I would have expected a natural allocation/freeing
> > feedback mechanism. I.e. double the batch size when the batch is
> > consumed and needs to be refilled, and shrink it under memory
> > pressure (GFP_NOWAIT allocation fails) or when the surplus grows too
> > high over batch (e.g. twice as much). Have you considered something as
> > simple as that?
> > Quite honestly I am not sure a time based approach is a good choice
> > because memory consumption tends to be quite bulky (e.g. application
> > starts or workload transitions based on requests).
> >
>
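To make the feedback idea above a bit more concrete, this is roughly
the loop I have in mind (a purely illustrative userspace model, all
names and bounds are made up, this is not actual pcp code):

struct pcp_model {
	unsigned int batch;	/* current refill/flush batch size */
	unsigned int count;	/* pages currently cached */
	unsigned int min_batch;
	unsigned int max_batch;
};

/* The whole batch was consumed and had to be refilled: grow the batch. */
static void pcp_refill_event(struct pcp_model *pcp)
{
	if (pcp->batch < pcp->max_batch)
		pcp->batch *= 2;
}

/* Memory pressure (e.g. a GFP_NOWAIT allocation failed): shrink again. */
static void pcp_pressure_event(struct pcp_model *pcp)
{
	if (pcp->batch > pcp->min_batch)
		pcp->batch /= 2;
}

/* Pages returned to the cache; shrink when the surplus grows past 2x batch. */
static void pcp_free_event(struct pcp_model *pcp, unsigned int nr_freed)
{
	pcp->count += nr_freed;
	if (pcp->count > 2 * pcp->batch)
		pcp_pressure_event(pcp);
}

Everything there is driven by allocator and free events rather than by
a timer.
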
> I tend to agree. Tuning based on the recent allocation pattern without frees
> would make more sense and also be symmetric with how free_factor works. I
> suspect that the time-based approach may be heavily orientated around the
> will-it-scale benchmark. While I only glanced at this, a few things jumped
> out:
>
> 1. Time-based heuristics are not ideal. congestion_wait() and
> friends were an obvious case where time-based heuristics fell apart even
> before the event they waited on was removed. For congestion, it happened to
> work for slow storage for a while but that was about it. For allocation
> stream detection, it has a similar problem. If a process is allocating
> heavily then fine; if it allocates in bursts of less than a second, spaced
> more than a second apart, it will not adapt. While I do not think it is
> explicitly mentioned anywhere, my understanding was that heuristics like
> this within mm/ should be driven by explicit events as much as possible,
> not by time.
Agreed. I would also like to point out that it is important to identify
which events we should actually care about. Remember that the primary
motivation of the tuning is to reduce the lock contention. That being
said, it is less of a problem to have a streaming or bursty demand for
memory if that doesn't really cause the said contention, right? So any
auto-tuning should take that into account as well and not inflate the
batch in the absence of contention. That of course means that a solely
deallocation based monitoring would not be sufficient.
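
Just to illustrate what I mean by contention driven (again only a
made-up sketch, the contention signal and all names are hypothetical,
e.g. think of a trylock failure on the zone lock or a contention
counter):

#include <stdbool.h>

struct pcp_model {
	unsigned int batch;
	unsigned int high;
	unsigned int min_batch;
	unsigned int max_batch;
};

/*
 * Called from the slow path whenever the pcp list has to talk to the
 * zone (refill or flush). zone_lock_contended is the hypothetical
 * signal that the zone lock was actually contended for this operation.
 */
static void pcp_tune(struct pcp_model *pcp, bool zone_lock_contended)
{
	if (zone_lock_contended) {
		/* Contention is the only reason to cache more. */
		if (pcp->batch < pcp->max_batch)
			pcp->batch *= 2;
	} else {
		/* No contention, no reason to keep the cache inflated. */
		if (pcp->batch > pcp->min_batch)
			pcp->batch /= 2;
	}
	/* keep high in step with batch */
	pcp->high = 4 * pcp->batch;
}

A heavy but completely uncontended allocation stream then never
inflates the batch in the first place.
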
--
Michal Hocko
SUSE Labs