Re: [PATCH 3/3] mm: adjust vm_committed_as_batch according to vm overcommit policy
From: Feng Tang
Date: Fri May 15 2020 - 04:02:17 EST
Hi Michal,
Thanks for the thorough review of these 3 patches!
On Fri, May 15, 2020 at 03:41:25PM +0800, Michal Hocko wrote:
> On Fri 08-05-20 15:25:17, Feng Tang wrote:
> > When checking a performance change for the will-it-scale scalability
> > mmap test [1], we found very high contention on the spinlock of the
> > percpu counter 'vm_committed_as':
> >
> > 94.14% 0.35% [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> > 48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
> > 45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;
> >
> > Actually, this heavy lock contention is not always necessary:
> > 'vm_committed_as' only needs to be very precise when the strict
> > OVERCOMMIT_NEVER policy is set, which requires a rather small batch
> > number for the percpu counter.
> >
> > So lift the batch number to 16X for OVERCOMMIT_ALWAYS and
> > OVERCOMMIT_GUESS policies, and add a sysctl handler to adjust it
> > when the policy is reconfigured.
>
> Increasing the batch size for weaker overcommit modes makes sense. But
> your patch is also tuning OVERCOMMIT_NEVER without any explanation why
> that is still "small enough to be precise".
Actually, it keeps the batch algorithm for "OVERCOMMIT_NEVER", and
only changes the other 2 policies; I should have made that clear in
the commit log.
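For reference, the sysctl handler mentioned in the commit log only
needs to re-run mm_compute_batch() after the policy value is stored.
A minimal sketch (the handler name and the proc_dointvec variant are
illustrative here, not the literal patch):

int overcommit_policy_handler(struct ctl_table *table, int write,
			      void __user *buffer, size_t *lenp,
			      loff_t *ppos)
{
	int ret;

	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	/* re-tune the percpu counter batch on a successful write */
	if (ret == 0 && write)
		mm_compute_batch();

	return ret;
}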
> > Benchmarking with the same testcase in [1] shows a 53% improvement
> > on an 8C/16T desktop, and 2097% (20X) on a 4S/72C/144T server. Some
> > platforms show no change because the test case's mmap size is bigger
> > than the computed batch number, though the patch will help
> > mmap/munmap generally.
> >
> > [1] https://lkml.org/lkml/2020/3/5/57
>
> Please do not use lkml.org links in the changelog. Use
> http://lkml.kernel.org/r/$msg instead.
Thanks, will keep that in mind for this and future patches.
> > Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> > s32 vm_committed_as_batch = 32;
> >
> > -static void __meminit mm_compute_batch(void)
> > +void mm_compute_batch(void)
> > {
> > u64 memsized_batch;
> > s32 nr = num_present_cpus();
> > s32 batch = max_t(s32, nr*2, 32);
> > -
> > - /* batch size set to 0.4% of (total memory/#cpus), or max int32 */
> > - memsized_batch = min_t(u64, (totalram_pages()/nr)/256, 0x7fffffff);
> > + unsigned long ram_pages = totalram_pages();
> > +
> > + /*
> > + * For the OVERCOMMIT_NEVER policy, set the batch size to 0.4%
> > + * of (total memory/#cpus), and lift it to 6.25% for the other
> > + * policies to ease the possible lock contention on the
> > + * percpu_counter vm_committed_as, while the max limit is INT_MAX
> > + */
> > + if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
> > + memsized_batch = min_t(u64, ram_pages/nr/256, INT_MAX);
> > + else
> > + memsized_batch = min_t(u64, ram_pages/nr/16, INT_MAX);
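To put rough numbers on the computation above (a hypothetical
4KB-page machine with 144 CPUs and 192GB RAM, purely for
illustration):

	ram_pages = 192GB / 4KB        = 50331648 pages
	per-CPU share = ram_pages/144  = ~349525 pages
	OVERCOMMIT_NEVER (/256): batch = ~1365 pages  (~5.3MB)
	other policies   (/16) : batch = ~21845 pages (~85MB)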
Also, as you mentioned there are real-world workloads with large mmap
sizes and heavy multi-threading, can we lift it even further, say:
memsized_batch = min_t(u64, ram_pages/nr/4, INT_MAX)
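Though if I read percpu_counter right, the trade-off is accuracy:
each CPU can accumulate up to 'batch' pages locally before folding
them into the global count, so the worst-case deviation of
vm_committed_as is roughly nr_cpus * batch, i.e. ~6.25% of total RAM
with /16 and ~25% with /4.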
Thanks,
Feng