Re: [linus:master] [mm] cacded5e42: aim9.brk_test.ops_per_sec -5.0% regression

From: Lorenzo Stoakes
Date: Wed Oct 09 2024 - 05:54:13 EST


On Wed, Oct 09, 2024 at 02:44:30PM +0800, Oliver Sang wrote:
> hi, Lorenzo,
>
> On Tue, Oct 08, 2024 at 09:44:24AM +0100, Lorenzo Stoakes wrote:
> > On Tue, Oct 08, 2024 at 04:31:59PM +0800, Oliver Sang wrote:
> > > hi, Lorenzo Stoakes,
> > >
> > > sorry for the late reply, we were on holiday last week.
> > >
> > > On Mon, Sep 30, 2024 at 09:21:52AM +0100, Lorenzo Stoakes wrote:
> > > > On Mon, Sep 30, 2024 at 10:21:27AM GMT, kernel test robot wrote:
> > > > >
> > > > >
> > > > > Hello,
> > > > >
> > > > > kernel test robot noticed a -5.0% regression of aim9.brk_test.ops_per_sec on:
> > > > >
> > > > >
> > > > > commit: cacded5e42b9609b07b22d80c10f0076d439f7d1 ("mm: avoid using vma_merge() for new VMAs")
> > > > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> > > > >
> > > > > testcase: aim9
> > > > > test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 64G memory
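
For anyone following the thread: as far as I understand it, aim9's brk_test is
essentially a tight loop that grows and shrinks the heap via brk()/sbrk() and
reports how many such operations complete per second. A rough, illustrative
sketch of that kind of loop (my own, not the AIM9 source; names and constants
are arbitrary) would be something like:

/*
 * Illustrative only: repeatedly extend and shrink the program break and
 * count brk() calls per second. Each extension exercises the kernel's
 * heap-VMA expansion/merge path, which is presumably why this metric is
 * sensitive to the vma_merge() change.
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define STEP   (64 * 1024)	/* grow/shrink granularity, arbitrary */
#define CYCLES 100000

int main(void)
{
	void *base = sbrk(0);		/* current program break */
	struct timespec t0, t1;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < CYCLES; i++) {
		if (brk((char *)base + STEP))	/* extend the heap */
			return 1;
		if (brk(base))			/* shrink it back */
			return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f brk ops/sec\n", 2.0 * CYCLES / secs);
	return 0;
}

So any extra work on the brk() path shows up directly in the ops/sec figure
(aim9's exact accounting may of course differ).
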
> > > >
> > > > Hm, quite an old microarchitecture, no?
> > > >
> > > > Would it be possible to try this on a range of uarchs, especially more
> > > > recent ones, with some repeated runs to rule out statistical noise? Much
> > > > appreciated!
> > >
> > > we ran this test on the platforms below and observed a similar
> > > regression. one thing I want to mention is that for performance tests we
> > > run each commit at least 6 times. for this aim9 test the data is quite
> > > stable, so there is no %stddev value in our table; we don't show this
> > > value if it's <2%.
> >
> > Thanks, though going forward I'd suggest adding the number even if it's
> > <2%, or at least highlighting that it was omitted; I found that quite
> > misleading.
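
Just to spell out the reporting rule as I understand it (illustrative only,
not the 0-day tooling; the sample numbers are made up): compute the relative
standard deviation across the repeated runs of a commit and print %stddev
only when it reaches 2%, i.e. roughly:

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical ops_per_sec samples from 6 runs of one commit */
	double runs[] = { 100210, 100180, 100305, 100150, 100240, 100190 };
	int n = sizeof(runs) / sizeof(runs[0]);
	double sum = 0, var = 0, mean, pct_stddev;
	int i;

	for (i = 0; i < n; i++)
		sum += runs[i];
	mean = sum / n;

	for (i = 0; i < n; i++)
		var += (runs[i] - mean) * (runs[i] - mean);
	pct_stddev = 100.0 * sqrt(var / (n - 1)) / mean;

	if (pct_stddev >= 2.0)
		printf("%%stddev: %.1f%%\n", pct_stddev);
	else
		printf("(%%stddev %.2f%% below threshold, column omitted)\n",
		       pct_stddev);
	return 0;
}

The snag is that a genuinely stable result and a missing measurement then look
identical in the table, which is what tripped me up.
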
> >
> > Also, might I suggest reporting the most recent uarch first? This
> > appearing to be Ivy Bridge-only delayed my response.
>
> we have 80+ testsuites but a relatively small machine pool (due to resource
> constraints); the recent-uarch machines are mostly used for the more popular
> testsuites, or those where, in our experience, regressions are easier to catch.
>
> unfortunately, aim9 is only allotted to Ivy Bridge as a regular test now.
> the data on other platforms I shared with you in the last thread is from
> manual runs. sorry if this causes any inconvenience.

Understood - sorry, I realise you are providing this service for free, and to
reiterate: I'm hugely grateful, and glad you helped spot this problem, which I
will now address! :)

>
> > (not to sound
> > ungrateful for the report, which is very useful, but it'd be great if you
> > guys could test in -next, as this change sat there for weeks with no
> > apparent issues).
>
> we don't test a single tree; instead, we merge a lot of trees together into
> a so-called hourly kernel and test on that. mainline is stable and is the
> merge base for lots of our hourly kernels, so changes there have a good
> chance of being tested and bisected successfully. -next can also be the
> merge base sometimes, but since it's rebased frequently it's hard for us to
> finish testing and bisecting in time; sometimes we can't even use it as a
> merge base because of various issues. it's really a pity that we miss
> issues on -next ...

Sure, and I guess from my perspective it's easy to underestimate the
combinatorial explosion involved in that.

It'd obviously be nice-to-have for you to be able to take -next into account,
but I absolutely get it! :)

>
> >
> > I will look into this now. If I provide patches, would you be able to test
> > them using the same boxes? It'd be much appreciated!
>
> sure! it would be our pleasure!

Perfect, thanks very much!

>
> >
> > Thanks, Lorenzo
> >