Re: [PATCH v9 0/5] Add NUMA-awareness to qspinlock
From: Paul E. McKenney
Date: Sun Jan 26 2020 - 17:43:07 EST
On Sun, Jan 26, 2020 at 07:35:35AM -0800, Paul E. McKenney wrote:
> On Sat, Jan 25, 2020 at 02:41:39PM -0500, Waiman Long wrote:
> > On 1/24/20 11:58 PM, Paul E. McKenney wrote:
> > > On Fri, Jan 24, 2020 at 09:17:05PM -0500, Waiman Long wrote:
> > >> On 1/24/20 8:59 PM, Waiman Long wrote:
> > >>>> You called it! I will play with QEMU's -numa argument to see if I can get
> > >>>> CNA to run for me. Please accept my apologies for the false alarm.
> > >>>>
> > >>>> Thanx, Paul
> > >>>>
> > >>> CNA is not currently supported in a VM guest simply because the NUMA
> > >>> information is not reliable. You will have to run it on bare metal to
> > >>> test it. Sorry for that.
> > >> Correction. There is a command line option to force CNA lock to be used
> > >> in a VM. Use the "numa_spinlock=on" boot command line parameter.
> > > As I understand it, I need to use a series of -numa arguments to qemu
> > > combined with the numa_spinlock=on (or =1) on the kernel command line.
> > > If the kernel thinks that there is only one NUMA node, it appears to
> > > avoid doing CNA.
> > >
> > > Correct?
> > >
> > > Thanx, Paul
> > >
> > In auto-detection mode (the default), CNA will only be turned on when
> > paravirt qspinlock is not enabled first and there are at least 2 numa
> > nodes. The "numa_spinlock=on" option will force it on even when these
> > conditions are not met.
>
> Hmmm...
>
> Here is my kernel command line taken from the console log:
>
> console=ttyS0 locktorture.onoff_interval=0 numa_spinlock=on locktorture.stat_interval=15 locktorture.shutdown_secs=1800 locktorture.verbose=1
>
> Yet the string "Enabling CNA spinlock" does not appear.
>
> Ah, idiot here needs to enable CONFIG_NUMA_AWARE_SPINLOCKS in his build.
> Trying again with --kconfig "CONFIG_NUMA_AWARE_SPINLOCKS=y"...
And after fixing that, plus adding the other three Kconfig options required
to enable this, I really do see "Enabling CNA spinlock" in the console log.
Yay!
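
In other words, my understanding of the auto-detection from Waiman's
description is roughly the following (just a sketch with made-up names,
not the actual code from the patch series):

	/* Sketch of the CNA enable decision as I understand it. */
	static void maybe_enable_cna(void)	/* hypothetical name */
	{
		/* numa_spinlock=on forces CNA regardless of the checks below. */
		if (numa_spinlock_forced_on) {
			enable_cna_slowpath();
			return;
		}

		/*
		 * Otherwise, need native (non-paravirt) qspinlock and more
		 * than one NUMA node.
		 */
		if (!paravirt_qspinlock_in_use && nr_numa_nodes > 1)
			enable_cna_slowpath();
	}

Please let me know if I have that backwards.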
At the end of the 30-minute locktorture exclusive-lock run, I see this:
Writes: Total: 572176565 Max/Min: 54167704/10878216 ??? Fail: 0
That Max/Min spread is about five to one. Is this expected behavior, given a
single NUMA node on a single-socket system with 12 hardware threads?
I will try reader-writer lock next.
Again, should I be using qemu's -numa command-line option to create nodes?
If so, what would be a sane configuration given 12 CPUs and 512MB of
memory for the VM? If not, what is a good way to exercise CNA's NUMA
capabilities within a guest OS?
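For example, would something like the following be a sane way to split this
guest into two nodes?  (Untested guess on my part, and I understand that
newer QEMU might want memdev= rather than mem= here.)

	qemu-system-x86_64 ... -smp 12 -m 512M \
		-numa node,cpus=0-5,mem=256M \
		-numa node,cpus=6-11,mem=256M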
Thanx, Paul