Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock
From: Waiman Long
Date: Wed Apr 03 2019 - 11:48:09 EST
On 04/03/2019 11:39 AM, Alex Kogan wrote:
> Peter, Longman, many thanks for your detailed comments!
> A few follow-up questions are inlined below.
>> On Apr 2, 2019, at 5:43 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> On Mon, Apr 01, 2019 at 10:36:19AM -0400, Waiman Long wrote:
>>> On 03/29/2019 11:20 AM, Alex Kogan wrote:
>>>> +config NUMA_AWARE_SPINLOCKS
>>>> + bool "Numa-aware spinlocks"
>>>> + depends on NUMA
>>>> + default y
>>>> + help
>>>> + Introduce NUMA (Non Uniform Memory Access) awareness into
>>>> + the slow path of spinlocks.
>>>> + The kernel will try to keep the lock on the same node,
>>>> + thus reducing the number of remote cache misses, while
>>>> + trading some of the short term fairness for better performance.
>>>> + Say N if you want absolute first come first serve fairness.
>>> The patch that I am looking for is to have a separate
>>> numa_queued_spinlock_slowpath() that coexists with
>>> native_queued_spinlock_slowpath() and
>>> paravirt_queued_spinlock_slowpath(). At boot time, we select the most
>>> appropriate one for the system at hand.
> Is this how this selection works today for paravirt?
> I see a PARAVIRT_SPINLOCKS config option, but IIUC you are talking about a different mechanism here.
> Can you, please, elaborate or give me a link to a page that explains that?
>> Agreed; and until we have static_call, I think we can abuse the paravirt
>> stuff for this.
>> By the time we patch the paravirt stuff:
>> we should already have enumerated the NODE topology and so nr_node_ids()
>> should be set.
>> So if we frob pv_ops.lock.queued_spin_lock_slowpath to
>> numa_queued_spin_lock_slowpath before that, it should all get patched
>> just right.
>> That of course means the whole NUMA_AWARE_SPINLOCKS thing depends on
>> PARAVIRT_SPINLOCK, which is a bit awkward...
> Just to mention here: the patch so far does not address paravirt, but our goal is to add that support once all the concerns about the native version are addressed.
> So we will end up with four variants of queued_spinlock_slowpath(), one for each combination of native/paravirt and NUMA/non-NUMA.
> Or perhaps we do not need a NUMA/paravirt variant?
I don't expect we need a NUMA variant for paravirt. First of all, the
NUMA information available in a VM guest is unreliable, so let's just
not go there. What we are looking for is to make sure your patch won't
break the paravirt code, so testing the paravirt qspinlock should be
part of your testing matrix.