Re: [PATCH 2/3] locking/qspinlock: Introduce CNA into the slow path of qspinlock

From: Alex Kogan
Date: Fri Feb 01 2019 - 16:26:39 EST



> On Jan 31, 2019, at 12:38 PM, Waiman Long <longman@xxxxxxxxxx> wrote:
>
> On 01/30/2019 10:01 PM, Alex Kogan wrote:
>> In CNA, spinning threads are organized in two queues, a main queue for
>> threads running on the same socket as the current lock holder, and a
>> secondary queue for threads running on other sockets. For details,
>> see https://arxiv.org/abs/1810.05600.
>>
>> Note that this variant of CNA may introduce starvation by continuously
>> passing the lock to threads running on the same socket. This issue
>> will be addressed later in the series.
>>
>> Signed-off-by: Alex Kogan <alex.kogan@xxxxxxxxxx>
>> Reviewed-by: Steve Sistare <steven.sistare@xxxxxxxxxx>
>
> Just wondering if you have tried including the PARAVIRT_SPINLOCKS option to see
> if that patch may screw up the PV qspinlock code.
No, I haven't yet.
The idea was to make it work for non-PV systems first, and then extend to PV.
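To illustrate the two-queue idea from the quoted description above, here is a rough
userspace sketch (names like cna_node and cna_find_successor are illustrative only,
not the patch code): at unlock time the holder scans the main queue for a waiter on
its own socket and splices the waiters it skips onto the secondary queue.

#include <stddef.h>

struct cna_node {
	struct cna_node *next;	/* main MCS queue link */
	int socket;		/* NUMA socket the waiter runs on */
};

/*
 * Scan the main queue starting at 'head' for the first waiter on
 * 'my_socket'.  Skipped waiters are appended to the secondary queue
 * (*sec_head / *sec_tail).  Returns the chosen successor, or NULL if
 * no same-socket waiter exists and the secondary queue must be used.
 */
static struct cna_node *
cna_find_successor(struct cna_node *head, int my_socket,
		   struct cna_node **sec_head, struct cna_node **sec_tail)
{
	struct cna_node *cur = head;

	while (cur) {
		struct cna_node *next = cur->next;

		if (cur->socket == my_socket)
			return cur;	/* pass the lock within the socket */

		/* Move the skipped waiter to the secondary queue tail. */
		cur->next = NULL;
		if (*sec_tail)
			(*sec_tail)->next = cur;
		else
			*sec_head = cur;
		*sec_tail = cur;

		cur = next;
	}
	return NULL;
}

The starvation concern mentioned in the patch description comes from this scan always
preferring same-socket waiters; the later patch in the series bounds how long the
secondary queue can be bypassed.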

>
> Anyway, I do believe your claim that NUMA-aware qspinlock is good for
> large systems with many nodes. However, all this extra code is pure
> overhead for small systems that have a single node/socket, for instance.
>
> I would support doing something similar to what was done to support
> PV qspinlock. IOW, a separate slowpath function that can be patched in
> to become the default, depending on the system being run on or a kernel
> boot option setting.
>
> I would like to keep the core slowpath function simple and easy to
> understand. So most of the CNA code should be encapsulated into some
> helper functions and put into a separate file.
Sounds good.
I think it should be pretty straightforward to encapsulate the CNA code and do what you suggest.
We will look into that.
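As a strawman for the boot-time selection, something along these lines (all names
hypothetical, including the "numa_spinlock=" parameter; the real mechanism would
presumably reuse the paravirt patching infrastructure like PV qspinlock does):

#include <string.h>

struct qspinlock;	/* opaque here; the real type lives in the kernel */

static void native_slowpath(struct qspinlock *lock) { (void)lock; /* generic MCS path */ }
static void cna_slowpath(struct qspinlock *lock)    { (void)lock; /* CNA helpers, kept in their own file */ }

/* Selected once during early boot, then used on every contended acquire. */
static void (*queued_slowpath)(struct qspinlock *) = native_slowpath;

/* Stand-in for a "numa_spinlock=on" boot parameter (hypothetical name). */
static void numa_spinlock_setup(const char *arg, int nr_sockets)
{
	/* Single-socket systems keep the plain slow path: no CNA overhead. */
	if (nr_sockets > 1 && strcmp(arg, "on") == 0)
		queued_slowpath = cna_slowpath;
}

That way single-node machines never pay for the CNA bookkeeping, which should
address the overhead concern above.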

Thanks,
-- Alex

>
> Thanks,
> Longman
>