Re: Improve spinlock performance by moving work to one core
From: Ling Ma
Date: Mon Nov 30 2015 - 01:17:57 EST
Any comments? Is the patch acceptable?
Thanks
Ling
2015-11-26 17:00 GMT+08:00 Ling Ma <ling.ma.program@xxxxxxxxx>:
> Running thread.c on a clean 4.3.0-rc4 kernel, perf top -G likewise shows
> that cache_flusharray and cache_alloc_refill together spend 25.6% of
> their time in queued_spin_lock_slowpath. This means the compared data
> from our spinlock-test.patch is reliable.
>
> Thanks
> Ling
>
> 2015-11-26 11:49 GMT+08:00 Ling Ma <ling.ma.program@xxxxxxxxx>:
>> Hi Longman,
>>
>> All compared data is from the below operation in spinlock-test.patch:
>>
>> +#if ORG_QUEUED_SPINLOCK
>> + org_queued_spin_lock((struct qspinlock *)&pa.n->list_lock);
>> + refill_fn(&pa);
>> + org_queued_spin_unlock((struct qspinlock *)&pa.n->list_lock);
>> +#else
>> + new_spin_lock((struct nspinlock *)&pa.n->list_lock, refill_fn, &pa);
>> +#endif
>>
>> and
>>
>> +#if ORG_QUEUED_SPINLOCK
>> + org_queued_spin_lock((struct qspinlock *)&pa.n->list_lock);
>> + flusharray_fn(&pa);
>> + org_queued_spin_unlock((struct qspinlock *)&pa.n->list_lock);
>> +#else
>> + new_spin_lock((struct nspinlock *)&pa.n->list_lock, flusharray_fn, &pa);
>> +#endif
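>>
>> (The idea behind new_spin_lock() is roughly the sketch below. This is not
>> the code from spinlock-test.patch; the request structure, field names and
>> memory ordering are illustrative assumptions. The point is that a
>> contending CPU publishes its critical-section function and argument and
>> lets whichever CPU currently holds the lock run the work, so the
>> protected data stays hot on one core.)
>>
>> #include <stdatomic.h>
>> #include <stdbool.h>
>> #include <sched.h>
>>
>> struct nspin_req {
>> 	void (*fn)(void *);		/* critical-section function	*/
>> 	void *arg;			/* its argument			*/
>> 	_Atomic bool done;		/* set once the work has run	*/
>> 	struct nspin_req *next;		/* next pending request		*/
>> };
>>
>> struct nspinlock {
>> 	struct nspin_req *_Atomic head;	/* pending requests (LIFO)	*/
>> 	atomic_flag locked;		/* who is currently combining	*/
>> };
>>
>> /* While holding the lock, execute every pending request. */
>> static void nspin_combine(struct nspinlock *lock)
>> {
>> 	struct nspin_req *req =
>> 		atomic_exchange_explicit(&lock->head, NULL, memory_order_acquire);
>>
>> 	while (req) {
>> 		struct nspin_req *next = req->next;
>>
>> 		req->fn(req->arg);
>> 		atomic_store_explicit(&req->done, true, memory_order_release);
>> 		req = next;
>> 	}
>> }
>>
>> /*
>>  * Post (fn, arg) and return once some CPU - possibly this one -
>>  * has executed it under the lock.
>>  */
>> static void new_spin_lock_sketch(struct nspinlock *lock,
>> 				 void (*fn)(void *), void *arg)
>> {
>> 	struct nspin_req node = { .fn = fn, .arg = arg };
>> 	struct nspin_req *old;
>>
>> 	atomic_init(&node.done, false);
>>
>> 	/* Publish the request: push it onto the pending list. */
>> 	old = atomic_load_explicit(&lock->head, memory_order_relaxed);
>> 	do {
>> 		node.next = old;
>> 	} while (!atomic_compare_exchange_weak_explicit(&lock->head, &old, &node,
>> 							memory_order_release,
>> 							memory_order_relaxed));
>>
>> 	for (;;) {
>> 		/* A combiner may already have run our request. */
>> 		if (atomic_load_explicit(&node.done, memory_order_acquire))
>> 			return;
>>
>> 		/* Otherwise try to become the combiner ourselves. */
>> 		if (!atomic_flag_test_and_set_explicit(&lock->locked,
>> 						       memory_order_acquire)) {
>> 			nspin_combine(lock);
>> 			atomic_flag_clear_explicit(&lock->locked,
>> 						   memory_order_release);
>> 			continue;	/* our node was in that batch */
>> 		}
>> 		sched_yield();		/* user-space stand-in for cpu_relax() */
>> 	}
>> }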
>>
>> So the result is correct and fair.
>>
>> Yes, we updated the code in include/asm-generic/qspinlock.h to simplify
>> the modification and to avoid a kernel crash. For example, suppose there
>> are 10 lock scenarios that could use the new spinlock; because the
>> bottleneck comes from only one or two of them, we modify only those, and
>> the other scenarios keep using the lock in qspinlock.h. We must modify
>> that code as well, otherwise a waiter could be parked in the slowpath
>> queue and never be woken up.
>>
>> Thanks
>> Ling
>>
>>
>>
>> 2015-11-26 3:05 GMT+08:00 Waiman Long <waiman.long@xxxxxxx>:
>>> On 11/23/2015 04:41 AM, Ling Ma wrote:
>>>> Hi Longman,
>>>>
>>>> Attachments include user space application thread.c and kernel patch
>>>> spinlock-test.patch based on kernel 4.3.0-rc4
>>>>
>>>> We ran thread.c with the kernel patch, testing the original and the new
>>>> spinlock respectively. perf top -G shows that thread.c causes the
>>>> cache_alloc_refill and cache_flusharray functions to spend ~25% of their
>>>> time on the original spinlock; after introducing the new spinlock in
>>>> those two functions, the cost drops to ~22%.
>>>>
>>>> The printed data also tell us that the new spinlock improves performance
>>>> by about 15% (93841765576 / 81036259588 ≈ 1.16) on an E5-2699 v3.
>>>>
>>>> Appreciate your comments.
>>>>
>>>>
>>>
>>> I saw that you made the following change in the code:
>>>
>>> static __always_inline void queued_spin_lock(struct qspinlock *lock)
>>> {
>>> 	u32 val;
>>> -
>>> +repeat:
>>> 	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>>> 	if (likely(val == 0))
>>> 		return;
>>> -	queued_spin_lock_slowpath(lock, val);
>>> +	goto repeat;
>>> +	//queued_spin_lock_slowpath(lock, val);
>>> }
>>>
>>>
>>> This effectively changes the queued spinlock into an unfair byte lock.
>>> Without a pause to moderate the cmpxchg() call, that is especially bad
>>> for performance. Does the performance data above refer to the unfair byte
>>> lock versus your new spinlock?
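>>>
>>> (As a rough illustration of the missing pause, and not actual kernel
>>> code: an unfair test-and-set style lock would normally spin read-only
>>> and call cpu_relax() between attempts, only retrying the cmpxchg when
>>> the lock looks free, e.g.:
>>>
>>> 	for (;;) {
>>> 		val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>>> 		if (likely(val == 0))
>>> 			return;
>>> 		while (atomic_read(&lock->val))
>>> 			cpu_relax();
>>> 	}
>>>
>>> whereas the change above hammers the lock cacheline with cmpxchg.)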
>>>
>>> Cheers,
>>> Longman