Re: Re-tune x86 uaccess code for PREEMPT_VOLUNTARY

From: H. Peter Anvin
Date: Sun Aug 11 2013 - 00:58:03 EST


That sounds like an issue with specific preemption policies.
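As background on what the preemption policies do differently: PREEMPT_VOLUNTARY only enters the scheduler at explicit might_sleep()/cond_resched() annotations, while CONFIG_PREEMPT also checks for a pending reschedule whenever preemption is re-enabled. A minimal sketch in the spirit of the 3.x-era <linux/kernel.h> and <linux/preempt.h> (simplified, not the verbatim kernel source):

/* PREEMPT_VOLUNTARY: reschedule only at explicit might_sleep() /
 * might_resched() annotations scattered through the kernel.
 */
#ifdef CONFIG_PREEMPT_VOLUNTARY
# define might_resched() _cond_resched()   /* may call schedule()  */
#else
# define might_resched() do { } while (0)  /* PREEMPT_NONE: no-op  */
#endif

/* CONFIG_PREEMPT: ending any preempt-disabled region is a potential
 * reschedule point, so e.g. every spin_unlock() can enter the
 * scheduler immediately.
 */
#define preempt_enable()                                             \
do {                                                                 \
        preempt_enable_no_resched();  /* drop the preempt count   */ \
        barrier();                    /* compiler ordering        */ \
        preempt_check_resched();      /* schedule if need_resched */ \
} while (0)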

Mike Galbraith <bitbucket@xxxxxxxxx> wrote:
>On Sat, 2013-08-10 at 21:27 -0700, H. Peter Anvin wrote:
>> On 08/10/2013 09:17 PM, Mike Galbraith wrote:
>> >>
>> >> Do you have any quantification of "munches throughput?" It seems
>> >> odd that it would be worse than polling for preempt all over the
>> >> kernel, but perhaps the additional locking is what costs.
>> >
>> > I hadn't compared in ages, so made some fresh samples.
>> >
>> > Q6600 3.11-rc4
>> >
>> > vmark (throughput, higher is better; last column is each
>> > config's score normalized to voluntary)
>> > voluntary   169808     155826     154741     1.000
>> > preempt     149354     124016     128436      .836
>> >
>> > That should be ~worst case; vmark hates preemption.
>> >
>> > tbench 8 (MB/sec, higher is better)
>> > voluntary   1027.96    1028.76    1044.60    1.000
>> > preempt      929.06     935.01     928.64     .900
>> >
>> > hackbench -l 10000 (seconds, lower is better)
>> > voluntary   23.146     23.124     23.230     1.000
>> > preempt     25.065     24.633     24.789     1.071
>> >
>> > kbuild vmlinux (wall time, lower is better)
>> > voluntary   3m44.842s  3m42.975s  3m42.954s  1.000
>> > preempt     3m46.141s  3m45.835s  3m45.953s  1.010
>> >
>> > Compute load comparisons are boring, of course.
>> >
>>
>> I presume voluntary is indistinguishable from no preemption at all?
>
>No, all preemption options produce performance deltas.
>
>> Either way, that is definitely a reproducible test case, so if
>> someone is willing to take on optimizing preemption they can use
>> vmark as the litmus test. It would be really awesome if we genuinely
>> could get the cost of preemption down to where it just doesn't
>> matter.
>
>You have to eat more scheduler cycles; that's what PREEMPT does for a
>living. Release a lock, and wham, you get preempted.
>
>-Mike
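The "release a lock, wham" effect is the CONFIG_PREEMPT unlock path: if
need_resched was set while the lock was held, dropping the last preempt
count sends the unlocking task straight into the scheduler. A condensed
sketch of that path (the real chain goes through __raw_spin_unlock();
names simplified, not verbatim):

static inline void spin_unlock(spinlock_t *lock)
{
        do_raw_spin_unlock(lock);  /* release the lock word itself  */
        preempt_enable();          /* preempt count drops to zero;
                                    * preempt_check_resched() then
                                    * calls preempt_schedule() if
                                    * TIF_NEED_RESCHED is pending   */
}

Under PREEMPT_VOLUNTARY the same unlock is just the release plus
barrier, and the scheduler is only entered later at an explicit
might_sleep() point, which is why PREEMPT eats more scheduler cycles
than VOLUNTARY on lock-heavy loads like the ones above.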

--
Sent from my mobile phone. Please excuse brevity and lack of formatting.