Re: Re-tune x86 uaccess code for PREEMPT_VOLUNTARY

From: Linus Torvalds
Date: Sat Aug 10 2013 - 12:43:40 EST


On Sat, Aug 10, 2013 at 9:09 AM, H. Peter Anvin <hpa@xxxxxxxxx> wrote:
>
> Do you have any quantification of "munches throughput?" It seems odd
> that it would be worse than polling for preempt all over the kernel, but
> perhaps the additional locking is what costs.

Actually, the big thing for true preemption is not so much the preempt
count itself, but the fact that when the preempt count goes back to
zero we have that "check if we should have been preempted" thing.

And in particular, the conditional function call that goes along with it.
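
To make that concrete, here's a rough stand-alone sketch of the *shape*
of preempt_enable() under full preemption (made-up names, obviously not
the real kernel code): the decrement itself is cheap, the problem is
that the compiler has to assume the conditional call can happen every
single time:

#include <stdio.h>

/* Sketch only: stand-in for the per-thread preempt state. */
struct ti_sketch {
	int preempt_count;	/* nesting depth of preempt_disable() */
	int need_resched;	/* "we should have been preempted" */
};

static struct ti_sketch ti = { .preempt_count = 1, .need_resched = 1 };

static void preempt_schedule_sketch(void)
{
	printf("would enter the scheduler here\n");
}

static void preempt_enable_sketch(void)
{
	/*
	 * The decrement is cheap.  The conditional call below is
	 * what hurts: even when it is never taken, the compiler
	 * has to plan for it at every preempt_enable() site.
	 */
	if (--ti.preempt_count == 0 && ti.need_resched)
		preempt_schedule_sketch();
}

int main(void)
{
	preempt_enable_sketch();
	return 0;
}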

The thing is, even if that call is almost never taken, just the fact
that there is a conditional function call very often makes code
generation *much* worse. A function that is a leaf function with no
stack frame when preemption is off often turns into a non-leaf function
with a stack frame when you enable preemption, just because it had an
RCU read region that disables preemption.
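
Concretely, think of something like this (continuing the sketch above,
with the rcu_read_lock()/unlock() pair modelled by the preempt count
fiddling): without preemption the lock/unlock compile away and this is
a leaf with no frame at all; with preemption, the possible scheduler
call at the unlock forces a frame and register saving around it:

static int read_counter_sketch(const int *counter)
{
	int val;

	ti.preempt_count++;		/* rcu_read_lock() under CONFIG_PREEMPT */
	val = *counter;
	preempt_enable_sketch();	/* rcu_read_unlock(): the conditional call */

	return val;
}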

It's similar to the kind of code generation issue that Andi's patches
are trying to work on.

Andi did the "test and jump to a different section to call the
scheduler with registers saved" as an assembly stub in one of his
patches in this series exactly to avoid the cost of this for the
might_sleep() case, and generated that GET_THREAD_AND_SCHEDULE asm
macro for it. But look at that asm macro, and compare it to
"preempt_check_resched()"..

I have often wanted to have access to that kind of thing from C code.
It's not unusual. Think lock failure paths, not Tom Jones.
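
About the closest you can get from C today is probably gcc's "asm goto"
(what the jump label code uses), which at least keeps the test-and-branch
invisible to the compiler; but the out-of-line target is still plain C,
so the moment you call anything there the frame and register saving are
back. A made-up trylock-ish sketch of what I mean:

static inline int trylock_fast(unsigned long *lock)
{
	/*
	 * Atomically set bit 0; CF tells us whether it was already
	 * set, i.e. somebody else holds the lock.
	 */
	asm goto("lock btsq $0, %0\n\t"
		 "jc %l[slow]"
		 : /* no outputs allowed with asm goto */
		 : "m" (*lock)
		 : "memory", "cc"
		 : slow);
	return 1;		/* uncontended fast path */
slow:
	return 0;		/* failure path: caller does the slow thing */
}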

Linus