Re: Performance overhead of paravirt_ops on native identified

From: H. Peter Anvin
Date: Fri May 22 2009 - 12:35:47 EST


Xin, Xiaohui wrote:
> What I mean is that if the binary of _spin_lock is like this:
> (gdb) disassemble _spin_lock
> Dump of assembler code for function _spin_lock:
> 0xffffffff80497c0f <_spin_lock+0>: mov 1252634(%rip),%r11 # 0xffffffff805c9930 <test_lock_ops+16>
> 0xffffffff80497c16 <_spin_lock+7>: jmpq *%r11
> End of assembler dump.
> (gdb) disassemble
>
> In this situation the binary contains an indirect jump, so the overhead is higher than that of a direct call.
>

That's an indirect jump, though. I don't think anyone was suggesting
using an indirect jump; the final patched version should be a direct
jump (instead of a direct call.)

I can see how indirect jumps might be slower, since they are probably
not optimized as aggressively in hardware as indirect calls -- indirect
jumps are generally used for switch tables, which often have low
predictability, whereas indirect calls are generally used for method
calls, which are (a) incredibly important for OOP languages, and (b)
generally highly predictable on the dynamic scale.

However, direct jumps and calls don't need prediction at all (although
of course rets do.)

-hpa

--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
