Re: [PATCH -tip 4/5] kprobes/x86: Use text_poke_smp_batch
From: Masami Hiramatsu
Date: Tue May 11 2010 - 20:38:56 EST
Mathieu Desnoyers wrote:
> * Masami Hiramatsu (mhiramat@xxxxxxxxxx) wrote:
>> Use text_poke_smp_batch() in optimization path for reducing
>> the number of stop_machine() issues.
>>
>> Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
>> Cc: Ananth N Mavinakayanahalli <ananth@xxxxxxxxxx>
>> Cc: Ingo Molnar <mingo@xxxxxxx>
>> Cc: Jim Keniston <jkenisto@xxxxxxxxxx>
>> Cc: Jason Baron <jbaron@xxxxxxxxxx>
>> Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
>> ---
>>
>> arch/x86/kernel/kprobes.c | 37 ++++++++++++++++++++++++++++++-------
>> include/linux/kprobes.h | 2 +-
>> kernel/kprobes.c | 13 +------------
>> 3 files changed, 32 insertions(+), 20 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
>> index 345a4b1..63a5c24 100644
>> --- a/arch/x86/kernel/kprobes.c
>> +++ b/arch/x86/kernel/kprobes.c
>> @@ -1385,10 +1385,14 @@ int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
>> return 0;
>> }
>>
>> -/* Replace a breakpoint (int3) with a relative jump. */
>> -int __kprobes arch_optimize_kprobe(struct optimized_kprobe *op)
>> +#define MAX_OPTIMIZE_PROBES 256
>
> So what kind of interrupt latency does a 256-probes batch generate on the
> system ? Are we talking about a few milliseconds, a few seconds ?
From my experiment on kvm/4cpu, it took about 3 seconds on average.
With this patch, it went down to about 30ms. (roughly 100x faster :))
Thank you,
--
Masami Hiramatsu
e-mail: mhiramat@xxxxxxxxxx