> ...I'm only a bit suspicious of kprobes, since we have:
> NOKPROBE_SYMBOL(preempt_count_sub)
> but trace_preempt_on() called by preempt_count_sub()
> doesn't have this mark...

The original commit indicates that anything called from
preempt_disable() should also be marked as NOKPROBE_SYMBOL:
commit 43627582799db317e966ecb0002c2c3c9805ec0f
Author:    Srinivasa Ds <srinivasa@xxxxxxxxxx>  Sun Feb 24 00:24:04 2008
Committer: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxxxxxxxx>  Sun Feb 24 02:13:24 2008
Original File: kernel/sched.c

    kprobes: refuse kprobe insertion on add/sub_preempt_counter()
Obviously, this would render this patch useless.
BTW, is there a reason why built-in tracepoints/events are
not supported? It looks like it is only an artificial
limitation of bpf_helpers.

>> +SEC("kprobe/trace_preempt_off")
>> +int bpf_prog1(struct pt_regs *ctx)
>> +{
>> +	int cpu = bpf_get_smp_processor_id();
>> +	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
>> +
>> +	if (ts)
>> +		*ts = bpf_ktime_get_ns();

> btw, I'm planning to add native per-cpu maps which will
> speed up things more and reduce measurement overhead.

Funny, I was about to suggest something like this :)

> I think you can retarget this patch to net-next and send
> it to netdev. It's not too late for this merge window.

I'll rebase it to net-next.