Re: [PATCH v1 03/11] perf: Allow for multiple ring buffers per event
From: Alexander Shishkin
Date: Tue Mar 18 2014 - 10:07:55 EST
Andi Kleen <ak@xxxxxxxxxxxxxxx> writes:
>> I really don't want the multi-buffer nonsense proposed.
>
>> An event gets
>> _1_ buffer, that's it.
>
> But we already have multiple buffers. Just profile multiple CPUs:
> then you have one buffer per CPU that needs to be combined.
>
> This just has two buffers per CPU.
Well, an event still gets *one* *perf* buffer in our implementation,
which is consistent with how things are done now, plus one trace
buffer. We could also export the trace buffer as a device node or
something, so that no software would expect to see perf headers in that
buffer.
>> That also means that if someone redirect another event into this buffer,
>> it needs to just work.
>
> All the tools already handle multiple buffers (for multi CPUs).
> So they don't need it.
>
>> And because its a perf buffer, people expect it to look like one. So
>> we've got 'wrap' it.
>
> Flushing TLBs from NMIs, irq work, or any interrupt context is just
> a non-starter.
>
> Starting/stopping the hardware and creating gigantic gaps was also
> pretty bad, and would completely change the perf format too.
>
> It seems to me you're trying to solve a non-problem.
Look at it this way: if the only way for it to be part of perf is by
wrapping trace data in perf headers, the perf framework is simply not
suitable for instruction tracing. Therefore, it seems logical to have a
standalone driver or, considering ETM/PTM and others, a standalone
framework for exporting instruction trace data to userspace as a plain
mmap interface, without the overhead of rewriting userspace PTEs,
flushing TLBs, or having inconsistent mappings across threads, and that
would still work for hardware that doesn't support scatter-gather lists.
What do you think?
Regards,
--
Alex