Re: Perf can't deal with many tracepoints

From: David Daney
Date: Wed Oct 27 2010 - 20:45:57 EST


On 10/27/2010 05:40 PM, Ted Ts'o wrote:
> On Wed, Oct 27, 2010 at 05:16:18PM -0700, David Daney wrote:
>> Tracing is supposed to be low overhead. Forcing people to decode
>> things like this at the trace point may take more code and cause
>> the trace data to be larger, making it slower than necessary.
>>
>> If there isn't a good reason to keep perf stupid, then making it
>> smarter could be attractive.
>
> Agreed. Although one argument against making perf smarter is that
> certain things, such as the dev_t MAJOR/MINOR split, are internal
> abstractions that could vary from kernel to kernel.
>
> And the question is whether perf really should be so different that,
> if you boot a different kernel, you had better have the right perf
> installed.


It may be possible to encode the dev_t split in the trace metadata, as is already done for some other types. Perf could then decode it based on that metadata rather than on compiled-in knowledge.

Another option is to have perf print the raw data and not crash. Then someone looking at the output could, if they desired, decode the dev_t themselves.

David Daney