Re: [RFC] PyTimechart
From: Ingo Molnar
Date: Wed May 12 2010 - 14:23:41 EST
* Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
> On Wed, 2010-05-12 at 18:59 +0200, Ingo Molnar wrote:
> > * Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
> >
> > > On Wed, 2010-05-12 at 16:48 +0200, Frederic Weisbecker wrote:
> > > > On Wed, May 12, 2010 at 03:37:27PM +0200, Pierre Tardy wrote:
> > >
> > > > But we don't yet support trace_printk in perf. Maybe we could wrap
> > > > them in trace events.
> > >
> > > Hmm, do we really want to do that?
> > >
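Just to make that concrete: one way to do what Frederic suggests is to
carry the message as a plain string in an ordinary trace event, roughly
along the lines of the sketch below. The event name and placement are
made up for illustration, this is not an existing kernel event:

	/*
	 * Hypothetical sketch -- would live in a header under
	 * include/trace/events/ with the usual TRACE_SYSTEM /
	 * CREATE_TRACE_POINTS boilerplate around it.
	 */
	TRACE_EVENT(printk_msg,

		TP_PROTO(const char *msg),

		TP_ARGS(msg),

		TP_STRUCT__entry(
			__string(msg, msg)
		),

		TP_fast_assign(
			__assign_str(msg, msg);
		),

		TP_printk("%s", __get_str(msg))
	);

perf would then see it as just another trace event and could record and
report it with the existing machinery.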
> > > We really need to get the perf and ftrace trace buffers combined. I
> > > understand why perf chose mmap'ed buffers for the counting side, but
> > > for live streaming it is very inefficient compared to splice.
> >
> > The thing is that for a very long time ftrace didn't have splice support
> > and survived just fine. Even today most ftrace usage isn't utilizing
> > splice.
>
> Actually, trace-cmd implements the splice interface and is used by several
> people. I find myself using trace-cmd 90% of the time that I use ftrace,
> specifically because of this speedup.
I know, but most people still use the /debug/tracing/ bits, not trace-cmd.
> > Yes, splice might help in some situations, but on average it's an
> > independent speedup on the order of a few percent, not a 'must have'
> > item.
>
> I'll have to start running benchmarks to see what the actual speedup is.
> I'm guessing it may be more than a few percent. Splice allows for zero-copy
> transfers and reuse of the data pages.
Make sure you measure it in the context of a full app like PyTimechart.
You can measure the overhead using perf stat ;-)
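E.g. something like "perf stat trace-cmd record -e sched sleep 10" versus
the same capture read out through /debug/tracing would show the CPU cost
of each path directly (the exact command line is only an example).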
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/