Re: [PATCH 08/12] perf: Carve out mmap helpers for general use

From: Borislav Petkov
Date: Tue Jan 25 2011 - 20:00:15 EST

On Mon, Jan 24, 2011 at 10:39:36AM -0200, Arnaldo Carvalho de Melo wrote:
> Em Mon, Jan 24, 2011 at 10:04:10AM +0100, Borislav Petkov escreveu:
> > Ok, I see at least one problem with my patch - you've reworked the
> > mmap'ing functionality in evlist.c/evsel.c and I should use it too, I
> > guess. For that, I think you'd want me to apply my stuff on top of
> > your perf/core branch, right?
> Right, I hope to have that branch merged by Ingo soon.
> > Am I missing something else?
> Nope, you're not. Doing that we erode your patchset a bit, reducing its
> size.

... which means less work for me, heey, nice! :)

> One related experiment I'm doing now is a python binding; the file
> with the list of files needed for this specific binding is at:
> A simple tool using the resulting binding is a thread
> fork/comm/exit/sample watcher, available at:
> In this process I'm moving functions around so as to reduce the
> number of tools/perf/util/ .c files linked into this python binding,
> untangling things as much as possible.
> The binding proper is:
> I'm digressing, but twatch is an example of a simple "daemon"
> consuming perf events where performance is not much of a problem,
> and it provides a prototyping ground when starting to design perf
> event consuming daemons :-)

Yaay, twatch looks almost like child's play; even my grandma could
profile her system now :).

But yeah, I thought about pythonizing the RAS thingy too, but the
arguments against it are that we might run on systems which don't have
python, or have some obscure/old version of it, or where we simply
don't want to depend on it or on any other additional tool, for that
matter. Generally, we want to run with as low an overhead as possible
when handling hw error info and be as self-contained as possible.
