[PATCH 4/4] perf tools: Prefer to use a cpu-wide event for probing CLOEXEC

From: Adrian Hunter
Date: Tue Aug 12 2014 - 11:06:30 EST


When doing a system-wide trace with Intel PT, the jump label
set up as a result of probing CLOEXEC gets reset while the
trace is running. That causes an Intel PT decoding error
because the object code (obtained from /proc/kcore) no longer
matches the running code at that point. While we cannot expect
jump labels never to change, we can at least avoid the changes
that the perf tool itself causes.

The problem is avoided by first trying a cpu-wide event
(pid = -1) for probing the PERF_FLAG_FD_CLOEXEC flag and
falling back to an event for the current process (pid = 0).

Signed-off-by: Adrian Hunter <adrian.hunter@xxxxxxxxx>
---
tools/perf/util/cloexec.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/cloexec.c b/tools/perf/util/cloexec.c
index 000047c..4945aa5 100644
--- a/tools/perf/util/cloexec.c
+++ b/tools/perf/util/cloexec.c
@@ -1,3 +1,4 @@
+#include <sched.h>
#include "util.h"
#include "../perf.h"
#include "cloexec.h"
@@ -15,10 +16,23 @@ static int perf_flag_probe(void)
};
int fd;
int err;
+ int cpu;
+ pid_t pid = -1;

- /* check cloexec flag */
- fd = sys_perf_event_open(&attr, 0, -1, -1,
- PERF_FLAG_FD_CLOEXEC);
+ cpu = sched_getcpu();
+ if (cpu < 0)
+ cpu = 0;
+
+ while (1) {
+ /* check cloexec flag */
+ fd = sys_perf_event_open(&attr, pid, cpu, -1,
+ PERF_FLAG_FD_CLOEXEC);
+ if (fd < 0 && pid == -1 && errno == EACCES) {
+ pid = 0;
+ continue;
+ }
+ break;
+ }
err = errno;

if (fd >= 0) {
@@ -31,7 +45,7 @@ static int perf_flag_probe(void)
err, strerror(err));

/* not supported, confirm error related to PERF_FLAG_FD_CLOEXEC */
- fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
+ fd = sys_perf_event_open(&attr, pid, cpu, -1, 0);
err = errno;

if (WARN_ONCE(fd < 0 && err != EBUSY,
--
1.8.3.2
