[PATCH 3/3] perf report: Don't be bothered with locking when adding hist entries
From: Namhyung Kim
Date: Mon May 13 2013 - 02:17:21 EST
From: Namhyung Kim <namhyung.kim@xxxxxxx>
perf report is single-threaded, so there is no need to grab a lock when
adding hist entries.  Although the fast path of pthread_mutex_[un]lock()
is very fast, eliminating it still yields a ~3% gain when processing
huge sample data.
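
As a rough sanity check of that fast-path cost, the uncontended
lock/unlock pair can be timed in isolation with a small standalone
program along the lines of the sketch below.  This is not part of the
patch; the file name, iteration count and timing method are just
illustrative assumptions.

/*
 * Illustrative sketch (not part of this patch): times an uncontended
 * pthread_mutex_lock()/unlock() pair, i.e. the fast path that
 * process_sample_event() paid once per sample before this change.
 * Iteration count and timing method are arbitrary choices.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
	pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	struct timespec start, end;
	const long iters = 100000000L;	/* large enough to be measurable */
	double total_ns;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++) {
		/* never contended: the same single thread locks and unlocks */
		pthread_mutex_lock(&lock);
		pthread_mutex_unlock(&lock);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	total_ns = (end.tv_sec - start.tv_sec) * 1e9 +
		   (end.tv_nsec - start.tv_nsec);
	printf("%.1f ns per lock/unlock pair\n", total_ns / iters);
	return 0;
}

Build with something like "gcc -O2 -o mutex-bench mutex-bench.c -lpthread"
(older glibc also needs -lrt for clock_gettime).  The actual numbers below
were measured with perf itself rather than such a microbenchmark: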
$ perf record -a -F 100000 -o perf.data.bench -- perf bench sched all
$ perf record -e cycles:upp -o perf.data.before -- \
> perf report -i perf.data.bench --stdio > /dev/null
... apply this patch ...
$ perf record -e cycles:upp -o perf.data.after -- \
> perf report -i perf.data.bench --stdio > /dev/null
$ perf diff perf.data.{before,after} | grep pthread
             +0.02%  libpthread-2.15.so  [.] _pthread_cleanup_push_defer
             +0.02%  libpthread-2.15.so  [.] _pthread_cleanup_pop_restore
     0.05%   -0.05%  perf                [.] pthread_mutex_unlock@plt
     0.05%   -0.05%  perf                [.] pthread_mutex_lock@plt
     1.01%   -1.01%  libpthread-2.15.so  [.] pthread_mutex_lock
     1.68%   -1.68%  libpthread-2.15.so  [.] __pthread_mutex_unlock_usercnt
     0.05%   -0.05%  libpthread-2.15.so  [.] pthread_mutex_unlock
Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
---
tools/perf/builtin-report.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 63febd24e912..0f0cf2472d9d 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -312,8 +312,6 @@ static int process_sample_event(struct perf_tool *tool,
 	if (rep->cpu_list && !test_bit(sample->cpu, rep->cpu_bitmap))
 		return 0;
 
-	pthread_mutex_lock(&evsel->hists.lock);
-
 	if (sort__mode == SORT_MODE__BRANCH) {
 		ret = perf_report__add_branch_hist_entry(tool, &al, sample,
 							 evsel, machine);
@@ -332,8 +330,6 @@ static int process_sample_event(struct perf_tool *tool,
 		if (ret < 0)
 			pr_debug("problem incrementing symbol period, skipping event\n");
 	}
 
-	pthread_mutex_unlock(&evsel->hists.lock);
-
 	return ret;
 }
--
1.7.11.7