Re: [RESEND PATCH v3 1/n] perf/core: addressing 4x slowdown during per-process profiling of STREAM benchmark on Intel Xeon Phi
From: Mark Rutland
Date: Tue Jun 20 2017 - 12:39:51 EST
On Tue, Jun 20, 2017 at 06:12:08PM +0300, Alexey Budankov wrote:
> On 20.06.2017 16:44, Mark Rutland wrote:
> >On Fri, Jun 16, 2017 at 02:03:58AM +0300, Alexey Budankov wrote:
> >>perf/core: use rb trees for pinned/flexible groups
> >>By default, the userspace perf tool opens per-cpu task-bound events
> >>when sampling, so for N logical events requested by the user, the tool
> >>will open N * NR_CPUS events.
> >>In the kernel, we mux events with a hrtimer, periodically rotating the
> >>flexible group list and trying to schedule each group in turn. We
> >>skip groups whose cpu filter doesn't match. So when we get unlucky,
> >>we can walk N * (NR_CPUS - 1) groups pointlessly for each hrtimer
> >>This has been observed to result in significant overhead when running
> >>the STREAM benchmark on 272 core Xeon Phi systems.
> >>One way to avoid this is to place our events into an rb tree sorted by
> >>CPU filter, so that our hrtimer can skip to the current CPU's
> >>list and ignore everything else.
> >>As a step towards that, this patch replaces event group lists with rb
> >>trees.
> >>Signed-off-by: Alexey Budankov <alexey.budankov@xxxxxxxxxxxxxxx>
> >> include/linux/perf_event.h | 18 ++-
> >> kernel/events/core.c | 393
> >> 2 files changed, 307 insertions(+), 104 deletions(-)
> >>Addressed Mark Rutland's comments from the previous patch version.
> >... then this should be v4, no?
> >Which comments? Could you please write a changelog in future?
> Changes are:
> 1. changed type of pinned_groups/flexible_groups to rb_tree;
> 2. removed group_list_entry and reused group_entry for that purposes;
> 3. added add_to_groups()/del_from_groups() helper functions;
Thanks; the new changelog is much appreciated.