Re: [PATCH v5 4/7] arm64: perf: Defer irq_work to IPI_IRQ_WORK

From: Mark Rutland
Date: Mon Jun 22 2020 - 10:19:27 EST


On Wed, Jun 17, 2020 at 12:38:48PM +0100, Alexandru Elisei wrote:
> From: Julien Thierry <julien.thierry@xxxxxxx>
>
> perf_event_overflow() can queue an irq_work on the current PE, which is
> executed via an IPI. Move the processing of the irq_work from the PMU IRQ
> handler to the IPI handler, which gets executed immediately afterwards.
>
> This also makes the IRQ handler NMI safe, because it removes the call to
> irq_work_run().

It wasn't entirely clear to me what the situation was today, and why
this was sound. How about the following to spell that out more
explicitly:

| When handling events, armv8pmu_handle_irq() calls
| perf_event_overflow(), and subsequently calls irq_work_run() to handle
| any work queued by perf_event_overflow(). As perf_event_overflow()
| raises IPI_IRQ_WORK when queuing the work, this isn't strictly
| necessary and the work could be handled as part of the IPI_IRQ_WORK
| handler.
|
| In the common case the IPI handler will run immediately after the PMU
| IRQ handler, but where the PE is heavily loaded with interrupts, other
| handlers may run first, widening the window where some counters are
| disabled.
|
| In practice this window is unlikely to be a significant issue, and
| removing the call to irq_work_run() would make the PMU IRQ handler NMI
| safe in addition to making it simpler, so let's do that.
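
For reference, the path this relies on looks roughly like the below; this
is a simplified paraphrase of kernel/irq_work.c and arch/arm64/kernel/smp.c
rather than the exact code:

  /*
   * perf_event_overflow() queues its deferred work (event->pending)
   * with irq_work_queue(), which ends up raising a self-IPI:
   */
  void arch_irq_work_raise(void)
  {
          smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
  }

  /* ... and the IPI handler runs the queued work once the IPI is taken: */
  void handle_IPI(int ipinr, struct pt_regs *regs)
  {
          ...
          switch (ipinr) {
          ...
          case IPI_IRQ_WORK:
                  irq_enter();
                  irq_work_run();
                  irq_exit();
                  break;
          ...
          }
  }

i.e. dropping the irq_work_run() call from armv8pmu_handle_irq() only
changes when the queued work runs, not whether it runs.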

Thanks,
Mark.

>
> Cc: Julien Thierry <julien.thierry.kdev@xxxxxxxxx>
> Cc: Will Deacon <will.deacon@xxxxxxx>
> Cc: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
> Cc: Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>
> Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
> Cc: Namhyung Kim <namhyung@xxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Signed-off-by: Julien Thierry <julien.thierry@xxxxxxx>
> [Reworded commit]
> Signed-off-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>
> ---
> arch/arm64/kernel/perf_event.c | 14 +++++---------
> 1 file changed, 5 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index a6195022be7d..cf1d92030790 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -750,20 +750,16 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
> if (!armpmu_event_set_period(event))
> continue;
>
> + /*
> + * Perf event overflow will queue the processing of the event as
> + * an irq_work which will be taken care of in the handling of
> + * IPI_IRQ_WORK.
> + */
> if (perf_event_overflow(event, &data, regs))
> cpu_pmu->disable(event);
> }
> armv8pmu_start(cpu_pmu);
>
> - /*
> - * Handle the pending perf events.
> - *
> - * Note: this call *must* be run with interrupts disabled. For
> - * platforms that can have the PMU interrupts raised as an NMI, this
> - * will not work.
> - */
> - irq_work_run();
> -
> return IRQ_HANDLED;
> }
>
> --
> 2.27.0
>