Re: [PATCH] arm64: perf: Ensure EL0 access is disabled at reset

From: Mark Rutland
Date: Tue Apr 27 2021 - 09:55:22 EST


On Tue, Apr 27, 2021 at 08:48:52AM -0500, Rob Herring wrote:
> The ER, SW, and EN bits in the PMUSERENR_EL0 register are UNKNOWN at
> reset and the register is never initialized, so EL0 access could be
> enabled by default on some implementations. Let's initialize
> PMUSERENR_EL0 to a known state with EL0 access disabled.

We already reset PMUSERENR_EL0 via the reset_pmuserenr_el0 macro, which
is called from __cpu_setup() when a CPU is onlined and from
cpu_do_resume() when a CPU returns from a context-destructive idle
state. We do it there so that it's handled even if the kernel isn't
built with perf support.
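
For reference, that macro lives in arch/arm64/mm/proc.S and only pokes
the register when a PMU is actually implemented. Expressed in C it's
roughly the sketch below -- illustrative only, since the real code is
assembly and the helper/constant names here are my approximation rather
than the exact kernel source:

/*
 * Rough C equivalent of the reset_pmuserenr_el0 macro (a sketch; the
 * field-extraction helper and shift name are approximations).
 */
static void reset_pmuserenr_el0_sketch(void)
{
	u64 dfr0 = read_sysreg(id_aa64dfr0_el1);

	/* Only touch PMUSERENR_EL0 when a PMU is implemented at all. */
	if (cpuid_feature_extract_unsigned_field(dfr0,
						 ID_AA64DFR0_PMUVER_SHIFT))
		write_sysreg(0, pmuserenr_el0);	/* disable EL0 access */
}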

AFAICT, that *should* do the right thing -- are you seeing UNKNOWN
values, or was this found by inspection?
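
(If it helps to check at runtime: reading PMCCNTR_EL0 from EL0 traps
when PMUSERENR_EL0.EN and PMUSERENR_EL0.CR are clear, so a quick
hypothetical userspace test looks something like the below -- this
assumes an aarch64 build, and the file name is made up.)

/* pmuserenr-check.c: hypothetical test -- reads PMCCNTR_EL0 from EL0.
 * If EL0 access is disabled the MRS traps and we get SIGILL; if the
 * register was left enabled, the read succeeds.
 */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_sigill(int sig)
{
	static const char msg[] = "EL0 access disabled (MRS trapped)\n";

	write(STDOUT_FILENO, msg, sizeof(msg) - 1);
	_exit(0);
}

int main(void)
{
	unsigned long cnt;

	signal(SIGILL, on_sigill);
	asm volatile("mrs %0, pmccntr_el0" : "=r" (cnt));
	printf("EL0 access enabled, PMCCNTR_EL0 = %lu\n", cnt);
	return 0;
}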

Thanks,
Mark.

>
> Signed-off-by: Rob Herring <robh@xxxxxxxxxx>
> ---
> arch/arm64/kernel/perf_event.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 4658fcf88c2b..c32778ae5117 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -450,6 +450,11 @@ static inline void armv8pmu_pmcr_write(u32 val)
>  	write_sysreg(val, pmcr_el0);
>  }
>  
> +static void armv8pmu_clear_pmuserenr(void)
> +{
> +	write_sysreg(0, pmuserenr_el0);
> +}
> +
>  static inline int armv8pmu_has_overflowed(u32 pmovsr)
>  {
>  	return pmovsr & ARMV8_PMU_OVERFLOWED_MASK;
> @@ -933,6 +938,9 @@ static void armv8pmu_reset(void *info)
>  	armv8pmu_disable_counter(U32_MAX);
>  	armv8pmu_disable_intens(U32_MAX);
>  
> +	/* User access is unknown at reset. */
> +	armv8pmu_clear_pmuserenr();
> +
>  	/* Clear the counters we flip at guest entry/exit */
>  	kvm_clr_pmu_events(U32_MAX);
>  
> --
> 2.27.0
>