Re: [PATCH 6/9] riscv: Fix set up of vector cpu hotplug callback
From: Andrew Jones
Date: Fri Feb 07 2025 - 13:15:24 EST
On Fri, Feb 07, 2025 at 06:36:28PM +0100, Clément Léger wrote:
>
>
> On 07/02/2025 17:19, Andrew Jones wrote:
> > Whether or not we have RISCV_PROBE_VECTOR_UNALIGNED_ACCESS, we need to
> > set up a cpu hotplug callback to check if we have vector at all,
> > since, when we don't have vector, we need to set
> > vector_misaligned_access to unsupported rather than leave it at the
> > default of unknown.
> >
> > Fixes: e7c9d66e313b ("RISC-V: Report vector unaligned access speed hwprobe")
> > Signed-off-by: Andrew Jones <ajones@xxxxxxxxxxxxxxxx>
> > ---
> > arch/riscv/kernel/unaligned_access_speed.c | 31 +++++++++++-----------
> > 1 file changed, 16 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
> > index c9d3237649bb..d9d4ca1fadc7 100644
> > --- a/arch/riscv/kernel/unaligned_access_speed.c
> > +++ b/arch/riscv/kernel/unaligned_access_speed.c
> > @@ -356,6 +356,20 @@ static void check_vector_unaligned_access(struct work_struct *work __always_unus
> > per_cpu(vector_misaligned_access, cpu) = speed;
> > }
> >
> > +/* Measure unaligned access speed on all CPUs present at boot in parallel. */
> > +static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
> > +{
> > + schedule_on_each_cpu(check_vector_unaligned_access);
> Hey Andrew,
>
> While at it, could you add a comment stating that schedule_on_each_cpu()
> (while documented as really slow) is used because kernel_vector_begin()
> needs interrupts to be enabled? I stumbled upon this while reworking the
> misaligned access handling.
That should be a separate patch, since this one is mostly just moving
code. (This function wasn't even "moved", but git-diff prefers to present
it that way rather than show what actually moved...)
I guess the comment patch you suggest should go in your rework series.
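For reference, something along these lines is what I'd imagine (just a
sketch of the wording, to be adjusted as needed in your series):

```c
/*
 * schedule_on_each_cpu() is documented as being really slow, but we
 * use it anyway: check_vector_unaligned_access() calls
 * kernel_vector_begin(), which requires interrupts to be enabled,
 * and workqueue context guarantees that.
 */
schedule_on_each_cpu(check_vector_unaligned_access);
```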
Thanks,
drew
>
> Thanks,
>
> Clément
>
> > +
> > + return 0;
> > +}
> > +#else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
> > +static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
> > +{
> > + return 0;
> > +}
> > +#endif
> > +
> > static int riscv_online_cpu_vec(unsigned int cpu)
> > {
> > if (!has_vector()) {
> > @@ -363,27 +377,16 @@ static int riscv_online_cpu_vec(unsigned int cpu)
> > return 0;
> > }
> >
> > +#ifdef CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS
> > if (per_cpu(vector_misaligned_access, cpu) != RISCV_HWPROBE_MISALIGNED_VECTOR_UNKNOWN)
> > return 0;
> >
> > check_vector_unaligned_access_emulated(NULL);
> > check_vector_unaligned_access(NULL);
> > - return 0;
> > -}
> > -
> > -/* Measure unaligned access speed on all CPUs present at boot in parallel. */
> > -static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
> > -{
> > - schedule_on_each_cpu(check_vector_unaligned_access);
> > +#endif
> >
> > return 0;
> > }
> > -#else /* CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS */
> > -static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
> > -{
> > - return 0;
> > -}
> > -#endif
> >
> > static int __init check_unaligned_access_all_cpus(void)
> > {
> > @@ -409,10 +412,8 @@ static int __init check_unaligned_access_all_cpus(void)
> > cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
> > riscv_online_cpu, riscv_offline_cpu);
> > #endif
> > -#ifdef CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS
> > cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "riscv:online",
> > riscv_online_cpu_vec, NULL);
> > -#endif
> >
> > return 0;
> > }
>