Re: [PATCH v8 15/26] arm64: alternative: Apply alternatives early in boot process
From: Julien Thierry
Date: Thu Jan 10 2019 - 05:50:59 EST
On 08/01/2019 17:40, Suzuki K Poulose wrote:
>
>
> On 08/01/2019 15:20, Julien Thierry wrote:
>> Hi Suzuki,
>>
>> On 08/01/2019 14:51, Suzuki K Poulose wrote:
>>> Hi Julien,
>>>
>>> On 08/01/2019 14:07, Julien Thierry wrote:
>>>> From: Daniel Thompson <daniel.thompson@xxxxxxxxxx>
>>>>
>>>> Currently alternatives are applied very late in the boot process (and
>>>> a long time after we enable scheduling). Some alternative sequences,
>>>> such as those that alter the way CPU context is stored, must be applied
>>>> much earlier in the boot sequence.
>>>>
>>>> Introduce apply_boot_alternatives() to allow some alternatives to be
>>>> applied immediately after we detect the CPU features of the boot CPU.
>>>>
>>>> Signed-off-by: Daniel Thompson <daniel.thompson@xxxxxxxxxx>
>>>> [julien.thierry@xxxxxxx: rename to fit new cpufeature framework better,
>>>>  apply BOOT_SCOPE feature early in boot]
>>>> Signed-off-by: Julien Thierry <julien.thierry@xxxxxxx>
>>>> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
>>>> Cc: Will Deacon <will.deacon@xxxxxxx>
>>>> Cc: Christoffer Dall <christoffer.dall@xxxxxxx>
>>>> Cc: Suzuki K Poulose <suzuki.poulose@xxxxxxx>
>>>> ---
>>>>  arch/arm64/include/asm/alternative.h |  1 +
>>>>  arch/arm64/include/asm/cpufeature.h  |  4 ++++
>>>>  arch/arm64/kernel/alternative.c      | 43 +++++++++++++++++++++++++++++++-----
>>>>  arch/arm64/kernel/cpufeature.c       |  6 +++++
>>>>  arch/arm64/kernel/smp.c              |  7 ++++++
>>>>  5 files changed, 56 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
>>>> index 9806a23..b9f8d78 100644
>>>> --- a/arch/arm64/include/asm/alternative.h
>>>> +++ b/arch/arm64/include/asm/alternative.h
>>>> @@ -25,6 +25,7 @@ struct alt_instr {
>>>>  typedef void (*alternative_cb_t)(struct alt_instr *alt,
>>>>  				 __le32 *origptr, __le32 *updptr, int nr_inst);
>>>>
>>>> +void __init apply_boot_alternatives(void);
>>>>  void __init apply_alternatives_all(void);
>>>>  bool alternative_is_applied(u16 cpufeature);
>>>>
>>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>>> index 89c3f31..e505e1f 100644
>>>> --- a/arch/arm64/include/asm/cpufeature.h
>>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>>> @@ -391,6 +391,10 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
>>>>  extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
>>>>  extern struct static_key_false arm64_const_caps_ready;
>>>>
>>>> +/* ARM64 CAPS + alternative_cb */
>>>> +#define ARM64_NPATCHABLE (ARM64_NCAPS + 1)
>>>> +extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
>>>> +
>>>>  #define for_each_available_cap(cap)		\
>>>>  	for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)
>>>>
>>>> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
>>>> index c947d22..a9b4677 100644
>>>> --- a/arch/arm64/kernel/alternative.c
>>>> +++ b/arch/arm64/kernel/alternative.c
>>>> @@ -155,7 +155,8 @@ static void clean_dcache_range_nopatch(u64 start, u64 end)
>>>>  	} while (cur += d_size, cur < end);
>>>>  }
>>>>
>>>> -static void __apply_alternatives(void *alt_region, bool is_module)
>>>> +static void __apply_alternatives(void *alt_region, bool is_module,
>>>> +				 unsigned long *feature_mask)
>>>>  {
>>>>  	struct alt_instr *alt;
>>>>  	struct alt_region *region = alt_region;
>>>> @@ -165,6 +166,9 @@ static void __apply_alternatives(void *alt_region, bool is_module)
>>>>  	for (alt = region->begin; alt < region->end; alt++) {
>>>>  		int nr_inst;
>>>>
>>>> +		if (!test_bit(alt->cpufeature, feature_mask))
>>>> +			continue;
>>>> +
>>>>  		/* Use ARM64_CB_PATCH as an unconditional patch */
>>>>  		if (alt->cpufeature < ARM64_CB_PATCH &&
>>>>  		    !cpus_have_cap(alt->cpufeature))
>>>> @@ -203,8 +207,11 @@ static void __apply_alternatives(void *alt_region, bool is_module)
>>>>  		__flush_icache_all();
>>>>  		isb();
>>>>
>>>> -		/* We applied all that was available */
>>>> -		bitmap_copy(applied_alternatives, cpu_hwcaps, ARM64_NCAPS);
>>>> +		/* Ignore ARM64_CB bit from feature mask */
>>>> +		bitmap_or(applied_alternatives, applied_alternatives,
>>>> +			  feature_mask, ARM64_NCAPS);
>>>> +		bitmap_and(applied_alternatives, applied_alternatives,
>>>> +			   cpu_hwcaps, ARM64_NCAPS);
>>>>  	}
>>>>  }
>>>>
>>>> @@ -225,8 +232,13 @@ static int __apply_alternatives_multi_stop(void *unused)
>>>>  			cpu_relax();
>>>>  		isb();
>>>>  	} else {
>>>> +		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
>>>> +
>>>> +		bitmap_complement(remaining_capabilities, boot_capabilities,
>>>> +				  ARM64_NPATCHABLE);
>>>> +
>>>>  		BUG_ON(all_alternatives_applied);
>>>> -		__apply_alternatives(&region, false);
>>>> +		__apply_alternatives(&region, false, remaining_capabilities);
>>>>  		/* Barriers provided by the cache flushing */
>>>>  		WRITE_ONCE(all_alternatives_applied, 1);
>>>>  	}
>>>> @@ -240,6 +252,24 @@ void __init apply_alternatives_all(void)
>>>>  	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
>>>>  }
>>>>
>>>> +/*
>>>> + * This is called very early in the boot process (directly after we run
>>>> + * a feature detect on the boot CPU). No need to worry about other CPUs
>>>> + * here.
>>>> + */
>>>> +void __init apply_boot_alternatives(void)
>>>> +{
>>>> +	struct alt_region region = {
>>>> +		.begin	= (struct alt_instr *)__alt_instructions,
>>>> +		.end	= (struct alt_instr *)__alt_instructions_end,
>>>> +	};
>>>> +
>>>> +	/* If called on non-boot cpu things could go wrong */
>>>> +	WARN_ON(smp_processor_id() != 0);
>>>> +
>>>> +	__apply_alternatives(&region, false, &boot_capabilities[0]);
>>>> +}
>>>> +
>>>>  #ifdef CONFIG_MODULES
>>>>  void apply_alternatives_module(void *start, size_t length)
>>>>  {
>>>> @@ -247,7 +277,10 @@ void apply_alternatives_module(void *start, size_t length)
>>>>  		.begin	= start,
>>>>  		.end	= start + length,
>>>>  	};
>>>> +	DECLARE_BITMAP(all_capabilities, ARM64_NPATCHABLE);
>>>> +
>>>> +	bitmap_fill(all_capabilities, ARM64_NPATCHABLE);
>>>>
>>>> -	__apply_alternatives(&region, true);
>>>> +	__apply_alternatives(&region, true, &all_capabilities[0]);
>>>>  }
>>>>  #endif
>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>>> index 84fa5be..71c8d4f 100644
>>>> --- a/arch/arm64/kernel/cpufeature.c
>>>> +++ b/arch/arm64/kernel/cpufeature.c
>>>> @@ -54,6 +54,9 @@
>>>>  EXPORT_SYMBOL(cpu_hwcaps);
>>>>  static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM64_NCAPS];
>>>>
>>>> +/* Need also bit for ARM64_CB_PATCH */
>>>> +DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
>>>> +
>>>>  /*
>>>>   * Flag to indicate if we have computed the system wide
>>>>   * capabilities based on the boot time active CPUs. This
>>>> @@ -1672,6 +1675,9 @@ static void update_cpu_capabilities(u16 scope_mask)
>>>>  		if (caps->desc)
>>>>  			pr_info("detected: %s\n", caps->desc);
>>>>  		cpus_set_cap(caps->capability);
>>>> +
>>>> +		if (caps->type & SCOPE_BOOT_CPU)
>>>
>>> You may want to do:
>>> 	if (scope_mask & SCOPE_BOOT_CPU)
>>>
>>> for a tighter check, to ensure this doesn't update boot_capabilities
>>> after we have already applied the boot-scope alternatives and thus miss
>>> applying the alternatives for those caps, should someone add a
>>> multi-scope (i.e. SCOPE_BOOT_CPU and something else) capability (even
>>> by mistake).
>>>
>>
>> But a multi-scope capability containing SCOPE_BOOT_CPU should already
>> get updated by setup_boot_cpu_capabilities(). Capabilities marked with
>> SCOPE_BOOT_CPU need to be enabled on the boot CPU or not at all.
>
> Yes, you're right. It is not normal to have multiple scopes for a
> "capability". But if someone comes up with such a cap, we may miss
> handling this case. It is always better to be safe.
>
>>
>> Shouldn't the call to caps->matches() fail for a boot feature that was
>> not found on the boot cpu?
>>
>> Also, you made the opposite suggestion 4 versions ago with a more
>> worrying scenario :) :
>> https://lkml.org/lkml/2018/5/25/208
>
> Ah, you're right. I missed that. We need the additional check as you
> mention below.
>
>>
>> Otherwise, if my assumption above is wrong, it means the check should
>> probably be:
>> 	if (caps->type & SCOPE_BOOT_CPU && scope_mask & SCOPE_BOOT_CPU)
>
> Yes, this is what we want.
Yes, you're right. The behaviour I was thinking of was the ID registers,
whose views are altered for following CPUs when system-wide features are
missing on the boot CPU. But ->matches() callbacks don't only rely on ID
registers.
I'll add that for the next version.
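
Concretely, I'm thinking of something along these lines for the next
version (just a sketch; the bitmap update line sits below the hunk quoted
above, so the exact helper used there may still change when I respin):

	/* In update_cpu_capabilities(), after cpus_set_cap(caps->capability): */

	/*
	 * Only record the capability for early patching when this pass
	 * actually covers SCOPE_BOOT_CPU, so that a multi-scope capability
	 * (even one added by mistake) cannot set its bit after the boot
	 * alternatives have already been applied.
	 */
	if ((caps->type & SCOPE_BOOT_CPU) && (scope_mask & SCOPE_BOOT_CPU))
		set_bit(caps->capability, boot_capabilities);
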
Thanks,
--
Julien Thierry