Re: [v3 1/1] x86/cpufeatures: Implement Predictive Store Forwarding control.

From: Saripalli, RK
Date: Thu Apr 29 2021 - 10:03:24 EST

On 4/29/2021 12:13 AM, Tom Lendacky wrote:
> On 4/28/21 11:03 AM, Ramakrishna Saripalli wrote:
>> From: Ramakrishna Saripalli <rk.saripalli@xxxxxxx>
>>
>> Certain AMD processors feature a new technology called Predictive Store
>> Forwarding (PSF).
>>
>> PSF is a micro-architectural optimization designed to improve the
>> performance of code execution by predicting dependencies between
>> loads and stores.
>>
>> Incorrect PSF predictions can occur for two reasons.
>>
>> - The load/store pair may have had a dependency for a while, but the
>> dependency has since stopped because the address used by the load or
>> the store has changed.
>>
>> - The second source of incorrect PSF predictions is an alias in the
>> PSF predictor structure stored in the microarchitectural state. The
>> PSF predictor tracks load/store pairs based on portions of the
>> instruction pointer, so a load/store pair that does have a dependency
>> may be aliased by another load/store pair that does not. This can
>> result in incorrect speculation.
>>
>> Software may be able to detect this aliasing and perform side-channel
>> attacks.
>>
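To make the load/store pair wording concrete, here is a purely
illustrative pattern of the kind the PSF predictor tracks (function and
variable names are made up for this example, not from the patch):

static int psf_candidate(int *buf, int st_idx, int ld_idx, int val)
{
	buf[st_idx] = val;	/* store */
	return buf[ld_idx];	/* load; PSF may predict a dependency on the store */
}

While st_idx and ld_idx keep matching, the predictor learns the
dependency and may forward val to the load before the addresses fully
resolve; once ld_idx diverges, or another load/store pair aliases this
one in the predictor, the forwarded value can be speculatively wrong.
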
>> All CPUs that implement PSF provide one bit to disable the feature.
>> If this disable bit is available, the CPU implements PSF and is
>> therefore exposed to PSF risks.
>>
>> The bits that are introduced:
>>
>> X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
>> If this bit is 1, the CPU implements PSF and PSF control
>> via the SPEC_CTRL MSR is supported.
>>
>> All AMD processors that support PSF implement a bit in
>> SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
>> Forwarding.
>>
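As an aside, the capability bit can be checked from user space with a
small sketch like this (illustrative only, not part of the patch;
assumes GCC/Clang's <cpuid.h>):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID Fn8000_0008: EBX[28] is the PSF-disable (PSFD) capability bit. */
	if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx) && (ebx & (1u << 28)))
		printf("PSFD supported (PSF control via SPEC_CTRL MSR bit 7)\n");
	else
		printf("PSFD not enumerated\n");
	return 0;
}
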
>> PSF control introduces a new kernel parameter called
>> predict_store_fwd.
>>
>> Kernel parameter predict_store_fwd has the following values
>>
>> - off. This value is used to disable PSF on all CPUs.
>>
>> - on. This value is used to enable PSF on all CPUs.
>> This is also the default setting.
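(So, for example, booting with "predict_store_fwd=off" on the kernel
command line turns PSF off on all CPUs; omitting the parameter keeps
the default of PSF enabled.)
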
>> ---
>> ChangeLogs:
>> V2->V3:
>> Set the X86_FEATURE_MSR_SPEC_CTRL cap in boot CPU caps.
>> Fix kernel documentation for the kernel parameter.
>> Rename PSF to a control instead of mitigation.
>>
>> V1->V2:
>> - Smashed multiple commits into one commit.
>> - Rename PSF to a control instead of mitigation.
>>
>> V1:
>> - Initial patchset.
>> - Kernel parameter controls enable and disable of PSF.
>> ====================
>> Signed-off-by: Ramakrishna Saripalli <rk.saripalli@xxxxxxx>
>> ---
>> .../admin-guide/kernel-parameters.txt | 5 +++++
>> arch/x86/include/asm/cpufeatures.h | 1 +
>> arch/x86/include/asm/msr-index.h | 2 ++
>> arch/x86/kernel/cpu/amd.c | 20 +++++++++++++++++++
>> 4 files changed, 28 insertions(+)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index de27d5a4d994..0576e8a8d033 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -3950,6 +3950,11 @@
>> Format: {"off"}
>> Disable Hardware Transactional Memory
>>
>> + predict_store_fwd= [X86] This option controls PSF.
>> + off - Turns off PSF.
>> + on - Turns on PSF.
>> + default : on.
>> +
>> preempt= [KNL]
>> Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
>> none - Limited to cond_resched() calls
>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>> index 3c94316169a3..e36e6bf2f18b 100644
>> --- a/arch/x86/include/asm/cpufeatures.h
>> +++ b/arch/x86/include/asm/cpufeatures.h
>> @@ -313,6 +313,7 @@
>> #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
>> #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
>> #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
>> +#define X86_FEATURE_PSFD (13*32+28) /* Predictive Store Forwarding Disable */
>>
>> /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
>> #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>> index 742d89a00721..21f0c3fc1b2c 100644
>> --- a/arch/x86/include/asm/msr-index.h
>> +++ b/arch/x86/include/asm/msr-index.h
>> @@ -51,6 +51,8 @@
>> #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
>> #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */
>> #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */
>> +#define SPEC_CTRL_PSFD_SHIFT 7
>> +#define SPEC_CTRL_PSFD BIT(SPEC_CTRL_PSFD_SHIFT) /* Predictive Store Forwarding Disable */
>>
>> #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
>> #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
>> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
>> index 2d11384dc9ab..c9b6ba3ea431 100644
>> --- a/arch/x86/kernel/cpu/amd.c
>> +++ b/arch/x86/kernel/cpu/amd.c
>> @@ -1165,3 +1165,23 @@ void set_dr_addr_mask(unsigned long mask, int dr)
>> break;
>> }
>> }
>> +
>> +static int __init psf_cmdline(char *str)
>> +{
>> + if (!boot_cpu_has(X86_FEATURE_PSFD))
>> + return 0;
>> +
>> + if (!str)
>> + return -EINVAL;
>> +
>> + if (!strcmp(str, "off")) {
>> + set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
>> + x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
>> + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>> + setup_clear_cpu_cap(X86_FEATURE_PSFD);
>
> Why are you clearing the feature here? Won't this be needed for
> virtualization support?

Yes, this feature is needed for KVM/virtualization support, so the
feature bit should not be cleared here (see the sketch after the quoted
patch below).

>
> Thanks,
> Tom
>
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +early_param("predict_store_fwd", psf_cmdline);
>>
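
Something along these lines is what I have in mind for the next
revision: the same handler from the patch, minus the
setup_clear_cpu_cap() call, so the bit stays visible for KVM (untested
sketch):

static int __init psf_cmdline(char *str)
{
	if (!boot_cpu_has(X86_FEATURE_PSFD))
		return 0;

	if (!str)
		return -EINVAL;

	if (!strcmp(str, "off")) {
		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
		/* X86_FEATURE_PSFD stays set so KVM can expose it to guests. */
	}

	return 0;
}

early_param("predict_store_fwd", psf_cmdline);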