On Tue, Jun 14, 2022, Tom Lendacky wrote:
> On 6/14/22 11:13, Sean Christopherson wrote:
> > This breaks SME on Rome and Milan when compiling with clang-13. I haven't been
> > able to figure out exactly what goes wrong. printk isn't functional at this point,
> > and interactive debug during boot on our test systems is beyond me. I can't even
> > verify that the bug is specific to clang because the draconian build system for our
> > test systems apparently is stuck pointing at gcc-4.9.
> > 
> > I suspect the issue is related to relocation and/or encrypting memory, as skipping
> > the call to early_snp_set_memory_shared() if SNP isn't active masks the issue.
> > I've dug through the assembly and haven't spotted a smoking gun, e.g. no obvious
> > use of absolute addresses.
> > 
> > Forcing a VM through the same path doesn't fail. I can't test an SEV guest at the
> > moment because INIT_EX is also broken.
> 
> I'm not sure if there's a way to remove the jump table optimization for
> the arch/x86/coco/core.c file when retpolines aren't configured.
> 
> And for post-boot I don't think we'd want to disable any such optimizations.
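If per-file suppression turns out to be the right hammer, kbuild's per-object CFLAGS should be able to do it; a sketch (untested, and assuming `-fno-jump-tables`, which both clang and gcc accept and which CONFIG_RETPOLINE builds already force globally):

```make
# arch/x86/coco/Makefile (hypothetical): make clang emit compare/branch
# sequences instead of jump tables for the early-boot C code only, so
# post-boot code keeps the optimization.
CFLAGS_core.o += -fno-jump-tables
```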
A possible "fix" would be to do what sme_encrypt_kernel() does and just query
sev_status directly. But even if that works, the fragility of the boot code is
terrifying :-( I can't think of any clever solutions though.
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index bd4a34100ed0..5efab0d8e49d 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -127,7 +127,9 @@ static bool __head check_la57_support(unsigned long physaddr)
}
#endif
-static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdval_t *pmd)
+static unsigned long __head sme_postprocess_startup(struct boot_params *bp,
+ pmdval_t *pmd,
+ unsigned long physaddr)
{
unsigned long vaddr, vaddr_end;
int i;
@@ -156,7 +158,9 @@ static unsigned long __head sme_postprocess_startup(struct boot_params *bp, pmdv
* address but the kernel is currently running off of the identity
* mapping so use __pa() to get a *currently* valid virtual address.
*/
- early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr), PTRS_PER_PMD);
+	if (sev_status & MSR_AMD64_SEV_SNP_ENABLED)
+		__early_snp_set_memory_shared(__pa(vaddr), __pa(vaddr),
+					      PTRS_PER_PMD);
i = pmd_index(vaddr);
pmd[i] -= sme_get_me_mask();
@@ -316,7 +320,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
*/
*fixup_long(&phys_base, physaddr) += load_delta - sme_get_me_mask();
- return sme_postprocess_startup(bp, pmd);
+ return sme_postprocess_startup(bp, pmd, physaddr);
}
/* Wipe all early page tables except for the kernel symbol map */