On Mar 19, 2020, at 2:13 AM, Joerg Roedel <joro@xxxxxxxxxx> wrote:
From: Tom Lendacky <thomas.lendacky@xxxxxxx>
The runtime handler needs a GHCB per CPU. Set them up and map them
unencrypted.
Signed-off-by: Tom Lendacky <thomas.lendacky@xxxxxxx>
Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
---
arch/x86/include/asm/mem_encrypt.h | 2 ++
arch/x86/kernel/sev-es.c | 28 +++++++++++++++++++++++++++-
arch/x86/kernel/traps.c | 3 +++
3 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index c17980e8db78..4bf5286310a0 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -197,6 +203,26 @@ static bool __init sev_es_setup_ghcb(void)
 	return true;
 }
 
+void sev_es_init_ghcbs(void)
+{
+	int cpu;
+
+	if (!sev_es_active())
+		return;
+
+	/* Allocate GHCB pages */
+	ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
+
+	/* Initialize per-cpu GHCB pages */
+	for_each_possible_cpu(cpu) {
+		struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu);
+
+		set_memory_decrypted((unsigned long)ghcb,
+				     sizeof(*ghcb) >> PAGE_SHIFT);
+		memset(ghcb, 0, sizeof(*ghcb));
+	}
+}
+
The return value of set_memory_decrypted() needs to be checked here. I
see it consistently return -ENOMEM, which I've traced back to
split_large_page() in arch/x86/mm/pat/set_memory.c.
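Something along these lines is what I had in mind -- a sketch only, not a
tested patch: it keeps the helpers from the posted hunk and just adds
failure handling, panicking on error since a CPU without a usable GHCB
cannot take a #VC exception safely (whether panic() or a softer fallback
is right is a judgment call for the authors):

void sev_es_init_ghcbs(void)
{
	int cpu;

	if (!sev_es_active())
		return;

	/* Allocate GHCB pages */
	ghcb_page = __alloc_percpu(sizeof(struct ghcb), PAGE_SIZE);
	if (!ghcb_page)
		panic("Can't allocate SEV-ES GHCB per-cpu pages");

	/* Initialize per-cpu GHCB pages */
	for_each_possible_cpu(cpu) {
		struct ghcb *ghcb = (struct ghcb *)per_cpu_ptr(ghcb_page, cpu);

		/* Propagate failure instead of silently continuing */
		if (set_memory_decrypted((unsigned long)ghcb,
					 sizeof(*ghcb) >> PAGE_SHIFT))
			panic("Can't map GHCB page unencrypted");

		memset(ghcb, 0, sizeof(*ghcb));
	}
}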