Re: [PATCH v3 2/8] x86/sev: Add support for the RMPREAD instruction

From: Borislav Petkov
Date: Fri Oct 18 2024 - 08:41:44 EST


On Mon, Sep 30, 2024 at 10:22:10AM -0500, Tom Lendacky wrote:
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index 103a2dd6e81d..73d4f422829a 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -301,6 +301,17 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
> {
> struct rmpentry_raw *e;
>
> + if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
> + int ret;
> +
> + asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
> + : "=a" (ret)
> + : "a" (pfn << PAGE_SHIFT), "c" (entry)
> + : "memory", "cc");
> +
> + return ret;
> + }

I think this should be:

diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
index 73d9295dd013..5500c5d64cba 100644
--- a/arch/x86/virt/svm/sev.c
+++ b/arch/x86/virt/svm/sev.c
@@ -303,12 +303,11 @@ static int get_rmpentry(u64 pfn, struct rmpentry *entry)
struct rmpentry_raw *e;

if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
- int ret;
+ int ret = pfn << PAGE_SHIFT;

asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
- : "=a" (ret)
- : "a" (pfn << PAGE_SHIFT), "c" (entry)
- : "memory", "cc");
+ : "+a" (ret), "+c" (entry)
+ :: "memory", "cc");

return ret;
}

because "The RCX register provides the effective address of a 16-byte data
structure into which the RMP state is written."

So your %rcx is both an input and an output operand, and it needs the "+" modifier here too.

Same for %rax.
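
For illustration, here's a minimal standalone sketch of the constraint form I mean (the wrapper name rmpread() and the u64 local for %rax are just mine for this example; it assumes the struct rmpentry_raw and PAGE_SHIFT context from your patch):

static int rmpread(u64 pfn, struct rmpentry_raw *entry)
{
	/* %rax carries the system-physical address in and the status out */
	u64 rax = pfn << PAGE_SHIFT;

	/* RMPREAD is encoded as F2 0F 01 FD */
	asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
		     : "+a" (rax), "+c" (entry)
		     :: "memory", "cc");

	return rax;
}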

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette