Re: [PATCH Part2 v5 06/45] x86/sev: Invalid pages from direct map when adding it to RMP table

From: Borislav Petkov
Date: Wed Sep 29 2021 - 10:34:34 EST


On Fri, Aug 20, 2021 at 10:58:39AM -0500, Brijesh Singh wrote:
> Subject: Re: [PATCH Part2 v5 06/45] x86/sev: Invalid pages from direct map when adding it to RMP table

That subject needs to have a verb. I think that verb should be
"Invalidate".

> The integrity guarantee of SEV-SNP is enforced through the RMP table.
> The RMP is used with standard x86 and IOMMU page tables to enforce memory
> restrictions and page access rights. The RMP check is enforced as soon as
> SEV-SNP is enabled globally in the system. When hardware encounters an
> RMP check failure, it raises a page-fault exception.
>
> The rmp_make_private() and rmp_make_shared() helpers are used to add
> or remove the pages from the RMP table.

> Improve the rmp_make_private() to
> invalid state so that pages cannot be used in the direct-map after its
> added in the RMP table, and restore to its default valid permission after
> the pages are removed from the RMP table.

That sentence needs rewriting into proper English.

The more important thing, though, is that this doesn't explain *why*
you're doing this: do you want to remove pages from the direct map
while they're in the RMP table because something might modify the page
and the RMP check would then fail?

Also, set_direct_map_invalid_noflush() simply clears the Present and RW
bits of a PTE.
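
For reference, that helper in arch/x86/mm/pat/set_memory.c boils down
to something like this (a paraphrased sketch, not the verbatim source -
the real implementation goes through __set_pages_np() and a cpa_data
struct):

	int set_direct_map_invalid_noflush(struct page *page)
	{
		unsigned long addr = (unsigned long)page_address(page);

		/* Clear Present and RW on the direct-map PTE, no TLB flush. */
		return change_page_attr_clear(&addr, 1,
					      __pgprot(_PAGE_PRESENT | _PAGE_RW), 0);
	}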

So what's up?

> Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
> ---
> arch/x86/kernel/sev.c | 61 ++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 60 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 8627c49666c9..bad41deb8335 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -2441,10 +2441,42 @@ int psmash(u64 pfn)
> }
> EXPORT_SYMBOL_GPL(psmash);
>
> +static int restore_direct_map(u64 pfn, int npages)

restore_pages_in_direct_map()

> +{
> + int i, ret = 0;
> +
> + for (i = 0; i < npages; i++) {
> + ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
> + if (ret)
> + goto cleanup;
> + }

So this is looping over a set of virtually contiguous pages, I presume,
and if so, you should add a function called

set_memory_p_rw()

to arch/x86/mm/pat/set_memory.c which does

return change_page_attr_set(&addr, numpages,
			    __pgprot(_PAGE_PRESENT | _PAGE_RW), 0);

so that you can do all pages in one go.
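
Spelled out, that helper would look something like this (a sketch;
change_page_attr_set() is the existing static helper in set_memory.c,
so this would live next to the other set_memory_*() routines):

	int set_memory_p_rw(unsigned long addr, int numpages)
	{
		/* Set Present and RW on numpages mappings starting at addr. */
		return change_page_attr_set(&addr, numpages,
					    __pgprot(_PAGE_PRESENT | _PAGE_RW), 0);
	}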

> +
> +cleanup:
> + WARN(ret > 0, "Failed to restore direct map for pfn 0x%llx\n", pfn + i);
> + return ret;
> +}
> +
> +static int invalid_direct_map(unsigned long pfn, int npages)

invalidate_pages_in_direct_map()

or so.

> +{
> + int i, ret = 0;
> +
> + for (i = 0; i < npages; i++) {
> + ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));

Same as above but that helper should do the reverse:

set_memory_np_ro()
{
	return change_page_attr_clear(&addr, numpages,
				      __pgprot(_PAGE_PRESENT | _PAGE_RW), 0);
}
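
With both helpers in place, each of the loops above collapses to a
single call covering the whole range, along these lines (a sketch;
pfn_to_kaddr() gives the direct-map address for a pfn):

	static int invalidate_pages_in_direct_map(unsigned long pfn, int npages)
	{
		return set_memory_np_ro((unsigned long)pfn_to_kaddr(pfn), npages);
	}

	static int restore_pages_in_direct_map(unsigned long pfn, int npages)
	{
		return set_memory_p_rw((unsigned long)pfn_to_kaddr(pfn), npages);
	}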

Btw, please add those helpers in a separate patch.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette