Re: [PATCH v10 16/50] x86/sev: Introduce snp leaked pages list

From: Vlastimil Babka
Date: Thu Dec 07 2023 - 11:20:45 EST


On 10/16/23 15:27, Michael Roth wrote:
> From: Ashish Kalra <ashish.kalra@xxxxxxx>
>
> Pages are unsafe to be released back to the page allocator if they
> have been transitioned to firmware/guest state and can't be reclaimed
> or transitioned back to hypervisor/shared state. In this case, add
> them to an internal leaked pages list to ensure that they are not freed

Note that adding them to the list doesn't ensure anything like that; not
dropping the refcount to zero does. But tracking them might indeed be useful
for e.g. crashdump investigations, so no objection there.
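
To spell that out (hypothetical sketch, not something this patch necessarily
needs): if a caller could otherwise end up dropping the last reference, it
would have to pin the page explicitly before chaining it, e.g.

	/*
	 * Hypothetical: take an extra reference so that a later put_page()
	 * elsewhere can't drop the refcount to zero and free the page.
	 */
	get_page(pfn_to_page(pfn));

As it is, the callers simply never put their reference, and that is what
actually prevents the free.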

> or touched/accessed, which would cause fatal page faults.
>
> Signed-off-by: Ashish Kalra <ashish.kalra@xxxxxxx>
> [mdr: relocate to arch/x86/coco/sev/host.c]
> Signed-off-by: Michael Roth <michael.roth@xxxxxxx>
> ---
> arch/x86/include/asm/sev-host.h |  3 +++
> arch/x86/virt/svm/sev.c         | 28 ++++++++++++++++++++++++++++
> 2 files changed, 31 insertions(+)
>
> diff --git a/arch/x86/include/asm/sev-host.h b/arch/x86/include/asm/sev-host.h
> index 1df989411334..7490a665e78f 100644
> --- a/arch/x86/include/asm/sev-host.h
> +++ b/arch/x86/include/asm/sev-host.h
> @@ -19,6 +19,8 @@ void sev_dump_hva_rmpentry(unsigned long address);
> int psmash(u64 pfn);
> int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, int asid, bool immutable);
> int rmp_make_shared(u64 pfn, enum pg_level level);
> +void snp_leak_pages(u64 pfn, unsigned int npages);
> +
> #else
> static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return -ENXIO; }
> static inline void sev_dump_hva_rmpentry(unsigned long address) {}
> @@ -29,6 +31,7 @@ static inline int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, int as
> 	return -ENXIO;
> }
> static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENXIO; }
> +static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
> #endif
>
> #endif
> diff --git a/arch/x86/virt/svm/sev.c b/arch/x86/virt/svm/sev.c
> index bf9b97046e05..29a69f4b8cfb 100644
> --- a/arch/x86/virt/svm/sev.c
> +++ b/arch/x86/virt/svm/sev.c
> @@ -59,6 +59,12 @@ struct rmpentry {
> static struct rmpentry *rmptable_start __ro_after_init;
> static u64 rmptable_max_pfn __ro_after_init;
>
> +/* list of pages which are leaked and cannot be reclaimed */
> +static LIST_HEAD(snp_leaked_pages_list);
> +static DEFINE_SPINLOCK(snp_leaked_pages_list_lock);
> +
> +static atomic_long_t snp_nr_leaked_pages = ATOMIC_LONG_INIT(0);
> +
> #undef pr_fmt
> #define pr_fmt(fmt) "SEV-SNP: " fmt
>
> @@ -518,3 +524,25 @@ int rmp_make_shared(u64 pfn, enum pg_level level)
> 	return rmpupdate(pfn, &val);
> }
> EXPORT_SYMBOL_GPL(rmp_make_shared);
> +
> +void snp_leak_pages(u64 pfn, unsigned int npages)
> +{
> +	struct page *page = pfn_to_page(pfn);
> +
> +	pr_debug("%s: leaking PFN range 0x%llx-0x%llx\n", __func__, pfn, pfn + npages);
> +
> +	spin_lock(&snp_leaked_pages_list_lock);
> +	while (npages--) {
> +		/*
> +		 * Reuse the page's buddy list for chaining into the leaked
> +		 * pages list. This page should not be on a free list currently
> +		 * and is also unsafe to be added to a free list.
> +		 */
> +		list_add_tail(&page->buddy_list, &snp_leaked_pages_list);
> +		sev_dump_rmpentry(pfn);
> +		pfn++;

You increment pfn but not page, so page keeps pointing to the page of the
initial pfn; you need to do page++ too.
But that assumes these are all order-0 pages (hard to tell for me, since we
start with a pfn); if there can be compound pages, it would be best to add
only the head page and skip the tail pages - it's not expected that
page->buddy_list of tail pages is used.
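
Untested sketch of what I mean (whether compound pages can actually appear
here is the open question above; PageTail() is the generic helper):

	spin_lock(&snp_leaked_pages_list_lock);
	while (npages--) {
		/*
		 * Reuse page->buddy_list for chaining, but skip tail pages
		 * of compound pages - their buddy_list is not usable.
		 */
		if (!PageTail(page))
			list_add_tail(&page->buddy_list, &snp_leaked_pages_list);
		sev_dump_rmpentry(pfn);
		pfn++;
		page++;	/* advance together with pfn */
	}
	spin_unlock(&snp_leaked_pages_list_lock);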

> +	}
> +	spin_unlock(&snp_leaked_pages_list_lock);
> +	atomic_long_inc(&snp_nr_leaked_pages);
> +}
> +EXPORT_SYMBOL_GPL(snp_leak_pages);