Re: [RFC PATCH 2/6] KVM: guestmem_fd: Make error_remove_page callback to unmap guest memory

From: Sean Christopherson
Date: Wed Sep 13 2023 - 12:29:00 EST


On Wed, Sep 13, 2023, isaku.yamahata@xxxxxxxxx wrote:
> @@ -316,26 +316,43 @@ static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
>  	end = start + thp_nr_pages(page);
>
>  	list_for_each_entry(gmem, gmem_list, entry) {
> +		struct kvm *kvm = gmem->kvm;
> +
> +		KVM_MMU_LOCK(kvm);
> +		kvm_mmu_invalidate_begin(kvm);
> +		KVM_MMU_UNLOCK(kvm);
> +
> +		flush = false;
>  		xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) {
> -			for (gfn = start; gfn < end; gfn++) {
> -				if (WARN_ON_ONCE(gfn < slot->base_gfn ||
> -						 gfn >= slot->base_gfn + slot->npages))
> -					continue;
> -
> -				/*
> -				 * FIXME: Tell userspace that the *private*
> -				 * memory encountered an error.
> -				 */
> -				send_sig_mceerr(BUS_MCEERR_AR,
> -						(void __user *)gfn_to_hva_memslot(slot, gfn),
> -						PAGE_SHIFT, current);
> -			}
> +			pgoff_t pgoff;
> +
> +			if (WARN_ON_ONCE(end < slot->base_gfn ||
> +					 start >= slot->base_gfn + slot->npages))
> +				continue;
> +
> +			pgoff = slot->gmem.pgoff;
> +			struct kvm_gfn_range gfn_range = {
> +				.slot = slot,
> +				.start = slot->base_gfn + max(pgoff, start) - pgoff,
> +				.end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff,
> +				.arg.page = page,
> +				.may_block = true,
> +				.memory_error = true,

Why pass arg.page and memory_error? There's no usage in this mini-series, and no
explanation of what arch code would do with the information. And I can't think of
why arch code would need to do anything but zap the SPTEs. If the memory error is
directly related to the current instruction, the vCPU will fault on the zapped
SPTE, see -EHWPOISON, and exit to userspace. If the memory error is unrelated,
then the delayed notification is less than ideal, but not fundamentally broken,
e.g. it's no worse than TDX's behavior of not signaling #MC until a poisoned
cache line is actually accessed.

I don't get arg.page in particular; if arch code does need to take action beyond
zapping SPTEs, having the gfn should be enough.

And _if_ we want to communicate the error to arch code, it would be much better
to add a dedicated arch hook instead of piggybacking kvm_mmu_unmap_gfn_range()
with a "memory_error" flag.

If we just zap SPTEs, then can't this simply be?

static int kvm_gmem_error_page(struct address_space *mapping, struct page *page)
{
	struct list_head *gmem_list = &mapping->private_list;
	struct kvm_gmem *gmem;
	pgoff_t start, end;

	filemap_invalidate_lock_shared(mapping);

	start = page->index;
	end = start + thp_nr_pages(page);

	list_for_each_entry(gmem, gmem_list, entry)
		kvm_gmem_invalidate_begin(gmem, start, end);

	/*
	 * Do not truncate the range, what action is taken in response to the
	 * error is userspace's decision (assuming the architecture supports
	 * gracefully handling memory errors). If/when the guest attempts to
	 * access a poisoned page, kvm_gmem_get_pfn() will return -EHWPOISON,
	 * at which point KVM can either terminate the VM or propagate the
	 * error to userspace.
	 */

	list_for_each_entry(gmem, gmem_list, entry)
		kvm_gmem_invalidate_end(gmem, start, end);

	filemap_invalidate_unlock_shared(mapping);

	return MF_DELAYED;
}
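
And for completeness, the -EHWPOISON side of that comment would presumably boil
down to a check along these lines in kvm_gmem_get_pfn() (sketch only, assuming
the backing memory is tracked as folios; the exact lookup/refcounting details
depend on how that path ends up being structured):

	/*
	 * Sketch: refuse to hand out a poisoned page so that the vCPU's fault
	 * exits to userspace instead of mapping known-bad memory.
	 */
	if (folio_test_hwpoison(folio)) {
		folio_unlock(folio);
		folio_put(folio);
		return -EHWPOISON;
	}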