[PATCH v10 062/108] KVM: x86/tdp_mmu: implement MapGPA hypercall for TDX

From: Vishal Annapurve
Date: Thu Dec 08 2022 - 19:02:25 EST


> +int __kvm_mmu_map_gpa(struct kvm *kvm, gfn_t *startp, gfn_t end,
> + bool map_private)
> +{
> + gfn_t start = *startp;
> + int attr;
> + int ret;
> +
> + if (!kvm_gfn_shared_mask(kvm))
> + return -EOPNOTSUPP;
> +
> + attr = map_private ? KVM_MEM_ATTR_PRIVATE : KVM_MEM_ATTR_SHARED;
> + start = start & ~kvm_gfn_shared_mask(kvm);
> + end = end & ~kvm_gfn_shared_mask(kvm);
> +
> + /*
> + * Reserve memory attributes now so that kvm_vm_set_mem_attr() below
> + * can succeed under the spinlock without allocating memory.
> + */
> + ret = kvm_vm_reserve_mem_attr(kvm, start, end);
> + if (ret)
> + return ret;
> +
> + write_lock(&kvm->mmu_lock);
> + if (is_tdp_mmu_enabled(kvm)) {
> + gfn_t s = start;
> +
> + ret = kvm_tdp_mmu_map_gpa(kvm, &s, end, map_private);
> + if (!ret) {
> + KVM_BUG_ON(kvm_vm_set_mem_attr(kvm, attr, start, end), kvm);

This will result in no exits to userspace during memory conversion requests from
guests. As a result, userspace will not be able to explicitly back/unback
shared/private memory during conversions, leading to double allocation of memory.

Is this an intended behavior for memory conversion with TDX VMs as per earlier
discussion?

> + } else if (ret == -EAGAIN) {
> + KVM_BUG_ON(kvm_vm_set_mem_attr(kvm, attr, start, s), kvm);
> + start = s;
> + }
> + } else {
> + ret = -EOPNOTSUPP;
> + }
> + write_unlock(&kvm->mmu_lock);
> +
> + if (ret == -EAGAIN) {
> + if (map_private)