Re: [V1 PATCH 4/6] KVM: selftests: x86: Execute VMs with private memory

From: Vishal Annapurve
Date: Thu Dec 08 2022 - 16:57:19 EST


On Mon, Nov 14, 2022 at 5:53 PM Vishal Annapurve <vannapurve@xxxxxxxxxx> wrote:
>
> On Mon, Nov 14, 2022 at 11:37 AM Peter Gonda <pgonda@xxxxxxxxxx> wrote:
> >...
> > > +static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
> > > +		struct kvm_vcpu *vcpu)
> > > +{
> > > +	uint64_t gpa, npages, attrs, size;
> > > +
> > > +	TEST_ASSERT(vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE,
> > > +		"Unhandled Hypercall %lld\n", vcpu->run->hypercall.nr);
> > > +	gpa = vcpu->run->hypercall.args[0];
> > > +	npages = vcpu->run->hypercall.args[1];
> > > +	size = npages << MIN_PAGE_SHIFT;
> > > +	attrs = vcpu->run->hypercall.args[2];
> > > +	pr_info("Explicit conversion off 0x%lx size 0x%lx to %s\n", gpa, size,
> > > +		(attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) ? "private" : "shared");
> > > +
> > > +	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
> > > +		vm_allocate_private_mem(vm, gpa, size);
> > > +	else
> > > +		vm_unback_private_mem(vm, gpa, size);
> > > +
> > > +	vcpu->run->hypercall.ret = 0;
> > > +}
> > > +
> > > +static void vcpu_work(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
> > > +		struct vm_setup_info *info)
> > > +{
> > > +	struct ucall uc;
> > > +	uint64_t cmd;
> > > +
> > > +	/*
> > > +	 * Loop until the guest is done.
> > > +	 */
> > > +
> > > +	while (true) {
> > > +		vcpu_run(vcpu);
> > > +
> > > +		if (vcpu->run->exit_reason == KVM_EXIT_IO) {
> > > +			cmd = get_ucall(vcpu, &uc);
> > > +			if (cmd != UCALL_SYNC)
> > > +				break;
> > > +
> > > +			TEST_ASSERT(info->ioexit_cb, "ioexit cb not present");
> > > +			info->ioexit_cb(vm, uc.args[1]);
> > > +			continue;
> > > +		}
> >
> > Should this be integrated into the ucall library directly somehow?
> > That way users of VMs with private memory do not need special
> > handling?
> >
> > After Sean's series:
> > https://lore.kernel.org/linux-arm-kernel/20220825232522.3997340-3-seanjc@xxxxxxxxxx/
> > we have a common get_ucall() that this check could be integrated into?
> >

The new patchset posted via [1] modifies the APIs to give the actual
selftest implementation more control.

[1] https://lore.kernel.org/lkml/20221205232341.4131240-5-vannapurve@xxxxxxxxxx/T/

> > > +
> > > +		if (vcpu->run->exit_reason == KVM_EXIT_HYPERCALL) {
> > > +			handle_vm_exit_map_gpa_hypercall(vm, vcpu);
> > > +			continue;
> > > +		}
> > > +
> > > +		TEST_FAIL("Unhandled VCPU exit reason %d\n",
> > > +			vcpu->run->exit_reason);
> > > +		break;
> > > +	}
> > > +
> > > +	if (vcpu->run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
> > > +		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
> > > +			__FILE__, uc.args[1], uc.args[2]);
> > > +}
> > > +
> > > +/*
> > > + * Execute guest vm with private memory memslots.
> > > + *
> > > + * Input Args:
> > > + * info - pointer to a structure containing information about setting up a VM
> > > + * with private memslots
> > > + *
> > > + * Output Args: None
> > > + *
> > > + * Return: None
> > > + *
> > > + * Function called by host userspace logic in selftests to execute guest vm
> > > + * logic. It installs test_mem_slot, containing the region of memory used to
> > > + * test private/shared memory accesses to memory backed by private memslots.
> > > + */
> > > +void execute_vm_with_private_test_mem(struct vm_setup_info *info)
> > > +{
> > > +	struct kvm_vm *vm;
> > > +	struct kvm_enable_cap cap;
> > > +	struct kvm_vcpu *vcpu;
> > > +	uint64_t test_area_gpa, test_area_size;
> > > +	struct test_setup_info *test_info = &info->test_info;
> > > +
> > > +	TEST_ASSERT(info->guest_fn, "guest_fn not present");
> > > +	vm = vm_create_with_one_vcpu(&vcpu, info->guest_fn);
> >
> > I am a little confused with how this library is going to work for SEV
> > VMs that want to have UPM private memory eventually.
> >
> > Why should users of UPM be forced to use this very specific VM
> > creation and vCPU run loop. In the patch
> > https://lore.kernel.org/lkml/20220829171021.701198-1-pgonda@xxxxxxxxxx/T/#m033ebc32df47a172bc6c46d4398b6c4387b7934d
> > SEV VMs need to be created specially vm_sev_create_with_one_vcpu() but
> > then callers can run the VM's vCPUs like other selftests.
> >
> > How do you see this working with SEV VMs?
> >
>
> This VM creation method can be useful for running VMs whose execution
> might call KVM_HC_MAP_GPA_RANGE to change memory attributes. A new VM
> creation method specific to SEV VMs can be introduced.
>
> I tried to reuse this framework earlier for SEV VM selftests via:
> 1) https://lore.kernel.org/lkml/20220830224259.412342-8-vannapurve@xxxxxxxxxx/T/#m8164d3111c9a17ebab77f01635df8930207cc65d
> 2) https://lore.kernel.org/lkml/20220830224259.412342-8-vannapurve@xxxxxxxxxx/T/#m8164d3111c9a17ebab77f01635df8930207cc65d
>
> Though these changes will need to be refreshed on top of the updated series.