Re: [PATCH v3 06/17] virt: acrn: Introduce VM management interfaces

From: Liu, Shuo A
Date: Thu Sep 10 2020 - 22:47:35 EST


Hi Greg,

On 9/11/2020 00:28, Greg Kroah-Hartman wrote:
> On Thu, Sep 10, 2020 at 02:19:00PM +0800, Shuo A Liu wrote:
>> On Wed 9.Sep'20 at 11:45:16 +0200, Greg Kroah-Hartman wrote:
>>> On Wed, Sep 09, 2020 at 05:08:25PM +0800, shuo.a.liu@xxxxxxxxx wrote:
>>>> From: Shuo Liu <shuo.a.liu@xxxxxxxxx>
>>>>
>>>> The VM management interfaces expose several VM operations to ACRN
>>>> userspace via ioctls. For example, creating VM, starting VM, destroying
>>>> VM and so on.
>>>>
>>>> The ACRN Hypervisor needs to exchange data with the ACRN userspace
>>>> during the VM operations. HSM provides VM operation ioctls to the ACRN
>>>> userspace and communicates with the ACRN Hypervisor for VM operations
>>>> via hypercalls.
>>>>
>>>> HSM maintains a list of User VM. Each User VM will be bound to an
>>>> existing file descriptor of /dev/acrn_hsm. The User VM will be
>>>> destroyed when the file descriptor is closed.
>>>>
>>>> Signed-off-by: Shuo Liu <shuo.a.liu@xxxxxxxxx>
>>>> Reviewed-by: Zhi Wang <zhi.a.wang@xxxxxxxxx>
>>>> Reviewed-by: Reinette Chatre <reinette.chatre@xxxxxxxxx>
>>>> Cc: Zhi Wang <zhi.a.wang@xxxxxxxxx>
>>>> Cc: Zhenyu Wang <zhenyuw@xxxxxxxxxxxxxxx>
>>>> Cc: Yu Wang <yu1.wang@xxxxxxxxx>
>>>> Cc: Reinette Chatre <reinette.chatre@xxxxxxxxx>
>>>> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
>>>> ---
>>>> .../userspace-api/ioctl/ioctl-number.rst | 1 +
>>>> MAINTAINERS | 1 +
>>>> drivers/virt/acrn/Makefile | 2 +-
>>>> drivers/virt/acrn/acrn_drv.h | 22 +++++-
>>>> drivers/virt/acrn/hsm.c | 66 ++++++++++++++++
>>>> drivers/virt/acrn/hypercall.h | 78 +++++++++++++++++++
>>>> drivers/virt/acrn/vm.c | 69 ++++++++++++++++
>>>> include/uapi/linux/acrn.h | 56 +++++++++++++
>>>> 8 files changed, 293 insertions(+), 2 deletions(-)
>>>> create mode 100644 drivers/virt/acrn/hypercall.h
>>>> create mode 100644 drivers/virt/acrn/vm.c
>>>> create mode 100644 include/uapi/linux/acrn.h
>>>>

[...]

>>>> + ret = hcall_create_vm(virt_to_phys(vm_param));
>>>> + if (ret < 0 || vm_param->vmid == ACRN_INVALID_VMID) {
>>>> + dev_err(vm->dev, "Failed to create VM! Error: %d\n", ret);
>>>> + return NULL;
>>>> + }
>>>> +
>>>> + vm->vmid = vm_param->vmid;
>>>> + vm->vcpu_num = vm_param->vcpu_num;
>>>> +
>>>> + write_lock_bh(&acrn_vm_list_lock);
>>>> + list_add(&vm->list, &acrn_vm_list);
>>>
>>> Wait, why do you have a global list of devices? Shouldn't that device
>>> be tied to the vm structure? Who will be iterating this list that does
>>> not have the file handle to start with?
>>
>> Active VMs in this list will be used by the I/O requests dispatching
>> tasklet ioreq_tasklet, whose callback function is ioreq_tasklet_handler()
>> in patch 0009. ioreq_tasklet_handler() currently handles the notification
>> interrupt from the hypervisor and dispatches I/O requests to each VMs.
>
> So you need to somehow look through the whole list of devices for every
> I/O request? That feels really really wrong, why don't you have that
> pointer in the first place?
>
> Again, step back and describe what you need/desire and then think about
> how to best solve that. Almost always, a list of objects that you have
> to iterate over all the time is not the way to do it...

Each VM has a shared buffer for passing I/O requests to and from the
hypervisor. Currently, the hypervisor doesn't indicate which VMs have
pending I/O requests. So when the kernel gets the notification
interrupt, it searches all VMs' shared buffers and dispatches the
pending I/O requests.

The current I/O request dispatching implementation uses one global
tasklet (scheduled from the hypervisor notification interrupt handler),
so it needs to iterate over all VMs to do the dispatching.

Giving each VM a dedicated hypervisor notification interrupt vector
might be suitable (a vector could then be linked directly to a VM). The
disadvantage is that it might occupy many vectors.

Looking forward to more suggestions. Thanks very much.

>
> Somedays I think we need an "here's how to do the things you really need
> to do in a driver" chapter in the Linux Device Driver's book..

That would be great. :)

Thanks
shuo