Re: [PATCH 1/7] KVM: Document KVM_MAP_MEMORY ioctl

From: Xu Yilun
Date: Fri Apr 19 2024 - 10:04:09 EST


On Wed, Apr 17, 2024 at 10:37:00PM +0200, Paolo Bonzini wrote:
> On Wed, Apr 17, 2024 at 10:28 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Wed, Apr 17, 2024, Paolo Bonzini wrote:
> > > +4.143 KVM_MAP_MEMORY
> > > +------------------------
> > > +
> > > +:Capability: KVM_CAP_MAP_MEMORY
> > > +:Architectures: none
> > > +:Type: vcpu ioctl
> > > +:Parameters: struct kvm_map_memory (in/out)
> > > +:Returns: 0 on success, < 0 on error

The definition of *success* here doesn't align with the comments below.
Maybe replace it with a clearer definition, e.g. 0 when all or part of
the pages are processed, and < 0 on error when no page is processed.
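
I.e., from the caller's side (a minimal sketch of that convention,
assuming a vCPU fd in vcpu_fd and a populated struct kvm_map_memory
in map):

  int ret = ioctl(vcpu_fd, KVM_MAP_MEMORY, &map);

  if (ret < 0) {
          /* error: no page was processed */
  } else if (map.size > 0) {
          /* 0: part of the range was processed, map.size bytes remain */
  } else {
          /* 0: the whole range was processed */
  }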

> > > +
> > > +Errors:
> > > +
> > > +  ========== ===============================================================
> > > +  EINVAL     The specified `base_address` and `size` were invalid (e.g. not
> > > +             page aligned or outside the defined memory slots).
> >
> > "outside the memslots" should probably be -EFAULT, i.e. keep EINVAL for things
> > that can _never_ succeed.
> >
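
Something like below? (a sketch only, not the actual patch code;
PAGE_ALIGNED(), gpa_to_gfn() and kvm_vcpu_gfn_to_memslot() are
existing kernel helpers):

  if (!PAGE_ALIGNED(map->base_address) || !PAGE_ALIGNED(map->size))
          return -EINVAL;         /* can never succeed */

  if (!kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(map->base_address)))
          return -EFAULT;         /* outside the defined memslots */
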
> > > +  EAGAIN     The ioctl should be invoked again and no page was processed.
> > > +  EINTR      An unmasked signal is pending and no page was processed.
> >
> > I'm guessing we'll want to handle large ranges, at which point we'll likely end
> > up with EAGAIN and/or EINTR after processing at least one page.
>
> Yes, in that case you get a success (return value of 0), just like read().

[...]

> >
> > > +When the ioctl returns, the input values are updated to point to the
> > > +remaining range. If `size` > 0 on return, the caller can just issue
> > > +the ioctl again with the same `struct kvm_map_memory` argument.
> >
> > This is likely misleading. Unless KVM explicitly zeros size on *every* failure,
> > a pedantic reading of this would suggest that userspace can retry and it should
> > eventually succeed.
>
> Gotcha... KVM explicitly zeros size on every success, but never zeros
> size on a failure.
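
FWIW, putting the above together, the caller side would look roughly
like below (a sketch under the semantics discussed in this thread;
struct kvm_map_memory and KVM_MAP_MEMORY are assumed to come from this
series' <linux/kvm.h>):

  #include <errno.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>  /* struct kvm_map_memory, KVM_MAP_MEMORY (this series) */

  /* Map [gpa, gpa + len) ahead of first access, retrying across
   * partial progress. Returns 0 on success, -errno on failure. */
  static int map_range(int vcpu_fd, uint64_t gpa, uint64_t len)
  {
          struct kvm_map_memory map;

          memset(&map, 0, sizeof(map));
          map.base_address = gpa;
          map.size = len;

          while (map.size) {
                  if (ioctl(vcpu_fd, KVM_MAP_MEMORY, &map) == 0)
                          continue;       /* size was zeroed, or partial progress */
                  if (errno == EINTR || errno == EAGAIN)
                          continue;       /* no page processed, retry unchanged */
                  return -errno;          /* hard error; size is never zeroed */
          }
          return 0;
  }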

Thanks,
Yilun