On Fri, Apr 29, 2022, Paolo Bonzini wrote:
> On 4/29/22 16:24, Sean Christopherson wrote:
> > I don't love the divergent memslot behavior, but it's technically correct, so I
> > can't really argue.  Do we want to "officially" document the memslot behavior?
> I don't know what you mean by "officially" document,
Something in kvm/api.rst under KVM_SET_USER_MEMORY_REGION.
> but at least I have relied on it to test KVM's MAXPHYADDR=52 cases before
> such hardware existed. :)
Ah, that's a very good reason to support this for shadow paging. Maybe throw
something about testing in the changelog? Without considering the testing angle,
it looks like KVM supports max=52 for !TDP just because it can, because practically
speaking there's unlikely to be a use case for exposing that much memory to a
guest when using shadow paging.