Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking

From: Paolo Bonzini
Date: Tue Dec 17 2019 - 04:01:47 EST


On 16/12/19 19:54, Peter Xu wrote:
> On Mon, Dec 16, 2019 at 11:08:15AM +0100, Paolo Bonzini wrote:
>>> Although now, because we have kvm_get_running_vcpu(), all cases for
>>> [&] should be fine without changing anything, I tend to add another
>>> patch in the next post to convert all the [&] cases explicitly to
>>> pass a vcpu pointer instead of a kvm pointer, to be clear, if no one
>>> disagrees; then we can verify that against kvm_get_running_vcpu().
>>
>> This is a good idea but remember not to convert those to
>> kvm_vcpu_write_guest, because you _don't_ want these writes to touch
>> SMRAM (most of the addresses are OS-controlled rather than
>> firmware-controlled).
>
> OK. I think I only need to pass in vcpu* instead of kvm* in
> kvm_write_guest_page(), just like kvm_vcpu_write_guest(); however, we
> would still only write to address space id==0 there.

No, please pass it all the way down to the [&] functions but not to
kvm_write_guest_page. Those should keep using vcpu->kvm.
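
To make it concrete, this is roughly the shape I have in mind (just a
sketch; the function name is made up, only the kvm_* helpers are real):
the [&] caller grows a vcpu argument so that the dirty ring code can
check it against kvm_get_running_vcpu(), but the write itself still
goes through the kvm* API with vcpu->kvm, i.e. address space 0 only,
and not through kvm_vcpu_write_guest():

    #include <linux/kvm_host.h>

    /*
     * Hypothetical [&]-style caller: it takes a vcpu, but the guest
     * write still uses vcpu->kvm, so SMRAM is never touched.
     */
    static int example_write_os_page(struct kvm_vcpu *vcpu, gpa_t gpa,
                                     const void *data, unsigned long len)
    {
            WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());
            return kvm_write_guest(vcpu->kvm, gpa, data, len);
    }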

>>> init_rmode_tss or init_rmode_identity_map. But I've marked them as
>>> unimportant because they should only happen once at boot.
>>
>> We need to check if userspace can add an arbitrary number of entries by
>> calling KVM_SET_TSS_ADDR repeatedly. I think it can; we'd have to
>> forbid multiple calls to KVM_SET_TSS_ADDR which is not a problem in general.
>
> Will do that together with the series. I can further change both of
> these calls to not track dirty pages at all, which shouldn't be hard;
> after all, userspace doesn't even know about them, as you mentioned
> below.
>
> Is there anything to explain what KVM_SET_TSS_ADDR is used for? This
> is the thing I found that is closest to useful (from api.txt):

The best description is probably at https://lwn.net/Articles/658883/:

They are needed for unrestricted_guest=0. Remember that, in that case,
the VM always runs in protected mode and with paging enabled. In order
to emulate real mode you put the guest in a vm86 task, so you need some
place for a TSS and for a page table, and they must be in guest RAM
because the guest's TR and CR3 point to them. They are invisible to the
guest, because the STR and MOV-from-CR instructions are invalid in vm86
mode, but they must be there.

If you don't call KVM_SET_TSS_ADDR you actually get a complaint in
dmesg, and the TR stays at 0. I am not really sure what kind of bad
things can happen with unrestricted_guest=0; probably you just get a VM
entry failure. The TSS takes 3 pages of memory. An interesting point is
that you don't actually need to set the TR selector to a valid value
(as you would when running in "normal" vm86 mode); you can simply set
the base and limit registers that are hidden in the processor and
generally inaccessible except through VMREAD/VMWRITE or system
management mode. So KVM needs to set up a TSS but not a GDT.
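
On the VMX side the setup is roughly this (a sketch from memory, using
the VMCS field names from arch/x86/kvm/vmx; don't take it as verbatim
code): when entering emulated real mode KVM simply VMWRITEs the hidden
TR state, and the selector itself can stay 0.

    /*
     * Sketch of the enter_rmode()-style TR setup: point the hidden
     * base/limit/access rights at the 3-page TSS area given by
     * KVM_SET_TSS_ADDR.  No GDT entry is ever created for it.
     */
    vmcs_writel(GUEST_TR_BASE, tss_addr);             /* KVM_SET_TSS_ADDR */
    vmcs_write32(GUEST_TR_LIMIT, RMODE_TSS_SIZE - 1);
    vmcs_write32(GUEST_TR_AR_BYTES, 0x008b);          /* present, 32-bit busy TSS */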

For paging, instead, 1 page is enough because we have only 4GB of memory
to address. KVM disables CR4.PAE (physical address extension, aka 8-byte
entries in each page directory or page table) and enables CR4.PSE (page
size extensions, aka 4MB huge page support with 4-byte page directory
entries). One page then fits 1024 4-byte page directory entries, each
mapping a 4MB huge page, totaling exactly 4GB. If you don't set it, the
page table defaults to address 0xFFFBC000. QEMU changes it to 0xFEFFC000
so that the BIOS can be up to 16MB in size (the default only allows 256k
between 0xFFFC0000 and 0xFFFFFFFF).
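
Concretely, the identity-map setup boils down to filling that single
page with 1024 PSE entries via kvm_write_guest_page() -- which is also
exactly one of the [&] writes discussed above. A sketch (from memory,
not a verbatim copy of init_rmode_identity_map()):

    /*
     * One page, 1024 4-byte PDEs, each a present/writable 4MB PSE
     * mapping; together they cover the whole 0-4GB range.
     */
    static int example_build_identity_map(struct kvm *kvm, gfn_t identity_gfn)
    {
            u32 pde;
            int i, r;

            for (i = 0; i < 1024; i++) {
                    pde = ((u32)i << 22) | _PAGE_PRESENT | _PAGE_RW |
                          _PAGE_USER | _PAGE_PSE;
                    r = kvm_write_guest_page(kvm, identity_gfn, &pde,
                                             i * sizeof(pde), sizeof(pde));
                    if (r < 0)
                            return r;
            }
            return 0;
    }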

The different handling, where only the page table has a default, is
unfortunate, but so goes life...
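
For reference, userspace sets both with plain vm ioctls; roughly (a
sketch, not QEMU's exact code) it looks like this, with the identity
map in the first page and the TSS in the following three:

    #include <err.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* vm_fd is an already-created VM fd (KVM_CREATE_VM). */
    static void set_rmode_areas(int vm_fd)
    {
            uint64_t identity_base = 0xfeffc000;

            /* 1 page of identity-map page table at 0xfeffc000... */
            if (ioctl(vm_fd, KVM_SET_IDENTITY_MAP_ADDR, &identity_base) < 0)
                    err(1, "KVM_SET_IDENTITY_MAP_ADDR");

            /* ...followed by 3 pages of TSS starting at 0xfeffd000. */
            if (ioctl(vm_fd, KVM_SET_TSS_ADDR,
                      (unsigned long)(identity_base + 0x1000)) < 0)
                    err(1, "KVM_SET_TSS_ADDR");
    }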

> So... has it really popped into existance somewhere? It would be good
> at least to know why it does not need to be migrated.

It does not need to be migrated, simply because the contents are
constant.

Paolo