Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page in memory.
From: Tang Chen
Date: Tue Jul 15 2014 - 08:10:48 EST
On 07/15/2014 07:52 PM, Jan Kiszka wrote:
> On 2014-07-14 16:58, Gleb Natapov wrote:
>> ......
>>>>> +	struct page *page = gfn_to_page_no_pin(vcpu->kvm,
>>>>> +			APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
>>>> If you do not use kvm->arch.apic_access_page to get the current
>>>> address, why not drop it entirely?
>>> I should also update kvm->arch.apic_access_page here. It is used in
>>> other places in kvm, so I don't think we should drop it. Will update
>>> the patch.
>> What other places? The only other place I see is in nested kvm code,
>> and you can call gfn_to_page_no_pin() there instead of using
>> kvm->arch.apic_access_page directly. But as far as I see, nested kvm
>> code cannot handle a change of the APIC_ACCESS_ADDR physical address.
>> If APIC_ACCESS_ADDR changes while a nested guest is running, the
>> non-nested vmcs will still have the old physical address. One way to
>> fix that is to set KVM_REQ_APIC_PAGE_RELOAD during nested exit.
Hi Jan,
Thanks for the reply. Please see below.
> I cannot follow your concerns yet. Specifically, how should
> APIC_ACCESS_ADDR (the VMCS field, right?) change while L2 is running?
> We currently pin/unpin on L1->L2/L2->L1, respectively. Or what do you
> mean?
Currently, we pin the nested apic page in memory, and as a result the
page cannot be migrated or hot-removed, just like the apic page for the
L1 VM.

What we want to do here is to NOT pin the page in memory. When it is
migrated, we track the hpa of the page and update the VMCS field at the
proper time.
Please refer to patch 5/5; I have done this for the L1 VM. The solution
is (a minimal sketch follows the list):

1. When the apic page is migrated, invalidate the EPT entry of the apic
   page in the mmu_notifier registered by kvm, which is
   kvm_mmu_notifier_invalidate_page() here.

2. Introduce a new vcpu request named KVM_REQ_APIC_PAGE_RELOAD, make
   this request to all the vcpus, and force them to exit from guest
   mode.

3. In the request handler, use GUP to find the new apic page, and
   update the VMCS field.
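
To make steps 2 and 3 concrete, here is a minimal, untested sketch.
gfn_to_page_no_pin() and KVM_REQ_APIC_PAGE_RELOAD are the names used in
this series; kvm_reload_apic_access_page(),
vcpu_reload_apic_access_page(), and the
kvm_x86_ops->set_apic_access_page_addr() hook are illustrative names
for how the VMCS update could be plumbed through:

	/* Step 2: queue the request and force all vcpus out of guest mode. */
	static void kvm_reload_apic_access_page(struct kvm *kvm)
	{
		int i;
		struct kvm_vcpu *vcpu;

		kvm_for_each_vcpu(i, vcpu, kvm) {
			kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
			kvm_vcpu_kick(vcpu);
		}
	}

	/* Step 3: handled in vcpu_enter_guest() before the next guest entry. */
	static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
	{
		/* Look up the new page without taking a pinning reference. */
		struct page *page = gfn_to_page_no_pin(vcpu->kvm,
				APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);

		/* Point the VMCS APIC_ACCESS_ADDR field at the new hpa. */
		kvm_x86_ops->set_apic_access_page_addr(vcpu->kvm,
						       page_to_phys(page));
	}
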
I think Gleb is trying to say that we have to face the same problem in
the nested VM.
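
If I understand his suggestion correctly, the fix could be a one-line
request on the emulated L2->L1 exit path, roughly like the untested
sketch below (nested_vmx_vmexit() is the existing exit path in vmx.c;
the exact placement is my assumption):

	/*
	 * On emulated vmexit from L2 to L1, vmcs01 may still hold a
	 * stale APIC_ACCESS_ADDR if the page was migrated while L2
	 * was running, so ask for a reload before the next L1 entry.
	 */
	kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
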
Thanks.