[PATCH 3/3] kvm: add a function to check if page is from NVDIMM pmem.

From: Zhang Yi
Date: Wed Jul 04 2018 - 02:52:18 EST


For device-specific memory space, the page reserved flag is set when the
pfn range is moved into a memory zone. Some of these reserved pages back
device MMIO, but others do not, such as NVDIMM pmem.
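
For illustration only (not part of the patch): a plain MMIO pfn typically
has no valid struct page at all, while a pmem pfn has a valid, reserved,
ZONE_DEVICE page, so a PageReserved()-only test lumps the two together.
A hypothetical debug helper makes the observable difference explicit
(helper name made up for this example, assumes kvm_host.h/mm.h context):

  /*
   * Hypothetical helper, illustration only: what KVM can observe about
   * a pfn.  A DAX/pmem pfn has a valid struct page with PG_reserved set
   * (it was hot-added as device memory), so PageReserved() alone cannot
   * separate it from other reserved pages.
   */
  static void show_pfn_flavour(kvm_pfn_t pfn)
  {
  	if (!pfn_valid(pfn)) {
  		pr_info("pfn 0x%llx: no struct page (typical MMIO)\n", pfn);
  		return;
  	}

  	pr_info("pfn 0x%llx: reserved=%d zone_device=%d\n", pfn,
  		PageReserved(pfn_to_page(pfn)),
  		is_zone_device_page(pfn_to_page(pfn)));
  }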

Now, when we map these dev_dax or fs_dax pages into KVM as a DIMM/NVDIMM
backend, the kvm_is_reserved_pfn() check misidentifies them as MMIO
because they are reserved. Therefore, we introduce two page map (pgmap)
types, MEMORY_DEVICE_FS_DAX and MEMORY_DEVICE_DEV_DAX, to identify pages
that come from NVDIMM pmem, and let KVM treat them as normal pages.
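
For reference, the pgmap type checked below is assigned by the driver that
hot-adds the pages; the fsdax case looks roughly like the following sketch
(loosely based on drivers/nvdimm/pmem.c, not the exact upstream code; the
dev_dax type is set analogously by the device-dax driver):

  /*
   * Sketch of how a pmem-style driver tags its pages when registering
   * them as ZONE_DEVICE memory.  Resource/altmap setup is omitted and
   * the helper name is made up for this example.
   */
  static void *pmem_register_pages(struct device *dev,
  				 struct dev_pagemap *pgmap,
  				 struct percpu_ref *ref)
  {
  	pgmap->ref = ref;			/* refcount for the mapped range */
  	pgmap->type = MEMORY_DEVICE_FS_DAX;	/* what kvm_is_nd_pfn() keys on */
  	return devm_memremap_pages(dev, pgmap);	/* hot-add pfns as ZONE_DEVICE */
  }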

Without this patch, many operations are missed due to this mistreatment
of pmem pages. For example, a page may never be unpinned for the KVM
guest (in kvm_release_pfn_clean), and cannot be marked dirty or accessed
(in kvm_set_pfn_dirty/kvm_set_pfn_accessed).
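
For reference, the helpers mentioned above all gate on
kvm_is_reserved_pfn(); they look roughly like this in virt/kvm/kvm_main.c
(shown for context, may not match the tree exactly):

  void kvm_release_pfn_clean(kvm_pfn_t pfn)
  {
  	/* With DAX pfns reported as reserved, put_page() is skipped. */
  	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
  		put_page(pfn_to_page(pfn));
  }

  void kvm_set_pfn_accessed(kvm_pfn_t pfn)
  {
  	/* Likewise, the accessed bit is never propagated for pmem pages. */
  	if (!kvm_is_reserved_pfn(pfn))
  		mark_page_accessed(pfn_to_page(pfn));
  }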

Signed-off-by: Zhang Yi <yi.z.zhang@xxxxxxxxxxxxxxx>
Signed-off-by: Zhang Yu <yu.c.zhang@xxxxxxxxxxxxxxx>
---
virt/kvm/kvm_main.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index afb2e6e..1365d18 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -140,10 +140,23 @@ __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 {
 }
 
+static bool kvm_is_nd_pfn(kvm_pfn_t pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+
+	return is_zone_device_page(page) &&
+		((page->pgmap->type == MEMORY_DEVICE_FS_DAX) ||
+		 (page->pgmap->type == MEMORY_DEVICE_DEV_DAX));
+}
+
 bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 {
-	if (pfn_valid(pfn))
-		return PageReserved(pfn_to_page(pfn));
+	struct page *page;
+
+	if (pfn_valid(pfn)) {
+		page = pfn_to_page(pfn);
+		return kvm_is_nd_pfn(pfn) ? false : PageReserved(page);
+	}
 
 	return true;
 }
--
2.7.4