[PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages

From: Sean Christopherson
Date: Wed Feb 10 2021 - 18:09:08 EST


Assert that the GPA for a memslot backed by a hugepage is 1GB-aligned,
and fix perf_test_util accordingly. Lack of GPA alignment prevents KVM
from backing the guest with hugepages, e.g. x86's write-protection of
hugepages when dirty logging is activated would otherwise go unexercised.

Add a comment explaining that guest_page_size refers to non-huge pages,
to try to avoid confusion about what it actually tracks.

Cc: Ben Gardon <bgardon@xxxxxxxxxx>
Cc: Yanan Wang <wangyanan55@xxxxxxxxxx>
Cc: Andrew Jones <drjones@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Aaron Lewis <aaronlewis@xxxxxxxxxx>
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
---
tools/testing/selftests/kvm/lib/kvm_util.c | 2 ++
tools/testing/selftests/kvm/lib/perf_test_util.c | 9 +++++++++
2 files changed, 11 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 2e497fbab6ae..855d20784ba7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -735,6 +735,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	else
 		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
 
+	ASSERT_EQ(guest_paddr, align(guest_paddr, alignment));
+
 	/* Add enough memory to align up if necessary */
 	if (alignment > 1)
 		region->mmap_size += alignment;
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 81490b9b4e32..f187b86f2e14 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -58,6 +58,11 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
 	perf_test_args.host_page_size = getpagesize();
+
+	/*
+	 * Snapshot the non-huge page size. This is used by the guest code to
+	 * access/dirty pages at the logging granularity.
+	 */
 	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
 
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
@@ -87,6 +92,10 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      perf_test_args.guest_page_size;
 	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
+	if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
+	    backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
+		guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
+
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem &= ~((1 << 20) - 1);
--
2.30.0.478.g8a0d178c01-goog