Re: [PATCH v7 12/14] KVM: selftests: Add dirty ring buffer test
From: Peter Xu
Date: Thu Mar 19 2020 - 13:02:16 EST
On Wed, Mar 18, 2020 at 12:37:18PM -0400, Peter Xu wrote:
> +static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot,
> + void *bitmap, uint32_t num_pages)
> +{
> + /* We only have one vcpu */
> + static uint32_t fetch_index = 0;
> + uint32_t count = 0, cleared;
> +
> + /*
> + * Before fetching the dirty pages, we need a vmexit of the
> + * worker vcpu to make sure the hardware dirty buffers were
> + * flushed. This is not needed for dirty-log/clear-log tests
> + * because get dirty log will naturally do so.
> + *
> + * For now we do it in the simple way - we simply wait until
> + * the vcpu uses up the soft dirty ring, then it'll always
> + * do a vmexit to make sure that PML buffers will be flushed.
> + * In real hypervisors, we probably need a vcpu kick or to
> + * stop the vcpus (before the final sync) to make sure we'll
> + * get all the existing dirty PFNs even cached in hardware.
> + */
> + sem_wait(&dirty_ring_vcpu_stop);
> +
> + /* Only have one vcpu */
> + count = dirty_ring_collect_one(vcpu_map_dirty_ring(vm, VCPU_ID),
> + slot, bitmap, num_pages, &fetch_index);
> +
> + cleared = kvm_vm_reset_dirty_ring(vm);
> +
> + /* Cleared pages should be the same as collected */
> + TEST_ASSERT(cleared == count, "Reset dirty pages (%u) mismatch "
> + "with collected (%u)", cleared, count);
> +
> + DEBUG("Notifying vcpu to continue\n");
This line is removed later in the series, so it was hidden from my
eyes too... but it will also break bisection. Fixed now.
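For reference, the fix is presumably just switching to the same
pr_info() helper that the rest of the function already uses (hedged,
since the respun hunk isn't quoted here):

-	DEBUG("Notifying vcpu to continue\n");
+	pr_info("Notifying vcpu to continue\n");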
> + sem_post(&dirty_ring_vcpu_cont);
> +
> + pr_info("Iteration %ld collected %u pages\n", iteration, count);
> +}
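For context, the vcpu worker side of this handshake is expected to be
the mirror image of the above, roughly like the sketch below (the
placement is illustrative, not the exact code in the test):

	/* vcpu thread, after a dirty-ring-full vmexit */
	sem_post(&dirty_ring_vcpu_stop);	/* let the collector drain the ring */
	sem_wait(&dirty_ring_vcpu_cont);	/* resume once the ring reset is done */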
--
Peter Xu