Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
From: Paolo Bonzini
Date: Mon Dec 16 2019 - 04:29:45 EST
On 14/12/19 17:26, Peter Xu wrote:
> On Sat, Dec 14, 2019 at 08:57:26AM +0100, Paolo Bonzini wrote:
>> On 13/12/19 21:23, Peter Xu wrote:
>>>> What is the benefit of using u16 for that? That means with 4K pages, you
>>>> can share at most 256M of dirty memory each time? That seems low to me,
>>>> especially since it's sufficient to touch one byte in a page to dirty it.
>>>>
>>>> Actually, this is not consistent with the definition in the code ;-)
>>>> So I'll assume it's actually u32.
>>> Yes it's u32 now. Actually I believe at least Paolo would prefer u16
>>> more. :)
>>
>> It has to be u16, because it overlaps the padding of the first entry.
>
> Hmm, could you explain?
>
> Note that here what Christophe commented is on dirty_index,
> reset_index of "struct kvm_dirty_ring", so imho it could really be
> anything we want as long as it can store a u32 (which is the size of
> the elements in kvm_dirty_ring_indexes).
>
> If you were instead talking about the previous union definition of
> "struct kvm_dirty_gfns" rather than "struct kvm_dirty_ring", iiuc I've
> moved those indices out of it and defined kvm_dirty_ring_indexes which
> we expose via kvm_run, so we don't have that limitation as well any
> more?
Yeah, I meant that since the indices had to be u16 in the (now removed)
union, they need not be any wider in kvm_dirty_ring either.
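To make that concrete, here is a rough sketch of the kind of layout the
old union had (reconstructed from memory, so treat the field names as
illustrative rather than exact): the two ring indices overlapped the
4-byte pad of the first entry, which is why each of them had to fit in
a u16.

struct kvm_dirty_gfn {
	__u32 pad;
	__u32 slot;
	__u64 offset;
};

struct kvm_dirty_gfns {
	union {
		struct {
			__u16 dirty_index;	/* producer */
			__u16 reset_index;	/* consumer */
		};
		struct kvm_dirty_gfn dirty_gfns[0];
	};
};

struct kvm_dirty_ring keeps its own copies of the indices, so it has no
such overlap constraint; my point is only that u16 is already enough.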
I don't think having more than 2^16 entries in the *per-CPU* ring buffer
makes sense; lagging in recording dirty memory by more than 256 MiB per
CPU would mean a large pause later, when the ring buffers are reset (your
KVM_CLEAR_DIRTY_LOG patches found the sweet spot to be around 1 GiB for
the whole system).
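For reference, the arithmetic behind the 256 MiB figure, assuming 4 KiB
pages and one guest page per ring entry:

	2^16 entries * 4 KiB/page = 2^28 bytes = 256 MiB per ring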
So I liked the union, but if you removed it you might as well align the
producer and consumer indices to 64 bytes so that they are in separate
cache lines.
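Something along these lines, for example (only a sketch; it assumes the
avail_index/fetch_index names from kvm_dirty_ring_indexes in this series
and a 64-byte cache line):

struct kvm_dirty_ring_indexes {
	__u32 avail_index;	/* written by KVM (producer)       */
	__u8  pad0[60];		/* pad producer index to 64 bytes  */
	__u32 fetch_index;	/* written by userspace (consumer) */
	__u8  pad1[60];		/* keep consumer in its own line   */
};

That way the vCPU thread pushing entries and the thread harvesting them
don't keep bouncing the same cache line between CPUs.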
Paolo