Re: [PATCH] KVM: arm64: Limit stage2_apply_range() batch size to smallest block

From: Krister Johansen
Date: Thu Apr 04 2024 - 00:40:45 EST


On Tue, Apr 02, 2024 at 10:00:53AM -0700, Krister Johansen wrote:
> On Sat, Mar 30, 2024 at 10:17:43AM +0000, Marc Zyngier wrote:
> > On Fri, 29 Mar 2024 19:15:37 +0000,
> > Krister Johansen <kjlx@xxxxxxxxxxxxxxxxxx> wrote:
> > > On Fri, Mar 29, 2024 at 06:48:38AM -0700, Oliver Upton wrote:
> > > > On Thu, Mar 28, 2024 at 12:05:08PM -0700, Krister Johansen wrote:
> > > > > Further reducing the stage2_apply_range() batch size yields substantial
> > > > > performance improvements for IO that shares a CPU with an unmap
> > > > > operation. By switching to a 2MB chunk, IO performance regressions were
> > > > > no longer observed in this author's tests. E.g., it was possible to
> > > > > obtain the advertised device throughput despite an unmap operation
> > > > > occurring on the CPU where the interrupt was running. There is a
> > > > > tradeoff, however. No changes were observed in per-operation timings
> > > > > when running the kvm_pagetable_test without an interrupt load, but
> > > > > with a 64GB VM, 1 vCPU, 4K pages, and an IO load, map times increased
> > > > > by about 15% and unmap times increased by about 58%. In essence, this
> > > > > trades slower map/unmap times for improved IO throughput.
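
For anyone skimming the thread, the change under discussion boils down to
capping each batch at the smallest block size. Here's a rough sketch in C of
the idea; this is illustrative only, not the actual patch, and the names
(apply_range_batched, stage2_op_fn, print_batch) are made up:

#include <stdio.h>
#include <stdint.h>

#define SMALLEST_BLOCK_SIZE (2UL << 20) /* 2MiB with a 4K granule */

typedef int (*stage2_op_fn)(uint64_t addr, uint64_t size);

static int apply_range_batched(uint64_t addr, uint64_t end, stage2_op_fn op)
{
	while (addr < end) {
		/* End this batch at the next 2MiB boundary, or at 'end'. */
		uint64_t next = (addr + SMALLEST_BLOCK_SIZE) &
				~(SMALLEST_BLOCK_SIZE - 1);
		uint64_t batch_end = next < end ? next : end;
		int ret = op(addr, batch_end - addr);

		if (ret)
			return ret;

		/*
		 * The kernel would cond_resched() here; the smaller the
		 * batch, the sooner pending interrupts on this CPU run.
		 */
		addr = batch_end;
	}
	return 0;
}

static int print_batch(uint64_t addr, uint64_t size)
{
	printf("batch: 0x%llx + 0x%llx\n",
	       (unsigned long long)addr, (unsigned long long)size);
	return 0;
}

int main(void)
{
	/* A 7MiB range starting 1MiB in: expect 1M, 2M, 2M, 2M batches. */
	return apply_range_batched(1UL << 20, 8UL << 20, print_batch);
}
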
> > > >
> > > > There are other users of the range-based operations, like
> > > > write-protection. Live migration is especially sensitive to the latency
> > > > of page table updates as it can affect the VMM's ability to converge
> > > > with the guest.
> > >
> > > To be clear, the reduction in performance was observed when I
> > > concurrently executed both the kvm_pagetable_test and a networking
> > > benchmark, with the NIC's interrupts assigned to the same CPU on which
> > > the pagetable test was executing. I didn't see a slowdown just running
> > > the pagetable test.
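
In case it helps with reproduction: the interrupt pinning in that test was
done through the usual /proc/irq/<N>/smp_affinity interface. A minimal
sketch is below; the IRQ number and CPU are made-up examples, and this
needs root:

#include <stdio.h>

int main(void)
{
	/* Hypothetical example: steer IRQ 42 to CPU 3. Replace 42 with
	 * the NIC queue's IRQ number from /proc/interrupts. */
	FILE *f = fopen("/proc/irq/42/smp_affinity", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* smp_affinity takes a hex cpumask; bit 3 set => CPU 3. */
	fprintf(f, "%x\n", 1 << 3);
	fclose(f);
	return 0;
}
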
> >
> > Any chance you could share more details about your HW configuration
> > (what CPU is that?) and the type of traffic? This is the sort of
> > thing I'd like to be able to reproduce in order to experiment with
> > various strategies.
>
> Sure, I only have access to documentation that is publicly available.
>
> The hardware where we ran into this initially was Graviton 3, which is
> a Neoverse-V1 based core. It does not support FEAT_TLBIRANGE. I've
> also tested on Graviton 4, which is Neoverse-V2 based. It _does_
> support FEAT_TLBIRANGE. The deferred range-based invalidation support
> was enough to allow us to tear down a large VM backed by 4K pages
> without incurring a visible performance penalty. I haven't had a chance
> to test to see if and how Will's patches change this, though.
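
(If anyone wants to check their own hardware: FEAT_TLBIRANGE is advertised
in the TLB field of ID_AA64ISAR0_EL1, bits [59:56], where a value of 2
means range invalidation is supported. Linux traps and emulates userspace
reads of the ID registers, so a rough probe like the one below should work,
assuming your kernel exposes that field.)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t isar0;

	/* Trapped and emulated by the kernel on arm64 Linux. */
	asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r"(isar0));

	unsigned int tlb = (isar0 >> 56) & 0xf;

	printf("ID_AA64ISAR0_EL1.TLB = %u (%s)\n", tlb,
	       tlb >= 2 ? "FEAT_TLBIRANGE supported"
			: "no range invalidation");
	return 0;
}
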

Just a quick followup: I did test Will's patches and didn't find that they
changed the performance of the workload I'd been testing. IOW, I wasn't
able to discern a network performance difference between the baseline and
those changes.

Thanks,

-K