Re: [RFC PATCH v3 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64

From: Catalin Marinas
Date: Wed May 20 2020 - 13:08:08 EST


On Mon, May 18, 2020 at 08:21:02PM +0800, Zhenyu Ye wrote:
> On 2020/5/14 23:28, Catalin Marinas wrote:
> > On Tue, Apr 14, 2020 at 07:28:35PM +0800, Zhenyu Ye wrote:
> >> + }
> >> + scale++;
> >> + range_size >>= TLB_RANGE_MASK_SHIFT;
> >> + }
> >
> > So, you start from scale 0 and increment it until you reach the maximum.
> > I think (haven't done the maths on paper) you could also start from the
> > top with something like scale = ilog2(range_size) / 5. Not sure it's
> > significantly better though, maybe avoiding the loop 3 times if your
> > range is 2MB (which happens with huge pages).
>
> This optimization is only effective when the range is a multiple of 256KB
> (when the page size is 4KB), and I'm worried about the performance
> of ilog2(). I traced __flush_tlb_range() last year and found that in
> most cases the range is less than 256KB (see details in [1]).

THP or hugetlbfs would exercise bigger strides, but I guess it depends on
the use-case. ilog2() should reduce to a few instructions on arm64 AFAICT
(I haven't tried, but it should end up using the CLZ instruction).
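
Something along these lines is what I had in mind (untested sketch; the
helper name is made up and it assumes range_size is a page count with
TLB_RANGE_MASK_SHIFT == 5, as in your patch above):

	#include <linux/log2.h>

	/*
	 * Untested: derive the starting scale from the range size instead
	 * of looping up from scale 0.  range_size is in pages.
	 */
	static inline int tlbi_start_scale(unsigned long range_size)
	{
		if (range_size <= 1)
			return 0;

		/* ilog2() compiles down to a CLZ-based sequence on arm64 */
		return ilog2(range_size) / TLB_RANGE_MASK_SHIFT;
	}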

> > Anyway, I think it would be more efficient if we combine the
> > __flush_tlb_range() and the _directly one into the same function with a
> > single loop for both. For example, if the stride is 2MB already, we can
> > handle this with a single classic TLBI without all the calculations for
> > the range operation. The hardware may also handle this better since the
> > software already told it there can be only one entry in that 2MB range.
> > So each loop iteration could figure which operation to use based on
> > cpucaps, TLBI range ops, stride and reduce range_size accordingly.
>
> To summarize your suggestion in one sentence: use the 'stride' to optimize
> the performance of TLBI. This could also be done by splitting it into two
> functions, and it should indeed be taken into account in the TLBI RANGE
> feature.
>
> But if we figure out which operation to use based on cpucaps in each loop
> iteration, then cpus_have_const_cap() will be called frequently, which may
> affect the performance of TLBI. In my opinion, we should do as few checks
> as possible inside the loop, so checking the cpucaps outside the loop may
> be the better choice.

cpus_have_const_cap() is backed by a static key, so it should get patched
to a branch or a nop. My point was that in the classic __flush_tlb_range()
loop, instead of a fixed addr += stride we could have something more
dynamic, depending on whether the CPU supports range TLBI ops or not. But
we would indeed have more (statically patched) branches in the loop, so
possibly some performance degradation.

If the code looks ok, I'd favour this and we can look at the
optimisation later. But I can't really tell how the code would look
without attempting to merge the two.
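
Roughly the shape I have in mind (untested skeleton only;
system_supports_tlb_range(), tlbi_classic_one() and tlbi_range_op() are
placeholders for whatever helpers the series ends up with, and the
address/scale encoding is left out):

	unsigned long pages = (end - start) >> PAGE_SHIFT;
	unsigned long covered;

	while (pages > 0) {
		/* classic path: no range ops, or only one stride left */
		if (!system_supports_tlb_range() ||
		    pages <= (stride >> PAGE_SHIFT)) {
			tlbi_classic_one(start);	/* one TLBI per stride */
			start += stride;
			pages -= stride >> PAGE_SHIFT;
			continue;
		}

		/*
		 * Range path: pick the largest scale/num that fits the
		 * remaining pages, issue one range TLBI, then advance by
		 * the number of pages it covered.
		 */
		covered = tlbi_range_op(start, pages, stride);
		start += covered << PAGE_SHIFT;
		pages -= covered;
	}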

Anyway, a first step would be to add the range and stride to the decision
(i.e. (end - start) / stride > 1) before jumping to the range operations.
You can avoid the additional checks in the new TLBI functions since we know
we have at least two (huge) pages.
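
i.e. something like (untested; ARM64_HAS_TLBI_RANGE and the '_directly'
function name are only placeholders for whatever your series calls them):

	/* use the range ops only when there is more than one stride */
	if (cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
	    (end - start) > stride) {
		__flush_tlb_range_directly(vma, start, end, stride, last_level);
		return;
	}

	/* otherwise fall through to the classic per-stride loop */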

--
Catalin