Re: [PATCH 1/1] Revert "iommu/iova: Retry from last rb tree node if iova search fails"

From: John Garry
Date: Tue Mar 02 2021 - 09:11:35 EST


On 01/03/2021 15:48, John Garry wrote:

While max32_alloc_size indirectly tracks the largest *contiguous* available space, one of the ideas from which it grew was to simply keep
count of the total number of free PFNs. If you're really spending
significant time determining that the tree is full, as opposed to just
taking longer to eventually succeed, then it might be relatively
innocuous to tack on that semi-redundant extra accounting as a
self-contained quick fix for that worst case.
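
To make that idea concrete, here is a minimal standalone C sketch of
such semi-redundant accounting; the names (toy_domain, toy_alloc,
toy_may_fit) are hypothetical and not the kernel's iova API. The point
is just the O(1) fail-fast: if fewer PFNs remain free in total than
the request size, no contiguous range of that size can exist, so the
rb-tree walk can be skipped entirely.

#include <stdbool.h>

/* Hypothetical, simplified stand-in for struct iova_domain. */
struct toy_domain {
	unsigned long free_pfns;	/* semi-redundant running count */
	unsigned long max32_alloc_size;	/* largest alloc known to fit */
};

/*
 * Quick reject before any rb-tree walk: if fewer than 'size' PFNs
 * remain free in total, no contiguous range of 'size' can exist.
 */
static bool toy_may_fit(struct toy_domain *d, unsigned long size)
{
	return size <= d->free_pfns && size < d->max32_alloc_size;
}

static bool toy_alloc(struct toy_domain *d, unsigned long size)
{
	if (!toy_may_fit(d, size))
		return false;		/* fail fast, O(1) */
	/* ... rb-tree search for a contiguous hole would go here ... */
	d->free_pfns -= size;		/* keep the count in sync */
	return true;
}

static void toy_free(struct toy_domain *d, unsigned long size)
{
	d->free_pfns += size;
}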

Anyway, we see a ~50% throughput regression, which is intolerable. As
seen in [0], I put this down to the fact that so many IOVA requests
exceed the rcache size limit, meaning many RB tree accesses for
non-cacheable IOVAs, and those accesses are now slower.

I will attempt to prove this by increasing RCACHE RANGE, such that all IOVA sizes may be cached.

On this point: as expected, we restore performance by increasing the RCACHE RANGE.
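
For reference, the gate which decides whether a request is
rcache-eligible looks roughly like the below (paraphrased from
drivers/iommu/iova.c around v5.12, so check the tree for the exact
code); the experiment amounts to raising the order constant from 6 to
10 so that larger requests also hit the per-CPU caches instead of the
rb-tree:

/* Paraphrased from drivers/iommu/iova.c (v5.12-era); not verbatim. */
#define IOVA_RANGE_CACHE_MAX_SIZE_ORDER	6	/* experiment: 6 -> 10 */

static unsigned long iova_rcache_get(struct iova_domain *iovad,
				     unsigned long size,
				     unsigned long limit_pfn)
{
	unsigned int log_size = order_base_2(size);

	/*
	 * Requests at or above the max cached order bypass the rcache
	 * and fall back to the rb-tree allocator, which is where the
	 * slowdown is seen.
	 */
	if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE_ORDER)
		return 0;

	return __iova_rcache_get(&iovad->rcaches[log_size],
				 limit_pfn - size);
}

Note that iovad->rcaches[] is dimensioned by the same constant, so
raising it also grows the per-CPU magazine arrays accordingly.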

Some figures:
Baseline v5.12-rc1

strict mode:
600K IOPS

Revert "iommu/iova: Retry from last rb tree node if iova search fails":
1215K

Increase IOVA RCACHE range 6 -> 10 (all IOVA size requests now cacheable for this experiment):
1400K

Reduce LLDD max SGE count 124 -> 16:
1288K

non-strict mode:
1650K

So ideally we can work towards something for which IOVAs of all sizes could be cached.

Cheers,
John