Re: [PATCH] of: Rework and simplify phandle cache to use a fixed size

From: Jon Hunter
Date: Mon Jan 13 2020 - 06:12:29 EST



On 10/01/2020 23:50, Rob Herring wrote:
> On Tue, Jan 7, 2020 at 4:22 AM Jon Hunter <jonathanh@xxxxxxxxxx> wrote:
>>
>> Hi Rob,
>>
>> On 11/12/2019 23:23, Rob Herring wrote:
>>> The phandle cache was added to speed up of_find_node_by_phandle() by
>>> avoiding walking the whole DT to find a matching phandle. The
>>> implementation has several shortcomings:
>>>
>>> - The cache is designed to work on a linear set of phandle values.
>>> This is true for dtc generated DTs, but not for other cases such as
>>> Power.
>>> - The cache isn't enabled until of_core_init() and a typical system
>>> may see hundreds of calls to of_find_node_by_phandle() before that
>>> point.
>>> - The cache is freed and re-allocated when the number of phandles
>>> changes.
>>> - It takes a raw spinlock around a memory allocation which breaks on
>>> RT.
>>>
>>> Change the implementation to a fixed size and use hash_32() as the
>>> cache index. This greatly simplifies the implementation. It avoids
>>> the need to re-allocate the cache and to take a reference on nodes in
>>> the cache. The only place cache entries are removed is
>>> of_detach_node().
>>>
>>> Using hash_32() removes any assumption on phandle values, improving
>>> the hit rate for non-linear phandle values. The effect on linear values
>>> using hash_32() is about a 10% collision rate. The chances of thrashing
>>> on colliding values seem to be low.
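
For reference, the fixed-size scheme described above boils down to
roughly the sketch below (names such as OF_PHANDLE_CACHE_BITS and the
exact locking are my approximation of the patch, not copied from it):

#define OF_PHANDLE_CACHE_BITS   7       /* 128 entries */
#define OF_PHANDLE_CACHE_SZ     BIT(OF_PHANDLE_CACHE_BITS)

static struct device_node *phandle_cache[OF_PHANDLE_CACHE_SZ];

struct device_node *of_find_node_by_phandle(phandle handle)
{
        struct device_node *np = NULL;
        unsigned long flags;
        u32 idx;

        if (!handle)
                return NULL;

        idx = hash_32(handle, OF_PHANDLE_CACHE_BITS);

        raw_spin_lock_irqsave(&devtree_lock, flags);

        /* Hit: the slot is populated and its phandle matches */
        if (phandle_cache[idx] && phandle_cache[idx]->phandle == handle)
                np = phandle_cache[idx];

        /* Miss: walk all nodes and remember the result in the slot */
        if (!np) {
                for_each_of_allnodes(np)
                        if (np->phandle == handle &&
                            !of_node_check_flag(np, OF_DETACHED)) {
                                phandle_cache[idx] = np;
                                break;
                        }
        }

        of_node_get(np);
        raw_spin_unlock_irqrestore(&devtree_lock, flags);

        return np;
}

A colliding phandle simply overwrites the slot on its next miss, so
thrashing only matters if two frequently looked-up phandles hash to the
same index.
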
>>>
>>> To compare performance, I used a RK3399 board which is a pretty typical
>>> system. I found that just measuring boot time as done previously is
>>> noisy and may be impacted by other things. Also bringing up secondary
>>> cores causes some issues with measuring, so I booted with 'nr_cpus=1'.
>>> With no caching, calls to of_find_node_by_phandle() take about 20124 us
>>> for 1248 calls. There are an additional 288 calls before timekeeping is
>>> up. Using the average time per hit/miss with the cache, we can calculate
>>> these calls to take 690 us (277 hit / 11 miss) with a 128 entry cache
>>> and 13319 us with no cache or an uninitialized cache.
>>>
>>> Comparing the 3 implementations the time spent in
>>> of_find_node_by_phandle() is:
>>>
>>> no cache: 20124 us (+ 13319 us)
>>> 128 entry cache: 5134 us (+ 690 us)
>>> current cache: 819 us (+ 13319 us)
>>>
>>> We could move the allocation of the cache earlier to improve the
>>> current cache, but that just further complicates the situation as it
>>> needs to be after slab is up, so we can't do it when unflattening (which
>>> uses memblock).
>>>
>>> Reported-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
>>> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
>>> Cc: Segher Boessenkool <segher@xxxxxxxxxxxxxxxxxxx>
>>> Cc: Frank Rowand <frowand.list@xxxxxxxxx>
>>> Signed-off-by: Rob Herring <robh@xxxxxxxxxx>
>>
>> With next-20200106 I have noticed a regression on Tegra210 where it
>> appears that only one of the eMMC devices is being registered. Bisect is
>> pointing to this patch and reverting on top of next fixes the problem.
>> That is as far as I have got so far, so if you have any ideas, please
>> let me know. Unfortunately, there do not appear to be any obvious errors
>> from the bootlog.
>
> I guess that's tegra210-p2371-2180.dts because none of the others have
> 2 SD hosts enabled. I don't see anything obvious though. Are you doing
> any runtime mods to the DT?

I have noticed that the bootloader is doing some runtime mods, so I am
checking whether this is the cause. I will let you know, but that is the
most likely explanation, seeing as I cannot find anything wrong with this
change itself.

Cheers
Jon

--
nvpublic