Re: [PATCH v2 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64

From: Zhenyu Ye
Date: Tue Jul 14 2020 - 09:57:05 EST


On 2020/7/14 18:36, Catalin Marinas wrote:
> On Fri, Jul 10, 2020 at 05:44:20PM +0800, Zhenyu Ye wrote:
>> +#define __TLBI_RANGE_PAGES(num, scale) (((num) + 1) << (5 * (scale) + 1))
>> +#define MAX_TLBI_RANGE_PAGES __TLBI_RANGE_PAGES(31, 3)
>> +
>> +#define TLBI_RANGE_MASK GENMASK_ULL(4, 0)
>> +#define __TLBI_RANGE_NUM(range, scale) \
>> + (((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK)
> [...]
>> + int num = 0;
>> + int scale = 0;
> [...]
>> + start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
> [...]
>
> Since num is an int, __TLBI_RANGE_PAGES is still an int. Shifting it by
> PAGE_SHIFT can overflow as the maximum would be 8GB for 4K pages (or
> 128GB for 64K pages). I think we probably get away with this because of
> some implicit type conversion but I'd rather make __TLBI_RANGE_PAGES an
> explicit unsigned long:
>
> #define __TLBI_RANGE_PAGES(num, scale) ((unsigned long)((num) + 1) << (5 * (scale) + 1))
>

This is valuable and I will update it in the next series, together with the
check for binutils (or encoding the instructions by hand), as soon as possible.
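
For anyone following this, here is a quick standalone userspace demonstration
of the overflow (a minimal sketch, not the kernel code itself; PAGE_SHIFT is
hard-coded to 12 for 4K pages, and the _INT/_UL macro names are only for
illustration):

#include <stdio.h>

#define PAGE_SHIFT	12	/* 4K pages */

/* int arithmetic, as in the current patch */
#define __TLBI_RANGE_PAGES_INT(num, scale) \
	(((num) + 1) << (5 * (scale) + 1))
/* unsigned long arithmetic, as suggested above */
#define __TLBI_RANGE_PAGES_UL(num, scale) \
	((unsigned long)((num) + 1) << (5 * (scale) + 1))

int main(void)
{
	int num = 31, scale = 3;	/* maximum range: 32 << 16 = 2^21 pages */

	/*
	 * 2^21 pages << PAGE_SHIFT is 2^33 bytes, which does not fit in a
	 * 32-bit int; the left shift overflows (undefined behaviour, in
	 * practice usually wrapping to 0 here).
	 */
	printf("int:           %lu\n",
	       (unsigned long)(__TLBI_RANGE_PAGES_INT(num, scale) << PAGE_SHIFT));
	printf("unsigned long: %lu\n",	/* 8589934592, i.e. 8GB */
	       __TLBI_RANGE_PAGES_UL(num, scale) << PAGE_SHIFT);

	return 0;
}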

> Without this change, the CBMC check fails (see below for the test). In
> the kernel, we don't have this problem as we encode the address via
> __TLBI_VADDR_RANGE and it doesn't overflow.
>
> The good part is that CBMC reckons the algorithm is correct ;).
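
For reference, a rough sketch of how the range operand encoding you mention
looks (standalone code; the field positions follow my reading of the Armv8.4
TLBI range operand layout and are illustrative only, the in-kernel
__TLBI_VADDR_RANGE may differ in detail). The point is that the address is
shifted *right* into BADDR[36:0] of a 64-bit operand, so there is no left
shift by PAGE_SHIFT that could overflow:

#include <stdio.h>

#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/*
 * BADDR[36:0], TTL[38:37], NUM[43:39], SCALE[45:44], TG[47:46], ASID[63:48]
 * (illustrative; see the Arm ARM / the patch for the authoritative layout).
 */
static unsigned long long tlbi_vaddr_range(unsigned long long addr,
					   unsigned long long asid,
					   unsigned long long tg,
					   unsigned long long scale,
					   unsigned long long num,
					   unsigned long long ttl)
{
	unsigned long long ta = (addr >> 12) & GENMASK_ULL(36, 0); /* 4K granule */

	ta |= ttl   << 37;
	ta |= num   << 39;
	ta |= scale << 44;
	ta |= tg    << 46;
	ta |= asid  << 48;

	return ta;
}

int main(void)
{
	/* e.g. start address 0xffff000012345000, ASID 42, TG=1 (4K granule) */
	printf("operand = %#llx\n",
	       tlbi_vaddr_range(0xffff000012345000ULL, 42, 1, 3, 31, 0));
	return 0;
}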

Thanks for your test!

Zhenyu