Re: [PATCH v1 12/16] arm64/mm: Support huge pte-mapped pages in vmap

From: Ryan Roberts
Date: Thu Feb 13 2025 - 04:15:47 EST



>>>> +#define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size
>>>> +static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr,
>>>> +						unsigned long end, u64 pfn,
>>>> +						unsigned int max_page_shift)
>>>> +{
>>>> +	if (max_page_shift < CONT_PTE_SHIFT)
>>>> +		return PAGE_SIZE;
>>>> +
>>>> +	if (end - addr < CONT_PTE_SIZE)
>>>> +		return PAGE_SIZE;
>>>> +
>>>> +	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
>>>> +		return PAGE_SIZE;
>>>> +
>>>> +	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
>>>> +		return PAGE_SIZE;
>>>> +
>>>> +	return CONT_PTE_SIZE;
>>>> +}
>>>
>>> A small nit:
>>>
>>> Should the rationale behind picking CONT_PTE_SIZE be added as an in-code
>>> comment in the function - just to make things a bit clearer?
>>
>> I'm not sure what other size we would pick?
>
> The suggestion was to add a small comment in the above helper function
> explaining the rationale for the various conditions that return either
> PAGE_SIZE or CONT_PTE_SIZE, to improve readability.

OK, I've added the following:

/*
* If the block is at least CONT_PTE_SIZE in size, and is naturally
* aligned in both virtual and physical space, then we can pte-map the
* block using the PTE_CONT bit for more efficient use of the TLB.
*/
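
To make the conditions concrete, here is a small standalone sketch of the
same decision logic (userspace C, illustrative only, not kernel code; the
constants are hard-coded for the 4K granule case, where CONT_PTE_SIZE is
16 pages, i.e. 64K):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define CONT_PTE_SHIFT		16	/* PAGE_SHIFT + 4 on a 4K granule */
#define CONT_PTE_SIZE		(1UL << CONT_PTE_SHIFT)	/* 16 ptes = 64K */

#define PFN_PHYS(pfn)		((uint64_t)(pfn) << PAGE_SHIFT)
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Mirrors the helper above: pick the largest size we can safely pte-map. */
static unsigned long map_size(unsigned long addr, unsigned long end,
			      uint64_t pfn, unsigned int max_page_shift)
{
	/* The caller asked for nothing larger than a page. */
	if (max_page_shift < CONT_PTE_SHIFT)
		return PAGE_SIZE;

	/* Not enough of the range left to cover a whole contpte block. */
	if (end - addr < CONT_PTE_SIZE)
		return PAGE_SIZE;

	/* The virtual address is not naturally aligned to the block. */
	if (!IS_ALIGNED(addr, CONT_PTE_SIZE))
		return PAGE_SIZE;

	/* The physical address is not naturally aligned to the block. */
	if (!IS_ALIGNED(PFN_PHYS(pfn), CONT_PTE_SIZE))
		return PAGE_SIZE;

	return CONT_PTE_SIZE;
}

int main(void)
{
	/* 64K-aligned VA and PA, 128K remaining: full contpte block. */
	printf("%lu\n", map_size(0x100000, 0x120000, 0x100, CONT_PTE_SHIFT));

	/* VA only page-aligned: falls back to a single pte. */
	printf("%lu\n", map_size(0x101000, 0x120000, 0x101, CONT_PTE_SHIFT));

	return 0;
}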

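Running the sketch prints 65536 for the aligned case and 4096 for the
misaligned one. For context, with a 4K granule CONT_PTE_SIZE is 64K (16
ptes); with 16K it is 2M (128 ptes) and with 64K it is 2M (32 ptes). The
generic vmap code advances addr and pfn by whatever size this hook
returns, so each CONT_PTE_SIZE return covers one naturally aligned block
that the TLB can represent with a single contiguous entry.
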
Thanks,
Ryan