Re: [PATCH] arm64/mm: Simplify and document pte_to_phys() for 52 bit addresses

From: Ard Biesheuvel
Date: Mon Oct 31 2022 - 05:47:38 EST


Hello Anshuman,

On Mon, 31 Oct 2022 at 09:24, Anshuman Khandual
<anshuman.khandual@xxxxxxx> wrote:
>
> The pte_to_phys() assembly definition does multiple bit field transformations
> to derive the physical address embedded inside a page table entry. Unlike its
> C counterpart, i.e. __pte_to_phys(), pte_to_phys() is not very apparent. This
> patch simplifies these operations by deriving all positions and widths as
> macros, and by documenting the individual steps of the physical address
> extraction. While here, it also updates __pte_to_phys() and __phys_to_pte_val().
>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> Cc: Mark Brown <broonie@xxxxxxxxxx>
> Cc: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Ard Biesheuvel <ardb@xxxxxxxxxx>
> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
> ---
> This applies on v6.1-rc3.
>
> arch/arm64/include/asm/assembler.h     | 37 +++++++++++++++++++++++---
> arch/arm64/include/asm/pgtable-hwdef.h |  5 ++++
> arch/arm64/include/asm/pgtable.h       |  4 +--
> 3 files changed, 41 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index e5957a53be39..aea320b04d85 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -661,9 +661,40 @@ alternative_endif
>
> .macro pte_to_phys, phys, pte
> #ifdef CONFIG_ARM64_PA_BITS_52
> - ubfiz \phys, \pte, #(48 - 16 - 12), #16
> - bfxil \phys, \pte, #16, #32
> - lsl \phys, \phys, #16
> + /*
> + * The physical address needs to be derived from the given page
> + * table entry according to the following formula.
> + *
> + * phys = pte[47..16] | (pte[15..12] << 36)
> + *
> + * The instructions below retrieve the embedded 52-bit physical
> + * address into phys[51..0]. This involves copying both the higher
> + * and the lower address bits into phys[35..0], followed by a
> + * 16-bit left shift.
> + *
> + * Get higher 4 bits
> + *
> + * phys[35..20] = pte[15..0], i.e. phys[35..32] = pte[15..12]
> + *
> + * Get lower 32 bits
> + *
> + * phys[31..0] = pte[47..16]
> + *
> + * So far
> + *
> + * phys[35..0] = {pte[15..12], pte[47..16]}
> + *
> + * Left shift
> + *
> + * phys[51..0] = phys[35..0] << 16
> + *
> + * Finally
> + *
> + * phys = pte[47..16] | (pte[15..12] << 36)
> + */
> + ubfiz \phys, \pte, #HIGH_ADDR_SHIFT, #HIGH_ADDR_BITS_MAX
> + bfxil \phys, \pte, #PAGE_SHIFT, #(LOW_ADDR_BITS_MAX - PAGE_SHIFT)
> + lsl \phys, \phys, #PAGE_SHIFT


I think the wall of text is unnecessary, tbh. And substituting every
occurrence of the constant value 16 with PAGE_SHIFT is slightly
misleading, as the entire calculation only makes sense for 64k granule
size, but that doesn't mean the constant is intrinsically tied to the
page size.
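
To be clear, the substitution is numerically right for this
configuration: CONFIG_ARM64_PA_BITS_52 currently implies a 64k
granule, so PAGE_SHIFT == 16 and, e.g., the bfxil operands work out
as

#PAGE_SHIFT == #16
#(LOW_ADDR_BITS_MAX - PAGE_SHIFT) == #(48 - 16) == #32

which matches the original #16, #32. It just obscures which role
each 16 plays.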

> #else
> and \phys, \pte, #PTE_ADDR_MASK
> #endif


If you want to clarify this and make it more self-documenting, should
we perhaps turn it into something like

        and     \phys, \pte, #PTE_ADDR_MASK             // isolate PTE address bits
#ifdef CONFIG_ARM64_PA_BITS_52
        orr     \phys, \phys, \phys, lsl #48 - 12       // copy bits [27:12] into [63:48]
        and     \phys, \phys, #0xfffffffff0000          // retain the address bits [51:16]
#endif
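
As a sanity check, here is the same sequence modeled in C (just a
sketch; pte_to_phys_model and PTE_ADDR_MASK_52 are names I made up
for illustration):

#include <stdint.h>

/* PTE address bits [47:16] | [15:12], i.e. PTE_ADDR_MASK for PA_BITS_52 */
#define PTE_ADDR_MASK_52   ((0xffffffffULL << 16) | (0xfULL << 12))

static inline uint64_t pte_to_phys_model(uint64_t pte)
{
        uint64_t phys = pte & PTE_ADDR_MASK_52;  /* isolate PTE address bits */

        phys |= phys << (48 - 12);               /* copy bits [27:12] into [63:48] */
        return phys & 0xfffffffff0000ULL;        /* retain the address bits [51:16] */
}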


> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index 5ab8d163198f..683ca2378960 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -157,6 +157,11 @@
>
> #define PTE_ADDR_LOW (((_AT(pteval_t, 1) << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
> #ifdef CONFIG_ARM64_PA_BITS_52
> +#define LOW_ADDR_BITS_MAX 48
> +#define HIGH_ADDR_BITS_MAX 16
> +#define HIGH_ADDR_BITS_MIN 12
> +#define HIGH_ADDR_WIDTH (HIGH_ADDR_BITS_MAX - HIGH_ADDR_BITS_MIN)
> +#define HIGH_ADDR_SHIFT (LOW_ADDR_BITS_MAX - PAGE_SHIFT - PAGE_SHIFT + HIGH_ADDR_WIDTH)

Why are you subtracting PAGE_SHIFT twice here?
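
AFAICT this expands to 48 - 16 - 16 + 4 == 20, which happens to equal
the original 48 - 16 - 12, but the natural derivation of the shift
would be LOW_ADDR_BITS_MAX - PAGE_SHIFT - HIGH_ADDR_BITS_MIN, i.e.
subtracting HIGH_ADDR_BITS_MIN directly rather than subtracting a
second PAGE_SHIFT and adding HIGH_ADDR_WIDTH back.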

> #define PTE_ADDR_HIGH (_AT(pteval_t, 0xf) << 12)
> #define PTE_ADDR_MASK (PTE_ADDR_LOW | PTE_ADDR_HIGH)
> #else
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 71a1af42f0e8..014bac4a69e9 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -77,11 +77,11 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
> static inline phys_addr_t __pte_to_phys(pte_t pte)
> {
> return (pte_val(pte) & PTE_ADDR_LOW) |
> - ((pte_val(pte) & PTE_ADDR_HIGH) << 36);
> + ((pte_val(pte) & PTE_ADDR_HIGH) << (PHYS_MASK_SHIFT - PAGE_SHIFT));

Same here. PHYS_MASK_SHIFT - PAGE_SHIFT happens to equal 36, but that
does not mean the placement of the high address bits in the PTE is
fundamentally tied to the dimensions of the granule or physical
address space.

I think it makes sense to have a macro somewhere that specifies the
shift of the high address bits between a PTE and a physical address,
but it is just a property of how the ARM ARM happens to define the PTE
format, so I don't think it makes sense to define it in terms of
PAGE_SHIFT or PHYS_MASK_SHIFT.
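
Something like the below is what I have in mind (PTE_ADDR_HIGH_SHIFT
is just a name I made up here):

#define PTE_ADDR_HIGH_SHIFT    36

static inline phys_addr_t __pte_to_phys(pte_t pte)
{
        return (pte_val(pte) & PTE_ADDR_LOW) |
               ((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}

static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
{
        return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
}

with a comment next to the definition noting that 36 is simply the
distance between where the high address bits live in the descriptor
(bits [15:12]) and where they live in the physical address (bits
[51:48]).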



> }
> static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
> {
> - return (phys | (phys >> 36)) & PTE_ADDR_MASK;
> + return (phys | (phys >> (PHYS_MASK_SHIFT - PAGE_SHIFT))) & PTE_ADDR_MASK;
> }
> #else
> #define __pte_to_phys(pte) (pte_val(pte) & PTE_ADDR_MASK)
> --
> 2.25.1
>