Re: [PATCH v2 06/14] arm64/mm: Hoist barriers out of set_ptes_anysz() loop

From: Catalin Marinas
Date: Sat Feb 22 2025 - 06:57:00 EST


On Mon, Feb 17, 2025 at 02:07:58PM +0000, Ryan Roberts wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index e255a36380dc..e4b1946b261f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -317,10 +317,8 @@ static inline void __set_pte_nosync(pte_t *ptep, pte_t pte)
> WRITE_ONCE(*ptep, pte);
> }
>
> -static inline void __set_pte(pte_t *ptep, pte_t pte)
> +static inline void __set_pte_complete(pte_t pte)
> {
> - __set_pte_nosync(ptep, pte);
> -
> /*
> * Only if the new pte is valid and kernel, otherwise TLB maintenance
> * or update_mmu_cache() have the necessary barriers.

Unrelated to this patch, but I just realised that this comment is stale:
we no longer do anything in update_mmu_cache() since commit 120798d2e7d1
("arm64: mm: remove dsb from update_mmu_cache"). If you respin, please
remove the update_mmu_cache() part as well.
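
For reference, something along these lines (exact wording up to you, of
course) would drop the stale clause while keeping the TLB-maintenance
rationale:

/*
 * Only if the new pte is valid and kernel, otherwise TLB maintenance
 * has the necessary barriers.
 */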

Thanks.

--
Catalin