Re: [PATCH v2] rust: page: add byte-wise atomic memory copy methods

From: Danilo Krummrich

Date: Tue Feb 17 2026 - 08:55:16 EST


On Tue Feb 17, 2026 at 2:00 PM CET, Peter Zijlstra wrote:
> Anyway, I don't think something like the below is an unreasonable patch.
>
> It ensures all accesses to the ptr obtained from kmap_local_*() and
> released by kunmap_local() stays inside those two.

I'd argue that *not* enforcing this is a feature; I don't see why we would want
to enforce it when !CONFIG_HIGHMEM.

I think this is not about preventing escapes from a critical scope, but about
ensuring that we read exactly once.

> ---
> diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
> index 0574c21ca45d..2fe71b715a46 100644
> --- a/include/linux/highmem-internal.h
> +++ b/include/linux/highmem-internal.h
> @@ -185,31 +185,42 @@ static inline void kunmap(const struct page *page)
>
> static inline void *kmap_local_page(const struct page *page)
> {
> - return page_address(page);
> + void *addr = page_address(page);
> + barrier();
> + return addr;
> }
>
> static inline void *kmap_local_page_try_from_panic(const struct page *page)
> {
> - return page_address(page);
> + void *addr = page_address(page);
> + barrier();
> + return addr;
> }
>
> static inline void *kmap_local_folio(const struct folio *folio, size_t offset)
> {
> - return folio_address(folio) + offset;
> + void *addr = folio_address(folio) + offset;
> + barrier();
> + return addr;
> }
>
> static inline void *kmap_local_page_prot(const struct page *page, pgprot_t prot)
> {
> - return kmap_local_page(page);
> + void *addr = kmap_local_page(page);
> + barrier();
> + return addr;
> }
>
> static inline void *kmap_local_pfn(unsigned long pfn)
> {
> - return kmap_local_page(pfn_to_page(pfn));
> + void *addr = kmap_local_page(pfn_to_page(pfn));
> + barrier();
> + return addr;
> }
>
> static inline void __kunmap_local(const void *addr)
> {
> + barrier();
> #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
> kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE));
> #endif