Re: [PATCH v2] mm/sparse: fix comment for section map alignment

From: David Hildenbrand (Arm)

Date: Thu Apr 02 2026 - 06:35:29 EST


On 4/2/26 12:23, Muchun Song wrote:
> The comment in mmzone.h currently details exhaustive per-architecture
> bit-width lists and explains alignment using min(PAGE_SHIFT,
> PFN_SECTION_SHIFT). Such details risk going stale over time,
> since they can easily be left un-updated as architectures change.
>
> We always expect a single section to cover full pages. Therefore,
> we can safely assume that PFN_SECTION_SHIFT is large enough to
> accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
>
> Update the comment to accurately reflect this consensus, making it
> clear that we rely on a single section covering full pages.
>
> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> ---
> v1 -> v2:
> - Drop the actual BUILD_BUG_ON logic modification (keeping the simple
> comparison) and only simplify/clarify the mmzone.h comment.
> - Add explanation explicitly noting that a single section is always
> expected to cover full pages, per discussions with David Hildenbrand
> and Andrew Morton.
> ---
> include/linux/mmzone.h | 25 ++++++++++---------------
> 1 file changed, 10 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7de42be81d4b..a071f1a0e242 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -2056,21 +2056,16 @@ static inline struct mem_section *__nr_to_section(unsigned long nr)
> extern size_t mem_section_usage_size(void);
>
> /*
> - * We use the lower bits of the mem_map pointer to store
> - * a little bit of information. The pointer is calculated
> - * as mem_map - section_nr_to_pfn(pnum). The result is
> - * aligned to the minimum alignment of the two values:
> - * 1. All mem_map arrays are page-aligned.
> - * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
> - * lowest bits. PFN_SECTION_SHIFT is arch-specific
> - * (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
> - * worst combination is powerpc with 256k pages,
> - * which results in PFN_SECTION_SHIFT equal 6.
> - * To sum it up, at least 6 bits are available on all architectures.
> - * However, we can exceed 6 bits on some other architectures except
> - * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
> - * with the worst case of 64K pages on arm64) if we make sure the
> - * exceeded bit is not applicable to powerpc.
> + * We use the lower bits of the mem_map pointer to store a little bit of
> + * information. The pointer is calculated as mem_map - section_nr_to_pfn(pnum).
> + * The result is aligned to the minimum alignment of the two values:
> + *
> + * 1. All mem_map arrays are page-aligned.
> + * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT lowest bits.
> + *
> + * We always expect a single section to cover full pages. Therefore,
> + * we can safely assume that PFN_SECTION_SHIFT is large enough to
> + * accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
> */
> enum {
> SECTION_MARKED_PRESENT_BIT,

Thanks!

Acked-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>

--
Cheers,

David