[PATCH] mm: documentation: standardize on "zone lock" terminology

From: Dmitry Ilvokhin

Date: Thu Mar 05 2026 - 13:36:17 EST


During review of the zone lock tracing series, it was suggested to
standardize documentation and comments on the term "zone lock" rather
than the identifier zone_lock or the internal field name zone->_lock.

Update references accordingly, adding a definite article where the
bare term would otherwise read awkwardly.

Signed-off-by: Dmitry Ilvokhin <d@xxxxxxxxxxxx>
---
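Note (below the cut, not part of the commit message): "the zone lock"
throughout refers to the spinlock guarding a zone's free lists. Since
the tracing series it is the internal field zone->_lock, taken only
through wrapper helpers; zone_unlock_irqrestore() appears in the
page_isolation.c hunk below, and the lock-side helper named in the
sketch is assumed to be its counterpart. A minimal, illustrative
critical section (kernel context assumed):

	static void walk_free_areas(struct zone *zone)
	{
		unsigned long flags;

		/* Acquire the zone lock (helper name assumed). */
		zone_lock_irqsave(zone, flags);
		/* ... inspect zone->free_area[] under the zone lock ... */
		zone_unlock_irqrestore(zone, flags);
	}
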
 Documentation/mm/physical_memory.rst |  4 ++--
 Documentation/trace/events-kmem.rst  |  8 ++++----
 mm/compaction.c                      |  2 +-
 mm/internal.h                        |  2 +-
 mm/page_alloc.c                      | 12 ++++++------
 mm/page_isolation.c                  |  4 ++--
 mm/page_owner.c                      |  2 +-
 7 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index e344f93515b6..2398d87ac156 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -500,11 +500,11 @@ General
``nr_isolate_pageblock``
Number of isolated pageblocks. It is used to solve incorrect freepage counting
problem due to racy retrieving migratetype of pageblock. Protected by
- ``zone_lock``. Defined only when ``CONFIG_MEMORY_ISOLATION`` is enabled.
+ the zone lock. Defined only when ``CONFIG_MEMORY_ISOLATION`` is enabled.

``span_seqlock``
The seqlock to protect ``zone_start_pfn`` and ``spanned_pages``. It is a
- seqlock because it has to be read outside of ``zone_lock``, and it is done in
+ seqlock because it has to be read outside of the zone lock, and it is done in
the main allocator path. However, the seqlock is written quite infrequently.
Defined only when ``CONFIG_MEMORY_HOTPLUG`` is enabled.

diff --git a/Documentation/trace/events-kmem.rst b/Documentation/trace/events-kmem.rst
index 3c20a972de27..42f08f3b136c 100644
--- a/Documentation/trace/events-kmem.rst
+++ b/Documentation/trace/events-kmem.rst
@@ -57,7 +57,7 @@ the per-CPU allocator (high performance) or the buddy allocator.

If pages are allocated directly from the buddy allocator, the
mm_page_alloc_zone_locked event is triggered. This event is important as high
-amounts of activity imply high activity on the zone_lock. Taking this lock
+amounts of activity imply high activity on the zone lock. Taking this lock
impairs performance by disabling interrupts, dirtying cache lines between
CPUs and serialising many CPUs.

@@ -79,11 +79,11 @@ contention on the lruvec->lru_lock.
mm_page_pcpu_drain page=%p pfn=%lu order=%d cpu=%d migratetype=%d

In front of the page allocator is a per-cpu page allocator. It exists only
-for order-0 pages, reduces contention on the zone_lock and reduces the
+for order-0 pages, reduces contention on the zone lock and reduces the
amount of writing on struct page.

When a per-CPU list is empty or pages of the wrong type are allocated,
-the zone_lock will be taken once and the per-CPU list refilled. The event
+the zone lock will be taken once and the per-CPU list refilled. The event
triggered is mm_page_alloc_zone_locked for each page allocated with the
event indicating whether it is for a percpu_refill or not.

@@ -92,7 +92,7 @@ which triggers a mm_page_pcpu_drain event.

The individual nature of the events is so that pages can be tracked
between allocation and freeing. A number of drain or refill pages that occur
-consecutively imply the zone_lock being taken once. Large amounts of per-CPU
+consecutively imply the zone lock being taken once. Large amounts of per-CPU
refills and drains could imply an imbalance between CPUs where too much work
is being concentrated in one place. It could also indicate that the per-CPU
lists should be a larger size. Finally, large amounts of refills on one CPU
diff --git a/mm/compaction.c b/mm/compaction.c
index 143ead2cb10a..32623894a632 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1419,7 +1419,7 @@ static bool suitable_migration_target(struct compact_control *cc,
int order = cc->order > 0 ? cc->order : pageblock_order;

/*
- * We are checking page_order without zone->_lock taken. But
+ * We are checking page_order without the zone lock taken. But
* the only small danger is that we skip a potentially suitable
* pageblock, so it's not worth to check order for valid range.
*/
diff --git a/mm/internal.h b/mm/internal.h
index f634ac469c87..95b583e7e4f7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -727,7 +727,7 @@ static inline unsigned int buddy_order(struct page *page)
* (d) a page and its buddy are in the same zone.
*
* For recording whether a page is in the buddy system, we set PageBuddy.
- * Setting, clearing, and testing PageBuddy is serialized by zone->_lock.
+ * Setting, clearing, and testing PageBuddy is serialized by the zone lock.
*
* For recording page's order, we use page_private(page).
*/
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c95364b7063..75ee81445640 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2440,7 +2440,7 @@ enum rmqueue_mode {

/*
* Do the hard work of removing an element from the buddy allocator.
- * Call me with the zone->_lock already held.
+ * Call me with the zone lock already held.
*/
static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype,
@@ -2468,7 +2468,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
* fallbacks modes with increasing levels of fragmentation risk.
*
* The fallback logic is expensive and rmqueue_bulk() calls in
- * a loop with the zone->_lock held, meaning the freelists are
+ * a loop with the zone lock held, meaning the freelists are
* not subject to any outside changes. Remember in *mode where
* we found pay dirt, to save us the search on the next call.
*/
@@ -7046,7 +7046,7 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
* pages. Because of this, we reserve the bigger range and
* once this is done free the pages we are not interested in.
*
- * We don't have to hold zone->_lock here because the pages are
+ * We don't have to hold the zone lock here because the pages are
* isolated thus they won't get removed from buddy.
*/
outer_start = find_large_buddy(start);
@@ -7615,7 +7615,7 @@ void accept_page(struct page *page)
return;
}

- /* Unlocks zone->_lock */
+ /* Unlocks the zone lock */
__accept_page(zone, &flags, page);
}

@@ -7632,7 +7632,7 @@ static bool try_to_accept_memory_one(struct zone *zone)
return false;
}

- /* Unlocks zone->_lock */
+ /* Unlocks the zone lock */
__accept_page(zone, &flags, page);

return true;
@@ -7773,7 +7773,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned

/*
* Best effort allocation from percpu free list.
- * If it's empty attempt to spin_trylock zone->_lock.
+ * If it's empty, attempt to spin_trylock the zone lock.
*/
page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index cf731370e7a7..e8414e9a718a 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -212,7 +212,7 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
zone_unlock_irqrestore(zone, flags);
if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
/*
- * printk() with zone->_lock held will likely trigger a
+ * printk() with the zone lock held will likely trigger a
* lockdep splat, so defer it here.
*/
dump_page(unmovable, "unmovable page");
@@ -553,7 +553,7 @@ void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
/*
* Test all pages in the range is free(means isolated) or not.
* all pages in [start_pfn...end_pfn) must be in the same zone.
- * zone->_lock must be held before call this.
+ * The zone lock must be held before calling this.
*
* Returns the last tested pfn.
*/
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 54a4ba63b14f..109f2f28f5b1 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -799,7 +799,7 @@ static void init_pages_in_zone(struct zone *zone)
continue;

/*
- * To avoid having to grab zone->_lock, be a little
+ * To avoid having to grab the zone lock, be a little
* careful when reading buddy page order. The only
* danger is that we skip too much and potentially miss
* some early allocated pages, which is better than
--
2.47.3