Re: [PATCH v3 24/30] mm: memcontrol: prepare for reparenting LRU pages for lruvec lock

From: Harry Yoo

Date: Tue Jan 20 2026 - 07:56:32 EST


On Tue, Jan 20, 2026 at 07:51:29PM +0800, Qi Zheng wrote:
>
>
> On 1/20/26 4:21 PM, Harry Yoo wrote:
> > On Wed, Jan 14, 2026 at 07:32:51PM +0800, Qi Zheng wrote:
> > > From: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > >
> > > The following diagram illustrates how to ensure the safety of the folio
> > > lruvec lock when LRU folios undergo reparenting.
> > >
> > > In the folio_lruvec_lock(folio) function:
> > > ```
> > > rcu_read_lock();
> > > retry:
> > >         lruvec = folio_lruvec(folio);
> > >         /* There is a possibility of folio reparenting at this point. */
> > >         spin_lock(&lruvec->lru_lock);
> > >         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > >                 /*
> > >                  * The wrong lruvec lock was acquired, and a retry is
> > >                  * required. This is because the folio resides on the
> > >                  * parent memcg lruvec list.
> > >                  */
> > >                 spin_unlock(&lruvec->lru_lock);
> > >                 goto retry;
> > >         }
> > >
> > >         /* Reaching here indicates that folio_memcg() is stable. */
> > > ```
> > >
> > > In the memcg_reparent_objcgs(memcg) function:
> > > ```
> > > spin_lock(&lruvec->lru_lock);
> > > spin_lock(&lruvec_parent->lru_lock);
> > > /* Transfer folios from the lruvec list to the parent's. */
> > > spin_unlock(&lruvec_parent->lru_lock);
> > > spin_unlock(&lruvec->lru_lock);
> > > ```
> > >
> > > After acquiring the lruvec lock, it is necessary to verify whether
> > > the folio has been reparented. If reparenting has occurred, the new
> > > lruvec lock must be reacquired. During the LRU folio reparenting
> > > process, the lruvec lock will also be acquired (this will be
> > > implemented in a subsequent patch). Therefore, folio_memcg() remains
> > > unchanged while the lruvec lock is held.
> > >
> > > Given that lruvec_memcg(lruvec) is always equal to folio_memcg(folio)
> > > after the lruvec lock is acquired, the lruvec_memcg_debug() check is
> > > redundant. Hence, it is removed.
> > >
> > > This patch serves as a preparation for the reparenting of LRU folios.
> > >
> > > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > > Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
> > > Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > > ---
> > > include/linux/memcontrol.h | 45 +++++++++++++++++++----------
> > > include/linux/swap.h       |  1 +
> > > mm/compaction.c            | 29 +++++++++++++++----
> > > mm/memcontrol.c            | 59 +++++++++++++++++++++-----------------
> > > mm/swap.c                  |  4 +++
> > > 5 files changed, 91 insertions(+), 47 deletions(-)
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 4b6f20dc694ba..26c3c0e375f58 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -742,7 +742,15 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
> > >   * folio_lruvec - return lruvec for isolating/putting an LRU folio
> > >   * @folio: Pointer to the folio.
> > >   *
> > > - * This function relies on folio->mem_cgroup being stable.
> > > + * Call with rcu_read_lock() held to ensure the lifetime of the returned lruvec.
> > > + * Note that this alone will NOT guarantee the stability of the folio->lruvec
> > > + * association; the folio can be reparented to an ancestor if this races with
> > > + * cgroup deletion.
> > > + *
> > > + * Use folio_lruvec_lock() to ensure both lifetime and stability of the binding.
> > > + * Once a lruvec is locked, folio_lruvec() can be called on other folios, and
> > > + * their binding is stable if the returned lruvec matches the one the caller has
> > > + * locked. Useful for lock batching.
> > >   */
> > >  static inline struct lruvec *folio_lruvec(struct folio *folio)
> > >  {
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 548e67dbf2386..a1573600d4188 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > diff --git a/mm/swap.c b/mm/swap.c
> > > index cb1148a92d8ec..7e53479ca1732 100644
> > > --- a/mm/swap.c
> > > +++ b/mm/swap.c
> > > @@ -284,9 +286,11 @@ void lru_note_cost_unlock_irq(struct lruvec *lruvec, bool file,
> > >                  }
> > >                  spin_unlock_irq(&lruvec->lru_lock);
> > > +                rcu_read_unlock();
> > >                  lruvec = parent_lruvec(lruvec);
> >
> > It looks a bit weird to call parent_lruvec(lruvec) outside the RCU read
> > lock, because the whole point of holding the RCU read lock here is to
> > prevent the memory cgroup and its lruvec from being released.
> >
> > I guess this isn't broken (for now) because all callers of
> > lru_note_cost_unlock_irq() are holding a reference to the memcg?
>
> I checked all the callers again, and they do indeed hold a refcount on
> the memcg, so it's safe for now.

Thanks for double checking!

> But it seems rather fragile,

Yeah, it's fragile and

> perhaps we should also include parent_lruvec() within the RCU lock.

that would be much better.
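Something like the following (untested, just to sketch the shape) would keep
the parent_lruvec() dereference inside the RCU read-side critical section:

```
        spin_unlock_irq(&lruvec->lru_lock);
        lruvec = parent_lruvec(lruvec);
        rcu_read_unlock();
        if (!lruvec)
                break;
        rcu_read_lock();
        spin_lock_irq(&lruvec->lru_lock);
```

That way the loop doesn't silently rely on every caller pinning the memcg.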

> >
> > >                  if (!lruvec)
> > >                          break;
> > > +                rcu_read_lock();
> > >                  spin_lock_irq(&lruvec->lru_lock);
> > >          }
> > >  }

--
Cheers,
Harry / Hyeonggon