Re: [PATCH v2 08/12] mm/mglru: simplify and improve dirty writeback handling
From: Chen Ridong
Date: Mon Apr 06 2026 - 22:53:03 EST
On 2026/4/2 8:11, Barry Song wrote:
> On Tue, Mar 31, 2026 at 5:18 PM Kairui Song <ryncsn@xxxxxxxxx> wrote:
>>
>> On Tue, Mar 31, 2026 at 04:42:59PM +0800, Baolin Wang wrote:
>>>
>>>
>>> On 3/29/26 3:52 AM, Kairui Song via B4 Relay wrote:
>>>> From: Kairui Song <kasong@xxxxxxxxxxx>
>>>>
>>>> The current handling of dirty writeback folios is not working well
>>>> for file-page-heavy workloads: dirty folios are protected and moved
>>>> to the next gen upon isolation, instead of being throttled or
>>>> reactivated during pageout (shrink_folio_list).
>>>>
>>>> This might reduce the LRU lock contention slightly, but as a result
>>>> folios ping-pong badly between the head and tail of the last two
>>>> gens, since the shrinker runs into protected dirty writeback folios
>>>> far more often than it would if they were simply activated. The
>>>> dirty flush wakeup condition is also much more passive than the
>>>> active/inactive LRU's: the active/inactive LRU wakes the flusher if
>>>> a whole batch of folios passed to shrink_folio_list is unevictable
>>>> due to being under writeback, but MGLRU can only check this after
>>>> the whole reclaim loop is done, comparing the number of folios
>>>> protected at isolation against the total reclaim number.
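
For readers comparing the two paths: the per-batch check referred to
above lives in shrink_inactive_list(). A minimal sketch of the idea,
from my reading of that function (simplified; the surrounding
throttling logic is elided):

	/*
	 * After shrink_folio_list() processes one batch: if every
	 * folio taken off the list was dirty but not yet queued for
	 * writeback, wake the flusher threads so the folios can
	 * become clean and reclaimable.
	 */
	if (stat.nr_unqueued_dirty == nr_taken)
		wakeup_flusher_threads(WB_REASON_VMSCAN);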
>>>>
>>>> We also previously saw OOM problems with this behavior; they were
>>>> fixed, but the fix is still not perfect [1].
>>>>
>>>> So instead, drop the special handling for dirty writeback folios
>>>> and simply re-activate them, as the active/inactive LRU does, and
>>>> move the dirty flush wakeup check to right after shrink_folio_list.
>>>> This should improve both throttling and performance.
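
If I read the patch right, after this change the MGLRU side ends up
with a check roughly like the sketch below inside evict_folios()
(names approximated from the counters in the removed hunk, not the
literal patch):

	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);

	/*
	 * Sketch: wake the flusher right after each batch, as the
	 * active/inactive LRU does, instead of once after the whole
	 * reclaim loop has finished.
	 */
	if (stat.nr_unqueued_dirty && stat.nr_unqueued_dirty == scanned)
		wakeup_flusher_threads(WB_REASON_VMSCAN);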
>>>>
>>>> Testing with YCSB workloadb showed a major performance improvement:
>>>>
>>>> Before this series:
>>>> Throughput(ops/sec): 61642.78008938203
>>>> AverageLatency(us): 507.11127774145166
>>>> pgpgin 158190589
>>>> pgpgout 5880616
>>>> workingset_refault 7262988
>>>>
>>>> After this commit:
>>>> Throughput(ops/sec): 80216.04855744806 (+30.1%, higher is better)
>>>> AverageLatency(us): 388.17633477268913 (-23.5%, lower is better)
>>>> pgpgin 101871227 (-35.6%, lower is better)
>>>> pgpgout 5770028
>>>> workingset_refault 3418186 (-52.9%, lower is better)
>>>>
>>>> The refault rate is ~50% lower and the throughput is ~30% higher,
>>>> which is a huge gain. We also observed significant performance
>>>> gains for other real-world workloads.
>>>>
>>>> We were concerned that the dirty flush could cause more wear for
>>>> SSDs; that should not be a problem here, since the flusher is only
>>>> woken once dirty folios have been pushed to the tail of the LRU,
>>>> which indicates that memory pressure is already so high that
>>>> writeback is blocking the workload.
>>>>
>>>> Reviewed-by: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
>>>> Link: https://lore.kernel.org/linux-mm/20241026115714.1437435-1-jingxiangzeng.cas@xxxxxxxxx/ [1]
>>>> Signed-off-by: Kairui Song <kasong@xxxxxxxxxxx>
>>>> ---
>>>> mm/vmscan.c | 57 ++++++++++++++++-----------------------------------------
>>>> 1 file changed, 16 insertions(+), 41 deletions(-)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 8de5c8d5849e..17b5318fad39 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -4583,7 +4583,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>>>> int tier_idx)
>>>> {
>>>> bool success;
>>>> - bool dirty, writeback;
>>>> int gen = folio_lru_gen(folio);
>>>> int type = folio_is_file_lru(folio);
>>>> int zone = folio_zonenum(folio);
>>>> @@ -4633,21 +4632,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>>>> return true;
>>>> }
>>>> - dirty = folio_test_dirty(folio);
>>>> - writeback = folio_test_writeback(folio);
>>>> - if (type == LRU_GEN_FILE && dirty) {
>>>> - sc->nr.file_taken += delta;
>>>> - if (!writeback)
>>>> - sc->nr.unqueued_dirty += delta;
>>>> - }
>>>> -
>>>> - /* waiting for writeback */
>>>> - if (writeback || (type == LRU_GEN_FILE && dirty)) {
>>>> - gen = folio_inc_gen(lruvec, folio, true);
>>>> - list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
>>>> - return true;
>>>> - }
>>>
>>> I'm a bit concerned about the handling of dirty folios.
>>>
>>> In the original logic, if we encounter a dirty folio, we increment its
>>> generation counter by 1 and move it to the *second oldest generation*.
>>>
>>> However, with your patch, shrink_folio_list() will activate the dirty folio
>>> by calling folio_set_active(). Then, evict_folios() -> move_folios_to_lru()
>>> will put the dirty folio back into the MGLRU list.
>>>
>>> But because folio_test_active() is true for this dirty folio, it
>>> will now be placed into the *second youngest generation* (see
>>> lru_gen_folio_seq()).
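
(For anyone following along in the code: a heavily simplified sketch
of that mapping as I understand lru_gen_folio_seq() -- the effect, not
the upstream implementation:

	if (folio_test_active(folio))
		seq = READ_ONCE(lrugen->max_seq) - 1;	/* 2nd youngest gen */
	else
		seq = READ_ONCE(lrugen->min_seq[type]);	/* oldest gen */

so a re-activated dirty folio indeed lands near the young end.)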
>>
>> Yeah, and that's exactly what we want. Otherwise, these folios
>> would stay in the oldest gen, and following scans would keep
>> seeing them and hence keep bouncing them to a younger gen again
>> and again, since they are not reclaimable.
>>
>> The writeback completion callback (folio_rotate_reclaimable) will
>> move them back to the tail once they are actually reclaimable, so
>> we are not losing any ability to reclaim them. Am I missing
>> anything?
>>
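(For completeness, the rotate-on-completion path mentioned above,
simplified from folio_end_writeback() in mm/filemap.c:

	void folio_end_writeback(struct folio *folio)
	{
		/*
		 * Reclaim set PG_reclaim when it skipped this folio
		 * under writeback; now that the IO has completed,
		 * move it back to the LRU tail so the next scan
		 * finds it first.
		 */
		if (folio_test_reclaim(folio)) {
			folio_clear_reclaim(folio);
			folio_rotate_reclaimable(folio);
		}
		/* ...writeback-state accounting elided... */
	}

so nothing is lost by activating the folio in the meantime.)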
>
> This makes sense to me. As long as folio_rotate_reclaimable()
> exists, we can move those folios back to the tail once they are
> clean and ready for reclaim.
>
> This reminds me of Ridong's patch, which tried to emulate MGLRU's
> behavior in the active/inactive LRUs by 'rotating' folios whose IO
> completed while they were isolated and thus missed
> folio_rotate_reclaimable() [1]. Not sure if that patch has managed
> to land since v7.
>
Not yet.
I checked, and Kirill's series "[PATCH 0/8] mm: Remove PG_reclaim"
has not been merged into master either.
I've rerun my original test case and confirmed that the issue can still be
reproduced.
--
Best regards,
Ridong