Re: [PATCH] mm: fix LRU balancing effect of new transparent huge pages

From: Andrew Morton
Date: Mon May 11 2020 - 17:58:22 EST


On Mon, 11 May 2020 14:38:23 -0700 Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:

> On Mon, May 11, 2020 at 2:11 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Sat, 9 May 2020 07:19:46 -0700 Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
> >
> > > Currently, THPs are counted as single pages until they are split right
> > > before being swapped out. However, at that point the VM is already in
> > > the middle of reclaim, and adjusting the LRU balance then is useless.
> > >
> > > Always account THPs by their number of base pages, and remove the
> > > fixup from the splitting path.
> >
> > Confused. What kernel is this applicable to?
>
> It is still applicable to the latest Linux kernel.

The patch has

> @@ -288,7 +288,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
>
>  		__count_vm_events(PGACTIVATE, nr_pages);
>  		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
> -		update_page_reclaim_stat(lruvec, file, 1);
> +		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
>  	}
>  }

but current mainline is quite different:

static void __activate_page(struct page *page, struct lruvec *lruvec,
			    void *arg)
{
	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
		int file = page_is_file_lru(page);
		int lru = page_lru_base_type(page);

		del_page_from_lru_list(page, lruvec, lru);
		SetPageActive(page);
		lru += LRU_ACTIVE;
		add_page_to_lru_list(page, lruvec, lru);
		trace_mm_lru_activate(page);

		__count_vm_event(PGACTIVATE);
		update_page_reclaim_stat(lruvec, file, 1);
	}
}
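
Side by side, the mismatch is visible: the hunk's context lines expect
__count_vm_events(PGACTIVATE, nr_pages) plus a __count_memcg_events()
call that 5.7-rc5 mainline does not have, so the patch was presumably
generated on top of another series. Reconstructed from the hunk context,
the function it expects to apply to would look roughly like this (a
sketch, with nr_pages assumed to come from hpage_nr_pages(); not taken
from the patch itself):

static void __activate_page(struct page *page, struct lruvec *lruvec,
			    void *arg)
{
	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
		int file = page_is_file_lru(page);
		int lru = page_lru_base_type(page);
		/* Assumption: 512 base pages for a 2MB THP, 1 otherwise. */
		int nr_pages = hpage_nr_pages(page);

		del_page_from_lru_list(page, lruvec, lru);
		SetPageActive(page);
		lru += LRU_ACTIVE;
		add_page_to_lru_list(page, lruvec, lru);
		trace_mm_lru_activate(page);

		__count_vm_events(PGACTIVATE, nr_pages);
		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
	}
}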

q:/usr/src/linux-5.7-rc5> patch -p1 --dry-run < ~/x.txt
checking file mm/swap.c
Hunk #2 FAILED at 288.
Hunk #3 FAILED at 546.
Hunk #4 FAILED at 564.
Hunk #5 FAILED at 590.
Hunk #6 succeeded at 890 (offset -9 lines).
Hunk #7 succeeded at 915 (offset -9 lines).
Hunk #8 succeeded at 958 with fuzz 2 (offset -10 lines).
4 out of 8 hunks FAILED
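
For reference, the callee-side change the hunk implies: update_page_reclaim_stat()
grows an nr_pages argument so a THP is accounted by its number of base
pages rather than as a single page. A minimal sketch, assuming the
5.7-era struct zone_reclaim_stat counters are simply scaled (the patch
body itself is not quoted above):

static void update_page_reclaim_stat(struct lruvec *lruvec,
				     int file, int rotated,
				     unsigned int nr_pages)
{
	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;

	/* Assumption: bump both counters by the base-page count. */
	reclaim_stat->recent_scanned[file] += nr_pages;
	if (rotated)
		reclaim_stat->recent_rotated[file] += nr_pages;
}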