Re: [PATCH 3/3] mm: swap: fix update_page_reclaim_stat for huge pages

From: Johannes Weiner
Date: Fri May 08 2020 - 17:51:43 EST


On Fri, May 08, 2020 at 02:22:15PM -0700, Shakeel Butt wrote:
> Currently update_page_reclaim_stat() updates the lruvec reclaim stats
> just once per page, irrespective of whether the page is huge or not.
> Fix that by passing hpage_nr_pages(page) to it.
>
> Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>

https://lore.kernel.org/patchwork/patch/685703/

Laughs, then cries.

> @@ -928,7 +928,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
> }
>
> if (!PageUnevictable(page))
> - update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
> + update_page_reclaim_stat(lruvec, file, PageActive(page_tail), 1);

The change to __pagevec_lru_add_fn() below already accounts for the
tail pages through nr_pages, so this hunk would count them twice.

> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> @@ -973,7 +973,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> if (page_evictable(page)) {
> lru = page_lru(page);
> update_page_reclaim_stat(lruvec, page_is_file_lru(page),
> - PageActive(page));
> + PageActive(page), nr_pages);
> if (was_unevictable)
> __count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
> } else {