Re: [PATCH 6/6] mm: page_alloc: consolidate free page accounting

From: Johannes Weiner
Date: Thu Sep 14 2023 - 00:11:51 EST


On Wed, Sep 13, 2023 at 10:18:17PM +0200, Vlastimil Babka wrote:
> Pageblock migratetype is set under zone->lock but there are
> places that read it outside of zone->lock and then trust it to perform the
> freelist placement. See for example __free_pages_ok(), or free_unref_page()
> in the cases it calls free_one_page(). These determine the pageblock
> migratetype before taking the zone->lock. Only in the has_isolate_pageblock()
> cases are we more careful, because previously isolation was the only case
> where precision was needed. So I think this warning is going to trigger?

Good catch on these two. I didn't get the warning in my testing, but
the code is indeed not quite right. The fix looks straightforward:
move the lookup in those two cases under the zone->lock.
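
Something like this, as an untested sketch against this series (the
callers would stop doing the lookup themselves, and __free_pages_ok()
would get the equivalent treatment):

	static void free_one_page(struct zone *zone, struct page *page,
				  unsigned long pfn, unsigned int order,
				  fpi_t fpi_flags)
	{
		unsigned long flags;
		int migratetype;

		spin_lock_irqsave(&zone->lock, flags);
		/*
		 * Read the pageblock type with the lock held, where it
		 * can no longer change before the freelist placement.
		 */
		migratetype = get_pfnblock_migratetype(page, pfn);
		__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
		spin_unlock_irqrestore(&zone->lock, flags);
	}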

__free_pages_ok() is used

- when freeing >COSTLY order pages that aren't THPs
- when bootstrapping the allocator
- when allocating with alloc_pages_exact()
- when "accepting" unaccepted vm host memory before first use

none of which is hot enough to mind doing the bitmap access inside
the lock instead of outside. free_one_page() is used in the page
isolation and pcp-contention paths, both of which are exceptional
cases as well.

Plus, it saves two branches (under the lock) in those paths that
test for the isolate conditions, and a double lookup in case there
*is* isolation.
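
For reference, the conditional re-check that goes away is roughly the
current pattern in free_one_page():

	spin_lock_irqsave(&zone->lock, flags);
	if (unlikely(has_isolate_pageblock(zone) ||
		     is_migrate_isolate(migratetype))) {
		/* Second lookup, after the one done outside the lock */
		migratetype = get_pfnblock_migratetype(page, pfn);
	}
	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
	spin_unlock_irqrestore(&zone->lock, flags);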

I checked the code again and didn't see any other instances like
these two, but I'll double-check tomorrow and then send a fix.

> > @@ -757,23 +783,21 @@ static inline void __free_one_page(struct page *page,
> > VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> >
> > VM_BUG_ON(migratetype == -1);
> > - if (likely(!is_migrate_isolate(migratetype)))
> > - __mod_zone_freepage_state(zone, 1 << order, migratetype);
> > -
> > VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
> > VM_BUG_ON_PAGE(bad_range(zone, page), page);
> >
> > while (order < MAX_ORDER) {
> > - if (compaction_capture(capc, page, order, migratetype)) {
> > - __mod_zone_freepage_state(zone, -(1 << order),
> > - migratetype);
> > + int buddy_mt;
> > +
> > + if (compaction_capture(capc, page, order, migratetype))
> > return;
> > - }
> >
> > buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
> > if (!buddy)
> > goto done_merging;
> >
> > + buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
>
> You should assume buddy_mt equals migratetype, no? It's the same assumption
> as the VM_WARN_ONCE() I've discussed?

Ah, you're right, that lookup can be removed.

Actually, that section is a brainfart. There is an issue with
updating the buddy's type before removing it from the freelist. I was
confused why this didn't warn for me, but it's because on my test
setup pageblock_order == MAX_ORDER, since I don't have hugetlb
compiled in, so merging never crosses a pageblock boundary.

The fix looks simple. I'll test it, with pageblock order 9 as well,
and follow up.
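
Assuming buddy_mt always equals migratetype within the merge range
(the same invariant the VM_WARN_ONCE() relies on), the buddy's
freelist removal can simply reuse migratetype. A sketch:

		if (page_is_guard(buddy))
			clear_page_guard(zone, buddy, order);
		else
			del_page_from_free_list(buddy, zone, order, migratetype);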

> > @@ -801,9 +825,9 @@ static inline void __free_one_page(struct page *page,
> > * merge with it and move up one order.
> > */
> > if (page_is_guard(buddy))
> > - clear_page_guard(zone, buddy, order, migratetype);
> > + clear_page_guard(zone, buddy, order);
> > else
> > - del_page_from_free_list(buddy, zone, order);
> > + del_page_from_free_list(buddy, zone, order, buddy_mt);
>
> Ugh, so this will add an account_freepages() call to each iteration of the
> __free_one_page() hot loop, which seems like a lot of unnecessary overhead -
> as long as we are within pageblock_order, the migratetype should be the same,
> and thus the is_migrate_isolate() and is_migrate_cma() tests should return
> the same values, so we shouldn't need to call __mod_zone_page_state()
> piecemeal like this.

Good point, this is unnecessarily naive. The net effect of the buddy
merging on the counters is nil: every buddy removed from a freelist
is re-added as part of the merged page, under the same migratetype.
That holds even when we merge beyond the pageblock, because we don't
merge isolated/CMA blocks with anything except their own kind.

I'll move the accounting out into a single call.
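
A sketch of what that could look like, assuming list-only variants of
the freelist helpers (__add_to_free_list()/__del_page_from_free_list()
here are hypothetical names for versions that do the list
manipulation without touching the counters):

	/* Account the entire span once; the merge loop is net-zero */
	account_freepages(zone, 1 << order, migratetype);

	while (order < MAX_ORDER) {
		if (compaction_capture(capc, page, order, migratetype)) {
			/* The captured page leaves the allocator, undo */
			account_freepages(zone, -(1 << order), migratetype);
			return;
		}

		buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
		if (!buddy)
			goto done_merging;

		if (page_is_guard(buddy)) {
			clear_page_guard(zone, buddy, order);
		} else {
			/* List manipulation only, no counter updates */
			__del_page_from_free_list(buddy, zone, order);
		}

		combined_pfn = buddy_pfn & pfn;
		page = page + (combined_pfn - pfn);
		pfn = combined_pfn;
		order++;
	}

with the final placement after done_merging then using the matching
list-only __add_to_free_list().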

Thanks for your thorough review!