Re: [RFC PATCH 00/15] Make MAX_ORDER adjustable as a kernel boot time parameter.

From: David Hildenbrand
Date: Mon Aug 09 2021 - 03:20:38 EST


On 06.08.21 20:24, Zi Yan wrote:
On 6 Aug 2021, at 13:08, David Hildenbrand wrote:

On 06.08.21 18:54, Vlastimil Babka wrote:
On 8/6/21 6:16 PM, David Hildenbrand wrote:
On 06.08.21 17:36, Vlastimil Babka wrote:
On 8/5/21 9:02 PM, Zi Yan wrote:
From: Zi Yan <ziy@xxxxxxxxxx>

Patch 3 restores the pfn_valid_within() check for the case where the buddy allocator can merge
pages across memory sections. The check was removed when ARM64 got rid of holes
in zones, but holes can appear in zones again after this patchset.

To me that's a most unwelcome resurrection. I kinda missed that it was going away and
now I can't even rejoice? I assume the systems that will be bumping max_order
have a lot of memory. Are they going to have many holes? What if we just
sacrificed the memory that would have a hole and didn't add it to buddy at all?

I think the old implementation was just horrible and the description we have
here still suffers from that old crap: "but holes can appear in zones again".
No, it's not related to holes in zones at all. We can have MAX_ORDER - 1 pages
that are partially a hole.

And to be precise, "hole" here means "there is no memmap" and not "there is a
hole but it has a valid memmap".

Yes.

But IIRC, under SPARSEMEM we now always have a complete memmap for complete
memory sections (when talking about system RAM; ZONE_DEVICE is different, but we
don't really care about that for now, I think).

So instead of reintroducing what we had before, I think we should look into
something that doesn't confuse every person who stumbles over it out there. What
does pfn_valid_within() even mean in the new context? pfn_valid() is most
probably no longer what we really want, as we're dealing with multiple sections
that might be online or offline; in the old world this was different, as a
MAX_ORDER - 1 page was completely contained in a single memory section that was
either online or offline.

I'd imagine something that expresses the actual semantics in the context of
sparsemem:

"Some page orders, such as MAX_ORDER - 1, might span multiple memory sections.
Each memory section has a completely valid memmap if online. Memory sections
are either completely online or completely offline. pfn_to_online_page()
might succeed on one part of a MAX_ORDER - 1 page, but not on another part. But
it will certainly be consistent within one memory section."

Further, as we know that MAX_ORDER - 1 pages and memory sections are both powers
of two in size, we can actually do a binary search to identify boundaries,
instead of having to check each and every page in the range.
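
Just as an illustrative sketch (the helper name is made up): because each memory
section is either completely online or completely offline, one
pfn_to_online_page() probe per section is enough, instead of one per pfn. A
binary search over the sections could narrow down the exact boundary, but the
number of sections per MAX_ORDER - 1 page is small anyway:

/*
 * Sketch only, name made up. Probe one pfn per memory section; the
 * result holds for the whole section.
 */
static bool max_order_range_online(unsigned long start_pfn)
{
	const unsigned long end_pfn = start_pfn + MAX_ORDER_NR_PAGES;
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn;
	     pfn = ALIGN(pfn + 1, PAGES_PER_SECTION))
		if (!pfn_to_online_page(pfn))
			return false;
	return true;
}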

Is what I describe the actual reason why we reintroduce pfn_valid_within()? (And
might we better introduce something new, with a better-fitting name?)

What I don't like is mainly the re-addition of pfn_valid_within() (or whatever
we'd call it) into __free_one_page() for performance reasons, and also to
various pfn scanners (compaction) for performance and "I must not forget to
check this, or do I?" confusion reasons. It would be really great if we could
keep a guarantee that memmap exists for MAX_ORDER blocks. I see two ways to
achieve that:

1. we create memmap for MAX_ORDER blocks, pages in sections not online are
marked as reserved or some other state that allows us to do checks such as "is
there a buddy? no" without accessing a missing memmap
2. smaller blocks than MAX_ORDER are not released to buddy allocator

I think 1 would be more work, but less wasteful in the end?
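
For reference, this is roughly what the removed check looked like in the
__free_one_page() merge loop, and what would effectively come back:

	while (order < MAX_ORDER - 1) {
		buddy_pfn = __find_buddy_pfn(pfn, order);
		buddy = page + (buddy_pfn - pfn);

		/* The buddy's memmap might not exist at all. */
		if (!pfn_valid_within(buddy_pfn))
			goto done_merging;
		if (!page_is_buddy(page, buddy, order))
			goto done_merging;
		/* ... clear the buddy, combine, order++ ... */
	}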

It will end up seriously messing with memory hot(un)plug. It's not sufficient that there is a memmap (pfn_valid()); it has to be online (pfn_to_online_page()) to actually have a meaning.

So you'd have to allocate a memmap for all such memory sections, initialize it to all PG_reserved ("memory hole") and mark these memory sections online. Further, you'd need memory block devices that are initialized and online.

So far so good, although wasteful. What happens if someone hotplugs a memory block that doesn't span a complete MAX_ORDER - 1 page? Broken.


The only "workaround" would be requiring that a MAX_ORDER - 1 page cannot be bigger than a memory block (memory_block_size_bytes()). The memory block size determines our hot(un)plug granularity and can (on some archs) already be determined at runtime. As both (MAX_ORDER and memory_block_size_bytes()) would be determined at runtime, for example, by an admin explicitly requesting it, this might be feasible.
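
As a rough sketch of such a check (function name and fallback are made up;
assumes MAX_ORDER becomes a runtime value set by this patchset's boot parameter):

static void __init check_max_order(void)
{
	unsigned long block_nr_pages = memory_block_size_bytes() >> PAGE_SHIFT;

	if (MAX_ORDER_NR_PAGES > block_nr_pages) {
		pr_warn("max_order %d spans multiple memory blocks, lowering\n",
			MAX_ORDER);
		/* ... fall back to the largest MAX_ORDER that fits ... */
	}
}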


Memory hot(un)plug / onlining / offlining would most probably work naturally (although the hot(un)plug granularity would then be limited to, e.g., 1 GiB memory blocks). But if that's what an admin requests on the command line, so be it.

What might need some thought, though, is how such overlapping sections/memory blocks interact with devmem. Sub-section hot-add has to continue working unless we want to seriously break some PMEM devices.

Thanks a lot for your valuable inputs!

Yes, this might work. But it seems to also defeat the purpose of sparse memory, which allows memmapping only the present PFNs, right?

Not really. It will only be suboptimal in corner cases.

Except for corner cases with devmem, we already always populate the memmap for complete memory sections. Now, we would populate the memmap for all memory sections spanning a MAX_ORDER - 1 page, if that is bigger than a section.

Will it matter in practice? I doubt it.

I consider 1 GiB allocations relevant only for really big machines. There, we don't really expect to have a lot of random memory holes. On a 1 TiB machine with 1 GiB memory blocks and a 1 GiB MAX_ORDER - 1 page size, you don't expect a memory layout so fragmented that allocating additional memmap for some memory sections really makes a difference.

Also it requires a lot more intrusive changes, which might not be accepted easily.

I guess it should require quite minimal changes in contrast to what you propose. All we would have to do is:

a) Check that the configured MAX_ORDER - 1 is effectively not bigger than the memory block size

b) Initialize all sections spanning a MAX_ORDER - 1 page during boot; we won't even have to mess with memory blocks et al.

All that's required is parsing/processing early parameters in the right order.
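
For b), something along these lines (all names made up, just to show the idea):

/*
 * Make sure every section overlapping a MAX_ORDER - 1 aligned block of
 * present memory has a memmap; pages in the non-present parts would be
 * initialized as reserved "memory holes".
 */
static void __init init_sections_spanning_max_order(unsigned long start_pfn,
						    unsigned long end_pfn)
{
	unsigned long pfn;

	start_pfn = round_down(start_pfn, MAX_ORDER_NR_PAGES);
	end_pfn = round_up(end_pfn, MAX_ORDER_NR_PAGES);

	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION)
		if (!pfn_valid(pfn))
			sparse_init_hole_section(pfn); /* made up */
}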

That sounds far less intrusive than what you propose. Actually, I think what you propose would be an optimization of that approach.


I will look into the cost of the added pfn checks and try to optimize them. One thing I can think of is that these non-present PFNs should only appear at the beginning and at the end of a zone, since HOLES_IN_ZONE is gone. So maybe I just need to store and check the PFN range of a zone instead of checking memory section validity, and modify the zone PFN range during memory hot(un)plug. For offline pages in the middle of a zone, struct page still exists and PageBuddy() returns false, since PG_offline is set, right?
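
As an illustrative sketch of that idea, using the existing zone_spans_pfn() helper, the merge path would do a cheap range check instead of a per-pfn memmap check:

	/* Holes would only be outside the zone's PFN range. */
	if (!zone_spans_pfn(zone, buddy_pfn))
		goto done_merging;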

I think we can have quite some crazy "sparse" layouts where you can have random holes within a zone, not only at the beginning/end.

Offline pages can be identified using pfn_to_online_page(). You must not touch their memmap, not even to check for PageBuddy(). PG_offline is a special case where pfn_to_online_page() succeeds and the memmap is valid; however, the pages are logically offline and might get logically onlined later -- primarily used in virtualized environments, for example, with memory ballooning.

You can treat PG_offline pages as if they are online; they are just accounted differently (!managed) and shouldn't be touched, but otherwise they are just like any other allocated page.
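
So a pfn walker would follow this pattern (sketch):

	struct page *page = pfn_to_online_page(pfn);

	if (!page)
		continue;	/* offline section: memmap must not be touched */
	if (PageOffline(page))
		continue;	/* memmap valid, but page is logically offline */
	/* ... the page can now be inspected like any other page ... */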

--
Thanks,

David / dhildenb