Re: [PATCH v5 09/16] kexec: enable KHO support for memory preservation

From: Pratyush Yadav
Date: Thu Mar 27 2025 - 06:05:08 EST


Hi Changyuan,

On Wed, Mar 19 2025, Changyuan Lyu wrote:

> From: "Mike Rapoport (Microsoft)" <rppt@xxxxxxxxxx>
>
> Introduce APIs allowing KHO users to preserve memory across kexec and
> get access to that memory after boot of the kexeced kernel
>
> kho_preserve_folio() - record a folio to be preserved over kexec
> kho_restore_folio() - recreates the folio from the preserved memory
> kho_preserve_phys() - record physically contiguous range to be
> preserved over kexec.
> kho_restore_phys() - recreates order-0 pages corresponding to the
> preserved physical range
>
> The memory preservations are tracked by two levels of xarrays to manage
> chunks of per-order 512 byte bitmaps. For instance the entire 1G order
> of a 1TB x86 system would fit inside a single 512 byte bitmap. For
> order 0 allocations each bitmap will cover 16M of address space. Thus,
> for 16G of memory at most 512K of bitmap memory will be needed for order 0.
>
> At serialization time all bitmaps are recorded in a linked list of pages
> for the next kernel to process and the physical address of the list is
> recorded in KHO FDT.

Why build the xarray only to transform it down to bitmaps when you can
build the bitmaps from the get-go? Doing both ends up wasting time and
memory. At least from this patch, I don't really see the xarray being
used for much else than setting bits in the bitmaps.

Of course, with the current linked-list structure, this cannot work. But
I don't see why we need to have it. I think a page-table-like structure
would be better -- only instead of PTEs at the lowest level, you have the
bitmaps.

Just like page tables, each table is page-sized. So each page at the
lowest level holds 4k * 8 == 32768 bits, which maps 128 MiB of 4k pages.
The next level holds pointers to level 1 tables, just like in page
tables, so we get 4096 / 8 == 512 pointers per table. Each level 2 table
thus maps 64 GiB of memory. Similarly, a level 3 table maps 32 TiB and a
level 4 table 16 PiB.
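
In code, I imagine something roughly like the below. This is just a
sketch to show the shape of the structure; all the names are made up and
it assumes 4k pages with 8-byte pointers:

/* Leaf (level 1): one page of bitmap == 32768 bits == 128 MiB of 4k pages. */
#define KHO_TABLE_BITS		(PAGE_SIZE * 8)
/* Upper levels (2..4): one page of pointers == 512 entries. */
#define KHO_TABLE_PTRS		(PAGE_SIZE / sizeof(void *))

struct kho_bitmap_table {
	DECLARE_BITMAP(preserve, KHO_TABLE_BITS);
};

struct kho_ptr_table {
	/*
	 * Each entry points to the next lower level: either another
	 * kho_ptr_table or, at level 2, a kho_bitmap_table.
	 */
	void *entries[KHO_TABLE_PTRS];
};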

Now, __kho_preserve() can just find or allocate the table entry for the
PFN and set its bit. Similar work has to be done for the xarray access
anyway, so this should have roughly the same performance. When doing
KHO, we just need to record the base address of the root table and we
are done. This saves us the expensive copying/transformation of data in
the critical path.
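
For illustration, the preserve path could then look something like the
below. Again only a hand-wavy sketch with made-up names
(kho_table_descend(), kho_preserve_pfn()), ignoring locking and the
per-order bookkeeping, and reusing the defines from the sketch above:

#define KHO_LEAF_SHIFT		15	/* 32768 bits per leaf bitmap */
#define KHO_LEVEL_SHIFT		9	/* 512 pointers per upper-level table */

/* Return the table at entries[idx], allocating a zeroed page if needed. */
static void *kho_table_descend(struct kho_ptr_table *table, unsigned int idx)
{
	if (!table->entries[idx])
		table->entries[idx] = (void *)get_zeroed_page(GFP_KERNEL);
	return table->entries[idx];
}

static int kho_preserve_pfn(struct kho_ptr_table *l4, unsigned long pfn)
{
	struct kho_ptr_table *l3, *l2;
	struct kho_bitmap_table *l1;

	l3 = kho_table_descend(l4, (pfn >> (KHO_LEAF_SHIFT + 2 * KHO_LEVEL_SHIFT)) &
				   (KHO_TABLE_PTRS - 1));
	if (!l3)
		return -ENOMEM;
	l2 = kho_table_descend(l3, (pfn >> (KHO_LEAF_SHIFT + KHO_LEVEL_SHIFT)) &
				   (KHO_TABLE_PTRS - 1));
	if (!l2)
		return -ENOMEM;
	l1 = kho_table_descend(l2, (pfn >> KHO_LEAF_SHIFT) & (KHO_TABLE_PTRS - 1));
	if (!l1)
		return -ENOMEM;

	set_bit(pfn & (KHO_TABLE_BITS - 1), l1->preserve);
	return 0;
}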

I don't see any obvious downsides compared to the current format. The
serialized state might end up taking slightly more memory because of the
upper-level tables, but that should still be much less than keeping two
representations of the same information alive at the same time.
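
To put a rough number on that (back-of-the-envelope, assuming the 4k
layout above): for the 16G order-0 example from the commit message you
still need 16 GiB / 128 MiB == 128 leaf pages == 512 KiB of bitmaps, and
the upper levels only add one level 2, one level 3 and one level 4 page
on top, i.e. about 12 KiB.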

>
> The next kernel then processes that list, reserves the memory ranges and
> later, when a user requests a folio or a physical range, KHO restores
> corresponding memory map entries.
>
> Suggested-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>
> Co-developed-by: Changyuan Lyu <changyuanl@xxxxxxxxxx>
> Signed-off-by: Changyuan Lyu <changyuanl@xxxxxxxxxx>
[...]
> +static void deserialize_bitmap(unsigned int order,
> +			       struct khoser_mem_bitmap_ptr *elm)
> +{
> +	struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
> +	unsigned long bit;
> +
> +	for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
> +		int sz = 1 << (order + PAGE_SHIFT);
> +		phys_addr_t phys =
> +			elm->phys_start + (bit << (order + PAGE_SHIFT));
> +		struct page *page = phys_to_page(phys);
> +
> +		memblock_reserve(phys, sz);
> +		memblock_reserved_mark_noinit(phys, sz);

Why waste time and memory building the reserved ranges? We already have
all the information in the serialized bitmaps, and memblock already
allocates only from scratch memory anyway. So we should not need this at
all, and can instead simply skip these pages in memblock_free_pages().
With the page-table-like format I mentioned above, this should be very
easy since you can find out whether a page is reserved or not in O(1)
time.
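
With the table format that check is just a short walk with no search.
Sketch again, reusing the made-up names from above; in the second kernel
the entries would of course hold physical addresses and need translating,
like KHOSER_LOAD_PTR() does today:

/* O(1): a missing upper-level entry simply means "nothing preserved here". */
static bool kho_pfn_preserved(const struct kho_ptr_table *l4, unsigned long pfn)
{
	const struct kho_ptr_table *l3, *l2;
	const struct kho_bitmap_table *l1;

	l3 = l4->entries[(pfn >> (KHO_LEAF_SHIFT + 2 * KHO_LEVEL_SHIFT)) &
			 (KHO_TABLE_PTRS - 1)];
	if (!l3)
		return false;
	l2 = l3->entries[(pfn >> (KHO_LEAF_SHIFT + KHO_LEVEL_SHIFT)) &
			 (KHO_TABLE_PTRS - 1)];
	if (!l2)
		return false;
	l1 = l2->entries[(pfn >> KHO_LEAF_SHIFT) & (KHO_TABLE_PTRS - 1)];
	if (!l1)
		return false;

	return test_bit(pfn & (KHO_TABLE_BITS - 1), l1->preserve);
}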

> +		page->private = order;
> +	}
> +}
> +
> +static void __init kho_mem_deserialize(void)
> +{
> +	struct khoser_mem_chunk *chunk;
> +	struct kho_in_node preserved_mem;
> +	const phys_addr_t *mem;
> +	int err;
> +	u32 len;
> +
> +	err = kho_get_node(NULL, "preserved-memory", &preserved_mem);
> +	if (err) {
> +		pr_err("no preserved-memory node: %d\n", err);
> +		return;
> +	}
> +
> +	mem = kho_get_prop(&preserved_mem, "metadata", &len);
> +	if (!mem || len != sizeof(*mem)) {
> +		pr_err("failed to get preserved memory bitmaps\n");
> +		return;
> +	}
> +
> +	chunk = *mem ? phys_to_virt(*mem) : NULL;
> +	while (chunk) {
> +		unsigned int i;
> +
> +		memblock_reserve(virt_to_phys(chunk), sizeof(*chunk));
> +
> +		for (i = 0; i != chunk->hdr.num_elms; i++)
> +			deserialize_bitmap(chunk->hdr.order,
> +					   &chunk->bitmaps[i]);
> +		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
> +	}
> +}
> +
> /* Helper functions for KHO state tree */
>
> struct kho_prop {
[...]

--
Regards,
Pratyush Yadav