Re: [RFC v3 00/21] Preserved-over-Kexec RAM

From: Gowans, James
Date: Fri May 26 2023 - 09:57:32 EST


On Wed, 2023-04-26 at 17:08 -0700, Anthony Yznaga wrote:
> Sending out this RFC in part to gauge community interest.
> This patchset implements preserved-over-kexec memory storage or PKRAM as a
> method for saving memory pages of the currently executing kernel so that
> they may be restored after kexec into a new kernel. The patches are adapted
> from an RFC patchset sent out in 2013 by Vladimir Davydov [1]. They
> introduce the PKRAM kernel API.
>
> One use case for PKRAM is preserving guest memory and/or auxiliary
> supporting data (e.g. iommu data) across kexec to support reboot of the
> host with minimal disruption to the guest.

Hi Anthony,

Thanks for re-posting this - I've been wanting to rekindle the discussion
on preserving memory across kexec for a while now.

There are a few aspects at play in this space of memory management
designed specifically for the virtualisation and live update (kexec) use-
case which I think we should consider:

1. Preserving userspace-accessible memory across kexec: this is what pkram
addresses.

2. Preserving kernel state: This would include memory required for kexec
with DMA passthrough devices, like IOMMU root page and page tables, DMA-
able buffers for drivers, etc. It would also cover certain structures
that improve kernel boot performance after kexec, like a PCI device
cache, the clock LPJ value, and possibly others - sort of what Xen
breadcrumbs [0] achieves. The pkram RFC indicates that this should be
possible, though IMO it would be more straightforward with a new
filesystem that has first-class support for kernel persistence via
something like inode types for kernel data.
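To make the inode-type idea a bit more concrete, here is a minimal
userspace sketch. All of the names (pkernfs_*, the type values, the
fields) are hypothetical and invented for illustration - they are not an
existing kernel API - but they show how a persisted filesystem could tag
inodes so that kernel-owned state is distinguishable from
userspace-mappable memory after kexec:

```c
#include <stdint.h>

/* Hypothetical inode types for a kexec-persistent filesystem; these
 * names are illustrative only and do not exist in any kernel today. */
enum pkernfs_inode_type {
	PKERNFS_USER_DATA,	/* guest memory, mmap()able by userspace */
	PKERNFS_IOMMU_TABLES,	/* IOMMU root and page tables */
	PKERNFS_DMA_BUFFER,	/* driver-owned DMA-able buffer */
};

/* A persisted inode records where its backing lives in the reserved
 * physical region, so the new kernel can rediscover it after kexec. */
struct pkernfs_inode {
	uint32_t type;		/* enum pkernfs_inode_type */
	uint64_t phys_start;	/* offset into the reserved region */
	uint64_t len;		/* length in bytes */
};

/* Only userspace-data inodes should be mmap()able; kernel-typed
 * inodes would be handed back to their owning subsystem (the IOMMU
 * driver, the device driver, etc.) on the restore path. */
static int pkernfs_may_mmap(const struct pkernfs_inode *inode)
{
	return inode->type == PKERNFS_USER_DATA;
}
```

The point of first-class types is exactly this kind of dispatch: the new
kernel can walk the persisted inodes and route each one to the right
consumer instead of treating everything as opaque user pages.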

3. Ensuring huge/gigantic memory allocations: to improve the TLB perf of
2-stage translations it's beneficial to allocate guest memory in large
contiguous blocks, preferably PUD-level blocks for multi-GiB guests. If
the buddy allocator is used this may be a challenge both from an
implementation and a fragmentation perspective, and it may be desirable to
have stronger guarantees about allocation sizes.
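As a toy illustration of the stronger guarantee a reserved pool can give:
if guest memory only ever comes from a region carved out at boot and is
handed out in PUD-sized (1 GiB on x86-64) blocks, the allocator can be a
trivial bitmap, and fragmentation below that granularity simply cannot
occur. A userspace sketch - the names and the 64 GiB pool size are made
up for the example:

```c
#include <stdint.h>

#define PUD_SIZE	(1ULL << 30)	/* 1 GiB blocks */
#define POOL_BLOCKS	64		/* pretend 64 GiB reserved pool */

static uint64_t pool_bitmap;	/* one bit per 1 GiB block; 0 = free */

/* Allocate one PUD-sized block; returns block index, or -1 if full. */
static int pud_block_alloc(void)
{
	for (int i = 0; i < POOL_BLOCKS; i++) {
		if (!(pool_bitmap & (1ULL << i))) {
			pool_bitmap |= 1ULL << i;
			return i;
		}
	}
	return -1;
}

static void pud_block_free(int idx)
{
	pool_bitmap &= ~(1ULL << idx);
}

/* Physical address of a block, given the reserved region's base. */
static uint64_t pud_block_phys(uint64_t pool_base, int idx)
{
	return pool_base + (uint64_t)idx * PUD_SIZE;
}
```

Compare this with hunting for free 1 GiB runs in the buddy allocator on a
long-running host - the reserved pool turns a probabilistic outcome into
a guarantee.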

4. Removing struct page overhead: When doing huge/gigantic allocations,
it generally won't be necessary to have per-4-KiB struct pages. This is
something which dmemfs [1, 2] tries to achieve by using a large chunk of
reserved memory and managing it with a new filesystem.

5. More "advanced" memory management APIs/ioctls for virtualisation: Being
able to support things like DMA-driven post-copy live migration, memory
oversubscription, carving out chunks of memory from a VM to launch side-
car VMs, more fine-grained control of IOMMU or MMU permissions, etc. This
may be easier to achieve with a new filesystem, rather than coupling to
tmpfs semantics and ioctls.
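For flavour, a carve-out operation like the one above might be exposed as
a plain ioctl on the new filesystem's files. Everything below is invented
purely for illustration - the structure, the 'M' magic, and the request
number are placeholders, not a proposed ABI:

```c
#include <stdint.h>
#include <sys/ioctl.h>

/* Hypothetical request: split [offset, offset + len) out of a VM's
 * memory file and attach it to another open file, e.g. to hand a chunk
 * of a running VM's memory to a side-car VM. */
struct memfs_carveout {
	uint64_t offset;	/* byte offset into the source file */
	uint64_t len;		/* length to carve out */
	int32_t target_fd;	/* open file to receive the carved range */
	int32_t pad;
};

/* 'M' and request number 1 are placeholders; a real ABI would have to
 * register these in the kernel's ioctl number space. */
#define MEMFS_IOC_CARVEOUT	_IOW('M', 1, struct memfs_carveout)
```

The appeal of owning the filesystem is that such operations get their own
well-defined semantics instead of being bolted onto tmpfs behaviour.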

Overall, with the above in mind, my take is that we may have a smoother
path to implement a more comprehensive solution by going the route of a
new purpose-built filesystem on top of reserved memory. Sort of like
dmemfs, but with persistence, and in particular with support for kernel
persistence.

Does my take here make sense?

I'm hoping to put together an RFC for something like the above (dmemfs
with persistence) soon, focusing on how the IOMMU persistence will work.
This is an important differentiating factor to cover in the RFC, IMO.

> PKRAM provides a flexible way
> for doing this without requiring that the amount of memory used be
> fixed in size and created a priori.

AFAICT the main down-side of what I'm suggesting here compared to pkram,
is that as you say here: pkram doesn't require the up-front reserving of
memory - allocations from the global shared pool are dynamic. I'm on the
fence as to whether this is actually a desirable property though. Carving
out a large chunk of system memory as reserved memory for a persisted
filesystem (as I'm suggesting) has the advantages of removing struct page
overhead, providing better guarantees about huge/gigantic page
allocations, and probably makes the kexec restore path simpler and more
self-contained.

I think there's an argument to be made that having a clearly-defined large
range of memory which is persisted, and the rest is normal "ephemeral"
kernel memory may be preferable.

Keen to hear your (and others) thoughts!

JG

[0] http://david.woodhou.se/live-update-handover.pdf
[1] https://lwn.net/Articles/839216/
[2] https://lkml.org/lkml/2020/12/7/342