Re: [PATCH v7 14/22] mm: memfd_luo: allow preserving memfd

From: Mike Rapoport

Date: Sun Nov 23 2025 - 10:48:16 EST


On Sat, Nov 22, 2025 at 05:23:41PM -0500, Pasha Tatashin wrote:
> From: Pratyush Yadav <ptyadav@xxxxxxxxx>
>
> The ability to preserve a memfd allows userspace to use KHO and LUO to
> transfer its memory contents to the next kernel. This is useful in many
> ways. For one, it can be used with IOMMUFD as the backing store for
> IOMMU page tables. Preserving IOMMUFD is essential for performing a
> hypervisor live update with passthrough devices. memfd support provides
> the first building block for making that possible.
>
> For another, for applications that hold a large amount of state that
> takes time to reconstruct, rebooting to consume a kernel upgrade can be
> very expensive. memfd with LUO gives those applications reboot-persistent
> memory that they can use to quickly save and restore that state.
>
> While memfd can be backed by either hugetlbfs or shmem, only shmem is
> supported for now. To be more precise, support for anonymous shmem
> files is added.
>
> The handover to the next kernel is not transparent. Not all properties
> of the file are preserved; only its memory contents, position, and size
> are. The recreated file gets the UID and GID of the task doing the
> restore, and the task's cgroup gets charged with the memory.
>
> Once preserved, the file cannot grow or shrink, and all its pages are
> pinned to avoid migrations and swapping. The file can still be read from
> or written to.
>
> Use vmalloc to allocate the buffer that holds the folio descriptors,
> and preserve it using kho_preserve_vmalloc(), which does not impose a
> size limit.
>
> Signed-off-by: Pratyush Yadav <ptyadav@xxxxxxxxx>
> Co-developed-by: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>
> ---

...

> +static int memfd_luo_retrieve_folios(struct file *file,
> + struct memfd_luo_folio_ser *folios_ser,
> + u64 nr_folios)
> +{
> + struct inode *inode = file_inode(file);
> + struct address_space *mapping = inode->i_mapping;
> + struct folio *folio;
> + long i = 0;
> + int err;
> +
> + for (; i < nr_folios; i++) {
> + const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
> + phys_addr_t phys;
> + u64 index;
> + int flags;
> +
> + if (!pfolio->pfn)
> + continue;
> +
> + phys = PFN_PHYS(pfolio->pfn);
> + folio = kho_restore_folio(phys);
> + if (!folio) {
> + pr_err("Unable to restore folio at physical address: %llx\n",
> + phys);
> + goto put_folios;
> + }
> + index = pfolio->index;
> + flags = pfolio->flags;
> +
> + /* Set up the folio for insertion. */
> + __folio_set_locked(folio);
> + __folio_set_swapbacked(folio);
> +
> + err = mem_cgroup_charge(folio, NULL, mapping_gfp_mask(mapping));
> + if (err) {
> + pr_err("shmem: failed to charge folio index %ld: %d\n",
> + i, err);
> + goto unlock_folio;
> + }
> +
> + err = shmem_add_to_page_cache(folio, mapping, index, NULL,
> + mapping_gfp_mask(mapping));
> + if (err) {
> + pr_err("shmem: failed to add to page cache folio index %ld: %d\n",
> + i, err);
> + goto unlock_folio;
> + }
> +
> + if (flags & MEMFD_LUO_FOLIO_UPTODATE)
> + folio_mark_uptodate(folio);
> + if (flags & MEMFD_LUO_FOLIO_DIRTY)
> + folio_mark_dirty(folio);
> +
> + err = shmem_inode_acct_blocks(inode, 1);
> + if (err) {
> + pr_err("shmem: failed to account folio index %ld: %d\n",
> + i, err);
> + goto unlock_folio;
> + }
> +
> + shmem_recalc_inode(inode, 1, 0);
> + folio_add_lru(folio);
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return 0;
> +
> +unlock_folio:
> + folio_unlock(folio);
> + folio_put(folio);
> + i++;

I'd add a counter and use it in the for loop below.

> +put_folios:
> + /*
> + * Note: don't free the folios already added to the file. They will be
> + * freed when the file is freed. Free the ones not added yet here.
> + */
> + for (; i < nr_folios; i++) {
> + const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
> +
> + folio = kho_restore_folio(pfolio->pfn);
> + if (folio)
> + folio_put(folio);
> + }
> +
> + return err;
> +}

Reviewed-by: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>

--
Sincerely yours,
Mike.