Re: [RFC PATCH 14/39] KVM: guest_memfd: hugetlb: initialization and cleanup

From: Peter Xu
Date: Sun Dec 01 2024 - 12:59:53 EST


On Tue, Sep 10, 2024 at 11:43:45PM +0000, Ackerley Tng wrote:
> +/**
> + * Removes folios in range [@lstart, @lend) from page cache of inode, updates
> + * inode metadata and hugetlb reservations.
> + */
> +static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
> +						    loff_t lstart, loff_t lend)
> +{
> +	struct kvm_gmem_hugetlb *hgmem;
> +	struct hstate *h;
> +	int gbl_reserve;
> +	int num_freed;
> +
> +	hgmem = kvm_gmem_hgmem(inode);
> +	h = hgmem->h;
> +
> +	num_freed = kvm_gmem_hugetlb_filemap_remove_folios(inode->i_mapping,
> +							   h, lstart, lend);
> +
> +	gbl_reserve = hugepage_subpool_put_pages(hgmem->spool, num_freed);
> +	hugetlb_acct_memory(h, -gbl_reserve);

I wonder whether this is needed, and whether hugetlb_acct_memory() needs to
be exported in the other patch.

IIUC the subpool manages the global reservation on its own when min_hpages is
set (which should be gmem's case, where both max and min are set to the gmem
size). That happens in hugepage_put_subpool() -> unlock_or_release_subpool().
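For reference, a paraphrased sketch of the release path I mean (not a
verbatim copy of mm/hugetlb.c; names and details may differ between kernel
versions):

/*
 * Paraphrased: when the last reference to a subpool is dropped and no
 * pages are in use, the persistent minimum reservation is returned to
 * the global pool before the subpool is freed.
 */
static void unlock_or_release_subpool(struct hugepage_subpool *spool,
				      unsigned long irq_flags)
{
	bool free = (spool->count == 0) && (spool->used_hpages == 0);

	spin_unlock_irqrestore(&spool->lock, irq_flags);

	if (free) {
		if (spool->min_hpages != -1)
			hugetlb_acct_memory(spool->hstate,
					    -spool->min_hpages);
		kfree(spool);
	}
}

void hugepage_put_subpool(struct hugepage_subpool *spool)
{
	unsigned long flags;

	spin_lock_irqsave(&spool->lock, flags);
	BUG_ON(!spool->count);
	spool->count--;
	unlock_or_release_subpool(spool, flags);
}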

> +
> +	spin_lock(&inode->i_lock);
> +	inode->i_blocks -= blocks_per_huge_page(h) * num_freed;
> +	spin_unlock(&inode->i_lock);
> +}

--
Peter Xu