Re: [PATCH] mshv: Replace fixed memory deposit with status driven helper
From: Mukesh R
Date: Fri Feb 20 2026 - 14:05:22 EST
On 2/20/26 09:05, Michael Kelley wrote:
> From: Stanislav Kinsburskii <skinsburskii@xxxxxxxxxxxxxxxxxxx> Sent: Thursday, February 19, 2026 2:10 PM
>>
>> Replace the hardcoded HV_MAP_GPA_DEPOSIT_PAGES usage with
>> hv_deposit_memory(), which derives the deposit size from
>> the hypercall status, and remove the now-unused constant.
>>
>> The previous code always deposited a fixed 256 pages on
>> insufficient memory, ignoring the actual demand reported
>> by the hypervisor.
>
> Does the hypervisor report a specific page-count demand? I haven't
> seen that anywhere. It seems like the deposit memory operation is
> always something of a guess.
>
>> hv_deposit_memory() handles the different
>> deposit statuses, aligning map-GPA retries with the rest
>> of the codebase.
>>
>> This approach may require more allocation and deposit
>> hypercall iterations, but it avoids over-depositing large
>> fixed chunks when fewer pages would suffice. Until any
>> performance impact is measured, the more frugal and
>> consistent behavior is preferred.
>>
>> Signed-off-by: Stanislav Kinsburskii <skinsburskii@xxxxxxxxxxxxxxxxxxx>
>
> From a purely functional standpoint, this change addresses the
> concern that I raised. But I don't have any intuition on the performance
> impact of having to iterate. hv_deposit_memory() adds only a single
Indeed, it is not insignificant. Some discussions with the hyp team a
while ago resulted in suggestions around depositing larger sizes, but then
there are many places where a single page suffices. This is just a lateral
change. But as this thing bakes, the heuristics will evolve and we'll do
some optimizations around it... my 2 cents...

Thanks,
-Mukesh
> page for some of the statuses, so if there really is a large memory need,
> the new code would iterate 256 times to achieve what the existing code
> does.
>
> Any idea where the 256 came from in the first place? Was that
> empirically determined like some of the other memory deposit counts?
>
> In addition to a potential performance impact, I know the hypervisor tries
> to detect denial-of-service attempts that make "too many" calls to the
> hypervisor in a short period of time. In such a case, the hypervisor
> suspends scheduling the VM for a few seconds before allowing it to resume.
> Just need to make sure the hypervisor doesn't think the iterating is a
> denial-of-service attack. Or maybe that denial-of-service detection
> doesn't apply to the root partition VM.
>
> But from a functional standpoint,
>
> Reviewed-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
>
>> ---
>>  drivers/hv/mshv_root_hv_call.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
>> index 7f91096f95a8..317191462b63 100644
>> --- a/drivers/hv/mshv_root_hv_call.c
>> +++ b/drivers/hv/mshv_root_hv_call.c
>> @@ -16,7 +16,6 @@
>>  /* Determined empirically */
>>  #define HV_INIT_PARTITION_DEPOSIT_PAGES 208
>> -#define HV_MAP_GPA_DEPOSIT_PAGES 256
>>  #define HV_UMAP_GPA_PAGES 512
>>  #define HV_PAGE_COUNT_2M_ALIGNED(pg_count) (!((pg_count) & (0x200 - 1)))
>> @@ -239,8 +238,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64 page_struct_count,
>>  		completed = hv_repcomp(status);
>>  		if (hv_result_needs_memory(status)) {
>> -			ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
>> -						    HV_MAP_GPA_DEPOSIT_PAGES);
>> +			ret = hv_deposit_memory(partition_id, status);
>>  			if (ret)
>>  				break;