Re: [PATCH v3 4/4] x86/kvm: add boot parameter for setting max number of vcpus per guest
From: Sean Christopherson
Date: Wed Nov 17 2021 - 15:57:11 EST
On Tue, Nov 16, 2021, Juergen Gross wrote:
> Today the maximum number of vcpus of a kvm guest is set via a #define
> in a header file.
>
> In order to support higher vcpu numbers for guests without generally
> increasing the memory consumption of guests on the host, especially on
> very large systems, add a boot parameter for specifying the number of
> allowed vcpus for guests.
>
> The default will still be the current setting of 1024. The value 0 has
> the special meaning of limiting the number of possible vcpus to the
> number of possible cpus of the host.
>
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
> V3:
> - rebase
> ---
> Documentation/admin-guide/kernel-parameters.txt | 7 +++++++
> arch/x86/include/asm/kvm_host.h                 | 5 ++++-
> arch/x86/kvm/x86.c                              | 9 ++++++++-
> 3 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index e269c3f66ba4..409a72c2d91b 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2445,6 +2445,13 @@
> feature (tagged TLBs) on capable Intel chips.
> Default is 1 (enabled)
>
> + kvm.max_vcpus= [KVM,X86] Set the maximum allowed number of vcpus per
> + guest. The special value 0 sets the limit to the number
> + of physical cpus possible on the host (including not
> + yet hotplugged cpus). Higher values will result in
> + slightly higher memory consumption per guest.
> + Default: 1024
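
For reference, the behaviour described above boils down to something like the
following on the module side (a minimal sketch only; the actual kvm_host.h and
x86.c hunks aren't quoted here, and the helper name is made up):

/* Sketch: everything except KVM_MAX_VCPUS and num_possible_cpus() is
 * hypothetical. */
static unsigned int max_vcpus = KVM_MAX_VCPUS;	/* default: 1024 */
module_param(max_vcpus, uint, 0444);		/* read-only at runtime */

unsigned int kvm_arch_max_vcpus(void)
{
	/* 0 means "limit to the number of possible host cpus". */
	return max_vcpus ? max_vcpus : num_possible_cpus();
}
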
Rather than make this a module param, I would prefer to start with the below
patch (originally from TDX pre-enabling) and then wire up a way for userspace to
_lower_ the max on a per-VM basis, e.g. add a capability.
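
To make the per-VM idea concrete, a rough sketch of what such a capability
could look like in kvm_vm_ioctl_enable_cap(); the cap name and the
kvm->max_vcpus field are illustrative only, not existing uAPI:

case KVM_CAP_MAX_VCPUS_LIMIT:	/* hypothetical name/number */
	r = -EINVAL;
	mutex_lock(&kvm->lock);
	/* Only allow lowering the limit, and only before any vCPU exists. */
	if (!kvm->created_vcpus && cap->args[0] &&
	    cap->args[0] <= kvm->max_vcpus) {
		kvm->max_vcpus = cap->args[0];
		r = 0;
	}
	mutex_unlock(&kvm->lock);
	break;
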
VMs largely fall into two categories: (1) the max number of vCPUs is known prior
to VM creation, or (2) the max number of vCPUs is unbounded (up to KVM's hard
limit), e.g. for container-style use cases where "vCPUs" are created on-demand in
response to the "guest" creating a new task.

For #1, a per-VM control lets userspace lower the limit to the bare minimum. For
#2, neither the module param nor the per-VM control is likely to be useful, but
a per-VM control does let mixed environments (both #1 and #2 VMs) lower the limits
for compatible VMs, whereas a module param must be set to the max of any potential VM.
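
On the userspace side, a category-#1 VMM would then clamp the limit right after
KVM_CREATE_VM, along these lines (again using the made-up capability from the
sketch above; the define's value is just a placeholder):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Placeholder; a real capability would get a proper number in the uAPI. */
#define KVM_CAP_MAX_VCPUS_LIMIT 999

static void limit_vm_vcpus(int vm_fd, __u64 nr_vcpus)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MAX_VCPUS_LIMIT,
		.args = { nr_vcpus },
	};

	/* Silently skip if the kernel doesn't know the capability. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS_LIMIT) > 0)
		ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}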