Re: [PATCH] KVM: SEV: improve the code readability for ASID management
From: Sean Christopherson
Date: Mon Aug 02 2021 - 12:17:37 EST
On Fri, Jul 30, 2021, Mingwei Zhang wrote:
> KVM SEV code uses bitmaps to manage ASID states. ASID 0 is always skipped
> because it is never used by any VM. Thus, an ASID value and its bitmap
> position always have an 'offset-by-1' relationship.
That's not necessarily a bad thing, assuming the bitmap is properly sized.
> Both SEV and SEV-ES share the ASID space, thus KVM uses a dynamic range
> [min_asid, max_asid] to handle SEV and SEV-ES ASIDs separately.
>
> Existing code mixes the usage of ASID value and its bitmap position by
> using the same variable called 'min_asid'.
>
> Fix the min_asid usage: ensure that its usage is consistent with its name;
> adjust its value before using it as a bitmap position. Add comments on ASID
> bitmap allocation to clarify the skipping-ASID-0 property.
>
> Fixes: 80675b3ad45f ("KVM: SVM: Update ASID allocation to support SEV-ES guests")
As Joerg commented, Fixes: is not appropriate unless there's an actual bug being
addressed. And the shortlog's "improve the code readability" suggests a pure
refactoring, i.e. no functional change, which contradicts a Fixes: tag;
something's got to give. AFAICT, this is indeed a pure refactoring, so the
Fixes: should be dropped.
> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
> Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
> Cc: Marc Orr <marcorr@xxxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Alper Gun <alpergun@xxxxxxxxxx>
> Cc: Dionna Glaze <dionnaglaze@xxxxxxxxxx>
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: Vipin Sharma <vipinsh@xxxxxxxxxx>
> Cc: Peter Gonda <pgonda@xxxxxxxxxx>
> ---
> arch/x86/kvm/svm/sev.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 8d36f0c73071..e3902283cbf7 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -80,7 +80,7 @@ static int sev_flush_asids(int min_asid, int max_asid)
> int ret, pos, error = 0;
>
> /* Check if there are any ASIDs to reclaim before performing a flush */
> - pos = find_next_bit(sev_reclaim_asid_bitmap, max_asid, min_asid);
> + pos = find_next_bit(sev_reclaim_asid_bitmap, max_asid, min_asid - 1);
> if (pos >= max_asid)
> return -EBUSY;
>
> @@ -142,10 +142,10 @@ static int sev_asid_new(struct kvm_sev_info *sev)
> * SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
> * SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
> */
> - min_asid = sev->es_active ? 0 : min_sev_asid - 1;
> + min_asid = sev->es_active ? 1 : min_sev_asid;
> max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
> again:
> - pos = find_next_zero_bit(sev_asid_bitmap, max_sev_asid, min_asid);
> + pos = find_next_zero_bit(sev_asid_bitmap, max_sev_asid, min_asid - 1);
IMO, this is only marginally better, as the checks against max_asid are still
misleading, and the "pos + 1" + "min_asid - 1" interaction is subtle.
> if (pos >= max_asid) {
This is the check that's misleading/confusing.
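E.g. with this patch, for the SEV-ES case (min_asid == 1 and
max_asid == min_sev_asid - 1), the flow is roughly:

	start = min_asid - 1;			/* == 0, the bitmap slot of ASID 1 */
	pos = find_next_zero_bit(...);		/* the slot of ASID "pos + 1" */
	if (pos >= max_asid)			/* compares a slot against an ASID
						 * value; correct only because the
						 * slot-to-ASID shift cancels out */
	...
	return pos + 1;				/* convert the slot back to an ASID */

Each step is correct, but the reader has to re-derive the off-by-one at every
single step.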
Rather than adjusting the bitmap index, what about simply bumping the bitmap
size?  IIRC, current CPUs have 512 ASIDs, counting ASID 0, i.e. bumping the
size won't consume any additional memory since the bitmap allocation is
rounded up to a multiple of unsigned longs anyway.  And if it does, the cost
is a mere 8 bytes...
It'd be a bigger refactoring, but it should completely eliminate the
off-by-one shenanigans, e.g. a partial patch could look like:
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 62926f1a5f7b..7bcdc34546d7 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -64,6 +64,7 @@ static DEFINE_MUTEX(sev_bitmap_lock);
unsigned int max_sev_asid;
static unsigned int min_sev_asid;
static unsigned long sev_me_mask;
+static unsigned int nr_asids;
static unsigned long *sev_asid_bitmap;
static unsigned long *sev_reclaim_asid_bitmap;
@@ -81,8 +82,8 @@ static int sev_flush_asids(int min_asid, int max_asid)
int ret, pos, error = 0;
/* Check if there are any ASIDs to reclaim before performing a flush */
- pos = find_next_bit(sev_reclaim_asid_bitmap, max_asid, min_asid);
- if (pos >= max_asid)
+ pos = find_next_bit(sev_reclaim_asid_bitmap, nr_asids, min_asid);
+ if (pos > max_asid)
return -EBUSY;
/*
@@ -115,8 +116,8 @@ static bool __sev_recycle_asids(int min_asid, int max_asid)
/* The flush process will flush all reclaimable SEV and SEV-ES ASIDs */
bitmap_xor(sev_asid_bitmap, sev_asid_bitmap, sev_reclaim_asid_bitmap,
- max_sev_asid);
- bitmap_zero(sev_reclaim_asid_bitmap, max_sev_asid);
+ nr_asids);
+ bitmap_zero(sev_reclaim_asid_bitmap, nr_asids);
return true;
}
@@ -143,11 +144,11 @@ static int sev_asid_new(struct kvm_sev_info *sev)
* SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
* SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
*/
- min_asid = sev->es_active ? 0 : min_sev_asid - 1;
+ min_asid = sev->es_active ? 1 : min_sev_asid;
max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
again:
- pos = find_next_zero_bit(sev_asid_bitmap, max_sev_asid, min_asid);
- if (pos >= max_asid) {
> + pos = find_next_zero_bit(sev_asid_bitmap, nr_asids, min_asid);
+ if (pos > max_asid) {
if (retry && __sev_recycle_asids(min_asid, max_asid)) {
retry = false;
goto again;
@@ -161,7 +162,7 @@ static int sev_asid_new(struct kvm_sev_info *sev)
mutex_unlock(&sev_bitmap_lock);
- return pos + 1;
+ return pos;
@@ -1855,12 +1942,17 @@ void __init sev_hardware_setup(void)
min_sev_asid = edx;
sev_me_mask = 1UL << (ebx & 0x3f);
- /* Initialize SEV ASID bitmaps */
- sev_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
+ /*
+ * Initialize SEV ASID bitmaps. Allocate space for ASID 0 in the
+ * bitmap, even though it's never used, so that the bitmap is indexed
+ * by the actual ASID.
+ */
+ nr_asids = max_sev_asid + 1;
+ sev_asid_bitmap = bitmap_zalloc(nr_asids, GFP_KERNEL);
if (!sev_asid_bitmap)
goto out;
- sev_reclaim_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
+ sev_reclaim_asid_bitmap = bitmap_zalloc(nr_asids, GFP_KERNEL);
if (!sev_reclaim_asid_bitmap) {
bitmap_free(sev_asid_bitmap);
sev_asid_bitmap = NULL;
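
The rest of the conversion would be mechanical, e.g. sev_asid_free() would set
the bit with the raw ASID instead of "asid - 1".  A minimal sketch, completely
untested and with the misc cgroup accounting elided:

static void sev_asid_free(struct kvm_sev_info *sev)
{
	struct svm_cpu_data *sd;
	int cpu;

	mutex_lock(&sev_bitmap_lock);

	/* The bitmap is indexed by the ASID itself, no more "asid - 1". */
	__set_bit(sev->asid, sev_reclaim_asid_bitmap);

	for_each_possible_cpu(cpu) {
		sd = per_cpu(svm_data, cpu);
		sd->sev_vmcbs[sev->asid] = NULL;
	}

	mutex_unlock(&sev_bitmap_lock);
}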