Re: [PATCHv5 27/30] x86/kvm: Use bounce buffers for TD guest

From: Tom Lendacky
Date: Thu Mar 10 2022 - 09:43:40 EST


On 3/9/22 14:07, Dave Hansen wrote:
> On 3/2/22 06:28, Kirill A. Shutemov wrote:
>> --- a/arch/x86/coco/tdx.c
>> +++ b/arch/x86/coco/tdx.c
>> @@ -5,6 +5,7 @@
>>  #define pr_fmt(fmt) "tdx: " fmt
>>  
>>  #include <linux/cpufeature.h>
>> +#include <linux/swiotlb.h>
>>  #include <asm/coco.h>
>>  #include <asm/tdx.h>
>>  #include <asm/vmx.h>
>> @@ -627,5 +628,7 @@ void __init tdx_early_init(void)
>>  	x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;
>>  	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
>>  
>> +	swiotlb_force = SWIOTLB_FORCE;
>> +
>>  	pr_info("Guest detected\n");
>>  }

> AMD currently does:
> 
> 	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> 		swiotlb_force = SWIOTLB_FORCE;
> 
> which somewhat begs the question of why we can't do the
> 
> 	swiotlb_force = SWIOTLB_FORCE;
> 
> thing in:
> 
> void __init mem_encrypt_init(void)
> {
> 	if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
> 		return;
> 
> /// Here
> 
> I recall there being a reason for this. But I don't see any mention in
> the changelog.

If you make this cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT), then it should
work for both, I would think. If you use CC_ATTR_MEM_ENCRYPT, you'll force
bare-metal SME to always use bounce buffers when doing I/O. But SME can do
I/O to encrypted memory if the device supports 64-bit DMA or if the IOMMU is
being used, so we don't want to force SWIOTLB in this case.

Thanks,
Tom