Re: [PATCH v3] swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests.

From: Ashish Kalra
Date: Wed Nov 04 2020 - 17:39:27 EST


Hello Konrad,

On Wed, Nov 04, 2020 at 05:14:52PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Nov 04, 2020 at 10:08:04PM +0000, Ashish Kalra wrote:
> > From: Ashish Kalra <ashish.kalra@xxxxxxx>
> >
> > For SEV, all DMA to and from guest has to use shared
> > (un-encrypted) pages. SEV uses SWIOTLB to make this
> > happen without requiring changes to device drivers.
> > However, depending on workload being run, the default
> > 64MB of SWIOTLB might not be enough and SWIOTLB
> > may run out of buffers to use for DMA, resulting
> > in I/O errors and/or performance degradation for
> > high I/O workloads.
> >
> > Increase the default size of SWIOTLB for SEV guests
> > using a minimum value of 128MB and a maximum value
>
> <blinks>
>
> 64MB for a 1GB VM is not enough?
>
> > of 512MB, depending on the amount of provisioned guest
>
> I like the implementation on how this is done.. but
> the choices of memory and how much seems very much
> random. Could there be some math behind this?
>

An earlier version of the patch was based on using a % of guest memory, as below:

+#define SEV_ADJUST_SWIOTLB_SIZE_PERCENT 5
+#define SEV_ADJUST_SWIOTLB_SIZE_MAX (1UL << 30)
...
...
+       if (sev_active() && !io_tlb_nslabs) {
+               unsigned long total_mem = get_num_physpages() << PAGE_SHIFT;
+
+               default_size = total_mem *
+                       SEV_ADJUST_SWIOTLB_SIZE_PERCENT / 100;
+
+               default_size = ALIGN(default_size, 1 << IO_TLB_SHIFT);
+
+               default_size = clamp_val(default_size, IO_TLB_DEFAULT_SIZE,
+                       SEV_ADJUST_SWIOTLB_SIZE_MAX);
+       }

But then it is difficult to predict what % of guest memory to use.

There are also other factors to consider, such as the vCPU count or
whether the guest will run a high I/O workload.

All of that makes it very complicated. What we basically want is a
range from 128MB to 512MB, which is why the current patch, which picks
this range based on the amount of provisioned guest memory, keeps it
simple.

Thanks,
Ashish

> > memory.
> >
> > Using late_initcall() interface to invoke
> > swiotlb_adjust() does not work as the size
> > adjustment needs to be done before mem_encrypt_init()
> > and reserve_crashkernel() which use the allocated
> > SWIOTLB buffer size, hence calling it explicitly
> > from setup_arch().
> >
> > The SWIOTLB default size adjustment is added as an
> > architecture specific interface/callback to allow
> > architectures such as those supporting memory
> > encryption to adjust/expand SWIOTLB size for their
> > use.
> >
> > Signed-off-by: Ashish Kalra <ashish.kalra@xxxxxxx>
> > ---
> >  arch/x86/kernel/setup.c   |  2 ++
> >  arch/x86/mm/mem_encrypt.c | 42 +++++++++++++++++++++++++++++++++++++++
> >  include/linux/swiotlb.h   |  1 +
> >  kernel/dma/swiotlb.c      | 27 +++++++++++++++++++++++++
> >  4 files changed, 72 insertions(+)
> >
> > diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> > index 3511736fbc74..b073d58dd4a3 100644
> > --- a/arch/x86/kernel/setup.c
> > +++ b/arch/x86/kernel/setup.c
> > @@ -1166,6 +1166,8 @@ void __init setup_arch(char **cmdline_p)
> >  	if (boot_cpu_has(X86_FEATURE_GBPAGES))
> >  		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >  
> > +	swiotlb_adjust();
> > +
> >  	/*
> >  	 * Reserve memory for crash kernel after SRAT is parsed so that it
> >  	 * won't consume hotpluggable memory.
> > diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> > index 3f248f0d0e07..e0deb157cddd 100644
> > --- a/arch/x86/mm/mem_encrypt.c
> > +++ b/arch/x86/mm/mem_encrypt.c
> > @@ -489,7 +489,49 @@ static void print_mem_encrypt_feature_info(void)
> >  	pr_cont("\n");
> >  }
> >  
> > +#define TOTAL_MEM_1G	0x40000000UL
> > +#define TOTAL_MEM_4G	0x100000000UL
> > +
> > +#define SIZE_128M	(128UL<<20)
> > +#define SIZE_256M	(256UL<<20)
> > +#define SIZE_512M	(512UL<<20)
> > +
> >  /* Architecture __weak replacement functions */
> > +unsigned long __init arch_swiotlb_adjust(unsigned long iotlb_default_size)
> > +{
> > +	unsigned long size = 0;
> > +
> > +	/*
> > +	 * For SEV, all DMA has to occur via shared/unencrypted pages.
> > +	 * SEV uses SWIOTLB to make this happen without changing device
> > +	 * drivers. However, depending on the workload being run, the
> > +	 * default 64MB of SWIOTLB may not be enough & SWIOTLB may
> > +	 * run out of buffers for DMA, resulting in I/O errors and/or
> > +	 * performance degradation especially with high I/O workloads.
> > +	 * Increase the default size of SWIOTLB for SEV guests using
> > +	 * a minimum value of 128MB and a maximum value of 512MB,
> > +	 * depending on amount of provisioned guest memory.
> > +	 */
> > +	if (sev_active()) {
> > +		phys_addr_t total_mem = memblock_phys_mem_size();
> > +
> > +		if (total_mem <= TOTAL_MEM_1G)
> > +			size = clamp(iotlb_default_size * 2, SIZE_128M,
> > +				     SIZE_128M);
> > +		else if (total_mem <= TOTAL_MEM_4G)
> > +			size = clamp(iotlb_default_size * 4, SIZE_256M,
> > +				     SIZE_256M);
> > +		else
> > +			size = clamp(iotlb_default_size * 8, SIZE_512M,
> > +				     SIZE_512M);
> > +
> > +		pr_info("SEV adjusted max SWIOTLB size = %luMB\n",
> > +			size >> 20);
> > +	}
> > +
> > +	return size;
> > +}
> > +
> >  void __init mem_encrypt_init(void)
> >  {
> >  	if (!sme_me_mask)
> > +
> > void __init mem_encrypt_init(void)
> > {
> > if (!sme_me_mask)
> > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > index 046bb94bd4d6..01ae6d891327 100644
> > --- a/include/linux/swiotlb.h
> > +++ b/include/linux/swiotlb.h
> > @@ -33,6 +33,7 @@ extern void swiotlb_init(int verbose);
> >  int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
> >  extern unsigned long swiotlb_nr_tbl(void);
> >  unsigned long swiotlb_size_or_default(void);
> > +extern void __init swiotlb_adjust(void);
> >  extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
> >  extern void __init swiotlb_update_mem_attributes(void);
> >  
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index c19379fabd20..66a9e627bb51 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -163,6 +163,33 @@ unsigned long swiotlb_size_or_default(void)
> >  	return size ? size : (IO_TLB_DEFAULT_SIZE);
> >  }
> >  
> > +unsigned long __init __weak arch_swiotlb_adjust(unsigned long size)
> > +{
> > +	return 0;
> > +}
> > +
> > +void __init swiotlb_adjust(void)
> > +{
> > +	unsigned long size;
> > +
> > +	/*
> > +	 * If swiotlb parameter has not been specified, give a chance to
> > +	 * architectures such as those supporting memory encryption to
> > +	 * adjust/expand SWIOTLB size for their use.
> > +	 */
> > +	if (!io_tlb_nslabs) {
> > +		size = arch_swiotlb_adjust(IO_TLB_DEFAULT_SIZE);
> > +		if (size) {
> > +			size = ALIGN(size, 1 << IO_TLB_SHIFT);
> > +			io_tlb_nslabs = size >> IO_TLB_SHIFT;
> > +			io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
> > +
> > +			pr_info("architecture adjusted SWIOTLB slabs = %lu\n",
> > +				io_tlb_nslabs);
> > +		}
> > +	}
> > +}
> > +
> >  void swiotlb_print_info(void)
> >  {
> >  	unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
> > --
> > 2.17.1
> >