Re: [PATCHv14 5/9] efi: Add unaccepted memory support
From: Kirill A. Shutemov
Date: Fri Oct 13 2023 - 13:27:46 EST
On Fri, Oct 13, 2023 at 06:44:45PM +0200, Vlastimil Babka wrote:
> On 10/13/23 18:22, Kirill A. Shutemov wrote:
> > On Fri, Oct 13, 2023 at 03:33:58PM +0300, Kirill A. Shutemov wrote:
> >> > While testing SNP guests running today's tip/master (ef19bc9dddc3) I ran
> >> > into what seems to be fairly significant lock contention due to the
> >> > unaccepted_memory_lock spinlock above, which results in a constant stream
> >> > of soft lockups until the workload gets all its memory accepted/faulted
> >> > in, if the guest has around 16+ vCPUs.
> >> >
> >> > I've included the guest dmesg traces I was seeing below.
> >> >
> >> > In this case I was running a 32-vCPU guest with 200GB of memory on a
> >> > 256-thread EPYC (Milan) system, and can trigger the above situation fairly
> >> > reliably by running the following workload in a freshly-booted guest:
> >> >
> >> > stress --vm 32 --vm-bytes 5G --vm-keep
> >> >
> >> > Scaling up the number of stress threads and vCPUs should make it easier
> >> > to reproduce.
> >> >
> >> > Other than unresponsiveness/lockup messages until the memory is accepted,
> >> > the guest seems to continue running fine, but for large guests, where
> >> > unaccepted memory is more likely to be useful, it seems like it could be
> >> > an issue, especially when considering 100+ vCPU guests.
> >>
> >> Okay, sorry for the delay. It took time to reproduce it with TDX.
> >>
> >> I will look at what can be done.
> >
> > Could you check if the patch below helps?
> >
> > diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
> > index 853f7dc3c21d..591da3f368fa 100644
> > --- a/drivers/firmware/efi/unaccepted_memory.c
> > +++ b/drivers/firmware/efi/unaccepted_memory.c
> > @@ -8,6 +8,14 @@
> > /* Protects unaccepted memory bitmap */
> > static DEFINE_SPINLOCK(unaccepted_memory_lock);
> >
> > +struct accept_range {
> > +	struct list_head list;
> > +	unsigned long start;
> > +	unsigned long end;
> > +};
> > +
> > +static LIST_HEAD(accepting_list);
> > +
> > /*
> > * accept_memory() -- Consult bitmap and accept the memory if needed.
> > *
> > @@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> > {
> >  	struct efi_unaccepted_memory *unaccepted;
> >  	unsigned long range_start, range_end;
> > +	struct accept_range range, *entry;
> >  	unsigned long flags;
> >  	u64 unit_size;
> >
> > @@ -80,7 +89,25 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >
> >  	range_start = start / unit_size;
> >
> > +	range.start = start;
> > +	range.end = end;
> > +retry:
> >  	spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > +
> > +	list_for_each_entry(entry, &accepting_list, list) {
> > +		if (entry->end < start)
> > +			continue;
> > +		if (entry->start > end)
> > +			continue;
> > +		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> > +
> > +		/* Somebody else accepting the range */
> > +		cpu_relax();
>
> Should this rather be cond_resched()? I think cpu_relax() isn't enough to
> prevent soft lockups.
Right. For some reason I thought we could not call cond_resched() from
atomic context (we sometimes get here from atomic context), but we can.
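For illustration, the retry path could look roughly like this -- an untested
sketch only, which assumes the path is allowed to sleep and uses a made-up
helper name:

static void wait_for_accept_range(struct accept_range *range)
{
	struct accept_range *entry;
	unsigned long flags;

retry:
	spin_lock_irqsave(&unaccepted_memory_lock, flags);

	list_for_each_entry(entry, &accepting_list, list) {
		if (entry->end < range->start)
			continue;
		if (entry->start > range->end)
			continue;

		/* Somebody else is accepting (part of) the range */
		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);

		/* Yield the CPU instead of busy-looping to avoid soft lockups */
		cond_resched();
		goto retry;
	}

	/* Claim the range so that concurrent callers wait for us */
	list_add(&range->list, &accepting_list);
	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
}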
> Although IIUC hitting this should be rare, as the contending tasks will pick
> different ranges via try_to_accept_memory_one(), right?
Yes, it should be rare.
Generally, with the exception of memblock, we accept all memory in MAX_ORDER
chunks. As long as unit_size is no larger than a MAX_ORDER chunk, the page
allocator should never trigger the conflict, as the caller owns the full
range it accepts.
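For example (assumed typical values, not measured here): with 4K pages a
MAX_ORDER allocation is 2^10 * 4K = 4M, while unit_size is typically 2M, so a
single bitmap unit never spans two allocations and only the caller that owns
the chunk accepts it.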
I will test the idea with a larger unit_size to see how it behaves.
--
Kiryl Shutsemau / Kirill A. Shutemov