Re: [PATCH v3 00/16] TDX: Enable Dynamic PAMT
From: Kiryl Shutsemau
Date: Mon Sep 29 2025 - 07:17:45 EST
On Sun, Sep 28, 2025 at 09:34:14AM +0800, Yan Zhao wrote:
> On Sat, Sep 27, 2025 at 03:00:31AM +0800, Edgecombe, Rick P wrote:
> > On Fri, 2025-09-26 at 09:11 -0700, Dave Hansen wrote:
> > > If it can't return failure then the _only_ other option is to spin.
> > > Right?
> >
> > Yea, but you could spin around the SEAMCALL, or you could spin on
> > duplicate locks on the kernel side before making the SEAMCALL. Or, put
> > more generally, you could prevent contention before you make the
> > SEAMCALL. KVM also does this in other cases by kicking vCPUs out of
> > the TDX module via IPI.
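
For illustration, that kick can be as small as the following, using KVM's
generic kick-only request (nothing here is specific to this series, and
whether it fits this path is an assumption):

	/* Send IPIs to force every vCPU out of the TDX module; the
	 * request needs no handler, the forced exit itself is the point. */
	kvm_make_all_cpus_request(kvm, KVM_REQ_OUTSIDE_GUEST_MODE);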
> >
> > >
> > > I understand the reluctance to have such a nasty spin loop. But other
> > > than reworking the KVM code to do the retries at a higher level,
> >
> > Reworking the KVM code would be tough, although teaching KVM to fail
> > zap calls has come up before for TDX/gmem interactions. It was looked
> > at and deemed too complex. Now, I guess, the benefit side of the
> > equation changes a little bit, but doing it only for TDX might still
> > be a bridge too far.
> >
> > Unless anyone is holding onto another usage that might want this?
> >
> > > is there another option?
> >
> > I don't see why we can't just duplicate the locking in a more closely
> > matching way on the kernel side. Before this, the plan for someday
> > dropping the global lock, if needed, was to switch to 2MB-granular
> > locks to match the TDX module's internal exclusive-locking behavior.
> >
> > What Yan is basically pointing out is that there are also shared
> > locks taken on different ranges, which could contend with the
> > exclusive one we are duplicating on the kernel side.
> >
> > So the problem is not fundamental to the approach, I think. We just
> > took a shortcut by ignoring the shared locks. For line of sight to a
> > path to removing the global lock someday, I think we could make the
> > 2MB-granular locks reader/writer to match the TDX module. Then, around
> > the SEAMCALLs that take these locks, we could take them on the kernel
> > side in the right order for whichever SEAMCALL we are making.
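
A minimal sketch of that kernel-side mirroring, for TDH.PHYMEM.PAMT.ADD
with a single PAMT page (pamt_2m_lock() and tdh_phymem_pamt_add() are
made-up names, not from this series):

	/* Returns the rwlock covering the 2MB physical range of @pa. */
	static rwlock_t *pamt_2m_lock(unsigned long pa);

	static u64 pamt_add(unsigned long hpa_2m, unsigned long pamt_pa)
	{
		u64 err;

		/* Mirror the TDX module: exclusive on the 2MB range being
		 * covered, shared on the range the new PAMT page lives in. */
		write_lock(pamt_2m_lock(hpa_2m));
		read_lock(pamt_2m_lock(pamt_pa));

		err = tdh_phymem_pamt_add(hpa_2m, pamt_pa);

		read_unlock(pamt_2m_lock(pamt_pa));
		write_unlock(pamt_2m_lock(hpa_2m));
		return err;
	}

As Yan's scenario below shows, though, two CPUs can end up taking these
locks in AB-BA order: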
> Not sure if that would work.
>
> In the following scenario, where
> (a) adds PAMT pages B1, xx1 for A1's 2MB physical range.
> (b) adds PAMT pages A2, xx2 for B2's 2MB physical range.
>
> A1, B2 are not from the same 2MB physical range,
> A1, A2 are from the same 2MB physical range.
> B1, B2 are from the same 2MB physical range.
> Physical addresses of xx1, xx2 are irrelevant.
>
>
> CPU 0 CPU 1
> --------------------------------- -----------------------------
> write_lock(&rwlock-of-range-A1); write_lock(&rwlock-of-range-B2);
> read_lock(&rwlock-of-range-B1); read_lock(&rwlock-of-range-A2);
> ... ...
> (a) TDH.PHYMEM.PAMT.ADD(A1, B1, xx1) (b) TDH.PHYMEM.PAMT.ADD(B2, A2, xx2)
> ... ...
> read_unlock(&rwlock-of-range-B1); read_unlock(&rwlock-of-range-A2);
> write_unlock(&rwlock-of-range-A1); write_unlock(&rwlock-of-range-B2);
>
>
> To match the reader/writer locks in the TDX module, it looks like we
> may run into an AB-BA deadlock.
>
> Do you have any suggestions for a better approach?
>
> e.g., could the PAMT pages be allocated from a dedicated pool that ensures they
> reside in different 2MB ranges from guest private pages and TD control pages?
It can work: allocate 2M at a time for PAMT and hand it out to the TDX
module piecemeal as needed. But it means that if a 2M allocation fails,
TDX is not functional. Maybe just use a dedicated kmem_cache for PAMT
allocations. Although, I am not sure there's a way to tell a kmem_cache
which pages to request from the page allocator.
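
Roughly along these lines (a sketch only; pamt_pool_refill() and friends
are made-up names, and error/teardown handling is elided):

	static LIST_HEAD(pamt_free_list);
	static DEFINE_SPINLOCK(pamt_pool_lock);

	/* Free 4K chunks keep their freelist linkage in-place. */
	struct pamt_chunk {
		struct list_head node;
	};

	static int pamt_pool_refill(void)
	{
		struct page *page;
		void *base;
		int i;

		/* One naturally aligned 2M block: PAMT pages carved from
		 * it can never share a 2M range with guest private pages
		 * or TD control pages. If this order-9 allocation fails,
		 * dynamic PAMT cannot make progress. */
		page = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(SZ_2M));
		if (!page)
			return -ENOMEM;

		base = page_address(page);
		spin_lock(&pamt_pool_lock);
		for (i = 0; i < SZ_2M / PAGE_SIZE; i++) {
			struct pamt_chunk *c = base + i * PAGE_SIZE;

			list_add(&c->node, &pamt_free_list);
		}
		spin_unlock(&pamt_pool_lock);
		return 0;
	}
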
--
Kiryl Shutsemau / Kirill A. Shutemov