Re: [RFC PATCH v5 16/45] x86/virt/tdx: Add tdx_alloc/free_control_page() helpers
From: Edgecombe, Rick P
Date: Tue Feb 10 2026 - 19:50:43 EST
On Tue, 2026-02-10 at 09:44 -0800, Dave Hansen wrote:
> slow_path = atomic_dec_and_lock(fine-grained-refcount,
> pamt_lock)
> if (!slow_path)
> goto out;
I guess if it returns 0, the lock is not held, so we can just return instead of
"goto out" (which would spin_unlock() a lock we don't hold).
>
> // fine-grained-refcount==0 and must stay that way with
> // pamt_lock held. Remove the DPAMT pages:
> tdh_phymem_pamt_remove(page, pamt_pa_array)
> out:
> spin_unlock(pamt_lock)
>
> On the acquire side, you do:
>
> fast_path = atomic_inc_not_zero(fine-grained-refcount)
> if (fast_path)
> return;
>
> // slow path:
> spin_lock(pamt_lock)
>
> // Was the race lost with another 0=>1 increment?
> if (atomic_read(fine-grained-refcount) > 0)
> goto out_inc
>
> tdh_phymem_pamt_add(page, pamt_pa_array)
> // Inc after the TDCALL so another thread won't race ahead of us
> // and try to use a non-existent PAMT entry
> out_inc:
> atomic_inc(fine-grained-refcount)
> spin_unlock(pamt_lock)
>
> Then, at least only the 0=>1 and 1=>0 transitions need the global lock.
> The fast paths only touch the refcount which isn't shared nearly as much
> as the global lock.
>
> BTW, this probably still needs to be spin_lock_irq(), not what I wrote
> above, but that's not a big deal to add.
>
> I've stared at this for a bit and don't see any holes. Does anyone else
> see any?
I don't see any issues. It is largely similar to the version in the next patch,
except that we don't need to handle the HPA_RANGE_NOT_FREE case specially, and
it manages that without taking the global lock in any additional cases. So it
seems like a nice code reduction.
It is probably still worth keeping the comment about the get/put race; I'll see
where I can slot it in.