Re: [RFC] making nested spin_trylock() work on UP?

From: Matthew Wilcox

Date: Sat Feb 14 2026 - 01:29:07 EST


On Fri, Feb 13, 2026 at 12:57:43PM +0100, Vlastimil Babka wrote:
> The page allocator has been using a locking scheme for its percpu page
> caches (pcp) for years now, based on spin_trylock() with no _irqsave() part.
> The point is that if we interrupt the locked section, we fail the trylock
> and just fall back to something that's more expensive, but that's rare, so we
> don't need to pay the irqsave cost all the time in the fastpaths.
>
> It's similar to, but not the same as, local_trylock_t (which is also newer
> anyway), because in some cases we lock the pcp of a non-local cpu to flush
> it, which is cheaper than an IPI or queue_work_on().
>
> The complication with this scheme has been the UP non-debug spinlock
> implementation, which assumes spin_trylock() can't fail on UP and keeps no
> state to track whether the lock is held. It just doesn't anticipate this
> usage scenario. So to work around that we disable IRQs on UP, complicating
> the implementation. We also recently found a years-old bug there - see
> 038a102535eb ("mm/page_alloc: prevent pcp corruption with SMP=n").
>
> So my question is whether we could have a spinlock implementation that
> supports this nested spin_trylock() usage, or whether the UP optimization
> is still considered too important to lose. I was thinking:
>
> - remove the UP implementation completely - would it increase the overhead
> on SMP=n systems too much and do we still care?
>
> - make the non-debug implementation a bit like the debug one so we do have
> the 'locked' state (see include/linux/spinlock_up.h and lock->slock). This
> also adds some overhead but not as much as the full SMP implementation?

What if we used an atomic_t on UP to simulate a spinlock, but only
for the pcp? Your demo shows pcp_spin_trylock() continuing to exist,
so how about something like:

#ifdef CONFIG_SMP
#define pcp_spin_trylock(ptr) \
({ \
	struct per_cpu_pages *__ret; \
	__ret = pcpu_spin_trylock(struct per_cpu_pages, lock, ptr); \
	__ret; \
})
#else
static atomic_t pcp_UP_lock = ATOMIC_INIT(0);
#define pcp_spin_trylock(ptr) \
({ \
	struct per_cpu_pages *__ret = NULL; \
	int __old = 0; \
	/* atomic_try_cmpxchg() wants a pointer to the old value */ \
	if (atomic_try_cmpxchg(&pcp_UP_lock, &__old, 1)) \
		__ret = (void *)&pcp_UP_lock; \
	__ret; \
})
#endif

(obviously you need pcp_spin_lock()/pcp_spin_unlock() defined as well)

That only costs us 4 extra bytes on UP, rather than 4 bytes per spinlock.
And some people still use routers with tiny amounts of memory and a
single CPU, or retrocomputers with single CPUs.