Re: [PATCH 2/6] mm/swap: Introduce alternative per-cpu LRU cache locking

From: nsaenzju
Date: Wed Sep 22 2021 - 05:50:14 EST


On Wed, 2021-09-22 at 11:20 +0200, Sebastian Andrzej Siewior wrote:
> On 2021-09-22 10:47:07 [+0200], nsaenzju@xxxxxxxxxx wrote:
> > > *why* use migrate_disable(), that's horrible!
> >
> > I was trying to be mindful of RT. They don't appreciate people taking spinlocks
> > just after having disabled preemption.
> >
> > I think getting local_lock(&locks->local) is my only option then. But it adds
> > an extra redundant spinlock in the RT+NOHZ_FULL case.
>
> spin_lock() does not disable preemption on PREEMPT_RT. You don't
> disable preemption on purpose, or did I miss that?

Sorry my message wasn't clear. Adding code for context:

+ static inline void lru_cache_lock(struct lru_cache_locks *locks)
+ {
+	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
+		/* Avoid migration between this_cpu_ptr() and spin_lock() */
+		migrate_disable();

IIUC PeterZ would've preferred that I disable preemption here. What I meant
to explain is that, given the spin_lock() below, I chose migrate_disable() over
preempt_disable() to cater for RT: on PREEMPT_RT that spinlock is a sleeping
lock, and taking it with preemption disabled isn't allowed.

+		spin_lock(this_cpu_ptr(&locks->spin));
+	} else {
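
For completeness, the unlock side (not quoted above) mirrors this, pairing
spin_unlock() with migrate_enable(); roughly (the else branches assume the
plain local_lock path):

+ static inline void lru_cache_unlock(struct lru_cache_locks *locks)
+ {
+	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
+		spin_unlock(this_cpu_ptr(&locks->spin));
+		/* Pairs with the migrate_disable() in lru_cache_lock() */
+		migrate_enable();
+	} else {
+		local_unlock(&locks->local);
+	}
+ }

So the task stays pinned to the CPU whose lock it holds for the whole
critical section.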


So, to make both worlds happy, I think the only option left is to use the
local_lock (at the expense of extra overhead in the RT+NOHZ_FULL case):

+	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
+		/* Local lock avoids migration between this_cpu_ptr() and spin_lock() */
+		local_lock(&locks->local);
+		spin_lock(this_cpu_ptr(&locks->spin));
+	} else {
--
Nicolás Sáenz