Re: [PATCHv4 14/17] zsmalloc: make zspage lock preemptible

From: Sergey Senozhatsky
Date: Thu Feb 06 2025 - 21:49:09 EST


On (25/02/06 16:19), Yosry Ahmed wrote:
> > static void zspage_read_lock(struct zspage *zspage)
> > {
> > 	atomic_t *lock = &zspage->lock;
> > 	int old = atomic_read_acquire(lock);
> >
> > 	do {
> > 		if (old == ZS_PAGE_WRLOCKED) {
> > 			cpu_relax();
> > 			old = atomic_read_acquire(lock);
> > 			continue;
> > 		}
> > 	} while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
> >
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > 	rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > #endif
> > }
> >
> > static void zspage_read_unlock(struct zspage *zspage)
> > {
> > 	atomic_dec_return_release(&zspage->lock);
> >
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > 	rwsem_release(&zspage->lockdep_map, _RET_IP_);
> > #endif
> > }
> >
> > static bool zspage_try_write_lock(struct zspage *zspage)
> > {
> > 	atomic_t *lock = &zspage->lock;
> > 	int old = ZS_PAGE_UNLOCKED;
> >
> > 	preempt_disable();
> > 	if (atomic_try_cmpxchg_acquire(lock, &old, ZS_PAGE_WRLOCKED)) {
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > 		rwsem_acquire(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > #endif
> > 		return true;
> > 	}
> >
> > 	preempt_enable();
> > 	return false;
> > }
> >
> > static void zspage_write_unlock(struct zspage *zspage)
> > {
> > 	atomic_set_release(&zspage->lock, ZS_PAGE_UNLOCKED);
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > 	rwsem_release(&zspage->lockdep_map, _RET_IP_);
> > #endif
> > 	preempt_enable();
> > }
> > ---
> >
> > Maybe I'll just copy-paste the locking rules list; a list is always cleaner.
>
> Thanks. I think it would be nice if we could also get someone with
> locking expertise to take a look at this.

Sure.
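
For anyone jumping in, the intended pairing is roughly the following
(illustrative only; the caller names below are made up stand-ins for the
real read-side and migration paths in the patch):

/*
 * Illustrative only: zs_obj_read() and zs_migrate_zspage() are made-up
 * names standing in for the actual read-side and migration paths.
 */
static void zs_obj_read(struct zspage *zspage)
{
	/* read side is preemptible and counting: multiple readers can hold it */
	zspage_read_lock(zspage);
	/* ... access objects backed by this zspage ... */
	zspage_read_unlock(zspage);
}

static bool zs_migrate_zspage(struct zspage *zspage)
{
	/*
	 * write side is a trylock and keeps preemption disabled for as
	 * long as the lock is held
	 */
	if (!zspage_try_write_lock(zspage))
		return false;	/* readers are active, back off */

	/* ... migrate objects / replace backing pages ... */

	zspage_write_unlock(zspage);
	return true;
}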

I moved the lockdep acquire/release annotations before the atomic ops
(except for the trylock case), as Sebastian suggested in the zram sub-thread.
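
Roughly this shape (a sketch of the read paths only, not necessarily the
final code): the acquire annotation is taken before we potentially spin,
so lockdep can complain before an actual deadlock, and the release
annotation is done before the atomic that drops the lock:

static void zspage_read_lock(struct zspage *zspage)
{
	atomic_t *lock = &zspage->lock;
	int old = atomic_read_acquire(lock);

#ifdef CONFIG_DEBUG_LOCK_ALLOC
	/* annotate the attempt before we possibly spin on a writer */
	rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
#endif

	do {
		if (old == ZS_PAGE_WRLOCKED) {
			cpu_relax();
			old = atomic_read_acquire(lock);
			continue;
		}
	} while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
}

static void zspage_read_unlock(struct zspage *zspage)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	/* annotate the release before the lock actually goes away */
	rwsem_release(&zspage->lockdep_map, _RET_IP_);
#endif
	atomic_dec_return_release(&zspage->lock);
}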

[..]
> Seems like we have to compromise either way: custom locking, or we enter
> into a new complexity realm with RCU freeing.

Let's take the blue pill? :)