Re: [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited
From: Suren Baghdasaryan
Date: Sat Jan 11 2025 - 05:00:57 EST
On Sat, Jan 11, 2025 at 1:59 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Fri, Jan 10, 2025 at 10:32 PM Hillf Danton <hdanton@xxxxxxxx> wrote:
> >
> > On Fri, 10 Jan 2025 20:25:57 -0800 Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > > -bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
> > > +bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
> > > + int limit)
> > > {
> > > int old = refcount_read(r);
> > >
> > > do {
> > > if (!old)
> > > break;
> > > +
> > > + if (statically_true(limit == INT_MAX))
> > > + continue;
> > > +
> > > + if (i > limit - old) {
> > > + if (oldp)
> > > + *oldp = old;
> > > + return false;
> > > + }
> > > } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));
> >
> > The acquire version should be used, see atomic_long_try_cmpxchg_acquire()
> > in kernel/locking/rwsem.c.
>
> This is how __refcount_add_not_zero() is already implemented and I'm
> only adding support for a limit. If you think it's implemented wrong
> then IMHO it should be fixed separately.
>
> >
> > Why not use the atomic_long_t without bothering to add this limited version?
>
> The check against the limit is not only for overflow protection but
> also to avoid refcount increment when the writer bit is set. It makes
> the locking code simpler if we have a function that prevents
> refcounting when the vma is detached (vm_refcnt==0) or when it's
> write-locked (vm_refcnt<VMA_REF_LIMIT).
s/vm_refcnt<VMA_REF_LIMIT/vm_refcnt>VMA_REF_LIMIT/
>