[PATCH] kref: prefer atomic_inc_not_zero to atomic_add_unless
From: Jason A. Donenfeld
Date: Thu Dec 15 2016 - 13:56:23 EST
On most platforms, atomic_inc_not_zero is just this fallback definition:
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
On those platforms this patch is therefore a functional no-op. However, PPC
provides an explicit definition of atomic_inc_not_zero, with its own assembly
that is slightly more optimized than atomic_add_unless. So this patch changes
kref to use atomic_inc_not_zero instead, for the benefit of PPC and of any
future platform that might provide an explicit implementation.
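For reference, the generic fallback looks roughly like the sketch below (a
simplified illustration, not quoted verbatim from any particular tree), which
is why an architecture-specific definition such as PPC's takes precedence
wherever one exists:

/*
 * Simplified sketch of the generic fallback, for illustration only.
 * Architectures that supply their own atomic_inc_not_zero() (e.g. PPC)
 * override this; everyone else gets the atomic_add_unless() form, so
 * the kref change below compiles to the same code there.
 */
#ifndef atomic_inc_not_zero
#define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)
#endif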
This also brings kref's usage more in line with a verbatim reading of the
examples in Paul McKenney's paper [1], specifically the section titled "2.4
Atomic Counting With Check and Release Memory Barrier", which uses
atomic_inc_not_zero.
[1] http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2167.pdf
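To illustrate, the check-and-release pattern from that section, expressed in
terms of kref, looks roughly like the sketch below. All of the names (foo,
foo_list, foo_lock, foo_find_get, foo_release) are made up for this example;
only kref_get_unless_zero/kref_put and the list/spinlock primitives are real
kernel APIs.

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical object type; entries are added to foo_list with their
 * kref initialized via kref_init(). */
struct foo {
	struct kref ref;
	struct list_head node;
	int id;
};

static LIST_HEAD(foo_list);
static DEFINE_SPINLOCK(foo_lock);

/* Lookup: only take a reference if the count has not already hit zero. */
static struct foo *foo_find_get(int id)
{
	struct foo *f;

	spin_lock(&foo_lock);
	list_for_each_entry(f, &foo_list, node) {
		if (f->id == id && kref_get_unless_zero(&f->ref)) {
			spin_unlock(&foo_lock);
			return f;
		}
	}
	spin_unlock(&foo_lock);
	return NULL;
}

/* Release: unlink under the same lock the lookup uses, then free.
 * Callers drop their reference with kref_put(&f->ref, foo_release)
 * while not holding foo_lock. */
static void foo_release(struct kref *ref)
{
	struct foo *f = container_of(ref, struct foo, ref);

	spin_lock(&foo_lock);
	list_del(&f->node);
	spin_unlock(&foo_lock);
	kfree(f);
}

The point of the pattern is that the lookup can never resurrect an object
whose count has already dropped to zero, which is exactly the guarantee that
atomic_inc_not_zero (and hence kref_get_unless_zero) provides.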
Signed-off-by: Jason A. Donenfeld <Jason@xxxxxxxxx>
Reviewed-by: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
---
Sorry to submit this again, but people keep reviewing it, saying it's fine,
and then pointing to somebody else to actually merge it. At the end of the
chain of finger-pointing is usually Greg: "Just have Greg do it." At this
point I'm confused, but the patch has certainly been sufficiently reviewed
and accepted. So can one of you just respond with "I'll take it!"?
include/linux/kref.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/kref.h b/include/linux/kref.h
index e15828fd71f1..62f0a84ae94e 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -133,6 +133,6 @@ static inline int kref_put_mutex(struct kref *kref,
*/
static inline int __must_check kref_get_unless_zero(struct kref *kref)
{
- return atomic_add_unless(&kref->refcount, 1, 0);
+ return atomic_inc_not_zero(&kref->refcount);
}
#endif /* _KREF_H_ */
--
2.11.0