Re: [kernel-hardening] Re: [RFC v4 PATCH 00/13] HARDENED_ATOMIC
From: Peter Zijlstra
Date: Fri Nov 11 2016 - 08:00:51 EST
On Fri, Nov 11, 2016 at 01:47:55PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 11, 2016 at 12:41:27PM +0000, Mark Rutland wrote:
> > Regardless of atomic_t semantics, a refcount_t would be far more obvious
> > to developers than atomic_t and/or kref, and would better document the
> > intent of code using it.
> >
> > We'd still see abuse of atomic_t (and so this won't solve the problems
> > Kees mentioned), but even as something orthogonal I think that would
> > make sense to have.
>
> Furthermore, you could implement that refcount_t stuff using
> atomic_cmpxchg() in generic code. While that is sub-optimal for ll/sc
> architectures, you at least get generic code that works as a starting point.
>
> Also, I suspect that if your refcounts are heavily contended, you'll
> have other problems than the performance of these primitives.
>
> Code for refcount_inc(), refcount_inc_not_zero() and
> refcount_sub_and_test() can be copy-pasted from the kref patch I sent
> yesterday.
A wee bit like so...
---
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
new file mode 100644
index 000000000000..d1eae0d2345e
--- /dev/null
+++ b/include/linux/refcount.h
@@ -0,0 +1,75 @@
+#ifndef _LINUX_REFCOUNT_H
+#define _LINUX_REFCOUNT_H
+
+#include <linux/atomic.h>
+
+typedef struct refcount_struct {
+	atomic_t refs;
+} refcount_t;
+
+static inline void refcount_inc(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		WARN_ON_ONCE(!val);
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+}
+
+static inline bool refcount_inc_not_zero(refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		if (!val)
+			return false;
+
+		new = val + 1;
+		if (new < val)
+			BUG(); /* overflow */
+
+		old = atomic_cmpxchg_relaxed(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return true;
+}
+
+static inline bool refcount_sub_and_test(int i, refcount_t *r)
+{
+	unsigned int old, new, val = atomic_read(&r->refs);
+
+	for (;;) {
+		new = val - i;
+		if (new > val)
+			BUG(); /* underflow */
+
+		old = atomic_cmpxchg_release(&r->refs, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	return !new;
+}
+
+static inline bool refcount_dec_and_test(refcount_t *r)
+{
+	return refcount_sub_and_test(1, r);
+}
+
+#endif /* _LINUX_REFCOUNT_H */
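
For illustration, a minimal sketch of how a subsystem might use this API in
place of open-coded atomic_t reference counting. struct foo, foo_alloc(),
foo_get(), foo_tryget() and foo_put() are made-up names, not part of the
patch; and since the patch above provides no refcount_set(), the sketch
initialises the underlying atomic_t directly.

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t refs;
	/* ... payload ... */
};

static struct foo *foo_alloc(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (f)
		atomic_set(&f->refs.refs, 1);	/* initial reference */
	return f;
}

/* Take a reference; overflow is trapped with BUG() inside refcount_inc(). */
static struct foo *foo_get(struct foo *f)
{
	refcount_inc(&f->refs);
	return f;
}

/*
 * Take a reference only if the object is still live, as lookup paths that
 * may observe a dying object do today with atomic_inc_not_zero().
 */
static struct foo *foo_tryget(struct foo *f)
{
	return refcount_inc_not_zero(&f->refs) ? f : NULL;
}

/* Drop a reference; free the object on the 1 -> 0 transition. */
static void foo_put(struct foo *f)
{
	if (refcount_dec_and_test(&f->refs))
		kfree(f);
}

Note the ordering choices in the patch: both increment paths use a relaxed
cmpxchg, while refcount_sub_and_test() uses a release cmpxchg so that stores
to the object cannot be reordered past the final decrement that allows it to
be freed.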