Re: [PATCH] arp_queue: serializing unlink + kfree_skb

From: David S. Miller
Date: Thu Feb 03 2005 - 18:21:00 EST


On Fri, 4 Feb 2005 01:27:05 +1100
Anton Blanchard <anton@xxxxxxxxx> wrote:

> It's difficult stuff, everyone gets it wrong and Andrew keeps
> hassling me to write up a document explaining it.

Ok, here goes nothing. Can someone run with this? It should
be rather complete, and require only minor editorial work.

--- atomic_ops.txt ---

This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter and bitops operations
properly.

The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to
a normal C integer type will fail. Something like the following
should suffice:

typedef struct { volatile int counter; } atomic_t;

The first operations to implement for atomic_t's are
the initializers and plain reads.

#define ATOMIC_INIT(i) { (i) }
#define atomic_set(v, i) ((v)->counter = (i))

The first macro is used in definitions, such as:

static atomic_t my_counter = ATOMIC_INIT(1);

The second interface can be used at runtime, as in:

k = kmalloc(sizeof(*k), GFP_KERNEL);
if (!k)
	return -ENOMEM;
atomic_set(&k->counter, 0);

Next, we have:

#define atomic_read(v) ((v)->counter)

which simply reads the current value of the counter.

Now, we move onto the actual atomic operation interfaces.

void atomic_add(int i, atomic_t *v);
void atomic_sub(int i, atomic_t *v);
void atomic_inc(atomic_t *v);
void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value. The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.
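
For example, a pure statistics counter needs exactly this and
nothing more.  The structure and field names below are invented
for illustration:

struct rx_stats {
	atomic_t rx_packets;
};

static void count_rx_packet(struct rx_stats *stats)
{
	/* Only the counter update itself must be atomic;
	 * no ordering with surrounding memory operations
	 * is implied or needed here.
	 */
	atomic_inc(&stats->rx_packets);
}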

Next, we have:

int atomic_inc_return(atomic_t *v);
int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from
the given atomic_t and return the new counter value after
the operation is performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation. It must
be done such that all memory operations before and after the
atomic operation calls are strongly ordered with respect to the
atomic operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide
explicit memory barrier semantics which satisfy the above requirements,
that is fine as well.
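
As an illustrative sketch (the names here are invented), a global
ID allocator could rely on these semantics so that any setup
performed before allocating an ID is visible to other cpus before
the new ID can be observed:

static atomic_t next_txn_id = ATOMIC_INIT(0);

static int alloc_txn_id(void)
{
	/* The implied barriers order all prior memory
	 * operations before the counter update, and the
	 * update before all subsequent memory operations.
	 */
	return atomic_inc_return(&next_txn_id);
}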

Let's move on:

int atomic_add_return(int i, atomic_t *v);
int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

int atomic_inc_and_test(atomic_t *v);
int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively,
the given atomic counter. They return a boolean indicating
whether the resulting counter value was zero or not.

They require explicit memory barrier semantics around the
operation, as above.
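
The canonical use of atomic_dec_and_test() is the release side of
a reference counting scheme.  A minimal sketch, with an invented
object type:

struct myobj {
	atomic_t ref_count;
	/* ... */
};

static void myobj_put(struct myobj *obj)
{
	/* If the count hits zero, we held the last
	 * reference and may free the object.
	 */
	if (atomic_dec_and_test(&obj->ref_count))
		kfree(obj);
}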

int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1". It requires explicit
memory barrier semantics around the operation.

int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.
A boolean is returned which indicates whether the resulting counter
value is negative.  It requires explicit memory barrier semantics
around the operation.
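
One plausible use, sketched here with invented names, is a counter
where a negative result signals contention, as in some
semaphore-style fast paths:

static int example_down_trylock(atomic_t *count)
{
	/* Drop the count; a negative result means another
	 * party got there first and the caller must take
	 * the slow path.
	 */
	return atomic_add_negative(-1, count);
}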

If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces
are defined which accomplish this:

void smp_mb__before_atomic_dec(void);
void smp_mb__after_atomic_dec(void);
void smp_mb__before_atomic_inc(void);
void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

obj->dead = 1;
smp_mb__before_atomic_dec();
atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment
of "1" to obj->dead will be globally visible to other cpus before
the atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally make the atomic counter update
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec()
call (smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).
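
Correspondingly, smp_mb__after_atomic_dec() orders the counter
update before subsequent memory operations.  A short sketch,
reusing the obj example from above (the teardown_done field is
invented):

atomic_dec(&obj->ref_count);
smp_mb__after_atomic_dec();
/* The decrement is globally visible before this store. */
obj->teardown_done = 1;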

A missing memory barrier in the cases where they are required
by the atomic_t implementation above can have disastrous
results.  Here is an example, which follows a pattern occurring
frequently in the Linux kernel.  It is the use of atomic counters
to implement reference counting, and it works such that once
the counter falls to zero it can be guaranteed that no other
entity can be accessing the object.  Observe:

list_del(&obj->list);
if (atomic_dec_and_test(&obj->ref_count))
	kfree(obj);

Here, the list (say it is some linked list on which object
searches are performed) creates the reference to the object.
The insertion code probably looks something like this:

atomic_inc(&obj->ref_count);
list_add(&obj->list, &global_obj_list);

And searches look something like:

for_each(obj, &global_obj_list) {
	if (key_compare(obj->key, key)) {
		atomic_inc(&obj->ref_count);
		return obj;
	}
}
return NULL;

Given the above scheme, the list_del() must be visible to other
processors before the atomic counter decrement is performed.
Otherwise, the counter could fall to zero, yet the object would
still be visible for lookup on the linked list.  So we'd get a
bogus sequence like this:

cpu 0                         cpu 1
list_del(&obj->list);
... visibility delayed ...
                              lookup and find obj on
                              global_obj_list
atomic_dec_and_test();
obj refcount hits zero, this
is visible globally
                              atomic_inc()
                              obj refcount incremented
                              to one
                              list_del() becomes visible

kfree(obj);
obj is now freed up memory
                              --> CRASH

With the memory barrier semantics required of the atomic_t
operations which return values, the above sequence of memory
visibility can never happen.

We will now cover the atomic bitmask operations. You will find
that their SMP and memory barrier semantics are similar in shape
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects
aligned to the size of an "unsigned long" C data type, and are
at least that size.  The endianness of the bits within each
"unsigned long" is the native endianness of the cpu.

void set_bit(unsigned long nr, volatile unsigned long *addr);
void clear_bit(unsigned long nr, volatile unsigned long *addr);
void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit
number indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory
barrier semantics required of these interfaces.
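
For example, marking an object dirty needs only the atomicity,
not any ordering (the flag name and bit number are invented):

#define OBJ_DIRTY	0

static void mark_dirty(unsigned long *flags)
{
	/* Atomic with respect to other bitops on *flags,
	 * but no memory barrier is implied.
	 */
	set_bit(OBJ_DIRTY, flags);
}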

long test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
long test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
long test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean
which indicates whether the changed bit was set _BEFORE_
the atomic bit operation.

These routines, like the atomic_t counter operations returning
values, require explicit memory barrier semantics around their
execution. All memory operations before the atomic bit operation
call must be made visible globally before the atomic bit operation
is made visible. Likewise, the atomic bit operation must be
visible globally before any subsequent memory operation is made
visible. For example:

obj->dead = 1;
if (test_and_set_bit(0, &obj->flags))
	/* ... */;
obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic
memory operation done by test_and_set_bit() becomes visible.
Likewise, the atomic memory operation done by test_and_set_bit()
must become visible before "obj->killed = 1;" is visible.
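
A common pattern built upon these semantics is a simple bit lock.
A sketch, with an invented bit name:

#define OBJ_LOCKED	0

static int trylock_obj(unsigned long *flags)
{
	/* Nonzero means we took the lock; the implied
	 * barriers keep the critical section from leaking
	 * out ahead of the acquisition.
	 */
	return !test_and_set_bit(OBJ_LOCKED, flags);
}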

Finally there is the basic operation:

long test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

This returns a boolean indicating whether bit "nr" is set in the
bitmask pointed to by "addr".

If explicit memory barriers are required around clear_bit()
(which does not return a value, and thus does not need to
provide memory barrier semantics), two interfaces are provided:

void smp_mb__before_clear_bit(void);
void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t
operation brothers:

/* All memory operations before this call will
* be globally visible before the clear_bit().
*/
smp_mb__before_clear_bit();
clear_bit( ... );

/* The clear_bit() will be visible before all
* subsequent memory operations.
*/
smp_mb__after_clear_bit();
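
Continuing the bit lock sketch from above, this pair provides
exactly what the release side needs:

static void unlock_obj(unsigned long *flags)
{
	/* All stores from the critical section must be
	 * visible before the lock bit is seen clear.
	 */
	smp_mb__before_clear_bit();
	clear_bit(OBJ_LOCKED, flags);
}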

Finally, there are non-atomic versions of the bitmask operations
provided. They are used in contexts where some other higher-level
SMP locking scheme is being used to protect the bitmask, and
thus less expensive non-atomic operations may be used in the
implementation. They have names similar to the above bitmask
operation interfaces, except that two underscores are prefixed
to the interface name.

void __set_bit(unsigned long nr, volatile unsigned long *addr);
void __clear_bit(unsigned long nr, volatile unsigned long *addr);
void __change_bit(unsigned long nr, volatile unsigned long *addr);
long __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
long __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
long __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special
memory barrier semantics.
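
For example, when a spinlock already serializes all access to a
bitmap, the cheaper variants suffice (the lock, bitmap, and bit
number here are invented for illustration):

spin_lock(&map_lock);
/* The lock serializes all access to the bitmap, so the
 * non-atomic variant is safe and cheaper.
 */
__set_bit(nr, bitmap);
spin_unlock(&map_lock);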