Re: [PATCH] percpu_counter: Fix __percpu_counter_sum()
From: Eric Dumazet
Date: Wed Dec 10 2008 - 17:58:00 EST
Andrew Morton wrote:
> On Wed, 10 Dec 2008 06:09:08 +0100 Eric Dumazet <dada1@xxxxxxxxxxxxx> wrote:
>
>> Now that percpu_counter_sum() is 'fixed', what about percpu_counter_add()?
>>
>> void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
>> {
>> 	s64 count;
>> 	s32 *pcount;
>> 	int cpu = get_cpu();
>>
>> 	pcount = per_cpu_ptr(fbc->counters, cpu);
>> 	count = *pcount + amount;
>> 	if (count >= batch || count <= -batch) {
>> 		spin_lock(&fbc->lock);
>> 		fbc->count += count;
>> 		*pcount = 0;
>> 		spin_unlock(&fbc->lock);
>> 	} else {
>> 		*pcount = count;
>> 	}
>> 	put_cpu();
>> }
>>
>>
>> If I read this correctly, this is not IRQ-safe.
>
> Sure. It's racy against interrupts on this cpu, it'll deadlock over
> the non-irq-safe spinlock and lockdep will have a coronary over it.
>
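To make that deadlock concrete, here is a hypothetical interleaving on one
CPU ("nr_packets" and the use of FBC_BATCH are made up for illustration,
assuming the counter is also updated from interrupt context):

	__percpu_counter_add(&nr_packets, 1, FBC_BATCH); /* process context */
	    spin_lock(&fbc->lock);      /* lock taken, IRQs still enabled */
	        ... interrupt arrives on this CPU ...
	        __percpu_counter_add(&nr_packets, 1, FBC_BATCH);
	            spin_lock(&fbc->lock);  /* spins forever: the lock owner
	                                     * cannot run again until this
	                                     * handler returns */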
>> get_cpu() only disables preemption IMHO
>
> yes
>
>> For nr_files, nr_dentry, nr_inodes, it should not be a problem.
>
> yes
>
>> But for the network counters (only in net-next-2.6)
>> and lib/proportions.c, do we have a problem?
>
> yes
>
>> Using local_t instead of s32 for the per-cpu
>> local counter here is possible, so that the fast path doesn't have
>> to disable interrupts
>>
>> (use a local_t instead of s32 for fbc->counters)
>>
>> void __percpu_counter_add_irqsafe(struct percpu_counter *fbc, s64 amount, s32 batch)
>> {
>> 	long count;
>> 	local_t *pcount;
>>
>> 	/* the following only matters on 32bit arches, where a s64 amount
>> 	 * may not fit in a long-sized local_t */
>> 	if (sizeof(amount) != sizeof(local_t)) {
>> 		if (unlikely(amount >= batch || amount <= -batch)) {
>> 			unsigned long flags;
>>
>> 			spin_lock_irqsave(&fbc->lock, flags);
>> 			fbc->count += amount;
>> 			spin_unlock_irqrestore(&fbc->lock, flags);
>> 			return;
>> 		}
>> 	}
>> 	pcount = per_cpu_ptr(fbc->counters, get_cpu());
>> 	count = local_add_return((long)amount, pcount);
>> 	if (unlikely(count >= batch || count <= -batch)) {
>> 		unsigned long flags;
>>
>> 		local_sub(count, pcount);
>> 		spin_lock_irqsave(&fbc->lock, flags);
>> 		fbc->count += count;
>> 		spin_unlock_irqrestore(&fbc->lock, flags);
>> 	}
>> 	put_cpu();
>> }
>
>
> I think it's reasonable. If the batching is working as intended, the
> increased cost of s/spin_lock/spin_lock_irqsave/ should be
> insignificant.
>
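(Indeed: with a batch of 32, for example, a CPU doing one million
percpu_counter_add(+1) calls takes fbc->lock only 1000000/32 = 31250
times, so whatever extra the irqsave variant costs on that slow path is
diluted by the batch factor.)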
> In fact, if *at all* possible it would be best to make percpu_counters
> irq-safe under all circumstances and avoid fattening and complicating the
> interface.
>
>
>
> But before adding more dependencies on local_t I do think we should
> refresh ourselves on Christoph's objections to them - I remember
> finding them fairly convincing at the time, but I don't recall the
> details.
>
> <searches for a long time>
>
> Here, I think:
> http://lkml.indiana.edu/hypermail/linux/kernel/0805.3/2482.html
>
> Rusty, Christoph: talk to me. If we add a new user of local_t in core
> kernel, will we regret it?
>
__percpu_counter_add() already disables preemption (it calls get_cpu()).
But then, some (all but x86 ;) ) arches don't have a true local_t and fall back
to plain atomic_long_t, and that is wrong here because it would put a LOCKED
instruction in the fast path.
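For reference, the generic fallback looks roughly like this (abridged from
include/asm-generic/local.h, from memory), so every local_add_return()
becomes a full atomic read-modify-write:

	/* without an arch-specific local_t, fall back to atomic_long_t */
	typedef struct {
		atomic_long_t a;
	} local_t;

	#define local_add_return(i, l)	atomic_long_add_return((i), (&(l)->a))
	#define local_sub(i, l)		atomic_long_sub((i), (&(l)->a))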
I remember Christoph added FAST_CMPXCHG_LOCAL, but there are no remaining
users of it in the current tree.
I.e., use local_t only if CONFIG_FAST_CMPXCHG_LOCAL is set, else something like:
void __percpu_counter_add_irqsafe(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;
	s32 *pcount = per_cpu_ptr(fbc->counters, get_cpu());
	unsigned long flags;

	/* cover the read-modify-write of the per-cpu slot against IRQs */
	local_irq_save(flags);
	count = *pcount + amount;
	if (unlikely(count >= batch || count <= -batch)) {
		/* slow path: fold the local batch into the global count.
		 * IRQs are already off, a plain spin_lock is enough. */
		spin_lock(&fbc->lock);
		fbc->count += count;
		spin_unlock(&fbc->lock);
		count = 0;
	}
	*pcount = count;
	local_irq_restore(flags);
	put_cpu();
}
EXPORT_SYMBOL(__percpu_counter_add_irqsafe);
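For illustration, a caller could then update the counter straight from an
interrupt handler; "nr_packets" and the batch of 32 below are made-up
names, not from an actual tree:

	static struct percpu_counter nr_packets;

	static irqreturn_t example_rx_interrupt(int irq, void *dev_id)
	{
		/* safe here: the per-cpu update window is covered by
		 * local_irq_save() inside __percpu_counter_add_irqsafe() */
		__percpu_counter_add_irqsafe(&nr_packets, 1, 32);
		return IRQ_HANDLED;
	}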