Re: [patch 3/4] net: Percpufy frequently used variables -- proto.sockets_allocated

From: Eric Dumazet
Date: Fri Jan 27 2006 - 19:26:40 EST


Andrew Morton wrote:
> Eric Dumazet <dada1@xxxxxxxxxxxxx> wrote:
>> Ravikiran G Thirumalai wrote:
>>> On Fri, Jan 27, 2006 at 12:16:02PM -0800, Andrew Morton wrote:
>>>> Ravikiran G Thirumalai <kiran@xxxxxxxxxxxx> wrote:
>>>>> which can be assumed as not frequent. At sk_stream_mem_schedule(), read_sockets_allocated() is invoked only under certain conditions, under memory pressure -- on a large CPU count machine you'd have large memory, and I don't think read_sockets_allocated() would get called often. At least it did not on our 8-CPU/16G box. So this should be OK, I think.
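
As an illustration of that call pattern (a hypothetical condensation, not the actual sk_stream_mem_schedule() source -- only read_sockets_allocated() comes from the patch under discussion), the accurate read sits behind a cheap fast-path check and is reached only under pressure:

/* Hypothetical sketch: the costly accurate read of sockets_allocated
 * is reached only after the cheap fast-path test has already failed,
 * i.e. only under memory pressure. */
int stream_mem_schedule_sketch(struct sock *sk, int amt)
{
	struct proto *prot = sk->sk_prot;

	atomic_add(amt, prot->memory_allocated);

	/* fast path: below the low-water mark, no expensive read */
	if (atomic_read(prot->memory_allocated) < prot->sysctl_mem[0])
		return 1;

	/* slow path, under pressure: the per-cpu fold is now worth it */
	if (read_sockets_allocated(prot) < prot->sysctl_mem[2])
		return 1;

	return 0;
}
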
>>>> That being said, the percpu_counters aren't a terribly successful concept
>>>> and probably do need a revisit due to the high inaccuracy at high CPU
>>>> counts. It might be better to do some generic version of vm_acct_memory()
>>>> instead.
>>> AFAICS vm_acct_memory is no better. The deviation at large CPU counts is the same as percpu_counters -- (NR_CPUS * NR_CPUS * 2) ...
>> Ah... yes you are right, I read min(16, NR_CPUS*2)

> So did I ;)

>> I wonder if it is not a typo... I mean, I understand that the more CPUs you have, the fewer updates on the central atomic_t are desirable, but a quadratic offset seems too much...
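
For reference, the threshold in question, as it looked in the 2.6-era mm/swap.c (an approximate quote from memory, so treat the exact form as an assumption):

/* Approximate quote of the vm_acct_memory() batch threshold: each
 * cpu-local counter may drift up to ACCT_THRESHOLD before being
 * flushed into the global atomic, so the worst-case total deviation
 * is NR_CPUS * ACCT_THRESHOLD = NR_CPUS * NR_CPUS * 2 on large
 * machines -- the quadratic offset questioned above. Read max() as
 * min() and you would instead get a per-cpu batch capped at 16. */
#define ACCT_THRESHOLD	max(16, NR_CPUS * 2)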

> I'm not sure whether it was a mistake or if I intended it and didn't do the
> sums on accuracy :(

> An advantage of retaining a spinlock in percpu_counter is that if accuracy
> is needed at a low rate (say, /proc reading) we can take the lock and then
> go spill each CPU's local count into the main one. It would need to be a
> very low rate though. Or we make the cpu-local counters atomic too.
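
For concreteness, the lock-and-spill read Andrew describes might look like the sketch below, written against the then-current struct (spinlock_t lock; long count; long *counters); the function name percpu_counter_sum_slow is invented here. It sums instead of spilling, which sidesteps the race with the lock-free cpu-local updates that he notes; actually zeroing the local counts would indeed require making them atomic:

/* Sketch only, not mainline code: a slow but accurate read that takes
 * the counter's spinlock and adds every cpu-local delta to the central
 * value. Reading (rather than resetting) the local counts avoids
 * racing with concurrent percpu_counter_mod() on other cpus. */
long percpu_counter_sum_slow(struct percpu_counter *fbc)
{
	long ret;
	int cpu;

	spin_lock(&fbc->lock);
	ret = fbc->count;
	for_each_cpu(cpu) {
		long *pcount = per_cpu_ptr(fbc->counters, cpu);
		ret += *pcount;
	}
	spin_unlock(&fbc->lock);
	return ret;
}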

We might use atomic_long_t only (and no spinlocks).
Something like this?

struct percpu_counter {
	atomic_long_t count;
	atomic_long_t *counters;
};

#ifdef CONFIG_SMP
void percpu_counter_mod(struct percpu_counter *fbc, long amount)
{
	long old, new;
	atomic_long_t *pcount;

	pcount = per_cpu_ptr(fbc->counters, get_cpu());
start:
	old = atomic_long_read(pcount);
	new = old + amount;
	if (new >= FBC_BATCH || new <= -FBC_BATCH) {
		/* batch exceeded: try to reset the local counter to 0 and
		 * fold the whole delta into the central count; retry if
		 * someone changed the local counter under us */
		if (unlikely(atomic_long_cmpxchg(pcount, old, 0) != old))
			goto start;
		atomic_long_add(new, &fbc->count);
	} else
		atomic_long_add(amount, pcount);

	put_cpu();
}
EXPORT_SYMBOL(percpu_counter_mod);

long percpu_counter_read_accurate(struct percpu_counter *fbc)
{
	long res;
	int cpu;
	atomic_long_t *pcount;

	for_each_cpu(cpu) {
		pcount = per_cpu_ptr(fbc->counters, cpu);
		/* don't dirty the cache line if not necessary */
		if (atomic_long_read(pcount)) {
			/* grab the local delta and fold it into the central
			 * count rather than discarding it, so neither this
			 * sum nor later plain reads lose it */
			res = atomic_long_xchg(pcount, 0);
			atomic_long_add(res, &fbc->count);
		}
	}
	return atomic_long_read(&fbc->count);
}
EXPORT_SYMBOL(percpu_counter_read_accurate);
#endif /* CONFIG_SMP */
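
To tie this back to the subject line, a hypothetical caller on the sockets_allocated side (the names below are illustrative, not taken from patch 3/4) would pay one cpu-local atomic op on the hot path and reserve the full fold for the rare pressure path:

/* Hypothetical usage, not from the actual patch: hot-path updates stay
 * cpu-local; the accurate fold runs only on the rare pressure path. */
static struct percpu_counter sockets_allocated;

static inline void sock_charge_one(void)
{
	/* cpu-local in the common case, central only every FBC_BATCH */
	percpu_counter_mod(&sockets_allocated, 1);
}

static long sockets_allocated_pressure_read(void)
{
	/* rare slow path: folds all cpu-local deltas into the count */
	return percpu_counter_read_accurate(&sockets_allocated);
}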