On Sun, Jun 04, 2000 at 09:37:36PM +0400, A.N.Kuznetsov wrote:
> Hello!
>
> > The per bucket locks apparently perform well for outgoing traffic,
>
> No, the hash table simply is not used for outgoing traffic;
> destinations are cached in the socket.
I meant ``outgoing traffic for routing'', sorry for being unclear.
(although I think optimizing for SMP routing is not worth it)
>
>
> > but they are very bad for incoming traffic because it usually tends
> > to hit the same lock. Hmm....
>
> This phenomenon has been known for more than a year. BTW Andi, it was you who
> explained this. 8)
>
> It appeared that this ping-pong lock is washed out in real life
> due to "statistical" bucket decoupling, so that there were no reasons
> to be bothered about this.
Are you sure? This would only occur if the server has lots of aliases;
otherwise the locks are just spread out over a few TOS variants of the
same address [and web servers tend to always get the same TOS, so it doesn't
help there at all].
Hmm, would it make sense to hash the CPU number in? (more update cost,
but it may amortize over runtime)
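Roughly what I mean, as a sketch only -- the function name and the mixing
below are made up, and RT_HASH_DIVISOR is assumed to be the power-of-two
bucket count from route.c; this is not the real rt_hash_code():

#include <linux/types.h>
#include <linux/smp.h>		/* smp_processor_id() */

#define RT_HASH_DIVISOR 256	/* assumed power-of-two bucket count */

/* Hypothetical per-CPU flavour of the rtcache hash: fold the CPU number
 * in so that concurrent CPUs tend to land in different buckets (and thus
 * take different per-bucket locks).  The cost is that the same
 * destination may end up cached in several buckets, i.e. more
 * update/GC work. */
static __inline__ unsigned rt_hash_code_percpu(u32 daddr, u32 saddr, u8 tos)
{
	unsigned hash = daddr ^ saddr ^ tos;

	hash ^= smp_processor_id() << 5;	/* decouple the CPUs */
	hash ^= hash >> 16;
	hash ^= hash >> 8;
	return hash & (RT_HASH_DIVISOR - 1);
}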
>
> A more interesting case is net_rx_action. It has _no_ ping-pongs now,
> does almost no work, and it is still visible in profiles. It is strange.
<braindump>
The cli / sti is probably costly (on x86 it could in theory be replaced
by a cmpxchg8b on the list head).
Also the lock prefix in the atomic inc of skb->users may hurt [it is useless
here now -- maybe we need a nonatomic_inc(atomic_t)? :) sketch below, after
the braindump]
Hmm, another guess would be that CONFIG_X86_L1_CACHE_BYTES does not match
the CPU's real cache line size.
</braindump>
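What I have in mind with nonatomic_inc is roughly this -- just a sketch,
the name is invented, and it only helps where the count is provably
touched by one CPU at a time:

#include <asm/atomic.h>

/* Sketch only: increment an atomic_t without the LOCK prefix, for
 * counters that are only ever manipulated by one CPU at a time (as
 * skb->users is here), so the bus lock buys nothing. */
static __inline__ void nonatomic_inc(atomic_t *v)
{
	atomic_set(v, atomic_read(v) + 1);
}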
To find out, I guess this needs a profile run with appropriate MSR performance
counters set up (e.g. L1 cache misses and locked cycles).
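Something along these lines for the counter setup -- sketch only, P6
family only; the MSR addresses and the DCU_LINES_IN event code are from
memory of the PPro/PII manuals, so double-check them, and I'm assuming
the wrmsr() macro from <asm/msr.h>:

#include <asm/msr.h>	/* wrmsr() */

#define P6_MSR_EVNTSEL0		0x186	/* event select for counter 0 */
#define P6_MSR_PERFCTR0		0x0c1	/* counter 0 itself */

#define P6_EVENT_DCU_LINES_IN	0x45	/* L1 D-cache line fills */
#define P6_EVNTSEL_USR		(1 << 16)
#define P6_EVNTSEL_OS		(1 << 17)
#define P6_EVNTSEL_EN		(1 << 22)

/* Program counter 0 to count L1 D-cache line fills in both user and
 * kernel mode.  Numbers from memory -- verify against the manuals. */
static void setup_l1_fill_counter(void)
{
	wrmsr(P6_MSR_PERFCTR0, 0, 0);	/* clear the 40-bit counter */
	wrmsr(P6_MSR_EVNTSEL0,
	      P6_EVENT_DCU_LINES_IN | P6_EVNTSEL_USR |
	      P6_EVNTSEL_OS | P6_EVNTSEL_EN,
	      0);
}

The counter can then be read back with rdmsr()/rdpmc and correlated with
the usual profile.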
-Andi