Re: Question on rhashtable in worst-case scenario.
From: Johannes Berg
Date: Wed Mar 30 2016 - 10:03:21 EST
On Wed, 2016-03-30 at 21:55 +0800, Herbert Xu wrote:
> Well to start with you should assess whether you really want to
> hash multiple objects with the same key. In particular, can an
> adversary generate a large number of such objects?
No, the only reason this happens is local - if you take a single piece
of hardware and connect it to the same AP many times. This is what Ben
is doing - he's creating virtual interfaces on top of the same physical
hardware and then connecting all of them to the same AP, mostly for
testing the AP.
> If your conclusion is that yes you really want to do this, then
> we have the parameter insecure_elasticity that you can use to
> disable the rehashing based on chain length.
But we really don't want that either - in the normal case, where you
don't create all these virtual interfaces for testing, you have a
number of peers that can vary a lot (zero to hundreds, in theory
thousands) where you *don't* have duplicate keys, so we still want the
rehashing if the chains get long in that case.
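For illustration, a minimal sketch of what Herbert's suggestion would look like - assuming the 4.5-era struct rhashtable_params layout; the sta_info/sta_rht_params names here are stand-ins for the actual mac80211 table definition:

```c
/* Hedged sketch, not the actual mac80211 code: an rhashtable keyed by
 * station MAC address, with rehash-on-long-chain disabled as Herbert
 * suggests.  Field names assume the 4.5-era struct rhashtable_params. */
#include <linux/rhashtable.h>

struct sta_info {
	struct rhash_head hash_node;	/* linkage into the rhashtable */
	u8 addr[ETH_ALEN];		/* hash key: peer MAC address */
	/* ... */
};

static const struct rhashtable_params sta_rht_params = {
	.automatic_shrinking	= true,
	.head_offset		= offsetof(struct sta_info, hash_node),
	.key_offset		= offsetof(struct sta_info, addr),
	.key_len		= ETH_ALEN,
	/* Tolerates many objects with the same key by disabling the
	 * chain-length-triggered rehash - but, as noted above, that
	 * also gives up the protection against long chains in the
	 * normal (non-duplicate-key) case: */
	.insecure_elasticity	= true,
};
```

With insecure_elasticity set, a degenerate chain of same-key entries no longer triggers repeated rehash attempts, which is exactly the trade-off being debated here.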
It's really just the degenerate case that Ben is creating locally
that's causing a problem, afaict, though it's a bit disconcerting that
rhashtable in general can cause strange failures at delete time.