Re: Question on rhashtable in worst-case scenario.
From: Herbert Xu
Date: Wed Mar 30 2016 - 10:09:45 EST
On Wed, Mar 30, 2016 at 04:03:08PM +0200, Johannes Berg wrote:
>
> But we really don't want that either - in the normal case where you
> don't create all these virtual interfaces for testing, you have a
> certain number of peers that can vary a lot (zero to hundreds, in
> theory thousands) where you *don't* have the same key, so we still want
> to have the rehashing if the chains get longer in that case.
The insecure_elasticity flag only disables rehashing without growing;
it does not inhibit table expansion, which is driven by the total
number of objects in the table.
> It's really just the degenerate case that Ben is creating locally
> that's causing a problem, afaict, though it's a bit disconcerting that
> rhashtable in general can cause strange failures at delete time.
The failure is simple: rhashtable will rehash the table if a given
chain becomes too long. This simply doesn't work if you hash many
objects with the same key, since that chain will never get shorter,
even after a rehash (or expansion).
Therefore, if your hash table has to do this, you have to disable
the rehash logic using the insecure_elasticity flag.
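For concreteness, setting the flag looks roughly like this (a sketch
against the rhashtable_params fields as of this writing; struct
sta_info and its members are placeholders for whatever your objects
actually use):

```c
/* Sketch: disable the per-chain length check so duplicate keys cannot
 * trigger endless rehashing.  Growth driven by the total object count
 * is unaffected.  (struct sta_info and its members are placeholders.) */
static const struct rhashtable_params sta_rht_params = {
	.key_len             = ETH_ALEN,
	.key_offset          = offsetof(struct sta_info, addr),
	.head_offset         = offsetof(struct sta_info, hash_node),
	.insecure_elasticity = true,	/* skip chain-length rehash trigger */
};
```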
Alternatively you can construct your own linked list for objects
with the same key outside of rhashtable and hash the list instead.
Cheers,
--
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt