Re: regression: sysctl_check changes in 2.6.24 are O(n) resulting in slow creation of 10000 network interfaces

From: Andi Kleen
Date: Mon Jan 07 2008 - 17:55:41 EST


On Mon, Jan 07, 2008 at 09:30:54PM +0000, Alan Cox wrote:
> > I think that would be a better option than to complicate sysctl.c
> > for this uncommon case.
>
> What is so complicated about hashing the entries if you are checking for

One thing I'm worrying about is memory bloat (yes I know that's not
popular but someone has to do it ;-)

You would need a hash table for each table. To handle 100k entries
you would need larger hash tables with at least a few hundred entries
each. And that for each subdirectory.

% find /proc/sys -type d | wc -l
64

Assuming e.g. a 128-entry hash table with pointer-sized buckets (which
is probably too small for 100k entries anyway) that would require
64 * 128 * 8 = 64k of memory. Not gigantic, but lots of small-fry
bloat adds up. And if you chose an actually realistic hash table size
it gets even bigger.

Most likely you would need to implement a tree or a resizeable hash table
to do this sanely and then you quickly go into complicated territory.

> duplicates when debugging. You can set the hash function to "0" and the

My understanding was that the code was always on; not only for debugging.

-Andi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/