RE: Limit hash table size

From: Chen, Kenneth W
Date: Wed Jan 14 2004 - 17:35:51 EST


Anton Blanchard wrote:
> Well x86 isn't very interesting here, it's all the 64-bit archs
> that will end up with TBs of memory in the future.

To address Anton's concerns on PPC64, we have revised the patch to
enforce the maximum size based on the number of entries instead of the
page order, so differences in page size, pointer size, etc. don't
affect the final calculation. The upper bound is capped at 2M entries.
All numbers on x86 remain the same, as we don't want to disturb already
established and working values. See the patch at the end of the email;
it is diffed against the 2.6.1-mm3 tree.
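Roughly, the idea looks like this (an illustrative sketch only, not the
actual patch; names like calc_hash_entries and MAX_HASH_ENTRIES are
made up for the example):

/* Sketch only: scale the entry count linearly with memory, round up
 * to a power of two, then clamp by entry count so page size and
 * pointer size drop out of the calculation entirely. */
#define MAX_HASH_ENTRIES	(2UL << 20)	/* the proposed 2M-entry cap */

static unsigned long calc_hash_entries(unsigned long nr_pages,
				       unsigned int pages_per_entry_shift)
{
	unsigned long entries = nr_pages >> pages_per_entry_shift;
	unsigned long pow2 = 1;

	/* round up to a power of two so index = hash & (entries - 1) works */
	while (pow2 < entries)
		pow2 <<= 1;

	return pow2 > MAX_HASH_ENTRIES ? MAX_HASH_ENTRIES : pow2;
}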

> But look at the horrid worst case there. My point is that limiting
> the hash without any data is not a good idea. In 2.4 we raised
> MAX_ORDER on ppc64 because we spent so much time walking
> pagecache chains,

I just have to reiterate that when the hash table is made too large, we
trade cache misses on the hash chain traversal for misses on the bucket
head array: the chains get shorter, but the head array grows past the
cache, so every lookup misses there instead. Big hashes can hurt you if
you don't actually use the capacity.

> Why can't we do something like Andrew's recent min_free_kbytes
> patch and make the rate of change non-linear? Just slow the
> increase down as we get bigger. I agree a 2GB hash table is
> pretty ludicrous, but a 4MB one on a 512GB machine (which
> we sell at the moment) could be too :)

It doesn't need to be over-designed, and there is generally no
one-size-fits-all solution either. Linear scaling has worked fine for
many years; it only starts to tip over on very large machines. We just
need to put an upper bound on it before it runs away, as the sketch
below shows.
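For comparison, the two shapes under discussion (pure illustration;
the sqrt curve just mimics the min_free_kbytes idea, and the constants
are meaningless placeholders):

#include <math.h>

/* The clamp we are proposing: stays linear, then simply stops. */
unsigned long linear_clamped(unsigned long nr_pages,
			     unsigned long scale, unsigned long cap)
{
	unsigned long entries = nr_pages / scale;

	return entries > cap ? cap : entries;	/* stops growing at cap */
}

/* The non-linear alternative: keeps growing forever, just more
 * slowly, and still needs its own tuning constant. */
unsigned long sublinear(unsigned long nr_pages, unsigned long scale)
{
	return (unsigned long)sqrt((double)(nr_pages / scale)) * 1024;
}

The clamp has the virtue of being trivially predictable; the curve
still has no bound and still has to be tuned.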

- Ken

Attachment: hash2.patch