Re: available memory imbalance on large NUMA systems
From: Andrew Morton
Date: Wed Nov 12 2003 - 16:10:00 EST
Erik Jacobson <erikj@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> As a side point, some of the hash tables allocated during startup get very
> large on large-memory systems (systems with a terabyte of memory, for example).
> Someone may wish to consider implementing a cap on the size of some of these
> tables.

The patch seems a reasonable way of implementing it, but I think your above
comment lies at the heart of the issue: those tables are just too darn big.
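
For illustration, here is a rough userspace sketch (not kernel code) of the
kind of cap being suggested: scale the table with memory as we do now, but
clamp it once it hits a ceiling.  The one-entry-per-16KB heuristic and the
2^20-entry limit are made-up numbers, not anything the kernel actually uses.

#include <stdio.h>

static unsigned long long hash_entries(unsigned long long mem_bytes,
                                       unsigned long long max_entries)
{
        /* one entry per 16KB of RAM, rounded down to a power of two */
        unsigned long long entries = mem_bytes / (16 * 1024);
        unsigned long long pow2 = 1;

        while (pow2 * 2 <= entries)
                pow2 *= 2;
        entries = pow2;

        /* the suggested cap: stop growing on huge machines */
        if (entries > max_entries)
                entries = max_entries;
        return entries;
}

int main(void)
{
        printf("1GB box: %llu entries\n", hash_entries(1ULL << 30, 1ULL << 20));
        printf("1TB box: %llu entries\n", hash_entries(1ULL << 40, 1ULL << 20));
        return 0;
}

With these invented numbers, anything beyond 16GB ends up with the same table
as a 16GB box, which is rather the point.
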
Both the pagecache hash table and the buffer_head hash tables were removed
from 2.6 (but I suspect the structures which replaced them are all still
crammed into the zeroeth node?). That leaves the dentry, inode and TCP
hash tables.  These need stern examination and benchmarking to decide
whether we are really sizing them appropriately on large machines.
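
On the benchmarking side, the number I would want to see is how much an
ever-larger table actually shortens the chains we walk on lookup.  A
throwaway sketch of the arithmetic - the two-million-dentry working set and
the bucket counts below are invented, purely for illustration:

#include <stdio.h>

int main(void)
{
        /* pretend working set: ~2 million cached dentries (invented figure) */
        double objects = 2e6;
        unsigned long buckets;

        for (buckets = 1UL << 18; buckets <= 1UL << 26; buckets <<= 2) {
                double chain = objects / buckets;   /* average entries per bucket */
                double table_mb = buckets * sizeof(void *) / (1024.0 * 1024.0);

                printf("%9lu buckets: avg chain %6.2f, table %6.1f MB\n",
                       buckets, chain, table_mb);
        }
        return 0;
}

Once the average chain is down around one entry, doubling the table again
buys almost nothing on lookup while the table itself keeps doubling, and all
of that memory currently comes out of node zero.
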
If we can get away with just making these sanely sized, then the remaining
issue is the node-round-robining of pagecache allocations.  I don't have an
opinion on the desirability of this for NUMA machines in general.
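
For anyone who hasn't read the patch, the round-robin idea amounts to
something like the toy below: cycle a counter so that successive pagecache
allocations land on successive nodes instead of all piling onto node zero.
NUM_NODES and pick_pagecache_node() are stand-ins I have made up; a real
implementation would need the counter to be atomic or per-cpu and would have
to skip nodes with no memory.

#include <stdio.h>

#define NUM_NODES 4

/* in real code this would need to be atomic or per-cpu */
static unsigned int next_node;

static int pick_pagecache_node(void)
{
        /* simple interleave: each allocation targets the next node in turn */
        return next_node++ % NUM_NODES;
}

int main(void)
{
        int histogram[NUM_NODES] = { 0 };
        int i;

        /* simulate a burst of pagecache allocations and see how they spread */
        for (i = 0; i < 1000; i++)
                histogram[pick_pagecache_node()]++;

        for (i = 0; i < NUM_NODES; i++)
                printf("node %d: %d pages\n", i, histogram[i]);
        return 0;
}

That's the trade-off: the pagecache no longer fills node zero first, but each
page is now likely to be remote to the task which touches it.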