Re: [v4 1/1] mm: Adaptive hash table scaling
From: Pasha Tatashin
Date: Mon May 22 2017 - 09:19:19 EST
> I have only noticed this email today because my incoming emails stopped
> syncing since Friday. But this is _definitely_ not the right approach.
> 64G for 32b systems is _way_ off. We have only ~1G for the kernel. I've
> already proposed scaling up to 32M for 32b systems and Andi seems to be
> suggesting the same. So can we fold or apply the following instead?
Hi Michal,
Thank you for your suggestion. I will update the patch.
The 64G base for 32bit systems is not meant to ever be used, as adaptive
scaling is simply not needed on 32bit systems. 32M and 64G are going to
behave exactly the same on such systems.
Here is the theoretical limit for the maximum hash size (using the dentry
cache as an example):
size of bucket: sizeof(struct hlist_bl_head) = 4 bytes
numentries: (1 << 32) / PAGE_SIZE = 1048576 (for 4K pages)
hash size: 4 bytes * 1048576 = 4M
In practice it is going to be an order of magnitude smaller, as the actual
number of kernel pages is far below the (1 << 32) / PAGE_SIZE assumed above.
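For completeness, here is the same kind of back-of-the-envelope check
against the scaling threshold itself; this is my own arithmetic, assuming
the ADAPT_SCALE_* values from the macro below and 4K pages as above:
scaling threshold: (64 << 30) / PAGE_SIZE = 16777216 pages
max numentries: (1 << 32) / PAGE_SIZE = 1048576 pages
The threshold is already 16x larger than the largest numentries a 32bit
system could ever reach, so the "adapt < numentries" loop never iterates
there and scale is never bumped.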
However, I will apply your suggestion, as there seems to be a problem with
overflow when comparing ul vs. ull quantities, as reported by Michael
Ellerman, and having a smaller base on 32bit systems will solve this issue.
I will revert all of the quantities back to "ul".
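To illustrate the kind of trap involved, here is a tiny standalone
userspace sketch (my own, not part of the patch, and possibly not the
exact overflow Michael hit) of what happens to the 64G constant on a
32bit build once everything is plain "ul":

#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	/* 64G needs 37 bits, so it survives as unsigned long long ... */
	unsigned long long base_ull = 64ull << 30;
	/* ... but with a 32-bit unsigned long, 64ul << 30 wraps to 0, */
	/* which would make the scaling threshold meaningless.         */
	unsigned long base_ul = 64ul << 30;

	printf("ull base: %llu (npages %llu)\n", base_ull, base_ull >> PAGE_SHIFT);
	printf("ul  base: %lu (npages %lu)\n", base_ul, base_ul >> PAGE_SHIFT);
	return 0;
}

On an ILP32 target the second line prints 0, which is why a base that fits
in an unsigned long on 32bit systems (or compiling the block out there, as
below) avoids the problem.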
Another approach is to make it a 64-bit-only macro, like this:
#if __BITS_PER_LONG > 32
#define ADAPT_SCALE_BASE	(64ull << 30)
#define ADAPT_SCALE_SHIFT	2
#define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)

/*
 * If the caller did not pass an explicit high_limit, bump *scalep once
 * for every factor of (1 << ADAPT_SCALE_SHIFT) of memory above
 * ADAPT_SCALE_BASE, so huge machines get proportionally smaller hashes.
 */
#define adapt_scale(high_limit, numentries, scalep)			\
	if (!(high_limit)) {						\
		unsigned long adapt;					\
		for (adapt = ADAPT_SCALE_NPAGES; adapt <		\
		     (numentries); adapt <<= ADAPT_SCALE_SHIFT)		\
			(*(scalep))++;					\
	}
#else
#define adapt_scale(high_limit, numentries, scalep)
#endif
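For context, a call site for this macro inside alloc_large_system_hash()
could look like the sketch below; the surrounding variable names are my
assumption rather than a quote from the patch:

	/* After numentries has been derived from nr_kernel_pages:
	 * on 64bit this may bump scale, on 32bit it compiles away.
	 */
	adapt_scale(high_limit, numentries, &scale);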
Pasha