On Thu, 4 Oct 2007 12:20:50 -0700 (PDT)
Christoph Lameter <clameter@xxxxxxx> wrote:
> On Thu, 4 Oct 2007, Andi Kleen wrote:
>
> > > > We've known for ages that it is possible. But it has always been
> > > > so rare that it was ignored.
> > >
> > > Well, we can now address the rarity. That is the whole point of
> > > the patchset.
> > >
> > > > Introducing complexity to fight a very rare problem with a good
> > > > fallback (refusing to fork more tasks, as well as lumpy reclaim)
> > > > somehow does not seem like a good tradeoff.
> > > >
> > > > Is there any evidence this is more common now than it used to be?
> > >
> > > It will be more common if the stack size is increased beyond 8k.
> >
> > Why would we want to do such a thing?
> > 8kB stacks are large enough...

Why would anyone need more than 640k... In addition to NUMA, who can
tell what some future hardware might do, given that memory sizes are
expanding as if they were covered by Moore's Law. As memory sizes
increase, someone will bump the page size again. Better to let people
make the stack as large as they feel they need, and warn at build time
that performance may suck.
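
To put concrete numbers on the exchange above: with 4KiB pages, an 8KiB
stack is an order-1 allocation (two physically contiguous pages), and
every doubling of the stack doubles the contiguity requirement. A
simplified sketch of the allocation, close to but not exactly what
mainline does:

  #include <linux/gfp.h>
  #include <linux/thread_info.h>

  /*
   * Simplified sketch of thread-stack allocation; the real mainline
   * code differs in detail.  With 4KiB pages, THREAD_SIZE = 8KiB
   * gives THREAD_SIZE_ORDER = 1.
   */
  static unsigned long alloc_thread_stack(void)
  {
          /*
           * Order 1 = two contiguous pages.  A 32KiB stack would make
           * this an order-3 request (eight contiguous pages), which
           * fails far more often once memory fragments, and a failure
           * here surfaces as -ENOMEM from fork().
           */
          return __get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
  }

Note the arithmetic cuts the other way too: on a 64KiB-page
configuration even a 64KiB stack is an order-0 allocation, which is
part of why bumping the page size keeps coming up.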
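
And the build-time warning could be as dumb as a preprocessor check.
CONFIG_THREAD_STACK_SHIFT below is made up for illustration, standing
in for whatever knob would let people pick the size:

  /*
   * Hypothetical: CONFIG_THREAD_STACK_SHIFT is an invented option
   * standing in for "let the user pick".  1 << 13 is 8KiB, so
   * anything above 13 means an order >= 2 allocation with 4KiB pages.
   */
  #if defined(CONFIG_THREAD_STACK_SHIFT) && (CONFIG_THREAD_STACK_SHIFT > 13)
  #warning Stacks above 8KiB need higher-order allocations; expect fork() failures under fragmentation
  #endif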