On Thu, 26 Apr 2007, Nick Piggin wrote:
> Yeah. IMO anti-fragmentation and defragmentation are the hack, and we
> should stay away from higher order allocations whenever possible.
Right, and so we need to create a series of other approaches that we then label "non-hacks" to replace it.
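
To make the terminology concrete (a sketch of mine, not code from any of the patches under discussion): a "higher order" allocation asks the buddy allocator for 2^order physically contiguous pages, and it is exactly that contiguity that fragmentation makes scarce:

#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *order_example(void)
{
	/* order 0: a single page -- practically always satisfiable */
	struct page *one = alloc_pages(GFP_KERNEL, 0);

	/* order 2: four contiguous pages -- can fail on a fragmented
	 * machine even when plenty of single free pages remain */
	struct page *four = alloc_pages(GFP_KERNEL, 2);

	if (four)
		__free_pages(four, 2);
	return one;
}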
> Hardware is built to handle many small pages efficiently, and I don't
> understand how it could be an SGI-only issue. Sure, you may have an
> order of magnitude or more memory than anyone else, but even my lowly
> desktop _already_ has orders of magnitude more pages than it has TLB
> entries or cache -- if a workload is cache-friendly for me, it probably
> will be on a 1TB machine as well, and if it is bad for the 1TB machine,
> it is also bad on mine.
A number of people have argued the same point. Just because we have developed a way of thinking that defends our traditional 4k values does not make those values right.
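
The arithmetic behind the pages-vs-TLB claim is easy to check. A back-of-the-envelope sketch, where the 2 GiB of RAM and 512 TLB entries are assumed, illustrative figures rather than measurements:

#include <stdio.h>

int main(void)
{
	unsigned long long ram   = 2ULL << 30; /* assume a 2 GiB desktop */
	unsigned long long page  = 4096;       /* traditional 4 KiB page */
	unsigned long long tlb   = 512;        /* assumed data-TLB entries */
	unsigned long long pages = ram / page;

	printf("pages: %llu, TLB entries: %llu, ratio %llu:1\n",
	       pages, tlb, pages / tlb);    /* 524288, 512, 1024:1 */
	return 0;
}

Roughly three orders of magnitude more pages than TLB entries even on a modest desktop, which is the point being made above.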
> If this is instead an issue of I/O path or reclaim efficiency, then it
> would be really nice to see numbers... but I don't think making these
> fundamental paths more complex and slower is a nice way to fix it
> (a larger PAGE_SIZE would be, though).
The code paths can stay the same. You can switch CONFIG_LARGE pages off
if you do not want them, and then everything behaves as it did before.
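
For what a compile-time page size switch of this sort typically looks like, here is a sketch in the style of the existing arch headers; the config symbol is illustrative, not the actual option in the patches:

#ifdef CONFIG_PAGE_SIZE_64KB           /* illustrative config option */
#define PAGE_SHIFT	16             /* 64 KiB pages */
#else
#define PAGE_SHIFT	12             /* the traditional 4 KiB page */
#endif
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))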
If you would have a look at the patches: the code is significantly cleaned up and easier to read.