On Mon, 4 Dec 2006, Andrew Morton wrote:
> , but I would of course prefer to avoid merging the anti-frag patches
> simply based on their stupendous size. It seems to me that lumpy-reclaim
> is suitable for the e1000 problem, but perhaps not for the hugetlbpage
> problem.
I believe you'll hit similar problems even with lumpy-reclaim for the e1000 (I've added Andy to the cc so he can comment more). Lumpy provides a much smarter way of freeing higher-order contiguous blocks without having to reclaim 95%+ of memory - this is good. However, if you are currently seeing situations where the allocation fails even after you page out everything possible, then smarter reclaim that eventually pages out everything anyway will not help you; chances are something unmovable, like page tables, is in your way.
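To put rough numbers on that, here is a back-of-the-envelope illustration in plain userspace C. The 1GiB zone, 4KiB pages and 4MiB (order-10) blocks are assumptions picked for the example, not measurements from any real machine:

#include <stdio.h>

int main(void)
{
        unsigned long mem_bytes   = 1UL << 30;  /* 1GiB zone            */
        unsigned long page_size   = 4096;       /* 4KiB pages           */
        unsigned long block_pages = 1UL << 10;  /* pages per 4MiB block */
        unsigned long total_pages = mem_bytes / page_size;
        unsigned long blocks      = total_pages / block_pages;

        /*
         * Suppose one unreclaimable page (a page table, say) has landed
         * in every block. Reclaim can free everything else...
         */
        printf("memory free after reclaim: %.2f%%\n",
               100.0 * (total_pages - blocks) / total_pages);

        /* ...yet not a single order-10 allocation can succeed. */
        printf("order-10 allocations possible: 0 of %lu blocks\n", blocks);
        return 0;
}

With those numbers, 99.9% of memory is free and the order-10 failure rate is still 100%.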
This is where anti-frag comes in. It clusters pages together based on their type - unmovable, reapable (inode caches, short-lived kernel allocations, skbuffs etc) and movable. When kswapd kicks in, the slab caches will be reaped. As reapable pages are clustered together, that will free some contiguous areas - probably enough for the e1000 allocations to succeed!
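Roughly, the clustering works like the toy userspace model below. To be clear, the names, block size and workload here are invented for illustration; they are not the identifiers or sizes used in the actual patches:

#include <stdio.h>

#define NPAGES 64
#define BLOCK  8                        /* pages per contiguity block */

enum type { FREE, UNMOVABLE, REAPABLE, MOVABLE };

static enum type page[NPAGES];
static enum type block_owner[NPAGES / BLOCK];

/* Satisfy an allocation only from a block dedicated to its type. */
static int alloc_page(enum type t)
{
        for (int b = 0; b < NPAGES / BLOCK; b++) {
                if (block_owner[b] != FREE && block_owner[b] != t)
                        continue;
                for (int i = b * BLOCK; i < (b + 1) * BLOCK; i++) {
                        if (page[i] == FREE) {
                                block_owner[b] = t;
                                page[i] = t;
                                return i;
                        }
                }
        }
        return -1;
}

int main(void)
{
        /* A mixed workload: kernel, slab/skbuff and user allocations. */
        for (int i = 0; i < 40; i++)
                alloc_page(i % 3 == 0 ? UNMOVABLE :
                           i % 3 == 1 ? REAPABLE : MOVABLE);

        /* kswapd reaps the slab caches: every REAPABLE page goes away. */
        for (int i = 0; i < NPAGES; i++)
                if (page[i] == REAPABLE)
                        page[i] = FREE;

        /* Because reapable pages were clustered, whole blocks are empty. */
        for (int b = 0; b < NPAGES / BLOCK; b++) {
                int nfree = 0;
                for (int i = b * BLOCK; i < (b + 1) * BLOCK; i++)
                        nfree += (page[i] == FREE);
                printf("block %d: %d/%d free%s\n", b, nfree, BLOCK,
                       nfree == BLOCK ? "  <- contiguous" : "");
        }
        return 0;
}

Without the per-type block ownership, the same workload sprinkles unmovable pages through every block and reaping frees the same number of pages without freeing a single contiguous block.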
If that doesn't work, kswapd and direct reclaim will start reclaiming the "movable" pages. Without lumpy reclaim, 95%+ of memory could be paged out, which is bad. Lumpy finds the contiguous pages faster and with less IO; that's why it's important.
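The difference shows up in another toy model (userspace C again, with made-up sizes, a random LRU ordering, and ignoring that in reality some pages cannot be reclaimed at all):

#include <stdio.h>
#include <stdlib.h>

#define NPAGES  1024
#define BLOCK   16                      /* pages per contiguous block */
#define NBLOCKS (NPAGES / BLOCK)

/* Fisher-Yates shuffle: a random LRU ordering over all pages. */
static void shuffle(int *lru)
{
        for (int i = 0; i < NPAGES; i++)
                lru[i] = i;
        for (int i = NPAGES - 1; i > 0; i--) {
                int j = rand() % (i + 1), t = lru[i];
                lru[i] = lru[j];
                lru[j] = t;
        }
}

int main(void)
{
        int lru[NPAGES], used[NBLOCKS];

        srand(42);
        shuffle(lru);
        for (int b = 0; b < NBLOCKS; b++)
                used[b] = BLOCK;

        /* Plain reclaim: evict in LRU order until some block empties. */
        int evicted = 0;
        for (int i = 0; i < NPAGES; i++) {
                evicted++;
                if (--used[lru[i] / BLOCK] == 0)
                        break;
        }
        printf("plain LRU: %4d pages evicted for one %d-page block\n",
               evicted, BLOCK);

        /* Lumpy: evict the LRU victim, then the rest of its block. */
        printf("lumpy:     %4d pages evicted for one %d-page block\n",
               BLOCK, BLOCK);
        return 0;
}

Plain LRU reclaim has to evict hundreds of pages before any block happens to empty; lumpy pays for exactly one block's worth of IO.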
Tests I am aware of show that lumpy-reclaim on its own makes little or no difference to the hugetlb page problem. However, with anti-frag, hugetlb-sized allocations succeed much more often even when under memory pressure.
> Whereas anti-fragmentation adds vastly more code, but can address both
> problems? Or something.
Anti-frag goes a long way to addressing both problems. Lumpy-reclaim increases its success rates under memory pressure and reduces the amount of reclaim that occurs.
> IOW: big-picture where-do-we-go-from-here stuff.
Start with lumpy reclaim, then I'd like to merge page clustering piece by piece, ideally with one of the people with e1000 problems testing to see whether it makes a difference.
Assuming they are shown to help, where we'd go from there would be stuff like:
1. Keep non-movable and reapable allocations at the lower PFNs as much as
   possible. This is so DIMMs for higher PFNs can be removed (doesn't
   exist)
2. Use page migration to compact memory rather than depending solely on
   reclaim (doesn't exist; a sketch of the idea follows this list)
3. Introduce a mechanism for marking a group of pages as being offlined so
that they are not reallocated (code that does something like this
exists)
4. Resurrect the hotplug-remove code (exists, but probably very stale)
5. Allow allocations for hugepages outside of the pool as long as the
   process remains within its locked_vm limits (patches were posted to
   libhugetlbfs last Friday. Will post to linux-mm tomorrow).
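To illustrate item 2 (purely a sketch; as noted, no such code exists yet and every name below is invented), a compactor would migrate movable pages into free slots at one end of a zone instead of paging them out, leaving a contiguous free region with no IO at all:

#include <stdio.h>

#define NPAGES 32

enum state { FREE, MOVABLE };

int main(void)
{
        enum state zone[NPAGES];

        /* Badly fragmented zone: every other page in use, but movable. */
        for (int i = 0; i < NPAGES; i++)
                zone[i] = (i % 2) ? MOVABLE : FREE;

        /*
         * Two scanners: a free scanner from the front and a migration
         * scanner from the back. "Migration" (copy the page, update the
         * mappings) is modelled here as a simple move.
         */
        int free_pfn = 0, migrate_pfn = NPAGES - 1;
        while (free_pfn < migrate_pfn) {
                while (free_pfn < migrate_pfn && zone[free_pfn] != FREE)
                        free_pfn++;
                while (free_pfn < migrate_pfn && zone[migrate_pfn] != MOVABLE)
                        migrate_pfn--;
                if (free_pfn < migrate_pfn) {
                        zone[free_pfn]    = MOVABLE; /* page migrated here  */
                        zone[migrate_pfn] = FREE;    /* old location freed  */
                }
        }

        /* The end of the zone is now one contiguous free region. */
        int run = 0;
        for (int i = NPAGES - 1; i >= 0 && zone[i] == FREE; i--)
                run++;
        printf("contiguous free pages at zone end: %d of %d\n",
               run, NPAGES);
        return 0;
}

Note this only works because anti-frag keeps those blocks free of unmovable pages in the first place; one pinned page in the region and the compactor is as stuck as reclaim is.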