[RFC PATCH v3 17/35] mm: Add aggressive bias to prefer lower regions during page allocation
From: Srivatsa S. Bhat
Date: Fri Aug 30 2013 - 09:23:16 EST
While allocating pages from the buddy freelists, we could encounter
situations in which a freepage of the required order is readily
available in a *higher* numbered memory region, while a freepage of a
higher order exists in a *lower* numbered memory region.
To make the consolidation logic more aggressive, try to split up the
higher order buddy page of a lower numbered region and allocate it,
rather than allocating pages from a higher numbered region.
This ensures that we spill over to a new region only when we truly
don't have enough contiguous memory in any lower numbered region to
satisfy that allocation request.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@xxxxxxxxxxxxxxxxxx>
---
mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++----------
1 file changed, 34 insertions(+), 10 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e711b9..0cc2a3e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1210,8 +1210,9 @@ static inline
struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
int migratetype)
{
- unsigned int current_order;
- struct free_area * area;
+ unsigned int current_order, alloc_order;
+ struct free_area *area, *other_area;
+ int alloc_region, other_region;
struct page *page;
/* Find a page of the appropriate size in the preferred list */
@@ -1220,17 +1221,40 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
if (list_empty(&area->free_list[migratetype].list))
continue;
- page = list_entry(area->free_list[migratetype].list.next,
- struct page, lru);
- rmqueue_del_from_freelist(page, &area->free_list[migratetype],
- current_order);
- rmv_page_order(page);
- area->nr_free--;
- expand(zone, page, order, current_order, area, migratetype);
- return page;
+ alloc_order = current_order;
+ alloc_region = area->free_list[migratetype].next_region -
+ area->free_list[migratetype].mr_list;
+ current_order++;
+ goto try_others;
}
return NULL;
+
+try_others:
+ /* Try to aggressively prefer lower numbered regions for allocations */
+ for ( ; current_order < MAX_ORDER; ++current_order) {
+ other_area = &(zone->free_area[current_order]);
+ if (list_empty(&other_area->free_list[migratetype].list))
+ continue;
+
+ other_region = other_area->free_list[migratetype].next_region -
+ other_area->free_list[migratetype].mr_list;
+
+ if (other_region < alloc_region) {
+ alloc_region = other_region;
+ alloc_order = current_order;
+ }
+ }
+
+ area = &(zone->free_area[alloc_order]);
+ page = list_entry(area->free_list[migratetype].list.next, struct page,
+ lru);
+ rmqueue_del_from_freelist(page, &area->free_list[migratetype],
+ alloc_order);
+ rmv_page_order(page);
+ area->nr_free--;
+ expand(zone, page, order, alloc_order, area, migratetype);
+ return page;
}
--