On 06/22/2012 10:05 AM, Minchan Kim wrote:
The second approach, suggested by KOSAKI, is what you mentioned.
The concern with that approach is how to make sure the increments and decrements of nr_isolated_areas stay matched,
i.e. how to guarantee that nr_isolated_areas drops back to zero once isolation is done.
Of course, we can audit all of the current callers and make sure they don't make a mistake today,
but that is very error-prone once we consider future users.
So we might need something like test_set_pageblock_migratetype(page, MIGRATE_ISOLATE);
a rough sketch of such paired helpers follows, and after it is an implementation of the above approach.
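As a minimal sketch (not the patch itself), paired helpers along these lines could keep the counter
balanced. The names set_pageblock_isolate()/unset_pageblock_isolate() and the zone->nr_migrate_isolate
field mirror the patch below; the early-return checks and the zone->lock assumption are only illustrative:

#include <linux/mm.h>		/* page_zone(), struct page */
#include <linux/mmzone.h>	/* struct zone, MIGRATE_ISOLATE */
#include <linux/atomic.h>

/*
 * Illustrative sketch: pair every transition into MIGRATE_ISOLATE with
 * an atomic_inc() and every transition out of it with an atomic_dec(),
 * so the counter naturally returns to zero when isolation is finished.
 * Assumes the caller holds zone->lock, as the isolation paths do.
 */
static void set_pageblock_isolate(struct page *page)
{
	if (get_pageblock_migratetype(page) == MIGRATE_ISOLATE)
		return;		/* already isolated; avoid double counting */

	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
	atomic_inc(&page_zone(page)->nr_migrate_isolate);
}

static void unset_pageblock_isolate(struct page *page, int migratetype)
{
	if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
		return;		/* not isolated; nothing to undo */

	set_pageblock_migratetype(page, migratetype);
	atomic_dec(&page_zone(page)->nr_migrate_isolate);
}

With that pairing, a stray or duplicated call cannot skew the counter, which is the property
an audit of current and future callers would otherwise have to protect by hand.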
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bf3404e..3e9a9e1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -474,6 +474,11 @@ struct zone {
* rarely used fields:
*/
const char *name;
+ /*
+ * The number of MIGRATE_ISOLATE pageblocks.
+ * We need this for accurate free page counting.
+ */
+ atomic_t nr_migrate_isolate;
} ____cacheline_internodealigned_in_smp;
typedef enum {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2c29b1c..6cb1f9f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -219,6 +219,11 @@ EXPORT_SYMBOL(nr_online_nodes);
int page_group_by_mobility_disabled __read_mostly;
+/*
+ * NOTE:
+ * Don't use set_pageblock_migratetype(page, MIGRATE_ISOLATE) directly.
+ * Instead, use {un}set_pageblock_isolate.
+ */
void set_pageblock_migratetype(struct page *page, int migratetype)
{
if (unlikely(page_group_by_mobility_disabled))
@@ -1622,6 +1627,28 @@ bool zone_watermark_ok(struct zone *z, int order, unsigned long mark,
zone_page_state(z, NR_FREE_PAGES));
}