On Thu 03-08-17 14:38:18, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after the report function returns, so it is the
> caller's responsibility to either detect or prevent the use of such
> pages.
>
> Signed-off-by: Wei Wang <wei.w.wang@xxxxxxxxx>
> Signed-off-by: Liang Li <liang.z.li@xxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Michael S. Tsirkin <mst@xxxxxxxxxx>
> ---
>  include/linux/mm.h     |   7 ++++
>  include/linux/mmzone.h |   5 +++
>  mm/page_alloc.c        | 109 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 121 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 46b9ac5..24481e3 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> 			unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +extern void walk_free_mem_block(void *opaque1,
> +				unsigned int min_order,
> +				void (*visit)(void *opaque2,
> +					      unsigned long pfn,
> +					      unsigned long nr_pages));
> +#endif
Is the ifdef necessary? Sure, only the virtio balloon driver will use
this currently, but this looks like generic functionality not specific
to virtio at all, so the ifdef is rather confusing.
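For what it is worth, the callback interface itself is easy to drive.
A minimal sketch of a caller, assuming the declaration above
(count_free and count_free_pages are made-up names, not part of the
patch):

	/* Hypothetical visit callback: just tally the reported pages. */
	static void count_free(void *opaque, unsigned long pfn,
			       unsigned long nr_pages)
	{
		unsigned long *total = opaque;

		/* the block at @pfn may be reallocated right after this */
		*total += nr_pages;
	}

	static unsigned long count_free_pages(void)
	{
		unsigned long total = 0;

		walk_free_mem_block(&total, 0, count_free);
		return total;
	}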
>  extern int page_group_by_mobility_disabled;
>  #define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6d30e91..b90b513 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4761,6 +4761,115 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +
> +/*
> + * Heuristically get a free page block in the system.
> + *
> + * It is possible that pages from the page block are used immediately after
> + * report_free_page_block() returns. It is the caller's responsibility to
> + * either detect or prevent the use of such pages.
> + *
> + * The input parameters specify the free list to check for a free page block:
> + * zone->free_area[order].free_list[migratetype]
> + *
> + * If the caller supplied page block (i.e. **page) is on the free list, offer
> + * the next page block on the list to the caller. Otherwise, offer the first
> + * page block on the list.
> + *
> + * Return 0 when a page block is found on the caller specified free list.
> + * Otherwise, no page block is found.
> + */
> +static int report_free_page_block(struct zone *zone, unsigned int order,
> +				  unsigned int migratetype, struct page **page)
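For context, judging by the comment above, this iterator is meant to be
driven by a loop along these lines (hypothetical caller, not part of
the patch):

	struct page *page = NULL;

	/* 0 means a block was found and *page now points to it */
	while (report_free_page_block(zone, order, migratetype, &page) == 0) {
		unsigned long pfn = page_to_pfn(page);

		/* report pfn .. pfn + (1 << order) - 1 to the host */
	}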
This is just too ugly and wrong actually. Never provide struct page
pointers outside of the zone->lock. What I've had in mind was to simply
walk the free lists of the suitable order and call the callback for
each one. Something as simple as:
	unsigned long pfn, flags;
	unsigned int order;
	int i;

	for (i = 0; i < MAX_NR_ZONES; i++) {
		struct zone *zone = &pgdat->node_zones[i];

		if (!populated_zone(zone))
			continue;

		spin_lock_irqsave(&zone->lock, flags);
		for (order = min_order; order < MAX_ORDER; ++order) {
			struct free_area *free_area = &zone->free_area[order];
			enum migratetype mt;
			struct page *page;

			if (!free_area->nr_free)
				continue;

			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
				list_for_each_entry(page,
						&free_area->free_list[mt], lru) {
					pfn = page_to_pfn(page);
					/*
					 * visit() is called under zone->lock,
					 * so it must not sleep or reenter
					 * the page allocator.
					 */
					visit(opaque2, pfn, 1 << order);
				}
			}
		}
		spin_unlock_irqrestore(&zone->lock, flags);
	}
[...]
> +/*
> + * Walk through the free page blocks in the system. The @visit callback is
> + * invoked to handle each free page block.
> + *
> + * Note: some page blocks may be used after the report function returns, so it
> + * is not safe for the callback to use any pages or discard data on such page
> + * blocks.
> + */
> +void walk_free_mem_block(void *opaque1,
> +			 unsigned int min_order,
> +			 void (*visit)(void *opaque2,
> +				       unsigned long pfn,
> +				       unsigned long nr_pages))
Is there any reason why there is no node id? I guess you just do not
care for your particular use case. Not that I care too much either; if
somebody wants this per node then it would be trivial to extend. I was
just wondering whether this is a deliberate decision or an omission.
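If it ever were extended, a per-node entry point could be a thin
wrapper. A minimal sketch, assuming the zone walk above is factored out
into a walk_free_mem_block_pgdat() helper (both names here are made
up):

	/* Hypothetical per-node variant; names are illustrative only. */
	void walk_free_mem_block_node(int nid, void *opaque,
				      unsigned int min_order,
				      void (*visit)(void *opaque,
						    unsigned long pfn,
						    unsigned long nr_pages))
	{
		/* NODE_DATA(nid) resolves the node's pg_data_t */
		walk_free_mem_block_pgdat(NODE_DATA(nid), opaque, min_order,
					  visit);
	}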