On 10/5/18 10:10 AM, Arun KS wrote:
When pages are freed at a higher order, the time the buddy allocator
spends coalescing pages can be reduced. With a section size of 256MB,
the hot add latency of a single section improves from 50-60 ms to less
than 1 ms, i.e. by a factor of roughly 60. Modify the external
providers of the online callback to align with the change.
Signed-off-by: Arun KS <arunks@xxxxxxxxxxxxxx>
[...]
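As a side note for other providers of the online callback (registered
via set_online_page_callback()): with the new interface they have to
take the order and return a status as well. A minimal, purely
hypothetical provider that keeps the old per-page behaviour could look
roughly like the sketch below (my_online_page() is made up for
illustration, it is not part of this series):

	/* hypothetical provider of the online callback, illustration only */
	static int my_online_page(struct page *page, unsigned int order)
	{
		unsigned long i;

		/* handle each base page of the (1 << order) range individually */
		for (i = 0; i < (1UL << order); i++) {
			struct page *p = pfn_to_page(page_to_pfn(page) + i);

			__online_page_set_limits(p);
			__online_page_increment_counters(p);
			__online_page_free(p);
		}

		return 0;
	}

The in-tree xen/hv providers of course do their own balloon handling
for the range instead of handing the pages back to the page allocator.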
@@ -655,26 +655,44 @@ void __online_page_free(struct page *page)
}
EXPORT_SYMBOL_GPL(__online_page_free);
-static void generic_online_page(struct page *page)
+static int generic_online_page(struct page *page, unsigned int order)
{
- __online_page_set_limits(page);
This is no longer called here, although the xen/hv variants still call
it. The function seems to be empty these days; maybe remove it as a
follow-up cleanup?
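FWIW, unless I'm misreading it, the helper is just an empty stub in
current mainline anyway, roughly:

	void __online_page_set_limits(struct page *page)
	{
	}
	EXPORT_SYMBOL_GPL(__online_page_set_limits);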
- __online_page_increment_counters(page);
- __online_page_free(page);
+ __free_pages_core(page, order);
+ totalram_pages += (1UL << order);
+#ifdef CONFIG_HIGHMEM
+ if (PageHighMem(page))
+ totalhigh_pages += (1UL << order);
+#endif
__online_page_increment_counters() would have used
adjust_managed_page_count(), which updates these counters under
managed_page_count_lock. Are we safe without the lock? If so, there
should perhaps be a comment explaining why.
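To spell out the concern: adjust_managed_page_count() in
mm/page_alloc.c does roughly the following (quoting from memory), so
these counter updates used to be serialized:

	void adjust_managed_page_count(struct page *page, long count)
	{
		spin_lock(&managed_page_count_lock);
		page_zone(page)->managed_pages += count;
		totalram_pages += count;
	#ifdef CONFIG_HIGHMEM
		if (PageHighMem(page))
			totalhigh_pages += count;
	#endif
		spin_unlock(&managed_page_count_lock);
	}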