Re: [PATCH 1/5] mm/debug_pagealloc: clean-up guard page handling code

From: Vlastimil Babka
Date: Thu Aug 11 2016 - 05:41:18 EST


On 08/10/2016 10:14 AM, Sergey Senozhatsky wrote:
>> @@ -1650,18 +1655,15 @@ static inline void expand(struct zone *zone, struct page *page,
>>  		size >>= 1;
>>  		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
>>  
>> -		if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
>> -			debug_guardpage_enabled() &&
>> -			high < debug_guardpage_minorder()) {
>> -			/*
>> -			 * Mark as guard pages (or page), that will allow to
>> -			 * merge back to allocator when buddy will be freed.
>> -			 * Corresponding page table entries will not be touched,
>> -			 * pages will stay not present in virtual address space
>> -			 */
>> -			set_page_guard(zone, &page[size], high, migratetype);
>> +		/*
>> +		 * Mark as guard pages (or page), that will allow to
>> +		 * merge back to allocator when buddy will be freed.
>> +		 * Corresponding page table entries will not be touched,
>> +		 * pages will stay not present in virtual address space
>> +		 */
>> +		if (set_page_guard(zone, &page[size], high, migratetype))
>>  			continue;
>> -		}
>
> so previously IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) could have optimized out
> the entire branch -- no set_page_guard() invocation or checks, right? But
> now we would call set_page_guard() every time?

No, there's a !CONFIG_DEBUG_PAGEALLOC version of set_page_guard() that is a static inline returning false, so this whole 'if' will still be eliminated by the compiler, same as before.
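
To illustrate, a minimal sketch of the pattern being described -- not the
verbatim kernel code (the real helper and its checks live in mm/page_alloc.c
and the mm headers, and the details may differ):

#ifdef CONFIG_DEBUG_PAGEALLOC
/*
 * Enabled build: the checks that used to sit at the call site in
 * expand() move into the helper, which reports via its return value
 * whether the page was actually marked as a guard page.
 */
static inline bool set_page_guard(struct zone *zone, struct page *page,
				unsigned int order, int migratetype)
{
	if (!debug_guardpage_enabled())
		return false;
	if (order >= debug_guardpage_minorder())
		return false;
	/* ... actually mark the page as a guard page ... */
	return true;
}
#else
/*
 * Disabled build: a constant-false static inline, so
 * "if (set_page_guard(...)) continue;" in expand() folds to
 * "if (false)" and the compiler drops the whole branch, exactly
 * as the IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) test did before.
 */
static inline bool set_page_guard(struct zone *zone, struct page *page,
				unsigned int order, int migratetype)
{
	return false;
}
#endif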