On Mon, Aug 24, 2015 at 10:53:37PM +0200, Vlastimil Babka wrote:
> On 24.8.2015 15:16, Mel Gorman wrote:
> >  	return read_seqcount_retry(&current->mems_allowed_seq, seq);
> > @@ -139,7 +141,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
> >  #else /* !CONFIG_CPUSETS */
> > -static inline bool cpusets_enabled(void) { return false; }
> > +static inline bool cpusets_mems_enabled(void) { return false; }
> >  static inline int cpuset_init(void) { return 0; }
> >  static inline void cpuset_init_smp(void) {}
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 62ae28d8ae8d..2c1c3bf54d15 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2470,7 +2470,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> >  		if (IS_ENABLED(CONFIG_NUMA) && zlc_active &&
> >  			!zlc_zone_worth_trying(zonelist, z, allowednodes))
> >  				continue;
> > -		if (cpusets_enabled() &&
> > +		if (cpusets_mems_enabled() &&
> >  			(alloc_flags & ALLOC_CPUSET) &&
> >  			!cpuset_zone_allowed(zone, gfp_mask))
> >  				continue;
> 
> Here the benefits are less clear. I guess cpuset_zone_allowed() is
> potentially costly...
> 
> Heck, shouldn't we just start the static key on -1 (if possible), so that
> it's enabled only when there's 2+ cpusets?
> 
> Hm wait a minute, that's what already happens:
> 
> static inline int nr_cpusets(void)
> {
> 	/* jump label reference count + the top-level cpuset */
> 	return static_key_count(&cpusets_enabled_key) + 1;
> }
> 
> I.e. if there's only the root cpuset, static key is disabled, so I think this
> patch is moot after all?
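
For reference, the gate being discussed looks roughly like this with
CONFIG_CPUSETS=y (a sketch along the lines of include/linux/cpuset.h of
that era, not the verbatim code):

extern struct static_key cpusets_enabled_key;

static inline bool cpusets_enabled(void)
{
	/*
	 * static_key_false() compiles to a patchable branch; it is only
	 * made live once a second (non-root) cpuset is created.
	 */
	return static_key_false(&cpusets_enabled_key);
}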
static_key_count() is an atomic read of a field in struct static_key,
whereas static_key_false() is an arch_static_branch() which can be
eliminated entirely. The patch eliminates an atomic read, so I didn't think
it was moot.
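
To illustrate the distinction, simplified from the jump label code of the
time (a sketch, assuming HAVE_JUMP_LABEL; the real definitions carry more
detail):

static inline int static_key_count(struct static_key *key)
{
	/* Unconditional atomic load of key->enabled on every call. */
	return atomic_read(&key->enabled);
}

static __always_inline bool static_key_false(struct static_key *key)
{
	/*
	 * Emits a no-op (or jump) via arch_static_branch() that is
	 * live-patched when the key is enabled; no memory load on the
	 * fast path.
	 */
	return arch_static_branch(key);
}

So nr_cpusets() always pays for the atomic_read(), while a
static_key_false() check costs nothing until the key is actually enabled.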