Re: [PATCH 6/6] mm: page_alloc: Reduce cost of the fair zone allocation policy

From: Johannes Weiner
Date: Tue Sep 02 2014 - 10:01:26 EST


On Mon, Aug 11, 2014 at 02:34:05PM +0200, Vlastimil Babka wrote:
> On 08/11/2014 02:12 PM, Mel Gorman wrote:
> >On Fri, Aug 08, 2014 at 05:27:15PM +0200, Vlastimil Babka wrote:
> >>On 07/09/2014 10:13 AM, Mel Gorman wrote:
> >>>--- a/mm/page_alloc.c
> >>>+++ b/mm/page_alloc.c
> >>>@@ -1604,6 +1604,9 @@ again:
> >>> }
> >>>
> >>> __mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
> >>
> >>This can underflow zero, right?
> >>
> >
> >Yes, because of per-cpu accounting drift.
>
> I meant mainly because of order > 0.
>
> >>>+ if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
> >>
> >>AFAICS, zone_page_state will correct negative values to zero only for
> >>CONFIG_SMP. Won't this check be broken on !CONFIG_SMP?
> >>
> >
> >On !CONFIG_SMP how can there be per-cpu accounting drift that would make
> >that counter negative?
>
> Well, the original code used "if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)"
> elsewhere, which you are replacing with the zone_is_fair_depleted check. I
> assumed that's because the counter can go negative due to order > 0. I might
> not have looked thoroughly enough, but it seems to me there's nothing that
> would prevent it, such as skipping a zone because its remaining batch is
> lower than 1 << order.
> So I think the check should be "<= 0" to be safe.

Any updates on this?
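
For reference, zone_page_state() is essentially this (paraphrased from
include/linux/vmstat.h); the clamping of negative values only exists
under CONFIG_SMP:

	static inline unsigned long zone_page_state(struct zone *zone,
						enum zone_stat_item item)
	{
		long x = atomic_long_read(&zone->vm_stat[item]);
	#ifdef CONFIG_SMP
		if (x < 0)	/* hide transient per-cpu drift */
			x = 0;
	#endif
		return x;
	}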

The counter can definitely underflow on !CONFIG_SMP, and then the flag
gets out of sync with the actual batch state. I'd still prefer just
removing this flag again; it's extra complexity and error-prone (case in
point), while the upsides are not even measurable in real life.
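
To spell out the !CONFIG_SMP failure mode with made-up numbers: suppose
a zone's NR_ALLOC_BATCH sits at 4 and an order-3 allocation comes in.
The accounting quoted above then effectively does

	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << 3));  /* 4 - 8 = -4 */
	if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&   /* reads -4, not 0 */
	    !zone_is_fair_depleted(zone))
		zone_set_flag(zone, ZONE_FAIR_DEPLETED);     /* never reached */

so the fair policy keeps treating the zone as if it still had batch
left.  Checking the counter directly, as below, avoids that.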

---

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 318df7051850..0bd77f730b38 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -534,7 +534,6 @@ typedef enum {
 	ZONE_WRITEBACK,		/* reclaim scanning has recently found
 				 * many pages under writeback
 				 */
-	ZONE_FAIR_DEPLETED,	/* fair zone policy batch depleted */
 } zone_flags_t;
 
 static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
@@ -572,11 +571,6 @@ static inline int zone_is_reclaim_locked(const struct zone *zone)
 	return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
 }
 
-static inline int zone_is_fair_depleted(const struct zone *zone)
-{
-	return test_bit(ZONE_FAIR_DEPLETED, &zone->flags);
-}
-
 static inline int zone_is_oom_locked(const struct zone *zone)
 {
 	return test_bit(ZONE_OOM_LOCKED, &zone->flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 18cee0d4c8a2..d913809a328f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1612,9 +1612,6 @@ again:
 	}
 
 	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
-	if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
-	    !zone_is_fair_depleted(zone))
-		zone_set_flag(zone, ZONE_FAIR_DEPLETED);
 
 	__count_zone_vm_events(PGALLOC, zone, 1 << order);
 	zone_statistics(preferred_zone, zone, gfp_flags);
@@ -1934,7 +1931,6 @@ static void reset_alloc_batches(struct zone *preferred_zone)
 		mod_zone_page_state(zone, NR_ALLOC_BATCH,
 			high_wmark_pages(zone) - low_wmark_pages(zone) -
 			atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
-		zone_clear_flag(zone, ZONE_FAIR_DEPLETED);
 	} while (zone++ != preferred_zone);
 }
 

@@ -1985,7 +1981,7 @@ zonelist_scan:
 		if (alloc_flags & ALLOC_FAIR) {
 			if (!zone_local(preferred_zone, zone))
 				break;
-			if (zone_is_fair_depleted(zone)) {
+			if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0) {
 				nr_fair_skipped++;
 				continue;
 			}
--