Re: [PATCH] mm/vmstat: Reduce zone lock hold time when reading /proc/pagetypeinfo
From: Michal Hocko
Date: Wed Oct 23 2019 - 02:15:16 EST
On Tue 22-10-19 14:59:02, Andrew Morton wrote:
> On Tue, 22 Oct 2019 12:21:56 -0400 Waiman Long <longman@xxxxxxxxxx> wrote:
[...]
> > - for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > - seq_printf(m, "Node %4d, zone %8s, type %12s ",
> > - pgdat->node_id,
> > - zone->name,
> > - migratetype_names[mtype]);
> > - for (order = 0; order < MAX_ORDER; ++order) {
> > + lockdep_assert_held(&zone->lock);
> > + lockdep_assert_irqs_disabled();
> > +
> > + /*
> > + * MIGRATE_MOVABLE is usually the largest list on large-memory
> > + * systems. We skip iterating that list; instead, we compute its
> > + * count by subtracting the sum of the other types from
> > + * free_area->nr_free.
> > + */
> > + for (order = 0; order < MAX_ORDER; ++order) {
> > + unsigned long nr_total = 0;
> > + struct free_area *area = &(zone->free_area[order]);
> > +
> > + for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > unsigned long freecount = 0;
> > - struct free_area *area;
> > struct list_head *curr;
> >
> > - area = &(zone->free_area[order]);
> > -
> > + if (mtype == MIGRATE_MOVABLE)
> > + continue;
> > list_for_each(curr, &area->free_list[mtype])
> > freecount++;
> > - seq_printf(m, "%6lu ", freecount);
> > + nfree[order][mtype] = freecount;
> > + nr_total += freecount;
> > }
> > + nfree[order][MIGRATE_MOVABLE] = area->nr_free - nr_total;
> > +
> > + /*
> > + * If we have already iterated over more than 64k list
> > + * entries, we may have held the zone lock for too long.
> > + * Temporarily release the lock and reschedule before
> > + * continuing so that other lock waiters have a chance
> > + * to run.
> > + */
> > + if (nr_total > (1 << 16)) {
> > + spin_unlock_irq(&zone->lock);
> > + cond_resched();
> > + spin_lock_irq(&zone->lock);
> > + }
> > + }
> > +
> > + for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > + seq_printf(m, "Node %4d, zone %8s, type %12s ",
> > + pgdat->node_id,
> > + zone->name,
> > + migratetype_names[mtype]);
> > + for (order = 0; order < MAX_ORDER; ++order)
> > + seq_printf(m, "%6lu ", nfree[order][mtype]);
> > seq_putc(m, '\n');
>
> This is not exactly a thing of beauty :( Presumably there might still
> be situations where the irq-off times remain excessive.
Yes. It is the list_for_each over the free_list that needs the lock, and
that is the actual problem here. Those lists can be really large with a
_lot_ of memory, which is why I objected to the patch: it doesn't really
address this problem. I would like to hear from Mel and Vlastimil how they
would feel about making free_list fully migratetype aware (including
nr_free).
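
A minimal sketch of that direction, assuming a per-migratetype counter is
kept next to each free list (the array-valued nr_free and the helper
signatures below are illustrative, not current kernel code):

struct free_area {
	struct list_head	free_list[MIGRATE_TYPES];
	/* illustrative: one counter per migratetype instead of one total */
	unsigned long		nr_free[MIGRATE_TYPES];
};

/* Every add/remove on a free list updates the matching counter. */
static inline void add_to_free_area(struct page *page,
				    struct free_area *area, int migratetype)
{
	list_add(&page->lru, &area->free_list[migratetype]);
	area->nr_free[migratetype]++;
}

static inline void del_page_from_free_area(struct page *page,
					   struct free_area *area,
					   int migratetype)
{
	list_del(&page->lru);
	area->nr_free[migratetype]--;
}

The /proc/pagetypeinfo reader would then print
zone->free_area[order].nr_free[mtype] directly under zone->lock, O(1) per
entry, with no list walk at all.
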
> Why are we actually holding zone->lock so much? Can we get away with
> holding it across the list_for_each() loop and nothing else? If so,
> this still isn't a bulletproof fix. Maybe just terminate the list
> walk if freecount reaches 1024. Would anyone really care?
>
> Sigh. I wonder if anyone really uses this thing for anything
> important. Can we just remove it all?
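
For reference, Andrew's early-termination idea above might look roughly
like this (the 1024 cap and the truncated-count reporting are assumptions
of this sketch, not part of the posted patch):

	unsigned long freecount = 0;
	bool overflow = false;
	struct list_head *curr;

	list_for_each(curr, &area->free_list[mtype]) {
		/* Cap the walk to bound zone->lock hold time. */
		if (++freecount > 1024) {
			overflow = true;
			break;
		}
	}
	if (overflow)
		seq_printf(m, ">%5lu ", 1024UL);	/* at least this many */
	else
		seq_printf(m, "%6lu ", freecount);
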
Vlastimil would know much better, but I have seen this being used for
fragmentation-related debugging. That should imply that restricting the
file to 0400 is sufficient, and it is a quick and easily backportable fix
for the most pressing immediate problem.
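
The permission change itself would be a one-line diff in mm/vmstat.c (a
sketch; the exact call site is an assumption):

-	proc_create_seq("pagetypeinfo", 0444, NULL, &pagetypeinfo_op);
+	proc_create_seq("pagetypeinfo", 0400, NULL, &pagetypeinfo_op);

With 0400, only root can open the file, so an unprivileged user can no
longer tie up zone->lock just by reading it in a loop.
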
--
Michal Hocko
SUSE Labs