Re: [PATCH] mm/vmstat: Reduce zone lock hold time when reading /proc/pagetypeinfo

From: Andrew Morton
Date: Tue Oct 22 2019 - 17:59:18 EST


On Tue, 22 Oct 2019 12:21:56 -0400 Waiman Long <longman@xxxxxxxxxx> wrote:

> The pagetypeinfo_showfree_print() function prints out the number of
> free blocks for each of the page orders and migrate types. The current
> code just iterates each of the free lists to get the counts. There are
> bug reports of hard lockup panics when reading the /proc/pagetypeinfo
> file because it takes too long to iterate all the free lists within
> a zone while holding the zone lock with IRQs disabled.
>
> Given the fact that /proc/pagetypeinfo is readable by all, the possibility
> of crashing a system by the simple act of reading /proc/pagetypeinfo
> by any user is a security problem that needs to be addressed.

Yes.

> There is a free_area structure associated with each page order. There
> is also a nr_free count within the free_area for all the different
> migration types combined. Tracking the number of free list entries
> for each migration type will probably add some overhead to the fast
> paths like moving pages from one migration type to another, which may
> not be desirable.
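
(For reference, the structure being discussed -- as it appears in this
era's include/linux/mmzone.h -- is:

	struct free_area {
		struct list_head	free_list[MIGRATE_TYPES];
		unsigned long		nr_free;
	};

so nr_free counts free blocks across all migrate types combined, while a
per-type count requires walking free_list[mtype].)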
>
> We can actually skip iterating the list of one of the migration types
> and use nr_free to compute the missing count. Since MIGRATE_MOVABLE
> is usually the largest one on large memory systems, this is the one
> to be skipped. Since the printing order is migration-type => order, we
> will have to store the counts in an internal 2D array before printing
> them out.
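
(Concretely: if free_area->nr_free for a given order is 1000 and walking
the non-MOVABLE lists counts 200 entries in total, the MOVABLE count is
reported as 1000 - 200 = 800 without ever touching the movable list.)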
>
> Even by skipping the MIGRATE_MOVABLE pages, we may still be holding the
> zone lock for too long blocking out other zone lock waiters from being
> run. This can be problematic for systems with large amounts of memory.
> So a check is added to temporarily release the lock and reschedule if
> more than 64k list entries have been iterated for a given order. With
> a MAX_ORDER of 11, the worst case will be iterating about 700k list
> entries before releasing the lock.
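
(That figure presumably follows if every order comes in at exactly the
64Ki threshold, so the "> 64k" check never fires: 11 * 65536 = 720,896,
i.e. roughly 700k entries walked while the lock is continuously held.)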
>
> ...
>
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1373,23 +1373,54 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> pg_data_t *pgdat, struct zone *zone)
> {
> int order, mtype;
> + unsigned long nfree[MAX_ORDER][MIGRATE_TYPES];

600+ bytes is a bit much. I guess it's OK in this situation.

> - for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> - seq_printf(m, "Node %4d, zone %8s, type %12s ",
> - pgdat->node_id,
> - zone->name,
> - migratetype_names[mtype]);
> - for (order = 0; order < MAX_ORDER; ++order) {
> + lockdep_assert_held(&zone->lock);
> + lockdep_assert_irqs_disabled();
> +
> + /*
> + * MIGRATE_MOVABLE is usually the largest one in large memory
> + * systems. We skip iterating that list. Instead, we compute it by
> + * subtracting the total of the others from free_area->nr_free.
> + */
> + for (order = 0; order < MAX_ORDER; ++order) {
> + unsigned long nr_total = 0;
> + struct free_area *area = &(zone->free_area[order]);
> +
> + for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> unsigned long freecount = 0;
> - struct free_area *area;
> struct list_head *curr;
>
> - area = &(zone->free_area[order]);
> -
> + if (mtype == MIGRATE_MOVABLE)
> + continue;
> list_for_each(curr, &area->free_list[mtype])
> freecount++;
> - seq_printf(m, "%6lu ", freecount);
> + nfree[order][mtype] = freecount;
> + nr_total += freecount;
> }
> + nfree[order][MIGRATE_MOVABLE] = area->nr_free - nr_total;
> +
> + /*
> + * If we have already iterated more than 64k of list
> + * entries, we may have held the zone lock for too long.
> + * Temporarily release the lock and reschedule before
> + * continuing so that other lock waiters have a chance
> + * to run.
> + */
> + if (nr_total > (1 << 16)) {
> + spin_unlock_irq(&zone->lock);
> + cond_resched();
> + spin_lock_irq(&zone->lock);
> + }
> + }
> +
> + for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> + seq_printf(m, "Node %4d, zone %8s, type %12s ",
> + pgdat->node_id,
> + zone->name,
> + migratetype_names[mtype]);
> + for (order = 0; order < MAX_ORDER; ++order)
> + seq_printf(m, "%6lu ", nfree[order][mtype]);
> seq_putc(m, '\n');

This is not exactly a thing of beauty :( Presumably there might still
be situations where the irq-off times remain excessive.

Why are we actually holding zone->lock so much? Can we get away with
holding it across the list_for_each() loop and nothing else? If so,
this still isn't a bulletproof fix. Maybe just terminate the list
walk if freecount reaches 1024. Would anyone really care?
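
Something like this, perhaps -- a minimal, untested sketch of that idea
(take zone->lock only around each individual list walk, and clamp each
walk at 1024 entries; names as in the existing function):

	static void pagetypeinfo_showfree_print(struct seq_file *m,
						pg_data_t *pgdat, struct zone *zone)
	{
		int order, mtype;

		for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
			seq_printf(m, "Node %4d, zone %8s, type %12s ",
				   pgdat->node_id, zone->name,
				   migratetype_names[mtype]);
			for (order = 0; order < MAX_ORDER; ++order) {
				unsigned long freecount = 0;
				struct list_head *curr;

				/* Hold the lock across one list walk only. */
				spin_lock_irq(&zone->lock);
				list_for_each(curr,
					      &zone->free_area[order].free_list[mtype]) {
					/* Clamp: stop counting at 1024. */
					if (++freecount >= 1024)
						break;
				}
				spin_unlock_irq(&zone->lock);
				seq_printf(m, "%6lu ", freecount);
			}
			seq_putc(m, '\n');
		}
	}

That bounds each irq-off window to a single capped walk, at the cost of
reporting a clamped count for pathologically long lists.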

Sigh. I wonder if anyone really uses this thing for anything
important. Can we just remove it all?