Re: [RFC PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
From: Waiman Long
Date: Wed Oct 23 2019 - 12:17:54 EST
On 10/23/19 12:10 PM, Michal Hocko wrote:
> On Wed 23-10-19 10:56:30, Waiman Long wrote:
>> On 10/23/19 6:27 AM, Michal Hocko wrote:
>>> From: Michal Hocko <mhocko@xxxxxxxx>
>>>
>>> pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
>>> This is not really nice because it blocks both interrupts on that
>>> CPU and the page allocator. On large machines this might even trigger
>>> the hard lockup detector.
>>>
>>> Considering that pagetypeinfo is a debugging tool, we do not really need
>>> exact numbers here. The primary reason to look at the output is to see
>>> how pageblocks are spread among different migratetypes, so putting
>>> a bound on the number of pages on the free_list sounds like a reasonable
>>> tradeoff.
>>>
>>> The new output will simply look like
>>> [...]
>>> Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
>>>
>>> instead of
>>> Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
>>>
>>> The limit has been chosen arbitrarily and it is subject to future
>>> change should there be a need for that.
>>>
>>> Suggested-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>> Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
>>> ---
>>> mm/vmstat.c | 19 ++++++++++++++++++-
>>> 1 file changed, 18 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>>> index 4e885ecd44d1..762034fc3b83 100644
>>> --- a/mm/vmstat.c
>>> +++ b/mm/vmstat.c
>>> @@ -1386,8 +1386,25 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
>>>
>>> area = &(zone->free_area[order]);
>>>
>>> - list_for_each(curr, &area->free_list[mtype])
>>> + list_for_each(curr, &area->free_list[mtype]) {
>>> freecount++;
>>> + /*
>>> + * Cap the free_list iteration because it might
>>> + * be really large and we are under a spinlock
>>> + * so a long time spent here could trigger a
>>> + * hard lockup detector. Anyway this is a
>>> + * debugging tool so knowing there is a handful
>>> + * of pages in this order should be more than
>>> + * sufficient
>>> + */
>>> + if (freecount > 100000) {
>>> + seq_printf(m, ">%6lu ", freecount);
>>> + spin_unlock_irq(&zone->lock);
>>> + cond_resched();
>>> + spin_lock_irq(&zone->lock);
>>> + continue;
>> list_for_each() is a for loop. The continue statement will just keep
>> iterating the rest of the list with the possibility that curr will be
>> stale. Should we use a goto to jump after the seq_printf() below?
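(As a side note, to spell out why the continue is unsafe:
list_for_each() in include/linux/list.h is essentially

    #define list_for_each(pos, head) \
            for (pos = (head)->next; pos != (head); pos = pos->next)

so once zone->lock has been dropped and re-taken, the loop increment
still dereferences curr->next even though curr may have been removed
from the free list in the meantime.)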
> You are right. Kinda brown paper bag material. Sorry about that. What
> about this on top?
> ---
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 762034fc3b83..c156ce24a322 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1383,11 +1383,11 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> unsigned long freecount = 0;
> struct free_area *area;
> struct list_head *curr;
> + bool overflow = false;
>
> area = &(zone->free_area[order]);
>
> list_for_each(curr, &area->free_list[mtype]) {
> - freecount++;
> /*
> * Cap the free_list iteration because it might
> * be really large and we are under a spinlock
> @@ -1397,15 +1397,15 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> * of pages in this order should be more than
> * sufficient
> */
> - if (freecount > 100000) {
> - seq_printf(m, ">%6lu ", freecount);
> + if (++freecount >= 100000) {
> + overflow = true;
> spin_unlock_irq(&zone->lock);
> cond_resched();
> spin_lock_irq(&zone->lock);
> - continue;
> + break;
> }
> }
> - seq_printf(m, "%6lu ", freecount);
> + seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
> }
> seq_putc(m, '\n');
> }
>
Yes, that looks good to me. There is still a small corner case: when the
free list holds exactly 100,000 entries, the count is printed as an
overflow (">100000") rather than the exact number. However, it is not a
big deal and I can live with that.
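
For illustration, here is a minimal userspace sketch of the capped
counting; report() is just a stand-in for the loop in
pagetypeinfo_showfree_print(), using the same 100000 cap and output
format:

    #include <stdbool.h>
    #include <stdio.h>

    /* Mimic the capped free_list counting from the patch above. */
    static void report(unsigned long list_len)
    {
            unsigned long freecount = 0;
            bool overflow = false;
            unsigned long i;

            for (i = 0; i < list_len; i++) {
                    if (++freecount >= 100000) {
                            overflow = true;
                            break;
                    }
            }
            printf("%s%6lu\n", overflow ? ">" : "", freecount);
    }

    int main(void)
    {
            /* 99999 entries: the exact count " 99999" is printed. */
            report(99999);
            /* Exactly 100000 entries: printed as ">100000", the corner case above. */
            report(100000);
            /* 250000 entries: the walk stops early and ">100000" is printed. */
            report(250000);
            return 0;
    }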
Thanks,
Longman