Re: [RFC PATCH] x86, numa: always initialize all possible nodes

From: Michal Hocko
Date: Mon Feb 11 2019 - 09:52:30 EST


On Mon 11-02-19 14:49:09, Ingo Molnar wrote:
>
> * Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> > On Thu 24-01-19 11:10:50, Dave Hansen wrote:
> > > On 1/24/19 6:17 AM, Michal Hocko wrote:
> > > > and nr_cpus set to 4. The underlying reason is that the device is bound
> > > > to node 2, which doesn't have any memory, and init_cpu_to_node only
> > > > initializes memory-less nodes for possible cpus, which nr_cpus restricts.
> > > > This in turn means that proper zonelists are not allocated and the page
> > > > allocator blows up.
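> > > >
> > > > For reference, the loop in question has roughly this shape (a
> > > > simplified sketch of arch/x86/mm/numa.c from this era, details
> > > > elided):
> > > >
> > > > 	void __init init_cpu_to_node(void)
> > > > 	{
> > > > 		int cpu;
> > > >
> > > > 		/*
> > > > 		 * nr_cpus=4 caps nr_cpu_ids, so possible CPUs (and with
> > > > 		 * them the nodes they sit on) beyond that limit are
> > > > 		 * never visited here.
> > > > 		 */
> > > > 		for_each_possible_cpu(cpu) {
> > > > 			int node = numa_cpu_node(cpu);
> > > >
> > > > 			if (node == NUMA_NO_NODE)
> > > > 				continue;
> > > >
> > > > 			/* a memory-less node is only initialized here */
> > > > 			if (!node_online(node))
> > > > 				init_memory_less_node(node);
> > > >
> > > > 			numa_set_node(cpu, node);
> > > > 		}
> > > > 	}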
> > >
> > > This looks OK to me.
> > >
> > > Could we add a few DEBUG_VM checks that *look* for these invalid
> > > zonelists? Or, would our existing list debugging have caught this?
> >
> > Currently we simply blow up because those zonelists are NULL. I do not
> > think we have a way to check whether an existing zonelist is actually
> > _correct_ other than checking it for NULL. But what would we do in the
> > latter case?
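> >
> > The most a DEBUG_VM check could do is something along these lines
> > (untested sketch, the helper name is made up):
> >
> > 	static inline void debug_check_zonelist(int nid)
> > 	{
> > 		pg_data_t *pgdat = NODE_DATA(nid);
> >
> > 		/* an empty fallback zonelist means the node was never
> > 		 * properly initialized */
> > 		VM_WARN_ON(!pgdat ||
> > 			   !zonelist_zone(pgdat->node_zonelists[ZONELIST_FALLBACK]._zonerefs));
> > 	}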
> >
> > > Basically, is this bug also a sign that we need better debugging around
> > > this?
> >
> > My earlier patch had a debugging printk to display the zonelists and
> > that might be worthwhile, I guess. Basically something like this:
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 2e097f336126..c30d59f803fb 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5259,6 +5259,17 @@ static void build_zonelists(pg_data_t *pgdat)
> >
> > build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
> > build_thisnode_zonelists(pgdat);
> > +
> > +	{
> > +		struct zoneref *z;
> > +		struct zone *zone;
> > +
> > +		/* dump the fallback zonelist so broken nodes are visible */
> > +		pr_info("node[%d] zonelist: ", pgdat->node_id);
> > +		for_each_zone_zonelist(zone, z, &pgdat->node_zonelists[ZONELIST_FALLBACK], MAX_NR_ZONES - 1)
> > +			pr_cont("%d:%s ", zone_to_nid(zone), zone->name);
> > +		pr_cont("\n");
> > +	}
> > }
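> >
> > With that applied, every node would log one line at boot, something
> > like (the exact zones obviously depend on the machine):
> >
> > 	node[0] zonelist: 0:Normal 0:DMA32 0:DMA 1:Normal
> >
> > so an uninitialized or empty zonelist would be immediately visible
> > in dmesg.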
>
> Looks like this patch fell through the cracks - any update on this?

I was waiting for some feedback. As there were no complaints about the
above debugging output, I will make it a separate patch and post both
patches later this week. I just have to go through my backlog pile after
vacation.
--
Michal Hocko
SUSE Labs