RE: [PATCH] cpu-topology: warn if NUMA configurations conflicts with lower layer

From: Zengtao (B)
Date: Sun Jan 05 2020 - 20:38:15 EST


> -----Original Message-----
> From: Sudeep Holla [mailto:sudeep.holla@xxxxxxx]
> Sent: Friday, January 03, 2020 7:40 PM
> To: Zengtao (B)
> Cc: Valentin Schneider; Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> linux-kernel@xxxxxxxxxxxxxxx; Morten Rasmussen; Sudeep Holla
> Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> with lower layer
>
> On Fri, Jan 03, 2020 at 04:24:04AM +0000, Zengtao (B) wrote:
> > > -----Original Message-----
> > > From: Valentin Schneider [mailto:valentin.schneider@xxxxxxx]
> > > Sent: Thursday, January 02, 2020 9:22 PM
> > > To: Zengtao (B); Sudeep Holla
> > > Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > > linux-kernel@xxxxxxxxxxxxxxx; Morten Rasmussen
> > > Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations
> > > conflicts
> > > with lower layer
> > >
>
> [...]
>
> > >
> > > Right, and that is checked when you have sched_debug on the cmdline
> > > (or write 1 to /sys/kernel/debug/sched_debug & regenerate the sched
> > > domains)
> > >
> >
> > No, I don't think you've understood my issue yet; please look at my
> > example first:
> >
> > *************************************
> > NUMA: 0-2, 3-7
> > core_siblings: 0-3, 4-7
> > *************************************
> > When we are building the sched domain, per the current code:
> > (1) For core 3:
> > MC sched domain falls back to 3-7
> > DIE sched domain is 3-7
> > (2) For core 4:
> > MC sched domain is 4-7
> > DIE sched domain is 3-7
> >
> > When we build the sched groups for the MC level:
> > (1). core 3's sched group chain is built as: 3->4->5->6->7->3
> > (2). core 4's sched group chain is built as: 4->5->6->7->4
> > so after (2),
> > core 3's sched group chain is overlapped, and it's no longer a
> > closed ring. Any later walk of core 3's sched groups then loops
> > forever.
> >
> > And it's difficult for the scheduler to detect such errors,
> > which is why I think a warning is necessary here.
> >
>
> We can figure out a way to warn if it's absolutely necessary, but I
> would like to understand the system topology here. You haven't answered
> my query on the cache topology. Please describe in more detail, with
> specific hardware design details, why the NUMA configuration looks like
> the above example. Is this just a case where the user can specify
> anything they wish?
>

Sorry for the late response. In fact, this is a VM use case; you can
simply treat it as a test case. It's a corner case, but it hangs the
kernel, which is why I suggest that a warning is needed.
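
To make the hang concrete, here is a minimal userspace sketch of the
MC-level sched group rings from the example quoted above. This is a toy
model, not kernel code: "struct group" and build_ring() are made-up
stand-ins for the kernel's shared per-CPU sched group nodes and ring
construction. Rebuilding the ring for core 4's span rewrites group 7's
next pointer, so a walk starting at group 3 never gets back to its
starting point:

#include <stdio.h>

#define NR_CPUS 8

/* One shared sched-group node per CPU, linked into a ring per span. */
struct group {
	int cpu;
	struct group *next;
};

static struct group groups[NR_CPUS];

/* Link the groups covering CPUs first..last into a circular chain. */
static void build_ring(int first, int last)
{
	for (int i = first; i < last; i++)
		groups[i].next = &groups[i + 1];
	groups[last].next = &groups[first];
}

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		groups[i].cpu = i;

	build_ring(3, 7);	/* core 3's MC span fell back to 3-7 */
	build_ring(4, 7);	/* core 4's MC span 4-7 rewrites 7->next */

	/*
	 * Walk core 3's "ring". It never returns to group 3, so a
	 * walk-until-back-at-the-first-group loop would spin forever;
	 * cap the steps here so the demo terminates.
	 */
	struct group *g = &groups[3];
	int steps = 0;
	do {
		printf("%d ", g->cpu);
		g = g->next;
	} while (g != &groups[3] && ++steps < 16);
	printf("\n%s\n", g == &groups[3] ?
	       "ring intact" : "ring broken: never returns to group 3");
	return 0;
}

Running it prints 3 4 5 6 7 4 5 6 7 ... and "ring broken"; without the
step cap, the walk never terminates, which is the hang I described.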

I think we need a sanity check, or at least a warning, either in the
scheduler or in the arch topology parsing code, as sketched below.
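
Something along the lines of the check below might be enough. This is
only a userspace sketch of the idea, using plain unsigned bitmasks
instead of the kernel's struct cpumask, and masks_sane() is a made-up
name. The rule: for any two CPUs, the spans at a given topology level
must be either identical or completely disjoint; a NUMA boundary that
cuts through a core_siblings span violates it.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/*
 * Pairwise rule: two CPUs' spans at the same topology level must be
 * either identical or completely disjoint. A partial overlap means a
 * NUMA boundary cuts through a lower-level span.
 */
static bool masks_sane(const unsigned int *mask, int nr)
{
	for (int i = 0; i < nr; i++)
		for (int j = i + 1; j < nr; j++)
			if ((mask[i] & mask[j]) && mask[i] != mask[j]) {
				printf("CPU%d span %#x partially overlaps CPU%d span %#x\n",
				       i, mask[i], j, mask[j]);
				return false;
			}
	return true;
}

int main(void)
{
	/*
	 * MC spans from the example: NUMA 0-2/3-7 laid over
	 * core_siblings 0-3/4-7, with CPU 3's span falling back
	 * to 3-7 (0xf8).
	 */
	const unsigned int mc[NR_CPUS] = {
		0x07, 0x07, 0x07,	/* CPUs 0-2: span 0-2 */
		0xf8,			/* CPU 3: fallback span 3-7 */
		0xf0, 0xf0, 0xf0, 0xf0,	/* CPUs 4-7: span 4-7 */
	};

	if (!masks_sane(mc, NR_CPUS))
		printf("WARN: NUMA configuration conflicts with lower layer\n");
	return 0;
}

On the example topology this reports that CPU 3's span partially
overlaps CPU 4's, which is exactly the condition that corrupts the
sched group ring above, so warning at this point would catch the bad
configuration before the scheduler hangs.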

Regards
Zengtao