Re: [PATCH] arm64/numa: Add more vetting in numa_set_distance()
From: Will Deacon
Date: Mon Oct 29 2018 - 10:48:41 EST
On Mon, Oct 29, 2018 at 06:15:42PM +0530, Anshuman Khandual wrote:
> On 10/29/2018 06:02 PM, John Garry wrote:
> > On 29/10/2018 12:16, Will Deacon wrote:
> >> On Mon, Oct 29, 2018 at 12:14:09PM +0000, John Garry wrote:
> >>> On 29/10/2018 11:25, Will Deacon wrote:
> >>>> On Fri, Oct 26, 2018 at 09:57:47PM +0800, John Garry wrote:
> >>>>> Currently it is acceptable to set the distance between 2 separate nodes to
> >>>>> LOCAL_DISTANCE.
> >>>>>
> >>>>> Reject this as it is invalid.
> >>>>>
> >>>>> This change avoids a crash reported in [1].
> >>>>>
> >>>>> [1] https://www.spinics.net/lists/arm-kernel/msg683304.html
> >>>>>
> >>>>> Signed-off-by: John Garry <john.garry@xxxxxxxxxx>
> >>>>>
> >>>>> diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
> >>>>> index 146c04c..6092e3d 100644
> >>>>> --- a/arch/arm64/mm/numa.c
> >>>>> +++ b/arch/arm64/mm/numa.c
> >>>>> @@ -335,7 +335,8 @@ void __init numa_set_distance(int from, int to, int distance)
> >>>>> }
> >>>>>
> >>>>> if ((u8)distance != distance ||
> >>>>> - (from == to && distance != LOCAL_DISTANCE)) {
> >>>>> + (from == to && distance != LOCAL_DISTANCE) ||
> >>>>> + (from != to && distance == LOCAL_DISTANCE)) {
> >>>>
> >>>> The current code here is more-or-less lifted from the x86 implementation
> >>>> of numa_set_distance().
> >>>
> >>> Right, I did notice this. I didn't think that x86 folks would be so
> >>> concerned since they generally only use ACPI, and the ACPI code already
> >>> validates these distances in drivers/acpi/numa.c: slit_valid() [unlike OF
> >>> code].
> >>>
> >>>> I think we should either factor out the sanity check
> >>>> into a core helper or make the core code robust to these funny configurations.
> >>>
> >>> OK, so to me it would make sense to factor out a sanity check into a core
> >>> helper.
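
For illustration, such a shared helper might look roughly like this; the name
numa_distance_valid() is made up for the sketch, and the rules mirror what
slit_valid() enforces on the ACPI side (LOCAL_DISTANCE as defined in the core
headers):

static bool numa_distance_valid(int from, int to, int distance)
{
	/* The distance must fit into the u8 distance table. */
	if ((u8)distance != distance)
		return false;

	/* A node is exactly LOCAL_DISTANCE from itself... */
	if (from == to)
		return distance == LOCAL_DISTANCE;

	/* ...and strictly further than LOCAL_DISTANCE from any other node. */
	return distance > LOCAL_DISTANCE;
}
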
> >>
> >> That, or have the OF code perform the same validation that slit_valid() is
> >> doing for ACPI. I'm just trying to avoid other architectures running into
> >> this problem down the line.
> >>
> >
> > Right, the OF code should do this validation if the ACPI code does it
> > (especially since the DT binding actually specifies the distance rules),
> > and not rely on the arch NUMA code to accept/reject numa_set_distance()
> > combinations.
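
As a rough sketch of the devicetree side, the distance-map parser could then
reject the whole map on the first bad (from, to, distance) triplet; the
entries/nr_entries variables and the numa_distance_valid() call below are
illustrative rather than the existing of_numa code:

	/*
	 * The NUMA DT binding encodes the map as (from, to, distance)
	 * triplets in the "distance-matrix" property.  Reject the whole
	 * map on the first triplet that breaks the distance rules.
	 */
	for (i = 0; i + 3 <= nr_entries; i += 3) {
		u32 from = entries[i];
		u32 to = entries[i + 1];
		u32 distance = entries[i + 2];

		if (!numa_distance_valid(from, to, distance)) {
			pr_err("NUMA: invalid distance %u from node %u to node %u\n",
			       distance, from, to);
			return -EINVAL;
		}

		numa_set_distance(from, to, distance);
	}

Bailing out with an error here, and falling back to a flat (no-NUMA) setup,
would then match what the ACPI path already does when slit_valid() rejects a
bad SLIT.
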
>
> I would say this particular condition check still falls under the arch NUMA init
> code sanity checks, like the other basic tests numa_set_distance() already does.
> It should not be a necessity for the OF driver to check these; it can choose to,
> but arch NUMA should verify basic things, such as two different NUMA nodes not
> having LOCAL_DISTANCE as their distance, as in this case.
>
> (from == to && distance != LOCAL_DISTANCE) ||
> (from != to && distance == LOCAL_DISTANCE))
>
>
> >
> > And, in addition to this, I'd say OF should disable NUMA if given an
> > invalid table (like ACPI does).
>
> The decision to disable NUMA should rest with the kernel (arch NUMA) once the
> kernel starts booting. The platform should have sent the right values; it is not
> right for the OF driver to adjust what the platform passed in the FDT once the
> kernel is booting, along the lines of "kernel NUMA won't like these distance
> factors, let's clean them up before passing them on to MM". Disabling NUMA is one
> such major decision, and it should be with the arch NUMA code, not the OF driver.
I don't fully understand what you're getting at here, but why would the
check posted by John be arch-specific? It's already done in the core code
for ACPI, so there's a discrepancy between ACPI and FDT that should be
resolved. I'd also argue that the subtleties of this check are actually
based on what the core code is willing to accept in terms of the NUMA
description, so it's also the best place to enforce it.
Will