[+cc linux-pci, Myron, Suravee, Kim, Aravind]
On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman <daniel@xxxxxxxxxxxxx> wrote:
> For systems with multiple servers and a routed fabric, all northbridges get
> assigned to the first server. Fix this by also using the node reported from
> the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
> by definition, which is on NUMA node 0 by definition, so this is invariant
> on most systems.
>
> Tested on fam10h and fam15h single- and multi-fabric systems; candidate
> for stable.
I wish this had been cc'd to linux-pci. We're talking about a related
change by Suravee there. In fact, we were hoping this quirk could be
removed altogether.
I don't understand what this quirk is doing. Normally we discover the
NUMA node for a PCI host bridge via the ACPI _PXM method. The way
_PXM works is that every PCI device in the hierarchy below the bridge
inherits the same node number as the host bridge. I first thought
this might be a workaround for a system that lacks _PXM, but I don't
think that can be right, because you're only changing the node for a
few devices, not the whole hierarchy.
So I suspect the problem is more complicated, and maybe _PXM is
insufficient to describe the topology? Are there subtrees that should
have nodes different from the host bridge?
I know this patch is already in v3.14-rc7, but I'd still like to
understand it so we can do the right thing with Suravee's patch.