RE: [PATCH 2/7] x86/amd_nb: Add support for northbridges on Aldebaran
From: Chatradhi, Naveen Krishna
Date: Tue Aug 10 2021 - 08:45:27 EST
Hi Yazen
Regards,
Naveenk
-----Original Message-----
From: Ghannam, Yazen <Yazen.Ghannam@xxxxxxx>
Sent: Tuesday, July 20, 2021 1:56 AM
To: Chatradhi, Naveen Krishna <NaveenKrishna.Chatradhi@xxxxxxx>
Cc: linux-edac@xxxxxxxxxxxxxxx; x86@xxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; bp@xxxxxxxxx; mingo@xxxxxxxxxx; mchehab@xxxxxxxxxx
Subject: Re: [PATCH 2/7] x86/amd_nb: Add support for northbridges on Aldebaran
On Wed, Jun 30, 2021 at 08:58:23PM +0530, Naveen Krishna Chatradhi wrote:
> From: Muralidhara M K <muralimk@xxxxxxx>
>
> On newer heterogeneous systems from AMD, there is a possibility of
> having GPU nodes along with CPU nodes with the MCA banks. The GPU
> nodes (noncpu nodes) starts enumerating from northbridge index 8.
>
"there is a possibility of having GPU nodes along with CPU nodes with the MCA banks" doesn't read clearly to me. It could be more explicit.
For example, "On newer systems...the CPUs manages MCA errors reported from the GPUs. Enumerate the GPU nodes with the AMD NB framework to support EDAC, etc." or something like this.
Also, "northbridge index" isn't a hardware thing; rather, it's an internal Linux value. I think you are referring to the "AMD Node ID" value from CPUID. The GPUs don't have CPUID, so the "AMD Node ID" value can't be read directly as it is for CPUs. But the current hardware implementation is such that the GPU nodes are enumerated in sequential order based on the PCI hierarchy, and the first GPU node is assumed to have an "AMD Node ID" value of 8 (the second GPU node has 9, etc.). With this implementation detail, the Data Fabric on the GPU nodes can be accessed the same way as the Data Fabric on CPU nodes.
> Aldebaran GPUs have 2 root ports, with 4 misc port for each root.
>
I don't fully understand this sentence. There are 2 "Nodes"/Data Fabrics per GPU package, but what does "4 misc port for each root" mean? In any case, is this relevant to this patch?
Also, the commit message should use the imperative mood, i.e. "Add ...".
[naveenk:] Modified the commit message
> Signed-off-by: Muralidhara M K <muralimk@xxxxxxx>
> Signed-off-by: Naveen Krishna Chatradhi <nchatrad@xxxxxxx>
> ---
> arch/x86/include/asm/amd_nb.h | 6 ++++
> arch/x86/kernel/amd_nb.c | 62 ++++++++++++++++++++++++++++++++---
> 2 files changed, 63 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/amd_nb.h b/arch/x86/include/asm/amd_nb.h
> index 00d1a400b7a1..e71581cf00e3 100644
> --- a/arch/x86/include/asm/amd_nb.h
> +++ b/arch/x86/include/asm/amd_nb.h
> @@ -79,6 +79,12 @@ struct amd_northbridge_info {
>
> #ifdef CONFIG_AMD_NB
>
> +/*
> + * On Newer heterogeneous systems from AMD with CPU and GPU nodes connected
> + * via xGMI links, the NON CPU Nodes are enumerated from index 8 */
> +#define NONCPU_NODE_INDEX 8
"Newer" doesn't need to be capitalized. And there should be a period at the end of the sentence.
I don't think "xGMI links" would mean much to most folks. I think the implication here is that the CPUs and GPUs are connected directly together (or rather their Data Fabrics are connected), as is done in 2-socket CPU systems and also within a socket for Multi-Chip Module (MCM) CPUs like Naples.
[naveenk:] Modified the message
> +
> u16 amd_nb_num(void);
> bool amd_nb_has_feature(unsigned int feature);
> struct amd_northbridge *node_to_amd_nb(int node);
> diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
> index 5884dfa619ff..489003e850dd 100644
> --- a/arch/x86/kernel/amd_nb.c
> +++ b/arch/x86/kernel/amd_nb.c
> @@ -26,6 +26,8 @@
> #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F4 0x1444
> #define PCI_DEVICE_ID_AMD_19H_DF_F4 0x1654
> #define PCI_DEVICE_ID_AMD_19H_M50H_DF_F4 0x166e
> +#define PCI_DEVICE_ID_AMD_ALDEBARAN_ROOT 0x14bb
> +#define PCI_DEVICE_ID_AMD_ALDEBARAN_DF_F4 0x14d4
>
These PCI IDs look correct.
> /* Protect the PCI config register pairs used for SMN. */
> static DEFINE_MUTEX(smn_mutex);
> @@ -94,6 +96,21 @@ static const struct pci_device_id hygon_nb_link_ids[] = {
> {}
> };
>
> +static const struct pci_device_id amd_noncpu_root_ids[] = {
> + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_ALDEBARAN_ROOT) },
> + {}
> +};
> +
> +static const struct pci_device_id amd_noncpu_nb_misc_ids[] = {
> + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_ALDEBARAN_DF_F3) },
> + {}
> +};
> +
> +static const struct pci_device_id amd_noncpu_nb_link_ids[] = {
> + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_ALDEBARAN_DF_F4) },
> + {}
> +};
> +
I think separating the CPU and non-CPU IDs is a good idea.
> const struct amd_nb_bus_dev_range amd_nb_bus_dev_ranges[] __initconst = {
> { 0x00, 0x18, 0x20 },
> { 0xff, 0x00, 0x20 },
> @@ -182,11 +199,16 @@ int amd_cache_northbridges(void)
> const struct pci_device_id *misc_ids = amd_nb_misc_ids;
> const struct pci_device_id *link_ids = amd_nb_link_ids;
> const struct pci_device_id *root_ids = amd_root_ids;
> +
> + const struct pci_device_id *noncpu_misc_ids = amd_noncpu_nb_misc_ids;
> + const struct pci_device_id *noncpu_link_ids = amd_noncpu_nb_link_ids;
> + const struct pci_device_id *noncpu_root_ids = amd_noncpu_root_ids;
> +
> struct pci_dev *root, *misc, *link;
> struct amd_northbridge *nb;
> u16 roots_per_misc = 0;
> - u16 misc_count = 0;
> - u16 root_count = 0;
> + u16 misc_count = 0, misc_count_noncpu = 0;
> + u16 root_count = 0, root_count_noncpu = 0;
> u16 i, j;
>
> if (amd_northbridges.num)
> @@ -205,10 +227,16 @@ int amd_cache_northbridges(void)
> if (!misc_count)
> return -ENODEV;
>
> + while ((misc = next_northbridge(misc, noncpu_misc_ids)) != NULL)
> + misc_count_noncpu++;
> +
> root = NULL;
> while ((root = next_northbridge(root, root_ids)) != NULL)
> root_count++;
>
> + while ((root = next_northbridge(root, noncpu_root_ids)) != NULL)
> + root_count_noncpu++;
> +
> if (root_count) {
> roots_per_misc = root_count / misc_count;
>
> @@ -222,15 +250,27 @@ int amd_cache_northbridges(void)
> }
> }
>
> - nb = kcalloc(misc_count, sizeof(struct amd_northbridge), GFP_KERNEL);
> + /*
> + * The valid amd_northbridges are in between (0 ~ misc_count) and
> + * (NONCPU_NODE_INDEX ~ NONCPU_NODE_INDEX + misc_count_noncpu)
> + */
This comment isn't clear to me. Is it even necessary?
[naveenk:] moved the message
> + if (misc_count_noncpu)
> + /*
> + * There are NONCPU Nodes with pci root ports starting at index 8
> + * allocate few extra cells for simplicity in handling the indexes
> + */
I think this comment can be more explicit. The first non-CPU Node ID starts at 8 even if there are fewer than 8 CPU nodes. To maintain the AMD Node ID to Linux amd_nb indexing scheme, allocate the number of GPU nodes plus 8. Some allocated amd_northbridge structures will go unused when the number of CPU nodes is less than 8, but this tradeoff is to keep things relatively simple.
> + amd_northbridges.num = NONCPU_NODE_INDEX + misc_count_noncpu;
> + else
> + amd_northbridges.num = misc_count;
The if-else statements should have {}s even though there's only a single line of code in each. This is just to make it easier to read multiple lines. Or the second code comment can be merged with the first outside the if-else.
[naveenk:] Done
> +
> + nb = kcalloc(amd_northbridges.num, sizeof(struct amd_northbridge), GFP_KERNEL);
> if (!nb)
> return -ENOMEM;
>
> amd_northbridges.nb = nb;
> - amd_northbridges.num = misc_count;
>
> link = misc = root = NULL;
> - for (i = 0; i < amd_northbridges.num; i++) {
> + for (i = 0; i < misc_count; i++) {
> node_to_amd_nb(i)->root = root =
> next_northbridge(root, root_ids);
> node_to_amd_nb(i)->misc = misc =
> @@ -251,6 +291,18 @@ int amd_cache_northbridges(void)
> root = next_northbridge(root, root_ids);
> }
>
> + link = misc = root = NULL;
This line can go inside the if statement below.
[naveenk:] Done
I'm not sure it's totally necessary since the GPU devices should be listed after the CPU devices. But I guess better safe than sorry in case that implementation detail doesn't hold in the future. If you keep it, then I think you should do the same above when finding the counts.
> + if (misc_count_noncpu) {
> + for (i = NONCPU_NODE_INDEX; i < NONCPU_NODE_INDEX + misc_count_noncpu; i++) {
> + node_to_amd_nb(i)->root = root =
> + next_northbridge(root, noncpu_root_ids);
> + node_to_amd_nb(i)->misc = misc =
> + next_northbridge(misc, noncpu_misc_ids);
> + node_to_amd_nb(i)->link = link =
> + next_northbridge(link, noncpu_link_ids);
> + }
> + }
> +
> if (amd_gart_present())
> amd_northbridges.flags |= AMD_NB_GART;
>
> --
Thanks,
Yazen
[naveenk:] Thank you