Re: [PATCH v3 1/3] x86/amd_nb: Add support for northbridges on Aldebaran

From: Yazen Ghannam
Date: Mon Sep 13 2021 - 14:07:43 EST


On Thu, Sep 02, 2021 at 07:30:24PM +0200, Borislav Petkov wrote:
> On Wed, Sep 01, 2021 at 06:17:21PM +0000, Yazen Ghannam wrote:
> > These devices aren't officially GPUs, since they don't have graphics/video
> > capabilities. Can we come up with a new term for this class of devices? Maybe
> > accelerators or something?
> >
> > In any case, GPU is still used throughout documentation and code, so it's fair
> > to just stick with "gpu".
>
> Hmm, yeah, everybody is talking about special-purpose processing units
> now, i.e., accelerators or whatever they call them. I guess this is the
> new fancy thing since sliced bread.
>
> Well, what are those PCI IDs going to represent? Devices which have RAS
> capabilities on them?
>
> We have this nomenclature called "uncore" in the perf subsystem for
> counters which are not part of the CPU core or whatever. But there we
> use that term on AMD already so that might cause confusion.
>
> But I guess the type of those devices doesn't matter for amd_nb.c,
> right?
>
> All that thing cares for is having an array of northbridges, each with
> the respective PCI devices and that's it. So for amd_nb.c I think that
> differentiation doesn't matter... but keep reading...
>
> > We use the Node ID to index into the amd_northbridge.nb array, e.g. in
> > node_to_amd_nb().
> >
> > We can get the Node ID of a GPU node when processing an MCA error as in Patch
> > 2 of this set. The hardware is going to give us a value of 8 or more.
> >
> > So, for example, if we set up the "nb" array like this for 1 CPU and 2 GPUs:
> > [ID:Type] : [0: CPU], [8: GPU], [9: GPU]
> >
> > Then I think we'll need some more processing at runtime to map, for example,
> > an error from GPU Node 9 to NB array Index 2, etc.
> >
> > Or we can manage this at init time like this:
> > [0: CPU], [1: NULL], [2: NULL], [3: NULL], [4: NULL], [5: NULL], [6: NULL],
> > [7: NULL], [8: GPU], [9: GPU]
> >
> > And at runtime, the code which does Node ID to NB entry just works. This
> > applies to node_to_amd_nb(), places where we loop over amd_nb_num(), etc.
> >
> > What do you think?
>
> Ok, looking at patch 2, it does:
>
> node_id = ((m->ipid >> 44) & 0xF);
>
> So how ugly would it become if you do here:
>
> node_id = ((m->ipid >> 44) & 0xF);
> node_id -= accel_id_offset;
>
> where that accel_id_offset is the thing you've read out from one of the
> Data Fabric registers before?
>
> This way, the gap between CPU IDs and accel IDs is gone and in the
> software view, there is none.
>
> Or are we reading other hardware registers which are aware of that gap
> and we would have to remove it again to get the proper index? And if so,
> and if it becomes real ugly, maybe we will have to bite the bullet and
> do the gap in the array but that would be yucky...
>
> Hmmm.
>

I really like this idea. I've gone over the current and future code a few
times to make sure things are okay. As far as I can tell, the idea should work
in most cases, since the "node_id" value is primarily used to look up the
right devices in the nb array. But there is one case so far where the "real"
hardware node_id is needed: during address translation. That case is in the
new Data Fabric v3.5 code currently in review, and it only applies to the GPU
devices.

What do you think about having a couple of helper functions to convert
between the hardware node IDs and the Linux nb array indices? Most cases will
use the "hardware -> Linux index" direction, and a "Linux index -> hardware"
helper can cover the address translation case.
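Something like the rough sketch below, maybe. The names here
(amd_node_to_nb_index(), gpu_node_start, etc.) are placeholders for
illustration, not existing interfaces, and it assumes the CPU nodes occupy
hardware node IDs starting at 0 while the GPU nodes start at a hardware ID
read from a Data Fabric register at init time:

#include <linux/types.h>

static u16 cpu_node_count;	/* number of CPU nodes, e.g. 1 */
static u16 gpu_node_start;	/* first hardware GPU node ID, e.g. 8 */

/* Hardware node ID -> compact index into the nb array. */
static u16 amd_node_to_nb_index(u16 hw_node_id)
{
	if (hw_node_id >= gpu_node_start)
		return cpu_node_count + (hw_node_id - gpu_node_start);

	return hw_node_id;
}

/* Compact nb array index -> hardware node ID, e.g. for address translation. */
static u16 amd_nb_index_to_node(u16 nb_index)
{
	if (nb_index >= cpu_node_count)
		return gpu_node_start + (nb_index - cpu_node_count);

	return nb_index;
}

With the example from earlier (1 CPU node, GPU nodes starting at hardware ID
8), GPU Node 9 maps to nb index 2 and back, and no NULL padding is needed in
the array.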

I think we still need some piece of info to indicate that a device is a GPU
based on its node_id. The AMD NB code doesn't need to know, but the address
translation code does. The AMD NB enumeration can stay mostly generic. I think
it may be enough to save an "id offset" value and also a "first special index"
value. Then we can go back and forth between the appropriate values without
having to allocate a bunch of unused memory or hard-code certain values.
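As a rough example (again with made-up names), enumeration could stash those
two values somewhere and the rest of the code could use them to tell whether
an nb entry is a GPU node:

#include <linux/types.h>

/* Sketch only; the struct and field names are illustrative. */
struct amd_node_map {
	u16 gpu_node_start;	/* the "id offset": first hardware GPU node ID */
	u16 gpu_nb_start;	/* the "first special index" into the nb array */
};

static struct amd_node_map nodemap;

/* Address translation can check whether an nb entry refers to a GPU node. */
static bool amd_nb_index_is_gpu(u16 nb_index)
{
	return nb_index >= nodemap.gpu_nb_start;
}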

Thanks for the idea!

-Yazen