Re: [PATCH v3 1/3] x86/amd_nb: Add support for northbridges on Aldebaran

From: Yazen Ghannam
Date: Wed Sep 01 2021 - 14:17:34 EST


On Wed, Aug 25, 2021 at 12:42:43PM +0200, Borislav Petkov wrote:
> On Tue, Aug 24, 2021 at 12:24:35AM +0530, Naveen Krishna Chatradhi wrote:
...
> >
> > The GPU nodes are enumerated in sequential order based on the
> > PCI hierarchy, and the first GPU node is assumed to have an "AMD Node
> > ID" value of 8 (the second GPU node has 9, etc.).
>
> What does that mean? The GPU nodes are simply numerically after the CPU
> nodes or how am I to understand this nomenclature?
>

Yes, the GPU nodes will be numerically after the CPU nodes. However, there
will be a gap in the "Node ID" values. For example, if there is one CPU node
and two GPU nodes, then the "Node ID" values will look like this:

CPU Node0 -> System Node ID 0
GPU Node0 -> System Node ID 8
GPU Node1 -> System Node ID 9
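
In other words, the Nth GPU node always gets Node ID 8 + N, regardless of
how many CPU nodes are actually present. Roughly (NONCPU_NODE_INDEX is from
the patch; the helper below is just illustration, not proposed code):

#define NONCPU_NODE_INDEX	8	/* first GPU Node ID */

static inline u16 gpu_node_id(u16 gpu_index)
{
	return NONCPU_NODE_INDEX + gpu_index;	/* GPU0 -> 8, GPU1 -> 9 */
}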

...
> > + * of 8 (the second GPU node has 9, etc.).
> > + */
> > +#define NONCPU_NODE_INDEX 8
>
> Why is this assumed? Can it instead be read from the hardware somewhere?
> Or there simply won't be more than 8 CPU nodes anyway? Not at least in
> the near future?
>

Yes, the intention is to leave a big enough gap for at least the foreseeable
future.

> I'd prefer stuff to be read out directly from the hardware so that when
> the hardware changes, the code just works instead of doing assumptions
> which get invalidated later.
>

So after going through the latest documentation and asking one of our
hardware folks, it looks like we have an option to read this value from one
of the Data Fabric registers. However, the Data Fabric registers are not
architectural, and registers and fields have changed between model groups,
so hopefully whatever solution we settle on will stick for a while.
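
For what it's worth, a sketch of that option could look like the below;
df_f1, DF_REG_NONCPU_NODE_BASE, and the bitfield are placeholders I made
up, not real names (FIELD_GET is from linux/bitfield.h):

	u32 val;
	u16 noncpu_node_start;

	/*
	 * Read the first non-CPU Node ID from a DF register instead of
	 * hardcoding 8. The real register and field layout will differ
	 * between model groups.
	 */
	if (pci_read_config_dword(df_f1, DF_REG_NONCPU_NODE_BASE, &val))
		return -ENODEV;

	noncpu_node_start = FIELD_GET(GENMASK(7, 0), val);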

...
> > +static const struct pci_device_id amd_noncpu_root_ids[] = {
>
> Why is that "noncpu" thing everywhere? Is this thing going to be
> anything else besides a GPU?
>
> If not, you can simply call it
>
> amd_gpu_root_ids
>
> to mean *exactly* what they are. PCI IDs on the GPU.
>

These devices aren't officially GPUs, since they don't have graphics/video
capabilities. Can we come up with a new term for this class of devices? Maybe
accelerators or something?

In any case, GPU is still used throughout documentation and code, so it's fair
to just stick with "gpu".

...
> >
> > - nb = kcalloc(misc_count, sizeof(struct amd_northbridge), GFP_KERNEL);
> > + if (misc_count_noncpu) {
> > + /*
> > + * The first non-CPU Node ID starts at 8 even if there are fewer
> > + * than 8 CPU nodes. To maintain the AMD Node ID to Linux amd_nb
> > + * indexing scheme, allocate the number of GPU nodes plus 8.
> > + * Some allocated amd_northbridge structures will go unused when
> > + * the number of CPU nodes is less than 8, but this tradeoff is to
> > + * keep things relatively simple.
>
> Why simple?
>
> What's wrong with having
>
> [node IDs][GPU node IDs]
>
> i.e., the usual nodes come first and the GPU ones after it.
>
> You enumerate everything properly here so you can control what goes
> where. Which means, you don't need this NONCPU_NODE_INDEX non-sense at
> all.
>
> Hmmm?
>

We use the Node ID to index into the amd_northbridges.nb array, e.g. in
node_to_amd_nb().
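
For reference, node_to_amd_nb() is a straight array index today (roughly,
from arch/x86/include/asm/amd_nb.h):

static inline struct amd_northbridge *node_to_amd_nb(int node)
{
	return (node < amd_northbridges.num) ? &amd_northbridges.nb[node] : NULL;
}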

We can get the Node ID of a GPU node when processing an MCA error as in Patch
2 of this set. The hardware is going to give us a value of 8 or more.

So, for example, if we pack the "nb" array like this for 1 CPU node and 2
GPU nodes (each entry shown as [Node ID: Type]):
[0: CPU], [8: GPU], [9: GPU]

Then I think we'd need extra processing at runtime to map, for example, an
error from GPU Node ID 9 to nb array index 2.
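
A hypothetical translation for that packed layout could look like this
(none of these names are in the patch; cpu_node_count is assumed to hold
the number of CPU nodes found at init):

static u16 node_id_to_nb_index(u16 node_id)
{
	/* Hardware Node IDs 8 and up map to the slots right after the CPUs. */
	if (node_id >= NONCPU_NODE_INDEX)
		return cpu_node_count + (node_id - NONCPU_NODE_INDEX);

	return node_id;
}

Every user of node_to_amd_nb() would have to go through it.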

Or we can manage this at init time like this:
[0: CPU], [1: NULL], [2: NULL], [3: NULL], [4: NULL], [5: NULL], [6: NULL],
[7: NULL], [8: GPU], [9: GPU]

And at runtime, the code that maps a Node ID to an nb entry just works. This
applies to node_to_amd_nb(), places where we loop over amd_nb_num(), etc.
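
At init time, that would look roughly like this, following the quoted hunk
above (misc_count and misc_count_noncpu are the CPU and non-CPU device
counts from the patch):

	u16 total_count = misc_count;

	if (misc_count_noncpu)
		/* Reserve entries 0-7 for CPU nodes even if fewer exist. */
		total_count = NONCPU_NODE_INDEX + misc_count_noncpu;

	nb = kcalloc(total_count, sizeof(struct amd_northbridge), GFP_KERNEL);
	if (!nb)
		return -ENOMEM;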

What do you think?

Thanks,
Yazen