Re: [PATCH 1/7] node: Link memory nodes to their compute nodes

From: Keith Busch
Date: Mon Nov 19 2018 - 10:52:56 EST


On Mon, Nov 19, 2018 at 08:45:25AM +0530, Anshuman Khandual wrote:
> On 11/17/2018 12:02 AM, Keith Busch wrote:
> > On Thu, Nov 15, 2018 at 12:36:54PM -0800, Matthew Wilcox wrote:
> >> So ... let's imagine a hypothetical system (I've never seen one built like
> >> this, but it doesn't seem too implausible). Connect four CPU sockets in
> >> a square, each of which has some regular DIMMs attached to it. CPU A is
> >> 0 hops to Memory A, one hop to Memory B and Memory C, and two hops to
> >> Memory D (each CPU only has two "QPI" links). Then maybe there's some
> >> special memory extender device attached on the PCIe bus. Now there's
> >> Memory B1 and B2 that's attached to CPU B and it's local to CPU B, but
> >> not as local as Memory B is ... and we'd probably _prefer_ to allocate
> >> memory for CPU A from Memory B1 than from Memory D. But ... *mumble*,
> >> this seems hard.
> >
> > Indeed, that particular example is out of scope for this series. The
> > first objective is to help a process running on node B's CPUs allocate
> > memory in B1. Anything that crosses QPI is on its own.
>
> This is problematic. Any new kernel interface should also accommodate the
> B2-type memory from the above example, which sits on a PCIe bus, because
> eventually it will be represented as some sort of NUMA node and applications
> will then have to depend on this sysfs interface for their memory placement
> requirements. Unless this interface is thought through for B2-type memory,
> it might not be extensible in the future.

I'm not sure I understand the concern. The proposal allows linking B
to B2 memory.
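
To illustrate the first objective above, here is a rough userspace sketch:
a process running on node B's CPUs looks up the memory node linked to its
own node and binds an allocation there. The sysfs attribute name used below
("target_nodelist") is only a placeholder, not something this patch defines;
the libnuma calls (numa_node_of_cpu, numa_alloc_onnode) are existing API.

/*
 * Rough sketch only.  Build with: gcc -o local_alloc local_alloc.c -lnuma
 * The sysfs attribute name is a placeholder for whatever this series
 * ends up exporting.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>
#include <numa.h>

int main(void)
{
	char path[128], buf[64];
	int cpu_node, mem_node;
	FILE *f;
	void *p;

	if (numa_available() < 0)
		return 1;

	/* Which node do my CPUs belong to? (node B in the example above) */
	cpu_node = numa_node_of_cpu(sched_getcpu());

	/* Placeholder attribute: memory node(s) linked to this node. */
	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/target_nodelist", cpu_node);
	f = fopen(path, "r");
	if (f && fgets(buf, sizeof(buf), f))
		mem_node = atoi(buf);	/* take the first listed node, e.g. B1 */
	else
		mem_node = cpu_node;	/* fall back to plain local allocation */
	if (f)
		fclose(f);

	/* Bind this allocation to the chosen memory node. */
	p = numa_alloc_onnode(1 << 20, mem_node);
	if (!p)
		return 1;
	numa_free(p, 1 << 20);
	return 0;
}

Applications that call mbind() or set_mempolicy() directly would do the same
lookup and pass the resulting node in the nodemask instead.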