Re: RFC: Memory Tiering Kernel Interfaces

From: Aneesh Kumar K.V
Date: Mon May 02 2022 - 02:26:34 EST


Wei Xu <weixugc@xxxxxxxxxx> writes:

....

>
> Tiering Hierarchy Initialization
> ================================
>
> By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
>
> A device driver can remove its memory nodes from the top tier, e.g.
> a dax driver can remove PMEM nodes from the top tier.

Should the tier in which to place the memory be an option that device
drivers like the dax driver can select? Or should the dax driver just
express the desire to mark a specific memory-only NUMA node as a
demotion target, without explicitly specifying the tier in which it
should be placed? I would prefer the latter, with the tier details
chosen based on the current memory tiers and the NUMA distance values
(and even HMAT at some point in the future). The challenge with NUMA
distance, though, is which distance value we pick. For example, in your
example 1:

node   0   1   2   3
   0  10  20  30  40
   1  20  10  40  30
   2  30  40  10  40
   3  40  30  40  10

When node 3 is registered, how do we decide whether to create a tier 2
or add it to tier 1? We could say that devices that wish to be placed
in the same tier must report the same distance as the existing tier
device, i.e., for the above case,

node_distance[2][2] == node_distance[2][3]

Can we expect the firmware to provide distance values like that?
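
Expressed as code, the rule I am describing would look roughly like the
sketch below. To be clear, memory_tiers[], nr_memory_tiers and
create_tier_below() are made-up names for illustration, not existing
kernel interfaces:

/*
 * Hypothetical sketch: node @nid joins an existing tier if the
 * firmware reports the same distance from a tier member to @nid as
 * from that member to itself; otherwise a new tier is created.
 */
static int assign_node_to_tier(int nid)
{
	int tier, member;

	for (tier = 0; tier < nr_memory_tiers; tier++) {
		for_each_node_mask(member, memory_tiers[tier]) {
			if (node_distance(member, nid) ==
			    node_distance(member, member)) {
				node_set(nid, memory_tiers[tier]);
				return tier;
			}
		}
	}
	/* No distance match with any existing tier. */
	return create_tier_below(nr_memory_tiers - 1, nid);
}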

>
> The kernel builds the memory tiering hierarchy and per-node demotion
> order tier-by-tier starting from N_TOPTIER_MEMORY. For a node N, the
> best distance nodes in the next lower tier are assigned to
> node_demotion[N].preferred and all the nodes in the next lower tier
> are assigned to node_demotion[N].allowed.
>
> node_demotion[N].preferred can be empty if no preferred demotion node
> is available for node N.
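
For my own understanding, I read the above as describing a per-node
structure roughly like this (a sketch of the description, not the
actual patch):

struct demotion_nodes {
	/* Best-distance nodes in the next lower tier; may be empty. */
	nodemask_t preferred;
	/* All nodes in the next lower tier. */
	nodemask_t allowed;
};

static struct demotion_nodes node_demotion[MAX_NUMNODES];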
>
> If the userspace overrides the tiers via the memory_tiers sysfs
> interface, the kernel then only rebuilds the per-node demotion order
> accordingly.
>
> Memory tiering hierarchy is rebuilt upon hot-add or hot-remove of a
> memory node, but is NOT rebuilt upon hot-add or hot-remove of a CPU
> node.
>
>
> Memory Allocation for Demotion
> ==============================
>
> When allocating a new demotion target page, both a preferred node
> and the allowed nodemask are provided to the allocation function.
> The default kernel allocation fallback order is used to allocate the
> page from the specified node and nodemask.
>
> The mempolicy of cpuset, vma and owner task of the source page can
> be set to refine the demotion nodemask, e.g. to prevent demotion or
> select a particular allowed node as the demotion target.
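
So on the allocation side, I assume the demotion path boils down to
something like the sketch below. The function name and GFP flags are my
guesses; only the preferred-node-plus-allowed-nodemask calling
convention comes from the description above:

static struct page *alloc_demote_page(int preferred_nid,
				      nodemask_t *allowed)
{
	/* Opportunistic: don't enter direct reclaim for a demotion. */
	gfp_t gfp = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
		    __GFP_NOWARN | __GFP_NOMEMALLOC | GFP_NOWAIT;

	/*
	 * Default kernel fallback order, starting from the preferred
	 * demotion node and restricted to the allowed nodemask.
	 */
	return __alloc_pages(gfp, 0, preferred_nid, allowed);
}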
>
>
> Examples
> ========
>
> * Example 1:
> Node 0 & 1 are DRAM nodes, node 2 & 3 are PMEM nodes.
>
> Node 0 has node 2 as the preferred demotion target and can also
> fallback demotion to node 3.
>
> Node 1 has node 3 as the preferred demotion target and can also
> fallback demotion to node 2.
>
> Set mempolicy to prevent cross-socket demotion and memory access,
> e.g. cpuset.mems=0,2
>
> node distances:
> node   0   1   2   3
>    0  10  20  30  40
>    1  20  10  40  30
>    2  30  40  10  40
>    3  40  30  40  10
>
> /sys/devices/system/node/memory_tiers
> 0-1
> 2-3

How can I make node 3 the demotion target for node 2 in this case? Can
we have one file for each tier instead? I.e., we start with
/sys/devices/system/node/memory_tier0, and removing a node with memory
from such a file/list results in the creation of new tiers:

/sys/devices/system/node/memory_tier0
0-1
/sys/devices/system/node/memory_tier1
2-3

echo 2 > /sys/devices/system/node/memory_tier1
/sys/devices/system/node/memory_tier1
2
/sys/devices/system/node/memory_tier2
3

>
> N_TOPTIER_MEMORY: 0-1
>
> node_demotion[]:
> 0: [2], [2-3]
> 1: [3], [2-3]
> 2: [], []
> 3: [], []
>
> * Example 2:
> Node 0 & 1 are DRAM nodes.
> Node 2 is a PMEM node and closer to node 0.
>
> Node 0 has node 2 as the preferred and only demotion target.
>
> Node 1 has no preferred demotion target, but can still demote
> to node 2.
>
> Set mempolicy to prevent cross-socket demotion and memory access,
> e.g. cpuset.mems=0,2
>
> node distances:
> node   0   1   2
>    0  10  20  30
>    1  20  10  40
>    2  30  40  10
>
> /sys/devices/system/node/memory_tiers
> 0-1
> 2
>
> N_TOPTIER_MEMORY: 0-1
>
> node_demotion[]:
> 0: [2], [2]
> 1: [], [2]
> 2: [], []
>
>
> * Example 3:
> Node 0 & 1 are DRAM nodes.
> Node 2 is a PMEM node and has the same distance to node 0 & 1.
>
> Node 0 has node 2 as the preferred and only demotion target.
>
> Node 1 has node 2 as the preferred and only demotion target.
>
> node distances:
> node   0   1   2
>    0  10  20  30
>    1  20  10  30
>    2  30  30  10
>
> /sys/devices/system/node/memory_tiers
> 0-1
> 2
>
> N_TOPTIER_MEMORY: 0-1
>
> node_demotion[]:
> 0: [2], [2]
> 1: [2], [2]
> 2: [], []
>
>
> * Example 4:
> Node 0 & 1 are DRAM nodes, Node 2 is a memory-only DRAM node.
>
> All nodes are top-tier.
>
> node distances:
> node   0   1   2
>    0  10  20  30
>    1  20  10  30
>    2  30  30  10
>
> /sys/devices/system/node/memory_tiers
> 0-2
>
> N_TOPTIER_MEMORY: 0-2
>
> node_demotion[]:
> 0: [], []
> 1: [], []
> 2: [], []
>
>
> * Example 5:
> Node 0 is a DRAM node with CPU.
> Node 1 is a HBM node.
> Node 2 is a PMEM node.
>
> With userspace override, node 1 is the top tier and has node 0 as
> the preferred and only demotion target.
>
> Node 0 is in the second tier, tier 1, and has node 2 as the
> preferred and only demotion target.
>
> Node 2 is in the lowest tier, tier 2, and has no demotion targets.
>
> node distances:
> node   0   1   2
>    0  10  21  30
>    1  21  10  40
>    2  30  40  10
>
> /sys/devices/system/node/memory_tiers (userspace override)
> 1
> 0
> 2
>
> N_TOPTIER_MEMORY: 1
>
> node_demotion[]:
> 0: [2], [2]
> 1: [0], [0]
> 2: [], []
>
> -- Wei