Re: [PATCH] mm: page_alloc: don't allocate page from memoryless nodes

From: Qi Zheng
Date: Tue Feb 14 2023 - 06:39:07 EST




On 2023/2/14 19:29, David Hildenbrand wrote:
On 14.02.23 12:26, Qi Zheng wrote:


On 2023/2/14 19:22, David Hildenbrand wrote:
On 14.02.23 11:26, Qi Zheng wrote:


On 2023/2/14 17:43, Mike Rapoport wrote:
On Tue, Feb 14, 2023 at 10:17:03AM +0100, David Hildenbrand wrote:
On 14.02.23 09:42, Vlastimil Babka wrote:
On 2/13/23 12:00, Qi Zheng wrote:


On 2023/2/13 16:47, Vlastimil Babka wrote:
On 2/12/23 12:03, Qi Zheng wrote:
In x86, numa_register_memblks() is only interested in
nodes that have enough memory, so it skips over all
nodes with less than NODE_MIN_SIZE of memory (they are
treated as memoryless nodes). Later on, we still
initialize these memoryless nodes (allocate their pgdat
in free_area_init(), build their zonelists, etc.) and
online them in init_cpu_to_node() and init_gi_nodes().

After boot, these memoryless nodes are in the N_ONLINE
state but not in the N_MEMORY state. However, we can
still allocate pages from them.
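
For reference, the x86 check being referred to looks roughly like
this (a paraphrase of arch/x86/mm/numa.c, not a verbatim quote):

	for_each_node_mask(nid, node_possible_map) {
		u64 start = PFN_PHYS(max_pfn);
		u64 end = 0;

		/* ... [start, end) computed from numa_meminfo ... */

		/*
		 * Don't confuse VM with a node that doesn't have the
		 * minimum amount of memory:
		 */
		if (end && (end - start) < NODE_MIN_SIZE)
			continue;	/* node data not set up here */

		alloc_node_data(nid);
	}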

SLUB only sets up nodes that are in the N_MEMORY state,
e.g. it only allocates a struct kmem_cache_node for
those nodes. So if we hand SLUB a page allocated from
one of the memoryless nodes above, the struct
kmem_cache_node of the node that page belongs to is
NULL, which causes a panic.
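
A simplified sketch of the failing path (based on mm/slub.c, not
verbatim) shows why this ends up as a NULL pointer dereference in
_raw_spin_lock_irqsave():

	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
	unsigned long flags;

	/*
	 * kmem_cache_node structures are only allocated for nodes in
	 * N_MEMORY, so for the memoryless node above get_node() returns
	 * NULL. list_lock sits at the start of the structure, so taking
	 * it dereferences address 0, matching the oops below.
	 */
	spin_lock_irqsave(&n->list_lock, flags);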

For example, if we use QEMU to boot a kernel with two
NUMA nodes, where one node has 2M of memory (less than
NODE_MIN_SIZE) and the other has 2G, we hit the
following panic:

[    0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
[    0.150783] #PF: supervisor write access in kernel mode
[    0.151488] #PF: error_code(0x0002) - not-present page
<...>
[    0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
<...>
[    0.169781] Call Trace:
[    0.170159]  <TASK>
[    0.170448]  deactivate_slab+0x187/0x3c0
[    0.171031]  ? bootstrap+0x1b/0x10e
[    0.171559]  ? preempt_count_sub+0x9/0xa0
[    0.172145]  ? kmem_cache_alloc+0x12c/0x440
[    0.172735]  ? bootstrap+0x1b/0x10e
[    0.173236]  bootstrap+0x6b/0x10e
[    0.173720]  kmem_cache_init+0x10a/0x188
[    0.174240]  start_kernel+0x415/0x6ac
[    0.174738]  secondary_startup_64_no_verify+0xe0/0xeb
[    0.175417]  </TASK>
[    0.175713] Modules linked in:
[    0.176117] CR2: 0000000000000000

In addition, we have also hit this panic in a production
environment: we set up a 2c2g container with two NUMA
nodes and reserved 128M for kdump, and then hit the
above panic in the kdump kernel.

To fix it, we can filter out memoryless nodes when
allocating pages.
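
(The diff itself is not quoted in this reply. Purely as an
illustration of what "filter out memoryless nodes when allocating
pages" could mean, such a check might look something like the
following hypothetical sketch; it is not the actual patch:)

	/* hypothetical illustration only, not the patch under discussion */
	if (nid != NUMA_NO_NODE && !node_state(nid, N_MEMORY))
		nid = numa_mem_id();	/* fall back to the nearest node with memory */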

Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Reported-by: Teng Hu <huteng.ht@xxxxxxxxxxxxx>

Well, AFAIK the key mechanism for only allocating from "good" nodes
is the zonelist; we shouldn't need to start adding extra checks like
this. So it seems to me that the code building the zonelists should
take the NODE_MIN_SIZE constraint into account.

Indeed. How about the following patch:

+Cc also David, forgot earlier.

Looks good to me, at least.

@@ -6382,8 +6378,11 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
 	int min_val = INT_MAX;
 	int best_node = NUMA_NO_NODE;
 
-	/* Use the local node if we haven't already */
-	if (!node_isset(node, *used_node_mask)) {
+	/*
+	 * Use the local node if we haven't already. But for memoryless local
+	 * node, we should skip it and fallback to other nodes.
+	 */
+	if (!node_isset(node, *used_node_mask) && node_state(node, N_MEMORY)) {
 		node_set(node, *used_node_mask);
 		return node;
 	}

For a memoryless node, we skip it and fall back to other
nodes when building its zonelists.

Say we have node0 and node1, and node0 is memoryless, then:

[    0.102400] Fallback order for Node 0: 1
[    0.102931] Fallback order for Node 1: 1

In this way, we will not allocate pages from memoryless node0.
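
For context, those lines are printed from build_zonelists();
simplified (not verbatim) from mm/page_alloc.c, each node's fallback
order is built by repeatedly asking find_next_best_node() for the
next candidate, so a node that find_next_best_node() never returns
simply never appears in any fallback list:

	while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
		/* ... node_load penalty bookkeeping ... */
		node_order[nr_nodes++] = node;
	}

	build_zonelists_in_node_order(pgdat, node_order, nr_nodes);
	build_thisnode_zonelists(pgdat);
	pr_info("Fallback order for Node %d: ", local_node);
	for (i = 0; i < nr_nodes; i++)
		pr_cont("%d ", node_order[i]);
	pr_cont("\n");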


In offline_pages(), we first call build_all_zonelists() and only then
node_states_clear_node()->node_clear_state(node, N_MEMORY);

So at least on the offlining path, we wouldn't detect it properly yet,
I assume, and would build a zonelist that contains a now-memoryless
node?
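
The ordering in question, simplified (not verbatim) from
mm/memory_hotplug.c:offline_pages():

	if (!populated_zone(zone)) {
		zone_pcp_reset(zone);
		build_all_zonelists(NULL);	/* node still has N_MEMORY set here */
	}

	node_states_clear_node(node, &arg);	/* N_MEMORY may only be cleared now */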

Another question is what happens if new memory is plugged into a node
that had < NODE_MIN_SIZE of memory and after hotplug it stops being
"memoryless".

Onlining or offlining memory will re-call build_all_zonelists() to
re-establish the zonelists (the node's own zonelist and those of the
other nodes). So it can stop being "memoryless" automatically.

But in online_pages(), I did not see any check against NODE_MIN_SIZE.
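
Simplified (not verbatim) from mm/memory_hotplug.c:online_pages():
when pages are onlined into a previously empty zone the zonelists are
rebuilt and N_MEMORY is set for the node, and nothing on this path
compares the node's size against NODE_MIN_SIZE:

	if (!populated_zone(zone)) {
		need_zonelists_rebuild = 1;
		setup_zone_pageset(zone);
	}

	/* ... online the pages, adjust present page counts ... */

	node_states_set_node(nid, &arg);	/* sets N_MEMORY for the node */
	if (need_zonelists_rebuild)
		build_all_zonelists(NULL);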

TBH, this is the first time I've heard of NODE_MIN_SIZE, and it seems
to be a pretty x86-specific thing.

Are we sure we want to get NODE_MIN_SIZE involved?

Maybe add an arch_xxx() to handle it?
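
Just to make the idea concrete, one possible shape for such a hook
(arch_node_has_enough_memory() is an invented name here, not an
existing kernel interface) could be a weak default in generic code
that x86 overrides:

/* generic code, e.g. mm/page_alloc.c: no minimum size by default */
bool __weak arch_node_has_enough_memory(int nid, u64 size)
{
	return true;
}

/* arch/x86/mm/numa.c: keep the current NODE_MIN_SIZE policy */
bool arch_node_has_enough_memory(int nid, u64 size)
{
	return size >= NODE_MIN_SIZE;
}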

I still haven't figured out what we want to achieve with NODE_MIN_SIZE at all. It smells like an arch-specific hack looking at

"Don't confuse VM with a node that doesn't have the minimum amount of memory"

I'm also confused about this comment.

I found the patch that originally introduced this:

https://github.com/torvalds/linux/commit/9391a3f9c7f17bdd82adf9a98905450642cc8970

But the commit message didn't explain it very clearly. :(


Why shouldn't mm-core deal with that?

I'd appreciate an explanation of the bigger picture: what the issue is and what the approach to solving it is (including memory onlining/offlining).


--
Thanks,
Qi