[RFC PATCH] mm: hugetlb: remove __GFP_THISNODE flag when dissolving the old hugetlb

From: Baolin Wang
Date: Thu Feb 01 2024 - 08:31:39 EST


Since commit 369fa227c219 ("mm: make alloc_contig_range handle free
hugetlb pages"), alloc_contig_range() can handle free hugetlb pages by
allocating a fresh hugetlb page and replacing the old one in the free
hugepage pool.

However, our customers can still see alloc_contig_range() fail when it
encounters a free hugetlb page. The reason is that there is little free
memory on the old hugetlb page's node, so a fresh hugetlb page cannot be
allocated on that node in isolate_or_dissolve_huge_page(), which sets
the __GFP_THISNODE flag. This makes sense to some degree.

Later, commit ae37c7ff79f1 ("mm: make alloc_contig_range handle in-use
hugetlb pages") handled in-use hugetlb pages by isolating them and
migrating them in __alloc_contig_migrate_range(), but it allows falling
back to other NUMA nodes when allocating a new hugetlb page in
alloc_migration_target().

This introduces an inconsistency between the handling of free and
in-use hugetlb pages. Considering that CMA allocation and memory
hotplug, which rely on alloc_contig_range(), are important in some
scenarios, and to keep hugetlb handling consistent, remove the
__GFP_THISNODE flag in isolate_or_dissolve_huge_page() to allow falling
back to other NUMA nodes, which solves the alloc_contig_range() failure
in our case.

Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9d996fe4ecd9..9c832709728e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3029,7 +3029,7 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
struct folio *old_folio, struct list_head *list)
{
- gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+ gfp_t gfp_mask = htlb_alloc_mask(h);
int nid = folio_nid(old_folio);
struct folio *new_folio;
int ret = 0;
@@ -3088,7 +3088,7 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
* Ref count on new_folio is already zero as it was dropped
* earlier. It can be directly added to the pool free list.
*/
- __prep_account_new_huge_page(h, nid);
+ __prep_account_new_huge_page(h, folio_nid(new_folio));
enqueue_hugetlb_folio(h, new_folio);

/*
--
2.39.3