Re: [RFC PATCH] mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA

From: Yang Shi
Date: Mon Aug 30 2021 - 14:50:03 EST


Gentle ping...

Does this patch make sense? BTW, I have a couple of other
khugepaged-related patches in my queue and plan to send them together
with this patch. It would be great to hear some feedback before
resending this one.

Thanks,
Yang

On Tue, Aug 17, 2021 at 1:21 PM Yang Shi <shy828301@xxxxxxxxx> wrote:
>
> khugepaged has an optimization to reduce huge page allocation calls for
> !CONFIG_NUMA: an allocated huge page that failed to collapse is carried
> over to the next loop iteration. The CONFIG_NUMA build doesn't do so,
> since the next iteration may try to collapse a huge page on a different
> node, so carrying it makes little sense there.
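>
> For contrast, the CONFIG_NUMA variant of khugepaged_prealloc_page() never
> carries a page over; it just drops any leftover. Roughly the current code
> in mm/khugepaged.c (slightly trimmed, comments mine):
>
>     static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
>     {
>             if (IS_ERR(*hpage)) {
>                     /* Previous allocation failed: back off before retrying. */
>                     if (!*wait)
>                             return false;
>
>                     *wait = false;
>                     *hpage = NULL;
>                     khugepaged_alloc_sleep();
>             } else if (*hpage) {
>                     /* Leftover page from a failed collapse: free it, since
>                      * the next attempt may target a different node.
>                      */
>                     put_page(*hpage);
>                     *hpage = NULL;
>             }
>
>             return true;
>     }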
>
> But when NUMA=n, the huge page is allocated by khugepaged_prealloc_page()
> before the address space is scanned, which means a huge page may be
> allocated even though there is no suitable range to collapse. The page is
> then simply freed if khugepaged has already made enough progress. This can
> leave a NUMA=n run with 5 times as many thp_collapse_alloc events as a
> NUMA=y run. So the "optimization" actually makes things worse: it generates
> far more pointless THP allocations and thereby defeats its own purpose.
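>
> To make the timing concrete, here is a simplified sketch of
> khugepaged_do_scan() as it stands (NUMA=n; locking and error handling
> elided, comments mine):
>
>     static void khugepaged_do_scan(void)
>     {
>             struct page *hpage = NULL;
>             unsigned int progress = 0;
>             unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);
>             bool wait = true;
>
>             while (progress < pages) {
>                     /* NUMA=n: allocate the huge page up front, before
>                      * the scan has found anything to collapse.
>                      */
>                     if (!khugepaged_prealloc_page(&hpage, &wait))
>                             break;
>
>                     cond_resched();
>                     /* ... scan mm slots, collapsing into hpage on success ... */
>                     progress += khugepaged_scan_mm_slot(pages - progress,
>                                                         &hpage);
>             }
>
>             /* Enough progress made without a collapse: the preallocated
>              * page is freed here, wasted.
>              */
>             if (!IS_ERR_OR_NULL(hpage))
>                     put_page(hpage);
>     }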
>
> This could be fixed by carrying the huge page across scans, but that would
> complicate the code further, and the huge page might end up being carried
> indefinitely. Taking a step back, though, the optimization itself hardly
> seems worth keeping anymore, since:
> * Few users build NUMA=n kernels nowadays, even when the kernel actually
> runs on a non-NUMA machine. Some small devices may run NUMA=n kernels, but
> they are unlikely to use THP.
> * Since commit 44042b449872 ("mm/page_alloc: allow high-order pages to be
> stored on the per-cpu lists"), THP can be cached on the per-cpu (pcp)
> lists, which largely does the job this optimization was meant to do.
>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
> Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
> ---
> mm/khugepaged.c | 74 ++++---------------------------------------------
> 1 file changed, 6 insertions(+), 68 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 6b9c98ddcd09..d6beb10e29e2 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -855,6 +855,12 @@ static int khugepaged_find_target_node(void)
>  	last_khugepaged_target_node = target_node;
>  	return target_node;
>  }
> +#else
> +static inline int khugepaged_find_target_node(void)
> +{
> +	return 0;
> +}
> +#endif
>
>  static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
>  {
> @@ -889,74 +895,6 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
>  	count_vm_event(THP_COLLAPSE_ALLOC);
>  	return *hpage;
>  }
> -#else
> -static int khugepaged_find_target_node(void)
> -{
> -	return 0;
> -}
> -
> -static inline struct page *alloc_khugepaged_hugepage(void)
> -{
> -	struct page *page;
> -
> -	page = alloc_pages(alloc_hugepage_khugepaged_gfpmask(),
> -			   HPAGE_PMD_ORDER);
> -	if (page)
> -		prep_transhuge_page(page);
> -	return page;
> -}
> -
> -static struct page *khugepaged_alloc_hugepage(bool *wait)
> -{
> -	struct page *hpage;
> -
> -	do {
> -		hpage = alloc_khugepaged_hugepage();
> -		if (!hpage) {
> -			count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
> -			if (!*wait)
> -				return NULL;
> -
> -			*wait = false;
> -			khugepaged_alloc_sleep();
> -		} else
> -			count_vm_event(THP_COLLAPSE_ALLOC);
> -	} while (unlikely(!hpage) && likely(khugepaged_enabled()));
> -
> -	return hpage;
> -}
> -
> -static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
> -{
> -	/*
> -	 * If the hpage allocated earlier was briefly exposed in page cache
> -	 * before collapse_file() failed, it is possible that racing lookups
> -	 * have not yet completed, and would then be unpleasantly surprised by
> -	 * finding the hpage reused for the same mapping at a different offset.
> -	 * Just release the previous allocation if there is any danger of that.
> -	 */
> -	if (*hpage && page_count(*hpage) > 1) {
> -		put_page(*hpage);
> -		*hpage = NULL;
> -	}
> -
> -	if (!*hpage)
> -		*hpage = khugepaged_alloc_hugepage(wait);
> -
> -	if (unlikely(!*hpage))
> -		return false;
> -
> -	return true;
> -}
> -
> -static struct page *
> -khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
> -{
> -	VM_BUG_ON(!*hpage);
> -
> -	return *hpage;
> -}
> -#endif
>
>  /*
>   * If mmap_lock temporarily dropped, revalidate vma
> --
> 2.26.2
>