Re: [PATCH v7 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support

From: Nico Pache
Date: Sun Jun 29 2025 - 02:52:55 EST


On Fri, May 16, 2025 at 11:15 AM Liam R. Howlett
<Liam.Howlett@xxxxxxxxxx> wrote:
>
> * Nico Pache <npache@xxxxxxxxxx> [250514 23:23]:
> > For khugepaged to support different mTHP orders, we must generalize this
> > to check if the PMD is not shared by another VMA and the order is
> > enabled.
> >
> > No functional change in this patch.
>
> This patch needs to be with the functional change for git blame and
> reviewing the changes.

I don't think that is the case. I've seen many series that split
their changes piecemeal, including separating non-functional changes
out ahead of the actual functional change. A lot of small changes
were required to generalize this for mTHP collapse, and doing it all
in one patch would have made the mTHP support patch huge and noisy. I
tried to keep that patch cleaner (for review purposes) by separating
out some of the noise.
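
For context, the point of threading the order through is that the
mTHP collapse path later in the series can revalidate against
whatever order it is attempting rather than only PMD_ORDER. A rough
sketch of that kind of caller (hypothetical, not the exact code from
the later patch):

	/* hypothetical: attempt collapse at a smaller mTHP order */
	int order = 4;

	mmap_read_lock(mm);
	result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
	if (result != SCAN_SUCCEED) {
		mmap_read_unlock(mm);
		goto out_nolock;
	}

This patch only adds the parameter; all existing callers still pass
HPAGE_PMD_ORDER, so behavior is unchanged.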


-- Nico
>
> >
> > Reviewed-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> > Co-developed-by: Dev Jain <dev.jain@xxxxxxx>
> > Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
> > Signed-off-by: Nico Pache <npache@xxxxxxxxxx>
> > ---
> > mm/khugepaged.c | 10 +++++-----
> > 1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 5457571d505a..0c4d6a02d59c 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -920,7 +920,7 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
> > static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > bool expect_anon,
> > struct vm_area_struct **vmap,
> > - struct collapse_control *cc)
> > + struct collapse_control *cc, int order)
> > {
> > struct vm_area_struct *vma;
> > unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> > @@ -934,7 +934,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> >
> > if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
> > return SCAN_ADDRESS_RANGE;
> > - if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
> > + if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, order))
> > return SCAN_VMA_CHECK;
> > /*
> > * Anon VMA expected, the address may be unmapped then
> > @@ -1130,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > goto out_nolock;
> >
> > mmap_read_lock(mm);
> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> > + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED) {
> > mmap_read_unlock(mm);
> > goto out_nolock;
> > @@ -1164,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * mmap_lock.
> > */
> > mmap_write_lock(mm);
> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> > + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED)
> > goto out_up_write;
> > /* check if the pmd is still valid */
> > @@ -2782,7 +2782,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
> > mmap_read_lock(mm);
> > mmap_locked = true;
> > result = hugepage_vma_revalidate(mm, addr, false, &vma,
> > - cc);
> > + cc, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED) {
> > last_fail = result;
> > goto out_nolock;
> > --
> > 2.49.0
> >
>