Re: [PATCH] mm: limit THP alignment – performance gain observed in AI inference workloads

From: Dev Jain
Date: Tue Jul 01 2025 - 01:46:23 EST



On 01/07/25 10:58 am, Lorenzo Stoakes wrote:
> On Tue, Jul 01, 2025 at 10:53:09AM +0530, Dev Jain wrote:
>> On 30/06/25 4:24 pm, Lorenzo Stoakes wrote:
>>> +cc Vlastimil, please keep him cc'd on discussions here as the author of
>>> this fix in the conversation.

>>> On Mon, Jun 30, 2025 at 10:55:52AM +0530, Dev Jain wrote:
>>>> For this workload, do you enable mTHPs on your system? My plan is to make
>>>> a similar patch for the mTHP case and I'd be grateful if you can get me
>>>> some results : )
>>> I'd urge caution here.
>>>
>>> The reason there was a big perf improvement is that, for certain workloads,
>>> the original patch by Rik caused issues with VMA fragmentation. So rather
>>> than getting adjacent VMAs that might later be khugepage'd, you'd get a
>>> bunch of VMAs that were auto-aligned and thus fragmented from one another.
>> How does getting two different adjacent VMAs allow them to be khugepage'd
>> if both are less than PMD size? khugepaged operates per VMA, I'm missing
>> something.
> (future) VMA merge

> Consider allocations that are >PMD but <2*PMD, for instance. Now you get
> fragmentation. For some workloads you would have previously eventually got
> PMD leaf mapping, PMD leaf mapping, PMD leaf mapping, etc. contiguously;
> with this arrangement you get PMD mapping, <bunch of PTE mappings>, PMD
> mapping, etc.

Sorry, I am not following; I don't know the VMA merge machinery in detail.
Are you saying that, after the patch, the VMAs will eventually get merged?
Is it possible in the kernel to get a merge in the "future"? As I understand
it, merging only happens at mmap() time.

Suppose that before the patch you have two consecutive VMAs, each between PMD
and 2*PMD in size. If they are able to get merged after the patch, why won't
they be merged before the patch, since the VMA characteristics are the same?