Re: [PATCH 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping

From: Baolin Wang
Date: Fri May 06 2022 - 21:32:21 EST

On 5/7/2022 2:55 AM, Mike Kravetz wrote:
> On 4/29/22 01:14, Baolin Wang wrote:
>> On some architectures (like ARM64), it can support CONT-PTE/PMD size
>> hugetlb, which means it can support not only PMD/PUD size hugetlb:
>> 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page
>> size is specified.
>>
>> When unmapping a hugetlb page, we will get the relevant page table
>> entry by huge_pte_offset() only once to nuke it. This is correct
>> for PMD or PUD size hugetlb, since they always contain only one
>> pmd entry or pud entry in the page table.
>>
>> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
>> since they can contain several contiguous pte or pmd entries with
>> the same page table attributes, so we will nuke only one pte or pmd
>> entry for this CONT-PTE/PMD size hugetlb page.
>>
>> And now we only use try_to_unmap() to unmap a poisoned hugetlb page,
>
> Since try_to_unmap can be called for non-hugetlb pages, perhaps the
> following is more accurate?
>
> 	try_to_unmap is only passed a hugetlb page in the case where the
> 	hugetlb page is poisoned.

Yes, will update in next version.
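
For reference, the shape of the problem in try_to_unmap_one() can be
sketched like below (an illustration only, not the actual patch hunk;
'pvmw' and 'address' are the names used by the existing rmap walk, and
the second snippet assumes a huge_ptep_clear_flush() variant that hands
back the original pte):

	/* Current code: only the single entry that huge_pte_offset()
	 * returned is nuked, even if it is one of several contiguous
	 * entries mapping the same hugetlb page.
	 */
	pteval = ptep_clear_flush(vma, address, pvmw.pte);

versus a hugetlb-aware teardown, where the architecture can clear and
flush the whole contiguous range backing the huge page (e.g. all 16
contiguous 4K ptes of a 64K CONT-PTE hugetlb page on arm64):

	pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);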

> It does concern me that this assumption is built into the code as
> pointed out in your discussion with Gerald. Should we perhaps add
> a VM_BUG_ON() to make sure the passed huge page is poisoned? This
> would be in the same 'if block' where we call
> adjust_range_if_pmd_sharing_possible.
Good point. Will do in next version. Thanks.
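
Something along these lines, in the hugetlb branch of try_to_unmap_one()
(just a sketch to confirm the idea; the exact page/folio naming depends
on the tree, and 'range' is the mmu_notifier_range already used there):

	if (PageHuge(page)) {
		/*
		 * try_to_unmap() is only passed a hugetlb page in the
		 * case where the hugetlb page is poisoned, so make that
		 * assumption explicit before adjusting the range for
		 * possible pmd sharing.
		 */
		VM_BUG_ON_PAGE(!PageHWPoison(page), page);

		adjust_range_if_pmd_sharing_possible(vma, &range.start,
						     &range.end);
	}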