Re: [PATCH 03/10] mm/hugetlb: Document huge_pte_offset usage

From: Mike Kravetz
Date: Tue Nov 29 2022 - 23:57:57 EST


On 11/29/22 14:35, Peter Xu wrote:
> huge_pte_offset() is potentially a pgtable walker, looking up the pte_t*
> for a hugetlb address.
>
> Normally it is safe to walk a generic pgtable as long as we hold the mmap
> lock for either read or write, because that guarantees the pgtable pages
> will stay valid throughout the walk.
>
> But that's not true for hugetlbfs, especially for shared mappings:
> hugetlbfs can have its pgtable freed by pmd unsharing, which means that
> even with the mmap lock held for the current mm, the PMD pgtable page
> can still go away from under us if pmd unsharing is possible during the
> walk.
>
> So we have two ways to make it safe even for a shared mapping:
>
> (1) If we hold the hugetlb vma lock for either read or write, we're
> okay because pmd unshare cannot happen at all.
>
> (2) If we hold the i_mmap_rwsem for either read or write, we're okay
> because even if pmd unshare can happen, the pgtable page cannot be
> freed from under us.
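
Just to make sure I'm reading this the same way: the race being described
is roughly the below, correct? (Hand-wavy sketch, and glossing over which
path actually performs the unshare.)

	/*
	 * task walking process A's mm        another task
	 * ---------------------------        ---------------------------
	 * mmap_read_lock(A->mm);
	 * ptep = huge_pte_offset(A->mm,
	 *			  addr, sz);
	 *                                     huge_pmd_unshare() clears
	 *                                       A's PUD entry and drops
	 *                                       A's ref on the PMD page
	 *                                     munmap() in the sharing
	 *                                       process drops the last
	 *                                       ref, PMD page is freed
	 * huge_ptep_get(ptep); <- use-after-free
	 * mmap_read_unlock(A->mm);
	 */
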
>
> Document it.
>
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> ---
> include/linux/hugetlb.h | 32 ++++++++++++++++++++++++++++++++
> 1 file changed, 32 insertions(+)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 551834cd5299..81efd9b9baa2 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -192,6 +192,38 @@ extern struct list_head huge_boot_pages;
>
> pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> unsigned long addr, unsigned long sz);
> +/*
> + * huge_pte_offset(): Walk the hugetlb pgtable until the last level PTE.
> + * Returns the pte_t* if found, or NULL if the address is not mapped.
> + *
> + * Since this function will walk all the pgtable pages (including not
> + * only the high-level pgtable pages, but also the PUD entries that can
> + * be unshared concurrently for VM_SHARED), the caller of this function
> + * is responsible for its thread safety. One can follow this rule:
> + *
> + * (1) For private mappings: pmd unsharing is not possible, so holding
> + *     the mmap lock for either read or write is always safe. This is
> + *     normally the case, IOW we don't need to do anything special.
> + *
> + * (2) For shared mappings: pmd unsharing is possible (so the PUD-ranged
> + *     pgtable page can go away from under us! This can be triggered by
> + *     a pmd unshare followed by a munmap() in the other process), so we
> + *     need either:
> + *
> + *     (2.1) the hugetlb vma lock held for read or write, to make sure
> + *           pmd unshare won't happen on the range (it also makes sure
> + *           the pte_t we read is the right and stable one), or,
> + *
> + *     (2.2) the hugetlb mapping's i_mmap_rwsem held for read or write,
> + *           to make sure that even if pmd unshare happens, the racing
> + *           unmap() will wait until i_mmap_rwsem is released.

Is that 100% correct? IIUC, the page tables will be released via the
call to tlb_finish_mmu(). In most cases, the tlb_finish_mmu() call is
performed while holding i_mmap_rwsem. However, in the final teardown of
a hugetlb vma via __unmap_hugepage_range_final(), the tlb_finish_mmu()
call is done outside the i_mmap_rwsem lock. In this case, I think we are
still safe because nobody else should be walking the page tables.
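
To illustrate, the ordering I am thinking of in the final teardown path
is roughly as follows (heavily simplified and from memory, not the
exact code):

	tlb_gather_mmu(&tlb, mm);
	unmap_vmas(&tlb, ...);
	  __unmap_hugepage_range_final(&tlb, vma, ...);
	    hugetlb_vma_lock_write(vma);
	    i_mmap_lock_write(vma->vm_file->f_mapping);
	    __unmap_hugepage_range(&tlb, vma, ...);
	    /* vma lock freed: vma no longer eligible for pmd sharing */
	    i_mmap_unlock_write(vma->vm_file->f_mapping);
	free_pgtables(&tlb, ...);
	tlb_finish_mmu(&tlb);  /* page tables actually freed here, with
				  i_mmap_rwsem already dropped */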

I really like the documentation. However, if i_mmap_rwsem is not 100%
safe, I would prefer not to document it here. I don't think anyone
relies on this, do they?
--
Mike Kravetz

> + *
> + * Option (2.1) is the safest: it guarantees pte stability from the pmd
> + * sharing point of view until the vma lock is released. Option (2.2)
> + * doesn't protect against a concurrent pmd unshare, but it makes sure
> + * the pgtable page remains safe to access.
> + */
> pte_t *huge_pte_offset(struct mm_struct *mm,
> unsigned long addr, unsigned long sz);
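
FWIW, with the comment as written I would expect callers to end up
looking something like the below (untested, hypothetical snippet only
meant to confirm the intended usage):

	pte_t *ptep;
	unsigned long sz = huge_page_size(hstate_vma(vma));

	/* Option (2.1): the vma lock blocks pmd unshare on the range */
	hugetlb_vma_lock_read(vma);
	ptep = huge_pte_offset(vma->vm_mm, addr, sz);
	if (ptep) {
		/* both ptep and the pte_t it points to are stable here */
	}
	hugetlb_vma_unlock_read(vma);

	/* Option (2.2): pmd unshare may still happen, but the pgtable
	 * page cannot be freed, so dereferencing ptep remains safe */
	i_mmap_lock_read(vma->vm_file->f_mapping);
	ptep = huge_pte_offset(vma->vm_mm, addr, sz);
	/* ... */
	i_mmap_unlock_read(vma->vm_file->f_mapping);
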
> unsigned long hugetlb_mask_last_page(struct hstate *h);
> --
> 2.37.3
>