Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before set_pte_at()

From: Muchun Song
Date: Tue Aug 16 2022 - 22:53:47 EST

> On Aug 16, 2022, at 21:05, Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>
> The memory barrier smp_wmb() is needed to make sure that preceding stores
> to the page contents become visible before the below set_pte_at() write.

I'm not sure you are right. I think providing this ordering is set_pte_at()'s
responsibility. Take arm64 as an example, since it has a relaxed memory
model: its set_pte() (the following code snippet) already provides a barrier
guarantee. So I am curious what issue you are actually seeing, and what the
basis for this change is.

static inline void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;

	/*
	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
	 * or update_mmu_cache() have the necessary barriers.
	 */
	if (pte_valid_not_user(pte)) {
		dsb(ishst);
		isb();
	}
}
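
For reference, the pattern the commit message describes is the usual
store-then-publish ordering, where the pte store is the "publish" that makes
the freshly initialized page reachable. Here is a minimal sketch of that
pattern (hypothetical names, not from the patch; assumes kernel context for
the barrier and access helpers):

#include <linux/compiler.h>	/* READ_ONCE()/WRITE_ONCE() */
#include <asm/barrier.h>	/* smp_wmb()/smp_rmb() */

struct foo {
	int data;
};

static struct foo *global_foo;

static void publish(struct foo *f)
{
	f->data = 42;			/* stores to the object contents */
	smp_wmb();			/* order the contents before the publish */
	WRITE_ONCE(global_foo, f);	/* readers can now find the object */
}

static int reader(void)
{
	struct foo *f = READ_ONCE(global_foo);

	if (f) {
		smp_rmb();		/* pairs with the smp_wmb() above */
		return f->data;		/* sees 42, never stale contents */
	}
	return 0;
}

The question here is only where that write-side barrier should live: in the
caller as an explicit smp_wmb(), as this patch does, or inside the
architecture's set_pte()/set_pte_at() implementation, as I read the arm64
code above.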

Thanks.

>
> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> ---
> mm/hugetlb_vmemmap.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 20f414c0379f..76b2d03a0d8d 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -287,6 +287,11 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
>  	copy_page(to, (void *)walk->reuse_addr);
>  	reset_struct_pages(to);
>  
> +	/*
> +	 * Makes sure that preceding stores to the page contents become visible
> +	 * before the set_pte_at() write.
> +	 */
> +	smp_wmb();
>  	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
>  }
>
> --
> 2.23.0
>
>