Re: [PATCH v5 2/4] riscv: Improve flush_tlb_range() for hugetlb pages

From: Samuel Holland
Date: Sat Oct 28 2023 - 14:53:15 EST


On 2023-10-19 9:01 AM, Alexandre Ghiti wrote:
> flush_tlb_range() uses a fixed stride of PAGE_SIZE, so in its current form,
> when a hugetlb mapping needs to be flushed, it ends up flushing the whole
> TLB. Instead, set the stride to the size of the hugetlb mapping so that only
> that mapping is flushed. However, if the hugepage is a NAPOT region, every
> PTE that constitutes the mapping must be invalidated, so the stride must
> actually be the size covered by a single PTE.
>
> Note that THPs are directly handled by flush_pmd_tlb_range().
>
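
For context: the stride is the step between address-based invalidations, so a
hugepage-sized stride means one sfence.vma per hugepage instead of one per
base page. Conceptually it behaves like the sketch below (flush_range_sketch()
is a made-up name, and this is only a sketch; the real __flush_tlb_range()
also handles ASIDs, remote harts, and a flush-all threshold):

/* Conceptual sketch only: one invalidation per stride step. */
static void flush_range_sketch(unsigned long start, unsigned long size,
                               unsigned long stride)
{
        unsigned long addr;

        for (addr = start; addr < start + size; addr += stride)
                asm volatile ("sfence.vma %0" : : "r" (addr) : "memory");
}
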
> Signed-off-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
> ---
> arch/riscv/mm/tlbflush.c | 31 ++++++++++++++++++++++++++++++-
> 1 file changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index fa03289853d8..5933744df91a 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -3,6 +3,7 @@
>  #include <linux/mm.h>
>  #include <linux/smp.h>
>  #include <linux/sched.h>
> +#include <linux/hugetlb.h>
>  #include <asm/sbi.h>
>  #include <asm/mmu_context.h>
>
> @@ -147,7 +148,35 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
>  void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
>                       unsigned long end)
>  {
> -        __flush_tlb_range(vma->vm_mm, start, end - start, PAGE_SIZE);
> +        unsigned long stride_size;
> +
> +        if (!is_vm_hugetlb_page(vma)) {
> +                stride_size = PAGE_SIZE;
> +        } else {
> +                stride_size = huge_page_size(hstate_vma(vma));
> +
> +#ifdef CONFIG_RISCV_ISA_SVNAPOT

There is a fallback implementation of has_svnapot(), so you do not need this
preprocessor check. With that removed:

Reviewed-by: Samuel Holland <samuel.holland@xxxxxxxxxx>
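
For reference, the fallback is a constant-false stub (roughly the shape
below; from memory, it lives in asm/pgtable.h), so with the config option
disabled the whole NAPOT branch is dead code that the compiler eliminates:

/* Rough shape of the two definitions, paraphrased from memory. */
#ifdef CONFIG_RISCV_ISA_SVNAPOT
static __always_inline bool has_svnapot(void)
{
        return riscv_has_extension_likely(RISCV_ISA_EXT_SVNAPOT);
}
#else
static __always_inline bool has_svnapot(void) { return false; }
#endif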

> +                /*
> +                 * As stated in the privileged specification, every PTE in a
> +                 * NAPOT region must be invalidated, so reset the stride in that
> +                 * case.
> +                 */
> +                if (has_svnapot()) {
> +                        if (stride_size >= PGDIR_SIZE)
> +                                stride_size = PGDIR_SIZE;
> +                        else if (stride_size >= P4D_SIZE)
> +                                stride_size = P4D_SIZE;

As a side note, and this is probably premature optimization... PGDIR_SIZE and
P4D_SIZE check pgtable_l{4,5}_enabled. That's not really necessary here, since
we are just trying to round down, and there won't be any higher-order hugepages
if those paging levels are disabled.
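
Purely as a hypothetical illustration (napot_stride() below is a made-up
helper, not an existing one): since each RV64 page-table level translates
9 bits, the round-down can be written without any runtime flags at all,
e.g. a 64KiB svnapot hugepage rounds down to PAGE_SIZE and a 2MiB hugepage
to PMD_SIZE:

/*
 * Hypothetical helper, not in the kernel: round a hugepage size down
 * to the nearest page-table level boundary using the fixed
 * 9-bits-per-level geometry of RV64. ilog2() is from <linux/log2.h>.
 */
static unsigned long napot_stride(unsigned long stride_size)
{
        unsigned long levels = (ilog2(stride_size) - PAGE_SHIFT) / 9;

        return 1UL << (PAGE_SHIFT + levels * 9);
}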

> +                        else if (stride_size >= PUD_SIZE)
> +                                stride_size = PUD_SIZE;
> +                        else if (stride_size >= PMD_SIZE)
> +                                stride_size = PMD_SIZE;
> +                        else
> +                                stride_size = PAGE_SIZE;
> +                }
> +#endif
> +        }
> +
> +        __flush_tlb_range(vma->vm_mm, start, end - start, stride_size);
>  }
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,