Re: [RFC 13/20] mm/tlb: introduce tlb_start_ptes() and tlb_end_ptes()
From: Peter Zijlstra
Date: Mon Feb 01 2021 - 08:20:54 EST
On Sat, Jan 30, 2021 at 04:11:25PM -0800, Nadav Amit wrote:
> +#define tlb_start_ptes(tlb)						\
> +	do {								\
> +		struct mmu_gather *_tlb = (tlb);			\
> +									\
> +		flush_tlb_batched_pending(_tlb->mm);			\
> +	} while (0)
> +
> +static inline void tlb_end_ptes(struct mmu_gather *tlb) { }
> 	tlb_change_page_size(tlb, PAGE_SIZE);
> 	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> -	flush_tlb_batched_pending(mm);
> +	tlb_start_ptes(tlb);
> 	arch_enter_lazy_mmu_mode();
> 	for (; addr < end; pte++, addr += PAGE_SIZE) {
> 		ptent = *pte;
> @@ -468,6 +468,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> 	}
>
> 	arch_leave_lazy_mmu_mode();
> +	tlb_end_ptes(tlb);
> 	pte_unmap_unlock(orig_pte, ptl);
> 	if (pageout)
> 		reclaim_pages(&page_list);
I don't like how you're doubling up on arch_*_lazy_mmu_mode(). It seems
to me those should be folded into tlb_{start,end}_ptes().
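
Something like this untested sketch, which only moves the lazy-mmu calls
already present in the quoted hunks into the macros:

#define tlb_start_ptes(tlb)						\
	do {								\
		struct mmu_gather *_tlb = (tlb);			\
									\
		flush_tlb_batched_pending(_tlb->mm);			\
		/* bracket the PTE batch instead of the caller */	\
		arch_enter_lazy_mmu_mode();				\
	} while (0)

static inline void tlb_end_ptes(struct mmu_gather *tlb)
{
	arch_leave_lazy_mmu_mode();
}

Then madvise_cold_or_pageout_pte_range() loses the explicit
arch_{enter,leave}_lazy_mmu_mode() calls entirely.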
Alternatively, the even-more-work approach would be to add an optional
@tlb argument to pte_offset_map_lock()/pte_unmap_unlock() and friends.
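
For illustration only, that could look something like the wrappers below;
the *_tlb names are made up here (today's helpers take no @tlb), and a
NULL @tlb keeps the current behaviour:

#define pte_offset_map_lock_tlb(mm, pmd, addr, ptlp, tlb)		\
({									\
	struct mmu_gather *_tlb = (tlb);				\
	pte_t *__pte = pte_offset_map_lock(mm, pmd, addr, ptlp);	\
									\
	if (_tlb)	/* NULL @tlb means "no gather", as before */	\
		tlb_start_ptes(_tlb);					\
	__pte;								\
})

#define pte_unmap_unlock_tlb(pte, ptl, tlb)				\
	do {								\
		struct mmu_gather *_tlb = (tlb);			\
									\
		if (_tlb)						\
			tlb_end_ptes(_tlb);				\
		pte_unmap_unlock(pte, ptl);				\
	} while (0)

which would shrink the madvise loop above to:

	orig_pte = pte = pte_offset_map_lock_tlb(vma->vm_mm, pmd, addr,
						 &ptl, tlb);
	...
	pte_unmap_unlock_tlb(orig_pte, ptl, tlb);

with the tlb_start_ptes()/tlb_end_ptes() pair (and, per the above, the
lazy-mmu bracketing) hidden inside the lock/unlock helpers.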