Re: [PATCH 01/20] mm: mmu_gather rework
From: Andrew Morton
Date: Tue Apr 19 2011 - 16:08:23 EST
On Fri, 01 Apr 2011 14:12:59 +0200
Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> Remove the first obstacle towards a fully preemptible mmu_gather.
>
> The current scheme assumes mmu_gather is always done with preemption
> disabled and uses per-cpu storage for the page batches. Change this to
> try to allocate a page for batching and, in case of failure, use a
> small on-stack array to make some progress.
>
> Preemptible mmu_gather is desired in general and usable once
> i_mmap_lock becomes a mutex. Doing it before the mutex conversion
> saves us from having to rework the code by moving the mmu_gather
> bits inside the pte_lock.
>
> Also avoid flushing the TLB batches from under the pte lock;
> this is useful even without the i_mmap_lock conversion, as it
> significantly reduces pte lock hold times.
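The allocate-or-fall-back batching described above can be sketched as a small userspace analogue (this is illustrative only, not the kernel's actual mmu_gather code; the names, sizes, and struct layout below are assumptions made up for the example):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Userspace sketch of the scheme the patch describes: try to allocate
 * a full page's worth of batch storage; if the allocation fails, fall
 * back to a small array embedded in the (stack-allocated) gather
 * structure, so teardown can still make slower progress.
 */
#define ON_STACK_BATCH   8    /* tiny fallback batch (illustrative size) */
#define PAGE_BATCH     512    /* entries in an allocated batch */

struct gather {
	void **pages;                  /* current batch storage */
	unsigned int nr;               /* entries used */
	unsigned int max;              /* batch capacity */
	void *local[ON_STACK_BATCH];   /* embedded fallback array */
	void *alloc;                   /* allocated batch, if any */
};

static void gather_init(struct gather *g)
{
	g->alloc = malloc(PAGE_BATCH * sizeof(void *));
	if (g->alloc) {                /* got a full-size batch */
		g->pages = g->alloc;
		g->max = PAGE_BATCH;
	} else {                       /* fall back to the embedded array */
		g->pages = g->local;
		g->max = ON_STACK_BATCH;
	}
	g->nr = 0;
}

/* Returns 1 while there is still room, 0 when the batch must be flushed. */
static int gather_add(struct gather *g, void *page)
{
	g->pages[g->nr++] = page;
	return g->nr < g->max;
}

static void gather_flush(struct gather *g)
{
	/* here the kernel would flush the TLB and free the batched pages */
	g->nr = 0;
}
```

The point of the fallback is that teardown never depends on a successful allocation: the embedded array just forces more frequent flushes.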
There doesn't seem to be much point in reviewing this closely, as a lot
of it gets tossed away later in the series...
> free_pages_and_swap_cache(tlb->pages, tlb->nr);
It seems inappropriate that this code uses
free_page[s]_and_swap_cache(). Shouldn't it go directly to put_page()
and release_pages()? Please also review this code's implicit decision
to pass "cold==0" into release_pages().
> -static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
> +static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
I wonder whether all the inlining that remains in this code is needed
and desirable.