Re: [PATCH v3 4/5] mm: mlock: update the interface to use folios

From: Lorenzo Stoakes
Date: Thu Jan 12 2023 - 07:07:49 EST


On Thu, Jan 12, 2023 at 11:55:13AM +0100, Vlastimil Babka wrote:
> On 12/26/22 09:44, Lorenzo Stoakes wrote:
> > This patch updates the mlock interface to accept folios rather than pages,
> > bringing the interface in line with the internal implementation.
> >
> > munlock_vma_page() still requires a page_folio() conversion, however this
> > is consistent with the existing mlock_vma_page() implementation and a
> > product of rmap still dealing in pages rather than folios.
> >
> > Signed-off-by: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
>
> Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
>
> With a suggestion:
>
> > ---
> > mm/internal.h | 26 ++++++++++++++++----------
> > mm/mlock.c | 32 +++++++++++++++-----------------
> > mm/swap.c | 2 +-
> > 3 files changed, 32 insertions(+), 28 deletions(-)
> >
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 1d6f4e168510..8a6e83315369 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -515,10 +515,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> > * should be called with vma's mmap_lock held for read or write,
> > * under page table lock for the pte/pmd being added or removed.
> > *
> > - * mlock is usually called at the end of page_add_*_rmap(),
> > - * munlock at the end of page_remove_rmap(); but new anon
> > - * pages are managed by lru_cache_add_inactive_or_unevictable()
> > - * calling mlock_new_page().
> > + * mlock is usually called at the end of page_add_*_rmap(), munlock at
> > + * the end of page_remove_rmap(); but new anon folios are managed by
> > + * folio_add_lru_vma() calling mlock_new_folio().
> > *
> > * @compound is used to include pmd mappings of THPs, but filter out
> > * pte mappings of THPs, which cannot be consistently counted: a pte
> > @@ -547,15 +546,22 @@ static inline void mlock_vma_page(struct page *page,
> > mlock_vma_folio(page_folio(page), vma, compound);
> > }
> >
> > -void munlock_page(struct page *page);
> > -static inline void munlock_vma_page(struct page *page,
> > +void munlock_folio(struct folio *folio);
> > +
> > +static inline void munlock_vma_folio(struct folio *folio,
> > struct vm_area_struct *vma, bool compound)
> > {
> > if (unlikely(vma->vm_flags & VM_LOCKED) &&
> > - (compound || !PageTransCompound(page)))
> > - munlock_page(page);
> > + (compound || !folio_test_large(folio)))
> > + munlock_folio(folio);
> > +}
> > +
> > +static inline void munlock_vma_page(struct page *page,
> > + struct vm_area_struct *vma, bool compound)
> > +{
> > + munlock_vma_folio(page_folio(page), vma, compound);
> > }
> > -void mlock_new_page(struct page *page);
> > +void mlock_new_folio(struct folio *folio);
> > bool need_mlock_page_drain(int cpu);
> > void mlock_page_drain_local(void);
> > void mlock_page_drain_remote(int cpu);
>
> I think these drain related functions could use a rename as well?
> Maybe replace "page" with "fbatch" or "folio_batch"? Even the old name isn't
> great, should have been "pagevec".

Agreed, though I feel it's more readable if we just drop the "page" part
altogether, which is also more consistent with the core batch drain functions
such as lru_add_drain().

In that case we'd end up with need_mlock_drain(), mlock_drain_local() and
mlock_drain_remote().

> But maybe it would fit patch 2/5 rather than 4/5 as it's logically internal
> even if in a .h file.
>
>

Even though it is an internal interface across the board, I feel that keeping
this separate makes the patch series a little easier to read, so I think it
makes sense to keep it here and preserve the separation between
internal-to-mlock changes and internal-to-mm ones :)