Re: [PATCH mm-unstable 5/5] mm/mempolicy: Convert migrate_page_add() to migrate_folio_add()

From: Vishal Moola
Date: Fri Jan 20 2023 - 14:42:09 EST


On Wed, Jan 18, 2023 at 5:24 PM Yin, Fengwei <fengwei.yin@xxxxxxxxx> wrote:
>
>
>
> On 1/19/2023 7:22 AM, Vishal Moola (Oracle) wrote:
> > Replace migrate_page_add() with migrate_folio_add().
> > migrate_folio_add() does the same as migrate_page_add() but takes in a
> > folio instead of a page. This removes a couple of calls to
> > compound_head().
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
> > ---
> > mm/mempolicy.c | 34 +++++++++++++++-------------------
> > 1 file changed, 15 insertions(+), 19 deletions(-)
> >
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 0a3690ecab7d..253ce368cf16 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -414,7 +414,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
> > },
> > };
> >
> > -static int migrate_page_add(struct page *page, struct list_head *pagelist,
> > +static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
> > unsigned long flags);
> >
> > struct queue_pages {
> > @@ -476,7 +476,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
> > /* go to folio migration */
> > if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
> > if (!vma_migratable(walk->vma) ||
> > - migrate_page_add(&folio->page, qp->pagelist, flags)) {
> > + migrate_folio_add(folio, qp->pagelist, flags)) {
> > ret = 1;
> > goto unlock;
> > }
> > @@ -544,7 +544,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
> > * temporary off LRU pages in the range. Still
> > * need migrate other LRU pages.
> > */
> > - if (migrate_page_add(&folio->page, qp->pagelist, flags))
> > + if (migrate_folio_add(folio, qp->pagelist, flags))
> > has_unmovable = true;
> > } else
> > break;
> > @@ -1022,27 +1022,23 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
> > }
> >
> > #ifdef CONFIG_MIGRATION
> > -/*
> > - * page migration, thp tail pages can be passed.
> > - */
> > -static int migrate_page_add(struct page *page, struct list_head *pagelist,
> > +static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
> > unsigned long flags)
> > {
> > - struct page *head = compound_head(page);
> > /*
> > - * Avoid migrating a page that is shared with others.
> > + * Avoid migrating a folio that is shared with others.
> > */
> > - if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
> > - if (!isolate_lru_page(head)) {
> > - list_add_tail(&head->lru, pagelist);
> > - mod_node_page_state(page_pgdat(head),
> > - NR_ISOLATED_ANON + page_is_file_lru(head),
> > - thp_nr_pages(head));
> > + if ((flags & MPOL_MF_MOVE_ALL) || folio_mapcount(folio) == 1) {
> One question about the page_mapcount -> folio_mapcount change here.
>
> For a large folio with an entire mapcount of 0, if the first sub-page
> is mapped once and any other sub-page is also mapped, then
> page_mapcount(head) == 1 is true while folio_mapcount(folio) == 1 is
> not.

Hmm, you're right. Using page_mapcount(&folio->page) would definitely
maintain the same behavior, but I'm not sure that's what we actually want.
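
To make the divergence concrete, here is a toy userspace model (not
kernel code; the toy_folio struct and helpers are made-up names, and
the mapcount arithmetic is deliberately simplified):

#include <stdio.h>

/* Toy model of a 4-page folio: one "entire" (PMD-style) mapcount
 * plus a per-sub-page (PTE) mapcount. */
struct toy_folio {
	int entire_mapcount;
	int subpage_mapcount[4];
};

/* Roughly page_mapcount(head): the head page's own PTE mappings
 * plus any entire mappings of the folio. */
static int toy_page_mapcount_head(const struct toy_folio *f)
{
	return f->subpage_mapcount[0] + f->entire_mapcount;
}

/* Roughly folio_mapcount(): entire mappings plus the PTE mappings
 * of every sub-page. */
static int toy_folio_mapcount(const struct toy_folio *f)
{
	int total = f->entire_mapcount;

	for (int i = 0; i < 4; i++)
		total += f->subpage_mapcount[i];
	return total;
}

int main(void)
{
	/* Fengwei's case: entire mapcount 0, the first sub-page and
	 * one other sub-page each mapped once. */
	struct toy_folio f = {
		.entire_mapcount = 0,
		.subpage_mapcount = { 1, 0, 1, 0 },
	};

	printf("page_mapcount(head) == %d\n", toy_page_mapcount_head(&f)); /* 1 */
	printf("folio_mapcount()    == %d\n", toy_folio_mapcount(&f));     /* 2 */
	return 0;
}

So the old page_mapcount(head) == 1 check passes while a
folio_mapcount(folio) == 1 check does not.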

My understanding of the purpose of this check is to avoid migrating
pages shared with other processes. Meaning that if a folio (or any
page within it) is mapped by a different process, we would want to
skip that folio.

Although, looking at it now, I don't think folio_mapcount()
accomplishes this either, in the case where multiple pages of a large
folio are mapped by the same process.
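
With the same toy model above, one process mapping two sub-pages is
indistinguishable from two processes mapping one sub-page each (a
hypothetical scenario, reusing the sketch):

	/* No sharing: a single process maps sub-pages 0 and 1, e.g.
	 * across two VMAs after an mremap() split. toy_folio_mapcount()
	 * still returns 2, so a folio_mapcount() == 1 check would skip
	 * this folio as well. */
	struct toy_folio g = {
		.entire_mapcount = 0,
		.subpage_mapcount = { 1, 1, 0, 0 },
	};

The mapcount alone can't tell the two cases apart.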

Does anyone have any better ideas for this?