Re: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()

From: Ira Weiny
Date: Thu Aug 08 2019 - 19:41:41 EST


On Thu, Aug 08, 2019 at 03:59:15PM -0700, John Hubbard wrote:
> On 8/8/19 12:20 PM, John Hubbard wrote:
> > On 8/8/19 4:09 AM, Vlastimil Babka wrote:
> >> On 8/8/19 8:21 AM, Michal Hocko wrote:
> >>> On Wed 07-08-19 16:32:08, John Hubbard wrote:
> >>>> On 8/7/19 4:01 AM, Michal Hocko wrote:
> >>>>> On Mon 05-08-19 15:20:17, john.hubbard@xxxxxxxxx wrote:
> >>>>>> From: John Hubbard <jhubbard@xxxxxxxxxx>
> >>>> Actually, I think follow_page_mask() gets all the pages, right? And the
> >>>> get_page() in __munlock_pagevec_fill() is there to allow a pagevec_release()
> >>>> later.
> >>>
> >>> Maybe I am misreading the code (looking at the Linus tree), but
> >>> munlock_vma_pages_range calls follow_page for the start address and
> >>> then, if not THP, tries to fill up the pagevec with a few more pages
> >>> (up to end), doing the shortcut via a manual pte walk as an
> >>> optimization, and using the generic get_page there.
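
Side note for anyone following along: the flow Michal describes looks
roughly like this in munlock_vma_pages_range() (paraphrasing the Linus
tree from memory, so details may be slightly off):

        page = follow_page(vma, start, FOLL_GET | FOLL_DUMP);
        if (page && !IS_ERR(page) && !PageTransCompound(page)) {
                /* Batch the non-THP page we just looked up... */
                pagevec_add(&pvec, page);
                /*
                 * ...then fill the rest of the pagevec via the manual
                 * pte walk, which takes plain get_page() references.
                 */
                start = __munlock_pagevec_fill(&pvec, vma, zoneid,
                                               start, end);
                __munlock_pagevec(&pvec, zone);
        }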
> >>
> >
> > Yes, I see it finally, thanks. :)
> >
> >> That's true. However, I'm not sure munlocking is where the
> >> put_user_page() machinery is intended to be used anyway? These are
> >> short-term pins for struct page manipulation, not e.g. dirtying of page
> >> contents. Reading commit fc1d8e7cca2d, I don't think this case falls
> >> within the reasoning there. Perhaps not all GUP users should be
> >> converted to the planned separate GUP tracking, and instead we should
> >> have a GUP/follow_page_mask() variant that keeps using get_page/put_page?
> >>
> >
> > Interesting. So far, the approach has been to get all the gup callers to
> > release via put_user_page(), but if we add in Jan's and Ira's vaddr_pin_pages()
> > wrapper, then maybe we could leave some sites unconverted.
> >
> > However, in order to do so, we would have to change things so that we have
> > one set of APIs (gup) that do *not* increment a pin count, and another set
> > (vaddr_pin_pages) that do.
> >
> > Is that where we want to go...?
> >
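
If it helps to compare, I think the call sites under that split would look
something like this (the vaddr_pin_pages() argument list is elided here,
since that wrapper is still under discussion):

        /* Short-term struct page manipulation: plain reference only. */
        if (get_user_pages_fast(start, 1, 0, &page) != 1)
                return -EFAULT;
        /* ... */
        put_page(page);

        /* Long-term pin (DMA, dirtying page contents): counted pin. */
        if (vaddr_pin_pages(/* ... */) != 1)
                return -EFAULT;
        /* ... */
        put_user_page(page);

i.e. plain gup would stay refcount-only and only the wrapper would bump
the pin count.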
>
> Oh, and meanwhile, I'm leaning toward a cheap fix: just use gup_fast() instead
> of get_page(), and also fix the releasing code. So this incremental patch, on
> top of the existing one, should do it:
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index b980e6270e8a..2ea272c6fee3 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -318,18 +318,14 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>  		/*
>  		 * We won't be munlocking this page in the next phase
>  		 * but we still need to release the follow_page_mask()
> -		 * pin. We cannot do it under lru_lock however. If it's
> -		 * the last pin, __page_cache_release() would deadlock.
> +		 * pin.
>  		 */
> -		pagevec_add(&pvec_putback, pvec->pages[i]);
> +		put_user_page(pvec->pages[i]);
>  		pvec->pages[i] = NULL;
>  	}
>  	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
>  	spin_unlock_irq(&zone->zone_pgdat->lru_lock);
>
> -	/* Now we can release pins of pages that we are not munlocking */
> -	pagevec_release(&pvec_putback);
> -

I'm not an expert, but this skips the lru_add_drain() call that
pagevec_release() used to do. Is that ok?
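
For reference, __pagevec_release() in mm/swap.c is roughly:

        void __pagevec_release(struct pagevec *pvec)
        {
                if (!pvec->percpu_pvec_drained) {
                        lru_add_drain();
                        pvec->percpu_pvec_drained = true;
                }
                release_pages(pvec->pages, pagevec_count(pvec));
                pagevec_reset(pvec);
        }

so dropping the pagevec_release() call also drops that drain.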

>  	/* Phase 2: page munlock */
>  	for (i = 0; i < nr; i++) {
>  		struct page *page = pvec->pages[i];
> @@ -394,6 +390,8 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>  	start += PAGE_SIZE;
>  	while (start < end) {
>  		struct page *page = NULL;
> +		int ret;
> +
>  		pte++;
>  		if (pte_present(*pte))
>  			page = vm_normal_page(vma, start, *pte);
> @@ -411,7 +409,13 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>  		if (PageTransCompound(page))
>  			break;
>
> -		get_page(page);
> +		/*
> +		 * Use get_user_pages_fast() instead of get_page(), so that
> +		 * the releasing code can unconditionally call put_user_page().
> +		 */
> +		ret = get_user_pages_fast(start, 1, 0, &page);
> +		if (ret != 1)
> +			break;

I like the idea of making this a matched get/put pair, but I'm uneasy about
how this is really supposed to work.

For sure, GUP/PUP was supposed to be kept separate from [get|put]_page.
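
That is, my understanding of the intended pairing is simply that a release
goes through the counterpart of whatever took the reference, along these
lines:

        get_page(page);                             /* plain refcount */
        put_page(page);                             /* pairs with get_page() */

        get_user_pages(start, 1, 0, &page, NULL);   /* gup reference */
        put_user_page(page);                        /* pairs with gup */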

Ira
>  		/*
>  		 * Increase the address that will be returned *before* the
>  		 * eventual break due to pvec becoming full by adding the page
>
>
> thanks,
> --
> John Hubbard
> NVIDIA