Re: [External] Re: [PATCH v13 04/12] mm: hugetlb: defer freeing of HugeTLB pages

From: Muchun Song
Date: Sun Jan 24 2021 - 23:02:16 EST


On Mon, Jan 25, 2021 at 7:55 AM David Rientjes <rientjes@xxxxxxxxxx> wrote:
>
>
> On Sun, 17 Jan 2021, Muchun Song wrote:
>
> > In the subsequent patch, we should allocate the vmemmap pages when
> > freeing HugeTLB pages. But update_and_free_page() is always called
> > with hugetlb_lock held, so we cannot use GFP_KERNEL to allocate the
> > vmemmap pages. However, we can defer the actual freeing to a kworker
> > to avoid using GFP_ATOMIC to allocate the vmemmap pages.
> >
> > The update_hpage_vmemmap_workfn() is where the call to allocate
> > vmemmap pages will be inserted.
> >
>
> I think it's reasonable to assume that userspace can release free hugetlb
> pages from the pool on oom conditions when reclaim has become too
> expensive. This approach now requires that we can allocate vmemmap pages
> in a potential oom condition as a prerequisite for freeing memory, which
> seems less than ideal.
>
> And, by doing this through a kworker, we can presumably get queued behind
> another work item that requires memory to make forward progress in this
> oom condition.
>
> Two thoughts:
>
> - We're going to be freeing the hugetlb page after we can allocate the
> vmemmap pages, so why do we need to allocate with GFP_KERNEL? Can't we
> simply dip into memory reserves using GFP_ATOMIC (and thus can be
> holding hugetlb_lock) because we know we'll be freeing more memory than
> we'll be allocating?

Right.

> I think requiring a GFP_KERNEL allocation to block
> to free memory for vmemmap when we'll be freeing memory ourselves is
> dubious. This simplifies all of this.

Thanks for your thoughts. My thinking was that GFP_KERNEL lets us
enter reclaim when the system is short of memory, but we cannot
block while holding hugetlb_lock, so we cannot use it there.
Actually, we have no good way to handle a failed allocation. In the
next patch, I sleep 100ms and then retry the allocation when it fails.

>
> - If the answer is that we actually have to use GFP_KERNEL for other
> reasons, what are your thoughts on pre-allocating the vmemmap as opposed
> to deferring to a kworker? In other words, preallocate the necessary
> memory with GFP_KERNEL and put it on a linked list in struct hstate
> before acquiring hugetlb_lock.

put_page() can be called from an atomic context, so we cannot sleep
in __free_huge_page(), which makes pre-allocating the vmemmap there
a little tricky. Right?

>
> > Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> > Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> > ---
> > mm/hugetlb.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> > mm/hugetlb_vmemmap.c | 12 ---------
> > mm/hugetlb_vmemmap.h | 17 ++++++++++++
> > 3 files changed, 89 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 140135fc8113..c165186ec2cf 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1292,15 +1292,85 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> > unsigned int order) { }
> > #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static void __free_hugepage(struct hstate *h, struct page *page);
> > +
> > +/*
> > + * Since update_and_free_page() is always called with hugetlb_lock held, we
> > + * cannot use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
> > + * actual freeing to a workqueue to avoid using GFP_ATOMIC to allocate the
> > + * vmemmap pages.
> > + *
> > + * The update_hpage_vmemmap_workfn() is where the call to allocate vmemmap
> > + * pages will be inserted.
> > + *
> > + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of pages
> > + * to be freed and frees them one-by-one. As the page->mapping pointer is going
> > + * to be cleared in update_hpage_vmemmap_workfn() anyway, it is reused as the
> > + * llist_node structure of a lockless linked list of huge pages to be freed.
> > + */
> > +static LLIST_HEAD(hpage_update_freelist);
> > +
> > +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> > {
> > - int i;
> > + struct llist_node *node;
> > +
> > + node = llist_del_all(&hpage_update_freelist);
> > +
> > + while (node) {
> > + struct page *page;
> > + struct hstate *h;
> > +
> > + page = container_of((struct address_space **)node,
> > + struct page, mapping);
> > + node = node->next;
> > + page->mapping = NULL;
> > + h = page_hstate(page);
> > +
> > + spin_lock(&hugetlb_lock);
> > + __free_hugepage(h, page);
> > + spin_unlock(&hugetlb_lock);
> >
> > + cond_resched();
>
> Wouldn't it be better to hold hugetlb_lock for the iteration rather than
> constantly dropping it and reacquiring it? Use
> cond_resched_lock(&hugetlb_lock) instead?

Great. We can use it. Thanks.