Re: [PATCH mm-unstable v2 8/8] mm/hugetlb: convert demote_free_huge_page to folios

From: Mike Kravetz
Date: Tue Jan 10 2023 - 19:01:38 EST


On 01/10/23 21:40, Matthew Wilcox wrote:
> On Tue, Jan 10, 2023 at 03:28:21PM -0600, Sidhartha Kumar wrote:
> > @@ -3505,6 +3505,7 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
> > {
> > int nr_nodes, node;
> > struct page *page;
> > + struct folio *folio;
> >
> > lockdep_assert_held(&hugetlb_lock);
> >
> > @@ -3518,8 +3519,8 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
> > list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
> > if (PageHWPoison(page))
> > continue;
> > -
> > - return demote_free_huge_page(h, page);
> > + folio = page_folio(page);
> > + return demote_free_hugetlb_folio(h, folio);
> > }
> > }
>
> Can't this be
> list_for_each_entry(folio, &h->hugepage_freelists[node], lru)
>
> which avoids the call to page_folio() here.
>
> I think the call to PageHWPoison is actually wrong here. That would
> only check the hwpoison bit on the first page, whereas we want to know
> about the hwpoison bit on any page (don't we?) So this should be
> folio_test_has_hwpoisoned()?
>
> Or is that a THP-thing that is different for hugetlb pages?

I believe it is different for hugetlb pages. See hugetlb_set_page_hwpoison(),
which sets PageHWPoison on the head page and allocates a raw_hwp_page to
track the actual page with poison. Note that we cannot directly flag
hugetlb 'subpages' because we may not have their struct pages due to vmemmap
optimization. Adding Naoya just to be sure.

Do agree that this could be list_for_each_entry(folio ...
--
Mike Kravetz