Re: [PATCH 2/8] migrate_pages: separate hugetlb folios migration

From: Huang, Ying
Date: Thu Jan 05 2023 - 02:32:20 EST



[snip]

>
>>>> @@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>> nr_retry_pages = 0;
>>>>
>>>> list_for_each_entry_safe(folio, folio2, from, lru) {
>>>> + if (folio_test_hugetlb(folio)) {
>>>
>>> How do we hit this case? Shouldn't migrate_hugetlbs() have already moved
>>> any hugetlb folios off the from list?
>>
>> Retried hugetlb folios will be kept on the from list.
>
> Couldn't migrate_hugetlbs() remove the failing retried pages from the
> list on the final pass? That seems cleaner to me.

To do that, we would need to go through the folio list again to remove
all hugetlb folios, which could be time-consuming in some cases. So I
think it's better to keep this check.
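
For illustration only, that extra walk would look something like the
sketch below (the helper name move_failed_hugetlbs() is hypothetical
and not part of the patch; it assumes the mm/migrate.c context):

/*
 * Hypothetical cleanup pass: after the final retry pass in
 * migrate_hugetlbs(), move any hugetlb folios still on the from list
 * to ret_folios, so that the non-hugetlb loop in migrate_pages()
 * would not need to test folio_test_hugetlb() on every entry.  This
 * is the extra walk over the whole list that I would like to avoid.
 */
static void move_failed_hugetlbs(struct list_head *from,
				 struct list_head *ret_folios)
{
	struct folio *folio, *folio2;

	list_for_each_entry_safe(folio, folio2, from, lru) {
		if (folio_test_hugetlb(folio))
			list_move_tail(&folio->lru, ret_folios);
	}
}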

Best Regards,
Huang, Ying

>>>> + list_move_tail(&folio->lru, &ret_folios);
>>>> + continue;
>>>> + }
>>>> +
>>>> /*
>>>> * Large folio statistics is based on the source large
>>>> * folio. Capture required information that might get
>>>> * lost during migration.
>>>> */
>>>> - is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>>>> + is_large = folio_test_large(folio);
>>>> is_thp = is_large && folio_test_pmd_mappable(folio);
>>>> nr_pages = folio_nr_pages(folio);
>>>> +
>>>> cond_resched();
>>>>
>>>> - if (folio_test_hugetlb(folio))
>>>> - rc = unmap_and_move_huge_page(get_new_page,
>>>> - put_new_page, private,
>>>> - &folio->page, pass > 2, mode,
>>>> - reason,
>>>> - &ret_folios);
>>>> - else
>>>> - rc = unmap_and_move(get_new_page, put_new_page,
>>>> - private, folio, pass > 2, mode,
>>>> - reason, &ret_folios);
>>>> + rc = unmap_and_move(get_new_page, put_new_page,
>>>> + private, folio, pass > 2, mode,
>>>> + reason, &ret_folios);
>>>> /*
>>>> * The rules are:
>>>> - * Success: non hugetlb folio will be freed, hugetlb
>>>> - * folio will be put back
>>>> + * Success: folio will be freed
>>>> * -EAGAIN: stay on the from list
>>>> * -ENOMEM: stay on the from list
>>>> * -ENOSYS: stay on the from list
>>>> @@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>> stats.nr_thp_split += is_thp;
>>>> break;
>>>> }
>>>> - /* Hugetlb migration is unsupported */
>>>> } else if (!no_split_folio_counting) {
>>>> nr_failed++;
>>>> }