Re: [RFC PATCH 2/3] mm, memory_hotplug: deobfuscate migration part of offlining

From: osalvador
Date: Tue Nov 20 2018 - 10:34:40 EST



> Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
[...]
> +	do {
> +		for (pfn = start_pfn; pfn;) {
> +			/* start memory hot removal */

Should we change that comment? I mean, this is not really the
hot-removal stage.

Maybe "start memory migration" suits better? Or "memory offlining"?

> +			ret = -EINTR;
> +			if (signal_pending(current)) {
> +				reason = "signal backoff";
> +				goto failed_removal_isolated;
> +			}
>
> - cond_resched();
> - lru_add_drain_all();
> - drain_all_pages(zone);
> + cond_resched();
> + lru_add_drain_all();
> + drain_all_pages(zone);
>
> -	pfn = scan_movable_pages(start_pfn, end_pfn);
> -	if (pfn) { /* We have movable pages */
> -		ret = do_migrate_range(pfn, end_pfn);
> -		goto repeat;
> -	}
> +			pfn = scan_movable_pages(pfn, end_pfn);
> +			if (pfn) {
> +				/* TODO fatal migration failures should bail out */
> +				do_migrate_range(pfn, end_pfn);
> +			}
> +		}
> +
> +		/*
> +		 * dissolve free hugepages in the memory block before doing offlining
> +		 * actually in order to make hugetlbfs's object counting consistent.
> +		 */
> +		ret = dissolve_free_huge_pages(start_pfn, end_pfn);
> +		if (ret) {
> +			reason = "failure to dissolve huge pages";
> +			goto failed_removal_isolated;
> +		}
> +		/* check again */
> +		offlined_pages = check_pages_isolated(start_pfn, end_pfn);
> +	} while (offlined_pages < 0);
>
> -	/*
> -	 * dissolve free hugepages in the memory block before doing offlining
> -	 * actually in order to make hugetlbfs's object counting consistent.
> -	 */
> -	ret = dissolve_free_huge_pages(start_pfn, end_pfn);
> -	if (ret) {
> -		reason = "failure to dissolve huge pages";
> -		goto failed_removal_isolated;
> -	}
> -	/* check again */
> -	offlined_pages = check_pages_isolated(start_pfn, end_pfn);
> -	if (offlined_pages < 0)
> -		goto repeat;
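
Just to make sure I am reading the end result right, the resulting flow
is roughly the below (a simplified sketch paraphrased from the hunks
above, error handling trimmed, not the exact code):

	do {
		for (pfn = start_pfn; pfn;) {
			if (signal_pending(current))
				goto failed_removal_isolated;

			cond_resched();
			lru_add_drain_all();
			drain_all_pages(zone);

			pfn = scan_movable_pages(pfn, end_pfn);
			if (pfn)
				do_migrate_range(pfn, end_pfn);
		}

		ret = dissolve_free_huge_pages(start_pfn, end_pfn);
		if (ret)
			goto failed_removal_isolated;

		offlined_pages = check_pages_isolated(start_pfn, end_pfn);
	} while (offlined_pages < 0);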

This indeed looks much nicer and is easier to follow.
With the changes David commented on:

Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>