[PATCH] compaction: skip buddy page during compaction

From: Minchan Kim
Date: Wed Aug 14 2013 - 11:37:21 EST


When isolate_migratepages_range meets a free page, it skips only that
single page instead of the whole free page block. That adds unnecessary
overhead to compaction, so we would like to use page_order to skip the
whole free block at once, but reading page_order is not safe without zone->lock.

On second thought, that is not always true: CMA and memory-hotplug have
already isolated the free pages in the range to MIGRATE_ISOLATE right before
starting migration, so in those contexts we can use page_order safely
even without holding zone->lock.
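
For reference, the ordering that argument relies on looks roughly like the
sketch below. It is simplified and not the actual mm/page_alloc.c code:
argument lists are approximate and migrate_range_sketch() is a hypothetical
stand-in for the real migrate scan. The point is that the range is already
MIGRATE_ISOLATE before the scan that reads page_order runs, so the buddy
allocator cannot split or merge free pages in the range underneath us.

/* Sketch only: simplified CMA flow, not the real alloc_contig_range(). */
static int cma_flow_sketch(unsigned long start_pfn, unsigned long end_pfn)
{
	int ret;

	/* 1. Pageblocks covering the range become MIGRATE_ISOLATE. */
	ret = start_isolate_page_range(start_pfn, end_pfn,
				       MIGRATE_CMA, false);
	if (ret)
		return ret;

	/*
	 * 2. Scan the range and migrate in-use pages. Free pages seen
	 *    here keep a stable page_order() because the buddy
	 *    allocator no longer hands out pages from this range.
	 */
	ret = migrate_range_sketch(start_pfn, end_pfn);	/* hypothetical */

	/* 3. Drop the isolation again when done. */
	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_CMA);
	return ret;
}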

In addition, CMA ranges are likely to contain many free pages because CMA
makes MIGRATE_CMA a fallback for MIGRATE_MOVABLE to minimize the number of
migrations. Even a CMA area that was full can hold many free pages again
once the driver that owns the area releases it.
So the bigger the CMA space is, the bigger the patch's benefit is.
It helps memory-hotplug, too.

The only problem is normal compaction. Even there the worst case is just
skipping pageblock_nr_pages, for instance 4M (of course, it depends on the
configuration), and we can make the race window very small by double-checking
PageBuddy. The race remains theoretically possible, but I think it is really,
really unlikely, and we reduce compaction's overhead without having to take
the lock. Even if the race happens, normal compaction's users, who want a
higher-order allocation, suffer nothing critical from the failure and
can fall back.
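
To make the 4M figure concrete, here is a tiny userspace sketch of the
worst-case arithmetic; pageblock_order = 10 and 4K pages are assumed example
values, and the real numbers depend on the kernel configuration:

#include <stdio.h>

int main(void)
{
	unsigned int page_shift = 12;		/* assumed 4K pages */
	unsigned int pageblock_order = 10;	/* assumed example config */
	unsigned long pageblock_nr_pages = 1UL << pageblock_order;

	/* Worst case: one stale page_order() read skips a whole pageblock. */
	printf("worst-case skip: %lu pages = %lu KB\n",
	       pageblock_nr_pages,
	       (pageblock_nr_pages << page_shift) >> 10);
	return 0;
}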

Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
mm/compaction.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 05ccb4c..2341d52 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -520,8 +520,18 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 			goto next_pageblock;
 
 		/* Skip if free */
-		if (PageBuddy(page))
+		if (PageBuddy(page)) {
+			/*
+			 * page_order is racy without zone->lock, but the
+			 * worst case of the race is just skipping
+			 * pageblock_nr_pages, and even that is really
+			 * unlikely thanks to the double check of PageBuddy.
+			 */
+			unsigned long order = page_order(page);
+			if (PageBuddy(page))
+				low_pfn += (1 << order) - 1;
 			continue;
+		}
 
 		/*
 		 * For async migration, also only scan in MOVABLE blocks. Async
--
1.8.3.2


> buddy allocator (per-cpu allocator would be ok except for refills). I expect
> it would not be a good tradeoff to acquire the lock just to use page_order.
>
> Nak.
>
> --
> Mel Gorman
> SUSE Labs

--
Kind regards,
Minchan Kim