[PATCH] mm: migrate.c: migrate PG_readahead flag

From: Yang Shi
Date: Thu Feb 13 2020 - 19:30:01 EST


Currently the migration code doesn't migrate the PG_readahead flag.
Theoretically this incurs a slight performance loss, since the
application might have to ramp its readahead back up again. Even if
such a problem occurs, it is likely masked by something else, since
migration is typically triggered by compaction or NUMA balancing,
either of which should be more noticeable.

Migrate the flag after end_page_writeback(), since that call may clear
the PG_reclaim flag, which is the same bit as PG_readahead, on the new
page.

Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
---
I didn't run into any real problem; this was found by visual inspection and was
discussed in this thread: https://lore.kernel.org/linux-mm/185ce762-f25d-a013-6daa-8c288f1ff791@xxxxxxxxxxxxxxxxx/T/#m1977ce1de513401b7d09d6fa14fcffe849580aae
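
For the record, a tiny userspace model of why the ordering matters (my own
illustration, not kernel code; the flag values and scenario below are made up):
since PG_readahead and PG_reclaim are the same bit, setting PG_readahead on the
new page before end_page_writeback() could be undone when that call clears
PG_reclaim, so the copy has to happen afterwards.

/* Toy model of the PG_readahead/PG_reclaim aliasing; the flag names mirror
 * the kernel's, but the values and the scenario are made up for illustration.
 */
#include <stdio.h>

enum toyflags {
	PG_reclaim   = 1u << 0,
	PG_readahead = PG_reclaim,	/* same bit, as in page-flags.h */
};

int main(void)
{
	unsigned int page    = PG_readahead;	/* old page: readahead mark set */
	unsigned int newpage = PG_reclaim;	/* new page: still marked PG_reclaim */

	/* What end_page_writeback() may effectively do to the new page: */
	newpage &= ~PG_reclaim;

	/* Copying PG_readahead only after that keeps the mark intact: */
	if (page & PG_readahead)
		newpage |= PG_readahead;

	printf("PG_readahead on new page: %s\n",
	       (newpage & PG_readahead) ? "set" : "lost");
	return 0;
}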

mm/migrate.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/mm/migrate.c b/mm/migrate.c
index edf42ed..f3c492d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	if (PageWriteback(newpage))
 		end_page_writeback(newpage);
 
+	/*
+	 * PG_readahead shares the same bit with PG_reclaim; the above
+	 * end_page_writeback() may clear PG_readahead mistakenly, so set
+	 * the bit after that.
+	 */
+	if (PageReadahead(page))
+		SetPageReadahead(newpage);
+
 	copy_page_owner(page, newpage);
 
 	mem_cgroup_migrate(page, newpage);
--
1.8.3.1