Re: [RFC 2/2] cma: support MIGRATE_DISCARD

From: Rik van Riel
Date: Wed Aug 15 2012 - 15:00:21 EST


On 08/14/2012 04:57 AM, Minchan Kim wrote:
> This patch introduces MIGRATE_DISCARD mode in migration.
> It drops clean cache pages instead of migrating them, so
> that migration latency can be reduced. Of course, it could
> evict code pages, but the latency of a big contiguous memory
> allocation is more important than slowing down some background
> application in a mobile embedded environment.

Would it be an idea to only drop clean UNMAPPED
page cache pages?

> Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>

> @@ -799,12 +802,39 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
> 		goto skip_unmap;
> 	}
>
> +	file = page_is_file_cache(page);
> +	ttu_flags = TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS;
> +
> +	if (!(mode & MIGRATE_DISCARD) || !file || PageDirty(page))
> +		ttu_flags |= TTU_MIGRATION;
> +	else
> +		discard_mode = true;
> +
> 	/* Establish migration ptes or remove ptes */
> -	try_to_unmap(page, TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
> +	try_to_unmap(page, ttu_flags);

This bit looks wrong: in discard mode you call try_to_unmap()
with TTU_IGNORE_MLOCK set but without TTU_MIGRATION, so an
mlocked page can get unmapped and then discarded, breaking the
mlock guarantee.

Only dropping clean page cache pages that are not
mapped would avoid that problem, without introducing
much complexity in the code.

That would turn the test above into:

if (!page_mapped(page))
	discard_mode = true;
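
Putting that together with the flag setup above, the whole test
could look something like this (just an untested sketch, reusing
the patch's discard_mode and ttu_flags variables):

	file = page_is_file_cache(page);
	ttu_flags = TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS;

	/*
	 * Only discard clean, unmapped file cache pages; anything
	 * mapped still gets migration ptes, so mlocked pages can
	 * never be unmapped and dropped behind an application's back.
	 */
	if ((mode & MIGRATE_DISCARD) && file &&
	    !PageDirty(page) && !page_mapped(page))
		discard_mode = true;
	else
		ttu_flags |= TTU_MIGRATION;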

> skip_unmap:
> -	if (!page_mapped(page))
> -		rc = move_to_new_page(newpage, page, remap_swapcache, mode);
> +	if (!page_mapped(page)) {
> +		if (!discard_mode)
> +			rc = move_to_new_page(newpage, page, remap_swapcache, mode);
> +		else {
> +			struct address_space *mapping;
> +			mapping = page_mapping(page);
> +
> +			if (page_has_private(page)) {
> +				if (!try_to_release_page(page, GFP_KERNEL)) {
> +					rc = -EAGAIN;
> +					goto uncharge;
> +				}
> +			}
> +
> +			if (remove_mapping(mapping, page))
> +				rc = 0;
> +			else
> +				rc = -EAGAIN;
> +			goto uncharge;
> +		}
> +	}

This big piece of code could probably be split out
into its own function.
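
For instance (a rough sketch only, the helper name is made up):

	/* Drop a clean page cache page instead of migrating it. */
	static int discard_clean_page(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		/* Strip buffer heads etc. first; give up if that fails. */
		if (page_has_private(page) &&
		    !try_to_release_page(page, GFP_KERNEL))
			return -EAGAIN;

		/* Detach the page from the page cache. */
		if (remove_mapping(mapping, page))
			return 0;

		return -EAGAIN;
	}

which would shrink the hunk above down to something like:

	skip_unmap:
		if (!page_mapped(page)) {
			if (discard_mode) {
				rc = discard_clean_page(page);
				goto uncharge;
			}
			rc = move_to_new_page(newpage, page, remap_swapcache, mode);
		}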


