Re: [PATCH 2/9] ksm: let shared pages be swappable

From: Andrea Arcangeli
Date: Fri Dec 04 2009 - 09:45:59 EST


On Fri, Dec 04, 2009 at 02:06:07PM +0900, KOSAKI Motohiro wrote:
> The Windows kernel has a zero-page thread that periodically clears the
> pages on the free list, because many Windows subsystems prefer
> zero-filled pages. So if we run a Windows guest, zero-filled pages will
> have a much larger mapcount than other typical shared pages, I guess.
>
> So, can we mark as unevictable to zero filled ksm page?

I don't like magic handling for the zero KSM page, or a magic mapcount
number above which we consider a page unevictable.

Just breaking the loop after 64 young bits are cleared and putting the
page back at the head of the active list is enough. Clearly this
requires a few more changes to fit into the current code, which uses
page_referenced to clear all young bits regardless of whether they were
set during the clearing loop.
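The capped walk described above can be sketched in a few lines. This is a userspace mock, not the actual mm/rmap.c code: the structure names and the KSM_REF_SCAN_LIMIT constant are hypothetical, chosen only to illustrate breaking out early and reporting the page as referenced so the caller can rotate it back to the head of the active list.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical cap: stop clearing young bits after this many hits. */
#define KSM_REF_SCAN_LIMIT 64

/* Mock stand-in for one pte mapping a heavily shared KSM page. */
struct mock_pte {
    bool young;
};

/*
 * Walk the (mock) reverse mappings of a shared page, clearing
 * accessed/young bits. Once KSM_REF_SCAN_LIMIT young bits have been
 * cleared, stop early: that is already enough evidence the page is in
 * use, and scanning thousands more mappings would be wasted work.
 *
 * Returns the number of young bits found, saturated at the cap; a
 * nonzero return tells the caller to re-add the page at the head of
 * the active list.
 */
static int page_referenced_capped(struct mock_pte *ptes, size_t nr_ptes)
{
    int referenced = 0;
    size_t i;

    for (i = 0; i < nr_ptes; i++) {
        if (ptes[i].young) {
            ptes[i].young = false;
            if (++referenced >= KSM_REF_SCAN_LIMIT)
                break; /* enough evidence: treat page as active */
        }
    }
    return referenced;
}
```

With a zero page mapped a thousand times, the walk touches only the first 64 young mappings and leaves the rest alone, which is the whole point of the cap.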

I think it's fishy to ignore the page_referenced retval, and I don't
like the wipe_page_referenced concept. page_referenced should only be
called when we're under VM pressure that requires unmapping, and we
should always re-add the page at the head of the active list if the
page_referenced retval says it was found referenced. I couldn't care
less whether the first swapout burst is FIFO, because it'll be close to
FIFO anyway.

Consider: the wipe_page_referenced thing was called a year ago, shortly
after the page was allocated; the app touched the page once after it
reached the inactive anon list, and then never touched it again for a
year. Yet a year after we cleared its referenced bit we still consider
it active. It's all very fishy... Plus that VM_EXEC special case is
still there.

The only magic I advocate is a page_mapcount() check to differentiate
pure cache pollution (i.e. to avoid forcing people into O_DIRECT) from
mapped pages that aren't pure cache pollution, without triggering
unnecessary VM activity on those mapped pages when somebody runs a
backup with tar.
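The mapcount check argued for above could look roughly like this. Again a userspace sketch with mock types, not kernel code: the idea is only that a clean, unmapped cache page (mapcount == 0), such as one pulled in by tar streaming through a backup, is pure cache pollution and cheap to drop, while a mapped page that was found referenced deserves to stay on the active list.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock page descriptor for the sketch. */
struct mock_page {
    int mapcount;     /* number of page-table mappings */
    bool referenced;  /* accessed bit seen on the last scan */
};

/*
 * Reclaim-order sketch: unmapped pagecache (mapcount == 0) is treated
 * as streaming-I/O pollution and reclaimed immediately, so a backup
 * run with tar cannot push out working-set pages; a mapped page is
 * only reclaimed if it was not referenced since the last scan.
 */
static bool should_reclaim(const struct mock_page *page)
{
    if (page->mapcount == 0)
        return true;           /* pure cache pollution: drop it */
    return !page->referenced;  /* mapped + referenced: keep active */
}
```

This keeps the VM quiet for mapped pages while still letting streaming reads recycle their own cache, without requiring applications to fall back to O_DIRECT.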