Re: [PATCH 3/3] [RFC] tmpfs: Add FALLOC_FL_MARK_VOLATILE/UNMARK_VOLATILE handlers

From: KOSAKI Motohiro
Date: Fri Jun 08 2012 - 00:50:50 EST


(6/7/12 11:03 PM), John Stultz wrote:
> On 06/07/2012 04:41 PM, Dave Hansen wrote:
>> On 06/07/2012 03:55 AM, Dmitry Adamushko wrote:
>>> but maybe we should also purge them before we swap out some non-tmpfs
>>> pages or drop some file-backed pages?
>>
>> Sure... I guess we could kick that from either direct reclaim or from
>> kswapd. But, then we're basically back to the places where
>> shrink_slab() is called.
>>
>> I think that means that we think it's preferable to integrate this more
>> directly in the VM instead of sticking it off in the corner of tmpfs
>> only, or pretending it's a slab.
>
> Dunno... The slab shrinker one isn't looking _so_ bad at the moment.
>
> Dave also pointed out to me on irc that on a system without swap,
> shmem_writepage doesn't even get called, which kills the utility of
> triggering volatile purging from writepage.

Ah, right you are. A swap-less system never tries to reclaim anon pages, so on
such a system volatile pages are effectively not swap backed, and the
swap-backed LRU is no longer a suitable place to trigger the purging.
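
To make that concrete: a minimal sketch, not taken from the posted patches, of
what a purge hook at the top of shmem_writepage() would look like.
volatile_range_is_marked() and purge_volatile_range() are hypothetical
stand-ins for the patch's range bookkeeping.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/writeback.h>

/* Hypothetical helpers for the volatile-range bookkeeping. */
bool volatile_range_is_marked(struct inode *inode, pgoff_t index);
void purge_volatile_range(struct inode *inode, pgoff_t index);

static int shmem_writepage(struct page *page, struct writeback_control *wbc)
{
	struct inode *inode = page->mapping->host;

	/*
	 * If the page sits in a range the application marked volatile,
	 * purge the whole range instead of pushing the page out to swap.
	 */
	if (volatile_range_is_marked(inode, page->index)) {
		purge_volatile_range(inode, page->index);
		unlock_page(page);
		return 0;
	}

	/* Normal shmem swap-out path (swap entry allocation etc.) elided. */
	return swap_writepage(page, wbc);
}

Since nothing ever calls ->writepage for shmem pages on a swap-less system,
the hook above is dead code there, which is exactly why the trigger has to
live somewhere else.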

> So I'm falling back to using a shrinker for now, but I think Dmitry's
> point is an interesting one, and am interested in finding a better
> place to trigger purging volatile ranges from the mm code. If anyone has
> any suggestions, let me know, otherwise I'll go back to trying to better
> grok the mm code.

I hate VM features abusing shrink_slab(), because it was not designed as a
generic callback; it was designed for shrinking filesystem metadata. The VM
keeps a balance between page scanning and slab scanning, so widespread misuse
of shrink_slab() can break that balancing logic, i.e. drop too much of the
icache/dcache and cause a performance impact.

As long as the code impact is small, I'd prefer to hook into the VM reclaim
code directly.
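
For reference, a minimal sketch of the shrinker route being discussed,
against the ~3.4-era API with the single ->shrink callback. struct
volatile_range, the volatile_lru_* names and volatile_range_purge() are
hypothetical stand-ins for the patch's bookkeeping, not code from the series,
and the locking is simplified.

#include <linux/shrinker.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/list.h>

/* Hypothetical per-range bookkeeping, kept on an LRU of volatile ranges. */
struct volatile_range {
	struct list_head	lru;
	unsigned long		nr_pages;
};

/* Hypothetical helper: unmaps/frees the range's pages, returns pages freed. */
unsigned long volatile_range_purge(struct volatile_range *range);

static DEFINE_SPINLOCK(volatile_lru_lock);
static LIST_HEAD(volatile_lru_list);
static unsigned long volatile_lru_count;	/* pages in marked ranges */

static int volatile_shrink(struct shrinker *s, struct shrink_control *sc)
{
	struct volatile_range *range, *next;
	unsigned long nr = sc->nr_to_scan;

	/* nr_to_scan == 0 means "just report how much could be freed". */
	if (!nr)
		return volatile_lru_count;

	spin_lock(&volatile_lru_lock);
	list_for_each_entry_safe(range, next, &volatile_lru_list, lru) {
		unsigned long freed;

		list_del(&range->lru);
		freed = volatile_range_purge(range);
		volatile_lru_count -= freed;
		kfree(range);

		if (freed >= nr)
			break;
		nr -= freed;
	}
	spin_unlock(&volatile_lru_lock);

	return volatile_lru_count;
}

static struct shrinker volatile_shrinker = {
	.shrink	= volatile_shrink,
	.seeks	= DEFAULT_SEEKS,
};

/* Registered once at init, e.g. register_shrinker(&volatile_shrinker); */

Note that shrink_slab() calls every registered ->shrink in proportion to how
many LRU pages the VM just scanned, so a purger like this competes with the
icache/dcache shrinkers for the same scanning budget, which is the balancing
concern above.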