Re: [patch 0/5] refault distance-based file cache sizing

From: Andrea Arcangeli
Date: Tue May 01 2012 - 21:10:49 EST


Hi,

On Tue, May 01, 2012 at 02:26:56PM -0700, Andrew Morton wrote:
> Well, think of a stupid workload which creates a large number of very
> large but sparse files (populated with one page in each 64, for
> example). Get them all in cache, then sit there touching the inodes to
> keep them fresh. What's the worst case here?

I suspect that in that scenario we may drop more inodes than before,
and a ton of their cache with them, actually worsening the LRU
behaviour instead of improving it.

I don't think it's a reliability issue, or we would probably have been
bitten by it already, especially with a ton of inodes each holding just
one page at a very large file offset accessed in a loop. This only
makes a badness we already have stickier. Testing it certainly
wouldn't be a bad idea, though.
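
A rough userspace sketch of such a test could look like the following;
the file count, sizes and the one-in-64 stride are of course arbitrary
placeholders:

/*
 * Stress sketch for the scenario Andrew describes: a bunch of large
 * sparse files with one page populated in every 64, kept in the page
 * cache, while the inodes are touched in a loop to keep them fresh.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define NR_FILES	64
#define PAGE_SZ		4096UL
#define NR_PAGES	(64 * 1024UL)	/* 256MB of file offset per file */
#define STRIDE		64		/* one populated page in each 64 */

int main(void)
{
	char name[64], buf[PAGE_SZ];
	struct stat st;
	unsigned long pg;
	int i, fd;

	memset(buf, 0xaa, sizeof(buf));

	for (i = 0; i < NR_FILES; i++) {
		snprintf(name, sizeof(name), "sparse-%03d", i);
		fd = open(name, O_CREAT | O_RDWR | O_TRUNC, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* populate one page cache page every STRIDE pages, rest are holes */
		for (pg = 0; pg < NR_PAGES; pg += STRIDE)
			if (pwrite(fd, buf, PAGE_SZ, pg * PAGE_SZ) < 0) {
				perror("pwrite");
				return 1;
			}
		close(fd);
	}

	/* keep the inodes fresh without touching the data pages */
	for (;;) {
		for (i = 0; i < NR_FILES; i++) {
			snprintf(name, sizeof(name), "sparse-%03d", i);
			stat(name, &st);
		}
		sleep(1);
	}
	return 0;
}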

At first glance it sounds like a good tradeoff: normally the
"worsening" effect, where too many large radix trees lead to more
inodes being dropped than before, shouldn't materialize, and we'd just
make better use of the memory we already allocated to make more
accurate decisions on the active/inactive LRU balancing.
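
For reference, a toy standalone model of the refault distance idea as I
read the series; the names and the activation rule below are my own
simplification, not the actual interfaces in the patches:

#include <stdbool.h>
#include <stdio.h>

/* global clock: advances whenever a page leaves the inactive list */
static unsigned long inactive_age;

/* On eviction, the radix tree slot keeps a shadow entry recording the clock. */
static unsigned long remember_eviction(void)
{
	return ++inactive_age;
}

/*
 * On refault, the distance between now and the recorded eviction time is
 * roughly how much bigger the inactive list would have had to be to keep
 * the page in cache; if that fits within the space the active list
 * occupies, the page is worth activating.
 */
static bool should_activate(unsigned long shadow, unsigned long nr_active)
{
	unsigned long refault_distance = inactive_age - shadow;

	return refault_distance <= nr_active;
}

int main(void)
{
	unsigned long shadow = remember_eviction();	/* page gets evicted */

	inactive_age += 100;	/* model 100 further inactive list evictions */
	/* distance 100 against 200 active pages -> activate (prints 1) */
	printf("activate: %d\n", should_activate(shadow, 200));
	return 0;
}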