Re: mmap vs fs cache

From: Howard Chu
Date: Fri Mar 08 2013 - 10:01:22 EST

Chris Friesen wrote:
> On 03/08/2013 03:40 AM, Howard Chu wrote:
>
>> There is no way that a process that is accessing only 30GB of a mmap
>> should be able to fill up 32GB of RAM. There's nothing else running on
>> the machine, I've killed or suspended everything else in userland
>> besides a couple shells running top and vmstat. When I manually
>> drop_caches repeatedly, then eventually slapd RSS/SHR grows to 30GB and
>> the physical I/O stops.
>
> Is it possible that the kernel is doing some sort of automatic
> readahead, but it ends up reading pages corresponding to data that isn't
> ever queried and so doesn't get mapped by the application?

Yes, that's what I was thinking. I added a posix_madvise(..., POSIX_MADV_RANDOM) but that had no effect on the test.

First obvious conclusion - kswapd is being too aggressive. When free memory hits the low watermark, reclaim shrinks slapd from 25GB down to 18-19GB, while the page cache still contains ~7GB of unmapped pages. Ideally I'd like a tuning knob so I can say "keep no more than 2GB of unmapped pages in the cache." (The desired effect would be to allow user processes to grow to 30GB total, in this case.)
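The watermarks and the manual workaround mentioned above can be seen through standard procfs paths (values vary per system; the drop_caches write needs root, so it is shown commented out):

```shell
# Per-zone min/low/high watermarks that kswapd reclaims against
grep -E 'min|low|high' /proc/zoneinfo | head -12

# The one related knob in current kernels: raising min_free_kbytes
# shifts all watermarks up, but there is no cap on unmapped cache pages
cat /proc/sys/vm/min_free_kbytes

# Manual workaround (root only): drop unmapped page-cache pages;
# mapped pages stay resident
# sync && echo 1 > /proc/sys/vm/drop_caches
```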

I mentioned this "unmapped page cache control" post already but it seems that the idea was ultimately rejected. Is there anything else similar in current kernels?

-- Howard Chu
CTO, Symas Corp.
Director, Highland Sun
Chief Architect, OpenLDAP