Re: [RFC][PATCH 1/2] Linux/Guest unmapped page cache control

From: KAMEZAWA Hiroyuki
Date: Sun Jun 13 2010 - 20:32:56 EST


On Mon, 14 Jun 2010 00:01:45 +0530
Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:

> * Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> [2010-06-08 21:21:46]:
>
> > Selectively control Unmapped Page Cache (nospam version)
> >
> > From: Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
> >
> > This patch implements unmapped page cache control via preferred
> > page cache reclaim. The current patch hooks into kswapd and reclaims
> > page cache if the user has requested unmapped page control.
> > This is useful in the following scenarios:
> >
> > - In a virtualized environment with cache=writethrough, we see
> > double caching (one copy in the host and one in the guest). As
> > we try to scale guests, cache usage across the system grows.
> > The goal of this patch is to reclaim page cache when Linux is running
> > as a guest, and to let the host hold and manage the page cache.
> > There might be temporary duplication, but in the long run, memory
> > in the guests would be used for mapped pages.
> > - The feature is controlled via a boot option, so the administrator
> > can selectively turn it on, on an as-needed basis.
> >
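To make the mechanism concrete, here is a minimal sketch of a boot-option-gated
kswapd hook of the kind described above. The identifiers (unmapped_page_control,
balance_unmapped_pages(), reclaim_unmapped_pages()) are illustrative guesses,
not taken from the patch itself:

	/* Sketch only -- not the actual patch. */
	static int unmapped_page_control __read_mostly;

	static int __init setup_unmapped_page_control(char *str)
	{
		unmapped_page_control = 1;
		return 1;
	}
	__setup("unmapped_page_control", setup_unmapped_page_control);

	/*
	 * Called from the kswapd balancing path for each zone.  When the
	 * boot option is set, trim unmapped, clean page cache;
	 * reclaim_unmapped_pages() is a hypothetical helper standing in
	 * for whatever reclaim path the patch actually uses.
	 */
	static void balance_unmapped_pages(struct zone *zone)
	{
		if (!unmapped_page_control)
			return;
		reclaim_unmapped_pages(zone);
	}

With a guard like this, the feature costs nothing unless the administrator
opts in on the kernel command line.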
> > A lot of the code is borrowed from the zone_reclaim_mode logic in
> > __zone_reclaim(). One might argue that with ballooning and
> > KSM this feature is not very useful, but even with ballooning,
> > we need extra logic to balloon multiple VMs, and it is hard
> > to figure out the correct amount of memory to balloon. With these
> > patches applied, each guest has a sufficient amount of free memory
> > available that can be easily seen and reclaimed by the balloon driver.
> > The additional memory in the guest can be reused for additional
> > applications, or used to start additional guests or to balance memory in
> > the host.
> >
> > KSM currently does not de-duplicate host and guest page cache. The goal
> > of this patch is to help automatically balance unmapped page cache when
> > instructed to do so.
> >
> > There are some magic numbers in use in the code: UNMAPPED_PAGE_RATIO
> > and the number of pages to reclaim when the unmapped_page_control argument
> > is supplied. These numbers were chosen to avoid reaping page cache too
> > aggressively or too frequently, while still providing control.
> >
> > The min_unmapped_ratio sysctl provides further control from
> > within the guest over the amount of unmapped page cache to reclaim.
> >
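For reference, a min_unmapped_ratio-style sysctl usually becomes a per-zone
page threshold along the lines of the sketch below. This mirrors the existing
zone_reclaim plumbing; whether the patch reuses it verbatim is an assumption
on my part, and zone_unmapped_file_pages() stands in for however unmapped
page cache is counted:

	/* Sketch of the usual min_unmapped_ratio plumbing (zone_reclaim-style). */
	int sysctl_min_unmapped_ratio = 1;	/* percent of each zone's pages */

	static void setup_min_unmapped_pages(void)
	{
		struct zone *zone;

		for_each_zone(zone)
			zone->min_unmapped_pages = (zone->present_pages *
					sysctl_min_unmapped_ratio) / 100;
	}

	/* Reclaiming unmapped page cache is worthwhile only above the threshold. */
	static bool should_reclaim_unmapped(struct zone *zone)
	{
		return zone_unmapped_file_pages(zone) > zone->min_unmapped_pages;
	}

The sysctl then lets the guest administrator tune how much unmapped page
cache is tolerated before kswapd starts trimming it.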
>
> Are there any major objections to this patch?
>

This kind of patch needs measurements that show how well it works.

- How did you measure the effect of the patch? kernbench is not enough, of course.
- Why don't you trust the LRU? And if the LRU doesn't work well, should it be
fixed with a knob rather than a generic approach?
- Are there no side effects?

- Linux VM people tend to say "free memory is bad memory". OK, so what is the
free memory created by your patch used for? IOW, I can't see the benefit.
If the free memory your patch creates is just used for more page cache,
it will soon be dropped again by your patch itself.

If your patch dropped only pages that are duplicated and no longer necessary
for the kvm guest, I would agree it could increase the available amount of
page cache. But you just drop unmapped pages.
Hmm.

Thanks,
-Kame

