Re: [PATCHv12 3/4] zswap: add to mm/

From: Andrew Morton
Date: Wed May 29 2013 - 17:16:43 EST


On Wed, 29 May 2013 16:08:20 -0500 Seth Jennings <sjenning@xxxxxxxxxxxxxxxxxx> wrote:

> On Wed, May 29, 2013 at 12:57:47PM -0700, Andrew Morton wrote:
> > On Wed, 29 May 2013 14:50:27 -0500 Seth Jennings <sjenning@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > > On Wed, May 29, 2013 at 11:29:29AM -0700, Andrew Morton wrote:
> > > > On Wed, 29 May 2013 09:57:20 -0500 Seth Jennings <sjenning@xxxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > > > > +/*********************************
> > > > > > > +* helpers
> > > > > > > +**********************************/
> > > > > > > +static inline bool zswap_is_full(void)
> > > > > > > +{
> > > > > > > + return (totalram_pages * zswap_max_pool_percent / 100 <
> > > > > > > + zswap_pool_pages);
> > > > > > > +}
> > > > > >
> > > > > > We have had issues in the past where percentage-based tunables were too
> > > > > > coarse on very large machines. For example, a terabyte machine where 0
> > > > > > bytes is too small and 10GB is too large.
> > > > >
> > > > > Yes, this is a known limitation of the code right now, and it is a high
> > > > > priority to come up with something better. It isn't clear what dynamic
> > > > > sizing policy should be used, so until that policy can be determined, this
> > > > > is a simple stop-gap that works well enough for simple setups.
> > > >
> > > > It's a module parameter and hence is part of the userspace interface.
> > > > It's undesirable that the interface be changed, and it would be rather
> > > > dumb to merge it as-is when we *know* that it will be changed.
> > > >
> > > > I don't think we can remove the parameter altogether (or can we?), so I
> > > > suggest we finalise it ASAP. Perhaps rename it to
> > > > zswap_max_pool_ratio, with a range 1..999999. Better ideas needed :(
> > >
> > > zswap_max_pool_ratio is fine with me. I'm not entirely clear on the change
> > > though. Would that just be a name change or a change in meaning?
> >
> > It would be a change in behaviour. The problem which I'm suggesting we
> > address is that a 1% increment is too coarse.
>
> Sorry, but I'm not getting this. This zswap_max_pool_ratio is a ratio of what
> to what? Maybe if you wrote out the calculation of the max pool size using
> this ratio I'll get it.
>

This:

totalram_pages * zswap_max_pool_percent / 100

means that we are only able to control the pool size in 10GB increments
on a 1TB machine. Past experience with other tunables tells us that
this can be a problem. Hence my (lame) suggestion that we replace it
with

totalram_pages * zswap_max_pool_ratio / 1000000


Another approach would be to stop using a ratio altogether, and make the
tunable specify an absolute number of bytes. That's how we approached
this problem in the case of /proc/sys/vm/dirty_background_ratio. See
https://lkml.org/lkml/2008/11/23/160.

(And it's "bytes", not "pages" because PAGE_SIZE can vary by a factor
of 16, which is a lot).
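If zswap went the absolute-bytes route, the fullness check could look
along these lines (a userspace sketch; zswap_max_pool_bytes is a
hypothetical parameter name and default, not from the posted patch):

```c
#include <stdbool.h>

/* Hypothetical absolute cap, analogous to dirty_background_bytes;
 * the name and 100MB default are illustrative only. */
static unsigned long zswap_max_pool_bytes = 100UL << 20;

/* Pool is full once its byte footprint exceeds the absolute cap. */
static bool zswap_is_full(unsigned long pool_pages, unsigned long page_bytes)
{
	return pool_pages * page_bytes > zswap_max_pool_bytes;
}
```

A bytes-based tunable sidesteps both the coarseness problem and the
PAGE_SIZE variation mentioned above, at the cost of not scaling
automatically with RAM size.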