High slab usage testing with zcache/zswap (Was: [PATCH 7/8] zswap:add to mm/)

From: Dan Magenheimer
Date: Tue Jan 22 2013 - 18:59:07 EST


> From: Dave Chinner [mailto:david@xxxxxxxxxxxxx]
> Sent: Thursday, January 03, 2013 12:34 AM
> Subject: Re: [PATCH 7/8] zswap: add to mm/
>
> > > On 01/02/2013 09:26 AM, Dan Magenheimer wrote:
> > > > However if one compares the total percentage
> > > > of RAM used for zpages by zswap vs the total percentage of RAM
> > > > used by slab, I suspect that the zswap number will dominate,
> > > > perhaps because zswap is storing primarily data and slab is
> > > > storing primarily metadata?
> > >
> > > That's *obviously* 100% dependent on how you configure zswap. But, that
> > > said, most of _my_ systems tend to sit with about 5% of memory in
> > > reclaimable slab
> >
> > The 5% "sitting" number for slab is somewhat interesting, but
> > IMHO irrelevant here. The really interesting value is what percent
> > is used by slab when the system is under high memory pressure; I'd
> > imagine that number would be much smaller. True?
>
> Not at all. The amount of slab memory used is wholly dependent on
> workload. I have plenty of workloads with severe memory pressure
> that I test with that sit at a steady state of >80% of ram in slab
> caches. These workloads are filesystem metadata intensive rather than
> data intensive, that's exactly the right cache balance for the
> system to have....

Hey Dave --

I'd like to do some zcache policy testing where the severe
memory pressure is a result of something like the above,
where >80% of RAM is in slab caches. Any thoughts on how
to create or easily simulate that on a very simple hardware
setup (e.g. a PC with one SATA disk)? Or is a "big data"
configuration required?
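(In case it helps frame the question: my current rough idea is an
fs_mark-style file churn to inflate the dentry/inode caches. The sketch
below is only a guess at such a workload, not anything from this thread;
TARGET_DIR, DIRS, and FILES_PER_DIR are made-up knobs that would need to
be scaled way up to approach 80% of RAM.)

```shell
#!/bin/sh
# Rough sketch: grow the dentry/inode slab caches by creating many
# small files, similar in spirit to fs_mark. The knob values here are
# tiny illustrative defaults -- scale them up (and rerun) while
# watching SReclaimable in /proc/meminfo.
TARGET_DIR=${TARGET_DIR:-$(mktemp -d)}
DIRS=${DIRS:-4}
FILES_PER_DIR=${FILES_PER_DIR:-250}

d=0
while [ "$d" -lt "$DIRS" ]; do
    mkdir -p "$TARGET_DIR/d$d"
    f=0
    while [ "$f" -lt "$FILES_PER_DIR" ]; do
        : > "$TARGET_DIR/d$d/f$f"   # empty file: costs an inode + dentry, no data pages
        f=$((f + 1))
    done
    d=$((d + 1))
done

echo "created $((DIRS * FILES_PER_DIR)) files under $TARGET_DIR"
# While it runs, slab growth can be watched with e.g.:
#   grep -E '^(dentry|ext4_inode_cache)' /proc/slabinfo
```

But I don't know whether that gets anywhere near the steady-state slab
balance your real workloads see, hence the question.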

Thanks for any advice!
Dan