Re: Adding compression before/above swapcache

From: Dan Streetman
Date: Mon Mar 31 2014 - 11:36:27 EST


On Mon, Mar 31, 2014 at 8:43 AM, Bob Liu <lliubbo@xxxxxxxxx> wrote:
> On Fri, Mar 28, 2014 at 10:47 PM, Dan Streetman <ddstreet@xxxxxxxx> wrote:
>> On Fri, Mar 28, 2014 at 10:32 AM, Rik van Riel <riel@xxxxxxxxxx> wrote:
>>> On 03/28/2014 08:36 AM, Dan Streetman wrote:
>>>
>>>> Well my general idea was to modify shrink_page_list() so that instead
>>>> of calling add_to_swap() and then pageout(), anonymous pages would be
>>>> added to a compressed cache. I haven't worked out all the specific
>>>> details, but I am initially thinking that the compressed cache could
>>>> simply repurpose incoming pages to use as the compressed cache storage
>>>> (using its own page mapping, similar to swap page mapping), and then
>>>> add_to_swap() the storage pages when the compressed cache gets to a
>>>> certain size. Pages that don't compress well could just bypass the
>>>> compressed cache and be sent via the current route directly to
>>>> add_to_swap().
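
(Expanding on my own paragraph above, since it's pretty hand-wavy: the
hook I have in mind in shrink_page_list() would look roughly like the
below. The ccache_*() helpers don't exist anywhere - they're just
placeholders for the compressed cache I'm describing.)

	/*
	 * Sketch only - ccache_store() is hypothetical.  Anonymous pages
	 * get offered to the compressed cache before we allocate swap
	 * for them.
	 */
	if (PageAnon(page) && !PageSwapCache(page)) {
		if (ccache_store(page) == 0) {
			/*
			 * The page now lives compressed above swap.  When
			 * the cache grows past its limit, it would
			 * add_to_swap() its own storage pages so they take
			 * the normal pageout() path.
			 */
			goto free_it;
		}
		/* compressed poorly, or the cache refused it: fall back
		 * to the current route, add_to_swap() then pageout() */
		if (!add_to_swap(page, page_list))
			goto activate_locked;
	}
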
>>>
>>>
>>> That sounds a lot like what zswap does. How is your
>>> proposal different?
>>
>> Two main ways:
>> 1) it's above swap, so it would still work without any real swap.
>
> Zswap could also be extended to work without any real swap device.

Ok I'm interested - how is that possible? :-)

>> 2) compressed pages could be written to swap disk.
>>
>
> Yes, how to handle writeback from zswap is a problem. And I think
> your patch making zswap write-through is a good start.

But it's still write-through of uncompressed pages.
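
To spell out what I mean, the shape of it is (helper names made up):

	/* write-through as it stands: the *uncompressed* page still
	 * gets written to the swap device */
	store_compressed_copy_in_zswap(page);
	write_uncompressed_page_to_swap(page);

	/* what I'd rather end up with: the compressed object itself
	 * is what goes to disk, so the on-disk copy and the in-memory
	 * copy are the same compressed data */
	zobj = store_compressed_copy_in_zswap(page);
	write_compressed_object_to_swap(zobj);

Packing variable-sized compressed objects into fixed-size swap slots is
presumably where most of the extra work would be.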

>> Essentially, the two existing memory compression approaches are both
>> tied to swap. But, AFAIK there's no reason that memory compression
>> has to be tied to swap. So my approach uncouples it.
>>
>
> Yes, it's not strictly necessary, but swap pages are a good candidate
> and easy to handle. There are also clean file pages which may be
> suitable for compression. See http://lwn.net/Articles/545244/.

Yep, and what is the current state of cleancache? Was there a
definitive reason it hasn't made it in yet?

>>> And, is there an easier way to implement that difference? :)
>>
>> I'm hoping that it wouldn't actually be too complex. But that's part
>> of why I emailed for feedback before digging into a prototype... :-)
>>
>
> I'm afraid your idea may not be that easy to implement and would need
> a lot of tricky code in the current mm subsystem, while the benefit is
> still uncertain. As Mel pointed out, we really need better
> demonstration workloads for memory compression before making such
> changes. https://lwn.net/Articles/591961

Well, I think it's hard to argue that memory compression provides *no*
obvious benefit - I'm pretty sure it's quite useful for minor
overcommit on systems without any disk swap, and even for systems with
swap it at least softens the steep performance cliff we currently hit
when we start overcommitting memory into swap space.

As far as its benefits for larger systems go, or how realistic it is
to start routinely overcommitting systems on the expectation that
memory compression magically gives you more effective RAM, I certainly
don't know the answer, and I agree that more widespread testing and
demonstration will be needed.

But to ask a more pointed question - what do you think would be the
tricky part(s)?