RE: [PATCHv6 0/8] zswap: compressed swap caching

From: Dan Magenheimer
Date: Thu Feb 21 2013 - 17:05:03 EST


> From: Seth Jennings [mailto:sjenning@xxxxxxxxxxxxxxxxxx]
> Subject: Re: [PATCHv6 0/8] zswap: compressed swap caching
>
> On 02/21/2013 09:50 AM, Dan Magenheimer wrote:
> >> From: Seth Jennings [mailto:sjenning@xxxxxxxxxxxxxxxxxx]
> >> Subject: [PATCHv6 0/8] zswap: compressed swap caching
> >>
> >> Changelog:
> >>
> >> v6:
> >> * fix improper freeing of rbtree (Cody)
> >
> > Cody's bug fix reminded me of a rather fundamental question:
> >
> > Why does zswap use a rbtree instead of a radix tree?
> >
> > Intuitively, I'd expect that pgoff_t values would
> > have a relatively high level of locality AND at any one time
> > the set of stored pgoff_t values would be relatively non-sparse.
> > This would argue that a radix tree would result in fewer nodes
> > touched on average for lookup/insert/remove.
>
> I considered using a radix tree, but I don't think there is a compelling
> reason to choose a radix tree over a red-black tree in this case
> (explanation below).
>
> From a runtime standpoint, a radix tree might be faster. The swap
> offsets will largely fall in contiguous clusters over the indexed
> range. However, there are also memory constraints to consider in this
> particular situation.
>
> Using a radix tree could result in intermediate radix_tree_node
> allocations in the store (insert) path in addition to the zswap_entry
> allocation. Since we are under memory pressure, using the red-black
> tree, whose metadata is included in the struct zswap_entry, reduces the
> number of opportunities to fail.
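
Paraphrasing inline to check my understanding: the tree linkage lives
inside the entry itself, so linking an entry into the tree costs no
allocation beyond the zswap_entry that has to be allocated anyway. A
simplified sketch, with field names approximated and duplicate handling
omitted, not the exact code from the patch:

#include <linux/rbtree.h>
#include <linux/types.h>

struct zswap_entry {
	struct rb_node rbnode;		/* tree linkage, embedded */
	pgoff_t offset;			/* swap offset, the lookup key */
	unsigned int length;		/* compressed size */
	unsigned long handle;		/* allocator handle for the data */
};

static void zswap_rb_insert_sketch(struct rb_root *root,
				   struct zswap_entry *entry)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;

	while (*link) {
		struct zswap_entry *e = rb_entry(*link, struct zswap_entry,
						 rbnode);

		parent = *link;
		link = entry->offset < e->offset ? &(*link)->rb_left
						 : &(*link)->rb_right;
	}
	/* No intermediate node allocations anywhere on this path. */
	rb_link_node(&entry->rbnode, parent, link);
	rb_insert_color(&entry->rbnode, root);
}

By contrast, radix_tree_insert() can need one or more intermediate
radix_tree_node allocations on the way down, which is the failure
window you are describing.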
>
> On my system, the radix_tree_node structure is 568 bytes. The
> radix_tree_node cache requires 4 pages per slab, an order-2 page
> allocation. Growing that cache will be difficult under that kind of
> memory pressure.
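
Doing the arithmetic inline, assuming 4 KiB pages and taking the
568-byte figure as the per-object size:

	4 pages/slab * 4096 bytes/page = 16384 bytes/slab
	16384 / 568 ~= 28 radix_tree_node objects per slab

so every refill of that cache is a single contiguous 16 KiB allocation.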
>
> In my mind, the cost of even a single node allocation failure, resulting
> in an additional page swapped to disk, will more than wipe out any
> performance advantage a radix tree might have.

For SLAB, I agree that makes good sense. But SLUB (the default allocator)
falls back, I think, to an order-0 allocation if the order-2 allocation fails.
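
If I remember the mm/slub.c refill path correctly, it first tries the
cache's preferred order with __GFP_NORETRY | __GFP_NOWARN and only then
retries at the cache's minimum order, which for a 568-byte object is
order 0. A rough sketch of that behavior (a hypothetical helper, not
the actual allocate_slab()):

#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *refill_slab_sketch(gfp_t gfp, int node,
				       unsigned int preferred_order,
				       unsigned int min_order)
{
	struct page *page;

	/* Best-effort attempt at the larger order: no retry, no warning. */
	page = alloc_pages_node(node, gfp | __GFP_NOWARN | __GFP_NORETRY,
				preferred_order);	/* e.g. order 2 */
	if (!page)
		page = alloc_pages_node(node, gfp, min_order);	/* order 0 */

	return page;
}

If that is right, the order-2 attempt is opportunistic, and a fragmented
system can still grow the radix_tree_node cache one page at a time.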