Re: [PATCHv7 4/8] zswap: add to mm/

From: Dave Hansen
Date: Thu Mar 07 2013 - 14:10:49 EST


On 03/06/2013 07:52 AM, Seth Jennings wrote:
> +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
> +{
> +        struct crypto_comp *tfm;
> +        u8 *dst;
> +
> +        switch (action) {
> +        case CPU_UP_PREPARE:
> +                tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
> +                if (IS_ERR(tfm)) {
> +                        pr_err("can't allocate compressor transform\n");
> +                        return NOTIFY_BAD;
> +                }
> +                *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
> +                dst = (u8 *)__get_free_pages(GFP_KERNEL, 1);

Are there any alignment requirements for 'dst'? If not, why not use
kmalloc()? I think kmalloc() should be used wherever possible, since
slab debugging is so much more useful than anything we can do with raw
buddy-allocated pages.

Where does the order-1 requirement come from, by the way?

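For illustration, here is a minimal sketch of the kmalloc() variant I
have in mind. It assumes the compressor has no page-alignment
requirement and that the buffer really does need to hold two pages of
output; the per-CPU pointer name zswap_dstmem is just a stand-in for
whatever the patch actually uses:

static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
{
        struct crypto_comp *tfm;
        u8 *dst;

        switch (action) {
        case CPU_UP_PREPARE:
                tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
                if (IS_ERR(tfm)) {
                        pr_err("can't allocate compressor transform\n");
                        return NOTIFY_BAD;
                }
                *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
                /* kmalloc() keeps slab debugging usable for this buffer */
                dst = kmalloc(2 * PAGE_SIZE, GFP_KERNEL);
                if (!dst) {
                        crypto_free_comp(tfm);
                        *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
                        return NOTIFY_BAD;
                }
                *per_cpu_ptr(zswap_dstmem, cpu) = dst; /* name assumed */
                break;
        /* ... other notifier actions unchanged ... */
        }
        return NOTIFY_OK;
}

If there really is an alignment or order-1 constraint, a comment
explaining it would be just as good.
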
...
> +**********************************/
> +/* attempts to compress and store a single page */
> +static int zswap_frontswap_store(unsigned type, pgoff_t offset,
> +                                struct page *page)
> +{
...
> +        /* store */
> +        handle = zs_malloc(tree->pool, dlen,
> +                        __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> +                        __GFP_NOWARN);
> +        if (!handle) {
> +                zswap_reject_zsmalloc_fail++;
> +                ret = -ENOMEM;
> +                goto putcpu;
> +        }
> +

I think there need to be at least some strong comments in here about
why you're doing this kind of allocation. From some IRC discussion, it
sounds like you found a pathological case where zswap wasn't helping
reclaim make progress and ended up draining the reserve pools, and
these GFP flags are your way of avoiding that.

I think the lack of reclaim progress is really the root cause you
should be going after here, instead of just working around the symptom.
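
Concretely, something along these lines above the zs_malloc() call
would go a long way (the wording is mine, reconstructed from that IRC
discussion, so please correct it where I have it wrong):

        /*
         * zswap is called from the reclaim path, so this allocation
         * must not recurse into reclaim (__GFP_NORETRY) and must not
         * eat into the emergency reserves (__GFP_NOMEMALLOC); doing so
         * was seen to drain the reserve pools when reclaim was not
         * making progress.  Failure here is acceptable: the page just
         * gets written to the backing swap device instead.
         */
        handle = zs_malloc(tree->pool, dlen,
                        __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
                        __GFP_NOWARN);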

> +/* NOTE: this is called in atomic context from swapon and must not sleep */
> +static void zswap_frontswap_init(unsigned type)
> +{
> +        struct zswap_tree *tree;
> +
> +        tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT);
> +        if (!tree)
> +                goto err;
> +        tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
> +        if (!tree->pool)
> +                goto freetree;
> +        tree->rbroot = RB_ROOT;
> +        spin_lock_init(&tree->lock);
> +        zswap_trees[type] = tree;
> +        return;
> +
> +freetree:
> +        kfree(tree);
> +err:
> +        pr_err("alloc failed, zswap disabled for swap type %d\n", type);
> +}

How large are these allocations? Why are you using GFP_NOWAIT instead
of GFP_ATOMIC? This seems like exactly the kind of thing you'd _want_
to be able to dip into the reserves for.
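
Unless these allocations are large, I'd expect something more like the
following (sketch only: your code with GFP_ATOMIC substituted and one
comment added):

static void zswap_frontswap_init(unsigned type)
{
        struct zswap_tree *tree;

        /* small, once-per-swapon allocations; let them use the reserves */
        tree = kzalloc(sizeof(struct zswap_tree), GFP_ATOMIC);
        if (!tree)
                goto err;
        tree->pool = zs_create_pool(GFP_ATOMIC, &zswap_zs_ops);
        if (!tree->pool)
                goto freetree;
        tree->rbroot = RB_ROOT;
        spin_lock_init(&tree->lock);
        zswap_trees[type] = tree;
        return;

freetree:
        kfree(tree);
err:
        pr_err("alloc failed, zswap disabled for swap type %d\n", type);
}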
