[RFC] mempool_alloc() pre-allocated object usage

From: Paul Mundt
Date: Mon Oct 03 2005 - 09:39:30 EST


Currently mempool_create() will pre-allocate min_nr objects in the pool
for later usage. However, the semantics of mempool_alloc() are to first
attempt the ->alloc() path and only then fall back to using a
pre-allocated object that already exists in the pool.
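For reference, this is the ordering in question, trimmed down from
mm/mempool.c (the retry and wait handling is omitted):

	/* 1: always try a fresh allocation first */
	element = pool->alloc(gfp_temp, pool->pool_data);
	if (likely(element != NULL))
		return element;

	/* 2: only if that fails, fall back to the pre-allocated reserve */
	spin_lock_irqsave(&pool->lock, flags);
	if (likely(pool->curr_nr)) {
		element = remove_element(pool);
		spin_unlock_irqrestore(&pool->lock, flags);
		return element;
	}
	spin_unlock_irqrestore(&pool->lock, flags);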

This is somewhat of a problem if we want to build up a pool of relatively
high-order allocations (backed with a slab cache, for example) for
guaranteeing contiguity early on, as the ->alloc() path can sometimes be
satisfied anyway and we end up with a larger overall footprint than we
would like, while the pre-allocated objects sit unused.
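To make the use case concrete, the setup is along these lines (a sketch
only -- the cache name, object size and min_nr value are made up):

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mempool.h>

static kmem_cache_t *contig_cache;
static mempool_t *contig_pool;

static int __init contig_pool_init(void)
{
	/* order-2 objects, where physical contiguity is the point */
	contig_cache = kmem_cache_create("contig_cache", 16384, 0, 0,
					 NULL, NULL);
	if (!contig_cache)
		return -ENOMEM;

	/* grab 8 objects early, while memory is still unfragmented */
	contig_pool = mempool_create(8, mempool_alloc_slab,
				     mempool_free_slab, contig_cache);
	if (!contig_pool) {
		kmem_cache_destroy(contig_cache);
		return -ENOMEM;
	}
	return 0;
}

With the current semantics, mempool_alloc() on this pool keeps calling
into the slab allocator first, so the 8 reserved objects are not touched
until the system is already under pressure.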

The easy way around this would be to first fetch objects out of the pool,
and only try ->alloc() once there are no free objects left in the pool,
i.e.:

diff --git a/mm/mempool.c b/mm/mempool.c
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -216,11 +216,6 @@ void * mempool_alloc(mempool_t *pool, un
 	gfp_temp = gfp_mask & ~(__GFP_WAIT|__GFP_IO);
 
 repeat_alloc:
-
-	element = pool->alloc(gfp_temp, pool->pool_data);
-	if (likely(element != NULL))
-		return element;
-
 	spin_lock_irqsave(&pool->lock, flags);
 	if (likely(pool->curr_nr)) {
 		element = remove_element(pool);
@@ -229,6 +224,10 @@ repeat_alloc:
 	}
 	spin_unlock_irqrestore(&pool->lock, flags);
 
+	element = pool->alloc(gfp_temp, pool->pool_data);
+	if (likely(element != NULL))
+		return element;
+
 	/* We must not sleep in the GFP_ATOMIC case */
 	if (!(gfp_mask & __GFP_WAIT))
 		return NULL;

The downside to this is that some people may be expecting the
pre-allocated elements to act as reserve space for when regular
allocations aren't possible, in which case this change would break that
behaviour.
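That pattern looks something like this (illustration only):

	/*
	 * e.g. a writeout path: when ->alloc() fails under memory
	 * pressure, the min_nr reserve is what guarantees forward
	 * progress. With the change above, the reserve may already
	 * have been handed out during normal operation.
	 */
	element = mempool_alloc(pool, GFP_NOIO);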

Both usage patterns seem valid from my point of view. Would you be open
to something that accommodates both (e.g. adding a flag to select how the
pre-allocated objects are used, along the lines of the sketch below)? Or
should I not be using mempool for contiguity purposes at all?
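Something like the following is the kind of thing I have in mind (purely
a sketch -- MEMPOOL_PREALLOC_FIRST and the pool->flags field don't exist
today):

#define MEMPOOL_PREALLOC_FIRST	0x01	/* hypothetical flag */

	/* in mempool_alloc(), ahead of the existing ->alloc() attempt: */
	if (pool->flags & MEMPOOL_PREALLOC_FIRST) {
		spin_lock_irqsave(&pool->lock, flags);
		if (likely(pool->curr_nr)) {
			element = remove_element(pool);
			spin_unlock_irqrestore(&pool->lock, flags);
			return element;
		}
		spin_unlock_irqrestore(&pool->lock, flags);
	}

	element = pool->alloc(gfp_temp, pool->pool_data);
	if (likely(element != NULL))
		return element;

Pools that want the existing reserve-style behaviour simply would not set
the flag.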