Re: [Bug #14141] order 2 page allocation failures in iwlagn

From: reinette chatre
Date: Wed Oct 14 2009 - 16:43:09 EST


On Wed, 2009-10-14 at 09:50 -0700, Mel Gorman wrote:

> What is your take on GFP_ATOMIC-direct depleting the pool before the tasklet
> can refill it with GFP_KERNEL?

I am not sure I understand your question. We attempt to reclaim a
received buffer on every receive, and with a queue size of 256 + 64 we
expect to have a fairly large pool to fall back on when allocations
fail. So, technically, for us to get into the situation where these
allocation failures become visible, GFP_ATOMIC allocations must already
have failed more than 200 times without us noticing, since the warnings
are only printed once fewer than 8 free buffers remain. More on this
below ...

> Should direct allocation be falling back to
> calling with GFP_KERNEL when the pool has been depleted instead of failing?

This is the intention of the current implementation. In the tasklet we
run iwl_rx_replenish_now(), which first attempts the allocation by
calling iwl_rx_allocate() with GFP_ATOMIC. No particular action is taken
when this fails (apart from the error message), but if the buffers are
running low then iwl_rx_queue_restock() (which is also called from
iwl_rx_replenish_now()) queues work that will do the allocation with
GFP_KERNEL.
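
To spell out that flow in code: the function names below follow the
driver (the name of the work handler and the struct/field details are my
approximation), but the bodies are a simplified sketch of the logic
described above, not the actual implementation.

/* RX tasklet path: atomic context, so only GFP_ATOMIC is allowed. */
static void iwl_rx_replenish_now(struct iwl_priv *priv)
{
        /* May fail under memory pressure; we only log an error. */
        iwl_rx_allocate(priv, GFP_ATOMIC);
        iwl_rx_queue_restock(priv);
}

/* Hand free buffers to the device; if the pool is running low,
 * defer a GFP_KERNEL refill to process context via the workqueue. */
static void iwl_rx_queue_restock(struct iwl_priv *priv)
{
        struct iwl_rx_queue *rxq = &priv->rxq;

        /* ... move buffers from the free list onto the hardware ring ... */

        if (rxq->free_count <= RX_LOW_WATERMARK)
                queue_work(priv->workqueue, &priv->rx_replenish);
}

/* Work item: process context, so the allocation can use GFP_KERNEL
 * and sleep until memory becomes available. */
static void iwl_bg_rx_replenish(struct work_struct *data)
{
        struct iwl_priv *priv =
                container_of(data, struct iwl_priv, rx_replenish);

        iwl_rx_allocate(priv, GFP_KERNEL);
        iwl_rx_queue_restock(priv);
}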

We do queue the GFP_KERNEL allocations when there are only a few buffers
remaining in the queue (8 right now) ... maybe we can make this higher?

I am not sure whether this will help with what you are trying to figure
out, but would it be worth playing with the numbers? That is, in
iwl_rx_queue_restock() we have:

        if (rxq->free_count <= RX_LOW_WATERMARK)
                queue_work(priv->workqueue, &priv->rx_replenish);

Would it help to make that value higher? Maybe queue the GFP_KERNEL
allocation when there are, for example, 50 or 100 free buffers
remaining?
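
For illustration only: assuming the threshold is the plain
RX_LOW_WATERMARK define (8 is the current value mentioned above, and 64
is just one example of a higher setting), the experiment would amount to
something like:

-#define RX_LOW_WATERMARK 8
+#define RX_LOW_WATERMARK 64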

Reinette

