Re: NET_SCHED cbq dropping too many packets on a bonding interface

From: Kingsley Foreman
Date: Thu May 15 2008 - 17:28:03 EST

--------------------------------------------------
From: "Jarek Poplawski" <jarkao2@xxxxxxxxx>
Sent: Friday, May 16, 2008 4:16 AM
To: "Patrick McHardy" <kaber@xxxxxxxxx>
Cc: "Kingsley Foreman" <kingsley@xxxxxxxxxxxxxxxx>; "Eric Dumazet" <dada1@xxxxxxxxxxxxx>; "Andrew Morton" <akpm@xxxxxxxxxxxxxxxxxxxx>; <linux-kernel@xxxxxxxxxxxxxxx>; <netdev@xxxxxxxxxxxxxxx>
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface

On Thu, May 15, 2008 at 08:32:44PM +0200, Patrick McHardy wrote:
> Jarek Poplawski wrote:
>> On Thu, May 15, 2008 at 06:09:36PM +0200, Patrick McHardy wrote:
>> ...
>>> Do things improve if you set txqueuelen to a larger value
>>> *before* configuring the qdiscs?
>>
>> BTW, I hope it was *before*, but since pfifo_fast_enqueue() uses
>> "qdisc->dev->tx_queue_len" does it really matter? (Until it's
>> before the test, of course...)
>
> Yes, CBQ uses pfifo, not pfifo_fast. pfifo uses txqueuelen
> to initialize q->limit, but that's what's used during ->enqueue().

...My bad! I missed this, and this (alone!?) seems to explain the
puzzle. So, I hope it was really because of *not before* (and not
only size matters...)

Thanks,
Jarek P.
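
For reference, the asymmetry described above is visible in the scheduler
sources. Below is a condensed sketch of the 2.6.2x-era code (paraphrased
from net/sched/sch_generic.c and net/sched/sch_fifo.c, not a verbatim
copy; names and signatures vary slightly between versions): pfifo_fast
re-reads dev->tx_queue_len on every enqueue, while pfifo only snapshots
it once at init time, so raising txqueuelen after the qdisc already
exists changes nothing for pfifo.

/* pfifo_fast: the limit is read live on every enqueue, so a later
 * "ifconfig bond0 txqueuelen N" takes effect immediately. */
static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc)
{
        struct sk_buff_head *list = prio2list(skb, qdisc);

        if (skb_queue_len(list) < qdisc->dev->tx_queue_len) {
                qdisc->q.qlen++;
                return __qdisc_enqueue_tail(skb, qdisc, list);
        }
        return qdisc_drop(skb, qdisc);
}

/* pfifo/bfifo: with no explicit "limit" parameter, tx_queue_len is
 * copied into q->limit once here; later txqueuelen changes are
 * ignored, which is why the ordering (or an explicit limit) matters. */
static int fifo_init(struct Qdisc *sch, struct nlattr *opt)
{
        struct fifo_sched_data *q = qdisc_priv(sch);

        if (opt == NULL) {
                u32 limit = sch->dev->tx_queue_len ? : 1;  /* snapshot */

                if (sch->ops == &bfifo_qdisc_ops)
                        limit *= sch->dev->mtu;            /* bfifo limits bytes */

                q->limit = limit;
        } else {
                struct tc_fifo_qopt *ctl = nla_data(opt);

                if (nla_len(opt) < sizeof(*ctl))
                        return -EINVAL;

                q->limit = ctl->limit;                     /* explicit limit wins */
        }
        return 0;
}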


Running

tc qdisc add dev bond0 root pfifo limit 1000

or

tc qdisc add dev bond0 root handle 1: cbq bandwidth 2000Mbit avpkt 1000 cell 0
tc qdisc add dev bond0 parent 1: pfifo limit 1000

doesn't appear to result in dropped packets.
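
(For anyone reproducing this: per-qdisc drop counters show up in the
standard tc statistics output, and the same effect should be obtainable
by raising txqueuelen before the qdisc is created, since pfifo snapshots
it at init time. A minimal sketch, assuming the bond0 setup above:)

# watch the "dropped" counter on each qdisc
tc -s qdisc show dev bond0

# or: enlarge txqueuelen first, then let pfifo inherit it as its limit
ip link set dev bond0 txqueuelen 1000
tc qdisc add dev bond0 root handle 1: cbq bandwidth 2000Mbit avpkt 1000 cell 0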
