Re: [PATCH] ppp_generic: fix multilink fragment sizes

From: Ben McKeegan
Date: Wed Jun 02 2010 - 11:55:57 EST


Paoloni, Gabriele wrote:
> The proposed patch looks wrong to me.
>
> nbigger is already doing the job; I didn't use DIV_ROUND_UP because, in general, we don't always have to round up, otherwise we would exceed the total bandwidth.

I was basing this on the original code prior to your patch, which used DIV_ROUND_UP to get the fragment size. Looking more closely, I see your point: the original code started with the larger fragment size and decremented, rather than starting with the smaller size and incrementing as your code does. So that makes sense.



 	flen = len;
 	if (nfree > 0) {
 		if (pch->speed == 0) {
-			flen = totlen/nfree;
+			if (nfree > 1)
+				flen = DIV_ROUND_UP(len, nfree);
 			if (nbigger > 0) {
 				flen++;
 				nbigger--;

The important change here is the use of 'len' instead of 'totlen'. 'nfree' and 'len' should decrease roughly proportionally with each iteration of the loop, whereas 'totlen' remains unchanged. Thus (totlen/nfree) gets bigger on each iteration, whereas (len/nfree) should give roughly the same value each time.

However, without rounding up here I'm not sure the logic is right either, since the side effect of nbigger is to make len decrease faster, so it is not quite proportional to the decrease in nfree. Is there a risk of ending up on the nfree == 1 iteration with flen == len - 1, and thus generating a superfluous extra 1-byte-long fragment? That would be a far worse situation than a slight imbalance in the size of the fragments.

Perhaps the solution is to go back to a precalculated fragment size for the pch->speed == 0 case, as in the original code?

Regards,
Ben.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/