On Wed, Feb 21, 2001 at 10:07:32PM +0000, Alan Cox wrote:
> > that because the kernel was getting 99% of the cpu, the application was
> > getting very little, and thus the read wasn't happening fast enough, and
>
> Seems reasonable
>
> > This is NOT what I'm seeing at all.. the kernel load appears to be
> > pegged at 100% (or very close to it), the user space app is getting
> > enough cpu time to read out about 10-20Mbit, and FURTHERMORE the kernel
> > appears to be ACKING ALL the traffic, which I don't understand at all
> > (e.g. the transmitter is simply blasting 300MBit of tcp unrestricted)
>
> TCP _requires_ the remote end ack every 2nd frame regardless of progress.
YIPES. I didn't realize this was the case. How is end-to-end application
flow control handled when the bottleneck is the user-space reader and not
bandwidth? E.g. if I write a test app that does
    while (1) {
            sleep(5);
            read(sock, buf, 1);     /* drain one byte every 5 seconds */
    }
and the transmitter is unrestricted, what happens?
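(Spelled out, the reader I have in mind is something like the following --
untested, error handling stripped, port number arbitrary:)

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
            struct sockaddr_in sin;
            char buf[1];
            int ls, s;

            ls = socket(AF_INET, SOCK_STREAM, 0);
            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_addr.s_addr = htonl(INADDR_ANY);
            sin.sin_port = htons(5001);     /* arbitrary test port */
            bind(ls, (struct sockaddr *)&sin, sizeof(sin));
            listen(ls, 1);
            s = accept(ls, NULL, NULL);

            /* deliberately read far slower than the sender transmits,
             * so the socket receive queue backs up */
            while (1) {
                    sleep(5);
                    if (read(s, buf, 1) <= 0)   /* peer closed or error */
                            break;
            }
            close(s);
            close(ls);
            return 0;
    }

My naive expectation is that the receive queue fills, the advertised
window drops to zero, and the sender blocks in write() -- is that right?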
Does it have to do with TCP_FORMAL_WINDOW (e.g. does the advertised window
automatically shrink to zero as the receive queue backs up)?
Or is it only a CPU-loading problem? (I.e. is there a difference in queuing
behavior between (1) the user process not getting cycles and (2) the user
process simply failing to read?)
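(To pin down what I mean by "fails to read": on the sending side I am
imagining a blaster like this, again untested, with a made-up peer address:)

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void)
    {
            struct sockaddr_in sin;
            static char buf[65536];
            int s = socket(AF_INET, SOCK_STREAM, 0);

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_port = htons(5001);     /* matches the reader above */
            sin.sin_addr.s_addr = inet_addr("192.168.0.2"); /* made-up address */
            connect(s, (struct sockaddr *)&sin, sizeof(sin));

            /* blast as fast as write() allows; once the receiver's
             * window closes, write() should block (or return EAGAIN
             * if nonblocking).  From TCP's point of view a descheduled
             * reader and a non-reading reader ought to look the same:
             * the receive queue stays full either way. */
            while (write(s, buf, sizeof(buf)) > 0)
                    ;
            return 0;
    }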
Also, I have been reading up on CONFIG_HW_FLOWCONTROL. What is the
recommended way for the driver to stop receiving? In the sample tulip
code I see you can register an xon callback, but I can't tell if there
is a way for the driver to see the backlog.
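(My mental model of the interface, pieced together from the tulip sample,
is roughly the sketch below. netdev_register_fc() and the netdev_fc_xoff
bitmask are from my reading of the 2.4 tree, and enable_rx()/disable_rx()
are made-up stand-ins for whatever the real driver does, so treat all of
it as assumption:)

    #include <linux/netdevice.h>

    static int fc_bit;          /* bit index handed back at registration */

    /* Called by the core when the backlog has drained and it is
     * safe to start receiving again. */
    static void my_xon(struct net_device *dev)
    {
            enable_rx(dev);     /* made-up helper: re-enable rx */
    }

    static int my_open(struct net_device *dev)
    {
            fc_bit = netdev_register_fc(dev, my_xon);
            return 0;
    }

    /* ...and in the rx interrupt path, before taking more frames: */
    if (test_bit(fc_bit, &netdev_fc_xoff))
            disable_rx(dev);    /* made-up helper: stop rx until xon */

If that is right, the xoff bit tells the driver *when* to stop, but I
still don't see a way to query how deep the backlog actually is, which
is really what I'm asking.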
-nye