Re: Today Linus redesigns the networking driver interface (was Re: tulip driver in ...)

Donald Becker (becker@cesdis1.gsfc.nasa.gov)
Mon, 21 Sep 1998 17:22:35 -0400 (EDT)


First, thanks to Doug Ledford for correctly replying to some incorrect
assumptions over the past day, specifically for noting that "fast
interrupts", a.k.a. SA_INTERRUPT, has a specific meaning. I wasn't
proposing *slow* code in the interrupt handlers.

Another confusion: "fast path" or "IP fast path".
To some it means doing everything at interrupt time.
To me it means protocol code that spans layer boundaries (yes, breaks
abstractions!) by assuming IP packets with no options, written so that
branches are never taken in the common case. All other packet types are
handled as exceptions. It implies very little about the queue layer or
driver interaction -- it's done at the protocol dispatch layer.
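
To make the distinction concrete, here is a toy C sketch of that style.
This is not the kernel's actual fast-path code; the struct and names such
as ip_rcv_sketch() are invented, and byte-order handling is omitted. The
point is the single cheap test that lets the common case (IPv4, 20-byte
header, not a fragment) fall straight through, with everything else
handled as an exception:

/* Minimal illustration only -- invented names, not kernel code. */
#include <stdio.h>
#include <stdint.h>

struct ip_hdr_sketch {           /* simplified stand-in for struct iphdr */
    uint8_t  version_ihl;        /* version in high nibble, header words in low */
    uint8_t  protocol;
    uint16_t frag_off;           /* MF flag + fragment offset */
    /* remaining header fields omitted in this sketch */
};

static void deliver_fast(const struct ip_hdr_sketch *ip)  /* common case */
{
    printf("fast path: proto %d\n", ip->protocol);
}

static void deliver_slow(const struct ip_hdr_sketch *ip)  /* options, fragments */
{
    printf("exception path: proto %d\n", ip->protocol);
}

static void ip_rcv_sketch(const struct ip_hdr_sketch *ip)
{
    /* One combined test; the branch is not taken for ordinary packets. */
    if (ip->version_ihl == 0x45 && (ip->frag_off & 0x3fff) == 0)
        deliver_fast(ip);
    else
        deliver_slow(ip);
}

int main(void)
{
    struct ip_hdr_sketch pkt = { .version_ihl = 0x45, .protocol = 6 };
    ip_rcv_sketch(&pkt);
    return 0;
}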

________________

On Mon, 21 Sep 1998, Gerard Roudier wrote:
> > Wouldn't these problems with sharing interrupts be solved by removing
> > the SA_INTERRUPT flag entirely and simply having interrupt handlers

As Linus explained, SA_INTERRUPT has a specific use in the serial drivers,
and was excellent for that use. The bug was using it in the SCSI drivers.

I understand the temptation. When I first read about the "fast interrupts"
serial driver change, I attempted to rewrite my drivers to use SA_INTERRUPT
and my own bottom-half handler in an attempt to decrease the back-to-back
packet latency of the 8390. I had written only a handful of drivers at the
time, most based around the 8390, so it would have been an easy change.

I found that in writing a driver that did interrupt-time work comparable to
the serial driver, I was just writing a normal interrupt handler with an
expensive and convoluted entry point.

The SA_INTERRUPT semantics work for the 16450 serial driver because the
work is trivial and limited. It does
    is there a character waiting?
    get it
    finished!
Most other drivers are more complex and do a variable amount of work.
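
For illustration only -- the functions below are invented stand-ins, not
the real serial or 8390 code -- the 16450-style handler amounts to a
fixed, tiny loop with no allocation and no protocol work, which is exactly
the shape a network driver cannot reduce itself to:

#include <stdio.h>

static int chars_pending = 3;             /* pretend device state */

static int rx_char_ready(void) { return chars_pending > 0; }
static int read_rx_char(void)  { chars_pending--; return 'x'; }

/* 16450-style: is there a character waiting?  get it.  finished! */
static void serial_style_handler(void)
{
    while (rx_char_ready())
        putchar(read_rx_char());          /* bounded, trivial work */
    /* finished! -- no buffer allocation, no protocol processing here */
}

int main(void)
{
    serial_style_handler();
    putchar('\n');
    return 0;
}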

> I would like a network data flow control that works this way:
>
> 1 - Throw away the current packet if you cannot get resources for
> the next one. This avoids having to restart the pump after
> resources become available.
>
> 2 - Implement a 'throw away on resource lack' strategy (from the
> interrupt routine) that is as fast as possible. This will allow
> resisting attacks that are not too severe.

I have switched all my 100mbps drivers to a different policy. Most already
use skbuffs as receive buffers. Previously I would strive to keep a full
ring of Rx buffers, dropping packets inside the driver when the kernel ran
short of memory. Now I consume my Rx buffers and replenish the supply when
more memory is available.
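
Roughly, the policy looks like this sketch, with malloc() standing in for
skbuff allocation and a plain array standing in for the Rx descriptor ring
(this is not the actual driver code): a received buffer is handed up and
its slot left empty, and empty slots are topped up later when memory
allows, instead of dropping inside the driver.

#include <stdlib.h>
#include <stddef.h>

#define RX_RING_SIZE 16
#define PKT_BUF_SZ   1536

static void *rx_ring[RX_RING_SIZE];     /* stand-in for the Rx skbuff ring */

/* Replenish empty slots; quietly give up on failure and retry later. */
static void refill_rx_ring(void)
{
    int i;

    for (i = 0; i < RX_RING_SIZE; i++) {
        if (rx_ring[i] != NULL)
            continue;                    /* slot already has a buffer */
        rx_ring[i] = malloc(PKT_BUF_SZ); /* dev_alloc_skb() stand-in */
        if (rx_ring[i] == NULL)
            break;                       /* short on memory: try again later */
    }
}

/* Called per received packet: hand the buffer up and leave the slot
 * empty rather than copying or dropping inside the driver. */
static void *rx_consume(int entry)
{
    void *buf = rx_ring[entry];
    rx_ring[entry] = NULL;
    return buf;                          /* caller passes this up the stack */
}

int main(void)
{
    refill_rx_ring();                    /* initial fill */
    free(rx_consume(0));                 /* pretend a packet arrived in slot 0 */
    refill_rx_ring();                    /* replenish when memory allows */
    /* cleanup of the remaining buffers omitted in this sketch */
    return 0;
}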

> 3 - Implement a 'flood detection' heuristic based on event counters and
> real time, and the soft disabling as described above.

This isn't a good solution: future machines might be able to handle high
interrupt rates.
The only acceptable detection is a too-long Rx queue in the queue layer.
And the 'too long' metric should be better documented than our metric on the
Tx queue.
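
Something like the following sketch is what I mean -- invented names and
an illustrative limit, not the actual netif_rx() code: the drop decision
lives at the queue layer, against an explicit, documented backlog limit,
rather than in interrupt-rate heuristics inside the driver.

#include <stdio.h>

#define RX_BACKLOG_MAX 300        /* the "too long" metric, spelled out;
                                     300 is illustrative */

static int rx_backlog_len;        /* packets queued for protocol processing */
static unsigned long rx_dropped;  /* counter visible to the administrator */

/* Returns 1 if the packet was queued, 0 if it was dropped. */
static int queue_rx_packet(void *pkt)
{
    if (rx_backlog_len >= RX_BACKLOG_MAX) {
        rx_dropped++;             /* overload: shed load at the queue layer */
        return 0;
    }
    rx_backlog_len++;             /* in the real stack: enqueue and mark a bh */
    (void)pkt;
    return 1;
}

int main(void)
{
    char pkt[64];
    printf("queued: %d, dropped so far: %lu\n",
           queue_rx_packet(pkt), rx_dropped);
    return 0;
}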

I doubt the interrupt overhead is coming from the driver itself. We have
at least 80 usec between incoming packets at 100mbps. The work the driver
does for a normal packet Rx is minimal, and is completely overwhelmed by
the cost of allocating and clearing a replacement Rx buffer. If we wanted
to minimize the work done at interrupt time, that could best be done by
pre-initializing skbuffs of the most common size.
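
A rough sketch of that idea, again with malloc() standing in for skbuff
allocation and invented names: buffers of the most common size are set up
outside interrupt time, so the handler only has to pop one off a list
instead of allocating and clearing it.

#include <stdlib.h>
#include <stddef.h>

#define COMMON_BUF_SZ 1536
#define POOL_SIZE     32

static void *buf_pool[POOL_SIZE];
static int   pool_count;

/* Refill the pool outside interrupt time, when memory is available. */
static void refill_pool(void)
{
    while (pool_count < POOL_SIZE) {
        void *buf = malloc(COMMON_BUF_SZ);  /* pre-allocate, pre-initialize */
        if (buf == NULL)
            break;                          /* retry on a later pass */
        buf_pool[pool_count++] = buf;
    }
}

/* Cheap, bounded work suitable for the Rx interrupt path. */
static void *grab_prealloc_buf(void)
{
    return pool_count > 0 ? buf_pool[--pool_count] : NULL;
}

int main(void)
{
    refill_pool();
    free(grab_prealloc_buf());
    /* cleanup of the remaining pool omitted in this sketch */
    return 0;
}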

Donald Becker becker@cesdis.gsfc.nasa.gov
USRA-CESDIS, Center of Excellence in Space Data and Information Sciences.
Code 930.5, Goddard Space Flight Center, Greenbelt, MD. 20771
301-286-0882 http://cesdis.gsfc.nasa.gov/people/becker/whoiam.html
