On Fri, Aug 24, 2007 at 09:04:56PM +0200, Bodo Eggert wrote:
> Linas Vepstas <linas@xxxxxxxxxxxxxx> wrote:
>> On Fri, Aug 24, 2007 at 03:59:16PM +0200, Jan-Bernd Themann wrote:
>>> 3) On modern systems the incoming packets are processed very fast. Especially
>>> on SMP systems when we use multiple queues we process only a few packets
>>> per napi poll cycle. So NAPI does not work very well here and the interrupt
>>> rate is still high.
>> worst-case network ping-pong app: send one packet, wait for reply, send one
>> packet, etc.
> Possible solution / possible brainfart:
> Introduce a timer, but don't start to use it to combine packets unless you
> receive n packets within the timeframe. If you receive less than m packets
> within one timeframe, stop using the timer. The system should now have a
> decent response time when the network is idle, and when the network is
> busy, nobody will complain about the latency. :-)
Ohh, that was inspirational. Let me free-associate some wild ideas.
Suppose we keep a running average of the recent packet arrival rate.
Let's say it's 10 per millisecond ("typical" for a gigabit eth running
flat-out). If we could poll the driver at a rate of 10-20 polls per
millisecond (i.e. letting the OS do other useful work for 0.05-0.1
millisec between polls), then we could potentially service the card
without ever having to enable interrupts on the card, and without
hurting latency.
If the packet arrival rate becomes slow enough, we go back to an
interrupt-driven scheme (to keep latency down).
The main problem here is that, even on HZ=1000 machines, this amounts to
10-20 polls per jiffy. If implemented in the kernel, that would require
using the high-resolution timers. And, umm, don't the HR timers require
a cpu timer interrupt to make them go? So it's not clear that this is much
of a win.