Re: 2.6.21-rt9 - IRQ23 consuming a steady 2.7% of CPU

From: Ingo Molnar
Date: Sun Jun 10 2007 - 13:25:12 EST



* Mark Knecht <markknecht@xxxxxxxxx> wrote:

> 23: 1739277 IO-APIC-fasteoi ohci_hcd:usb2, eth0

> 00:0a.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev a3)

ok, that's forcedeth. Does that behavior go away if you revert the
change below, via patch -p0 -R?
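
For reference, a minimal revert recipe, assuming the patch from this
mail is saved as forcedeth-rt.patch (made-up name) and is applied from
the directory that contains the linux-rt.q tree, since -p0 uses the
paths as-is:

   $ patch -p0 -R --dry-run < forcedeth-rt.patch   # verify it reverts cleanly
   $ patch -p0 -R < forcedeth-rt.patch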

but in general, -rt tracks IRQ CPU usage accurately, while mainline
does not measure it at all when the CPU is idle. So a frequent IRQ on
mainline might be using up a similar amount of CPU time as well, but
it's not reported.
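
One way to see this directly: under -rt each hardirq handler runs in
its own kernel thread, so its CPU time is accounted per thread and
shows up in top/ps. Assuming the IRQ threads carry the usual IRQ-<nr>
names of the -rt series:

   $ ps -eo pid,comm,pcpu | grep IRQ-23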

Ingo

---
drivers/net/forcedeth.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

Index: linux-rt.q/drivers/net/forcedeth.c
===================================================================
--- linux-rt.q.orig/drivers/net/forcedeth.c
+++ linux-rt.q/drivers/net/forcedeth.c
@@ -800,7 +800,7 @@ struct fe_priv {
  * Maximum number of loops until we assume that a bit in the irq mask
  * is stuck. Overridable with module param.
  */
-static int max_interrupt_work = 5;
+static int max_interrupt_work = 50;
 
 /*
  * Optimization can be either throuput mode or cpu mode
@@ -812,7 +812,7 @@ enum {
 	NV_OPTIMIZATION_MODE_THROUGHPUT,
 	NV_OPTIMIZATION_MODE_CPU
 };
-static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT;
+static int optimization_mode = NV_OPTIMIZATION_MODE_CPU;
 
 /*
  * Poll interval for timer irq
@@ -3108,9 +3108,17 @@ static int nv_napi_poll(struct net_devic
 	int retcode;
 
 	if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
+		spin_lock_irqsave(&np->lock, flags);
+		nv_tx_done(dev);
+		spin_unlock_irqrestore(&np->lock, flags);
+
 		pkts = nv_rx_process(dev, limit);
 		retcode = nv_alloc_rx(dev);
 	} else {
+		spin_lock_irqsave(&np->lock, flags);
+		nv_tx_done_optimized(dev, np->tx_ring_size);
+		spin_unlock_irqrestore(&np->lock, flags);
+
 		pkts = nv_rx_process_optimized(dev, limit);
 		retcode = nv_alloc_rx_optimized(dev);
 	}
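
Independent of the revert, the interrupt rate itself can be measured
from /proc/interrupts (the count is cumulative, so sample it twice):

   $ grep eth0 /proc/interrupts; sleep 10; grep eth0 /proc/interrupts

Divide the difference of the IRQ 23 counts by 10 to get interrupts
per second, before and after the change.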