Re: [PATCH v2] tg3: driver sleeps indefinitely when EEH errors exceed eeh_max_freezes

From: David Christensen
Date: Fri Jul 24 2020 - 20:19:41 EST


The driver function tg3_io_error_detected() calls napi_disable twice,
without an intervening napi_enable, when the number of EEH errors exceeds
eeh_max_freezes, resulting in an indefinite sleep while holding rtnl_lock.

Add check for pcierr_recovery which skips code already executed for the
"Frozen" state.

Signed-off-by: David Christensen <drc@xxxxxxxxxxxxxxxxxx>
---
drivers/net/ethernet/broadcom/tg3.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 7a3b22b35238..ebff1fc0d8ce 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -18168,8 +18168,8 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,

rtnl_lock();

- /* We probably don't have netdev yet */
- if (!netdev || !netif_running(netdev))
+ /* Could be second call or maybe we don't have netdev yet */
+ if (!netdev || tp->pcierr_recovery || !netif_running(netdev))
goto done;

/* We needn't recover from permanent error */


It appears this change is not sufficient. There's another failure case in which a TX timeout causes tg3_reset_task() to run, followed by a call to tg3_io_error_detected() from the EEH driver. Not all occurrences of this sequence cause a failure, but some do. Here's an abbreviated version of tg3_reset_task():

static void tg3_reset_task(struct work_struct *work)
{
	...
	tg3_netif_stop(tp);		/* >>> Calls tg3_napi_disable() */

	tg3_full_lock(tp, 1);

	if (tg3_flag(tp, TX_RECOVERY_PENDING)) {
		tp->write32_tx_mbox = tg3_write32_tx_mbox;
		tp->write32_rx_mbox = tg3_write_flush_reg32;
		tg3_flag_set(tp, MBOX_WRITE_REORDER);
		tg3_flag_clear(tp, TX_RECOVERY_PENDING);
	}

	tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
	err = tg3_init_hw(tp, true);
	if (err)
		goto out;

	tg3_netif_start(tp);		/* >>> Calls tg3_napi_enable() */

out:
	...
}

In the working case, tg3_init_hw() returns successfully, resulting in every instance of napi_disable() being followed by an instance of napi_enable().

In the failing case, tg3_init_hw() returns an error. (This is not surprising, since the system is now preventing the adapter from accessing its MMIO registers. I'm curious why it doesn't always fail.) When tg3_init_hw() fails, tg3_netif_start() is not called, and we end up with two sequential calls to napi_disable(), resulting in multiple hung task messages.
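
For reference, the hang mechanism itself is in napi_disable() in net/core/dev.c, which looks roughly like this on the kernel I'm testing (paraphrased from memory, so treat it as an approximation): the first disable leaves NAPI_STATE_SCHED set, and only napi_enable() clears it again, so a second napi_disable() with no intervening napi_enable() just loops in msleep() forever:

void napi_disable(struct napi_struct *n)
{
	might_sleep();
	set_bit(NAPI_STATE_DISABLE, &n->state);

	/* The first disable wins this bit and leaves it set on return;
	 * a second disable spins here until napi_enable() clears it,
	 * which in our case never happens.
	 */
	while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
		msleep(1);
	while (test_and_set_bit(NAPI_STATE_NPSVC, &n->state))
		msleep(1);

	hrtimer_cancel(&n->timer);

	clear_bit(NAPI_STATE_DISABLE, &n->state);
}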

A more complete log file is attached below.

Can you suggest another solution, or does it make sense to reconsider my v1 patch for this issue?
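
For what it's worth, one direction I've been playing with (a rough, untested sketch, not a tested patch; whether dev_close() is safe at this point and whether irq_sync really needs resetting here are open questions on my end) is to balance the napi_disable() in the tg3_init_hw() error path before giving up on the device:

	err = tg3_init_hw(tp, true);
	if (err) {
		tg3_full_unlock(tp);
		tp->irq_sync = 0;
		/* Balance the earlier tg3_netif_stop()/tg3_napi_disable()
		 * so a later napi_disable() from tg3_io_error_detected()
		 * doesn't sleep forever.
		 */
		tg3_napi_enable(tp);
		/* Clear RESET_TASK_PENDING first so tg3_reset_task_cancel()
		 * in the dev_close() path doesn't cancel_work_sync() the
		 * work item we're currently running from.
		 */
		tg3_flag_clear(tp, RESET_TASK_PENDING);
		dev_close(tp->dev);
		goto out;
	}

	tg3_netif_start(tp);		/* >>> Calls tg3_napi_enable() */

That would keep every napi_disable() paired with a napi_enable() even when the hardware init fails, at the cost of taking the interface down until something brings it back up.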

Dave

Attachment: vmcore-dmesg.txt.zip
Description: Zip archive