Re: [PATCH 0/8] watchdog: Add support for keepalives triggered by infrastructure

From: David Teigland
Date: Wed Aug 05 2015 - 15:51:34 EST


On Wed, Aug 05, 2015 at 12:01:38PM -0700, Guenter Roeck wrote:
> I think I can understand why Wim was reluctant to accept your patch;
> I must admit I don't understand your use case either.

Very briefly: sanlock is a shared-storage-based lease manager, and the
expiration of a lease is tied to the expiration of the watchdog. I have
to ensure that the watchdog expires at or before the time that the lease
expires. This means I cannot allow a watchdog heartbeat without a
corresponding lease renewal on the shared storage. Otherwise, other
hosts' calculation of when the hard reset occurs will be wrong, and the
data on shared storage could be corrupted.
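
Roughly, the renewal loop works like this (just a sketch, not the actual
sanlock code; renew_lease() is a stand-in for the delta-lease I/O on
shared storage, and the 20 second interval is only illustrative):

/*
 * Sketch only, not the actual sanlock code.  renew_lease() stands in
 * for the lease write/read on shared storage.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

static int renew_lease(void)
{
        /* placeholder: do the lease I/O, 0 on success, -1 on failure */
        return 0;
}

int main(void)
{
        int fd = open("/dev/watchdog", O_WRONLY);

        if (fd < 0)
                return 1;

        for (;;) {
                /*
                 * The keepalive is sent only after the renewal has
                 * reached shared storage.  A failed renewal means no
                 * keepalive, so the watchdog fires at or before the
                 * time other hosts consider the lease expired.
                 */
                if (renew_lease() == 0)
                        ioctl(fd, WDIOC_KEEPALIVE, 0);
                sleep(20);
        }
}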

> I wonder if you are actually mis-using the watchdog subsystem to generate
> hard resets.

I am indeed using it to generate hard resets.

> After all, you could avoid the unexpected close situation with
> an exit handler in your application. That handler could catch anything but
> SIGKILL, but anyone using SIGKILL doesn't really deserve better.

I avoid the unexpected-close situation by prematurely closing the device,
so that the close itself generates the heartbeat, and then reopening it
if needed. That covers the SIGKILL case. So I have a workaround, but the
patch would still be nice.
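
In code the workaround amounts to something like this (sketch only, not
the exact sanlock code; the point is that no magic 'V' is written before
close, so the watchdog keeps running and the close supplies the
heartbeat):

/*
 * Sketch of the workaround: no magic 'V' is written before close, so
 * the close is "unexpected", the watchdog keeps running, and the close
 * itself provides the heartbeat.  Reopen later if the lease is still
 * being renewed.
 */
#include <fcntl.h>
#include <unistd.h>

static int wd_fd = -1;

static void heartbeat_and_close(void)
{
        if (wd_fd >= 0) {
                close(wd_fd);   /* unexpected close: heartbeat, keep running */
                wd_fd = -1;
        }
}

static int reopen_if_needed(void)
{
        if (wd_fd < 0)
                wd_fd = open("/dev/watchdog", O_WRONLY);
        return wd_fd < 0 ? -1 : 0;
}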

> If the intent is to reset the system after the application closes,
> executing "/sbin/restart -f" might be a safer approach than just killing
> the watchdog.

I need to reset the system if the application crashes, or if the
application is running but can't renew its lease. In the former case,
executing something doesn't work. In the latter case, I have done
something similar (with /proc/sysrq-trigger), but it doesn't always
apply, and I'd still want the hardware reset as redundancy.
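
For reference, the /proc/sysrq-trigger path I mean is nothing more than
this (sketch; it needs root and a kernel built with magic sysrq support):

/*
 * Sketch of the /proc/sysrq-trigger fallback: 'b' asks the kernel to
 * reboot immediately, without syncing or unmounting.
 */
#include <fcntl.h>
#include <unistd.h>

static void emergency_reboot(void)
{
        int fd = open("/proc/sysrq-trigger", O_WRONLY);

        if (fd >= 0) {
                if (write(fd, "b", 1) < 0)
                        ; /* nothing more we can do from here */
                close(fd);
        }
}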

> In addition to that, I don't think it is a good idea to rely on the assumption
> that the watchdog will expire exactly after the configured timeout.
> Many watchdog drivers implement a soft timeout on top of the hardware timeout,
> and thus already implement the internal heartbeat. Most of those drivers
> will stop sending internal heartbeats if user space did not send a heartbeat
> within the configured timeout period. The actual reset will then occur later,
> after the actual hardware watchdog timed out. This can be as much as the
> hardware timeout period, which may be substantial.

OK, thanks, I'll look into this in more detail. Is there a way to
identify which drivers behave this way, or do you know an example I can
look at? In the worst case, I'd have to extend the lease expiration time
by a full timeout period when such drivers are used.
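(For example, with a 60 second soft timeout sitting on top of a 60 second
hardware stage, the reset could land up to ~120 seconds after my last
renewal, so I'd have to treat the lease as live for that long; the
numbers are only for illustration.)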

Dave