On Sun, 29 Dec 2002, Jeff Dike wrote:
> Step 3 is obviously where the meat of the problem is. The process needs
> to have available on it all the resources it had in UML -
> 	the same files
> 	network connections (puntable on a first pass)
> 	process relationships (I have no idea what to do about a parent
> process on the host, nor what to do with children whose parent has been
> migrated, or ipc mechanisms, except to do the Mosix thing and have little
> proxies sitting around passing information between UML and the host).
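
(To give a feel for just the first item, "the same files": below is a rough,
hypothetical sketch, not anything from the UML code, of a process recording
each of its open descriptors' path and offset via /proc, which is roughly the
minimum that would have to be reproduced on the host after a migration.
Sockets, process relationships and IPC are all much harder than this.)

/*
 * Hypothetical sketch: walk /proc/self/fd and record the pathname and
 * current offset of every open descriptor.  Error handling is minimal.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <dirent.h>

static void dump_open_files(void)
{
	DIR *dir = opendir("/proc/self/fd");
	struct dirent *ent;
	char link[64], target[4096];
	ssize_t len;

	if (dir == NULL)
		return;

	while ((ent = readdir(dir)) != NULL) {
		int fd = atoi(ent->d_name);

		/* skip ".", "..", and the descriptor opendir itself uses */
		if (ent->d_name[0] == '.' || fd == dirfd(dir))
			continue;

		snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
		len = readlink(link, target, sizeof(target) - 1);
		if (len < 0)
			continue;
		target[len] = '\0';

		/* the offset is only meaningful for seekable files */
		printf("fd %d -> %s, offset %ld\n",
		       fd, target, (long)lseek(fd, 0, SEEK_CUR));
	}
	closedir(dir);
}

int main(void)
{
	dump_open_files();
	return 0;
}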
I think people are at the point of working on this because it sounds like
a worthwhile feature, not because it's something that would actually be
used.
What possible application needs to be able to do a seamless kernel upgrade
that wouldn't be using a network?
If it's a batch-processing task, it can checkpoint itself and restart
after a reboot.
If it's a controller for specialized equipment, then either you can have the
process checkpoint itself, or you can't afford to pause long enough to do
the kernel swap in the first place (i.e. the device keeps operating regardless
and so may generate signals to the program during the time when you are
swapping kernels).
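
(In either case the checkpointing is the application's job, and it isn't hard.
Here is a rough sketch with a made-up file name and state struct, assuming all
of the job's progress fits in one struct:)

/*
 * Hypothetical sketch of application-level checkpointing for a batch
 * job: progress lives in one struct, written atomically (temp file plus
 * rename) every so often and reloaded at startup after a reboot.
 */
#include <stdio.h>
#include <unistd.h>

#define CKPT_FILE   "job.ckpt"
#define CKPT_TMP    "job.ckpt.tmp"
#define TOTAL_ITEMS 1000000L

struct job_state {
	long next_item;		/* first item not yet processed */
	double partial_sum;	/* whatever result accumulates  */
};

static void save_checkpoint(const struct job_state *st)
{
	FILE *f = fopen(CKPT_TMP, "wb");

	if (f == NULL)
		return;
	fwrite(st, sizeof(*st), 1, f);
	fflush(f);
	fsync(fileno(f));		/* make sure it survives the reboot */
	fclose(f);
	rename(CKPT_TMP, CKPT_FILE);	/* atomic replace of the old checkpoint */
}

static void load_checkpoint(struct job_state *st)
{
	FILE *f = fopen(CKPT_FILE, "rb");

	st->next_item = 0;
	st->partial_sum = 0.0;
	if (f == NULL)
		return;			/* no checkpoint: start from item 0 */
	if (fread(st, sizeof(*st), 1, f) != 1) {
		st->next_item = 0;	/* short/corrupt file: start over */
		st->partial_sum = 0.0;
	}
	fclose(f);
}

int main(void)
{
	struct job_state st;

	load_checkpoint(&st);
	while (st.next_item < TOTAL_ITEMS) {
		st.partial_sum += st.next_item;	/* stand-in for real work */
		st.next_item++;

		if (st.next_item % 10000 == 0)
			save_checkpoint(&st);	/* a reboot can happen here */
	}
	printf("done: %f\n", st.partial_sum);
	remove(CKPT_FILE);
	return 0;
}

(The temp-file-plus-rename trick keeps a half-written checkpoint from ever
replacing a good one.)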
As Alan Cox said, anyone really needing this will have redundant systems
anyway (to cover the case of hardware failure), so they will already be
dealing with things at a cluster level, and rebooting a machine to
complete the upgrade will not be that bad (they upgrade the backup, reboot
it, sync things up, fail over to the backup, upgrade and reboot the primary,
and keep running on the backup until the next upgrade cycle).
David Lang