There are a few cluster/parallel computing libraries out there that are starting to allow "process migration"...
Suppose Linux could save the total state of a program to disk. For
instance, imagine a program like mozilla with many open windows: I give
it a SIGNAL-SAVETODISK and the process memory image is dropped to a
file. I can then turn off the computer and later continue using the
program where I left off, by loading it back into memory.
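To make the idea a bit more concrete, here is a very rough userspace sketch of what such a handler could look like. Everything in it (the signal choice, the image path, the function names) is made up for illustration, and it only dumps the process's own writable memory; it deliberately ignores open files, sockets, threads, registers and everything else the kernel holds on the process's behalf:

/*
 * Rough sketch of a cooperative "save to disk" handler.  On SIGUSR1
 * (standing in for the hypothetical SIGNAL-SAVETODISK) it walks
 * /proc/self/maps and appends every private, writable mapping to an
 * image file.  Using stdio inside a signal handler is not
 * async-signal-safe, but this is only a demo.
 */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void dump_writable_mappings(const char *image_path)
{
	FILE *maps = fopen("/proc/self/maps", "r");
	int img = open(image_path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
	char line[512];

	if (!maps || img < 0)
		return;

	while (fgets(line, sizeof(line), maps)) {
		unsigned long start, end;
		char perms[8];

		if (sscanf(line, "%lx-%lx %7s", &start, &end, perms) != 3)
			continue;
		/* Keep only private, writable mappings (heap, stack, data). */
		if (perms[1] != 'w' || perms[3] != 'p')
			continue;
		/* Write a small range header, then the raw bytes. */
		write(img, &start, sizeof(start));
		write(img, &end, sizeof(end));
		write(img, (void *)start, end - start);
	}
	close(img);
	fclose(maps);
}

static void on_savetodisk(int sig)
{
	(void)sig;
	dump_writable_mappings("/tmp/checkpoint.img");
}

int main(void)
{
	signal(SIGUSR1, on_savetodisk);	/* stand-in for SIGNAL-SAVETODISK */
	for (;;)
		pause();		/* pretend to be the real program */
}

Restoring is the hard half, of course: you would have to map everything back at the same addresses, rebuild the register state, and somehow reattach all the kernel-side resources, which is exactly where the trouble starts.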
Would that be possible? At least a program can already be given a ctrl-z
and be suspended that way.
There is a LOT of state though.. the moment you add networking into the
picture the amount of state just isn't funny anymore. Your X example is
a good one as well...
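One quick way to see how much of that state lives in the kernel rather than in the process image is to walk /proc/PID/fd; every entry there (files, pipes, sockets, the connection to the X server) is something a plain memory dump can't capture and that means nothing on another boot or another machine. A purely illustrative little tool:

/*
 * List the open file descriptors of a process (default: our own) by
 * reading the /proc/PID/fd symlinks.  The "socket:[...]" and
 * "pipe:[...]" entries are the ones a naive save-to-disk scheme has
 * no sane way to recreate.
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char dirpath[64], linkpath[512], target[PATH_MAX];
	struct dirent *de;
	DIR *dir;

	snprintf(dirpath, sizeof(dirpath), "/proc/%s/fd",
		 argc > 1 ? argv[1] : "self");
	dir = opendir(dirpath);
	if (!dir)
		return 1;

	while ((de = readdir(dir)) != NULL) {
		ssize_t n;

		if (de->d_name[0] == '.')
			continue;
		snprintf(linkpath, sizeof(linkpath), "%s/%s",
			 dirpath, de->d_name);
		n = readlink(linkpath, target, sizeof(target) - 1);
		if (n < 0)
			continue;
		target[n] = '\0';
		printf("fd %s -> %s\n", de->d_name, target);
	}
	closedir(dir);
	return 0;
}

Run it against a running mozilla and you'll see regular files, pipes and a socket to the X server, and that's before you even think about TCP connections with a live peer on the other end.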
One would assume that "saving it to disk" is simply a degenerate case of migrating the process...
Presuming they have process migration working (and it seemed close a while ago when I last looked), saving to a file might already be supported... I'd google "process migration" and you are likely to find a lot of discussion on this topic...
/mike
In fact, moving processes from one machine to another would be a brilliant feature at my work, since we run fairly large and time-consuming simulations of electronic circuits. If the kernel could natively support bouncing jobs back and forth between machines, that would really be something. Since we simulate with proprietary software, I suppose we can't rely on the simulator being rewritten to support such special libraries.
Does any other Unix variant have process bouncing already?