On Sat, Jun 24, 2000 at 10:06:38PM +0100, Alan Cox wrote:
> > CPU1's old thread gets rescheduled on CPU2 (CPU1 stays frozen), eventually
> > returns to userspace, CPU2 gets rescheduled after the down(&lock) in
> > cleanup_module, cleanup_module finishes.
> >
> > Note this is completely equivalent to UP
> >
> >     thread a                thread b
> >     down(&lock);
> >     schedule();
> >                             rmmod
> >                             cleanup_module
> >                             down(&lock)
>
> Firstly, why does it get scheduled on CPU2? What guarantee is there of that?
None of the other CPUs is scheduling, so it has to end up on CPU2.
> How will you handle the case when threads get pinned to a CPU in the future?
If the scheduler can't handle a realtime/kernel thread on one CPU that never
gives up its CPU, it's broken.
> The fundamental logic seems reasonable for simplistic cases. I think for
> the complex case you need to allow cleanup_module to fail now - which isn't
> a bad thing.
I don't think so. We essentially go back to UP for module unload, and in
2.5 we can fix the code to make that more obvious.
> We even have a set of threads to deal with the freeze handling (it's effectively
> bumping the idle task to highest priority on all other cpus)
2.5, IMHO.
> Harder question (which isn't currently solved):
>
>       CPU1                    CPU2
>       rmmod ide
>       sleeping in cleanup module
>                               open /dev/hda
>                               ????????
        CPU1                    CPU2
        rmmod ide
        freeze
        sleeping in cleanup module
        wakes up in cleanup module
        unfreeze
                                open /dev/hda

works.
        CPU
        rmmod ide
        sleeps in cleanup_module
        open /dev/hda

doesn't work, but that's a UP problem (so it's not within the scope of what
the patch is trying to fix).
Philipp