Re: [GIT PULL] PM updates for 2.6.33

From: Linus Torvalds
Date: Mon Dec 07 2009 - 15:56:38 EST

On Mon, 7 Dec 2009, Rafael J. Wysocki wrote:
>
> So I guess the only thing we need at the core level is to call
> async_synchronize_full() after every stage of suspend/resume, right?

Yes and no.

Yes in the sense that _if_ everybody always uses "async_schedule()" (or
whatever the call is named - I've really only written pseudo-code and
haven't even tried to look up the details), then the only thing you need
to do is async_synchronize_full().
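
For concreteness, that model is basically this (a sketch only - the
async_schedule()/async_synchronize_full() calls are the real <linux/async.h>
interfaces, everything prefixed "my_" is made up):

	#include <linux/async.h>
	#include <linux/device.h>

	/* Driver side: push the slow part of suspend off to the async pool */
	static void my_suspend_work(void *data, async_cookie_t cookie)
	{
		struct device *dev = data;

		/* ... actually quiesce the hardware here ... */
	}

	static int my_suspend(struct device *dev)
	{
		async_schedule(my_suspend_work, dev);
		return 0;
	}

and then the core just does async_synchronize_full() at the end of each
stage before it moves on to the next one.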

But one of the nice things about using just the trivial rwlock model, and
letting any async users simply depend on that, is that we could rely
entirely on those device locks and allow drivers to do async shutdowns in
other ways too.

For example, I could imagine some driver just doing an async suspend (or
resume) that gets completed in an interrupt context, rather than being
done by 'async_schedule()' at all.
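
Something like this, say (again just a sketch - it assumes the per-device
rwsem ("dev->lock") from the pseudo-code below, and the hw_*() calls are
made up; note the non-owner unlock, since the irq handler isn't the task
that took the lock):

	#include <linux/interrupt.h>
	#include <linux/rwsem.h>

	static int my_suspend(struct device *dev)
	{
		down_read(&dev->lock);		/* async suspend now in flight */
		hw_start_powerdown(dev);	/* irq fires when hw is down */
		return 0;
	}

	static irqreturn_t my_irq(int irq, void *data)
	{
		struct device *dev = data;

		if (hw_powerdown_done(dev))
			up_read_non_owner(&dev->lock);
		return IRQ_HANDLED;
	}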

So in many ways it's nicer to serialize by just doing

	serialize_all_PM_events()
	{
		for_each_device(dev) {
			down_write(&dev->lock);
			up_write(&dev->lock);
		}
	}

rather than depending on something like async_synchronize_full(), which
obviously waits for all async events but has no way to wait for whatever
other completion mechanism some random driver might be using.

[ That "down+up" is kind of stupid, but I don't think we have a "wait for
  unlocked" rwsem operation. We could add one, and it would be cheaper for
  the case where the device never did anything async at all, and didn't
  really need to dirty that cacheline by doing that write lock/unlock
  pair. ]
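
That hypothetical operation might look something like this (sketch only -
rwsem_wait_unlocked() doesn't exist today):

	#include <linux/rwsem.h>

	static void rwsem_wait_unlocked(struct rw_semaphore *sem)
	{
		/* Fast path: purely a read, never dirties the cacheline */
		if (!rwsem_is_locked(sem))
			return;
		/* Slow path: queue behind whoever currently holds it */
		down_write(sem);
		up_write(sem);
	}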

But that really isn't a big deal. I think it would be perfectly ok to also
just say "if you do any async PM, you need to use 'async_schedule()'
because that's all we're going to wait for". It's probably perfectly fine.

Linus