Re: [RFC] vmstat: Avoid waking up idle-cpu to service shepherd work

From: Viresh Kumar
Date: Mon Mar 30 2015 - 08:02:22 EST


On 29 March 2015 at 15:54, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Sat, Mar 28, 2015 at 02:44:57PM +0100, Peter Zijlstra wrote:
>> > Now there are few issues I see here (Sorry if they are all imaginary):
>> > - In case a timer re-arms itself from its handler and is migrated from CPU A to B, what
>> > happens if the re-armed timer fires before the first handler finishes ? i.e. timer->fn()
>> > hasn't finished running on CPU A and it has fired again on CPU B. Wouldn't this expose
>> > us to a lot of other problems? It wouldn't be serialized to itself anymore ?
>>
>> What I said above.
>
> What I didn't say, but had thought of is that __run_timer() should skip
> any timer that has RUNNING set -- for obvious reasons :-)

The quote below is copied from your first reply, so you probably did
already say that? :)

> Also, once you have tbase_running, we can take base->running_timer out
> altogether.

I wanted to check that I have understood it correctly.

Are you saying that:

Case 1.) If we find tbase_running set on cpuY (because the timer was
re-armed from its handler on cpuX and has been migrated to cpuY), then
we should drop the timer from the list without calling its handler (as
the handler is already running in parallel)?

Or

Case 2.) We keep retrying until the handler running on the other CPU finishes?


I have a few queries for both cases.

Case 1.) Would that be fair to the timer's user, as the timer may get
lost completely? If we skip the timer on cpuY here, it won't be
enqueued again and so that expiry will be lost.
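
Just to make sure we are talking about the same thing, this is roughly
how I read case 1 (only a sketch against what is being proposed in this
thread; tbase_running, timer_running(), next_expired_timer() and
detach_timer() are assumed names here, not existing interfaces):

	/* Sketch of case 1 inside the expiry loop on cpuY, not real code. */
	spin_lock_irq(&base->lock);
	while ((timer = next_expired_timer(base))) {
		if (timer_running(timer)) {
			/*
			 * The handler is still running on cpuX; the timer
			 * re-armed itself there and got migrated to us
			 * (cpuY).  Detaching it without calling the handler
			 * means this expiry is never serviced and nothing
			 * will requeue it, so it is silently lost.
			 */
			detach_timer(timer, true);
			continue;
		}
		detach_timer(timer, true);
		spin_unlock_irq(&base->lock);
		timer->function(timer->data);
		spin_lock_irq(&base->lock);
	}
	spin_unlock_irq(&base->lock);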

Case 2.) If we keep waiting for the first handler to finish:
- cpuY may waste cycles while it waits for the handler to finish on cpuX.
- We may need to drop and re-take the base lock on cpuY, so that cpuX
can take cpuY's lock to clear tbase_running. And that might be racy,
I'm not sure.
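
And case 2 would roughly look like this (again only a sketch, same
assumed names as above), which is where the unlock/lock part worries me:

		if (timer_running(timer)) {
			/*
			 * The handler is still running on cpuX.  cpuX will
			 * have to take *our* base lock to clear
			 * tbase_running (the timer has migrated to cpuY),
			 * so we must drop it here or cpuX can never finish.
			 */
			spin_unlock_irq(&base->lock);
			cpu_relax();
			spin_lock_irq(&base->lock);
			/*
			 * While the lock was dropped the timer may have
			 * been re-armed, migrated again or even freed, so
			 * we have to start over and revalidate -- this is
			 * the bit that looks racy to me.
			 */
			continue;
		}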

--
viresh
--