Re: question about RCU dynticks_nesting

From: Rik van Riel
Date: Thu May 07 2015 - 11:44:50 EST


On 05/06/2015 08:59 PM, Frederic Weisbecker wrote:
> On Mon, May 04, 2015 at 04:53:16PM -0400, Rik van Riel wrote:

>> Ingo's idea is to simply have CPU 0 check the current task
>> on all other CPUs, see whether that task is running in system
>> mode, user mode, guest mode, irq mode, etc., and update that
>> task's vtime accordingly.
>>
>> I suspect the runqueue lock is probably enough to do that,
>> and between rcu state and PF_VCPU we probably have enough
>> information to see what mode the task is running in, with
>> just remote memory reads.
>
> Note that we could significantly reduce the overhead of vtime accounting
> by only accumulating utime/stime in per-cpu buffers, and actually accounting
> it on context switch or task_cputime() calls. That way we remove the overhead
> of the account_user/system_time() functions and the vtime locks.
>
> But doing the accounting from CPU 0, by just accounting 1 tick to the
> context we remotely observe, would certainly reduce the local accounting
> overhead to the strict minimum. And I think we shouldn't even lock the rq
> for that; we can live with some lack of precision.

We can live with lack of precision, but we cannot live with data
structures being re-used and pointers pointing off into la-la
land while we are following them :)
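
Something like the untested sketch below is the direction I have in
mind: take rcu_read_lock() and peek at the remote rq->curr, assuming
the task_struct stays valid under RCU (that assumption is exactly the
kind of thing that needs auditing). The mode checks are illustrative
only; a real version would also have to consult the remote CPU's
context tracking / RCU dynticks state to tell user from system from
irq mode:

	/*
	 * Untested sketch: sample what a remote CPU is running without
	 * taking its runqueue lock.  Assumes the task_struct cannot be
	 * freed out from under us while we hold rcu_read_lock().
	 */
	static void vtime_sample_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		struct task_struct *curr;

		rcu_read_lock();
		curr = ACCESS_ONCE(rq->curr);	/* racy, but stays allocated */

		if (is_idle_task(curr)) {
			/* account one tick of idle time to this cpu */
		} else if (curr->flags & PF_VCPU) {
			/* account one tick of guest time to curr */
		} else {
			/*
			 * User vs. system vs. irq would come from the
			 * remote CPU's context tracking state.
			 */
		}
		rcu_read_unlock();
	}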

> Now we must expect quite some overhead on CPU 0. Perhaps it should be
> an option, as I'm not sure every full dynticks use case wants that.

Let's see if I can get this to work before deciding whether we need yet
another configurable option :)
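
For what it's worth, the per-cpu buffering you describe could look
roughly like this (struct and function names are made up for
illustration; the flush would feed into the existing
account_user_time() / account_system_time() paths):

	/* pending cputime, accumulated locklessly on the local cpu */
	struct vtime_buf {
		u64	utime;	/* user time since the last flush */
		u64	stime;	/* system time since the last flush */
	};
	static DEFINE_PER_CPU(struct vtime_buf, vtime_buf);

	/* hot path: bump a per-cpu counter, no locks, no atomics */
	static inline void vtime_buf_add_user(u64 delta)
	{
		__this_cpu_add(vtime_buf.utime, delta);
	}

	/*
	 * Slow path: fold the buffer into the task at context switch,
	 * or when task_cputime() needs an up-to-date value.
	 */
	static void vtime_buf_flush(struct task_struct *tsk)
	{
		struct vtime_buf *buf = this_cpu_ptr(&vtime_buf);

		/* hand buf->utime/buf->stime to the regular
		 * account_user_time()/account_system_time() code */
		buf->utime = 0;
		buf->stime = 0;
	}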

It may be possible to have most of the overhead happen from schedulable
context, maybe softirq code. Right now I am still stuck in the giant
spaghetti mess under account_process_tick, with dozens of functions that
only work on CPU-local, task-local, or (depending on the architecture)
CPU- or task-local data...

--
All rights reversed