Re: [PATCH 6/7] sched, x86: Provide a per-cpu preempt_count implementation

From: Eric Dumazet
Date: Tue Sep 10 2013 - 10:02:59 EST


On Tue, 2013-09-10 at 15:08 +0200, Peter Zijlstra wrote:

> +static __always_inline int preempt_count(void)
> +{
> + return __this_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
> +}

Not sure why you used the _4 suffix on all accessors?

>
> +#ifdef CONFIG_PREEMPT_COUNT
> + /*
> + * If it were not for PREEMPT_ACTIVE we could guarantee that the
> + * preempt_count of all tasks was equal here and this would not be
> + * needed.
> + */
> + task_thread_info(prev_p)->saved_preempt_count = __raw_get_cpu_var(__preempt_count);

this_cpu_read(__preempt_count) ?

> + __raw_get_cpu_var(__preempt_count) = task_thread_info(next_p)->saved_preempt_count;

this_cpu_write(__preempt_count,
	       task_thread_info(next_p)->saved_preempt_count);

> +#endif
> +
> this_cpu_write(kernel_stack,
> (unsigned long)task_stack_page(next_p) +
> THREAD_SIZE - KERNEL_STACK_OFFSET);

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/