[tip:sched/core] sched/clock: Fix early boot preempt assumption in __set_sched_clock_stable()

From: tip-bot for Peter Zijlstra
Date: Wed May 24 2017 - 06:28:33 EST


Commit-ID: 45aea321678856687927c53972321ebfab77759a
Gitweb: http://git.kernel.org/tip/45aea321678856687927c53972321ebfab77759a
Author: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
AuthorDate: Wed, 24 May 2017 08:52:02 +0200
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Wed, 24 May 2017 09:10:00 +0200

sched/clock: Fix early boot preempt assumption in __set_sched_clock_stable()

The stricter early boot preemption warnings found that
__set_sched_clock_stable() was incorrectly assuming we'd still be
running on a single CPU:

BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
caller is debug_smp_processor_id+0x1c/0x1e
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.12.0-rc2-00108-g1c3c5ea #1
Call Trace:
dump_stack+0x110/0x192
check_preemption_disabled+0x10c/0x128
? set_debug_rodata+0x25/0x25
debug_smp_processor_id+0x1c/0x1e
sched_clock_init_late+0x27/0x87
[...]

Fix it by disabling IRQs around the per-CPU access: that both pins us
to the current CPU (so the smp_processor_id() lookup is valid) and
keeps the tick from updating scd->tick* underneath us, so we read a
consistent snapshot.
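
For illustration, a minimal self-contained sketch of the pattern the
patch below adopts. This is not the actual kernel code: the struct is
reduced to the two tick fields and read_tick_pair() is a hypothetical
helper; only DEFINE_PER_CPU(), this_cpu_ptr() and
local_irq_save()/local_irq_restore() are real kernel APIs. Disabling
IRQs addresses both hazards at once: the task can no longer migrate off
the CPU, and the tick interrupt can no longer update the fields between
the two reads.

        #include <linux/percpu.h>
        #include <linux/irqflags.h>
        #include <linux/types.h>

        struct scd_sketch {             /* simplified stand-in */
                u64 tick_raw;           /* sched_clock() at last tick */
                u64 tick_gtod;          /* gtod time at last tick */
        };

        static DEFINE_PER_CPU(struct scd_sketch, scd_sketch);

        /* Hypothetical helper: read a consistent (raw, gtod) pair. */
        static void read_tick_pair(u64 *raw, u64 *gtod)
        {
                struct scd_sketch *scd;
                unsigned long flags;

                local_irq_save(flags);  /* no migration, no tick IRQ */
                scd = this_cpu_ptr(&scd_sketch);
                *raw  = scd->tick_raw;  /* both reads see the same tick */
                *gtod = scd->tick_gtod;
                local_irq_restore(flags);
        }

The patch itself uses the plain local_irq_disable()/local_irq_enable()
pair, since it runs in a context where IRQs are known to be enabled;
the save/restore form above is the more defensive variant.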

Reported-by: kernel test robot <xiaolong.ye@xxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Acked-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: lkp@xxxxxx
Cc: tipbuild@xxxxxxxxx
Link: http://lkml.kernel.org/r/20170524065202.v25vyu7pvba5mhpd@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
kernel/sched/clock.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index 1a0d389..ca0f8fc 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -133,12 +133,19 @@ static void __scd_stamp(struct sched_clock_data *scd)

 static void __set_sched_clock_stable(void)
 {
-	struct sched_clock_data *scd = this_scd();
+	struct sched_clock_data *scd;
 
 	/*
+	 * Since we're still unstable and the tick is already running, we have
+	 * to disable IRQs in order to get a consistent scd->tick* reading.
+	 */
+	local_irq_disable();
+	scd = this_scd();
+	/*
 	 * Attempt to make the (initial) unstable->stable transition continuous.
 	 */
 	__sched_clock_offset = (scd->tick_gtod + __gtod_offset) - (scd->tick_raw);
+	local_irq_enable();
 
 	printk(KERN_INFO "sched_clock: Marking stable (%lld, %lld)->(%lld, %lld)\n",
 			scd->tick_gtod, __gtod_offset,