[PATCH] sched_clock: fix jiffie fallback clock

From: Peter Zijlstra
Date: Mon Sep 15 2008 - 14:27:13 EST



David pointed out that the default sched_clock() fallback is broken in that it
wraps too soon: jiffies is initialized to INITIAL_JIFFIES, so a clock based on
the 32-bit jiffies value wraps shortly after boot. Fix this by using the 64-bit
jiffies value, offset by INITIAL_JIFFIES, so the clock is wide enough not to
wrap prematurely.
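
For reference, a minimal user-space sketch of the arithmetic involved, assuming
HZ=1000; the constants mirror the kernel's INITIAL_JIFFIES and NSEC_PER_SEC
definitions, and the jiffies counters are simulated rather than read from a
running kernel:

/* Sketch of why the 32-bit jiffies fallback wraps shortly after boot.
 * Assumes HZ = 1000; jiffies values are simulated, not taken from a kernel. */
#include <stdio.h>
#include <stdint.h>

#define HZ		1000ULL
#define NSEC_PER_SEC	1000000000ULL
/* jiffies is initialized 5 minutes before the 32-bit wrap point on purpose */
#define INITIAL_JIFFIES	((uint32_t)(-300 * 1000))

static uint64_t old_fallback(uint32_t jiffies)
{
	/* old formula: 32-bit counter, no offset */
	return (uint64_t)jiffies * (NSEC_PER_SEC / HZ);
}

static uint64_t new_fallback(uint64_t jiffies_64)
{
	/* new formula: 64-bit counter, offset so the clock starts at 0 */
	return (jiffies_64 - INITIAL_JIFFIES) * (NSEC_PER_SEC / HZ);
}

int main(void)
{
	uint64_t seconds[] = { 0, 60, 600 };	/* boot, 1 min, 10 min */

	for (int i = 0; i < 3; i++) {
		uint64_t ticks = seconds[i] * HZ;
		uint32_t j32 = (uint32_t)(INITIAL_JIFFIES + ticks);
		uint64_t j64 = (uint64_t)INITIAL_JIFFIES + ticks;

		printf("t=%4llus  old=%20llu ns  new=%15llu ns\n",
		       (unsigned long long)seconds[i],
		       (unsigned long long)old_fallback(j32),
		       (unsigned long long)new_fallback(j64));
	}
	return 0;
}

With these assumptions the old formula starts near 49.7 days' worth of
nanoseconds and jumps backwards once jiffies wraps at the 5-minute mark,
while the new one counts up monotonically from zero.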

Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
CC: David Howells <dhowells@xxxxxxxxxx>
---
arch/x86/kernel/tsc.c | 6 ++----
kernel/sched_clock.c | 2 +-
2 files changed, 3 insertions(+), 5 deletions(-)

Index: linux-2.6/arch/x86/kernel/tsc.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/tsc.c 2008-09-15 18:41:26.000000000 +0200
+++ linux-2.6/arch/x86/kernel/tsc.c 2008-09-15 18:41:33.000000000 +0200
@@ -46,10 +46,8 @@ u64 native_sched_clock(void)
* very important for it to be as fast as the platform
* can achive it. )
*/
- if (unlikely(tsc_disabled)) {
- /* No locking but a rare wrong value is not a big deal: */
- return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
- }
+ if (unlikely(tsc_disabled))
+ return (get_jiffies_64() - INITIAL_JIFFIES) * (NSEC_PER_SEC/HZ);

/* read the Time Stamp Counter: */
rdtscll(this_offset);
Index: linux-2.6/kernel/sched_clock.c
===================================================================
--- linux-2.6.orig/kernel/sched_clock.c 2008-09-15 18:41:26.000000000 +0200
+++ linux-2.6/kernel/sched_clock.c 2008-09-15 18:41:33.000000000 +0200
@@ -38,7 +38,7 @@
*/
unsigned long long __attribute__((weak)) sched_clock(void)
{
- return (unsigned long long)jiffies * (NSEC_PER_SEC / HZ);
+ return (get_jiffies_64() - INITIAL_JIFFIES) * (NSEC_PER_SEC/HZ);
}

static __read_mostly int sched_clock_running;

