Linux & nanoseconds

From: Ulrich Windl (Ulrich.Windl@rz.uni-regensburg.de)
Date: Tue Sep 05 2000 - 01:34:17 EST


Hello,

I revised the code that calculates nanoseconds in Linux, and I
thought I'd drop some notes here:

As I found out, there is a systematic error of 167ns per tick, i.e.
almost 17us per second. This is caused by the timer chip that is used
to generate the "100 Hz" interrupts.
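
To illustrate where this comes from: the PIT can only be programmed
with an integer reload value, but its input clock is not an integer
multiple of 100 Hz, so every tick is slightly longer than the nominal
10ms. Below is a rough user-space sketch of the arithmetic; the
1193182 Hz input clock and the LATCH rounding are the commonly quoted
i386 values, not taken from any particular kernel, and the exact
per-tick figure depends on which input-clock value one assumes:

/*
 * Sketch: systematic tick-length error from programming the i8254
 * PIT with an integer reload value (LATCH).  PIT_FREQ / HZ is not an
 * integer, so the real tick is longer than the nominal 10ms.
 * All constants are assumptions for illustration.
 */
#include <stdio.h>

#define PIT_FREQ 1193182.0  /* assumed i8254 input clock, Hz */
#define HZ       100
#define LATCH    ((int)((PIT_FREQ + HZ / 2) / HZ))  /* = 11932 */

int main(void)
{
    double nominal_ns = 1e9 / HZ;                /* 10,000,000 ns */
    double actual_ns  = LATCH * 1e9 / PIT_FREQ;  /* real tick length */

    printf("nominal tick: %.1f ns\n", nominal_ns);
    printf("actual tick:  %.1f ns\n", actual_ns);
    printf("error: %.1f ns per tick, %.2f us per second\n",
           actual_ns - nominal_ns,
           (actual_ns - nominal_ns) * HZ / 1000.0);
    return 0;
}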

Linux uses the counter register of the timer chip to interpolate time
within a tick. The current implementation introduces another error of
around 160ns. I haven't checked, but maybe the two errors just
compensate each other.
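
For illustration, here is a rough user-space sketch of this kind of
interpolation, loosely after the i386 do_slow_gettimeoffset(). The
constants and the port I/O are assumptions for the sketch, not a
kernel excerpt; the integer scaling at the end is one place where
sub-microsecond rounding error can enter:

/*
 * Sketch of PIT-based intra-tick interpolation: latch the PIT
 * down-counter, read it back, and convert the counts elapsed since
 * the last reload into microseconds.
 */
#include <stdio.h>
#include <sys/io.h>            /* inb()/outb(); requires ioperm() */

#define LATCH 11932            /* assumed PIT reload value for HZ=100 */

static unsigned long pit_gettimeoffset_usec(void)
{
    unsigned int count;

    outb(0x00, 0x43);          /* latch counter 0 */
    count  = inb(0x40);        /* low byte */
    count |= inb(0x40) << 8;   /* high byte */

    /* counts elapsed since the last timer-interrupt reload ... */
    count = LATCH - count;

    /* ... scaled to microseconds; the multiply/divide rounding here
     * loses a fraction of a PIT count (one count is about 838ns). */
    return (count * 10000UL) / LATCH;
}

int main(void)
{
    if (ioperm(0x40, 4, 1))    /* need access to ports 0x40-0x43 */
        return 1;
    printf("offset within tick: %lu us\n", pit_gettimeoffset_usec());
    return 0;
}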

Finally, the resolution currently obtained from the TSC (PCC) is
limited to 16 cycles (or 160ns on a 100MHz CPU). Linux uses the TSC
only to interpolate between (within) ticks. This is for historical
reasons, and possibly because APM (ACPI?) may vary the CPU speed. The
current implementation does not recalibrate for a variable CPU speed,
so the tick interpolation is compressed towards the start of a tick
if the CPU is getting slower. However, if the timer chip slows down at
the same rate, the interpolation is fine, but the absolute time is
wrong.
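
For illustration, a rough sketch of TSC-based interpolation in the
spirit of the i386 do_fast_gettimeoffset(); all names and constants
here are illustrative, not kernel code. Because the quotient is
calibrated only once, a slowing CPU makes the computed offsets bunch
up at the start of the tick:

/*
 * Sketch: remember the TSC at the last timer interrupt and scale the
 * cycles elapsed since then by a quotient calibrated once at boot.
 */
#include <stdio.h>

typedef unsigned long long u64;

static u64 tsc_at_last_tick;  /* would be set in the timer interrupt */
static u64 ns_per_cycle_q32;  /* (ns per cycle) << 32, boot calibration */

static inline u64 rdtsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((u64)hi << 32) | lo;
}

/* nanoseconds elapsed within the current tick */
static unsigned long tsc_gettimeoffset_ns(void)
{
    u64 cycles = rdtsc() - tsc_at_last_tick;

    /* 32.32 fixed-point multiply; within one 10ms tick the 64-bit
     * product cannot overflow.  A fixed quotient means no adaptation
     * to a variable CPU clock. */
    return (unsigned long)((cycles * ns_per_cycle_q32) >> 32);
}

int main(void)
{
    ns_per_cycle_q32 = 10ULL << 32;  /* assume a 100MHz CPU: 10ns/cycle */
    tsc_at_last_tick = rdtsc();      /* pretend a tick just happened */
    printf("offset: %lu ns\n", tsc_gettimeoffset_ns());
    return 0;
}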

I have always tried to fix the most urgent problem first; there is
still potential for improvement. I need your experience as well.

The current modification (based on PPSkit-0.9.3) is "nanofix.diff.gz";
both files are located on your favourite Linux mirror in
pub/linux/daemons/ntp/PPS (or a very similar path).

Regards,
Ulrich
