On Wed, 2002-11-06 at 07:45, Linus Torvalds wrote:
> The solution is to make all the TSC calibration and offsets be per-CPU.
> That should be fairly trivial, since we _already_ do the calibration
> per-CPU anyway for bogomips (for no good reason except the whole process
> is obviously just a funny thing to do, which is the point of bogomips).
This was discussed earlier, but dismissed as being a can of worms. It
still is possible to do (and can be added as just another timer_opt
structure), but uglies like the spread-spectrum feature on the x440,
which actually runs each node at slightly varying speeds, pop up and
make my head hurt. Regardless, the attempt would probably help clean
things up, as you mentioned below. We also would need to round-robin the
timer interrupt, as each cpu would need a last_tsc_low point to generate
an offset. So I'm not opposed to it, but I'm not exactly eager to
implement it.
> Let's face it, we don't have that many tsc-related data structures. What,
> we have:
>
> - loops_per_jiffy, which is already a per-CPU thing, used by udelay()
> - fast_gettimeoffset_quotient - which is global right now and shouldn't
> be.
Good to see it's on your hit-list. :) I mailed out a patch for this
earlier; I'll resend later today.
> - delay_at_last_interrupt. See previous.
I'll get to this one too, as well as a few other spots where the
timer_opts abstraction isn't clean enough (cpu_khz needs to be pulled
out of the timer_tsc code, etc.).
thanks for the feedback
-john
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/