Re: [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes

From: Andi Kleen
Date: Tue Nov 04 2008 - 15:58:28 EST


On Tue, Nov 04, 2008 at 09:44:00PM +0100, Ingo Molnar wrote:
>
> * Alexander van Heukelum <heukelum@xxxxxxxxxxx> wrote:
>
> > On Tue, 4 Nov 2008 18:05:01 +0100, "Andi Kleen" <andi@xxxxxxxxxxxxxx>
> > said:
> > > > not taking into account the cost of cs reading (which I
> > > > don't suspect to be that expensive apart from writing,
> > >
> > > GDT accesses have an implied LOCK prefix. Especially
> > > on some older CPUs that could be slow.
> > >
> > > I don't know if it's a problem or not but it would need
> > > some careful benchmarking on different systems to make sure interrupt
> > > latencies are not impacted.
>
> That's not a real issue on anything produced in this decade as we have
> had per CPU GDTs in Linux for about a decade as well.
>
> It's only an issue on ancient CPUs that export all their LOCKed cycles
> to the bus. Pentium and older or so. The PPro got it right already.

??? LOCK slowness is not because of the bus. And I know you know
that, Ingo, so I don't know why you wrote that bogosity above.
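
For reference: the implied locked cycle is the CPU setting the
"accessed" bit in the descriptor with a locked read-modify-write
when a selector is loaded while that bit is still clear, at least
as I read the manuals. A purely illustrative sketch of the layout
follows; this is not the kernel's actual GDT setup code.

#include <stdint.h>

struct gdt_desc {
        uint16_t limit0;
        uint16_t base0;
        uint8_t  base1;
        uint8_t  type;          /* P, DPL, S and type bits */
        uint8_t  limit1_flags;  /* granularity, size, limit 19:16 */
        uint8_t  base2;
} __attribute__((packed));

/* 0x9b = 0x9a (present, ring 0, code, readable) | 1 (accessed).
   With the accessed bit pre-set like this, the locked write to
   the descriptor never happens on a selector load. */
static const struct gdt_desc example_kernel_cs = {
        .limit0       = 0xffff,
        .type         = 0x9b,
        .limit1_flags = 0xcf,   /* 4K granularity, 32-bit */
};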

> What matters is what i said before: the actual raw cycle count before
> and after the patch, on the two main classes of CPUs, and the amount

iirc there are between three and five classes of CPUs that
matter (P6, K8, P4, and possibly Atom and C3). But I would only
expect P4 to be a real problem.
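
By raw cycle count I mean the usual serialized rdtsc pair around
the path in question, taken on each of those CPU classes. A
minimal userspace skeleton just to show the idea (the real numbers
would obviously have to be taken around the actual irq entry
path):

#include <stdio.h>
#include <stdint.h>

/* cpuid serializes the pipeline, then rdtsc reads the TSC. */
static inline uint64_t rdtsc_serialized(void)
{
        uint32_t lo, hi;

        asm volatile("cpuid\n\t"
                     "rdtsc"
                     : "=a" (lo), "=d" (hi)
                     : "a" (0)
                     : "ebx", "ecx", "memory");
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        uint64_t t0, t1;

        t0 = rdtsc_serialized();
        /* code path under test would go here */
        t1 = rdtsc_serialized();
        printf("%llu cycles\n", (unsigned long long)(t1 - t0));
        return 0;
}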

> > That's good to know. I assume this LOCKed bus cycle only occurs if
> > the (hidden) segment information is not cached in some way? How many
> > segments are typically cached? In particular, does it optimize
> > switching between two segments?
> >
> > > Another reason I would be also careful with this patch is that it
> > > will likely trigger slow paths in JITs like qemu/vmware/etc.
> >
> > Software can be fixed ;).
>
> Yes, and things like vmware were never a reason to hinder Linux.

Hopefully the users agree with you on that.

But anyway, having to fix the JITs just to save 3-5k of memory
seems like a bad payoff in terms of effort:gain. Yes, I know you
personally wouldn't need to fix them, but wasting other engineers'
time is nearly as bad as wasting your own.

> > > An alternative BTW to having all the stubs in the executable would
> > > be to just dynamically generate them when the interrupt is set up.
> > > Then you would only have the stubs around for the interrupts which
> > > are actually used.
> >
> > I was trying to simplify things, not make it even less transparent
> > ;).

Doesn't make sense to me. The current code is not complex at all,
just not particularly efficient. Yours might be better (at some
risk), but "simpler" is probably not the right word to describe it.

>
> yep, the complexity of dynamic stubs is the last thing we need here.

I don't think it's particularly complex. You just have a template
of a few bytes and fill in the vector number and the jump target.
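
Roughly like this. Illustrative userspace sketch only, not actual
kernel code; it assumes the buffer is executable and the common
entry point is within +-2GB so a rel32 jump reaches it:

#include <stdint.h>
#include <string.h>

/*
 * Emit a 10 byte stub:
 *
 *      push $vector            ; 68 imm32
 *      jmp  common_handler     ; e9 rel32
 *
 * The rel32 is relative to the end of the jmp instruction,
 * i.e. to buf + 10. Error handling omitted.
 */
static void emit_irq_stub(uint8_t *buf, int32_t vector, void *handler)
{
        int32_t rel = (int32_t)((uint8_t *)handler - (buf + 10));

        buf[0] = 0x68;                  /* push imm32 */
        memcpy(buf + 1, &vector, 4);
        buf[5] = 0xe9;                  /* jmp rel32 */
        memcpy(buf + 6, &rel, 4);
}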

-Andi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/