Re: [V5][PATCH 4/6] x86, nmi: add in logic to handle multiple events and unknown NMIs

From: Don Zickus
Date: Wed Sep 21 2011 - 10:05:17 EST


On Wed, Sep 21, 2011 at 12:08:42PM +0200, Robert Richter wrote:
> On 20.09.11 10:43:10, Don Zickus wrote:
> > @@ -87,6 +87,16 @@ static int notrace __kprobes nmi_handle(unsigned int type, struct pt_regs *regs)
> >
> > handled += a->handler(type, regs);
> >
> > + /*
> > + * Optimization: only loop once if this is not a
> > + * back-to-back NMI. The idea is nothing is dropped
> > + * on the first NMI, only on the second of a back-to-back
> > + * NMI. No need to waste cycles going through all the
> > + * handlers.
> > + */
> > + if (!b2b && handled)
> > + break;
>
> In rare cases we will lose NMIs here.
>
> We see a back-to-back NMI only if a second NMI source triggers
> *after* the NMI handler has been entered. Depending on internal CPU
> timing, influenced by microcode and SMM code execution, the NMI
> handler may not be entered immediately. So all sources that trigger
> *before* the handler is entered raise only a single NMI, with no
> subsequent NMI.

Right, but that can only happen with the second NMI of the back-to-back
pair. The optimization only applies to the first NMI, on the assumption
that if multiple sources triggered you will always get a second NMI, so
the remaining sources can be processed on the second iteration (assuming
we correctly detect the back-to-back condition). When that second NMI
comes in, we have no idea how many NMIs were dropped to get here, so we
run all the handlers, on the assumption that there may be no further NMI
behind us to make up for the dropped ones.
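
To spell out the intent, here is a minimal user-space sketch of the
loop semantics (the b2b flag and the early break mirror the patch; the
two handlers, the fixed array, and main() are made up purely for
illustration):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct nmiaction {
	int (*handler)(unsigned int type, void *regs);
};

/* Two made-up NMI sources; return 1 if the source claimed the NMI. */
static int pmu_handler(unsigned int type, void *regs)  { return 1; }
static int ghes_handler(unsigned int type, void *regs) { return 0; }

static struct nmiaction actions[] = {
	{ .handler = pmu_handler },
	{ .handler = ghes_handler },
};

static int nmi_handle(unsigned int type, void *regs, bool b2b)
{
	int handled = 0;
	size_t i;

	for (i = 0; i < sizeof(actions) / sizeof(actions[0]); i++) {
		handled += actions[i].handler(type, regs);
		/*
		 * First NMI (!b2b): nothing has been dropped yet, so
		 * stop at the first source that claims the NMI; any
		 * source that latched before entry raises a
		 * back-to-back NMI and is handled then.
		 *
		 * Second NMI (b2b): we cannot know how many NMIs were
		 * collapsed into this one, so walk the whole chain.
		 */
		if (!b2b && handled)
			break;
	}
	return handled;
}

int main(void)
{
	printf("first NMI: handled=%d\n", nmi_handle(0, NULL, false));
	printf("back-to-back NMI: handled=%d\n", nmi_handle(0, NULL, true));
	return 0;
}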

Unless I misunderstood your point above?

>
> However, as these cases should be very rare, I think we can live with
> it in favor of the optimization of jumping out of the handler chain,
> which saves a lot of CPU cycles, especially under heavy PMI load.

I think it is covered as described above?

Cheers,
Don