Re: [RFC KVM 18/27] kvm/isolation: function to copy page table entries for percpu buffer

From: Andy Lutomirski
Date: Tue May 14 2019 - 16:35:19 EST


On Tue, May 14, 2019 at 11:09 AM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Tue, May 14, 2019 at 07:05:22PM +0200, Peter Zijlstra wrote:
> > On Tue, May 14, 2019 at 06:24:48PM +0200, Alexandre Chartre wrote:
> > > On 5/14/19 5:23 PM, Andy Lutomirski wrote:
> >
> > > > How important is the ability to enable IRQs while running with the KVM
> > > > page tables?
> > > >
> > >
> > > I can't say; I would need to check, but we probably need IRQs at least for
> > > some timers. Sounds like you would really prefer IRQs to be disabled.
> > >
> >
> > I think what amluto is getting at is:
> >
> > again:
> >         local_irq_disable();
> >         switch_to_kvm_mm();
> >         /* do very little -- (A) */
> >         VMEnter();
> >
> >         /* runs as guest */
> >
> >         /* IRQ happens */
> >         VMExit();
> >         /* inspect exit reason */
> >         if (/* IRQ pending */) {
> >                 switch_from_kvm_mm();
> >                 local_irq_restore();
> >                 goto again;
> >         }
> >
> >
> > but I don't know anything about VMX/SVM at all, so the above might not
> > be feasible. Specifically, I read earlier in this thread something about
> > how VMX allows NMIs somewhere around (A) where SVM did not -- or
> > something like that.
>
> For IRQs it's somewhat feasible, but not for NMIs, since NMIs are unblocked
> on VMX immediately after VM-Exit, i.e. there's no way to prevent an NMI
> from occurring while KVM's page tables are loaded.
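>
> To make that window concrete (reusing the helper names from Peter's
> sketch above):
>
>         local_irq_disable();
>         switch_to_kvm_mm();
>         VMEnter();
>
>         /*
>          * VM-Exit: NMIs are unblocked from the very first host
>          * instruction here, so an NMI handler can fire anywhere in
>          * this window, and everything it touches -- code, data and
>          * stack -- would have to be mapped in kvm_mm.
>          */
>
>         switch_from_kvm_mm();
>         local_irq_restore();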
>
> Back to Andy's question about enabling IRQs, the answer is "it depends".
> Exits due to INTR, NMI and #MC are considered high priority and are
> serviced before re-enabling IRQs and preemption[1]. All other exits are
> handled after IRQs and preemption are re-enabled.
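>
> As a rough sketch of that ordering (the function names here are made up
> for illustration, not the actual KVM code):
>
>         static void vcpu_run_once(struct kvm_vcpu *vcpu)
>         {
>                 preempt_disable();
>                 local_irq_disable();
>
>                 vmenter(vcpu);                   /* guest runs until VM-Exit */
>                 handle_high_priority_exit(vcpu); /* INTR, NMI, #MC: IRQs off */
>
>                 local_irq_enable();
>                 preempt_enable();
>
>                 handle_other_exit(vcpu);         /* everything else: IRQs on */
>         }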
>
> A decent number of exit handlers are quite short, e.g. CPUID, most RDMSR
> and WRMSR, any event-related exit, etc... But many exit handlers require
> significantly longer flows, e.g. EPT violations (page faults) and anything
> that requires extensive emulation, e.g. nested VMX. In short, leaving
> IRQs disabled across all exits is not practical.
>
> Before going down the path of figuring out how to handle the corner cases
> regarding kvm_mm, I think it makes sense to pinpoint exactly which exits
> a) are in the hot path for the use case (configuration) and b) can be
> handled fast enough to run with IRQs disabled. Generating that list
> might allow us to tightly bound the contents of kvm_mm and sidestep many
> of the corner cases, i.e. select VM-Exits are handled with IRQs disabled
> using KVM's mm, while "slow" VM-Exits go through the full context
> switch.
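>
> Something like this (all names hypothetical, just to illustrate the
> split; the EXIT_REASON_* values are the usual VMX exit reasons):
>
>         static int handle_exit_isolated(struct kvm_vcpu *vcpu, u32 exit_reason)
>         {
>                 switch (exit_reason) {
>                 case EXIT_REASON_CPUID:
>                 case EXIT_REASON_MSR_READ:
>                 case EXIT_REASON_MSR_WRITE:
>                         /* fast path: IRQs still off, kvm_mm still loaded */
>                         return handle_fast_exit(vcpu, exit_reason);
>                 default:
>                         /* slow path: full context switch, then the normal,
>                          * preemptible handling with IRQs on */
>                         switch_from_kvm_mm();
>                         local_irq_enable();
>                         return handle_slow_exit(vcpu, exit_reason);
>                 }
>         }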

I suspect that the context switch is a bit of a red herring. A
PCID-don't-flush CR3 write is, IIRC, under 300 cycles. Sure, it's
slow, but it's probably minor compared to the full cost of the VM
exit. The real pain point is kicking the sibling thread.
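
For reference, the no-flush switch is just a CR3 write with bit 63
(X86_CR3_PCID_NOFLUSH) set. A sketch, where kvm_mm_cr3 is a made-up
variable holding the kvm_mm PGD plus its PCID:

        static inline void switch_to_kvm_mm_noflush(unsigned long kvm_mm_cr3)
        {
                /* bit 63 set: keep TLB entries tagged with this PCID
                 * instead of flushing them on the switch */
                native_write_cr3(kvm_mm_cr3 | X86_CR3_PCID_NOFLUSH);
        }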

When I worked on the PTI stuff, I went to great lengths to never have
a copy of the vmalloc page tables. The top-level entry is either
there or it isn't, so everything is always in sync. I'm sure it's
*possible* to populate just part of it for this KVM isolation, but
it's going to be ugly. It would be really nice if we could avoid it.
Unfortunately, this interacts unpleasantly with having the kernel
stack in there. We can freely use a different stack (the IRQ stack,
for example) as long as we don't schedule, but that means we can't run
preemptible code.
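
The way "always in sync" works is that only top-level (PGD) entries
ever get copied, never the lower levels, so both page tables end up
pointing at the same p4d/pud pages and later vmalloc mappings show up
in both automatically. Roughly (the function name is made up):

        static void kvm_mm_share_pgd_range(struct mm_struct *kvm_mm,
                                           unsigned long start,
                                           unsigned long end)
        {
                unsigned long addr;

                for (addr = start; addr < end; addr = pgd_addr_end(addr, end)) {
                        pgd_t *src = pgd_offset_k(addr);    /* init_mm entry */
                        pgd_t *dst = pgd_offset(kvm_mm, addr);

                        /* share the lower-level tables, don't duplicate them */
                        set_pgd(dst, *src);
                }
        }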

Another issue is tracing, kprobes, etc. -- I don't think anyone will
like it if a kprobe in KVM either dramatically changes performance by
triggering isolation exits or outright crashes. So you may need to
restrict the isolated code to a file that is compiled with tracing off
and has everything marked NOKPROBE. Yuck.
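
If it comes to that, the mechanics are at least straightforward --
roughly (the function is hypothetical, and the file would also need its
ftrace flags removed in the Makefile):

        #include <linux/kprobes.h>

        static notrace void kvm_isolation_enter(void)
        {
                /* switch to kvm_mm; no tracepoints or probes beyond here */
        }
        NOKPROBE_SYMBOL(kvm_isolation_enter);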

I hate to say this, but at what point do we declare that "if you have
SMT on, you get to keep both pieces, simultaneously!"?