Re: [PATCH] KVM: nVMX: nested VPID emulation

From: Jan Kiszka
Date: Wed Sep 16 2015 - 01:20:33 EST


On 2015-09-16 04:36, Wanpeng Li wrote:
> On 9/16/15 1:32 AM, Jan Kiszka wrote:
>> On 2015-09-15 12:14, Wanpeng Li wrote:
>>> On 9/14/15 10:54 PM, Jan Kiszka wrote:
>>>> Last but not least: the guest can now easily exhaust the host's pool
>>>> of vpids by simply spawning plenty of VCPUs for L2, no? Is this
>>>> acceptable, or should there be some limit?
>>> In v2 I reuse the value of vpid02 and issue one invvpid when vpid12
>>> changes, so the scenario you pointed out can be avoided.
>> I still cannot follow why there is no chance for L1 to consume all the
>> vpids that the host manages in that single, global bitmap simply by
>> spawning a lot of nested VCPUs for some L2. What forces L1 to issue a
>> nested vmclear - apparently the only way, besides destroying nested
>> VCPUs, to release such vpids again?
>
> In v2 there is no direct mapping between vpid02 and vpid12. The vpid02
> is per-vCPU for L0 and is reused; when the value of vpid12 changes, one
> invvpid is issued during nested vmentry. The vpid12 is allocated by L1
> for L2, so it does not influence the global bitmap (used for vpid01 and
> vpid02 allocation) even if L1 spawns a lot of nested vCPUs.

Ah, I see, you limit allocation to one additional host-side vpid per
VCPU, for nesting. That looks better. That also means all vpids for L2
will be folded onto that single vpid in hardware, right? So the major
benefit, in fact, comes from having separate vpids when switching
between L1 and L2.
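
Just to double-check my reading of v2, here is a minimal sketch of the
scheme as I understand it. Field and helper names like vpid02, last_vpid
and nested_cpu_has_vpid are my shorthand, not necessarily what the patch
uses:

/* Allocation is bounded: one additional host-side vpid per vCPU,
 * grabbed once when L1 enables VMX operation, never per L2 guest. */
static int handle_vmon(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/* ... existing vmxon checks ... */
	vmx->nested.vpid02 = allocate_vpid();
	return 1;
}

/* On nested vmentry, hardware always runs L2 with that single vpid02;
 * every vpid12 that L1 hands us folds onto it, so the only extra cost
 * is one invvpid whenever vpid12 changes. */
static void prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	if (enable_vpid && nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02) {
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
		if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
			vmx->nested.last_vpid = vmcs12->virtual_processor_id;
			__vmx_flush_tlb(vcpu, vmx->nested.vpid02);
		}
	} else {
		/* L1 does not use vpid for L2: fall back to vpid01 and
		 * flush on every L1<->L2 switch. */
		vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
		vmx_flush_tlb(vcpu);
	}
}

If that matches v2, the win is that vpid01 stays valid while L2 runs, so
switching between L1 and L2 no longer forces a TLB flush, at the price
of flushing vpid02 whenever L1 switches to a different vpid12.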

Jan

--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux