Re: [RFC PATCH 1/1] Move two pinned pages to non-movable node in kvm.

From: Tang Chen
Date: Mon Jun 30 2014 - 04:57:29 EST


Hi Gleb,

On 06/30/2014 02:00 PM, Gleb Natapov wrote:
On Mon, Jun 30, 2014 at 09:45:32AM +0800, Tang Chen wrote:
On 06/21/2014 04:39 AM, Marcelo Tosatti wrote:
On Fri, Jun 20, 2014 at 05:31:46PM -0300, Marcelo Tosatti wrote:
IIRC your shadow page pinning patch series supports flushing of ptes
by mmu notifier by forcing an MMU reload and, as a result, faulting
pinned pages back in during the next entry. Your patch series does not
pin pages by elevating their page count.

No, but the PEBS series does, and it's required to stop swap-out
of the page.

Well actually no because of mmu notifiers.

Tang, can you implement mmu notifiers for the other breaker of
mem hotplug?

Hi Marcelo,

I made a patch that updates the ept and apic pages when they are found
again at the next ept violation, and I also updated the APIC_ACCESS_ADDR
phys_addr. The pages can be migrated, but the guest crashed.
How does it crash?

It just stopped running. The guest system is dead.
I'll try to debug it and give some more info.



How do I stop the guest from accessing apic pages in the mmu_notifier
when page migration starts? Do I need to stop all the vcpus by setting
the vcpu state to KVM_MP_STATE_HALTED? If so, the vcpus will not be able
to reach the next ept violation.
When the apic access page is unmapped from the ept page tables by the
mmu notifier, you need to set its value in the VMCS to a physical address
that will never be mapped into guest memory, zero for instance. You can
do this by introducing a new KVM_REQ_ bit and setting the VMCS value
during the next vcpu vmentry. On the ept violation you need to update
the VMCS pointer to the newly allocated physical address; you can use
the same KVM_REQ_ mechanism again.
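A minimal sketch of what Gleb describes might look like the following.
The request bit KVM_REQ_APIC_PAGE_RELOAD and the helper names are
assumptions made up for illustration, not code from any posted patch;
a real patch would define the request bit in kvm_host.h and route the
VMCS write through the usual kvm_x86_ops indirection instead of calling
the VMX-only vmcs_write64() directly.

/*
 * Sketch only: KVM_REQ_APIC_PAGE_RELOAD and the helpers below are
 * hypothetical names used for illustration.
 */

/* mmu notifier path: the apic access page is about to be unmapped or
 * migrated, so ask every vcpu to re-resolve it on its next vmentry. */
static void kvm_request_apic_page_reload(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
		kvm_vcpu_kick(vcpu);	/* force a vmexit so the request is seen */
	}
}

/* vmentry path: re-resolve the page backing the apic base gfn and point
 * APIC_ACCESS_ADDR at its (possibly new) physical address.  While the
 * page is transiently unmapped, an address never mapped into the guest
 * (e.g. zero) could be written instead, as suggested above. */
static void vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
{
	struct page *page;

	page = gfn_to_page(vcpu->kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
	if (!is_error_page(page))
		vmcs_write64(APIC_ACCESS_ADDR, page_to_phys(page));
}

	/* in vcpu_enter_guest(), next to the other kvm_check_request() calls */
	if (kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
		vcpu_reload_apic_access_page(vcpu);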


So, may I write any specific value into APIC_ACCESS_ADDR to stop the
guest from accessing the apic page?

Any phys address that will never be mapped into the guest's memory should work.

Thanks for the advice. I'll try it.

Thanks.