Re: [PATCH] vhost: support up to 509 memory regions

From: Igor Mammedov
Date: Tue Feb 17 2015 - 09:45:00 EST


On Tue, 17 Feb 2015 13:32:12 +0100
"Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:

> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
> >
> >
> > On 17/02/2015 10:02, Michael S. Tsirkin wrote:
> > > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
> > > > to match KVM_USER_MEM_SLOTS fixes the issue for vhost-net.
> > > >
> > > > Signed-off-by: Igor Mammedov <imammedo@xxxxxxxxxx>
> > >
> > > This scares me a bit: each region is 32 bytes, so we are
> > > talking about a 16K allocation that userspace can trigger.
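
For reference, the arithmetic behind the 16K figure, using the region
descriptor from include/uapi/linux/vhost.h:

	struct vhost_memory_region {
		__u64 guest_phys_addr;
		__u64 memory_size;	/* bytes */
		__u64 userspace_addr;
		__u64 flags_padding;	/* No flags are currently specified. */
	};	/* 4 * 8 = 32 bytes per region */

	/* 509 * 32 = 16288 bytes, plus the 8-byte struct vhost_memory
	 * header: one ~16K kmalloc() triggerable from userspace. */
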
> >
> > What's bad with a 16K allocation?
>
> It fails when memory is fragmented.
>
> > > How does kvm handle this issue?
> >
> > It doesn't.
> >
> > Paolo
>
> I'm guessing kvm doesn't do memory scans on the data path;
> vhost does.
>
> qemu is just doing things that the kernel didn't expect it to need.
>
> Instead, I suggest reducing the number of GPA<->HVA mappings:
>
> you have GPA 1,5,7
> map them at HVA 11,15,17
> then you can have 1 slot: 1->11
>
> To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> or something like this.
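If I read that right, the reservation part would look something like
this in userspace (a minimal sketch, not tested; the size is made up):

	#include <stdint.h>
	#include <sys/mman.h>

	/* Hypothetical size of the whole memory hotplug window. */
	#define HOTPLUG_REGION_SIZE	(512ULL << 30)

	/* Reserve one contiguous HVA range so libc can't hand the holes
	 * out again; actual RAM gets mapped over it later with MAP_FIXED. */
	static void *reserve_hotplug_region(void)
	{
		void *base = mmap(NULL, HOTPLUG_REGION_SIZE, PROT_NONE,
				  MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
				  -1, 0);
		return base == MAP_FAILED ? NULL : base;
	}
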
Let's suppose that we add an API to reserve the whole memory hotplug
region with MAP_NORESERVE and pass it as a memslot to KVM.

Then what will happen when the guest accesses a region that is not
actually mapped?
This memslot will also be passed to vhost as a region; is that really ok?
I don't know what else it might break.
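
Concretely, the case I'm worried about is a single slot covering the
whole reservation (sketch only; vm_fd, the slot number and the
addresses are placeholders):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* One KVM memslot spanning the entire reserved-but-mostly-unmapped
	 * hotplug range. */
	static int set_hotplug_memslot(int vm_fd, void *hva_base,
				       uint64_t gpa_base, uint64_t size)
	{
		struct kvm_userspace_memory_region r = {
			.slot            = 1,
			.flags           = 0,
			.guest_phys_addr = gpa_base,
			.memory_size     = size,
			.userspace_addr  = (uintptr_t)hva_base,
		};
		return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &r);
	}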

As an alternative:
we can filter out hotplugged memory so that vhost continues to work
with only the initial memory.
So the question is whether we have to tell vhost about hotplugged
memory at all.
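
The filtering itself would be straightforward on the QEMU side when
building the VHOST_SET_MEM_TABLE payload; a rough sketch, where
region_is_hotplugged() stands in for whatever bookkeeping QEMU
actually has:

	#include <stdbool.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	/* Hypothetical predicate: true for hotplugged (non-initial) RAM. */
	bool region_is_hotplugged(const struct vhost_memory_region *r);

	/* Hand vhost a table built from the initial memory regions only. */
	static int vhost_set_initial_mem(int vhost_fd,
					 const struct vhost_memory_region *all,
					 unsigned nall)
	{
		struct vhost_memory *mem;
		unsigned i, n = 0;
		int ret;

		mem = calloc(1, sizeof(*mem) + nall * sizeof(all[0]));
		if (!mem)
			return -1;
		for (i = 0; i < nall; i++) {
			if (region_is_hotplugged(&all[i]))
				continue;
			mem->regions[n++] = all[i];
		}
		mem->nregions = n;
		ret = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
		free(mem);
		return ret;
	}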

>
> We can discuss smarter lookup algorithms, but I'd rather
> userspace didn't do things that we then have to
> work around in the kernel.
>
>
