Re: [PATCH 3/3] mm/vmalloc: Sync unmappings in vunmap_page_range()

From: Joerg Roedel
Date: Thu Jul 18 2019 - 05:17:49 EST

Hi Andy,

On Wed, Jul 17, 2019 at 02:24:09PM -0700, Andy Lutomirski wrote:
> On Wed, Jul 17, 2019 at 12:14 AM Joerg Roedel <joro@xxxxxxxxxx> wrote:
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 4fa8d84599b0..322b11a374fd 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -132,6 +132,8 @@ static void vunmap_page_range(unsigned long addr, unsigned long end)
> >  			continue;
> >  		vunmap_p4d_range(pgd, addr, next);
> >  	} while (pgd++, addr = next, addr != end);
> > +
> > +	vmalloc_sync_all();
> >  }
>
> I'm confused. Shouldn't the code in _vm_unmap_aliases handle this?
> As it stands, won't your patch hurt performance on x86_64? If x86_32
> is a special snowflake here, maybe flush_tlb_kernel_range() should
> handle this?

Imo this is the logical place to handle it. The function first unmaps
the area from the init_mm page-table and then syncs that change to all
other page-tables in the system, so the unmap and the sync live in one
place.
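
For background: on x86-32 with !SHARED_KERNEL_PMD every page-table in
the system carries its own copy of the kernel PMD entries, so a change
made in init_mm has to be propagated by hand. Roughly, this is what the
32-bit vmalloc_sync_all() does (simplified sketch of the code in
arch/x86/mm/fault.c; the real version also restricts the address range
further and takes the per-mm page_table_lock for the sake of Xen):

	void vmalloc_sync_all(void)
	{
		unsigned long address;

		/* Nothing to do when the kernel PMDs are shared. */
		if (SHARED_KERNEL_PMD)
			return;

		for (address = VMALLOC_START & PMD_MASK;
		     address < VMALLOC_END; address += PMD_SIZE) {
			struct page *page;

			spin_lock(&pgd_lock);
			list_for_each_entry(page, &pgd_list, lru)
				/*
				 * Make this page-table's kernel entry
				 * covering @address match init_mm.
				 */
				vmalloc_sync_one(page_address(page),
						 address);
			spin_unlock(&pgd_lock);
		}
	}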

Performance-wise it makes no difference if we put that into
_vm_unmap_aliases() instead, as that is called in the vmunmap path too.
But you are right that vmunmap/iounmap performance on x86-64 will
decrease to some degree. If that turns out to be a problem for some
workloads, I can also implement a completely separate code-path which
only syncs unmappings and is a no-op everywhere except on x86-32 with
!SHARED_KERNEL_PMD.
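
To sketch what I mean (hypothetical only, the helper name
vmalloc_sync_unmappings() and the exact guards are made up here):

	#ifdef CONFIG_X86_32
	/* Only 32-bit with !SHARED_KERNEL_PMD needs the manual sync. */
	void vmalloc_sync_unmappings(void)
	{
		if (SHARED_KERNEL_PMD)
			return;

		/*
		 * Same pgd_list walk as in vmalloc_sync_all(), but it
		 * only ever needs to clear entries, never to establish
		 * new mappings.
		 */
		vmalloc_sync_all();
	}
	#else
	static inline void vmalloc_sync_unmappings(void) { }
	#endif

vunmap_page_range() would then call vmalloc_sync_unmappings() instead
of vmalloc_sync_all(), keeping the extra work out of the x86-64 path.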

Regards,

Joerg