Re: [PATCH v9 11/12] x86, mpx: cleanup unused bound tables

From: Thomas Gleixner
Date: Tue Nov 04 2014 - 12:03:08 EST


On Tue, 4 Nov 2014, Dave Hansen wrote:
> On 11/03/2014 01:29 PM, Thomas Gleixner wrote:
> > On Mon, 3 Nov 2014, Dave Hansen wrote:
>
> > That's not really true. You can evaluate that information with
> > mmap_sem held for read as well. Nothing can change the mappings until
> > you drop it. So you could do:
> >
> > down_write(mm->bd_sem);
> > down_read(mm->mmap_sem);
> > evaluate_size_of_shm_to_unmap();
> > clear_bounds_directory_entries();
> > up_read(mm->mmap_sem);
> > do_the_real_shm_unmap();
> > up_write(mm->bd_sem);
> >
> > That should still be covered by the above scheme.
>
> Yep, that'll work. It just means rewriting the shmdt()/mremap() code to
> do a "dry run" of sorts.

Right. So either that or we hold bd_sem write locked across all write
locked mmap_sem sections. Dunno which solution is prettier :)
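
For illustration, the latter would boil down to something like this
(sketch only, with the same made up helper names as above plus a
placeholder for the actual unmap):

	down_write(mm->bd_sem);
	down_write(mm->mmap_sem);
	evaluate_size_of_shm_to_unmap();
	clear_bounds_directory_entries();
	unmap_the_shm_segment();	/* mmap_sem already write locked here */
	up_write(mm->mmap_sem);
	up_write(mm->bd_sem);

i.e. no separate dry run, but bd_sem is then held across the whole
write locked section.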

> Do you have any concerns about adding another mutex to these paths?

You mean bd_sem? I don't think it's an issue. You need to take
mmap_sem for write as well, so the extra lock is hardly going to be
the limiting factor.

> munmap() isn't as hot of a path as the allocation side, but it does
> worry me a bit that we're going to perturb some workloads. We might
> need to find a way to optimize out the bd_sem activity on processes that
> never used MPX.

I think using mm->bd_addr as a conditional for the bd_sem/mpx activity
is good enough. You just need to make sure that you store the result
of the starting conditional and use it for the closing one as well.

mpx = mpx_pre_unmap(mm);
{
	if (!kernel_managing_bounds_tables(mm))
		return 0;
	down_write(mm->bd_sem);
	...
	return 1;
}

unmap();

mpx_post_unmap(mm, mpx);
{
	if (mpx) {
		....
		up_write(mm->bd_sem);
	}
}

So this serializes nicely with the bd_sem protected write to
mm->bd_addr. There is a race there, but I don't think it matters. The
worst thing that can happen is a stale bounds table.
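
For the conditional itself something like the following should do
(sketch only; I'm assuming bd_addr holds an invalid marker until the
process actually enables MPX, the marker name is made up):

	static inline bool kernel_managing_bounds_tables(struct mm_struct *mm)
	{
		/* bd_addr is only updated with bd_sem held for write */
		return mm->bd_addr != MPX_INVALID_BOUNDS_DIR;
	}

A process which never used MPX never takes bd_sem in the unmap path,
so the extra lock should not be visible for the common case.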

Thanks,

tglx