Re: [PATCH] LDT improvements

From: Andy Lutomirski
Date: Fri Dec 08 2017 - 11:38:53 EST


On Fri, Dec 8, 2017 at 3:31 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> * Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>
>> On Fri, 8 Dec 2017, Ingo Molnar wrote:
>> > * Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>> >
>> > > On Fri, 8 Dec 2017, Ingo Molnar wrote:
>> > > > * Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>> > > > > I don't love mucking with user address space. I'm also quite nervous about
>> > > > > putting it in or near anything that could pass an access_ok check, since we're
>> > > > > totally screwed if the bad guys can figure out how to write to it.
>> > > >
>> > > > Hm, robustness of the LDT address wrt. access_ok() is a valid concern.
>> > > >
>> > > > Can we have vmas with high addresses, in the vmalloc space for example?
>> > > > IIRC the GPU code has precedents in that area.
>> > > >
>> > > > Since this is x86-64, limitation of the vmalloc() space is not an issue.
>> > > >
>> > > > I like Thomas's solution:
>> > > >
>> > > > - have the LDT in a regular mmap space vma (hence per process ASLR randomized),
>> > > > but with the system bit set.
>> > > >
>> > > > - That would be an advantage even for non-PTI kernels, because mmap() is probably
>> > > > more randomized than kmalloc().
>> > >
>> > > Randomization is pointless as long as you can get the LDT address in user
>> > > space, i.e. w/o UMIP.
>> >
>> > But with UMIP unprivileged user-space won't be able to get the linear address of
>> > the LDT. Now it's written out in /proc/self/maps.
>>
>> We can expose it nameless like other VMAs, but then it's 128k in size so it
>> can be figured out. But when it's RO it's not really a problem; even
>> the kernel can't write to it.
>
> Yeah, ok. I don't think we should hide it - if it's in the vma space it should be
> listed in the 'maps' file, and with a descriptive name.
>
> Thanks,
>
> Ingo

Can we take a step back here? I think there are four vaguely sane
ways to make the LDT work:

1. The way it is right now -- in vmalloc space. The only real
downside is that it requires exposing that part of vmalloc space in
the user tables, which is a bit gross.

2. In some fixmap-like space, which is what my patch does, albeit
buggily. This requires a PGD that we treat as per-mm, not per-cpu,
but that's not so bad.

3. In one of the user PGDs but above TASK_SIZE_MAX. This is
functionally almost identical to #2, except that there's more concern
about exploits that write past TASK_SIZE_MAX (see the range-check
sketch right after this list).

4. In an actual vma. I don't see the benefit of doing this at all --
it's just like #2 except way more error-prone. Hell, you have to make
sure that you can't munmap or mremap it (see the demo at the end of
this mail), which isn't a consideration at all with the other choices.
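
To make the access_ok() worry from earlier in the thread concrete,
here's a tiny userspace sketch. The constants and the range_ok()
helper are made up for illustration -- this is not the kernel's
actual access_ok() -- but it shows why a mapping above TASK_SIZE_MAX
never passes a user-copy range check:

/*
 * Illustrative only: made-up constants and a made-up range_ok()
 * helper, not the kernel's real access_ok().  The point is that a
 * user-copy range check which insists on addr + size staying below
 * the user address limit rejects anything mapped above that limit.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_TASK_SIZE_MAX   0x00007ffffffff000ULL
#define EXAMPLE_LDT_SLOT        (EXAMPLE_TASK_SIZE_MAX + 0x200000ULL)

static bool range_ok(uint64_t addr, uint64_t size)
{
        /* reject wraparound, then anything ending above the limit */
        if (addr + size < addr)
                return false;
        return addr + size <= EXAMPLE_TASK_SIZE_MAX;
}

int main(void)
{
        printf("ordinary user buffer: %s\n",
               range_ok(0x00007f1234560000ULL, 4096) ? "ok" : "rejected");
        printf("above-limit LDT slot: %s\n",
               range_ok(EXAMPLE_LDT_SLOT, 4096) ? "ok" : "rejected");
        return 0;
}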

Why all the effort to make #4 work? #1 is working fine right now, and
#2 is half-implemented. #3 code-wise looks just like #2 except for
the choice of address and the interaction with PTI's shitty PGD
handling.
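
And just to spell out the munmap/mremap point about #4 -- nothing
kernel-specific here, just an ordinary anonymous mapping playing the
role of the hypothetical LDT vma; the names and sizes are only for
illustration:

/*
 * Plain userspace demo, no kernel code involved: an anonymous
 * 128k mapping standing in for a hypothetical LDT vma.  The owning
 * process can mremap() and munmap() it at will, which is exactly
 * the extra failure mode a vma-based LDT has to defend against.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        size_t size = 128 * 1024;
        void *p = mmap(NULL, size, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* the owning process can move/grow the mapping... */
        void *q = mremap(p, size, 2 * size, MREMAP_MAYMOVE);
        printf("mremap: %s\n", q == MAP_FAILED ? "failed" : "succeeded");

        /* ...and it can drop it entirely */
        if (q == MAP_FAILED)
                q = p;
        else
                size = 2 * size;
        printf("munmap: %s\n", munmap(q, size) ? "failed" : "succeeded");

        return 0;
}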