Re: [RFC PATCH 2/7] x86/sci: add core implementation for system call isolation

From: Andy Lutomirski
Date: Mon Apr 29 2019 - 14:46:49 EST


On Sat, Apr 27, 2019 at 3:46 AM Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
>
> * Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>
> > * Andy Lutomirski <luto@xxxxxxxxxx> wrote:
> >
> > > > And no, I'm not arguing for Java or C#, but I am arguing for a saner
> > > > version of C.
> > >
> > > IMO three are three credible choices:
> > >
> > > 1. C with fairly strong CFI protection. Grsecurity has this (supposedly
> > > -- there's a distinct lack of source code available), and clang is
> > > gradually working on it.
> > >
> > > 2. A safe language for parts of the kernel, e.g. drivers and maybe
> > > eventually filesystems. Rust is probably the only credible candidate.
> > > Actually creating a decent Rust wrapper around the core kernel
> > > facilities would be quite a bit of work. Things like sysfs would be
> > > interesting in Rust, since AFAIK few or even no drivers actually get
> > > the locking fully correct. This means that naive users of the API
> > > cannot port directly to safe Rust, because all the races won't compile
> > > :)
> > >
> > > 3. A sandbox for parts of the kernel, e.g. drivers. The obvious
> > > candidates are eBPF and WASM.
> > >
> > > #2 will give very good performance. #3 gives potentially stronger
> > > protection against a sandboxed component corrupting the kernel overall,
> > > but it gives much weaker protection against a sandboxed component
> > > corrupting itself.
> > >
> > > In an ideal world, we could do #2 *and* #3. Drivers could, for
> > > example, be written in a language like Rust, compiled to WASM, and run
> > > in the kernel.
> >
> > So why not go for #1, which would still outperform #2/#3, right? Do we
> > know what it would take, roughly, and what the runtime overhead looks
> > like?
>
> BTW., CFI protection is in essence a compiler (or hardware) technique to
> detect stack frame or function pointer corruption after the fact.
>
> So I'm wondering whether there's a 4th choice as well, which avoids
> control flow corruption *before* it happens:
>
> - A C language runtime that is a subset of current C syntax and
> semantics used in the kernel, and which doesn't allow access outside
> of existing objects and thus creates a strictly enforced separation
> between memory used for data, and memory used for code and control
> flow.
>
> - This would involve, at minimum:
>
> - tracking every type and object and its inherent length and valid
> access patterns, and never losing track of its type.
>
> - being a lot more organized about initialization, i.e. no
> uninitialized variables/fields.
>
> - being a lot more strict about type conversions and pointers in
> general.
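
Just to make that concrete: "tracking every object and its inherent
length" presumably means something like fat pointers, where every
pointer carries the bounds of the object it was derived from and every
dereference gets checked against them. A rough sketch, purely for
illustration -- struct bounded_ptr and these helpers are hypothetical,
not anything that exists today:

#include <stdbool.h>
#include <stddef.h>

struct bounded_ptr {
        void *base;     /* start of the underlying object */
        size_t size;    /* size of that object in bytes */
        void *cur;      /* the actual pointer value */
};

static bool bounded_access_ok(struct bounded_ptr p, size_t len)
{
        char *start = p.base;
        char *cur = p.cur;
        size_t off;

        if (cur < start)
                return false;
        off = (size_t)(cur - start);

        /* Allow only accesses that fit entirely inside the object. */
        return off <= p.size && len <= p.size - off;
}

static int bounded_load_int(struct bounded_ptr p)
{
        if (!bounded_access_ok(p, sizeof(int)))
                return 0;       /* a real runtime would trap here instead */
        return *(int *)p.cur;
}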

You're not the only one to suggest this. There are at least a few
things that make it extremely difficult, if not impossible. For
example, consider this code:

/* Some arbitrary function that launders a pointer through an integer. */
extern unsigned long some_function(unsigned long addr);

void maybe_buggy(void)
{
        int a, b;
        int *p = &a;
        int *q = (int *)some_function((unsigned long)p);

        *q = 1; /* valid, or memory corruption?  depends entirely on some_function() */
}

If some_function(&a) returns &a, then all is well. But if
some_function(&a) returns &b or even a valid address of some unrelated
kernel object, then the code might be entirely valid and correct C,
but I don't see how the runtime checks are supposed to tell whether
the resulting address is valid or is a bug. This type of code is, I
think, quite common in the kernel -- it happens in every data
structure where we have unions of pointers and integers or where we
steal some known-zero bits of a pointer to store something else.
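
To make that last pattern concrete, here is roughly what the
bit-stealing idiom looks like (the names here are made up for
illustration, but the kernel is full of real variants of it):

/* Stored in a low pointer bit that is always zero for aligned objects. */
#define NODE_TAG        0x1UL

struct node;            /* some suitably aligned kernel object */

static inline unsigned long node_encode(struct node *n)
{
        /* Perfectly valid C: the pointer round-trips through an integer. */
        return (unsigned long)n | NODE_TAG;
}

static inline struct node *node_decode(unsigned long v)
{
        /*
         * The pointer comes back with arithmetic applied.  A checker
         * that tracks object bounds and provenance has no way to tell
         * this deliberate reconstruction from a corrupted value.
         */
        return (struct node *)(v & ~NODE_TAG);
}

The union-of-a-pointer-and-an-integer case is the same problem in
different clothes: once the value has been an integer, there is nothing
left for the checker to check it against.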

--Andy