Re: [PATCH v3 01/10] mm: add Kernel Electric-Fence infrastructure

From: Alexander Potapenko
Date: Tue Sep 29 2020 - 11:52:16 EST


On Tue, Sep 29, 2020 at 4:24 PM Mark Rutland <mark.rutland@xxxxxxx> wrote:
>
> On Mon, Sep 21, 2020 at 03:26:02PM +0200, Marco Elver wrote:
> > From: Alexander Potapenko <glider@xxxxxxxxxx>
> >
> > This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> > low-overhead sampling-based memory safety error detector of heap
> > use-after-free, invalid-free, and out-of-bounds access errors.
> >
> > KFENCE is designed to be enabled in production kernels, and has near
> > zero performance overhead. Compared to KASAN, KFENCE trades precision
> > for performance. The main motivation behind KFENCE's design is that with
> > enough total uptime KFENCE will detect bugs in code paths not typically
> > exercised by non-production test workloads. One way to quickly achieve a
> > large enough total uptime is when the tool is deployed across a large
> > fleet of machines.
> >
> > KFENCE objects each reside on a dedicated page, at either the left or
> > right page boundaries. The pages to the left and right of the object
> > page are "guard pages", whose attributes are changed to a protected
> > state, and cause page faults on any attempted access to them. Such page
> > faults are then intercepted by KFENCE, which handles the fault
> > gracefully by reporting a memory access error. To detect out-of-bounds
> > writes to memory within the object's page itself, KFENCE also uses
> > pattern-based redzones. The following figure illustrates the page
> > layout:
> >
> > ---+-----------+-----------+-----------+-----------+-----------+---
> >    | xxxxxxxxx | O :       | xxxxxxxxx |       : O | xxxxxxxxx |
> >    | xxxxxxxxx | B :       | xxxxxxxxx |       : B | xxxxxxxxx |
> >    | x GUARD x | J : RED-  | x GUARD x |  RED- : J | x GUARD x |
> >    | xxxxxxxxx | E :  ZONE | xxxxxxxxx | ZONE  : E | xxxxxxxxx |
> >    | xxxxxxxxx | C :       | xxxxxxxxx |       : C | xxxxxxxxx |
> >    | xxxxxxxxx | T :       | xxxxxxxxx |       : T | xxxxxxxxx |
> > ---+-----------+-----------+-----------+-----------+-----------+---
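
As an aside, for readers unfamiliar with the technique: the guard-page
mechanism above can be demonstrated entirely in userspace with
mprotect(2). The sketch below is illustrative only and is not KFENCE
code; in the kernel, the fault is intercepted by KFENCE's page fault
handling rather than a signal handler.

/*
 * Illustrative userspace demo of the guard-page technique (not KFENCE
 * code): an object page is flanked by PROT_NONE pages, so any access
 * that strays off the object page faults immediately.
 */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void segv_handler(int sig, siginfo_t *info, void *ctx)
{
	/* In the kernel, KFENCE would print a detailed report here. */
	fprintf(stderr, "guard page hit at %p\n", info->si_addr);
	_exit(0);
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	struct sigaction sa = { .sa_sigaction = segv_handler,
				.sa_flags = SA_SIGINFO };
	char *pool, *obj;

	sigaction(SIGSEGV, &sa, NULL);

	/* Three pages: guard | object | guard. */
	pool = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	mprotect(pool, page, PROT_NONE);            /* left guard */
	mprotect(pool + 2 * page, page, PROT_NONE); /* right guard */

	/* Place the object at the right page boundary, as in the figure,
	 * so that small out-of-bounds writes hit the right guard page. */
	obj = pool + 2 * page - 32;
	memset(obj, 0, 32); /* in bounds: fine */
	obj[32] = 1;        /* one byte past the object: faults */
	return 1;           /* not reached */
}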
> >
> > Guarded allocations are set up based on a sample interval (which can be
> > set via kfence.sample_interval). After expiration of the sample interval,
> > the next allocation through the main allocator (SLAB or SLUB) returns a
> > guarded allocation from the KFENCE object pool. At this point, the timer
> > is reset, and the next allocation is set up after the expiration of the
> > interval.
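
To make the sampling described above concrete, here is a minimal sketch
of the gate mechanism, assuming a static key and a delayed work item
(the names kfence_allocation_key and __kfence_alloc follow the patch;
the work callback and the kfence_sample_interval variable here are
simplified for illustration and not the exact implementation):

#include <linux/jiffies.h>
#include <linux/jump_label.h>
#include <linux/workqueue.h>

DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);

static unsigned long kfence_sample_interval = 100; /* ms */

static void toggle_allocation_gate(struct work_struct *work);
static DECLARE_DELAYED_WORK(kfence_timer, toggle_allocation_gate);

static void toggle_allocation_gate(struct work_struct *work)
{
	/* Open the gate: the next allocation is serviced by KFENCE;
	 * __kfence_alloc() closes it again via static_branch_disable()
	 * once a guarded allocation has been handed out. */
	static_branch_enable(&kfence_allocation_key);

	/* Re-arm the timer for the next sample interval. */
	schedule_delayed_work(&kfence_timer,
			      msecs_to_jiffies(kfence_sample_interval));
}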
>
> From other sub-threads it sounds like these addresses are not part of
> the linear/direct map.
For x86 these addresses belong to .bss, i.e. the "kernel text mapping"
section. Isn't that part of the linear map?
I also don't see lm_alias being used much outside the arm64 code.

> Having kmalloc return addresses outside of the
> linear map is going to break anything that relies on virt<->phys
> conversions, and is liable to make DMA corrupt memory. There were
> problems of that sort with VMAP_STACK, and this is why kvmalloc() is
> separate from kmalloc().
>
> Have you tested with CONFIG_DEBUG_VIRTUAL? I'd expect that to scream.

Just checked - it doesn't scream on x86.
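
For reference, a hypothetical smoke test for this concern (not part of
the patch; kfence_virt_phys_smoke_test is made up for illustration)
would do a virt<->phys round trip on an allocation that may be
KFENCE-backed; with CONFIG_DEBUG_VIRTUAL, virt_to_phys() itself is
expected to warn if the address is not covered by the linear map:

#include <linux/bug.h>
#include <linux/io.h>
#include <linux/slab.h>

static void kfence_virt_phys_smoke_test(void)
{
	void *obj = kmalloc(32, GFP_KERNEL); /* may be KFENCE-backed */

	if (obj) {
		phys_addr_t pa = virt_to_phys(obj);

		/* Must map back to the same virtual address. */
		WARN_ON(phys_to_virt(pa) != obj);
		kfree(obj);
	}
}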

> I strongly suspect this isn't going to be safe unless you always use an
> in-place carveout from the linear map (which could be the linear alias
> of a static carveout).
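
For concreteness, the static-carveout variant suggested here could look
roughly like the following (a sketch only; __kfence_pool_early and the
pool size are made up for the example, and this is not what the patch
currently does):

#include <linux/init.h>
#include <linux/mm.h> /* lm_alias() */

#define KFENCE_POOL_SIZE (255 * 2 * PAGE_SIZE) /* illustrative size */

/* Static carveout in the kernel image (.bss)... */
static char __kfence_pool_early[KFENCE_POOL_SIZE] __aligned(PAGE_SIZE);
static char *kfence_pool;

static void __init kfence_init_pool(void)
{
	/*
	 * ... addressed through its linear-map alias, so that
	 * virt<->phys conversions on KFENCE objects remain valid.
	 */
	kfence_pool = lm_alias(__kfence_pool_early);
}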
>
> [...]
>
> > +static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
> > +{
> > +	return static_branch_unlikely(&kfence_allocation_key) ? __kfence_alloc(s, size, flags) :
> > +			NULL;
> > +}
>
> Minor (unrelated) nit, but this would be easier to read as:
>
> static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
> {
> 	if (static_branch_unlikely(&kfence_allocation_key))
> 		return __kfence_alloc(s, size, flags);
> 	return NULL;
> }
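
For what it's worth, the reason kfence_alloc() is an __always_inline
static branch is visible at the call site: the allocator's fast path
only pays for a patched-in jump. An illustrative caller (the real
SLAB/SLUB hooks come in later patches of this series) might look like:

static void *slab_alloc_example(struct kmem_cache *s, gfp_t flags,
				size_t size)
{
	void *obj = kfence_alloc(s, size, flags);

	if (unlikely(obj))
		return obj; /* sampled: served from the KFENCE pool */

	/* ... fall through to the regular allocator fast path ... */
	return NULL; /* placeholder for the normal allocation */
}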
>
> Thanks,
> Mark.



--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München
