[PATCH 2/2] mm/slub.c: add a naive detection of double free or corruption

From: Alexander Popov
Date: Mon Jul 24 2017 - 16:16:28 EST


On 06.07.2017 03:27, Kees Cook wrote:
> This SLUB free list pointer obfuscation code is modified from Brad
> Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
> on my understanding of the code. Changes or omissions from the original
> code are mine and don't reflect the original grsecurity/PaX code.
>
> This adds a per-cache random value to SLUB caches that is XORed with
> their freelist pointer address and value. This adds nearly zero overhead
> and frustrates the very common heap overflow exploitation method of
> overwriting freelist pointers. A recent example of the attack is written
> up here: http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit
>
> This is based on patches by Daniel Micay, and refactored to minimize the
> use of #ifdef.

Hello!

This is an addition to the SLAB_FREELIST_HARDENED feature. I'm sending it
following the discussion here:
http://www.openwall.com/lists/kernel-hardening/2017/07/17/9

-- >8 --

Add an assertion similar to the "fasttop" check in the GNU C Library
allocator as part of the SLAB_FREELIST_HARDENED feature: an object being
added to a singly linked freelist should not point to itself. This helps
detect some double-free errors (e.g. CVE-2017-2636) without slub_debug
and KASAN.

Signed-off-by: Alexander Popov <alex.popov@xxxxxxxxx>
---
mm/slub.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index c92d636..f39d06e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -290,6 +290,10 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 {
 	unsigned long freeptr_addr = (unsigned long)object + s->offset;
 
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	BUG_ON(object == fp); /* naive detection of double free or corruption */
+#endif
+
 	*(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
 }

--
2.7.4