[PATCH] slub: check for freeoffset version counter overflow (updated)

From: Mathieu Desnoyers
Date: Tue Mar 04 2008 - 01:17:39 EST


Check for overflow of the freeoffset version number.

Adding this check under CONFIG_SLUB_DEBUG makes sense. It is very
unlikely that enough interrupt handlers will nest over the SLUB fast
path, each performing about a million alloc/free operations on 32-bit
(or a vastly larger number on 64-bit), but just in case, it seems good
to warn when we detect that the version counter is half-way to overflow.

Changelog:
- Mask out the LSBs because of the alloc fast path. See the comment in the source.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxx>
---
mm/slub.c | 40 ++++++++++++++++++++++++++++++++++------
1 file changed, 34 insertions(+), 6 deletions(-)

Index: linux-2.6-lttng/mm/slub.c
===================================================================
--- linux-2.6-lttng.orig/mm/slub.c 2008-03-04 00:59:01.000000000 -0500
+++ linux-2.6-lttng/mm/slub.c 2008-03-04 01:03:44.000000000 -0500
@@ -1660,7 +1660,7 @@ static __always_inline void *slab_alloc(
*/

#ifdef SLUB_FASTPATH
- unsigned long freeoffset, newoffset;
+ unsigned long freeoffset, newoffset, resoffset;

c = get_cpu_slab(s, raw_smp_processor_id());
do {
@@ -1682,8 +1682,22 @@ static __always_inline void *slab_alloc(
newoffset = freeoffset;
newoffset &= ~c->off_mask;
newoffset |= (unsigned long)object[c->offset] & c->off_mask;
- } while (cmpxchg_local(&c->freeoffset, freeoffset, newoffset)
- != freeoffset);
+ resoffset = cmpxchg_local(&c->freeoffset, freeoffset,
+ newoffset);
+#ifdef CONFIG_SLUB_DEBUG
+ /*
+ * Just to be paranoid: warn if we detect that enough free or
+ * slow paths nested on top of us to bring the counter half-way
+ * to overflow. Doing that many allocs/frees in interrupt
+ * handlers would be insane, but check anyway. Mask out the
+ * LSBs because the alloc fast path does not increment the
+ * sequence number, which may cause the overall value to go
+ * backward.
+ */
+ WARN_ON((resoffset & ~c->off_mask)
+ - (freeoffset & ~c->off_mask) > -1UL >> 1);
+#endif
+ } while (resoffset != freeoffset);
#else
unsigned long flags;

@@ -1822,7 +1836,7 @@ static __always_inline void slab_free(st
struct kmem_cache_cpu *c;

#ifdef SLUB_FASTPATH
- unsigned long freeoffset, newoffset;
+ unsigned long freeoffset, newoffset, resoffset;

c = get_cpu_slab(s, raw_smp_processor_id());
debug_check_no_locks_freed(object, s->objsize);
@@ -1850,8 +1864,22 @@ static __always_inline void slab_free(st
newoffset = freeoffset + c->off_mask + 1;
newoffset &= ~c->off_mask;
newoffset |= (unsigned long)object & c->off_mask;
- } while (cmpxchg_local(&c->freeoffset, freeoffset, newoffset)
- != freeoffset);
+ resoffset = cmpxchg_local(&c->freeoffset, freeoffset,
+ newoffset);
+#ifdef CONFIG_SLUB_DEBUG
+ /*
+ * Just to be paranoid: warn if we detect that enough free or
+ * slow paths nested on top of us to bring the counter half-way
+ * to overflow. Doing that many allocs/frees in interrupt
+ * handlers would be insane, but check anyway. Mask out the
+ * LSBs because the alloc fast path does not increment the
+ * sequence number, which may cause the overall value to go
+ * backward.
+ */
+ WARN_ON((resoffset & ~c->off_mask)
+ - (freeoffset & ~c->off_mask) > -1UL >> 1);
+#endif
+ } while (resoffset != freeoffset);
#else
unsigned long flags;

--
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68