Re: SLUB defrag pull request?

From: Eric Dumazet
Date: Thu Oct 23 2008 - 13:18:28 EST


Christoph Lameter wrote:
> On Thu, 23 Oct 2008, Eric Dumazet wrote:
>
> > > SLUB touches objects by default when allocating. And it does it
> > > immediately in slab_alloc() in order to retrieve the pointer to the
> > > next object. So there is no point in hinting there right now.
> >
> > Please note that SLUB touches the object by *reading* it.
> >
> > prefetchw() gives the CPU a hint that this cache line is going to be
> > *modified*, even if the first access is a read. Some architectures can
> > save bus transactions by acquiring the cache line in exclusive rather
> > than shared state.
>
> Most architectures actually can do that. It's probably worth running
> some tests with that. Converting a cache line from shared to exclusive
> can cost something.
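
For reference, the kernel's prefetchw() roughly corresponds to the
write-intent hint GCC exposes in userspace as __builtin_prefetch(ptr, 1).
A sketch of the typical call-site pattern it targets (struct foo and its
field are made up for illustration):

	struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return -ENOMEM;
	f->count = 0;	/* first access after kmalloc() is a store */

The allocator itself only reads the object (to fetch the embedded
freelist pointer); the caller's first touch is almost always a write.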


Please check the following patch as a follow-up.

[PATCH] slub: slab_alloc() can use prefetchw()

Most kmalloc()ed areas are initialized/written right after allocation.

prefetchw() gives the CPU a hint that this cache line is going to be
*modified*, even if the first access is a read.

Some architectures can save bus transactions by acquiring the cache
line in exclusive rather than shared state.

The same optimization was done for SLAB in 2005 in commit 34342e863c3143640c031760140d640a06c6a5f8 ("[PATCH] mm/slab.c: prefetchw the start of new allocated objects").

Signed-off-by: Eric Dumazet <dada1@xxxxxxxxxxxxx>

diff --git a/mm/slub.c b/mm/slub.c
index 0c83e6a..c2017a3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1592,13 +1592,14 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 
 	local_irq_save(flags);
 	c = get_cpu_slab(s, smp_processor_id());
+	object = c->freelist;
+	prefetchw(object);
 	objsize = c->objsize;
-	if (unlikely(!c->freelist || !node_match(c, node)))
+	if (unlikely(!object || !node_match(c, node)))
 
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 
 	else {
-		object = c->freelist;
 		c->freelist = object[c->offset];
 		stat(c, ALLOC_FASTPATH);
 	}
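
If someone wants a quick userspace approximation of the shared ->
exclusive upgrade cost, here is a throwaway sketch (not part of the
patch): it uses __builtin_prefetch(p, 1) as a stand-in for prefetchw(),
and a reader thread to keep the lines in shared state. The prefetch
distance is zero, as in slab_alloc(), so treat any numbers with
suspicion:

/* gcc -O2 -pthread bench.c -lrt */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define LINES	4096
#define STRIDE	64	/* assumed cache line size */

static volatile char buf[LINES * STRIDE];
static volatile int stop;

/* Keep the lines in shared state by reading them from another CPU. */
static void *reader(void *arg)
{
	volatile long sink = 0;
	int i;

	(void)arg;
	while (!stop)
		for (i = 0; i < LINES; i++)
			sink += buf[i * STRIDE];
	return NULL;
}

static long long bench(int for_write)
{
	struct timespec t0, t1;
	int rep, i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (rep = 0; rep < 1000; rep++) {
		for (i = 0; i < LINES; i++) {
			const char *p = (const char *)&buf[i * STRIDE];

			if (for_write)		/* like prefetchw() */
				__builtin_prefetch(p, 1);
			else			/* like prefetch() */
				__builtin_prefetch(p, 0);
			buf[i * STRIDE] = (char)rep;	/* the store we care about */
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	return (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	       (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
	pthread_t th;

	pthread_create(&th, NULL, reader, NULL);
	printf("read  prefetch: %lld ns\n", bench(0));
	printf("write prefetch: %lld ns\n", bench(1));
	stop = 1;
	pthread_join(th, NULL);
	return 0;
}

On an architecture without a write-prefetch instruction the two runs
should time about the same, so the patch ought to be a no-op there.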