Re: [patch 00/10] [RFC] SLUB patches for more functionality, performance and maintenance

From: Christoph Lameter
Date: Mon Jul 09 2007 - 17:57:16 EST


On Mon, 9 Jul 2007, Mathieu Desnoyers wrote:

> > > Okay, the source for these numbers is his paper in the OLS 2006
> > > proceedings: Volume 1, pages 208-209? I do not see the exact number
> > > that you referred to there.
> >
>
> Hrm, the reference page number is wrong: it is in OLS 2006, Vol. 1,
> page 216 (section 4.5.2, Scalability). I originally pulled the page
> number from my local paper copy. Oops.

4.5.2 is on page 208 in my copy of the proceedings.


> > >He seems to be comparing spinlock acquire / release vs. cmpxchg. So I
> > >guess you got your material from somewhere else?
> > >
>
> I ran a test specifically for this paper, comparing local irq
> enable/disable to local cmpxchg; that is where the result comes from.


The numbers are pretty important and suggest that we can obtain a
significant speed increase by avoiding the local irq disable/enable in
the slab allocator fast paths. Do you have some more numbers? Any other
publication that mentions these?
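
To make that concrete, the two fast-path shapes under discussion look
roughly like the sketch below. This is illustrative only: the struct and
helper names are made up (not SLUB's), and cmpxchg_local() stands in for
the arch-provided local compare-and-exchange:

	struct cpu_cache {
		void *freelist;		/* first free object on this CPU */
	};

	/* Free pointer is stored at the start of each free object. */
	static inline void *next_free(void *object)
	{
		return *(void **)object;
	}

	/* Current shape: interrupts off around the freelist update. */
	static void *alloc_irq_off(struct cpu_cache *c)
	{
		unsigned long flags;
		void *object;

		local_irq_save(flags);
		object = c->freelist;
		if (object)
			c->freelist = next_free(object);
		local_irq_restore(flags);
		return object;
	}

	/* Proposed shape: a local cmpxchg loop; safe against interrupts
	 * on this CPU because the cmpxchg is a single instruction. */
	static void *alloc_cmpxchg(struct cpu_cache *c)
	{
		void *object;

		do {
			object = c->freelist;
			if (!object)
				return NULL;
		} while (cmpxchg_local(&c->freelist, object,
				       next_free(object)) != object);
		return object;
	}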


> Yep, I deliberately used the variant without the lock prefix because
> the data is per-CPU and I disable preemption.

So local_cmpxchg generates the variant without the lock prefix?
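
For reference, on i386 a lock-prefix-free compare-and-exchange boils
down to something like the sketch below (my_cmpxchg_local() is a
made-up name, not the kernel's):

	static inline unsigned long my_cmpxchg_local(volatile unsigned long *ptr,
						     unsigned long old,
						     unsigned long new)
	{
		unsigned long prev;

		/* No "lock" prefix: atomic wrt this CPU only, which is
		 * all per-CPU data needs with preemption disabled. */
		asm volatile("cmpxchgl %2,%1"
			     : "=a" (prev), "+m" (*ptr)
			     : "r" (new), "0" (old)
			     : "memory");
		return prev;
	}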

> Yes, preempt disabling or, possibly, the new thread migration
> disabling I just proposed as an RFC on LKML (that would make the -rt
> people happier).

Right.
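
So the usage pattern would be something like the sketch below ("hits"
is an illustrative per-CPU counter; a plain local_inc() would do for a
counter, but the cmpxchg loop is the shape that matters for a freelist):

	#include <linux/percpu.h>
	#include <linux/preempt.h>
	#include <asm/local.h>

	static DEFINE_PER_CPU(local_t, hits) = LOCAL_INIT(0);

	static void count_hit(void)
	{
		local_t *l;
		long old;

		preempt_disable();	/* pin to this CPU for the duration */
		l = &__get_cpu_var(hits);
		do {
			old = local_read(l);
		} while (local_cmpxchg(l, old, old + 1) != old);
		preempt_enable();
	}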

> Sure. Also note that the UP cmpxchg (see asm-$ARCH/local.h in 2.6.22)
> is faster on architectures like powerpc and MIPS, where it is possible
> to remove some memory barriers.

UP cmpxchg meaning local_cmpxchg?
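
If so, I can see where the powerpc win would come from: the SMP cmpxchg
brackets the reservation loop with memory barriers, and a CPU-local
variant can drop them. Roughly (a sketch, not the kernel's actual asm;
up_cmpxchg() is a made-up name):

	static inline unsigned long up_cmpxchg(volatile unsigned int *p,
					       unsigned long old,
					       unsigned long new)
	{
		unsigned int prev;

		__asm__ __volatile__(
		/* the SMP version would issue a sync/lwsync barrier here */
	"1:	lwarx	%0,0,%2\n"	/* load word and reserve */
	"	cmpw	0,%0,%3\n"
	"	bne-	2f\n"
	"	stwcx.	%4,0,%2\n"	/* store iff reservation still held */
	"	bne-	1b\n"		/* reservation lost: retry */
		/* the SMP version would issue an isync barrier here */
	"2:"
		: "=&r" (prev), "+m" (*p)
		: "r" (p), "r" (old), "r" (new)
		: "cc");

		return prev;
	}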

> See 2.6.22 Documentation/local_ops.txt for a thorough discussion.
> Don't hesitate to ping me if you have more questions.

That is pretty thin and does not mention atomic_cmpxchg. You may want to
expand on your ideas a bit.
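
The distinction a reader would need spelled out is roughly this
(sketch; the variable names are made up):

	#include <linux/percpu.h>
	#include <asm/atomic.h>
	#include <asm/local.h>

	static atomic_t shared = ATOMIC_INIT(0);	/* visible to all CPUs */
	static DEFINE_PER_CPU(local_t, mine) = LOCAL_INIT(0);

	static void example(void)
	{
		/* SMP-safe: lock prefix on x86, full barriers elsewhere */
		atomic_cmpxchg(&shared, 0, 1);

		/* this-CPU-only: cheaper, but caller must prevent migration */
		preempt_disable();
		local_cmpxchg(&__get_cpu_var(mine), 0, 1);
		preempt_enable();
	}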