Re: [PATCH] slub: fix slab_pad_check()

From: Christoph Lameter
Date: Fri Sep 04 2009 - 15:07:03 EST

On Fri, 4 Sep 2009, Eric Dumazet wrote:

> Problem is not _objects_ Christoph, but _slabs_, and your patch is not working.


> It's true that when the user calls kmem_cache_destroy(), all _objects_ were
> previously freed. This is mandatory, with or without the SLAB_DESTROY_BY_RCU
> flag.


> Problem is that slub has some internal state, including some to-be-freed _slabs_,
> that the user has no control over at all.

Those are going to be freed without calls to RCU with my patch. The only
reason for earlier RCU frees is user calls to kfree().

> Face it, SLAB_DESTROY_BY_RCU is internal affair (to slub/slab/... allocators)

Nope, the user must follow RCU guidelines when using the objects.
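For reference, the user-side contract for SLAB_DESTROY_BY_RCU caches looks roughly like the sketch below: the allocator may reuse an object's memory for another object of the same cache within a grace period, so a lockless reader must take a reference and then revalidate the object's identity. The struct, hash_lookup() and put_obj() names are hypothetical, purely for illustration:

```c
/* Hedged sketch of the SLAB_DESTROY_BY_RCU reader contract.
 * hash_lookup() and put_obj() are hypothetical helpers. */
struct my_obj {
	int key;
	atomic_t refcnt;
};

static struct my_obj *lookup(int key)
{
	struct my_obj *obj;

	rcu_read_lock();
	obj = hash_lookup(key);			/* lockless hash walk */
	if (obj) {
		if (!atomic_inc_not_zero(&obj->refcnt)) {
			obj = NULL;		/* object is being freed */
		} else if (obj->key != key) {
			/* memory was recycled for a new object: back off */
			put_obj(obj);
			obj = NULL;
		}
	}
	rcu_read_unlock();
	return obj;
}
```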

> We absolutely need a rcu_barrier() somewhere, believe it or not. You can
> argue that it should be done *before*, but it gives no speedup, only
> potential bugs.

I never said that you do not need an rcu_barrier() for this particular
situation. Why suggest such a thing?

The insertion of RCU stuff into the slab code will lead to future bugs, since
the slab logic is then tied to the semantics of a particular RCU
implementation.

> The only case where the user should call rcu_barrier() itself is if it knows
> some call_rcu() callbacks are pending and are delaying _object_ freeing
> (typical !SLAB_DESTROY_BY_RCU usage in RCU algorithms).

OK, then the user already has to deal with the barriers. The API is
inconsistent if you put this into kmem_cache_destroy().
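Under that reading, a caller that has queued its own call_rcu() callbacks would flush them before tearing the cache down, rather than relying on kmem_cache_destroy() to do it. A minimal sketch (my_cache and the teardown function are hypothetical names):

```c
/* Hedged sketch: the caller, not kmem_cache_destroy(), waits for its
 * own in-flight call_rcu() callbacks before destroying the cache. */
static void my_module_exit(void)
{
	/* ... unlink all objects so no new call_rcu() can be queued ... */

	rcu_barrier();			/* wait for pending RCU callbacks */
	kmem_cache_destroy(my_cache);	/* my_cache: hypothetical cache */
}
```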

> I don't even understand why you care so much about
> kmem_cache_destroy(SLAB_DESTROY_BY_RCU), given that almost nobody uses
> it. It took us almost one month to find out what the bug was in the first
> place...

This is already the second bug on this issue. Given the complexity of RCU,
it is to be expected that inserting more RCU semantics into the slab
allocators will cause future chains of new features and bugs in the slab
allocators.
