Re: [Q] Default SLAB allocator

From: Christoph Lameter
Date: Tue Oct 16 2012 - 14:53:00 EST


On Mon, 15 Oct 2012, David Rientjes wrote:

> This type of workload that really exhibits the problem with remote freeing
> would suggest that the design of slub itself is the problem here.

There is a tradeoff here between spatial data locality and temporal
locality. Slub always frees to the freelist of the slab page that the
object originated from and therefore restores spatial data locality. It
will always serve all objects available in a slab page before moving on
to the next. Only within a slab page can it consider temporal locality.
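
To make that concrete, here is a minimal userspace sketch of the slub
side of the tradeoff. This is toy code, not the kernel's: the names
(toy_page, toy_alloc, toy_free), the sizes, and passing the owning page
to free explicitly are all inventions for illustration (the real
allocator derives the page from the object address, and pages are never
returned to the system here).

#include <stdlib.h>

#define OBJS_PER_PAGE 4
#define OBJ_SIZE 32

struct toy_page {
	void *freelist;			/* free objects of this page, linked
					   through their first word */
	struct toy_page *next;		/* next page with free objects */
	char objects[OBJS_PER_PAGE][OBJ_SIZE];
};

static struct toy_page *partial;	/* pages that still have free objects */

static struct toy_page *toy_new_page(void)
{
	struct toy_page *p = calloc(1, sizeof(*p));

	/* Thread the freelist through the objects themselves;
	   calloc leaves the last link NULL. */
	for (int i = 0; i < OBJS_PER_PAGE - 1; i++)
		*(void **)p->objects[i] = p->objects[i + 1];
	p->freelist = p->objects[0];
	return p;
}

static void *toy_alloc(void)
{
	if (!partial)
		partial = toy_new_page();

	/* Pop from the current page; move on only once it is empty. */
	void *obj = partial->freelist;
	partial->freelist = *(void **)obj;
	if (!partial->freelist)
		partial = partial->next;
	return obj;
}

static void toy_free(struct toy_page *p, void *obj)
{
	int was_full = !p->freelist;

	*(void **)obj = p->freelist;	/* back to the originating page */
	p->freelist = obj;
	if (was_full) {			/* page has free objects again */
		p->next = partial;
		partial = p;
	}
}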

Slab considers temporal locality more important and will not return
objects to their originating slab pages until they have to be evicted
from the queues. It (ideally) serves objects in the order they were
freed. This breaks down in the NUMA case, and the allocator got into a
pretty bizarre queueing configuration (with lots and lots of queues) as
a result of our attempt to preserve the free/alloc order per NUMA node
(see the alien caches, for example). Slub is an alternative to that
approach.
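
For contrast, the slab side reduced to the same kind of toy: a per-cpu
queue is just a LIFO stack of object pointers, so the most recently
freed (presumably hottest) object is handed out first, regardless of
which page it belongs to. Again a sketch under made-up names and sizes,
not the actual implementation:

#define QUEUE_SIZE 16

struct toy_array_cache {
	unsigned int avail;		/* cached object pointers */
	void *entries[QUEUE_SIZE];	/* most recently freed on top */
};

/* Pop the most recently freed object first. */
static void *toy_cache_alloc(struct toy_array_cache *ac)
{
	if (!ac->avail)
		return NULL;		/* would refill from slab pages */
	return ac->entries[--ac->avail];
}

/* Queue the object instead of returning it to its page. */
static int toy_cache_free(struct toy_array_cache *ac, void *obj)
{
	if (ac->avail == QUEUE_SIZE)
		return -1;		/* would flush a batch to the pages */
	ac->entries[ac->avail++] = obj;
	return 0;
}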

Slab also has the problem of queue handling overhead due to the attempt
to expel objects from the queues that are likely no longer cache hot.
Every few seconds it needs to run queue cleaning across all queues that
exist on the system. How accurately this tracks the actual cache
hotness of objects is not clear.
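
Continuing the toy_array_cache sketch above, the cleaning pass looks
roughly like this: every interval, drop the oldest part of each queue
on the guess that it has gone cold. The fraction here (a fifth) is
invented for illustration; the point is only that the work is
proportional to the number of queues and that "oldest in the queue" is
a crude proxy for "cache cold":

/* Oldest entries sit at the bottom of the LIFO stack; trim a fixed
   fraction of them and shift the rest down. The dropped pointers
   would be returned to their originating slab pages. */
static void toy_cache_reap(struct toy_array_cache *ac)
{
	unsigned int drop = ac->avail / 5;

	for (unsigned int i = 0; i + drop < ac->avail; i++)
		ac->entries[i] = ac->entries[i + drop];
	ac->avail -= drop;
}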
