[PATCH] micro optimization of cache_estimate in slab.c

From: Steven Rostedt
Date: Sun Dec 18 2005 - 03:22:54 EST


OK, I know this is really just a micro optimization, but I figured I'd
submit it anyway. Looking into the slab code, I came across the
cache_estimate function, which has the following code:

        i = 0;
        while (i*size + ALIGN(base+i*extra, align) <= wastage)
                i++;
        if (i > 0)
                i--;

Now this counts up linearly, which is O(n) for n = the number of
objects that fit in the page order, and objects per slab range up to
202 according to my /proc/slabinfo. So I figured there must be a
better way, and added this:

        i = 0;
        do {
                /* Double x until i+x objects no longer fit. */
                x = 1;
                while ((x+i)*size + ALIGN(base+(x+i)*extra, align) <= wastage)
                        x <<= 1;
                /* Bank the largest power-of-two step that still fit. */
                i += (x >> 1);
        } while (x > 1);

This replaces the linear scan with a galloping (exponential) search
for the upper bound: each pass doubles a step until it overshoots,
banks the last power of two that still fit, and restarts, so every
pass pins down one bit of the result. That makes the loop logarithmic
rather than linear (strictly it is O(log^2 n) comparisons in the worst
case, since the doubling restarts for each bit, but that is still far
cheaper than O(n)). For example, if the true count is 11 (binary
1011), the passes add 8, then 2, then 1, and a final pass with x stuck
at 1 ends the loop. I tested this code in userspace, and it produces
exactly the same results as the original code, with far fewer
iterations.
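
For reference, here is the sort of userspace harness I mean. It is a
minimal sketch rather than the exact program I ran: PAGE_SIZE, ALIGN,
and the base/extra/align values below are userspace stand-ins for the
kernel definitions, and the two functions mirror the old and new loops
while counting how many times the fit test executes.

#include <assert.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel definitions. */
#define PAGE_SIZE 4096UL
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* The original linear scan; *tests counts runs of the fit test. */
static unsigned long linear(unsigned long size, unsigned long extra,
                            unsigned long base, unsigned long align,
                            unsigned long wastage, unsigned *tests)
{
        unsigned long i = 0;

        *tests = 0;
        while ((*tests)++, i*size + ALIGN(base + i*extra, align) <= wastage)
                i++;
        if (i > 0)
                i--;
        return i;
}

/* The galloping version from the patch. */
static unsigned long gallop(unsigned long size, unsigned long extra,
                            unsigned long base, unsigned long align,
                            unsigned long wastage, unsigned *tests)
{
        unsigned long i = 0, x;

        *tests = 0;
        do {
                x = 1;
                while ((*tests)++, (x+i)*size +
                       ALIGN(base + (x+i)*extra, align) <= wastage)
                        x <<= 1;
                i += x >> 1;
        } while (x > 1);
        return i;
}

int main(void)
{
        unsigned long size;

        /* base/extra roughly mimic an on-slab cache: a struct slab
         * header plus one kmem_bufctl_t per object. */
        for (size = 8; size <= PAGE_SIZE; size += 8) {
                unsigned t1, t2;
                unsigned long a = linear(size, 4, 32, 8, PAGE_SIZE, &t1);
                unsigned long b = gallop(size, 4, 32, 8, PAGE_SIZE, &t2);

                assert(a == b);
                printf("size %4lu: num %4lu, %4u tests vs %2u\n",
                       size, a, t1, t2);
        }
        return 0;
}

The assert checks that the two loops always agree across the sweep,
and the printed counts should show the linear scan doing hundreds of
fit tests at small sizes where the galloping version needs a few
dozen.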

I know this is really a micro optimization, but if you think every usec
counts, then we can use this patch ;)

-- Steve

Index: linux-2.6.15-rc5/mm/slab.c
===================================================================
--- linux-2.6.15-rc5.orig/mm/slab.c 2005-12-16 16:24:09.000000000 -0500
+++ linux-2.6.15-rc5/mm/slab.c 2005-12-18 03:16:18.000000000 -0500
@@ -700,6 +700,7 @@
 		int flags, size_t *left_over, unsigned int *num)
 {
 	int i;
+	int x;
 	size_t wastage = PAGE_SIZE<<gfporder;
 	size_t extra = 0;
 	size_t base = 0;
@@ -709,10 +710,12 @@
 		extra = sizeof(kmem_bufctl_t);
 	}
 	i = 0;
-	while (i*size + ALIGN(base+i*extra, align) <= wastage)
-		i++;
-	if (i > 0)
-		i--;
+	do {
+		x = 1;
+		while ((x+i)*size + ALIGN(base+(x+i)*extra, align) <= wastage)
+			x <<= 1;
+		i += (x >> 1);
+	} while (x > 1);
 
 	if (i > SLAB_LIMIT)
 		i = SLAB_LIMIT;

