Re: [PATCH] Fixed a mismatch between the users of radix_tree and the implementation.

From: Salman Qazi
Date: Tue Aug 17 2010 - 00:45:31 EST


On Mon, Aug 16, 2010 at 9:35 PM, Salman Qazi <sqazi@xxxxxxxxxx> wrote:
> On Mon, Aug 16, 2010 at 2:06 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> (html damaged email alert)
>>
>> On Mon, 2010-08-16 at 13:59 -0700, Salman Qazi wrote:
>>> On Mon, Aug 16, 2010 at 12:33 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>>         On Mon, 2010-08-16 at 11:30 -0700, Salman Qazi wrote:
>>>         > For the delete case,
>>>         > we no longer shrink the tree back to being just the root containing the
>>>         > only remaining object.  For the insert case, we no longer store the
>>>         > first object in the root, rather allocating a node structure for it.  The
>>>         > reason that this works is that deleting (or inserting) intermediate nodes
>>>         > does not make a difference to a reader holding a slot.
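For concreteness, here is a minimal userspace sketch of the two behaviours
described above.  It is not the kernel's lib/radix-tree.c; the names
(toy_root, toy_node, TOY_MAP_SIZE, insert_first_old, insert_first_new) are
invented purely for illustration of the single-item-in-root optimisation
versus always allocating a node:

/*
 * Conceptual userspace sketch, NOT the kernel's lib/radix-tree.c.
 * Old scheme: a tree holding a single item keeps the item pointer
 * directly in the root, and deletes shrink back to that state.
 * New scheme: even the first insert allocates a node, so a slot's
 * location never moves while a lockless reader holds a pointer to it.
 */
#include <stdio.h>
#include <stdlib.h>

#define TOY_MAP_SIZE 64                 /* one 6-bit level */

struct toy_node {
        void *slots[TOY_MAP_SIZE];
        unsigned int count;
};

struct toy_root {
        unsigned int height;            /* 0: rnode is the item itself */
        void *rnode;                    /* item (old scheme) or node */
};

/* Old behaviour: the first insert stores the item straight in the root. */
static void insert_first_old(struct toy_root *root, void *item)
{
        root->height = 0;
        root->rnode = item;             /* no node allocated for tiny trees */
}

/* New behaviour: always allocate a node and hand out a stable slot. */
static void **insert_first_new(struct toy_root *root, unsigned long index,
                               void *item)
{
        struct toy_node *node = calloc(1, sizeof(*node));

        node->slots[index % TOY_MAP_SIZE] = item;
        node->count = 1;
        root->height = 1;
        root->rnode = node;
        return &node->slots[index % TOY_MAP_SIZE];
}

int main(void)
{
        static int obj;
        struct toy_root a = { 0, NULL }, b = { 0, NULL };
        void **slot;

        insert_first_old(&a, &obj);
        slot = insert_first_new(&b, 0, &obj);

        printf("old scheme: root->rnode %s the item\n",
               a.rnode == &obj ? "is" : "is not");
        printf("new scheme: root->rnode %s the item, slot stays at %p\n",
               b.rnode == &obj ? "is" : "is not", (void *)slot);
        return 0;
}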
>>>
>>>
>>>         Ah, I thought that was what it did. So you basically increase the memory
>>>         footprint for tiny files... have you done any measurements on that?
>>>
>>
>>> You raise a valid concern.  I haven't.  What would you recommend as a
>>> benchmark/metric to measure this?
>>
>> One thing you could try is something like the below on a freshly booted
>> machine, once without and once with the patch:
>>
>>  cd /usr/src/linux-2.6
>>  echo 1 > /proc/sys/vm/drop_caches
>>  grep radix /proc/slabinfo
>>  make bzImage
>>  echo 1 > /proc/sys/vm/drop_caches
>>  grep radix /proc/slabinfo
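For reference (assuming the usual /proc/slabinfo v2 layout), the columns after
the cache name are <active_objs> <num_objs> <objsize> <objperslab>
<pagesperslab>, so the cache footprint is roughly num_objs * objsize, or
equivalently num_slabs * pagesperslab * page size from the slabdata fields at
the end of each line.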
>>
>>
>>
>>
>
> Here's what I see:
>
> Without the patch:
>
> Before:
> radix_tree_node      468   1400    568   28    4 : tunables    0    0    0 : slabdata     50     50      0
>
> After:
> radix_tree_node     1886   3192    568   28    4 : tunables    0    0    0 : slabdata    114    114      0
>
> With the patch:
>
> Before:
>
> radix_tree_node      495   1176    568   28    4 : tunables    0    0    0 : slabdata     42     42      0
>
> After:
>
> radix_tree_node     3173   7336    568   28    4 : tunables    0    0    0 : slabdata    262    262      0
>
>
> So, not particularly good news :(.
>

But considering that the kernel locks up without the fix, and we are still
talking about < 5MB of extra radix_tree_node memory after a kernel compile,
should we really be all that concerned?  If so, what alternatives should be
considered for fixing this lockup?
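As a rough check on that figure, taking num_objs * objsize from the slabinfo
lines above: without the patch the cache grows to 3192 * 568 bytes ~= 1.7 MB
after the build, with the patch to 7336 * 568 bytes ~= 4.0 MB, i.e. roughly a
2.3 MB regression.  Counting slabs instead (assuming 4 KB pages) gives the
same answer: (262 - 114) slabs * 4 pages * 4 KB ~= 2.3 MB.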