Re: [PATCH v2 1/2] mm, kasan: improve double-free detection

From: Dmitry Vyukov
Date: Mon May 09 2016 - 03:07:55 EST


On Sat, May 7, 2016 at 5:15 PM, Luruo, Kuthonuzo
<kuthonuzo.luruo@xxxxxxx> wrote:
> Thank you for the review!
>
>> > +
>> > +/* acquire per-object lock for access to KASAN metadata. */
>>
>> I believe there's a strong reason not to use a standard spin_lock() or
>> similar. I think this is the proper place to explain it.
>>
>
> will do.
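
(Presumably the reason is that the lock has to be a single bit squeezed
into the existing 32-bit per-object metadata word; a full spinlock_t per
object would bloat KASAN metadata. For reference, a layout along these
lines seems to be assumed; field names other than lock/packed are my
illustration, not the patch's:

union kasan_alloc_data {
	struct {
		u32 lock : 1;	/* bit 0, hence the "| 0x1U" check below */
		u32 rest : 31;	/* remaining packed KASAN state (illustrative) */
	};
	u32 packed;
};
)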
>
>> > +void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
>> > +{
>> > +	union kasan_alloc_data old, new;
>> > +
>> > +	preempt_disable();
>>
>> It's better to disable and enable preemption inside the loop
>> on each iteration, to decrease contention.
>>
>
> ok, makes sense; will do.
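
For illustration, something like this (untested sketch, same names as in
the patch; it keeps the contract of returning with preemption disabled,
assuming the unlock path re-enables it):

void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
{
	union kasan_alloc_data old, new;

	for (;;) {
		preempt_disable();
		old.packed = READ_ONCE(alloc_info->data);
		if (likely(!old.lock)) {
			new.packed = old.packed;
			new.lock = 1;
			if (cmpxchg(&alloc_info->data, old.packed,
				    new.packed) == old.packed)
				return;	/* locked, preemption stays off */
		}
		/* Lock busy or cmpxchg lost the race: allow preemption
		 * before spinning again, to reduce contention. */
		preempt_enable();
		cpu_relax();
	}
}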
>
>> > +	for (;;) {
>> > +		old.packed = READ_ONCE(alloc_info->data);
>> > +		if (unlikely(old.lock)) {
>> > +			cpu_relax();
>> > +			continue;
>> > +		}
>> > +		new.packed = old.packed;
>> > +		new.lock = 1;
>> > +		if (likely(cmpxchg(&alloc_info->data, old.packed, new.packed)
>> > +				== old.packed))
>> > +			break;
>> > +	}
>> > +}
>> > +
>> > +/* release lock after a kasan_meta_lock(). */
>> > +void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
>> > +{
>> > +	union kasan_alloc_data alloc_data;
>> > +
>> > +	alloc_data.packed = READ_ONCE(alloc_info->data);
>> > +	alloc_data.lock = 0;
>> > +	if (unlikely(xchg(&alloc_info->data, alloc_data.packed) !=
>> > +			(alloc_data.packed | 0x1U)))
>> > +		WARN_ONCE(1, "%s: lock not held!\n", __func__);
>>
>> Nitpick: it never happens in the normal case, correct? Why don't you
>> place it under some developer config, or even leave it in a dev branch?
>> The function will be half as long without it.
>
> ok, will remove/shorten
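
For example, keeping the check only on debug builds (untested sketch; the
IS_ENABLED condition is just one way to do it, not from the patch, and it
assumes ->data is a plain u32):

void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
{
	union kasan_alloc_data alloc_data;
	u32 old;

	alloc_data.packed = READ_ONCE(alloc_info->data);
	alloc_data.lock = 0;
	old = xchg(&alloc_info->data, alloc_data.packed);
	WARN_ONCE(IS_ENABLED(CONFIG_DEBUG_KERNEL) &&
		  old != (alloc_data.packed | 0x1U),
		  "%s: lock not held!\n", __func__);
}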

My concern here is performance.
We lock/unlock 3 times per allocated object; currently that's 6 atomic
RMW operations. The unlock does not need to be an atomic RMW, so dropping
it would reduce the number of atomic RMWs to 3.
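
Concretely: since only the lock holder can clear the bit, unlock can be a
plain release store rather than an atomic RMW. An untested sketch (again
assuming ->data is a plain u32 word):

void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
{
	union kasan_alloc_data alloc_data;

	alloc_data.packed = READ_ONCE(alloc_info->data);
	alloc_data.lock = 0;
	/* Release ordering: metadata writes inside the critical section
	 * must not be reordered past the unlock. */
	smp_store_release(&alloc_info->data, alloc_data.packed);
}

The cmpxchg() on the lock side already provides the matching acquire
ordering.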