Re: [PATCH] arp_queue: serializing unlink + kfree_skb

From: David S. Miller
Date: Fri Feb 04 2005 - 19:07:26 EST


On Fri, 4 Feb 2005 22:33:05 +1100
Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx> wrote:

> I think you should probably note that some sort of locking or RCU
> scheme is required to make this safe. As it is the atomic_inc
> and the list_add can be reordered such that the atomic_inc occurs
> after the atomic_dec_and_test.
>
> Either that or you can modify the example to add an
> smp_mb__after_atomic_inc(). That'd be a good way to
> demonstrate its use.
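
For reference, the helper Herbert mentions gets used like this -- just a
bare sketch of the pattern, not code from the patch:

	atomic_inc(&obj->refcnt);

	/*
	 * atomic_inc() itself implies no memory barrier, so this
	 * explicitly orders the increment before any loads or
	 * stores this CPU issues afterwards.
	 */
	smp_mb__after_atomic_inc();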

Yeah, this example is totally bogus. I'll make it match the
neighbour cache case that started this discussion, which is
something like this:

static void obj_list_add(struct obj *obj)
{
	obj->active = 1;
	list_add(&obj->list, &global_list);
}

static void obj_list_del(struct obj *obj)
{
	list_del(&obj->list);
	obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
	/* The object must already be off the list. */
	BUG_ON(obj->active);
	kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
	/* Caller holds global_list_lock. */
	if (!list_empty(head)) {
		struct obj *obj;

		obj = list_entry(head->next, struct obj, list);
		atomic_inc(&obj->refcnt);
		return obj;
	}
	return NULL;
}

void obj_poke(void)
{
	struct obj *obj;

	spin_lock(&global_list_lock);
	obj = obj_list_peek(&global_list);
	spin_unlock(&global_list_lock);

	if (obj) {
		obj->ops->poke(obj);
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
}

void obj_timeout(struct obj *obj)
{
	spin_lock(&global_list_lock);
	obj_list_del(obj);
	spin_unlock(&global_list_lock);

	if (atomic_dec_and_test(&obj->refcnt))
		obj_destroy(obj);
}
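
The race the BUG_ON() in obj_destroy() is there to catch looks roughly
like this -- a sketch, assuming obj_poke() already took its reference
under global_list_lock before the timeout fired:

	CPU A (obj_timeout)                 CPU B (obj_poke)
	-------------------                 ----------------
	list_del(&obj->list);
	obj->active = 0;
	atomic_dec_and_test()  /* -> 1 */
	                                    obj->ops->poke(obj);
	                                    atomic_dec_and_test()  /* -> 0 */
	                                    obj_destroy(obj);
	                                        BUG_ON(obj->active);

CPU A's obj->active = 0 store has to be ordered before its decrement,
and CPU B's obj->active load has to be ordered after its own decrement;
otherwise CPU B can see the stale obj->active == 1 and trip the BUG_ON()
even though the teardown was fine. That's the memory barrier semantic
atomic_dec_and_test() has to provide.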

Something like that. I'll update the atomic_ops.txt
doc and post an updated version later tonight.