Re: [openib-general] [PATCH 04/13] Connection Manager

From: Steve Wise
Date: Fri Nov 17 2006 - 13:27:03 EST


On Fri, 2006-11-17 at 10:07 -0800, Bryan O'Sullivan wrote:
> Steve Wise wrote:
>
> > +static void release_tid(struct t3cdev *tdev, u32 hwtid, struct sk_buff *skb)
> > +{
> > +	struct cpl_tid_release *req;
> > +
> > +	skb = get_skb(skb, sizeof *req, GFP_KERNEL);
> > +	if (!skb) {
> > +		return;
> > +	}
>
> Style micronit: no curlies for single-statement blocks.
>

yup.

> > +void __free_ep(struct iwch_ep_common *epc)
> > +{
> > +	PDBG("%s ep %p, &refcnt %p state %s, refcnt %d\n",
> > +	     __FUNCTION__, epc, &epc->refcnt,
> > +	     states[state_read(epc)], atomic_read(&epc->refcnt));
> > +
> > +	if (atomic_read(&epc->refcnt) == 1) {
> > +		goto out;
> > +	}
> > +	if (!atomic_dec_and_test(&epc->refcnt)) {
> > +		return;
> > +	}
> > +out:
> > +	PDBG("free ep %p\n", epc);
> > +	kfree(epc);
> > +}
>
> Whatever you're trying to do with refcounting and atomics here looks
> extremely dodgy and race-prone to me. Why are you using atomic ops in
> such a scary manner, instead of just slapping a spinlock around this?
>

This logic is the same as kfree_skb() (and kref_put() :). It optimizes
the case where you're the last one holding a reference, skipping the
atomic dec-and-test when the refcount is already 1.
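
For reference, kfree_skb() at the time looked roughly like this
(paraphrased from memory of the 2.6.19-era source, so treat it as a
sketch rather than a verbatim quote):

void kfree_skb(struct sk_buff *skb)
{
	if (unlikely(!skb))
		return;
	/* Fast path: sole owner, no atomic dec-and-test needed. */
	if (likely(atomic_read(&skb->users) == 1))
		smp_rmb();
	else if (likely(!atomic_dec_and_test(&skb->users)))
		return;
	__kfree_skb(skb);
}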

> Anyway, please drop this atomic refcounting stuff and use embedded krefs
> instead. You're tunnelling into a bug mine.

The kref_put() code looks very much like the code above, but I can use
krefs to avoid replicating it. See the sketch below.

>
> By the way, it would be more consistent with normal kernel naming
> conventions to name these refcount-diddling routines ep_get and ep_put,
> since __ep_free doesn't actually free an object unless it feels like it.

Again, it was modeled after the skb code.
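
Something like this, combining krefs with the get/put naming (a rough
sketch only; get_ep/put_ep/release_ep are illustrative names, not code
from the posted patch):

#include <linux/kref.h>

struct iwch_ep_common {
	/* ... existing fields ... */
	struct kref kref;	/* replaces the open-coded atomic refcnt */
};

static void release_ep(struct kref *kref)
{
	struct iwch_ep_common *epc =
		container_of(kref, struct iwch_ep_common, kref);

	PDBG("free ep %p\n", epc);
	kfree(epc);
}

static inline void get_ep(struct iwch_ep_common *epc)
{
	kref_get(&epc->kref);
}

static inline void put_ep(struct iwch_ep_common *epc)
{
	/* release_ep() runs only when the last reference drops */
	kref_put(&epc->kref, release_ep);
}

The endpoint would call kref_init(&epc->kref) at allocation time, and
kref then handles the dec-and-test race internally.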

>
> > +int __init iwch_cm_init(void)
> > +{
> > +	skb_queue_head_init(&rxq);
> > +
> > +	workq = create_singlethread_workqueue("iw_cxgb3");
> > +	if (!workq)
> > +		return -ENOMEM;
> > +
> > +	/*
> > +	 * All upcalls from the T3 Core go to sched() to
> > +	 * schedule the processing on a work queue.
> > +	 */
> > +	t3c_handlers[CPL_ACT_ESTABLISH] = sched;
> > +	t3c_handlers[CPL_ACT_OPEN_RPL] = sched;
> > +	t3c_handlers[CPL_RX_DATA] = sched;
> > +	t3c_handlers[CPL_TX_DMA_ACK] = sched;
> > +	t3c_handlers[CPL_ABORT_RPL_RSS] = sched;
> > +	t3c_handlers[CPL_ABORT_RPL] = sched;
> > +	t3c_handlers[CPL_PASS_OPEN_RPL] = sched;
> > +	t3c_handlers[CPL_CLOSE_LISTSRV_RPL] = sched;
> > +	t3c_handlers[CPL_PASS_ACCEPT_REQ] = sched;
> > +	t3c_handlers[CPL_PASS_ESTABLISH] = sched;
> > +	t3c_handlers[CPL_PEER_CLOSE] = sched;
> > +	t3c_handlers[CPL_CLOSE_CON_RPL] = sched;
> > +	t3c_handlers[CPL_ABORT_REQ_RSS] = sched;
> > +	t3c_handlers[CPL_RDMA_TERMINATE] = sched;
> > +	t3c_handlers[CPL_RDMA_EC_STATUS] = sched;
> > +
> > +	/*
> > +	 * These are the real handlers that are called from a
> > +	 * work queue.
> > +	 */
> > +	work_handlers[CPL_ACT_ESTABLISH] = act_establish;
> > +	work_handlers[CPL_ACT_OPEN_RPL] = act_open_rpl;
> > +	work_handlers[CPL_RX_DATA] = rx_data;
> > +	work_handlers[CPL_TX_DMA_ACK] = tx_ack;
> > +	work_handlers[CPL_ABORT_RPL_RSS] = abort_rpl;
> > +	work_handlers[CPL_ABORT_RPL] = abort_rpl;
> > +	work_handlers[CPL_PASS_OPEN_RPL] = pass_open_rpl;
> > +	work_handlers[CPL_CLOSE_LISTSRV_RPL] = close_listsrv_rpl;
> > +	work_handlers[CPL_PASS_ACCEPT_REQ] = pass_accept_req;
> > +	work_handlers[CPL_PASS_ESTABLISH] = pass_establish;
> > +	work_handlers[CPL_PEER_CLOSE] = peer_close;
> > +	work_handlers[CPL_ABORT_REQ_RSS] = peer_abort;
> > +	work_handlers[CPL_CLOSE_CON_RPL] = close_con_rpl;
> > +	work_handlers[CPL_RDMA_TERMINATE] = terminate;
> > +	work_handlers[CPL_RDMA_EC_STATUS] = ec_status;
> > +	return 0;
> > +}
>
> This seems mighty peculiar. Why aren't you keeping this stuff in
> structs, instead of faking up structs via arrays?
>

It's a function handler table: given an incoming message, we index into
the table by its command number to get the handler function for that
message.
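
A rough sketch of the dispatch (dispatch_cpl and get_opcode are
illustrative names made up for this email, not the actual driver code,
and the handler signature is assumed to match the table's; NUM_CPL_CMDS
is the table size):

static int dispatch_cpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
{
	/* hypothetical helper: pull the CPL opcode out of the header */
	u8 opcode = get_opcode(skb);

	if (opcode < NUM_CPL_CMDS && t3c_handlers[opcode])
		return t3c_handlers[opcode](tdev, skb, ctx);

	/* no handler registered for this command number */
	kfree_skb(skb);
	return 0;
}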


