Re: [PATCH] bsg, block layer sg

From: Jens Axboe
Date: Mon Mar 06 2006 - 04:17:55 EST


On Mon, Mar 06 2006, Andrew Morton wrote:
> Jens Axboe <axboe@xxxxxxx> wrote:
> >
> > ...
> > >
> > > If you expand the two above statements you get:
> > >
> > > spin_lock_irqsave(q->queue_lock, flags);
> > > __elv_add_request(q, rq, where, plug);
> > > spin_unlock_irqrestore(q->queue_lock, flags);
> > > spin_lock_irq(q->queue_lock);
> > > __generic_unplug_device(q);
> > > spin_unlock_irq(q->queue_lock);
> > >
> > > which is a bit sad.
> >
> > Indeed, I'll do the locking manually and use the __ functions.
>
> blk_execute_rq_nowait() and pkt_generic_packet() also do the above two
> calls. It might be worth creating a new library function.

Yes, it might; there are other call sites like this in the kernel. But
it's basically what blk_execute_rq_nowait() does. I'll make that change.

> > > > +static int bsg_complete_all_commands(struct bsg_device *bd)
> > > > +{
> > > > +	struct bsg_command *bc;
> > > > +	int ret, tret;
> > > > +
> > > > +	dprintk("%s: entered\n", bd->name);
> > > > +
> > > > +	set_bit(BSG_F_BLOCK, &bd->flags);
> > > > +
> > > > +	/*
> > > > +	 * wait for all commands to complete
> > > > +	 */
> > > > +	ret = 0;
> > > > +	do {
> > > > +		ret = bsg_io_schedule(bd, TASK_UNINTERRUPTIBLE);
> > > > +		/*
> > > > +		 * look for -ENODATA specifically -- we'll sometimes get
> > > > +		 * -ERESTARTSYS when we've taken a signal, but we can't
> > > > +		 * return until we're done freeing the queue, so ignore
> > > > +		 * it. The signal will get handled when we're done freeing
> > > > +		 * the bsg_device.
> > > > +		 */
> > > > +	} while (ret != -ENODATA);
> > > > +
> > > > +	/*
> > > > +	 * discard done commands
> > > > +	 */
> > >
> > > Would it be useful to reap the completed commands earlier? While their
> > > predecessors are still in flight?
> >
> > Not sure I follow... You mean if someone attempts to queue and fails
> > because we are out of commands, then try and reap some done commands?
>
> Rather than waiting for all commands to complete then freeing them all
> up, it might be possible to free some of them earlier, while the rest
> are still in flight. Pipeline the reaping with the I/O completion. A
> minor saving in cycles and memory, perhaps. Probably it's not worth
> the complexity - I was just asking ;)

For the example above, it's the final shutdown path, which just waits for
all outstanding commands to complete. So it's definitely not worth the
extra complexity :-)

--
Jens Axboe
