Re: SCSI access creating lost time

Andrea Arcangeli (andrea@e-mind.com)
Mon, 8 Mar 1999 01:53:48 +0100 (CET)


On Sun, 7 Mar 1999, Doug Ledford wrote:

>of a spin_lock_irqsave(). There's nothing intelligent that can be done
>until the locking in the drivers/mid-level SCSI code is redone except

The io_request_lock holding time seems excessive to me. On UP this looks
like a major issue. It's not my problem, though, since I don't have the
money to buy SCSI hardware ;).

I was trying to understand why we need to hold it for such a long time.

Starting from unplug_device and add_request, everything in the lowlevel
block device path holds the lock. So a request path could look something
like:

add_request -> do_sd_request -> (the hell of) requeue_sd_request ->
scsi_do_cmd -> internal_cmnd -> and finally queuecommand (which in the
worst case could loop on the bus)
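The chain above can be sketched as a userspace analogue, with a pthread mutex standing in for io_request_lock. This is only an illustration of the point being made, not actual kernel code: the function names mirror the SCSI path quoted above, but the bodies are stub step-counters, and the lock is taken in add_request() and not dropped until the whole chain has run.

```c
/* Userspace sketch: the entire hypothetical submit chain runs with the
 * one global lock held, so nothing else can enter the block layer
 * meanwhile.  Names mirror the path in the mail; bodies are stubs. */
#include <pthread.h>

static pthread_mutex_t io_request_lock = PTHREAD_MUTEX_INITIALIZER;
static int steps_done;

static void queuecommand(void)       { steps_done++; } /* worst case: loops on the bus */
static void internal_cmnd(void)      { steps_done++; queuecommand(); }
static void scsi_do_cmd(void)        { steps_done++; internal_cmnd(); }
static void requeue_sd_request(void) { steps_done++; scsi_do_cmd(); }
static void do_sd_request(void)      { steps_done++; requeue_sd_request(); }

/* add_request(): lock taken here, released only after the whole chain. */
int add_request(void)
{
	pthread_mutex_lock(&io_request_lock);
	steps_done = 0;
	do_sd_request();
	pthread_mutex_unlock(&io_request_lock);
	return steps_done;	/* all 5 stub steps ran under the lock */
}
```

The point of the sketch is the shape of the locking, not the stubs: every step, down to queuecommand, is inside the same lock hold.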

I have not had time yet to look at the path of the irq handler that
tells us about I/O completion (any hint is welcome ;)).

My question is: exactly which race (or race_s_) are we avoiding with
this spinlock?

If the point is to have only one add_request() path running at a time,
we could use a simpler down() in add_request (supposing that the request
can't be done from an irq handler, or play with down_trylock() if
in_interrupt() == 1), and an up() at I/O completion time. And we could
still hold the spinlock _only_ to protect the
request-data-structure handling.
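A userspace analogue of the proposed split might look like the following. Everything here is an assumption layered on the mail's idea: a POSIX semaphore models down()/up() serializing whole submission paths, sem_trywait() models down_trylock() for the in_interrupt() case, and a mutex models the spinlock held only around the request-data-structure work. submit_request(), complete_request(), and the `queued` counter are hypothetical names invented for the sketch.

```c
/* Sketch of the proposal in the mail: a semaphore serializes whole
 * request paths, while the "spinlock" (mutex here) is held only for
 * the short request-data-structure manipulation. */
#include <pthread.h>
#include <semaphore.h>

static sem_t request_path_sem;	/* one submission path at a time */
static pthread_mutex_t request_lock = PTHREAD_MUTEX_INITIALIZER;
static int queued;		/* stands in for the request data */

void init_paths(void)
{
	sem_init(&request_path_sem, 0, 1);
}

int submit_request(int in_interrupt)
{
	if (in_interrupt) {
		if (sem_trywait(&request_path_sem))
			return -1;	/* busy: irq context can't sleep */
	} else {
		sem_wait(&request_path_sem);	/* may sleep, like down() */
	}

	pthread_mutex_lock(&request_lock);	/* short hold only */
	queued++;
	pthread_mutex_unlock(&request_lock);

	/* ... the long lowlevel work would run here with NO lock held ... */
	return 0;
}

void complete_request(void)	/* at I/O completion time, like up() */
{
	pthread_mutex_lock(&request_lock);
	queued--;
	pthread_mutex_unlock(&request_lock);
	sem_post(&request_path_sem);
}
```

With this shape the lock is never held across the slow hardware path, while a second submitter from irq context simply backs off instead of spinning with interrupts disabled.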

And btw, since many places in the scsi path just do spin_unlock_irq,
it's quite possible that you'll end up with two request paths running at
the same time.

Andrea Arcangeli

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/