Re: [PATCH UPDATED] block: In blk_execute_rq_nowait, init rq->end_io before checking for dead queue.
From: Muthu Kumar
Date: Wed Jun 20 2012 - 20:53:33 EST
On Wed, Jun 20, 2012 at 11:53 AM, Bart Van Assche <bvanassche@xxxxxxx> wrote:
> On 06/18/12 22:42, Tejun Heo wrote:
>
>> On Mon, Jun 11, 2012 at 02:23:23PM -0700, Muthu Kumar wrote:
>>> On Mon, Jun 11, 2012 at 10:33 AM, Muthu Kumar <muthu.lkml@xxxxxxxxx> wrote:
>>>> On Sun, Jun 10, 2012 at 10:40 AM, Bart Van Assche <bvanassche@xxxxxxx> wrote:
>>>>> On 06/09/12 23:57, Muthu Kumar wrote:
>>>>>
>>>>>> Locking change is the one you posted already (the link above). Anyway,
>>>>>> I have the attached patch *including* the locking change. Original
>>>>>> mail has attachment without locking change. Please use whatever you
>>>>>> need.
>>>>>
>>>>> While we are at it ... the rq->rq_disk and rq->end_io assignments can be
>>>>> performed safely before the spinlock is taken, isn't it ?
>>>>
>>>> Yes.. that spinlock is to protect the q.
>>>
>>> Attached patch with assignment performed before taking the spinlock.
>>
>> This looks correct to me. Bart, can you please include this patch in
>> your series and repost the series?
>
>
> I'll start testing the patch below:
>
> [PATCH] block: Fix blk_execute_rq_nowait() dead queue handling
>
> Make sure that blk_execute_rq_nowait() invokes the proper end_io
> function if the queue is dead and also that this call is performed
> with the queue lock held. Found this through source code review.
>
> Notes:
> - Since blk_execute_rq_nowait() must not be invoked from interrupt
> context it is safe to invoke end_io directly from this function.
> - blk_finish_request() already invokes request.end_io with the
> queue lock held.
>
> Signed-off-by: Bart Van Assche <bvanassche@xxxxxxx>
> Reported-by: Muthukumar Ratty <muthur@xxxxxxxxx>
Hmm... Shouldn't this be:
Signed-off-by: Muthukumar Ratty <muthur@xxxxxxxxx>
Tested-by: Bart Van Assche <bvanassche@xxxxxxx>
???
> Cc: Jens Axboe <axboe@xxxxxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> ---
> block/blk-exec.c | 11 +++++++----
> 1 files changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-exec.c b/block/blk-exec.c
> index fb2cbd5..c0fd83a 100644
> --- a/block/blk-exec.c
> +++ b/block/blk-exec.c
> @@ -42,7 +42,8 @@ static void blk_end_sync_rq(struct request *rq, int error)
> *
> * Description:
> * Insert a fully prepared request at the back of the I/O scheduler queue
> - * for execution. Don't wait for completion.
> + * for execution. Don't wait for completion. This function may invoke
> + * rq->end_io directly.
> */
> void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
> struct request *rq, int at_head,
> @@ -51,18 +52,20 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
> int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
>
> WARN_ON(irqs_disabled());
> +
> + rq->rq_disk = bd_disk;
> + rq->end_io = done;
> +
> spin_lock_irq(q->queue_lock);
>
> if (unlikely(blk_queue_dead(q))) {
> - spin_unlock_irq(q->queue_lock);
> rq->errors = -ENXIO;
> if (rq->end_io)
> rq->end_io(rq, rq->errors);
> + spin_unlock_irq(q->queue_lock);
This hunk already went in; it's included here only for testing. It should be
dropped from the final patch before applying.
> return;
> }
>
> - rq->rq_disk = bd_disk;
> - rq->end_io = done;
> __elv_add_request(q, rq, where);
> __blk_run_queue(q);
> /* the queue is stopped so it won't be run */
> --
> 1.7.7
>
>
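For reference, the pattern the patch enforces (set rq->end_io before taking the
queue lock, then invoke it with the lock held on the dead-queue path) can be
modeled in plain userspace C. This is only an illustrative sketch with made-up
names (toy_queue, toy_request, execute_nowait), not the kernel API:

```c
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

/* Toy model of the dead-queue path in blk_execute_rq_nowait(): the
 * completion callback is assigned before the lock is taken, and is
 * invoked with the lock still held when the queue is dead. */
struct toy_request {
	int errors;
	void (*end_io)(struct toy_request *rq, int error);
};

struct toy_queue {
	pthread_mutex_t lock;
	int dead;
};

void execute_nowait(struct toy_queue *q, struct toy_request *rq,
		    void (*done)(struct toy_request *, int))
{
	rq->end_io = done;	/* safe to assign before taking the lock */

	pthread_mutex_lock(&q->lock);
	if (q->dead) {
		rq->errors = -ENXIO;
		if (rq->end_io)
			rq->end_io(rq, rq->errors);	/* lock held */
		pthread_mutex_unlock(&q->lock);
		return;
	}
	/* ... normal path: queue the request and kick the queue ... */
	pthread_mutex_unlock(&q->lock);
}
```

The point of the reordering is that the dead-queue branch and the normal
completion path (blk_finish_request) then call end_io under the same lock,
so the callback sees one consistent locking context.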