Re: [PATCH 4/7] ide: ide_hwgroup_t.rq doesn't need an ide_lock held

From: Bartlomiej Zolnierkiewicz
Date: Fri Oct 10 2008 - 12:23:23 EST


On Friday 10 October 2008, Elias Oltmanns wrote:
> Bartlomiej Zolnierkiewicz <bzolnier@xxxxxxxxx> wrote:
> > From: Bartlomiej Zolnierkiewicz <bzolnier@xxxxxxxxx>
> > Subject: [PATCH] ide: ide_hwgroup_t.rq doesn't need an ide_lock held
> >
> > While at it:
> > - no need to check for hwgroup presence in ide_dump_opcode()
> >
> > Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@xxxxxxxxx>
> > ---
> [...]
> > Index: b/drivers/ide/ide-io.c
> > ===================================================================
> > --- a/drivers/ide/ide-io.c
> > +++ b/drivers/ide/ide-io.c
> [...]
> > @@ -274,7 +269,11 @@ static void ide_complete_pm_request (ide
> >  		drive->dev_flags &= ~IDE_DFLAG_BLOCKED;
> >  		blk_start_queue(drive->queue);
> >  	}
> > -	HWGROUP(drive)->rq = NULL;
> > +	spin_unlock_irqrestore(&ide_lock, flags);
> > +
> > +	drive->hwif->hwgroup->rq = NULL;
> > +
> > +	spin_lock_irqsave(&ide_lock, flags);
> >  	if (__blk_end_request(rq, 0, 0))
> >  		BUG();
> >  	spin_unlock_irqrestore(&ide_lock, flags);
>
> Is it really an improvement to release the lock here?

Yes, since it is preparation for using the right lock here
(drive->queue->queue_lock instead of ide_lock), which is done
in patch #6/7:

@@ -263,10 +260,8 @@ static void ide_complete_pm_request (ide
 
 	drive->hwif->hwgroup->rq = NULL;
 
-	spin_lock_irqsave(&ide_lock, flags);
-	if (__blk_end_request(rq, 0, 0))
+	if (blk_end_request(rq, 0, 0))
 		BUG();
-	spin_unlock_irqrestore(&ide_lock, flags);
 }

[ ide_lock and drive->queue->queue_lock are the same lock at the moment
(which is wrong, since they are meant to protect completely different
things), but after this patchset it should be quite easy to address. ]
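
The aliasing comes from queue setup: ide_init_queue() hands ide_lock
to the block layer as the per-queue lock, so drive->queue->queue_lock
ends up pointing at &ide_lock. Roughly (abridged from
drivers/ide/ide-probe.c, quoting from memory):

	/* the second argument becomes q->queue_lock */
	q = blk_init_queue_node(do_ide_request, &ide_lock,
				hwif_to_node(hwif));
	...
	drive->queue = q;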

Also, ide_complete_pm_request() above is used only to complete
PM suspend/resume requests, so it is not performance critical and
the extra unlock/lock cycle added here doesn't matter in practice.
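
To make the end result concrete, with both hunks applied the tail of
ide_complete_pm_request() should look more or less like this (pieced
together from the hunks above, not pasted from the tree):

	spin_unlock_irqrestore(&ide_lock, flags);

	/* no lock held -- hwgroup->rq doesn't need one */
	drive->hwif->hwgroup->rq = NULL;

	/* blk_end_request() takes queue_lock internally */
	if (blk_end_request(rq, 0, 0))
		BUG();
}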

Thanks,
Bart