Re: [PATCH UPDATED 38/40] cifs: use workqueue instead of slow-work

From: Tejun Heo
Date: Sun Jan 24 2010 - 03:20:21 EST


Hello,

On 01/22/2010 08:45 PM, Jeff Layton wrote:
>> @@ -584,13 +583,18 @@ is_valid_oplock_break(struct smb_hdr *bu
>> pCifsInode->clientCanCacheAll = false;
>> if (pSMB->OplockLevel == 0)
>> pCifsInode->clientCanCacheRead = false;
>> - rc = slow_work_enqueue(&netfile->oplock_break);
>> - if (rc) {
>> - cERROR(1, ("failed to enqueue oplock "
>> - "break: %d\n", rc));
>> - } else {
>> - netfile->oplock_break_cancelled = false;
>> - }
>> +
>> + /*
>> + * cifs_oplock_break_put() can't be called
>> + * from here. Get reference after queueing
>> + * succeeded. cifs_oplock_break() will
>> + * synchronize using GlobalSMBSeslock.
>> + */
>> + if (queue_work(system_single_wq,
>> + &netfile->oplock_break))
>> + cifs_oplock_break_get(netfile);
>> + netfile->oplock_break_cancelled = false;
>> +
>
> I think we want to move the setting of netfile->oplock_break_cancelled
> inside of the if above it.
>
> If the work is already queued, I don't think we want to set the flag to
> false. Doing so might be problematic if we somehow end up processing
> this oplock break after a previous oplock break/reconnect/reopen
> sequence, but while the initial oplock break is still running.

Hmmm.... I can certainly do that, but it would differ from the
original code. slow_work_enqueue() doesn't distinguish between a
successful enqueue and one that was ignored because the work was
already queued. With the conversion to queue_work(), there's no
failure case there, so always setting oplock_break_cancelled is
equivalent to the original code. Even if changing it is the right
thing to do, it should probably be done in a separate patch, since it
changes the logic. Are you sure it needs to be changed?

Thanks.

--
tejun