Re: [PATCH 2/2] aio: fix the confliction of read events and migrating ring page

From: Benjamin LaHaise
Date: Thu Mar 20 2014 - 12:30:18 EST


On Thu, Mar 20, 2014 at 10:32:07AM -0400, Dave Jones wrote:
> On Thu, Mar 20, 2014 at 01:46:25PM +0800, Gu Zheng wrote:
>
> > diff --git a/fs/aio.c b/fs/aio.c
> > index 88ad40c..e353085 100644
> > --- a/fs/aio.c
> > +++ b/fs/aio.c
> > @@ -319,6 +319,9 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
> >  	ctx->ring_pages[old->index] = new;
> >  	spin_unlock_irqrestore(&ctx->completion_lock, flags);
> >
> > +	/* Ensure read event is completed before putting old page */
> > +	mutex_lock(&ctx->ring_lock);
> > +	mutex_unlock(&ctx->ring_lock);
> >  	put_page(old);
> >
> >  	return rc;
>
> This looks a bit weird. Would using a completion work here?

Nope. This is actually the most elegant fix I've seen for this problem:
every other approach has relied on adding extra spin locks (which are only
needed in the migration case) around the reader-side accesses to
ring_pages. That said, the patch is not a complete solution, as the
update of the ring's head pointer can still get lost. I think the right
thing to do is simply to take the ring_lock mutex over the entire page
migration operation. That should be safe, since nowhere else does
ring_lock nest with any other locks.
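
To make that concrete, below is a rough, untested sketch (illustration
only, not an actual patch) of what holding ring_lock across the whole of
aio_migratepage() might look like; the lookup of ctx, the mapping-move
step and the failure paths are simplified, and the names follow the code
quoted above. The empty lock/unlock pair in the patch works because
mutex_lock() cannot return until a reader currently inside
aio_read_events_ring() has dropped ring_lock; holding the mutex for the
whole operation extends that same guarantee to the reader's update of the
ring head.

static int aio_migratepage(struct address_space *mapping, struct page *new,
			   struct page *old, enum migrate_mode mode)
{
	struct kioctx *ctx = mapping->private_data;
	unsigned long flags;
	int rc = MIGRATEPAGE_SUCCESS;

	/* Hold ring_lock for the whole migration so that
	 * aio_read_events_ring() cannot run concurrently.  ring_lock is
	 * never nested inside another lock, so this cannot deadlock.
	 */
	mutex_lock(&ctx->ring_lock);

	/* ... move the page mapping from old to new here, unlocking and
	 * returning early on failure (elided in this sketch) ...
	 */

	get_page(new);

	/* completion_lock still keeps writers from appending events while
	 * the page contents are copied over to the new page.
	 */
	spin_lock_irqsave(&ctx->completion_lock, flags);
	migrate_page_copy(new, old);
	ctx->ring_pages[old->index] = new;
	spin_unlock_irqrestore(&ctx->completion_lock, flags);

	put_page(old);

	mutex_unlock(&ctx->ring_lock);
	return rc;
}

Either way, the key property is that no extra locking is added to the
read path; the migration path simply waits its turn on the mutex that
readers already take.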

-ben

> Dave

--
"Thought is the essence of where you are now."