Re: [V2 PATCH 1/2] aio: clean up aio_migratepage() and related code much

From: Gu Zheng
Date: Tue Mar 25 2014 - 06:32:55 EST


Hi Ben,
On 03/24/2014 09:20 PM, Benjamin LaHaise wrote:

> On Mon, Mar 24, 2014 at 06:59:30PM +0800, Gu Zheng wrote:
>> The page migration framework holds lock_page() on both the old and the
>> new page while migrating, so both pages stay locked for the duration of
>> the migration. The aio context teardown routine calls *truncate* (in
>> put_aio_ring_file()) to truncate the pagecache, which has to acquire the
>> page lock for each page, one by one. So there is a natural mutual
>> exclusion between *page migration* and truncate().
>>
>> If put_aio_ring_file() is called first in the context teardown flow
>> (aio_free_ring()), then page migration and ctx freeing become mutually
>> exclusive, guarded by lock_page() vs. truncate(): once a page has been
>> removed from the radix tree it will not be migrated, and conversely the
>> context cannot be freed while a page migration is ongoing.
>
> Sorry, but your change to remove the taking of ->private_lock in
> put_aio_ring_file() is not safe. If a malicious user reinstantiates
> any pages in the ring buffer's mapping, there is nothing protecting
> the system against incoherent accesses of ->ring_pages. One possible
> way of making this occur would be to use mremap() to expand the size
> of the mapping or move it to a different location in the user process'
> address space. Yes, it's a tiny race, but it's possible. There is
> absolutely no reason to remove this locking -- ring teardown is
> hardly a performance sensitive code path. I'm going to stick with my
> approach instead.
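
To make sure I am reading your concern correctly: the protection you want
to keep is the ->private_lock section in put_aio_ring_file(), roughly like
the following (a sketch from my reading of fs/aio.c, where "inode" stands
for the ring file's inode; the details may differ from your patch):

	/* Teardown side: clear the back pointer under ->private_lock, so a
	 * concurrent aio_migratepage(), which looks up the kioctx from
	 * mapping->private_data under the same lock, either sees NULL or
	 * finishes its update of ctx->ring_pages before we free them.
	 */
	spin_lock(&inode->i_mapping->private_lock);
	inode->i_mapping->private_data = NULL;
	ctx->aio_ring_file = NULL;
	spin_unlock(&inode->i_mapping->private_lock);

And the window you describe is a page of the ring mapping being
re-instantiated (e.g. after mremap()) and then picked for migration while
free_ioctx() is already tearing ->ring_pages down. Is that the scenario
you have in mind?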

OK, you can go ahead with your approach, but I will reserve judgement on
the issue you mentioned above until I have looked into it more closely.
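
For reference, the ordering I was describing in the changelog is roughly
this (a simplified sketch of aio_free_ring(), not the exact patch):

	static void aio_free_ring(struct kioctx *ctx)
	{
		int i;

		/* Drop the ring file first.  truncate_setsize() inside
		 * put_aio_ring_file() locks each page in turn, so it cannot
		 * run concurrently with aio_migratepage(), which holds the
		 * page locks; and once a page is gone from the radix tree it
		 * can no longer be picked for migration.
		 */
		put_aio_ring_file(ctx);

		for (i = 0; i < ctx->nr_pages; i++) {
			if (ctx->ring_pages[i]) {
				put_page(ctx->ring_pages[i]);
				ctx->ring_pages[i] = NULL;
			}
		}

		if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages) {
			kfree(ctx->ring_pages);
			ctx->ring_pages = NULL;
		}
	}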

BTW, please also send it to the 3.12.y and 3.13.y stable trees once it is
merged into Linus' tree.

Thanks,
Gu

>
> -ben

