[PATCH 04/14] mm: reduce duplicate page fault code

From: Wu Fengguang
Date: Tue Apr 07 2009 - 08:03:17 EST


Restore the simplicity of the no_cached_page block in filemap_fault():
the VM_FAULT_RETRY case is not all that different, so it can share the
common retry path instead of duplicating it.

No readahead/readaround will be performed after no_cached_page,
because no_cached_page either means MADV_RANDOM or some error condition.

Cc: Ying Han <yinghan@xxxxxxxxxx>
Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
---
mm/filemap.c | 22 +++-------------------
1 file changed, 3 insertions(+), 19 deletions(-)

--- mm.orig/mm/filemap.c
+++ mm/mm/filemap.c
@@ -1565,7 +1565,6 @@ int filemap_fault(struct vm_area_struct
retry_find:
page = find_lock_page(mapping, vmf->pgoff);

-retry_find_nopage:
/*
* For sequential accesses, we use the generic readahead logic.
*/
@@ -1615,6 +1614,7 @@ retry_find_nopage:
start = vmf->pgoff - ra_pages / 2;
do_page_cache_readahead(mapping, file, start, ra_pages);
}
+retry_find_retry:
retry_ret = find_lock_page_retry(mapping, vmf->pgoff,
vma, &page, retry_flag);
if (retry_ret == VM_FAULT_RETRY)
@@ -1626,7 +1626,6 @@ retry_find_nopage:
if (!did_readaround)
ra->mmap_miss--;

-retry_page_update:
/*
* We have a locked page in the page cache, now we need to check
* that it's up-to-date. If not, it is going to be due to an error.
@@ -1662,23 +1661,8 @@ no_cached_page:
* In the unlikely event that someone removed it in the
* meantime, we'll just come back here and read it again.
*/
- if (error >= 0) {
- /*
- * If caller cannot tolerate a retry in the ->fault path
- * go back to check the page again.
- */
- if (!retry_flag)
- goto retry_find;
-
- retry_ret = find_lock_page_retry(mapping, vmf->pgoff,
- vma, &page, retry_flag);
- if (retry_ret == VM_FAULT_RETRY)
- return retry_ret;
- if (!page)
- goto retry_find_nopage;
- else
- goto retry_page_update;
- }
+ if (error >= 0)
+ goto retry_find_retry;

/*
* An error return from page_cache_read can result if the

--
