Re: [PATCH 3/5] hugetlb: try to search again if it is really needed

From: Andrew Morton
Date: Wed Feb 01 2012 - 17:43:55 EST


On Fri, 13 Jan 2012 19:45:45 +0800
Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx> wrote:

> Search again only if some holes may have been skipped the first time
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx>
> ---
> arch/x86/mm/hugetlbpage.c | 8 ++++----
> 1 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index e12debc..6bf5735 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -309,9 +309,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
> struct hstate *h = hstate_file(file);
> struct mm_struct *mm = current->mm;
> struct vm_area_struct *vma;
> - unsigned long base = mm->mmap_base, addr = addr0;
> + unsigned long base = mm->mmap_base, addr = addr0, start_addr;

grr. The multiple-definitions-per-line thing is ugly: it makes for more
patch conflicts and reduces opportunities to add useful comments.

--- a/arch/x86/mm/hugetlbpage.c~hugetlb-try-to-search-again-if-it-is-really-needed-fix
+++ a/arch/x86/mm/hugetlbpage.c
@@ -309,7 +309,9 @@ static unsigned long hugetlb_get_unmappe
struct hstate *h = hstate_file(file);
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
- unsigned long base = mm->mmap_base, addr = addr0, start_addr;
+ unsigned long base = mm->mmap_base;
+ unsigned long addr = addr0;
+ unsigned long start_addr;
unsigned long largest_hole = mm->cached_hole_size;

/* don't allow allocations above current base */
_


> unsigned long largest_hole = mm->cached_hole_size;
> - int first_time = 1;
>
> /* don't allow allocations above current base */
> if (mm->free_area_cache > base)
> @@ -322,6 +321,8 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
> mm->free_area_cache = base;
> }
> try_again:
> + start_addr = mm->free_area_cache;
> +
> /* make sure it can fit in the remaining address space */
> if (mm->free_area_cache < len)
> goto fail;
> @@ -357,10 +358,9 @@ fail:
> * if hint left us with no space for the requested
> * mapping then try again:
> */
> - if (first_time) {
> + if (start_addr != base) {
> mm->free_area_cache = base;
> largest_hole = 0;
> - first_time = 0;
> goto try_again;

The code used to retry a single time. With this change the retrying is
potentially infinite. What is the reason for this change? What is the
potential for causing a lockup?
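
For reference, here is a minimal user-space model of the retry control
flow as quoted above. The names and values are invented for
illustration, and search() merely stands in for the top-down VMA walk,
always failing so the retry path is exercised; it is a sketch of the
quoted logic, not the kernel code itself.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model: 'base' is the top of the search window, 'cache' stands in
 * for mm->free_area_cache, and search() stands in for the top-down VMA
 * walk.  It always fails here so the retry logic is what gets exercised.
 */
static unsigned long base = 0x7000;
static unsigned long cache = 0x3000;	/* a hint somewhere below base */

static bool search(void)
{
	return false;			/* pretend every pass fails */
}

int main(void)
{
	unsigned long start_addr;
	int passes = 0;

try_again:
	start_addr = cache;		/* where this pass starts scanning */
	passes++;
	if (search())
		return 0;

	/* fail: retry from the top only if this pass did not start there */
	if (start_addr != base) {
		cache = base;
		goto try_again;
	}

	printf("gave up after %d passes\n", passes);
	return 1;
}

In this toy, the second pass starts at 'base', so start_addr == base and
the second failure falls through instead of looping again; whether the
real walk can leave free_area_cache in a state that re-arms the retry is
what the question above is probing.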
