Re: [PATCH] mm: hugetlb: bail out unmapping after serving reference page

From: Andrew Morton
Date: Wed Feb 22 2012 - 16:07:04 EST


On Wed, 22 Feb 2012 20:35:34 +0800
Hillf Danton <dhillf@xxxxxxxxx> wrote:

> When unmapping a given VM range, we can bail out once the supplied
> reference page has been unmapped; this is a minor optimization.
>
> Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
> ---
>
> --- a/mm/hugetlb.c	Wed Feb 22 19:34:12 2012
> +++ b/mm/hugetlb.c	Wed Feb 22 19:50:26 2012
> @@ -2280,6 +2280,9 @@ void __unmap_hugepage_range(struct vm_ar
>  		if (pte_dirty(pte))
>  			set_page_dirty(page);
>  		list_add(&page->lru, &page_list);
> +
> +		if (page == ref_page)
> +			break;
>  	}
>  	spin_unlock(&mm->page_table_lock);
>  	flush_tlb_range(vma, start, end);

Perhaps add a little comment to this explaining what's going on?


It would be sufficient to do

	if (ref_page)
		break;

This is more efficient, and doesn't make people worry about whether
this value of `page' is the same as the one which
pte_page(huge_ptep_get()) earlier returned.
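To sketch why the pointer test alone suffices: a toy userspace model of the loop's control flow (not kernel code; the struct and function names here are illustrative), assuming the real loop's earlier check that skips every page except ref_page when one is supplied. By the time control reaches the bail-out point, `page == ref_page` is already implied, so testing `ref_page` for non-NULL is enough.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct page; the kernel's is of course different. */
struct page { int id; };

/* Models the __unmap_hugepage_range() loop shape: when ref_page is
 * non-NULL, only that page is processed, so a plain `if (ref_page)'
 * bail-out is equivalent to `if (page == ref_page)'. */
static int unmap_range(struct page *pages, int n, struct page *ref_page)
{
	int unmapped = 0;
	int i;

	for (i = 0; i < n; i++) {
		struct page *page = &pages[i];

		/* Earlier in the real loop: skip all pages except
		 * the supplied reference page. */
		if (ref_page && page != ref_page)
			continue;

		unmapped++;	/* stands in for the unmap + list_add */

		/* The suggested bail-out: page == ref_page is implied
		 * here, so testing the pointer is sufficient. */
		if (ref_page)
			break;
	}
	return unmapped;
}
```

With a ref_page supplied, exactly one page is unmapped; with none, the whole range is processed, matching the existing behaviour.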

Why do we evaluate `page' twice inside that loop anyway? And why do we
check for huge_pte_none() twice? It looks all messed up.



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/