Re: [BUG] BUG: unable to handle kernel paging request at fffba000

From: Ilya Dryomov
Date: Wed Jan 19 2011 - 17:50:30 EST


On Wed, Jan 19, 2011 at 11:19:09PM +0100, Andrea Arcangeli wrote:
> Hello Ilya,
>
> thanks for sending me the gdb info too.
>
> Can you test this fix? Thanks a lot! (It only affects x86 32-bit
> builds with HIGHPTE enabled.)
>
> ====
> Subject: fix pte_unmap in khugepaged for highpte x86_32
>
> From: Andrea Arcangeli <aarcange@xxxxxxxxxx>
>
> __collapse_huge_page_copy still dereferences the pte passed as a parameter,
> so we must call pte_unmap only after __collapse_huge_page_copy returns, not before.
>
> Signed-off-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>

It fixes the above problem for me. Thanks a lot, Andrea.
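
For anyone else chasing this: with CONFIG_HIGHPTE the pte page can live in
highmem, so pte_offset_map() maps it temporarily with kmap_atomic() and
pte_unmap() tears that mapping down again; dereferencing the pte pointer
after the unmap is what produced the paging request oops above. Without
HIGHPTE (and on 64-bit) pte_unmap() is a no-op, which is why only x86_32
highpte builds were affected. Below is a rough userspace analogy, just a
sketch with mmap()/munmap() standing in for pte_offset_map()/pte_unmap()
(this is not the kernel code itself):

/* Userspace analogy only: a pointer into a temporary mapping must not be
 * dereferenced after the mapping has been torn down. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 4096;
        /* stand-in for pte_offset_map(): create a temporary mapping */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        strcpy(p, "mapping still live, dereference is safe");
        puts(p);                /* last use happens before the unmap: OK */

        munmap(p, len);         /* stand-in for pte_unmap(pte) */
        /* puts(p); */          /* use after unmap: faults, like the bug */
        return 0;
}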

> ---
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 004c9c2..c4f634b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1837,9 +1837,9 @@ static void collapse_huge_page(struct mm_struct *mm,
>          spin_lock(ptl);
>          isolated = __collapse_huge_page_isolate(vma, address, pte);
>          spin_unlock(ptl);
> -        pte_unmap(pte);
>
>          if (unlikely(!isolated)) {
> +                pte_unmap(pte);
>                  spin_lock(&mm->page_table_lock);
>                  BUG_ON(!pmd_none(*pmd));
>                  set_pmd_at(mm, address, pmd, _pmd);
> @@ -1856,6 +1856,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>          anon_vma_unlock(vma->anon_vma);
>
>          __collapse_huge_page_copy(pte, new_page, vma, address, ptl);
> +        pte_unmap(pte);
>          __SetPageUptodate(new_page);
>          pgtable = pmd_pgtable(_pmd);
>          VM_BUG_ON(page_count(pgtable) != 1);
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/