[PATCH] mm/hugetlb: fix gigantic page initialization/allocation

From: Mike Kravetz
Date: Tue Feb 02 2016 - 17:34:08 EST


Attempting to preallocate 1G gigantic huge pages at boot time with
"hugepagesz=1G hugepages=1" on the kernel command line prevents the
kernel from booting and triggers:

kernel BUG at mm/hugetlb.c:1218!

When mapcount accounting was reworked, prep_compound_gigantic_page was
not updated to initialize the compound mapcount (via
compound_mapcount_ptr) to -1.  As a result, the mapcount check in
free_huge_page fails.
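
For reference, page_mapcount() on a compound page folds in the compound
mapcount, which lives in the first tail page and is expected to start
at -1.  A simplified paraphrase of the relevant mm.h helpers from this
period (not the verbatim mainline code):

/* the compound mapcount is stored in the first tail page */
static inline atomic_t *compound_mapcount_ptr(struct page *page)
{
	return &page[1].compound_mapcount;
}

static inline int compound_mapcount(struct page *page)
{
	page = compound_head(page);
	return atomic_read(compound_mapcount_ptr(page)) + 1;
}

static inline int page_mapcount(struct page *page)
{
	int ret = atomic_read(&page->_mapcount) + 1;

	/* head pages also carry the compound (PMD-level) mapcount */
	if (PageCompound(page))
		ret += compound_mapcount(page);
	return ret;
}

Because prep_compound_gigantic_page() never stored -1, compound_mapcount()
reads back a stale, nonzero value and the mapcount check trips at free
time.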

The "BUG_ON" checks in free_huge_page were also converted to
"VM_BUG_ON_PAGE" so that the offending page is dumped when a check
trips, to assist with debugging.
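
With CONFIG_DEBUG_VM enabled, VM_BUG_ON_PAGE dumps the offending page
before calling BUG(); roughly (a paraphrase of include/linux/mmdebug.h,
not the verbatim definition):

#define VM_BUG_ON_PAGE(cond, page)					\
	do {								\
		if (unlikely(cond)) {					\
			dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond)")"); \
			BUG();						\
		}							\
	} while (0)

With CONFIG_DEBUG_VM disabled the check compiles away, unlike the
unconditional BUG_ON it replaces.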

Fixes: af5642a8af ("mm: rework mapcount accounting to enable 4k mapping of THPs")
Suggested-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
---
mm/hugetlb.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
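
Not part of the patch, just for comparison: the non-gigantic path already
does the equivalent initialization at the end of prep_compound_page() in
mm/page_alloc.c.  A trimmed sketch from around this time (some lines
omitted, not the verbatim function):

void prep_compound_page(struct page *page, unsigned int order)
{
	int i;
	int nr_pages = 1 << order;

	set_compound_order(page, order);
	__SetPageHead(page);
	for (i = 1; i < nr_pages; i++) {
		struct page *p = page + i;

		set_page_count(p, 0);
		set_compound_head(p, page);
	}
	/* the initialization prep_compound_gigantic_page() was missing */
	atomic_set(compound_mapcount_ptr(page), -1);
}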

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 12908dc..d7a8024 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1214,8 +1214,8 @@ void free_huge_page(struct page *page)
 
 	set_page_private(page, 0);
 	page->mapping = NULL;
-	BUG_ON(page_count(page));
-	BUG_ON(page_mapcount(page));
+	VM_BUG_ON_PAGE(page_count(page), page);
+	VM_BUG_ON_PAGE(page_mapcount(page), page);
 	restore_reserve = PagePrivate(page);
 	ClearPagePrivate(page);
 
@@ -1286,6 +1286,7 @@ static void prep_compound_gigantic_page(struct page *page, unsigned int order)
 		set_page_count(p, 0);
 		set_compound_head(p, page);
 	}
+	atomic_set(compound_mapcount_ptr(page), -1);
 }
 
 /*
--
2.4.3