Re: [PATCH v2 1/1] mm/hugetlb: fix memory offline with hugepage size > memory block size

From: Rui Teng
Date: Sun Sep 25 2016 - 22:49:41 EST


On 9/23/16 7:03 PM, Gerald Schaefer wrote:
On Fri, 23 Sep 2016 14:40:33 +0800
Rui Teng <rui.teng@xxxxxxxxxxxxxxxxxx> wrote:

On 9/22/16 5:51 PM, Michal Hocko wrote:
On Wed 21-09-16 14:35:34, Gerald Schaefer wrote:
dissolve_free_huge_pages() will either run into the VM_BUG_ON() or a
list corruption and addressing exception when trying to set a memory
block offline that is part (but not the first part) of a hugetlb page
with a size > memory block size.

When no other smaller hugetlb page sizes are present, the VM_BUG_ON()
will trigger directly. In the other case we will run into an addressing
exception later, because dissolve_free_huge_page() will not work on the
head page of the compound hugetlb page which will result in a NULL
hstate from page_hstate().

To fix this, first remove the VM_BUG_ON() because it is wrong, and then
use the compound head page in dissolve_free_huge_page().

OK, so dissolve_free_huge_page will now also work on tail pages, which
makes some sense. I would also appreciate a few words on why we want to
sacrifice something as precious as a gigantic page rather than fail the
memory block offline. Dave pointed out the DIMM offline usecase, for example.

Also change locking in dissolve_free_huge_page(), so that it only takes
the lock when actually removing a hugepage.

From a quick look it seems this has been broken since it was introduced
by c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle
hugepage"). Do we want to have this backported to stable? In any case,
a Fixes: SHA1 tag would be really nice.



If huge page hot-plug was introduced by c8721bbbdd36, and that commit
already indicated that gigantic pages are not supported:

"As for larger hugepages (1GB for x86_64), it's not easy to do
hotremove over them because it's larger than memory block. So
we now simply leave it to fail as it is."

Is it possible that gigantic page hot-plug has never been supported?

Offlining blocks with gigantic pages only fails when they are in use;
I guess that is what the description meant. Maybe it was also meant to
fail in any case, but that is not what the patch did.

With free gigantic pages, it looks like it only ever worked when
offlining the first block of a gigantic page. And as long as you only
have gigantic pages, the VM_BUG_ON() would actually have triggered on
every block that is not gigantic-page-aligned, even if the block is not
part of any gigantic page at all.

I have not hit the VM_BUG_ON() issue on my powerpc system. It seems the
alignment issue does not always occur on other architectures.


Given the age of the patch, it is a little bit surprising that it never
struck anyone, and that we have now found it on two architectures at
once :-)


I made another patch for this problem, and also tried the first version
of your patch on my system. But both only postpone the error.
HugePages_Free changes from 2 to 1 when I offline a huge page, so I
think the rollback is not done correctly.

# cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
HugePages_Total: 2
HugePages_Free: 1
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 16777216 kB

HugePages_Free is supposed to be reduced when offlining a block, but
then HugePages_Total should also be reduced, so that is strange. On my
system both were reduced. Does this happen with any version of my patch?

No, I only tested your first version. I have no objection to your patch,
because the error was not introduced by it.


What do you mean by postponing the error? Can you reproduce the BUG_ON
or the addressing exception with my patch?

I mean that gigantic page offlining does not work at all in my
environment, even when the correct head page has been found. My approach
is to filter out all the tail pages, while yours is to find the head
page from a tail page.

Since you can offline gigantic pages successfully, I assume the function
is supported now. I will debug the problem in my environment.



I will run more tests on it, but can anyone confirm whether this
function has been implemented and tested before?