Re: [PATCH v2 3/5] memory-hotplug: auto offline page_cgroup when onlining memory block failed

From: Wen Congyang
Date: Wed Oct 17 2012 - 11:26:31 EST


At 2012/10/17 20:08, wency@xxxxxxxxxxxxxx Wrote:
> From: Wen Congyang<wency@xxxxxxxxxxxxxx>
>
> When a memory block is onlined, we try to allocate memory on that node
> to store page_cgroup. If onlining the memory block fails, we don't
> offline the page_cgroup, and we have no chance to offline it unless
> the memory block is later onlined successfully. As a result, we can't
> hot-remove the memory device on that node, because some of its memory
> is still used to store page_cgroup. If onlining the memory block
> fails, there is no need to store page_cgroup for that memory, so
> automatically offline the page_cgroup when onlining a memory block fails.
>
> CC: David Rientjes<rientjes@xxxxxxxxxx>
> CC: Jiang Liu<liuj97@xxxxxxxxx>
> CC: Len Brown<len.brown@xxxxxxxxx>
> CC: Benjamin Herrenschmidt<benh@xxxxxxxxxxxxxxxxxxx>
> CC: Paul Mackerras<paulus@xxxxxxxxx>
> CC: Christoph Lameter<cl@xxxxxxxxx>
> Cc: Minchan Kim<minchan.kim@xxxxxxxxx>
> CC: Andrew Morton<akpm@xxxxxxxxxxxxxxxxxxxx>
> CC: KOSAKI Motohiro<kosaki.motohiro@xxxxxxxxxxxxxx>
> CC: Yasuaki Ishimatsu<isimatu.yasuaki@xxxxxxxxxxxxxx>
> Signed-off-by: Wen Congyang<wency@xxxxxxxxxxxxxx>

This patch has already been acked by KOSAKI Motohiro.
I forgot to add "Acked-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>"


> ---
> mm/page_cgroup.c | 3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
> index 5ddad0c..44db00e 100644
> --- a/mm/page_cgroup.c
> +++ b/mm/page_cgroup.c
> @@ -251,6 +251,9 @@ static int __meminit page_cgroup_callback(struct notifier_block *self,
>  			mn->nr_pages, mn->status_change_nid);
>  		break;
>  	case MEM_CANCEL_ONLINE:
> +		offline_page_cgroup(mn->start_pfn,
> +				mn->nr_pages, mn->status_change_nid);
> +		break;
>  	case MEM_GOING_OFFLINE:
>  		break;
>  	case MEM_ONLINE: