Re: [PATCH] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()

From: Michal Hocko
Date: Mon Feb 18 2019 - 04:27:56 EST


On Sat 16-02-19 21:31:12, Jingxiangfeng wrote:
> From: Jing Xiangfeng <jingxiangfeng@xxxxxxxxxx>
>
> We can use the following command to dynamically allocate huge pages:
> echo NR_HUGEPAGES > /proc/sys/vm/nr_hugepages
> The count in __nr_hugepages_store_common() is parsed from
> /proc/sys/vm/nr_hugepages and can be as large as ULONG_MAX. In that
> case the operation 'count += h->nr_huge_pages - h->nr_huge_pages_node[nid]'
> overflows, and count wraps around to a small, wrong number.

Could you be more specific about what the runtime effect of the
overflow is? I haven't checked more closely, but I would assume that we
will simply shrink the pool size because count will become a small
number.
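
For illustration, a minimal userspace sketch of that wraparound (the
variable names mirror the hugetlb counters, but the values are made up
and this is not the kernel code path):

	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		/* stand-ins for h->nr_huge_pages and h->nr_huge_pages_node[nid] */
		unsigned long nr_huge_pages = 100;
		unsigned long nr_huge_pages_node = 1;
		/* e.g. from "echo 18446744073709551615 > nr_hugepages" */
		unsigned long count = ULONG_MAX;

		/* unsigned arithmetic wraps: ULONG_MAX + 99 == 98 */
		count += nr_huge_pages - nr_huge_pages_node;
		printf("count after wraparound: %lu\n", count);
		return 0;
	}

so set_max_huge_pages() would then be asked to shrink the pool to 98
pages rather than grow it.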

Is there any reason to report an error in that case? We do not report
errors when we cannot allocate the requested number of huge pages, so
why is this case any different?

> So check for the overflow to fix this problem.
>
> Signed-off-by: Jing Xiangfeng <jingxiangfeng@xxxxxxxxxx>
> ---
> mm/hugetlb.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index afef616..55173c3 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2423,7 +2423,12 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
>  		 * per node hstate attribute: adjust count to global,
>  		 * but restrict alloc/free to the specified node.
>  		 */
> +		unsigned long old_count = count;
>  		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> +		if (count < old_count) {
> +			err = -EINVAL;
> +			goto out;
> +		}
>  		init_nodemask_of_node(nodes_allowed, nid);
>  	} else
>  		nodes_allowed = &node_states[N_MEMORY];
> --
> 2.7.4
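
FWIW, if we did want to fail on the overflow, a sketch based on the
check_add_overflow() helper from <linux/overflow.h> might look like the
following (untested, keeping the patch's err/goto out convention):

	unsigned long total;

	/* check_add_overflow() returns true if the addition wrapped */
	if (check_add_overflow(count,
			       h->nr_huge_pages - h->nr_huge_pages_node[nid],
			       &total)) {
		err = -EINVAL;
		goto out;
	}
	count = total;

That said, the question above still stands: it is not obvious that
reporting an error is the right behaviour here in the first place.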

--
Michal Hocko
SUSE Labs