Re: [PATCH] hugetlb: fix hugetlb cgroup refcounting during vma split

From: Guillaume Morin
Date: Tue Aug 31 2021 - 10:02:24 EST


On 30 Aug 14:50, Mike Kravetz wrote:
> Guillaume Morin reported hitting the following WARNING followed
> by a GPF or NULL pointer dereference either in cgroups_destroy or in
> the kill_css path:
>
> percpu ref (css_release) <= 0 (-1) after switching to atomic
> WARNING: CPU: 23 PID: 130 at lib/percpu-refcount.c:196 percpu_ref_switch_to_atomic_rcu+0x127/0x130
> CPU: 23 PID: 130 Comm: ksoftirqd/23 Kdump: loaded Tainted: G O 5.10.60 #1
> RIP: 0010:percpu_ref_switch_to_atomic_rcu+0x127/0x130
> Call Trace:
> rcu_core+0x30f/0x530
> rcu_core_si+0xe/0x10
> __do_softirq+0x103/0x2a2
> ? sort_range+0x30/0x30
> run_ksoftirqd+0x2b/0x40
> smpboot_thread_fn+0x11a/0x170
> kthread+0x10a/0x140
> ? kthread_create_worker_on_cpu+0x70/0x70
> ret_from_fork+0x22/0x30
>
> Upon further examination, it was discovered that the css structure
> was associated with hugetlb reservations.
>
> For private hugetlb mappings the vma points to a reserve map that
> contains a pointer to the css. At mmap time, reservations are set up
> and a reference to the css is taken. This reference is dropped in the
> vma close operation; hugetlb_vm_op_close. However, if a vma is split
> no additional reference to the css is taken yet hugetlb_vm_op_close will
> be called twice for the split vma resulting in an underflow.
>
> Fix by taking another reference in hugetlb_vm_op_open. Note that the
> reference is only taken for the owner of the reserve map. In the more
> common fork case, the pointer to the reserve map is cleared for
> non-owning vmas.
>
> Fixes: e9fe92ae0cd2 ("hugetlb_cgroup: add reservation accounting for private mappings")
> Reported-by: Guillaume Morin <guillaume@xxxxxxxxxxx>
> Suggested-by: Guillaume Morin <guillaume@xxxxxxxxxxx>
> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>

I verified that the patch does fix the underflow. I appreciate the help!
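For anyone following along, the lifecycle described above can be modeled in
plain userspace C. This is only an illustrative sketch, not kernel code: the
struct names, the owns_resv flag, and the split() helper are invented here to
mirror the pattern of one css reference taken at mmap time, hugetlb_vm_op_open
running on the new vma at split, and hugetlb_vm_op_close running once per vma.

```c
/* Illustrative userspace model of the refcount bug (NOT kernel code).
 * One css reference is taken when the private mapping is set up; close
 * drops one reference per vma, so a split without a matching get in
 * open underflows the count. */
#include <assert.h>

struct css { int refcnt; };
struct vma { struct css *css; int owns_resv; };

static void vm_op_close(struct vma *v)
{
	if (v->owns_resv)
		v->css->refcnt--;	/* models css_put in the close path */
}

/* Before the fix: open takes no extra reference for the owner. */
static void vm_op_open_buggy(struct vma *v) { (void)v; }

/* After the fix: the reserve map owner takes another reference. */
static void vm_op_open_fixed(struct vma *v)
{
	if (v->owns_resv)
		v->css->refcnt++;	/* models css_get in the open path */
}

/* Split duplicates the vma; open runs on the new half, and close will
 * later run on both halves. Returns the resulting reference count. */
static int split_and_close_both(struct vma *v, void (*open)(struct vma *))
{
	struct vma second = *v;	/* both halves share the reserve map */

	open(&second);
	vm_op_close(v);
	vm_op_close(&second);
	return v->css->refcnt;
}
```

With the buggy open, one reference is dropped twice and the count goes to
-1, matching the "percpu ref <= 0 (-1)" warning; with the fixed open, the
extra get balances the second put and the count returns to zero.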

Feel free to add:
Tested-by: Guillaume Morin <guillaume@xxxxxxxxxxx>

--
Guillaume Morin <guillaume@xxxxxxxxxxx>