Re: [PATCH v3 09/19] mm: memcg/slab: charge individual slab objects instead of pages

From: Roman Gushchin
Date: Tue May 26 2020 - 14:05:14 EST


On Mon, May 25, 2020 at 06:10:55PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Switch to per-object accounting of non-root slab objects.
> >
> > Charging is performed using the obj_cgroup API in the pre_alloc hook.
> > The obj_cgroup is charged with the size of the object plus the size
> > of its metadata, which for now is the size of an obj_cgroup pointer.
> > If the charge succeeds, the actual allocation code is executed;
> > otherwise, -ENOMEM is returned.
> >
> > In the post_alloc hook, if the actual allocation succeeded, the
> > corresponding vmstats are bumped and the obj_cgroup pointer is saved;
> > otherwise, the charge is canceled.
> >
> > On the free path, the obj_cgroup pointer is obtained and used to
> > uncharge the size of the object being released.
> >
> > Memcg and lruvec counters now represent only the memory used by
> > active slab objects and do not include free space. The free space
> > is shared and doesn't belong to any specific cgroup.
> >
> > Global per-node slab vmstats are still modified from the
> > (un)charge_slab_page() functions. The idea is to keep all slab
> > pages accounted as slab pages at the system level.
> >
> > Signed-off-by: Roman Gushchin <guro@xxxxxx>
>
> Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
>
> Suggestion below:
>
> > @@ -568,32 +548,33 @@ static __always_inline int charge_slab_page(struct page *page,
> > gfp_t gfp, int order,
> > struct kmem_cache *s)
> > {
> > - int ret;
> > -
> > - if (is_root_cache(s)) {
> > - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > - PAGE_SIZE << order);
> > - return 0;
> > - }
> > +#ifdef CONFIG_MEMCG_KMEM
> > + if (!is_root_cache(s)) {
>
> This could also benefit from a memcg_kmem_enabled() static key test AFAICS.
> Maybe even have a wrapper for both tests together?

Added.
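
Something along these lines (the helper name below is just a
placeholder, the final one may differ):

	/*
	 * Accounting is only needed for non-root caches, and only when
	 * kmem accounting is actually enabled (static key test).
	 */
	static inline bool memcg_slab_charge_needed(struct kmem_cache *s)
	{
		return memcg_kmem_enabled() && !is_root_cache(s);
	}

so charge_slab_page()/uncharge_slab_page() can do a single check.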

>
> > + int ret;
> >
> > - ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
> > - if (ret)
> > - return ret;
> > + ret = memcg_alloc_page_obj_cgroups(page, gfp, objs_per_slab(s));
>
> You created an empty memcg_alloc_page_obj_cgroups() variant for
> !CONFIG_MEMCG_KMEM, but now the only caller is under CONFIG_MEMCG_KMEM.

Good catch, thanks!
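
The !CONFIG_MEMCG_KMEM stub can simply go away then; from memory it's
just (the exact signature may differ):

	static inline int memcg_alloc_page_obj_cgroups(struct page *page,
						       gfp_t gfp,
						       unsigned int objects)
	{
		return 0;
	}

since the only remaining caller is already under #ifdef CONFIG_MEMCG_KMEM.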

>
> > + if (ret)
> > + return ret;
> >
> > - return memcg_charge_slab(page, gfp, order, s);
> > + percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
>
> Perhaps moving this refcount into memcg_alloc_page_obj_cgroups() (maybe the
> name should be different then) would allow you to avoid adding #ifdef
> CONFIG_MEMCG_KMEM to this function.

The reference counter bump is not related to obj_cgroups: we just take
a reference for each slab page belonging to the kmem_cache. It will go
away later in the patchset together with the rest of the slab cache
refcounting.
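
To illustrate the pairing (matching the hunks above): on slab page
allocation we do

	/* one reference per base page of the slab */
	percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);

and on slab page release the same number of references is dropped with

	percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);

so the kmem_cache can't be destroyed while any of its slab pages still
exist, independently of the obj_cgroup machinery.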

>
> Maybe this is all moot after patch 12/19, will find out :)
>
> > + }
> > +#endif
> > + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > + PAGE_SIZE << order);
> > + return 0;
> > }
> >
> > static __always_inline void uncharge_slab_page(struct page *page, int order,
> > struct kmem_cache *s)
> > {
> > - if (is_root_cache(s)) {
> > - mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > - -(PAGE_SIZE << order));
> > - return;
> > +#ifdef CONFIG_MEMCG_KMEM
> > + if (!is_root_cache(s)) {
>
> Everything from above also applies here.

Done.
Thanks!

>
> > + memcg_free_page_obj_cgroups(page);
> > + percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
> > }
> > -
> > - memcg_free_page_obj_cgroups(page);
> > - memcg_uncharge_slab(page, order, s);
> > +#endif
> > + mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
> > + -(PAGE_SIZE << order));
> > }
> >
> > static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>
>