Re: [PATCH] slub: limit count of partial slabs scanned to gather statistics

From: Konstantin Khlebnikov
Date: Tue May 05 2020 - 02:20:32 EST


On 05/05/2020 00.19, David Rientjes wrote:
> On Mon, 4 May 2020, Konstantin Khlebnikov wrote:
>
>> To get an exact count of free and used objects, slub has to scan the list
>> of partial slabs. This may take a long time. Scanning holds a spinlock and
>> blocks allocations which move partial slabs to per-cpu lists and back.
>>
>> Example found in the wild:
>>
>> # cat /sys/kernel/slab/dentry/partial
>> 14478538 N0=7329569 N1=7148969
>> # time cat /sys/kernel/slab/dentry/objects
>> 286225471 N0=136967768 N1=149257703
>>
>> real 0m1.722s
>> user 0m0.001s
>> sys 0m1.721s
>>
>> The same problem in slab was addressed in commit f728b0a5d72a ("mm, slab:
>> faster active and free stats") by adding more kmem cache statistics.
>> For slub, the same approach would require an atomic op on the fast path
>> when an object is freed.
>>
>> Let's simply limit the count of scanned slabs and print a warning.
>> The limit is set in /sys/module/slub/parameters/max_partial_to_count.
>> The default is 10000, which should be enough for most sane cases.
>>
>> Return a linear approximation if the list of partials is longer than the
>> limit. Nobody should notice the difference.


> Hi Konstantin,
>
> Do you only exhibit this on slub for SO_ALL|SO_OBJECTS? I notice the
> timing in the changelog is only looking at "objects" and not "partial".

"partial" is a count of partial slabs which simply sums per-numa counters.
Affected only "objects" and "objects_partial" which walk the list.
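
For illustration, here is a minimal standalone sketch of the two read paths
(the field names mirror struct kmem_cache_node in mm/slub.c, but the structs
themselves are simplified stand-ins, not the kernel's actual code):

/* Illustrative model: why "partial" is O(nodes) while "objects" is
 * O(partial slabs). */

struct slab_model {
        unsigned int objects;           /* objects in this slab */
        struct slab_model *next;        /* next slab on the partial list */
};

struct node_model {
        unsigned long nr_partial;       /* maintained counter: O(1) read */
        struct slab_model *partial;     /* list head: O(n) to walk */
};

/* "partial": just sums the per-node counters, no list walk at all. */
unsigned long count_partial_slabs(struct node_model *nodes, int nr_nodes)
{
        unsigned long x = 0;

        for (int i = 0; i < nr_nodes; i++)
                x += nodes[i].nr_partial;
        return x;
}

/* "objects"/"objects_partial": visit every partial slab while holding
 * the node's list_lock -- ~14.5 million iterations in the example above. */
unsigned long count_partial_objects(struct node_model *n)
{
        unsigned long x = 0;

        for (struct slab_model *s = n->partial; s; s = s->next)
                x += s->objects;
        return x;
}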


> If so, it seems this is also a problem for get_slabinfo() since it also
> uses the count_free() callback for count_partial().

Yep, /proc/slabinfo is also affected.

Actually it's affected even more than sysfs: it walks the caches of all
cgroups, while sysfs shows only the root cache.


> The concern would be that the kernel has now drastically changed a
> statistic that it exports to userspace. There was some discussion about
> this back in 2016[*], and one idea was that slabinfo would truncate its
> scanning and append a '+' to the end of the value to indicate it exceeds
> the max, i.e. 10000+. I think that '+' itself would actually have caused
> problems for userspace processes.

Yep, "+" will break everything for sure.
I thought about returning "-1" or INT_MAX without counting,
but approximation gives almost correct result without breaking anything.

Each partial slab has at least one used and free object thus approximated
result will be somewhere between nr_partial_slabs and nr_partial_objects.
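
As a worked example of that bound (the counted/free figures below are made
up for illustration; only nr_partial comes from the changelog above):

/* Editor's sketch with hypothetical numbers: the scan stopped after
 * `counted` slabs on which `x` free objects were seen. Assumes a 64-bit
 * unsigned long; the patch itself uses mult_frac() to avoid intermediate
 * overflow. */
#include <stdio.h>

int main(void)
{
        unsigned long counted    = 10000;       /* max_partial_to_count */
        unsigned long x          = 50000;       /* free objects seen (made up) */
        unsigned long nr_partial = 14478538;    /* from the changelog */

        /* Same math as mult_frac(x, n->nr_partial, counted). */
        unsigned long estimate = x * nr_partial / counted;

        /* Every partial slab has at least one free object, so x >= counted
         * and thus estimate >= nr_partial: the approximation can never fall
         * below the number of partial slabs. */
        printf("estimated free objects: %lu\n", estimate);  /* 72392690 */
        return 0;
}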


> I think the patch is too far reaching, however, since it impacts all
> count_partial() counting and not only for the case cited in the changelog.
> Are there examples for things other than the count_free() callback?

Nope, these are just statistics for used/free objects.
The total counts of objects and slabs are tracked precisely.


> [*] https://lore.kernel.org/patchwork/patch/708427/

>> Signed-off-by: Konstantin Khlebnikov <khlebnikov@xxxxxxxxxxxxxx>
>> ---
>>  mm/slub.c | 15 ++++++++++++++-
>>  1 file changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 9bf44955c4f1..86a366f7acb6 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2407,16 +2407,29 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
>>  #endif /* CONFIG_SLUB_DEBUG */
>>
>>  #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
>> +
>> +static unsigned long max_partial_to_count __read_mostly = 10000;
>> +module_param(max_partial_to_count, ulong, 0644);
>> +
>>  static unsigned long count_partial(struct kmem_cache_node *n,
>>                                         int (*get_count)(struct page *))
>>  {
>> +       unsigned long counted = 0;
>>         unsigned long flags;
>>         unsigned long x = 0;
>>         struct page *page;
>>
>>         spin_lock_irqsave(&n->list_lock, flags);
>> -       list_for_each_entry(page, &n->partial, slab_list)
>> +       list_for_each_entry(page, &n->partial, slab_list) {
>>                 x += get_count(page);
>> +
>> +               if (++counted > max_partial_to_count) {
>> +                       pr_warn_once("SLUB: too many partial slabs to count all objects, increase max_partial_to_count.\n");
>> +                       /* Approximate total count of objects */
>> +                       x = mult_frac(x, n->nr_partial, counted);
>> +                       break;
>> +               }
>> +       }
>>         spin_unlock_irqrestore(&n->list_lock, flags);
>>         return x;
>>  }
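
For completeness: since module_param() above exposes the knob with mode 0644,
raising the limit at runtime should look roughly like this (path as stated in
the changelog; the new value is only an example):

# cat /sys/module/slub/parameters/max_partial_to_count
10000
# echo 100000 > /sys/module/slub/parameters/max_partial_to_count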