> +static int cpuacct_stats_percpu_show(struct cgroup *cgrp, struct cftype *cft,
> +				       struct cgroup_map_cb *cb)
> +{
> +	struct cpuacct *ca = cgroup_ca(cgrp);
> +	int cpu;
> +
> +	for_each_online_cpu(cpu) {
> +		do_fill_cb(cb, ca, "user", cpu, CPUTIME_USER);
> +		do_fill_cb(cb, ca, "nice", cpu, CPUTIME_NICE);
> +		do_fill_cb(cb, ca, "system", cpu, CPUTIME_SYSTEM);
> +		do_fill_cb(cb, ca, "irq", cpu, CPUTIME_IRQ);
> +		do_fill_cb(cb, ca, "softirq", cpu, CPUTIME_SOFTIRQ);
> +		do_fill_cb(cb, ca, "guest", cpu, CPUTIME_GUEST);
> +		do_fill_cb(cb, ca, "guest_nice", cpu, CPUTIME_GUEST_NICE);
> +	}
> +

I don't know if there's much that can be trivially done about it, but I
suspect these are a bit of a memory allocation time-bomb on a many-CPU
machine. The cgroup:seq_file mating (via read_map) treats everything
as /one/ record. This means that seq_printf() is going to end up
eventually allocating a buffer that can fit _everything_ (as well as
allocating every power-of-2 on the way there). Adding insult to injury,
the backing buffer is kmalloc() not vmalloc().
The 200+ bytes per-cpu above really is not unreasonable (46 bytes just
for the text, plus a byte per base-10 digit we end up reporting), but
that then leaves us looking at order-12/13 allocations just to print
this thing when there are O(many) cpus.