Re: Plumbers: Tweaking scheduler policy micro-conf RFP

From: Ingo Molnar
Date: Wed May 23 2012 - 11:50:17 EST



* Joe Perches <joe@xxxxxxxxxxx> wrote:

> On Wed, 2012-05-23 at 17:03 +0200, Ingo Molnar wrote:
> > * Chen <hi3766691@xxxxxxxxx> wrote:
> >
> > > Still, you are just trying to say that your code is not bloated?
> > > Over 500K for a cpu scheduler. Laughing
> >
> > Where did you get that 500K from? You are off from the truth
> > by almost an order of magnitude.
> >
> > Here's the scheduler size on Linus's latest tree, with a 64-bit
> > defconfig:
> >
> > $ size kernel/sched/built-in.o
> >    text    data     bss     dec    hex filename
> >   83611   10404    2524   96539  1791b kernel/sched/built-in.o
> >
> > That's SMP+NUMA, i.e. everything included.
> >
> > The !NUMA !SMP UP scheduler, if you are on a size-starved
> > ultra-embedded device, is even smaller, just 22K:
> >
> > $ size kernel/sched/built-in.o
> >    text    data     bss     dec    hex filename
> >   19882    2218     148   22248   56e8 kernel/sched/built-in.o
>
> Here's an allyesconfig x86-32

allyesconfig includes a whole lot of debugging code, so it's a
pretty meaningless size test.

> $ size kernel/sched/built-in.o
>    text    data     bss     dec    hex filename
>  213892   10856   65832  290580  46f14 kernel/sched/built-in.o
>
> But that's not the only sched-related code.
>
> In a 1000 cpu config, there is also an extra 500+ bytes per cpu
> in printk (I don't think that's particularly important, btw)

A 1000 cpu piece of hardware will have a terabyte of RAM or
more. 0.5K per CPU is only ~0.5 MB in total, which is noise at
that scale.

> kernel/printk.c adds:
>
> static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
>
> Maybe #ifdefing this when !CONFIG_PRINTK would reduce size
> a little in a few cases. I've attached a trivial suggested patch.

That might make sense for the ultra-embedded.
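For reference, the shape of such a guard would be roughly the
following (an untested sketch of the idea, not Joe's actual
attached patch; the printk_sched() body is simplified):

	/*
	 * Sketch only: define the per-CPU printk_sched() buffer
	 * only when printk is built in at all.
	 */
	#ifdef CONFIG_PRINTK
	#define PRINTK_BUF_SIZE	512

	static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);

	int printk_sched(const char *fmt, ...)
	{
		unsigned long flags;
		va_list args;
		char *buf;
		int r;

		local_irq_save(flags);
		buf = __get_cpu_var(printk_sched_buf);

		va_start(args, fmt);
		r = vsnprintf(buf, PRINTK_BUF_SIZE, fmt, args);
		va_end(args);

		local_irq_restore(flags);

		return r;
	}
	#else
	/* !CONFIG_PRINTK: no buffer, so the per-CPU bytes go away. */
	static inline int printk_sched(const char *fmt, ...)
	{
		return 0;
	}
	#endif

That saves PRINTK_BUF_SIZE bytes per possible CPU on kernels that
cannot print anything in the first place.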

Still, 500K is an obviously nonsensical number.

Thanks,

Ingo