Re: [RFC] Transparent on-demand memory setup initialization embedded in the (GFP) buddy allocator

From: Ingo Molnar
Date: Sat Jun 29 2013 - 03:24:50 EST



* Nathan Zimmer <nzimmer@xxxxxxx> wrote:

> On 06/26/2013 10:35 PM, Daniel J Blueman wrote:
> >On Wednesday, June 26, 2013 9:30:02 PM UTC+8, Andrew Morton wrote:
> >>
> >> On Wed, 26 Jun 2013 11:22:48 +0200 Ingo Molnar
> >> <mi...@xxxxxxxxxx> wrote:
> >>
> >> > except that on 32 TB systems we don't spend ~2 hours initializing
> >> > 8,589,934,592 page heads.
> >>
> >> That's about a million a second which is crazy slow - even my
> >> prehistoric desktop is 100x faster than that.
> >>
> >> Where's all this time actually being spent?
> >
> > The complexity of a directory-lookup architecture to make the
> > (intrinsically unscalable) cache-coherency protocol scalable gives you
> > a ~1us roundtrip to remote NUMA nodes.
> >
> > Probably a lot of time is spent in memsets and in RMW cycles setting
> > page bits; those RMWs are intrinsically synchronous, so the
> > initialising core can't keep its 12 or so outstanding memory
> > transactions in flight.
> >
> > Since EFI memory ranges have a flag to state whether they are zeroed
> > (which may be a fair assumption for memory on non-bootstrap processor
> > NUMA nodes), we can probably collapse the RMWs to just writes.
> >
> > A normal write will require a coherency cycle, then a fetch and a
> > writeback when it's evicted from the cache. For this purpose,
> > non-temporal writes would eliminate the cache line fetch and give a
> > massive increase in bandwidth. We wouldn't even need a store-fence as
> > the initialising core is the only one online.
>
> Could you elaborate a bit more, or suggest a specific area to look at?
>
> After some experiments with setting fields in the struct page directly, I
> haven't been able to produce any improvement. Of course there is a lot in
> this area that I don't have much experience with.

Any such improvement will at most be in the 10-20% range.
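
For illustration, a minimal userspace sketch of the non-temporal
initialization Daniel describes could look something like the following
(the record layout, field values and function names are made up for the
example; real code would operate on struct page itself):

/*
 * Sketch of non-temporal struct-page-style initialization (x86-64, SSE2).
 * The layout below is illustrative only, not the kernel's struct page.
 */
#include <emmintrin.h>          /* _mm_stream_si64(), _mm_sfence() */
#include <stdint.h>
#include <stdlib.h>

struct fake_page {
    uint64_t flags;
    uint64_t refcount;
    uint64_t lru_next;
    uint64_t lru_prev;
};

static void init_pages_nt(struct fake_page *pages, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        long long *p = (long long *)&pages[i];

        /*
         * movnti-style streaming stores bypass the cache, so the
         * never-read destination lines are not fetched first; plain
         * writes would trigger a read-for-ownership per cache line.
         */
        _mm_stream_si64(&p[0], 1);      /* flags */
        _mm_stream_si64(&p[1], 1);      /* refcount */
        _mm_stream_si64(&p[2], 0);      /* lru_next */
        _mm_stream_si64(&p[3], 0);      /* lru_prev */
    }

    /*
     * Per the discussion above, the fence is arguably unnecessary while
     * only the boot CPU is online; it is kept to make the sketch safe in
     * the general case.
     */
    _mm_sfence();
}

int main(void)
{
    size_t n = 1UL << 20;
    struct fake_page *pages = aligned_alloc(64, n * sizeof(*pages));

    if (pages)
        init_pages_nt(pages, n);
    free(pages);
    return 0;
}

Comparing that loop against a plain-store version with 'perf stat' would
show how much of the cost really is read-for-ownership traffic.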

I'd suggest first concentrating on the 1000-fold boot time initialization
speedup that delayed initialization in the buddy allocator can offer, and
then speeding up whatever remains after that stage, in a much more
development-friendly environment. (You'll be able to run 'perf record
./calloc-1TB' after bootup and get meaningful results, etc.)
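
As a toy illustration of that split (all names here are made up; this is
not the actual patch), the boot path would only record which ranges exist,
and the allocator would pay the per-page initialization cost for a range
the first time it hands out a page from it:

/*
 * Toy model of deferred struct-page initialization.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct toy_page {
    unsigned long flags;
    int refcount;
};

struct toy_range {
    struct toy_page *pages;   /* backing memory exists but is raw */
    size_t nr_pages;
    bool initialized;         /* set once, on first allocation */
};

/* Boot path: constant work per range, no per-page loop. */
static void toy_boot_register(struct toy_range *r,
                              struct toy_page *backing, size_t nr)
{
    r->pages = backing;
    r->nr_pages = nr;
    r->initialized = false;
}

/* Allocation path: initialize the whole range lazily on first use. */
static struct toy_page *toy_alloc_page(struct toy_range *r, size_t idx)
{
    if (idx >= r->nr_pages)
        return NULL;

    if (!r->initialized) {
        /* The work moved out of early boot ends up here. */
        memset(r->pages, 0, r->nr_pages * sizeof(*r->pages));
        r->initialized = true;
    }

    r->pages[idx].refcount = 1;
    return &r->pages[idx];
}

With that split, the remaining per-page cost shows up in the allocation
path after boot, which is exactly where 'perf record' can see it.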

Thanks,

Ingo