Re: 2.6.13-mm3

From: Sonny Rao
Date: Tue Sep 13 2005 - 13:36:54 EST


On Mon, Sep 12, 2005 at 12:56:41PM -0700, Andrew Morton wrote:
> Sonny Rao <sonny@xxxxxxxxxxx> wrote:
> >
> > On Mon, Sep 12, 2005 at 02:43:50AM -0700, Andrew Morton wrote:
> > <snip>
> > > - There are several performance tuning patches here which need careful
> > > attention and testing. (Does anyone do performance testing any more?)
> > <snip>
> > >
> > > - The size of the page allocator per-cpu magazines has been increased
> > >
> > > - The page allocator has been changed to use higher-order allocations
> > > when batch-loading the per-cpu magazines. This is intended to give
> > > improved cache colouring effects however it might have the downside of
> > > causing extra page allocator fragmentation.
> > >
> > > - The page allocator's per-cpu magazines have had their lower threshold
> > > set to zero. And we can't remember why it ever had a lower threshold.
> > >
> >
> > What would you like? The usual suspects: SDET, dbench, kernbench ?
> >
>
> That would be a good start, thanks. The higher-order-allocations thing is
> mainly targeted at big-iron numerical computing I believe.
>
> I've already had one report of fragmentation-derived page allocator
> failures (http://bugzilla.kernel.org/show_bug.cgi?id=5229).
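(Aside, mostly to check I understand what I'm benchmarking: here is a rough
user-space sketch of the batch-refill idea. The names -- struct magazine,
refill_batch, BATCH_ORDER -- are invented, and this is not the actual
mm/page_alloc.c change; it's only meant to show the trade-off of pulling one
contiguous higher-order block per refill (nicely coloured pages) versus asking
the buddy allocator for whole order-N blocks (more fragmentation pressure).)

/* Conceptual sketch only -- not the real mm/page_alloc.c code.
 * All names here are made up for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE   4096
#define BATCH_ORDER 2                    /* refill with order-2 (4-page) chunks */
#define BATCH_PAGES (1 << BATCH_ORDER)

struct magazine {
	void *pages[64];                 /* per-cpu stash of free pages */
	int   count;
	int   high;                      /* upper watermark */
	int   low;                       /* lower watermark (now 0 in -mm) */
};

/* Old style: refill one page at a time.  Each page can come from anywhere
 * in the buddy lists, so consecutive allocations rarely share colouring. */
static void refill_single(struct magazine *m)
{
	while (m->count < m->high)
		m->pages[m->count++] = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
}

/* New style: grab one contiguous higher-order block and carve it into
 * pages.  Better colouring, but the allocator now has to find free
 * order-BATCH_ORDER blocks, which is where fragmentation can bite. */
static void refill_batch(struct magazine *m)
{
	while (m->count + BATCH_PAGES <= m->high) {
		char *block = aligned_alloc(BATCH_PAGES * PAGE_SIZE,
					    BATCH_PAGES * PAGE_SIZE);
		if (!block)
			break;           /* no big block free: give up / fall back */
		for (int i = 0; i < BATCH_PAGES; i++)
			m->pages[m->count++] = block + i * PAGE_SIZE;
	}
}

int main(void)
{
	struct magazine m = { .count = 0, .high = 16, .low = 0 };

	refill_batch(&m);                /* sketch leaks the pages on purpose */
	printf("magazine holds %d pages\n", m.count);
	return 0;
}

(As I read the lower-threshold-to-zero change, it just means `low' never
forces an early refill -- the magazine drains completely before we go back
to the buddy lists.)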

Ok, I'm getting much further on ppc64 thanks to Anton B.

So far, I've needed to patch in the hvc console fix, Anton's SCSI fix for
2.6.14-rc1, and Paulus's EEH fix, and I reverted
remove-near-all-bugs-in-mm-mempolicyc.patch and
convert-mempolicies-to-nodemask_t.patch.

I got most of the way through the boot scripts and crashed while
bringing up the loopback interface.

Here's the latest PPC64 crash on 2.6.13-mm3:

smp_call_function on cpu 5: other cpus not responding (5)
cpu 0x5: Vector: 0 at [c00000000f3b6b00]
pc: 000000000000003d
lr: 000000000000003d
sp: c00000000f3b6a90
msr: 8000000000009032
current = 0xc000000002018050
paca = 0xc00000000048a400
pid = 1679, comm = ip
enter ? for help
5:mon> t

(xmon hangs here)

Anyone have ideas on what to try?