Re: 2.6.3-mm3 (ioremap failure w/ _X86_4G and _NUMA)

From: Dave Hansen
Date: Fri Feb 27 2004 - 22:19:35 EST


On Fri, 2004-02-27 at 18:48, john stultz wrote:
> On Fri, 2004-02-27 at 16:06, Andrew Morton wrote:
> > john stultz <johnstul@xxxxxxxxxx> wrote:
> > >
> > > When running -mm3 (plus the one-line fix to the expanded-pci-config
> > > patch) on an x440 w/ 4G enabled, the tg3 driver cannot find my
> > > network card.
> > >
> > > When booting I get:
> > > tg3.c:v2.7 (February 17, 2004)
> > > tg3: Cannot map device registers, aborting.
> > > tg3: probe of 0000:01:04.0 failed with error -12
> > >
> > > Otherwise the system seems to come up fine.
> > >
> > > Disabling CONFIG_ACPI (or CONFIG_X86_4G) makes the problem go away.
> >
> > Beats me. Maybe acpi is returning some monstrous resource length and we're
> > running out of kernel virtual space only with the 4g split?
> >
> > 'twould be appreciated if you could stick a few printk's in there and work
> > out what's happening please. Check out the pci space base address and
> > length with and without ACPI?
>
> The base address and length are the same either way; instead, it's
> __ioremap() that's failing at "if (!PageReserved(page))" [ioremap.c:142].
>
> I've also narrowed the issue down to occurring only w/ (CONFIG_X86_4G=y &&
> CONFIG_NUMA=y), so it looks like it's a problem w/ 4G and discontigmem
> together.
>
> I've also finally moved to -mm4 and reproduced the problem there.

Can you dump out all of the variables in the

	if (phys_addr < virt_to_phys(high_memory)) {
	...

block, plus the arguments that ioremap() is getting?
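
For instance, a debug hack along these lines (hypothetical; based on my
reading of the 2.6-era arch/i386/mm/ioremap.c check, so treat it as a
sketch rather than a drop-in patch):

	/* Dump the values __ioremap() is working with just before the
	 * PageReserved() walk. */
	printk(KERN_DEBUG "__ioremap: phys_addr=%08lx size=%08lx "
	       "high_memory=%p (phys %08lx)\n",
	       phys_addr, size, high_memory,
	       (unsigned long) virt_to_phys(high_memory));

	if (phys_addr < virt_to_phys(high_memory)) {
		char *t_addr, *t_end;
		struct page *page;

		t_addr = __va(phys_addr);
		t_end = t_addr + (size - 1);

		for (page = virt_to_page(t_addr);
		     page <= virt_to_page(t_end); page++)
			if (!PageReserved(page)) {
				printk(KERN_DEBUG "__ioremap: struct page %p "
				       "not reserved\n", page);
				return NULL;
			}
	}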

Evidently, it's OK if highmem pages aren't PageReserved(), and I'm
starting to wonder if you've done something rare, like spanned the
highmem boundary with your ioremap().

We do some weird stuff with that highmem boundary when NUMA is on: we
remap a bunch of mem_map[] in that area to make it local to the NUMA
nodes, which might be causing something unexpected. Take a look and see
whether any of the variables that __ioremap() is using correspond to
those that remap_numa_kva() is playing with (from your boot log):
node 0 will remap to vaddr f8000000 - f8000000
node 1 will remap to vaddr f5600000 - f8000000
High memory starts at vaddr f8000000
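
For reference, remap_numa_kva() does roughly this (paraphrased from the
2.6 i386 discontig code from memory, so the details may differ; treat it
as a sketch):

	/* Map each node's remap area (which holds its node-local
	 * mem_map[], among other things) with large pages just below
	 * the highmem boundary. */
	void __init remap_numa_kva(void)
	{
		void *vaddr;
		unsigned long pfn;
		int node;

		for (node = 1; node < numnodes; ++node) {
			for (pfn = 0; pfn < node_remap_size[node];
			     pfn += PTRS_PER_PTE) {
				vaddr = node_remap_start_vaddr[node] +
					(pfn << PAGE_SHIFT);
				set_pmd_pfn((unsigned long) vaddr,
					    node_remap_start_pfn[node] + pfn,
					    PAGE_KERNEL_LARGE);
			}
		}
	}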

It's just a guess.

-- dave
