Re: Crash in MM code in v4.4.y, v4.9.y with TRANSPARENT_HUGEPAGE enabled

From: Linus Torvalds
Date: Fri Aug 17 2018 - 20:25:23 EST


On Fri, Aug 17, 2018 at 3:27 PM Guenter Roeck <linux@xxxxxxxxxxxx> wrote:
>
> [ 6.649970] random: crng init done
> [ 6.689002] BUG: unable to handle kernel paging request at ffffeafffa1a0020

Hmm. Lots of bits set.

> [ 6.689082] RIP: 0010:[<ffffffff8116ba10>] [<ffffffff8116ba10>] page_remove_rmap+0x10/0x230
> [ 6.689082] RSP: 0018:ffffc900007abc18 EFLAGS: 00000296
> [ 6.689082] RAX: ffffea0005e58000 RBX: ffffeafffa1a0000 RCX: 0000000020200000
> [ 6.689082] RDX: 00003fffffe00000 RSI: 0000000000000001 RDI: ffffeafffa1a0000

Is that RDX value the same value as PHYSICAL_PMD_PAGE_MASK?

If I did my math right, it would be, assuming your CPU has 46 physical
address bits. Might that be the case?
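
Spelling that math out as a quick standalone check (userspace C, not
kernel code; the constants are what __PHYSICAL_MASK and PMD_PAGE_MASK
work out to for 46 physical bits):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* __PHYSICAL_MASK for 46 physical address bits */
	uint64_t physical_mask = (1ULL << 46) - 1;
	/* PMD_PAGE_MASK: aligned to a 2MB (1 << 21) hugepage */
	uint64_t pmd_page_mask = ~((1ULL << 21) - 1);

	/* prints 00003fffffe00000, the RDX value in the oops above */
	printf("%016llx\n",
	       (unsigned long long)(physical_mask & pmd_page_mask));
	return 0;
}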

I mention that because we had the bug with spurious inversion of the
zero pte/pmd, fixed by

f19f5c49bbc3 ("x86/speculation/l1tf: Exempt zeroed PTEs from inversion")

and that bug would make a zeroed pmd entry read back inverted, i.e. as
PHYSICAL_PMD_PAGE_MASK, and then you get odd garbage page pointers
etc.
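
To make that concrete, here's a standalone sketch of the failure mode
(userspace C that mimics the __pte_needs_invert()/protnone_mask()
logic from arch/x86/include/asm/pgtable-invert.h rather than quoting
it verbatim, with the 46-bit mask hardcoded):

#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT		0x001ULL
#define PHYSICAL_PMD_PAGE_MASK	0x00003fffffe00000ULL	/* 46 physical bits */

/* The check before f19f5c49bbc3: any !present entry gets inverted,
 * including an entry that is simply all zeroes. */
static int needs_invert_old(uint64_t val)
{
	return !(val & _PAGE_PRESENT);
}

/* After f19f5c49bbc3: a zeroed entry is exempt from inversion. */
static int needs_invert_fixed(uint64_t val)
{
	return val && !(val & _PAGE_PRESENT);
}

static uint64_t pfn_bits(uint64_t pmd, int (*needs_invert)(uint64_t))
{
	/* pfn readout XORs the entry with the inversion mask and then
	 * applies the physical pmd mask, the way pmd_pfn() does */
	uint64_t mask = needs_invert(pmd) ? ~0ULL : 0;

	return (pmd ^ mask) & PHYSICAL_PMD_PAGE_MASK;
}

int main(void)
{
	uint64_t pmd = 0;	/* a zeroed pmd entry */

	/* old: 00003fffffe00000 (garbage), fixed: 0000000000000000 */
	printf("old:   %016llx\n",
	       (unsigned long long)pfn_bits(pmd, needs_invert_old));
	printf("fixed: %016llx\n",
	       (unsigned long long)pfn_bits(pmd, needs_invert_fixed));
	return 0;
}

With the old check a zero pmd reads back with exactly the mask bits
set, which is the kind of garbage page pointer that would explain the
faulting address above.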

Maybe. I could have gotten the math wrong too, but it sounds like the
register contents _potentially_ match up with something like this, and
then we'd zap a bogus hugepage because of the confusion.

Although then I'd have expected the bisection to hit
"x86/speculation/l1tf: Invert all not present mappings" instead of the
one you hit, so I don't know.

Plus I'd have expected the problem to show up in mainline too, but
apparently it's just the 4.4 and 4.9 backports.

Your test case does use mprotect() with PROT_NONE, which together with
a mask that *might* be PHYSICAL_PMD_PAGE_MASK makes me think it could
be related.
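
Something like this hypothetical minimal sequence (not your actual
test case, just a sketch of the kind of thing I mean) would exercise
exactly that path: fault in a hopefully-THP region, mprotect() it to
PROT_NONE, then tear it down, which ends up in page_remove_rmap() like
in the oops above.

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define SZ (2UL << 20)	/* 2MB: one pmd-sized hugepage */

int main(void)
{
	/* over-allocate so we can carve out a 2MB-aligned chunk */
	char *raw = mmap(NULL, 2 * SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *p;

	if (raw == MAP_FAILED)
		return 1;
	p = (char *)(((uintptr_t)raw + SZ - 1) & ~(SZ - 1));

	madvise(p, SZ, MADV_HUGEPAGE);	/* ask for a hugepage mapping */
	memset(p, 1, SZ);		/* fault it in */
	mprotect(p, SZ, PROT_NONE);	/* pmd becomes protnone, !present */
	munmap(raw, 2 * SZ);		/* teardown walks the pmd */
	return 0;
}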

Linus