Re: [PATCH] mm: fix possible cause of a page_mapped BUG

From: Hugh Dickins
Date: Mon Apr 04 2011 - 14:30:28 EST


On Mon, Apr 4, 2011 at 5:46 AM, Robert Święcki <robert@xxxxxxxxxxx> wrote:
> On Sat, Apr 2, 2011 at 3:46 AM, Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>> On Fri, Apr 1, 2011 at 8:44 AM, Linus Torvalds
>> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>>> On Fri, Apr 1, 2011 at 7:34 AM, Robert Święcki <robert@xxxxxxxxxxx> wrote:
>>>>
>>>> Hey, I'll apply your patch and check it out. In the meantime I
>>>> triggered another Oops (NULL-ptr deref via sys_mprotect).
>>>>
>>>> The oops is here:
>>>>
>>>> http://alt.swiecki.net/linux_kernel/sys_mprotect-2.6.38.txt
>>>
>>> That's not a NULL pointer dereference. That's a BUG_ON().
>>>
>>> And for some reason you've turned off the BUG_ON() messages, saving
>>> some tiny amount of memory.
>>>
>>> Anyway, it looks like the first BUG_ON() in vma_prio_tree_add(), so it
>>> would be this one:
>>>
>>>        BUG_ON(RADIX_INDEX(vma) != RADIX_INDEX(old));
>>>
>>> but it is possible that gcc has shuffled things around (so it _might_
>>> be the HEAP_INDEX() one). If you had CONFIG_DEBUG_BUGVERBOSE=y, you'd
>>> get a filename and line number. One reason I hate -O2 in cases like
>>> this is that the basic block movement makes it way harder to actually
>>> debug things. I would suggest using -Os too (CONFIG_OPTIMIZE_FOR_SIZE
>>> or whatever it's called).
>>>
>>> Anyway, I do find it worrying. The vma code shouldn't be this fragile. Hugh?
>>>
>>> I do wonder what triggers this. Is it a huge-page vma? We seem to be
>>> lacking the check to see that mprotect() is on a hugepage boundary -
>>> and that seems bogus. Or am I missing some check? The new transparent
>>> hugepage support splits the page, but what if it's a _static_ hugepage
>>> thing?
>>>
>>> But why would that affect the radix_index thing? I have no idea. I'd
>>> like to blame the anon_vma rewrites last year, but I can't see why
>>> that should matter either. Again, hugepages had some special rules, I
>>> think (and that would explain why nobody normal sees this).
>>>
>>> Guys, please give this one a look.
>>
>> I do intend to look, but I think it makes more sense to wait until
>> Robert has reproduced it (or something like it) with my debugging
>> patch in.
>
> Hi Hugh,
>
> I did two things: included your patch, and compiled with
> CONFIG_CC_OPTIMIZE_FOR_SIZE=y. The kernel hasn't BUG()ed or Oopsed for
> ~2 days under fuzzing (with the getdents and readdir syscalls disabled
> in the fuzzer). I don't think -Os has much influence on how mm works
> internally, so I must attribute the change to your patch (was it a
> patch that fixes something, or does it merely dump vma structures when
> a problem occurs?).

I'm sorry, I should have explained the patch a little more. Along
with dumping out the vma structs, it does change the BUG or BUGs there
to WARN_ONs, allowing the system to continue if it's not too badly
corrupted, though leaking some structure memory (if the structs have
been reused, it's probably not safe to assume we still have ownership
of them). So if the problem has occurred again, it should be leaving
WARNING messages and vma struct dumps in your /var/log/messages -
please look for them and send them in if found.

Perhaps we should simply include the patch in the mainline kernel: it
doesn't do much good just lingering in mmotm, but it does seem to be
helping your system limp along longer.

>
>
> ==============
> BTW, another problem arose in the meantime; I'm not sure whether it's
> related to what we're discussing here, but 'btc 0' in kdb shows that
> processor 0 hangs in sys_mlock - I checked at two different moments,
> to rule out coincidence. After those 2 days of fuzzing, 'ps wuax'
> stopped working, i.e. it prints some output, then stops, and cannot be
> killed with SIGKILL etc. I'll leave it running for the time being; I
> can dump more data on PID 17750 if anybody wants:
>
> strace:
>
> # strace -f ps wwuax
> ....
> open("/proc/17750/status", O_RDONLY)    = 6
> read(6, "Name:\tiknowthis\nState:\tD (disk s"..., 1023) = 777
> close(6)                                = 0
> open("/proc/17750/cmdline", O_RDONLY)   = 6
> read(6,
>
> Process 17750 also cannot be killed. Attaching more data:
>
> # cat /proc/17750/status
> Name:   iknowthis
> State:  D (disk sleep)
> Tgid:   17750
> Pid:    17750
> PPid:   1
> TracerPid:      0
> Uid:    1001    1001    1001    1001
> Gid:    1001    1001    1001    1001
> FDSize: 64
> Groups: 1001
> VmPeak:     7752 kB
> VmSize:     5760 kB
> VmLck:        32 kB
> VmHWM:      4892 kB
> VmRSS:      3068 kB
> VmData:     2472 kB
> VmStk:       408 kB
> VmExe:       160 kB
> VmLib:      2684 kB
> VmPTE:        44 kB
> VmSwap:        0 kB
> Threads:        1
> SigQ:   218/16382
> SigPnd: 0000000000000b00
> ShdPnd: 0000400000000503
> SigBlk: 0000000000000000
> SigIgn: 0000000001001000
> SigCgt: 0000000000000000
> CapInh: 0000000000000000
> CapPrm: 0000000000000000
> CapEff: 0000000000000000
> CapBnd: ffffffffffffffff
> Cpus_allowed:   01
> Cpus_allowed_list:      0
> Mems_allowed:   00000000,00000001
> Mems_allowed_list:      0
> voluntary_ctxt_switches:        43330
> nonvoluntary_ctxt_switches:     4436
>
> # cat /proc/17750/wchan
> call_rwsem_down_write_failed
>
> # cat /proc/17750/maps (hangs)
>
> (from kdb)
>
> [0]kdb> btp 17750
> Stack traceback for pid 17750
> 0xffff88011e772dc0    17750        1  0    0  D  0xffff88011e773240  iknowthis
> <c> ffff88011cbcfb88<c> 0000000000000086<c> ffff88011cbcfb08<c>
> ffff88011cbcffd8<c>
> <c> 0000000000013f00<c> ffff88011e772dc0<c> ffff88011e773180<c>
> ffff88011e773178<c>
> <c> ffff88011cbce000<c> ffff88011cbcffd8<c> 0000000000013f00<c>
> 0000000000013f00<c>
> Call Trace:
>  [<ffffffff81e286ad>] rwsem_down_failed_common+0xdb/0x10d
>  [<ffffffff81e286f2>] rwsem_down_write_failed+0x13/0x15
>  [<ffffffff81416953>] call_rwsem_down_write_failed+0x13/0x20
>  [<ffffffff81e27da0>] ? down_write+0x25/0x27
>  [<ffffffff8115f041>] do_coredump+0x14f/0x9a5
>  [<ffffffff8114d4e2>] ? T.1006+0x17/0x32
>  [<ffffffff810a45f0>] ? __dequeue_signal+0xfa/0x12f
>  [<ffffffff8108a79c>] ? get_parent_ip+0x11/0x42
>  [<ffffffff810a6406>] get_signal_to_deliver+0x3be/0x3e6
>  [<ffffffff8103e0c1>] do_signal+0x72/0x67d
>  [<ffffffff81096807>] ? child_wait_callback+0x0/0x58
>  [<ffffffff81e28c28>] ? _raw_spin_unlock_irq+0x36/0x41
>  [<ffffffff8108bb06>] ? finish_task_switch+0x4b/0xb9
>  [<ffffffff8108c3ed>] ? schedule_tail+0x38/0x68
>  [<ffffffff8103eb43>] ? ret_from_fork+0x13/0x80
>  [<ffffffff8103e6f8>] do_notify_resume+0x2c/0x6e
>
> [0]kdb> btc 0
> Stack traceback for pid 10350
> 0xffff88011b6badc0    10350        1  1    0  R  0xffff88011b6bb240 *iknowthis2
> <c> ffff8800cfc03db8<c> 0000000000000000<c>
> Call Trace:
>  <#DB>  <<EOE>>  <IRQ>  [<ffffffff81518b03>] ? __handle_sysrq+0xbf/0x15c
>  [<ffffffff81518d7d>] ? handle_sysrq+0x2c/0x2e
>  [<ffffffff8152bd90>] ? serial8250_handle_port+0x157/0x2b2
>  [<ffffffff810a1be8>] ? run_timer_softirq+0x2b3/0x2c2
>  [<ffffffff8152bf4c>] ? serial8250_interrupt+0x61/0x111
>  [<ffffffff810e6e52>] ? handle_IRQ_event+0x78/0x150
>  [<ffffffff810ea044>] ? move_native_irq+0x19/0x6d
>  [<ffffffff810e8d90>] ? handle_edge_irq+0xe3/0x12f
>  [<ffffffff8104198f>] ? handle_irq+0x88/0x91
>  [<ffffffff81e297a5>] ? do_IRQ+0x4d/0xb3
>  [<ffffffff81e29193>] ? ret_from_intr+0x0/0x15
>  <EOI>  [<ffffffff811336aa>] ? __mlock_vma_pages_range+0x49/0xad
>  [<ffffffff8113370a>] ? __mlock_vma_pages_range+0xa9/0xad
>  [<ffffffff811337c0>] ? do_mlock_pages+0xb2/0x118
>  [<ffffffff81134002>] ? sys_mlock+0xe8/0xf6
>  [<ffffffff8107d7e3>] ? ia32_sysret+0x0/0x5
>
> [0]kdb> btc 1
> Stack traceback for pid 9409
> 0xffff88011c9816e0     9409        1  1    1  R  0xffff88011c981b60  iknowthis2
> <c> ffff88011dc21ec8

Sorry, I've no time to think about this one at the moment (at LSF).
Does this look similar to what you previously reported on mlock?

Hugh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/