Re: [syzbot] [mm?] possible deadlock in move_pages

From: Lokesh Gidra
Date: Tue Mar 19 2024 - 19:48:10 EST


On Tue, Mar 19, 2024 at 10:24 AM Lokesh Gidra <lokeshgidra@xxxxxxxxxx> wrote:
>
> On Tue, Mar 19, 2024 at 6:37 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
> >
> > On 19.03.24 10:52, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit: e5eb28f6d1af Merge tag 'mm-nonmm-stable-2024-03-14-09-36' ..
> > > git tree: upstream
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=160dc26e180000
> > > kernel config: https://syzkaller.appspot.com/x/.config?x=4ffb854606e658d
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=49056626fe41e01f2ba7
> > > compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> > > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10f467b9180000
> > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=173b7ac9180000
> > >
> > > Downloadable assets:
> > > disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7bc7510fe41f/non_bootable_disk-e5eb28f6.raw.xz
> > > vmlinux: https://storage.googleapis.com/syzbot-assets/a5c7ad05d6b2/vmlinux-e5eb28f6.xz
> > > kernel image: https://storage.googleapis.com/syzbot-assets/531cb1917612/bzImage-e5eb28f6.xz
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+49056626fe41e01f2ba7@xxxxxxxxxxxxxxxxxxxxxxxxx
> > >
> > > ============================================
> > > WARNING: possible recursive locking detected
> > > 6.8.0-syzkaller-09791-ge5eb28f6d1af #0 Not tainted
> > > --------------------------------------------
> > > syz-executor258/5169 is trying to acquire lock:
> > > ffff88802a6d23d0 (&vma->vm_lock->lock){++++}-{3:3}, at: uffd_move_lock mm/userfaultfd.c:1447 [inline]
> > > ffff88802a6d23d0 (&vma->vm_lock->lock){++++}-{3:3}, at: move_pages+0xbab/0x4970 mm/userfaultfd.c:1583
> > >
> > > but task is already holding lock:
> > > ffff88802a6d2580 (&vma->vm_lock->lock){++++}-{3:3}, at: uffd_move_lock mm/userfaultfd.c:1445 [inline]
> > > ffff88802a6d2580 (&vma->vm_lock->lock){++++}-{3:3}, at: move_pages+0xb6f/0x4970 mm/userfaultfd.c:1583
> > >
> > > other info that might help us debug this:
> > > Possible unsafe locking scenario:
> > >
> > > CPU0
> > > ----
> > > lock(&vma->vm_lock->lock);
> > > lock(&vma->vm_lock->lock);
> > >
> > > *** DEADLOCK ***
> > >
> > > May be due to missing lock nesting notation
> > >
> > > 2 locks held by syz-executor258/5169:
> > > #0: ffff888015086a20 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock include/linux/mmap_lock.h:146 [inline]
> > > #0: ffff888015086a20 (&mm->mmap_lock){++++}-{3:3}, at: uffd_move_lock mm/userfaultfd.c:1438 [inline]
> > > #0: ffff888015086a20 (&mm->mmap_lock){++++}-{3:3}, at: move_pages+0x8df/0x4970 mm/userfaultfd.c:1583
> > > #1: ffff88802a6d2580 (&vma->vm_lock->lock){++++}-{3:3}, at: uffd_move_lock mm/userfaultfd.c:1445 [inline]
> > > #1: ffff88802a6d2580 (&vma->vm_lock->lock){++++}-{3:3}, at: move_pages+0xb6f/0x4970 mm/userfaultfd.c:1583
> > >
> > > stack backtrace:
> > > CPU: 2 PID: 5169 Comm: syz-executor258 Not tainted 6.8.0-syzkaller-09791-ge5eb28f6d1af #0
> > > Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> > > Call Trace:
> > > <TASK>
> > > __dump_stack lib/dump_stack.c:88 [inline]
> > > dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
> > > check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > > validate_chain kernel/locking/lockdep.c:3856 [inline]
> > > __lock_acquire+0x20e6/0x3b30 kernel/locking/lockdep.c:5137
> > > lock_acquire kernel/locking/lockdep.c:5754 [inline]
> > > lock_acquire+0x1b1/0x540 kernel/locking/lockdep.c:5719
> > > down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
> > > uffd_move_lock mm/userfaultfd.c:1447 [inline]
> > > move_pages+0xbab/0x4970 mm/userfaultfd.c:1583
> > > userfaultfd_move fs/userfaultfd.c:2008 [inline]
> > > userfaultfd_ioctl+0x5e1/0x60e0 fs/userfaultfd.c:2126
> > > vfs_ioctl fs/ioctl.c:51 [inline]
> > > __do_sys_ioctl fs/ioctl.c:904 [inline]
> > > __se_sys_ioctl fs/ioctl.c:890 [inline]
> > > __x64_sys_ioctl+0x193/0x220 fs/ioctl.c:890
> > > do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> > > do_syscall_64+0xd2/0x260 arch/x86/entry/common.c:83
> > > entry_SYSCALL_64_after_hwframe+0x6d/0x75
> > > RIP: 0033:0x7fd48da20329
> > > Code: 48 83 c4 28 c3 e8 37 17 00 00 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> > > RSP: 002b:00007ffd1244f8e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
> > > RAX: ffffffffffffffda RBX: 00007ffd1244fab8 RCX: 00007fd48da20329
> > > RDX: 00000000200000c0 RSI: 00000000c028aa05 RDI: 0000000000000003
> > > RBP: 00007fd48da93610 R08: 00007ffd1244fab8 R09: 00007ffd1244fab8
> > > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
> > > R13: 00007ffd1244faa8 R14: 0000000000000001 R15: 0000000000000001
> > > </TASK>
> > >
> > >
> > > ---
> > > This report is generated by a bot. It may contain errors.
> > > See https://goo.gl/tpsmEJ for more information about syzbot.
> > > syzbot engineers can be reached at syzkaller@xxxxxxxxxxxxxxxx.
> > >
> > > syzbot will keep track of this issue. See:
> > > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> > >
> > > If the report is already addressed, let syzbot know by replying with:
> > > #syz fix: exact-commit-title
> > >
> > > If you want syzbot to run the reproducer, reply with:
> > > #syz test: git://repo/address.git branch-or-commit-hash
> > > If you attach or paste a git patch, syzbot will apply it before testing.
> > >
> > > If you want to overwrite report's subsystems, reply with:
> > > #syz set subsystems: new-subsystem
> > > (See the list of subsystem names on the web dashboard)
> > >
> > > If the report is a duplicate of another one, reply with:
> > > #syz dup: exact-subject-of-another-report
> > >
> > > If you want to undo deduplication, reply with:
> > > #syz undup
> > >
> >
> > Possibly
> >
> > commit 867a43a34ff8a38772212045262b2c9b77807ea3
> > Author: Lokesh Gidra <lokeshgidra@xxxxxxxxxx>
> > Date: Thu Feb 15 10:27:56 2024 -0800
> >
> > userfaultfd: use per-vma locks in userfaultfd operations
> >
> > All userfaultfd operations, except write-protect, opportunistically use
> > per-vma locks to lock vmas. On failure, attempt again inside mmap_lock
> > critical section.
> >
> > Write-protect operation requires mmap_lock as it iterates over multiple
> > vmas.
> >
> > and
> >
> > commit 5e4c24a57b0c126686534b5b159a406c5dd02400
> > Author: Lokesh Gidra <lokeshgidra@xxxxxxxxxx>
> > Date: Thu Feb 15 10:27:54 2024 -0800
> >
> > userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx
> >
> > Increments and loads to mmap_changing are always in mmap_lock critical
> > section. This ensures that if userspace requests event notification for
> > non-cooperative operations (e.g. mremap), userfaultfd operations don't
> > occur concurrently.
> >
> > This can be achieved by using a separate read-write semaphore in
> > userfaultfd_ctx such that increments are done in write-mode and loads in
> > read-mode, thereby eliminating the dependency on mmap_lock for this
> > purpose.
> >
> > This is a preparatory step before we replace mmap_lock usage with per-vma
> > locks in fill/move ioctls.
> >
> > might be responsible.
> >
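For reference, the path lockdep is complaining about is the mmap_lock
fallback in uffd_move_lock(): when the opportunistic per-vma locking fails,
the function retries inside the mmap_read_lock() critical section and
read-locks both VMAs' vm_lock rwsems. Simplified (details trimmed, line
references taken from the report above), it ends up doing roughly:

	mmap_read_lock(mm);				/* :1438 */
	/* ... look up dst and src VMAs ... */
	down_read(&(*dst_vmap)->vm_lock->lock);		/* :1445, first vm_lock */
	if (*dst_vmap != *src_vmap)
		down_read(&(*src_vmap)->vm_lock->lock);	/* :1447, lockdep fires */
	mmap_read_unlock(mm);
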
I tried reproducing the issue locally with the provided reproducer and a
few additional checks:

	down_read(&(*dst_vmap)->vm_lock->lock);
	if (*dst_vmap != *src_vmap) {
		/* the two VMAs must not share a vm_lock structure */
		BUG_ON((*src_vmap)->vm_lock == (*dst_vmap)->vm_lock);
		BUG_ON(&(*src_vmap)->vm_lock->lock == &(*dst_vmap)->vm_lock->lock);
		/* and the src vm_lock must not already be held */
		BUG_ON(rwsem_is_locked(&(*src_vmap)->vm_lock->lock));
		down_read(&(*src_vmap)->vm_lock->lock);
	}

None of the BUG_ONs cause a panic, but the following down_read() still
reports the deadlock as above. Even if I change the if condition to

if (&(*dst_vmap)->vm_lock->lock != &(*src_vmap)->vm_lock->lock)

I still get the deadlock trace. Possibly a bug in lockdep?
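
That said, lockdep tracks lock classes rather than lock instances, and all
vm_lock rwsems share a single class, so a second down_read() in the same
task is flagged as possible recursion unless it carries a nesting
annotation (the "May be due to missing lock nesting notation" hint in the
report). Something along these lines (an untested sketch, not a verified
fix) should tell lockdep the two acquisitions are deliberately nested:

	down_read(&(*dst_vmap)->vm_lock->lock);
	if (*dst_vmap != *src_vmap)
		/* same lock class, different instance: annotate as nested */
		down_read_nested(&(*src_vmap)->vm_lock->lock,
				 SINGLE_DEPTH_NESTING);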

> > CCing Lokesh
>
> Thanks for looping me in. Taking a look.
> >
> > --
> > Cheers,
> >
> > David / dhildenb
> >