Re: [possible regression] 2.6.22 reiserfs/libata sporadically hangs on resume from hibernation

From: Rafael J. Wysocki
Date: Sun Sep 09 2007 - 12:55:52 EST


On Sunday, 9 September 2007 16:00, Andrey Borzenkov wrote:
> On Sunday 01 July 2007, Rafael J. Wysocki wrote:
> > On Saturday, 30 June 2007 06:59, Andrey Borzenkov wrote:
> > > Since 2.6.18 I have not had suspend to RAM; now I am starting to lose
> > > suspend to disk :)
> > >
> > > Environment: vanilla kernel (currently 2.6.22-rc6 + squashfs + a single
> > > pata_ali patch to switch off DMA on the CD-ROM), single root on reiserfs,
> > > libata with the pata_ali driver.
> > >
> > > Until 2.6.22-rc I never had problems with hibernation. With 2.6.22-rc the
> > > system has hung at least once in every rcX. Up to rc6 those lockups were
> > > absolutely silent (a black screen with no reaction to any key). In rc6 I
> > > got something different. After resume I got this on screen:
> > >
> > > swsusp: Marking nosave pages: 000000000009f000-0000000000100000
> > > swsusp: Basic memory bitmaps created
> > > swsusp: Basic memory bitmaps freed
> > >
> > > After that it just sits there doing nothing. There was a brief sound from
> > > the HDD, but I suspect it was related more to power-on. The system did
> > > respond to presses of the power button:
> > >
> > > ACPI Error (event-0305): No installed handler for fixed event [00000002 20070125]
> > >
> > > And SysRq was functioning.
> >
> > That probably means that there's a deadlock somewhere in there.
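> > (If the machine still responds to SysRq in that state, Alt+SysRq+T
> > should dump the stacks of all tasks, which is the easiest way to see
> > what everything is blocked on.)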
> >
> > > Unfortunately I do not have a serial console, so I manually copied the
> > > stacks from the last several screens of output; I tried to take a photo,
> > > but right now my kbluetooth is refusing to work at all, so I cannot
> > > transfer them :( (but I suspect the quality would be too bad anyway)
> > >
> > > laptop_mode D
> > > io_schedule+0xe/0x20
> >
> > Looks suspicious to me. Can you identify what line of code this points to?
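> >
> > For example, assuming a vmlinux built with CONFIG_DEBUG_INFO (and
> > reiserfs.ko for the module frames), something like this should map a
> > symbol+offset from the trace to a source line:
> >
> >     gdb vmlinux
> >     (gdb) list *(io_schedule+0xe)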
> >
> > > sync_buffer+0x35/0x40
> > > __wait_on_bit+0x45/0x70
> > > out_of_line_wait_on_bit+0x6c/0x80
> > > __wait_on_buffer+0x27/0x30
> > > search_by_key+0x15e/0x1250 [reiserfs]
> > > reiserfs_read_locked_inode+0x64/0x570 [reiserfs]
> > > reiserfs_iget+0x7e/0xa0 [reiserfs]
> > > reiserfs_lookup+0xc7/0x120 [reiserfs]
> > > do_lookup+0x138/0x180
> > > __link_path_walk+0x787/0xce0
> > > link_path_walk+0x44/0xc0
> > > path_walk+0x18/0x20
> > > do_path_lookup+0x88/0x210
> > > __path_lookup_intent_open+0x4d/0x90
> > > path_lookup_open+0x1f/0x30
> > > open_exec+0x28/0xb0
> > > do_execve+0x36/0x1d0
> > > sys_execve+0x2e/0x80
> > > sysenter_past_esp+0x5f/0x99
> > >
> > > 90clock D
> > > __mutex_lock_slowpath+0xa1/0x290
> > > mutex_lock+0x21/0x30
> > > do_lookup+0xa1/0x180
> > > link_path_walk+0x44/0xc0
> > > path_walk+0x18/0x20
> > > do_path_lookup+0x78/0x210
> > > __user_walk_fd+0x38/0x50
> > > vfs_stat_fd+0x21/0x50
> > > vfs_stat+0x11/0x20
> > > sys_stat64+0x14/0x30
> > > sysenter_past_esp+0x5f/0x99
> > >
> > > alsactl D
> > > io_schedule+0xe/0x20
> >
> > Same here. Hmm.
> >
> > > sync_page+0x35/0x40
> > > __wait_on_bit_lock+0x3f/0x70
> > > __lock_page+0x68/0x70
> > > filemap_nopage+0x16c/0x300
> > > __handle_mm_fault+0x1d7/0x610
> > > do_page_fault+0x1d7/0x610
> > > error_code+0x6a/0x70
> > > padzero+0x1f/0x30
> > > load_elf_binary+0x743/0x1ab0
> > > search_binary_handler+0x7b/0x1f0
> > > do_execve+0x137/0x1d0
> > > sys_execve+0x2e/0x80
> > > sysenter_past_esp+0x5f/0x99
> > >
> > > After that I could remount, sync and reboot using SysRq (well, after the
> > > reboot it still insisted on replaying an insane number of transactions, so
> > > maybe it did *not* remount / read-only after all). Before the reboot there
> > > was brief output that resembled lockdep warnings, but it scrolled by too
> > > fast to be readable.
> > >
> > > usual stuff follows
> >
> > I see you're using CFQ as the default IO scheduler. Can you please switch
> > to AS and see if that changes anything?
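> >
> > For example, assuming the disk is sda (adjust as appropriate), the
> > scheduler can be changed at runtime:
> >
> >     cat /sys/block/sda/queue/scheduler
> >     echo anticipatory > /sys/block/sda/queue/scheduler
> >
> > The first command lists the available schedulers with the current one
> > in brackets; the second switches to AS without a reboot.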
> >
>
> I just had the same lockup on resume using AS with 2.6.23-rc5.

Hm. Does your root partition sit on reiserfs?
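
For reference, in case fstab does not make it obvious, something like

    grep ' / ' /proc/mounts

should show the filesystem type of the root mount.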