Re: mvsas still has problems with 2.6.34

From: Konstantinos Skarlatos
Date: Fri Jul 16 2010 - 05:34:51 EST


I am another user of mvsas. Attached you can find two recent emails with kernel logs that I have sent to the linux-scsi list regarding my problems with that driver.

Kind regards

On 16/7/2010 12:26 PM, Thomas Fjellstrom wrote:
On July 16, 2010, Caspar Smit wrote:
Thomas,

The patches you are using are the ones from November '09, I presume? Those
patches still had a lot of SATA issues, so I think they didn't make the
kernel. The patches seemed to handle SAS disks just fine, though. SATA
disks were a whole different story.
I'm actually using some that Andy Yan sent me privately; I'm not sure
if they are the exact same ones he sent to linux-scsi. They probably are,
though.
The November patches were a set of 7 patches, of which only the first 6
needed to be applied.
Yeah, I was given a zip of the driver a little while before he posted the
patches to the list.

Srinivas Naga Venkatasatya Pasagadugula created a patch to replace
Andy Yan's patches, which seemed to handle SATA disks a lot better, but
after some tests it still had a lot of problems. He is now in the process
of creating a new patch to fix the remaining issues. He told me it would
take a long time to create, and that was a few months ago now. I and
others submitted extensive logging for him to check.

As for production I could only advise this:

Using SAS disks: Use stock 2.6.34 kernel + Andy Yan's patches
Using SATA disks: DO NOT GO INTO PRODUCTION.
I've been using the code Andy Yan sent me for 7 months now with 5 SATA
disks on an md raid5 array. I haven't noticed anything serious in that
time. Prior to tonight I had been using 2.6.32 for quite some time.

Maybe the issues only show up with serious load? My raid array doesn't
get hammered, at least not often.
The main problem was hotplugging a SATA disk. This results in a kernel
panic almost all of the time. There were more issues like the
HDIO_GET_IDENTITY failed messages during boot for SATA disks and VERY
SLOW xfs creation times.
I don't recall xfs taking /that/ long for a 4TB fs. With 2.6.34 I don't see
any HDIO_GET_IDENTITY messages in dmesg. But I'll bet it freaks out if I try
and hot remove one of the drives. I remember seeing the card lock up, and/or
the kernel oopsing, the last time I tried (2.6.30-2.6.32 time frame).
Thankfully it's not something I often do. While I can, since I have a hot
swap unit, it's just not something I've had to do yet.

This array has been pretty solid for the past 6 months. Not sure it helps,
but I've been very careful with this machine: it gets shut down
automatically and safely when the UPS battery gets low, so there haven't
been any abrupt shutdowns, except today when a forkbomb hit and I had to
SYSRQ+S+U+B the box.
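
(For reference, a minimal illustrative sketch of driving that same SysRq
sequence, sync, remount read-only, reboot, from userspace through
/proc/sysrq-trigger; this assumes CONFIG_MAGIC_SYSRQ is enabled and root
privileges, and is not something taken from this thread:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write a single SysRq command key to /proc/sysrq-trigger. */
static void sysrq(char key)
{
	int fd = open("/proc/sysrq-trigger", O_WRONLY);

	if (fd < 0) {
		perror("open /proc/sysrq-trigger");
		return;
	}
	if (write(fd, &key, 1) != 1)
		perror("write");
	close(fd);
}

int main(void)
{
	sysrq('s');	/* emergency sync */
	sleep(2);
	sysrq('u');	/* remount all filesystems read-only */
	sleep(2);
	sysrq('b');	/* reboot immediately, without syncing again */
	return 0;
}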

At any rate I can help test whatever new patches might come along.

Kind regards,
Caspar Smit

Thanks


On July 16, 2010, Thomas Fjellstrom wrote:
I've recently updated my server, and the mvsas driver included in
2.6.34.1 still causes my AOC-SASLP-MV8 card to completely lock up after
mdraid starts up on the devices. The machine is essentially in
"production" so I can't do a heck of a lot of testing on it anymore.
The mvsas driver I got from Andy Yan seems to be a little outdated; it
fails to compile due to a missing argument to sas_change_queue_depth,
which I managed to fix, and I will try testing. I hope it works.
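
(For context, not the actual patch: a minimal sketch of the kind of change
involved. Since 2.6.32 the SCSI change_queue_depth hook takes an extra
"reason" argument, so code written against the older two-argument libsas
prototype no longer compiles on 2.6.34. The mvs_change_queue_depth wrapper
name below is hypothetical.)

#include <linux/errno.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include <scsi/libsas.h>

/* 2.6.34-style wrapper: accept the new "reason" argument and pass it
 * through to libsas instead of using the old two-argument call. */
static int mvs_change_queue_depth(struct scsi_device *sdev, int new_depth,
				  int reason)
{
	if (reason != SCSI_QDEPTH_DEFAULT)
		return -EOPNOTSUPP;

	return sas_change_queue_depth(sdev, new_depth, reason);
}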

It seems to work with the change I made.
Sorry for the noise, I forgot to post the following in my last couple of
messages:
It works, but I do get a kernel warning:

Jul 16 00:38:05 boris kernel: [ 20.104295] ------------[ cut here ]------------

Jul 16 00:38:05 boris kernel: [ 20.104315] WARNING: at drivers/ata/libata-core.c:5216 ata_qc_issue+0x31b/0x330 [libata]()
Jul 16 00:38:05 boris kernel: [ 20.104323] Hardware name: GA-MA790FXT-UD5P
Jul 16 00:38:05 boris kernel: [ 20.104327] Modules linked in:
snd_hda_codec_realtek snd_hda_intel snd_hda_codec snd_hwdep
snd_pcm_oss
snd_mixer_oss nouveau ttm snd_pcm drm_kms_helper snd_seq_midi k10temp
drm

agpgart i2c_algo_bit snd_rawmidi snd_seq_midi_event i2c_piix4
i2c_core
evdev edac_core edac_mce_amd tpm_tis snd_seq pcspkr tpm button
tpm_bios wmi snd_timer snd_seq_device processor snd soundcore
snd_page_alloc ext3 jbd mbcache dm_mod raid1 md_mod sg sr_mod sd_mod
crc_t10dif cdrom ata_generic ohci_hcd ide_pci_generic ahci mvsas
libsas libata atiixp scsi_transport_sas firewire_ohci firewire_core
crc_itu_t thermal skge thermal_sys ide_core ehci_hcd r8169 mii
usbcore scsi_mod nls_base [last unloaded: scsi_wait_scan]

Jul 16 00:38:05 boris kernel: [ 20.104448] Pid: 6091, comm: ata_id Not tainted 2.6.34.1 #2

Jul 16 00:38:05 boris kernel: [ 20.104453] Call Trace:
Jul 16 00:38:05 boris kernel: [ 20.104462] [<ffffffff81049bb3>] ?
warn_slowpath_common+0x73/0xb0

Jul 16 00:38:05 boris kernel: [ 20.104472] [<ffffffffa011686b>] ?
ata_qc_issue+0x31b/0x330 [libata]

Jul 16 00:38:05 boris kernel: [ 20.104482] [<ffffffffa000ef7f>] ?
scsi_init_io+0x2f/0x190 [scsi_mod]

Jul 16 00:38:05 boris kernel: [ 20.104492] [<ffffffffa011e020>] ?
ata_scsi_pass_thru+0x0/0x2e0 [libata]

Jul 16 00:38:05 boris kernel: [ 20.104500] [<ffffffffa0007990>] ?
scsi_done+0x0/0x20 [scsi_mod]

Jul 16 00:38:05 boris kernel: [ 20.104509] [<ffffffffa011bfae>] ?
ata_scsi_translate+0x9e/0x180 [libata]

Jul 16 00:38:05 boris kernel: [ 20.104517] [<ffffffffa0007990>] ?
scsi_done+0x0/0x20 [scsi_mod]

Jul 16 00:38:05 boris kernel: [ 20.104525] [<ffffffffa015522b>] ?
sas_queuecommand+0x9b/0x330 [libsas]

Jul 16 00:38:05 boris kernel: [ 20.104533] [<ffffffffa0007c7e>] ?
scsi_dispatch_cmd+0x17e/0x2b0 [scsi_mod]

Jul 16 00:38:05 boris kernel: [ 20.104542] [<ffffffffa000e830>] ?
scsi_request_fn+0x3e0/0x570 [scsi_mod]

Jul 16 00:38:05 boris kernel: [ 20.104549] [<ffffffff81058161>] ?
del_timer+0x71/0xd0

Jul 16 00:38:05 boris kernel: [ 20.104556] [<ffffffff811baed3>] ?
__blk_run_queue+0x63/0x130

Jul 16 00:38:05 boris kernel: [ 20.104563] [<ffffffff811b43a2>] ?
elv_insert+0x132/0x1f0

Jul 16 00:38:05 boris kernel: [ 20.104570] [<ffffffff811bf1c9>] ?
blk_execute_rq_nowait+0x59/0xb0

Jul 16 00:38:05 boris kernel: [ 20.104576] [<ffffffff811bf292>] ?
blk_execute_rq+0x72/0xe0

Jul 16 00:38:05 boris kernel: [ 20.104582] [<ffffffff811bf05b>] ?
blk_rq_map_user+0x1ab/0x290

Jul 16 00:38:05 boris kernel: [ 20.104588] [<ffffffff811c32f1>] ?
sg_io+0x241/0x3f0

Jul 16 00:38:05 boris kernel: [ 20.104594] [<ffffffff811c38fc>] ?
scsi_cmd_ioctl+0x45c/0x4b0

Jul 16 00:38:05 boris kernel: [ 20.104601] [<ffffffff8110e02f>] ?
__dentry_open+0x22f/0x340

Jul 16 00:38:05 boris kernel: [ 20.104607] [<ffffffff811195b3>] ?
inode_permission+0x93/0xd0

Jul 16 00:38:05 boris kernel: [ 20.104614] [<ffffffffa013cdc4>] ?
sd_ioctl+0xa4/0x120 [sd_mod]

Jul 16 00:38:05 boris kernel: [ 20.105009] [<ffffffff811c0798>] ?
__blkdev_driver_ioctl+0x98/0xe0

Jul 16 00:38:05 boris kernel: [ 20.105410] [<ffffffff811c0c75>] ?
blkdev_ioctl+0x1f5/0x7b0

Jul 16 00:38:05 boris kernel: [ 20.105815] [<ffffffff81113d30>] ?
cp_new_stat+0xe0/0x100

Jul 16 00:38:05 boris kernel: [ 20.106230] [<ffffffff8113b4f7>] ?
block_ioctl+0x37/0x40

Jul 16 00:38:05 boris kernel: [ 20.106647] [<ffffffff8111e985>] ?
vfs_ioctl+0x35/0xd0

Jul 16 00:38:05 boris kernel: [ 20.107064] [<ffffffff8111ef08>] ?
do_vfs_ioctl+0x88/0x560

Jul 16 00:38:05 boris kernel: [ 20.107490] [<ffffffff8111402e>] ?
sys_newfstat+0x2e/0x50

Jul 16 00:38:05 boris kernel: [ 20.107919] [<ffffffff8111f460>] ?
sys_ioctl+0x80/0xa0

Jul 16 00:38:05 boris kernel: [ 20.108003] [<ffffffff81002e2b>] ?
system_call_fastpath+0x16/0x1b

Jul 16 00:38:05 boris kernel: [ 20.108003] ---[ end trace e8ea9c22d6b28439 ]---

Other than this stack trace, it seems to work fine.

At some point though I really hope this gets fixed. I'm still willing
to help test any new versions, just that I can't keep my box down for
an extended period.

Thanks.
I forgot to post, but here are the kernel messages I get when trying to
use the kernel's included mvsas driver:
Jul 15 22:42:41 boris kernel: [ 208.816129] sd 0:0:3:0: [sdf] Unhandled error code
Jul 15 22:42:41 boris kernel: [ 208.816809] sd 0:0:3:0: [sdf] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Jul 15 22:42:41 boris kernel: [ 208.817470] sd 0:0:3:0: [sdf] CDB: Read(10): 28 00 3a 45 c1 08 00 04 00 00
Jul 15 22:42:41 boris kernel: [ 208.818853] sd 0:0:1:0: [sdd] Unhandled error code
Jul 15 22:42:41 boris kernel: [ 208.819508] sd 0:0:1:0: [sdd] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Jul 15 22:42:41 boris kernel: [ 208.820179] sd 0:0:1:0: [sdd] CDB: Read(10): 28 00 3a 45 be 58 00 02 b0 00
Jul 15 22:42:41 boris kernel: [ 208.821558] sd 0:0:2:0: [sde] Unhandled error code
Jul 15 22:42:41 boris kernel: [ 208.822201] sd 0:0:2:0: [sde] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Jul 15 22:42:41 boris kernel: [ 208.822836] sd 0:0:2:0: [sde] CDB: Read(10): 28 00 3a 45 c1 08 00 04 00 00
Jul 15 22:42:41 boris kernel: [ 208.824157] sd 0:0:4:0: [sdg] Unhandled error code
Jul 15 22:42:41 boris kernel: [ 208.824784] sd 0:0:4:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Jul 15 22:42:41 boris kernel: [ 208.825407] sd 0:0:4:0: [sdg] CDB: Read(10): 28 00 3a 45 c1 08 00 04 00 00

Jul 15 22:43:13 boris kernel: [ 240.737334] md1_raid5 D 0000000000000001 0 6120 2 0x00000000
Jul 15 22:43:13 boris kernel: [ 240.737948] ffff88012c94c420
0000000000000046 ffff880100000000 ffff88012f65b680
Jul 15 22:43:13 boris kernel: [ 240.738570] 00000000000134c0
ffff88012e6effd8 00000000000134c0 ffff88012c94c420
Jul 15 22:43:13 boris kernel: [ 240.739196] ffff88012e6effd8
ffff88012e6effd8 00000000000134c0 00000000000134c0
Jul 15 22:43:13 boris kernel: [ 240.739821] Call Trace:
Jul 15 22:43:13 boris kernel: [ 240.740458] [<ffffffffa018d17e>] ?
md_super_wait+0xae/0xd0 [md_mod]

Jul 15 22:43:13 boris kernel: [ 240.741100] [<ffffffff810671b0>] ?
autoremove_wake_function+0x0/0x30

Jul 15 22:43:13 boris kernel: [ 240.741729] [<ffffffffa018d748>] ?
md_update_sb+0x268/0x3d0 [md_mod]

Jul 15 22:43:13 boris kernel: [ 240.742361] [<ffffffffa018fcd2>] ?
md_check_recovery+0x232/0x520 [md_mod]

Jul 15 22:43:13 boris kernel: [ 240.742982] [<ffffffffa0421833>] ?
raid5d+0x23/0x4f0 [raid456]

Jul 15 22:43:13 boris kernel: [ 240.743602] [<ffffffff8137883d>] ?
schedule_timeout+0x23d/0x310

Jul 15 22:43:13 boris kernel: [ 240.744221] [<ffffffff8103aee4>] ?
finish_task_switch+0x34/0xb0

Jul 15 22:43:13 boris kernel: [ 240.744861] [<ffffffffa018ce43>] ?
md_thread+0x53/0x120 [md_mod]

Jul 15 22:43:13 boris kernel: [ 240.745489] [<ffffffff810671b0>] ?
autoremove_wake_function+0x0/0x30

Jul 15 22:43:13 boris kernel: [ 240.746121] [<ffffffffa018cdf0>] ?
md_thread+0x0/0x120 [md_mod]

Jul 15 22:43:13 boris kernel: [ 240.746743] [<ffffffff81066c9e>] ?
kthread+0x8e/0xa0

Jul 15 22:43:13 boris kernel: [ 240.747367] [<ffffffff81003bd4>] ?
kernel_thread_helper+0x4/0x10

Jul 15 22:43:13 boris kernel: [ 240.748000] [<ffffffff81066c10>] ?
kthread+0x0/0xa0

Jul 15 22:43:13 boris kernel: [ 240.748639] [<ffffffff81003bd0>] ?
kernel_thread_helper+0x0/0x10

Jul 15 22:43:13 boris kernel: [ 240.750521] mount D 0000000000000001 0 6405 6403 0x00000000
Jul 15 22:43:13 boris kernel: [ 240.751158] ffff88012eb8f3d0
0000000000000082 ffff88012e50c600 ffff88012f65d1c0
Jul 15 22:43:13 boris kernel: [ 240.751805] 00000000000134c0
ffff88012dc0bfd8 00000000000134c0 ffff88012eb8f3d0
Jul 15 22:43:13 boris kernel: [ 240.752452] ffff88012dc0bfd8
ffff88012dc0bfd8 00000000000134c0 00000000000134c0
Jul 15 22:43:13 boris kernel: [ 240.753108] Call Trace:
Jul 15 22:43:13 boris kernel: [ 240.753761] [<ffffffffa0020990>] ?
scsi_done+0x0/0x20 [scsi_mod]

Jul 15 22:43:13 boris kernel: [ 240.754409] [<ffffffff8137883d>] ?
schedule_timeout+0x23d/0x310

Jul 15 22:43:13 boris kernel: [ 240.755053] [<ffffffff811ba097>] ?
blk_peek_request+0x127/0x1e0

Jul 15 22:43:13 boris kernel: [ 240.755708] [<ffffffffa0020c8d>] ?
scsi_dispatch_cmd+0x18d/0x2b0 [scsi_mod]

Jul 15 22:43:13 boris kernel: [ 240.756358] [<ffffffff81377af2>] ?
wait_for_common+0xd2/0x180

Jul 15 22:43:13 boris kernel: [ 240.757023] [<ffffffff8103da50>] ?
default_wake_function+0x0/0x20

Jul 15 22:43:13 boris kernel: [ 240.757672] [<ffffffffa041f486>] ?
unplug_slaves+0x86/0xc0 [raid456]

Jul 15 22:43:13 boris kernel: [ 240.758363] [<ffffffffa048ed8d>] ?
xlog_bread_noalign+0xbd/0xf0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.759046] [<ffffffffa04a38c0>] ?
xfs_buf_iowait+0x40/0xf0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.759730] [<ffffffffa048ed8d>] ?
xlog_bread_noalign+0xbd/0xf0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.760423] [<ffffffffa048edf5>] ?
xlog_bread+0x35/0x80 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.761124] [<ffffffffa0491b9f>] ?
xlog_find_verify_cycle+0xbf/0x170 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.761813] [<ffffffffa0492558>] ?
xlog_find_head+0x168/0x3a0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.762495] [<ffffffffa04927b7>] ?
xlog_find_tail+0x27/0x3d0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.763178] [<ffffffffa0492b75>] ?
xlog_recover+0x15/0x90 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.763858] [<ffffffffa048b9c4>] ?
xfs_log_mount+0x134/0x170 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.764528] [<ffffffffa0495b8f>] ?
xfs_mountfs+0x38f/0x720 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.765214] [<ffffffffa04a090b>] ?
kmem_alloc+0x7b/0xc0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.765888] [<ffffffffa04a09fb>] ?
kmem_zalloc+0x2b/0x40 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.766559] [<ffffffffa04ad985>] ?
xfs_fs_fill_super+0x225/0x3b0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.767203] [<ffffffff81112c03>] ?
get_sb_bdev+0x1a3/0x1e0

Jul 15 22:43:13 boris kernel: [ 240.767877] [<ffffffffa04ad760>] ?
xfs_fs_fill_super+0x0/0x3b0 [xfs]

Jul 15 22:43:13 boris kernel: [ 240.768533] [<ffffffff81112633>] ?
vfs_kern_mount+0x83/0x1f0

Jul 15 22:43:13 boris kernel: [ 240.769174] [<ffffffff81112813>] ?
do_kern_mount+0x53/0x120

Jul 15 22:43:13 boris kernel: [ 240.769806] [<ffffffff8112abfa>] ?
do_mount+0x28a/0x8a0

Jul 15 22:43:13 boris kernel: [ 240.770441] [<ffffffff81128960>] ?
copy_mount_options+0xe0/0x180

Jul 15 22:43:13 boris kernel: [ 240.771073] [<ffffffff8112b2aa>] ?
sys_mount+0x9a/0xf0

Jul 15 22:43:13 boris kernel: [ 240.771695] [<ffffffff81002e2b>] ?
system_call_fastpath+0x16/0x1b

Jul 15 22:45:13 boris kernel: [ 360.769363] md1_raid5 D 0000000000000001 0 6120 2 0x00000000
Jul 15 22:45:13 boris kernel: [ 360.770006] ffff88012c94c420
0000000000000046 ffff880100000000 ffff88012f65b680
Jul 15 22:45:13 boris kernel: [ 360.770648] 00000000000134c0
ffff88012e6effd8 00000000000134c0 ffff88012c94c420
Jul 15 22:45:13 boris kernel: [ 360.771298] ffff88012e6effd8
ffff88012e6effd8 00000000000134c0 00000000000134c0
Jul 15 22:45:13 boris kernel: [ 360.771946] Call Trace:
Jul 15 22:45:13 boris kernel: [ 360.772620] [<ffffffffa018d17e>] ?
md_super_wait+0xae/0xd0 [md_mod]

Jul 15 22:45:13 boris kernel: [ 360.773265] [<ffffffff810671b0>] ?
autoremove_wake_function+0x0/0x30

Jul 15 22:45:13 boris kernel: [ 360.773911] [<ffffffffa018d748>] ?
md_update_sb+0x268/0x3d0 [md_mod]

Jul 15 22:45:13 boris kernel: [ 360.774550] [<ffffffffa018fcd2>] ?
md_check_recovery+0x232/0x520 [md_mod]

Jul 15 22:45:13 boris kernel: [ 360.775180] [<ffffffffa0421833>] ?
raid5d+0x23/0x4f0 [raid456]

Jul 15 22:45:13 boris kernel: [ 360.775804] [<ffffffff8137883d>] ?
schedule_timeout+0x23d/0x310

Jul 15 22:45:13 boris kernel: [ 360.776424] [<ffffffff8103aee4>] ?
finish_task_switch+0x34/0xb0

Jul 15 22:45:13 boris kernel: [ 360.777064] [<ffffffffa018ce43>] ?
md_thread+0x53/0x120 [md_mod]

Jul 15 22:45:13 boris kernel: [ 360.777679] [<ffffffff810671b0>] ?
autoremove_wake_function+0x0/0x30

Jul 15 22:45:13 boris kernel: [ 360.778302] [<ffffffffa018cdf0>] ?
md_thread+0x0/0x120 [md_mod]

Jul 15 22:45:13 boris kernel: [ 360.778919] [<ffffffff81066c9e>] ?
kthread+0x8e/0xa0

Jul 15 22:45:13 boris kernel: [ 360.779534] [<ffffffff81003bd4>] ?
kernel_thread_helper+0x4/0x10

Jul 15 22:45:13 boris kernel: [ 360.780148] [<ffffffff81066c10>] ?
kthread+0x0/0xa0

Jul 15 22:45:13 boris kernel: [ 360.780776] [<ffffffff81003bd0>] ?
kernel_thread_helper+0x0/0x10

Jul 15 22:45:13 boris kernel: [ 360.782623] mount D 0000000000000001 0 6405 6403 0x00000000
Jul 15 22:45:13 boris kernel: [ 360.783248] ffff88012eb8f3d0
0000000000000082 ffff88012e50c600 ffff88012f65d1c0
Jul 15 22:45:13 boris kernel: [ 360.783883] 00000000000134c0
ffff88012dc0bfd8 00000000000134c0 ffff88012eb8f3d0
Jul 15 22:45:13 boris kernel: [ 360.784536] ffff88012dc0bfd8
ffff88012dc0bfd8 00000000000134c0 00000000000134c0
Jul 15 22:45:13 boris kernel: [ 360.785184] Call Trace:
Jul 15 22:45:13 boris kernel: [ 360.785829] [<ffffffffa0020990>] ?
scsi_done+0x0/0x20 [scsi_mod]

Jul 15 22:45:13 boris kernel: [ 360.786465] [<ffffffff8137883d>] ?
schedule_timeout+0x23d/0x310

Jul 15 22:45:13 boris kernel: [ 360.787098] [<ffffffff811ba097>] ?
blk_peek_request+0x127/0x1e0

Jul 15 22:45:13 boris kernel: [ 360.787740] [<ffffffffa0020c8d>] ?
scsi_dispatch_cmd+0x18d/0x2b0 [scsi_mod]

Jul 15 22:45:13 boris kernel: [ 360.788361] [<ffffffff81377af2>] ?
wait_for_common+0xd2/0x180

Jul 15 22:45:13 boris kernel: [ 360.788988] [<ffffffff8103da50>] ?
default_wake_function+0x0/0x20

Jul 15 22:45:13 boris kernel: [ 360.789612] [<ffffffffa041f486>] ?
unplug_slaves+0x86/0xc0 [raid456]

Jul 15 22:45:13 boris kernel: [ 360.790277] [<ffffffffa048ed8d>] ?
xlog_bread_noalign+0xbd/0xf0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.790933] [<ffffffffa04a38c0>] ?
xfs_buf_iowait+0x40/0xf0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.791597] [<ffffffffa048ed8d>] ?
xlog_bread_noalign+0xbd/0xf0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.792258] [<ffffffffa048edf5>] ?
xlog_bread+0x35/0x80 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.792935] [<ffffffffa0491b9f>] ?
xlog_find_verify_cycle+0xbf/0x170 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.793598] [<ffffffffa0492558>] ?
xlog_find_head+0x168/0x3a0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.794258] [<ffffffffa04927b7>] ?
xlog_find_tail+0x27/0x3d0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.794910] [<ffffffffa0492b75>] ?
xlog_recover+0x15/0x90 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.795565] [<ffffffffa048b9c4>] ?
xfs_log_mount+0x134/0x170 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.796216] [<ffffffffa0495b8f>] ?
xfs_mountfs+0x38f/0x720 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.796879] [<ffffffffa04a090b>] ?
kmem_alloc+0x7b/0xc0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.797527] [<ffffffffa04a09fb>] ?
kmem_zalloc+0x2b/0x40 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.798171] [<ffffffffa04ad985>] ?
xfs_fs_fill_super+0x225/0x3b0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.798785] [<ffffffff81112c03>] ?
get_sb_bdev+0x1a3/0x1e0

Jul 15 22:45:13 boris kernel: [ 360.799429] [<ffffffffa04ad760>] ?
xfs_fs_fill_super+0x0/0x3b0 [xfs]

Jul 15 22:45:13 boris kernel: [ 360.800046] [<ffffffff81112633>] ?
vfs_kern_mount+0x83/0x1f0

Jul 15 22:45:13 boris kernel: [ 360.800678] [<ffffffff81112813>] ?
do_kern_mount+0x53/0x120

Jul 15 22:45:13 boris kernel: [ 360.801292] [<ffffffff8112abfa>] ?
do_mount+0x28a/0x8a0

Jul 15 22:45:13 boris kernel: [ 360.801910] [<ffffffff81128960>] ?
copy_mount_options+0xe0/0x180

Jul 15 22:45:13 boris kernel: [ 360.802531] [<ffffffff8112b2aa>] ?
sys_mount+0x9a/0xf0

Jul 15 22:45:13 boris kernel: [ 360.803152] [<ffffffff81002e2b>] ?
system_call_fastpath+0x16/0x1b

I'm pretty sure most of that is due to the driver not responding for 4 of
the drives (the first few messages).

Thanks again.

--
Thomas Fjellstrom
tfjellstrom@xxxxxxxxxxxxxxx


--- Begin Message ---
Hello all,

I have also got severe problems with mvsas, but have managed to at least make it usable. My config is an AOC-SASLP-MV8 with an HP SAS expander, with WD and Seagate SATA disks connected to the expander, in a Norco 4020 case that does not have any on-board expanders. The kernel version is 2.6.33 with the latest Srinivas patch. I also don't use RAID; each of the 13 disks has its own filesystem.

My experiences are:
The only filesystem that works is JFS. XFS and btrfs crash the controller, making all the disks unreadable, and that can only be solved by rebooting. (mkfs succeeds; crashes happen only after mounting or when fscking.)
JFS works OK as long as I access it via Samba or make copies and moves with cp and mv. If I try file operations with the Thunar file manager, the controller crashes. Smartctl has not caused any problems so far and works OK.

Attached are some kernel logs captured from those crashes.

Kind regards,
Konstantinos Skarlatos





On 6/6/2010 3:13 PM, Jelle de Jong wrote:
Dear Srini,

I spent a few weeks gathering information and did some intensive
testing the last few days.

Srinivas Naga Venkatasatya Pasagadugula wrote, on 06-05-10 08:01:
1. Is this problem seen only with WD SATA drives? (I don't have WD SATA drives to reproduce this issue.)
2. Is the problem with direct-attached SATA drives, or also with drives connected through expanders?
3. Could you please provide the "dmesg" log or the "/var/log/messages" log?
4. What is the capacity of the SATA drives connected to the controller?
5. Does your HBA have the 6440 chipset?
With a few tricks I managed to boot my OS from the mvsas controller. I
have eleven different SATA disks attached, via a 4-port mini-SAS
backplane without an expander, to the two Marvell 88SE63xx/64xx mvsas
controllers in my system.

I managed to create five mdadm raid1 arrays without adding the actual
active sync device (so one disk for each array). I built my LVM systems
on top of this and did a lot of file transfers for testing.

This worked stably enough; there are HDIO_GET_IDENTITY errors during
boot and operation, but the hard disks seem to be working.
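
(As an aside, a minimal userspace sketch of the HDIO_GET_IDENTITY ioctl
that tools such as hdparm issue, just to illustrate where these errors come
from; the /dev/sdb path is only an example.)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(void)
{
	struct hd_driveid id;
	int fd = open("/dev/sdb", O_RDONLY | O_NONBLOCK);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* This is the call that fails on SATA disks behind libsas/mvsas
	 * when the ATA identify data cannot be fetched. */
	if (ioctl(fd, HDIO_GET_IDENTITY, &id) < 0) {
		perror("HDIO_GET_IDENTITY");
		close(fd);
		return 1;
	}
	printf("model: %.40s\n", (char *)id.model);
	close(fd);
	return 0;
}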

So, to debug whether the issues are related to a particular brand of
hard disk, I started to add WDC, Hitachi, SAMSUNG, Maxtor and Seagate
disks to the respective raid1 arrays; the disks are of different sizes
(320GB, 500GB, 1TB).

The sync starts and then fails, either immediately or a while later, with
failures in the mvsas driver. I have attached a failure example.

The failures are severe: I have lost complete lvm2 volumes and raid
arrays during testing.

Do you also have a SuperMicro AOC-SASLP-MV8 controller for testing?

I would love to use the controllers in production, but they are
currently unstable. I hope this information helps to solve the mvsas
issues.

With kind regards,

Jelle de Jong

------------[ cut here ]------------
WARNING: at drivers/ata/libata-core.c:5186 ata_qc_issue+0x31f/0x330 [libata]()
Hardware name:
Modules linked in: ipv6 hwmon_vid jfs cpufreq_powersave fan cpufreq_ondemand edac_core powernow_k8 firewire_ohci psmouse firewire_core freq_table serio_raw pcspkr k8temp thermal crc_itu_t evdev edac_mce_amd skge processor button i2c_nforce2 sg forcedeth i2c_core fuse rtc_cmos rtc_core rtc_lib ext2 mbcache dm_crypt dm_mod ses enclosure sd_mod usb_storage ohci_hcd mvsas libsas sata_sil ehci_hcd scsi_transport_sas sata_nv usbcore pata_amd sata_via ata_generic pata_via pata_acpi libata scsi_mod
Pid: 3308, comm: smartctl Not tainted 2.6.33-ARCH #1
Call Trace:
[<ffffffff810528c8>] warn_slowpath_common+0x78/0xb0
[<ffffffff8105290f>] warn_slowpath_null+0xf/0x20
[<ffffffffa002c14f>] ata_qc_issue+0x31f/0x330 [libata]
[<ffffffffa0006fae>] ? scsi_init_sgtable+0x4e/0x90 [scsi_mod]
[<ffffffffa0033cd0>] ? ata_scsi_pass_thru+0x0/0x2f0 [libata]
[<ffffffffa00310c6>] ata_scsi_translate+0xa6/0x180 [libata]
[<ffffffffa0000b10>] ? scsi_done+0x0/0x20 [scsi_mod]
[<ffffffffa0000b10>] ? scsi_done+0x0/0x20 [scsi_mod]
[<ffffffffa0034369>] ata_sas_queuecmd+0x139/0x2b0 [libata]
[<ffffffffa00f3098>] sas_queuecommand+0x98/0x300 [libsas]
[<ffffffffa0000c25>] scsi_dispatch_cmd+0xf5/0x230 [scsi_mod]
[<ffffffffa0006ba2>] scsi_request_fn+0x322/0x3e0 [scsi_mod]
[<ffffffff811b72bd>] __generic_unplug_device+0x2d/0x40
[<ffffffff811bcbf8>] blk_execute_rq_nowait+0x68/0xb0
[<ffffffff811bccc1>] blk_execute_rq+0x81/0xf0
[<ffffffff811b4d0b>] ? blk_rq_bio_prep+0x2b/0xd0
[<ffffffff811bc866>] ? blk_rq_map_kern+0xd6/0x150
[<ffffffffa0007ee7>] scsi_execute+0xf7/0x160 [scsi_mod]
[<ffffffffa0033167>] ata_cmd_ioctl+0x177/0x320 [libata]
[<ffffffffa0033467>] ata_sas_scsi_ioctl+0x157/0x2b0 [libata]
[<ffffffffa00f25f7>] sas_ioctl+0x47/0x50 [libsas]
[<ffffffffa0002225>] scsi_ioctl+0xd5/0x390 [scsi_mod]
[<ffffffffa0134d3e>] sd_ioctl+0xce/0xe0 [sd_mod]
[<ffffffff811be35f>] __blkdev_driver_ioctl+0x8f/0xb0
[<ffffffff811be82e>] blkdev_ioctl+0x22e/0x820
[<ffffffff8114fdf7>] block_ioctl+0x37/0x40
[<ffffffff81131ac8>] vfs_ioctl+0x38/0xd0
[<ffffffff81131c70>] do_vfs_ioctl+0x80/0x560
[<ffffffff811cff46>] ? __up_read+0xa6/0xd0
[<ffffffff81077c29>] ? up_read+0x9/0x10
[<ffffffff811321d1>] sys_ioctl+0x81/0xa0
[<ffffffff8100a002>] system_call_fastpath+0x16/0x1b
---[ end trace 115ad6bf347654e7 ]---
sdm: sdm1
sdn: sdn1
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
INFO: task smbd:3348 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
smbd D ffff88000180f948 0 3348 2949 0x00000000
ffff88002dfe38b8 0000000000000086 0000000000000000 ffffffffa000692b
000000013d124bd0 0000000000011250 ffff88003caff938 ffff88003cf36690
000000010061522b ffff88002dfe3fd8 ffff88002dfe2000 ffff88002dfe2000
Call Trace:
[<ffffffffa000692b>] ? scsi_request_fn+0xab/0x3e0 [scsi_mod]
[<ffffffff810dbd00>] ? sync_page+0x0/0x50
[<ffffffff8135970e>] io_schedule+0x6e/0xb0
[<ffffffff810dbd3d>] sync_page+0x3d/0x50
[<ffffffff81359d32>] __wait_on_bit_lock+0x52/0xb0
[<ffffffff810dbce2>] __lock_page+0x62/0x70
[<ffffffff81073090>] ? wake_bit_function+0x0/0x40
[<ffffffff810dc3d9>] do_read_cache_page+0x159/0x180
[<ffffffffa02e11e0>] ? metapage_readpage+0x0/0x180 [jfs]
[<ffffffff810dc434>] read_cache_page_async+0x14/0x20
[<ffffffff810dc449>] read_cache_page+0x9/0x20
[<ffffffffa02e1d85>] __get_metapage+0x95/0x5a0 [jfs]
[<ffffffffa02d4fb5>] diRead+0x155/0x200 [jfs]
[<ffffffffa02c8d08>] jfs_iget+0x38/0x160 [jfs]
[<ffffffffa02cb461>] jfs_lookup+0x71/0x140 [jfs]
[<ffffffff81110000>] ? calculate_sizes+0x220/0x4a0
[<ffffffff81359d53>] ? __wait_on_bit_lock+0x73/0xb0
[<ffffffff8135a45d>] ? __mutex_lock_slowpath+0x26d/0x370
[<ffffffff8112bafb>] do_lookup+0x1db/0x270
[<ffffffff8112e127>] link_path_walk+0x6b7/0xf10
[<ffffffff810e1b28>] ? free_hot_page+0x28/0x90
[<ffffffff8112eb1c>] path_walk+0x5c/0xc0
[<ffffffff8112ecb3>] do_path_lookup+0x53/0xa0
[<ffffffff8112f8f2>] user_path_at+0x52/0xa0
[<ffffffff8115f01e>] ? locks_free_lock+0x3e/0x60
[<ffffffff8115fb74>] ? fcntl_setlk+0x64/0x350
[<ffffffff811260a7>] vfs_fstatat+0x37/0x70
[<ffffffff81126206>] vfs_stat+0x16/0x20
[<ffffffff8112622f>] sys_newstat+0x1f/0x50
[<ffffffff81131370>] ? sys_fcntl+0x160/0x5d0
[<ffffffff8100a002>] system_call_fastpath+0x16/0x1b
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
sd 10:0:1:0: [sdc] Unhandled error code
sd 10:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 30 b4 cb ff 00 04 00 00
end_request: I/O error, dev sdc, sector 817155071
sd 10:0:12:0: [sdm] Unhandled error code
sd 10:0:12:0: [sdm] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:12:0: [sdm] CDB: cdb[0]=0x2a: 2a 00 6e 04 66 c8 00 04 00 00
end_request: I/O error, dev sdm, sector 1845782216
sd 10:0:2:0: [sdd] Unhandled error code
sd 10:0:2:0: [sdd] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:2:0: [sdd] CDB: cdb[0]=0x2a: 2a 00 ae 97 38 ff 00 00 08 00
end_request: I/O error, dev sdd, sector 2929146111
sd 10:0:2:0: [sdd] Unhandled error code
sd 10:0:2:0: [sdd] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:2:0: [sdd] CDB: cdb[0]=0x28: 28 00 68 6a 4d 57 00 00 40 00
end_request: I/O error, dev sdd, sector 1751797079
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5

--- End Message ---
--- Begin Message ---
Here is one more log with crash information. All of my disks are 1.5TB, Seagates and WD. The expander did not affect the crashes; I had the same issues with direct-attached drives.




drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
sd 10:0:1:0: [sdc] Unhandled error code
sd 10:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 1c b2 86 6f 00 03 00 00
end_request: I/O error, dev sdc, sector 481461871
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 62 03 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442630912
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 61 ff 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442629888
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 61 fb 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442628864
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 61 f7 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442627840
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 61 f3 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442626816
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 61 ef 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442625792
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 00 00 c8 c8 00 00 08 00
end_request: I/O error, dev sdf, sector 51400
metapage_write_end_io: I/O error
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 00 00 db 90 00 00 58 00
end_request: I/O error, dev sdf, sector 56208
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
metapage_write_end_io: I/O error
INFO: task filezilla:18572 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
filezilla D ffffffff810dc2d0 0 18572 2063 0x00000000
ffff8800175b3768 0000000000000082 00000000000530d7 0000000000000008
0000000000800020 ffff88003ca52cb0 ffff8800175b3798 ffffffff811b6fa4
ffff8800175b3740 ffff8800175b3fd8 ffff8800175b2000 ffff8800175b2000
Call Trace:
[<ffffffff811b6fa4>] ? generic_make_request+0x184/0x4f0
[<ffffffff8107d739>] ? ktime_get_ts+0xa9/0xe0
[<ffffffff810dc2d0>] ? sync_page+0x0/0x50
[<ffffffff8135ad5e>] io_schedule+0x6e/0xb0
[<ffffffff810dc30d>] sync_page+0x3d/0x50
[<ffffffff8135b382>] __wait_on_bit_lock+0x52/0xb0
[<ffffffff810dc2b2>] __lock_page+0x62/0x70
[<ffffffff810732b0>] ? wake_bit_function+0x0/0x40
[<ffffffff810dc9a9>] do_read_cache_page+0x159/0x180
[<ffffffffa030d210>] ? metapage_readpage+0x0/0x180 [jfs]
[<ffffffff810dca04>] read_cache_page_async+0x14/0x20
[<ffffffff810dca19>] read_cache_page+0x9/0x20
[<ffffffffa030ddb5>] __get_metapage+0x95/0x5a0 [jfs]
[<ffffffff8135d1b4>] ? __down_read+0xd4/0xd6
[<ffffffffa02f87eb>] ? xtLookup+0x18b/0x1a0 [jfs]
[<ffffffffa0304227>] dbAlloc+0x147/0x480 [jfs]
[<ffffffffa030cb92>] extAlloc+0x162/0x4d0 [jfs]
[<ffffffff8111d740>] ? mem_cgroup_cache_charge+0x140/0x1e0
[<ffffffffa02f4b21>] jfs_get_block+0x1c1/0x220 [jfs]
[<ffffffff8114dd0a>] nobh_write_begin+0x1ea/0x4b0
[<ffffffff8114ae16>] ? __set_page_dirty+0x76/0xd0
[<ffffffffa02f461e>] jfs_write_begin+0x1e/0x20 [jfs]
[<ffffffffa02f4960>] ? jfs_get_block+0x0/0x220 [jfs]
[<ffffffff810db70d>] generic_file_buffered_write+0x10d/0x280
[<ffffffff810581f2>] ? current_fs_time+0x22/0x30
[<ffffffff810dd658>] __generic_file_aio_write+0x238/0x450
[<ffffffff8135baad>] ? __mutex_lock_slowpath+0x26d/0x370
[<ffffffff810dd8d4>] generic_file_aio_write+0x64/0xd0
[<ffffffff81122002>] do_sync_write+0xd2/0x110
[<ffffffff8135dc8e>] ? common_interrupt+0xe/0x13
[<ffffffff8119fcb1>] ? security_file_permission+0x11/0x20
[<ffffffff81122b48>] vfs_write+0xb8/0x1a0
[<ffffffff81122d0c>] sys_write+0x4c/0x80
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
INFO: task filezilla:18574 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
filezilla D ffffffff810dc2d0 0 18574 2063 0x00000000
ffff88003c967768 0000000000000082 0000000000045ca7 0000000000000008
0000000000800020 ffff88003ca52cb0 ffff88003c967798 ffffffff811b6fa4
ffff88003c967740 ffff88003c967fd8 ffff88003c966000 ffff88003c966000
Call Trace:
[<ffffffff811b6fa4>] ? generic_make_request+0x184/0x4f0
[<ffffffff8107d739>] ? ktime_get_ts+0xa9/0xe0
[<ffffffff810dc2d0>] ? sync_page+0x0/0x50
[<ffffffff8135ad5e>] io_schedule+0x6e/0xb0
[<ffffffff810dc30d>] sync_page+0x3d/0x50
[<ffffffff8135b382>] __wait_on_bit_lock+0x52/0xb0
[<ffffffff810dc2b2>] __lock_page+0x62/0x70
[<ffffffff810732b0>] ? wake_bit_function+0x0/0x40
[<ffffffff810dc9a9>] do_read_cache_page+0x159/0x180
[<ffffffffa030d210>] ? metapage_readpage+0x0/0x180 [jfs]
[<ffffffff810dca04>] read_cache_page_async+0x14/0x20
[<ffffffff810dca19>] read_cache_page+0x9/0x20
[<ffffffffa030ddb5>] __get_metapage+0x95/0x5a0 [jfs]
[<ffffffff8135d1b4>] ? __down_read+0xd4/0xd6
[<ffffffffa02f86ed>] ? xtLookup+0x8d/0x1a0 [jfs]
[<ffffffffa0304227>] dbAlloc+0x147/0x480 [jfs]
[<ffffffffa030cb92>] extAlloc+0x162/0x4d0 [jfs]
[<ffffffff8111d740>] ? mem_cgroup_cache_charge+0x140/0x1e0
[<ffffffffa02f4b21>] jfs_get_block+0x1c1/0x220 [jfs]
[<ffffffff8114dd0a>] nobh_write_begin+0x1ea/0x4b0
[<ffffffff8114ae16>] ? __set_page_dirty+0x76/0xd0
[<ffffffffa02f461e>] jfs_write_begin+0x1e/0x20 [jfs]
[<ffffffffa02f4960>] ? jfs_get_block+0x0/0x220 [jfs]
[<ffffffff810db70d>] generic_file_buffered_write+0x10d/0x280
[<ffffffff810581f2>] ? current_fs_time+0x22/0x30
[<ffffffff810dd658>] __generic_file_aio_write+0x238/0x450
[<ffffffff8135baad>] ? __mutex_lock_slowpath+0x26d/0x370
[<ffffffff810dd8d4>] generic_file_aio_write+0x64/0xd0
[<ffffffff81122002>] do_sync_write+0xd2/0x110
[<ffffffff8135dc8e>] ? common_interrupt+0xe/0x13
[<ffffffff8135dc8e>] ? common_interrupt+0xe/0x13
[<ffffffff8119fcb1>] ? security_file_permission+0x11/0x20
[<ffffffff81122b48>] vfs_write+0xb8/0x1a0
[<ffffffff81122d0c>] sys_write+0x4c/0x80
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
INFO: task smbd:18658 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
smbd D ffff88000190f948 0 18658 16782 0x00000000
ffff8800328cd608 0000000000000086 0000000000000000 ffffffff812772b2
ffff8800328cd5a8 ffffffffa000694a ffff880000000000 ffff88003c921830
00000001061958d0 ffff8800328cdfd8 ffff8800328cc000 ffff8800328cc000
Call Trace:
[<ffffffff812772b2>] ? put_device+0x12/0x20
[<ffffffffa000694a>] ? scsi_request_fn+0xaa/0x4d0 [scsi_mod]
[<ffffffff810dc2d0>] ? sync_page+0x0/0x50
[<ffffffff8135ad5e>] io_schedule+0x6e/0xb0
[<ffffffff810dc30d>] sync_page+0x3d/0x50
[<ffffffff8135b382>] __wait_on_bit_lock+0x52/0xb0
[<ffffffff810dc2b2>] __lock_page+0x62/0x70
[<ffffffff810732b0>] ? wake_bit_function+0x0/0x40
[<ffffffff810dc9a9>] do_read_cache_page+0x159/0x180
[<ffffffffa030d210>] ? metapage_readpage+0x0/0x180 [jfs]
[<ffffffff810dca04>] read_cache_page_async+0x14/0x20
[<ffffffff810dca19>] read_cache_page+0x9/0x20
[<ffffffffa030ddb5>] __get_metapage+0x95/0x5a0 [jfs]
[<ffffffffa0309851>] dtSearch+0x781/0xb00 [jfs]
[<ffffffffa02f7503>] jfs_lookup+0x103/0x150 [jfs]
[<ffffffff81137648>] ? d_alloc+0x158/0x1c0
[<ffffffff8112c64d>] __lookup_hash+0xed/0x150
[<ffffffff8112cb95>] lookup_one_len+0x75/0xb0
[<ffffffffa03a114d>] vfsub_lookup_one_len+0x1d/0x50 [aufs]
[<ffffffffa03a7ae2>] au_lkup_one+0x32/0xf0 [aufs]
[<ffffffff810f9619>] ? handle_mm_fault+0x6d9/0xa30
[<ffffffff8135baad>] ? __mutex_lock_slowpath+0x26d/0x370
[<ffffffff8135d0c5>] ? __down_write_nested+0xd5/0xe0
[<ffffffffa03a526f>] au_wh_test+0x1f/0xd0 [aufs]
[<ffffffff8135bbc1>] ? mutex_lock+0x11/0x30
[<ffffffffa03a8286>] au_lkup_dentry+0x336/0x530 [aufs]
[<ffffffffa03af3cf>] aufs_lookup+0xef/0x230 [aufs]
[<ffffffff8112c4cb>] do_lookup+0x1db/0x270
[<ffffffff8112eb59>] link_path_walk+0x679/0xe50
[<ffffffff8112f502>] path_walk+0x62/0xe0
[<ffffffff8112f6c3>] do_path_lookup+0x53/0xa0
[<ffffffff8112f735>] kern_path+0x25/0x50
[<ffffffff811cb30b>] ? cpumask_any_but+0x2b/0x40
[<ffffffff810fdbb0>] ? unmap_region+0x170/0x1a0
[<ffffffff81151629>] lookup_bdev+0x39/0xb0
[<ffffffff8112e26e>] ? getname+0x1ce/0x230
[<ffffffff81174b03>] sys_quotactl+0xb3/0x3d0
[<ffffffff81077df9>] ? up_write+0x9/0x10
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
sd 10:0:6:0: [sdh] Unhandled error code
sd 10:0:6:0: [sdh] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:6:0: [sdh] CDB: cdb[0]=0x28: 28 00 27 63 5d 6f 00 00 08 00
end_request: I/O error, dev sdh, sector 660823407
metapage_read_end_io: I/O error
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
INFO: task filezilla:18572 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
filezilla D ffffffff810dc2d0 0 18572 2063 0x00000000
ffff8800175b3768 0000000000000082 00000000000530d7 0000000000000008
0000000000800020 ffff88003ca52cb0 ffff8800175b3798 ffffffff811b6fa4
ffff8800175b3740 ffff8800175b3fd8 ffff8800175b2000 ffff8800175b2000
Call Trace:
[<ffffffff811b6fa4>] ? generic_make_request+0x184/0x4f0
[<ffffffff8107d739>] ? ktime_get_ts+0xa9/0xe0
[<ffffffff810dc2d0>] ? sync_page+0x0/0x50
[<ffffffff8135ad5e>] io_schedule+0x6e/0xb0
[<ffffffff810dc30d>] sync_page+0x3d/0x50
[<ffffffff8135b382>] __wait_on_bit_lock+0x52/0xb0
[<ffffffff810dc2b2>] __lock_page+0x62/0x70
[<ffffffff810732b0>] ? wake_bit_function+0x0/0x40
[<ffffffff810dc9a9>] do_read_cache_page+0x159/0x180
[<ffffffffa030d210>] ? metapage_readpage+0x0/0x180 [jfs]
[<ffffffff810dca04>] read_cache_page_async+0x14/0x20
[<ffffffff810dca19>] read_cache_page+0x9/0x20
[<ffffffffa030ddb5>] __get_metapage+0x95/0x5a0 [jfs]
[<ffffffff8135d1b4>] ? __down_read+0xd4/0xd6
[<ffffffffa02f87eb>] ? xtLookup+0x18b/0x1a0 [jfs]
[<ffffffffa0304227>] dbAlloc+0x147/0x480 [jfs]
[<ffffffffa030cb92>] extAlloc+0x162/0x4d0 [jfs]
[<ffffffff8111d740>] ? mem_cgroup_cache_charge+0x140/0x1e0
[<ffffffffa02f4b21>] jfs_get_block+0x1c1/0x220 [jfs]
[<ffffffff8114dd0a>] nobh_write_begin+0x1ea/0x4b0
[<ffffffff8114ae16>] ? __set_page_dirty+0x76/0xd0
[<ffffffffa02f461e>] jfs_write_begin+0x1e/0x20 [jfs]
[<ffffffffa02f4960>] ? jfs_get_block+0x0/0x220 [jfs]
[<ffffffff810db70d>] generic_file_buffered_write+0x10d/0x280
[<ffffffff810581f2>] ? current_fs_time+0x22/0x30
[<ffffffff810dd658>] __generic_file_aio_write+0x238/0x450
[<ffffffff8135baad>] ? __mutex_lock_slowpath+0x26d/0x370
[<ffffffff810dd8d4>] generic_file_aio_write+0x64/0xd0
[<ffffffff81122002>] do_sync_write+0xd2/0x110
[<ffffffff8135dc8e>] ? common_interrupt+0xe/0x13
[<ffffffff8119fcb1>] ? security_file_permission+0x11/0x20
[<ffffffff81122b48>] vfs_write+0xb8/0x1a0
[<ffffffff81122d0c>] sys_write+0x4c/0x80
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
INFO: task filezilla:18574 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
filezilla D ffffffff810dc2d0 0 18574 2063 0x00000000
ffff88003c967768 0000000000000082 0000000000045ca7 0000000000000008
0000000000800020 ffff88003ca52cb0 ffff88003c967798 ffffffff811b6fa4
ffff88003c967740 ffff88003c967fd8 ffff88003c966000 ffff88003c966000
Call Trace:
[<ffffffff811b6fa4>] ? generic_make_request+0x184/0x4f0
[<ffffffff8107d739>] ? ktime_get_ts+0xa9/0xe0
[<ffffffff810dc2d0>] ? sync_page+0x0/0x50
[<ffffffff8135ad5e>] io_schedule+0x6e/0xb0
[<ffffffff810dc30d>] sync_page+0x3d/0x50
[<ffffffff8135b382>] __wait_on_bit_lock+0x52/0xb0
[<ffffffff810dc2b2>] __lock_page+0x62/0x70
[<ffffffff810732b0>] ? wake_bit_function+0x0/0x40
[<ffffffff810dc9a9>] do_read_cache_page+0x159/0x180
[<ffffffffa030d210>] ? metapage_readpage+0x0/0x180 [jfs]
[<ffffffff810dca04>] read_cache_page_async+0x14/0x20
[<ffffffff810dca19>] read_cache_page+0x9/0x20
[<ffffffffa030ddb5>] __get_metapage+0x95/0x5a0 [jfs]
[<ffffffff8135d1b4>] ? __down_read+0xd4/0xd6
[<ffffffffa02f86ed>] ? xtLookup+0x8d/0x1a0 [jfs]
[<ffffffffa0304227>] dbAlloc+0x147/0x480 [jfs]
[<ffffffffa030cb92>] extAlloc+0x162/0x4d0 [jfs]
[<ffffffff8111d740>] ? mem_cgroup_cache_charge+0x140/0x1e0
[<ffffffffa02f4b21>] jfs_get_block+0x1c1/0x220 [jfs]
[<ffffffff8114dd0a>] nobh_write_begin+0x1ea/0x4b0
[<ffffffff8114ae16>] ? __set_page_dirty+0x76/0xd0
[<ffffffffa02f461e>] jfs_write_begin+0x1e/0x20 [jfs]
[<ffffffffa02f4960>] ? jfs_get_block+0x0/0x220 [jfs]
[<ffffffff810db70d>] generic_file_buffered_write+0x10d/0x280
[<ffffffff810581f2>] ? current_fs_time+0x22/0x30
[<ffffffff810dd658>] __generic_file_aio_write+0x238/0x450
[<ffffffff8135baad>] ? __mutex_lock_slowpath+0x26d/0x370
[<ffffffff810dd8d4>] generic_file_aio_write+0x64/0xd0
[<ffffffff81122002>] do_sync_write+0xd2/0x110
[<ffffffff8135dc8e>] ? common_interrupt+0xe/0x13
[<ffffffff8135dc8e>] ? common_interrupt+0xe/0x13
[<ffffffff8119fcb1>] ? security_file_permission+0x11/0x20
[<ffffffff81122b48>] vfs_write+0xb8/0x1a0
[<ffffffff81122d0c>] sys_write+0x4c/0x80
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
INFO: task smbd:18659 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
smbd D ffff88000180f948 0 18659 16782 0x00000000
ffff8800328e9b18 0000000000000086 0000000000000000 ffff8800328e9db8
0000000020000000 0000000000000000 0000000000000000 ffff88003c9209f0
000000010619a86e ffff8800328e9fd8 ffff8800328e8000 ffff8800328e8000
Call Trace:
[<ffffffff8135d085>] __down_write_nested+0x95/0xe0
[<ffffffff8135d0db>] __down_write+0xb/0x10
[<ffffffff8135c6d9>] down_write+0x9/0x10
[<ffffffffa03a74c7>] di_write_lock+0x27/0x50 [aufs]
[<ffffffffa03983f6>] aufs_read_lock+0x106/0x110 [aufs]
[<ffffffff81077e09>] ? up_read+0x9/0x10
[<ffffffffa03afdd3>] ? aufs_permission+0x313/0x3d0 [aufs]
[<ffffffffa03a8ae1>] aufs_d_revalidate+0x31/0x450 [aufs]
[<ffffffff8112eab2>] link_path_walk+0x5d2/0xe50
[<ffffffff8112f502>] path_walk+0x62/0xe0
[<ffffffff8112f6c3>] do_path_lookup+0x53/0xa0
[<ffffffff81130302>] user_path_at+0x52/0xa0
[<ffffffff8113db14>] ? mntput_no_expire+0x24/0x100
[<ffffffff811d1276>] ? __up_read+0xa6/0xd0
[<ffffffff81077e09>] ? up_read+0x9/0x10
[<ffffffff81126927>] vfs_fstatat+0x37/0x70
[<ffffffff8113db14>] ? mntput_no_expire+0x24/0x100
[<ffffffff81126a86>] vfs_stat+0x16/0x20
[<ffffffff81126aaf>] sys_newstat+0x1f/0x50
[<ffffffff8112c0ec>] ? path_put+0x2c/0x40
[<ffffffff81120ee1>] ? sys_chdir+0x51/0x80
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
INFO: task smbd:18660 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
smbd D ffff88000190f948 0 18660 16782 0x00000000
ffff88003a767d18 0000000000000086 0000000000000000 ffff88003a767d08
000001c73a767cb8 000001c8000001c8 000000083a767dd8 ffff88003c922d90
000000010619fccf ffff88003a767fd8 ffff88003a766000 ffff88003a766000
Call Trace:
[<ffffffff8135d175>] __down_read+0x95/0xd6
[<ffffffff8135c6e9>] down_read+0x9/0x10
[<ffffffffa03a76b6>] di_read_lock+0x26/0x90 [aufs]
[<ffffffffa03af5f2>] aufs_getattr+0xe2/0x4a0 [aufs]
[<ffffffff8113030d>] ? user_path_at+0x5d/0xa0
[<ffffffff811268bc>] vfs_getattr+0x4c/0x80
[<ffffffff81126948>] vfs_fstatat+0x58/0x70
[<ffffffff811269c9>] vfs_lstat+0x19/0x20
[<ffffffff811269ef>] sys_newlstat+0x1f/0x50
[<ffffffff81009fc2>] system_call_fastpath+0x16/0x1b
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 62 12 a0 00 04 00 00
end_request: I/O error, dev sdf, sector 442634912
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 62 0e a0 00 04 00 00
end_request: I/O error, dev sdf, sector 442633888
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 62 0b 00 00 03 a0 00
end_request: I/O error, dev sdf, sector 442632960
sd 10:0:4:0: [sdf] Unhandled error code
sd 10:0:4:0: [sdf] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:4:0: [sdf] CDB: cdb[0]=0x2a: 2a 00 1a 62 07 00 00 04 00 00
end_request: I/O error, dev sdf, sector 442631936
sd 10:0:1:0: [sdc] Unhandled error code
sd 10:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 1c b2 87 6f 00 00 08 00
end_request: I/O error, dev sdc, sector 481462127
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
sd 10:0:1:0: [sdc] Unhandled error code
sd 10:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 00 04 5c a7 00 00 08 00
end_request: I/O error, dev sdc, sector 285863
metapage_read_end_io: I/O error
sd 10:0:1:0: [sdc] Unhandled error code
sd 10:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 00 05 30 d7 00 00 08 00
end_request: I/O error, dev sdc, sector 340183
metapage_read_end_io: I/O error
sd 10:0:1:0: [sdc] Unhandled error code
sd 10:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x06
sd 10:0:1:0: [sdc] CDB: cdb[0]=0x2a: 2a 00 ae a6 3d 67 00 00 08 00
end_request: I/O error, dev sdc, sector 2930130279

--- End Message ---