absurdly high "optimal_io_size" on Seagate SAS disk

From: Chris Friesen
Date: Thu Nov 06 2014 - 11:49:03 EST


Hi,

I'm running a modified 3.4-stable kernel on relatively recent x86 server-class hardware.

I recently installed a Seagate ST900MM0026 (a 900GB 2.5in 10K RPM SAS drive) and it's reporting a value of 4294966784 for optimal_io_size. The other parameters look normal, though:

/sys/block/sda/queue/hw_sector_size:512
/sys/block/sda/queue/logical_block_size:512
/sys/block/sda/queue/max_segment_size:65536
/sys/block/sda/queue/minimum_io_size:512
/sys/block/sda/queue/optimal_io_size:4294966784
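
In case it helps, 4294966784 works out to 0xFFFFFE00, i.e. 8388607 (0x7FFFFF) 512-byte blocks. If I'm reading sd_read_block_limits() right, that looks like sd just multiplying whatever the drive puts in the OPTIMAL TRANSFER LENGTH field of the Block Limits VPD page (0xB0) by the logical block size. Here's a quick-and-dirty userspace sketch of mine (not from sg3_utils or the kernel) that pulls that page via SG_IO so the raw firmware value can be compared against sysfs:

/*
 * Rough sketch: send an INQUIRY for the Block Limits VPD page (0xB0)
 * through SG_IO and dump the transfer length fields, so the raw firmware
 * value can be compared with what ends up in sysfs.
 *
 * Build: gcc -o blvpd blvpd.c
 * Run:   ./blvpd /dev/sda      (needs permission to open the device)
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <scsi/sg.h>

static uint32_t be32(const unsigned char *p)
{
	return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}

int main(int argc, char **argv)
{
	/* INQUIRY, EVPD=1, page 0xB0, allocation length 252 */
	unsigned char cdb[6] = { 0x12, 0x01, 0xb0, 0x00, 0xfc, 0x00 };
	unsigned char buf[252];
	unsigned char sense[32];
	struct sg_io_hdr io;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <scsi device>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(buf, 0, sizeof(buf));
	memset(&io, 0, sizeof(io));
	io.interface_id = 'S';
	io.cmd_len = sizeof(cdb);
	io.cmdp = cdb;
	io.dxfer_direction = SG_DXFER_FROM_DEV;
	io.dxfer_len = sizeof(buf);
	io.dxferp = buf;
	io.mx_sb_len = sizeof(sense);
	io.sbp = sense;
	io.timeout = 5000;	/* ms */

	if (ioctl(fd, SG_IO, &io) < 0) {
		perror("SG_IO");
		return 1;
	}
	if ((io.info & SG_INFO_OK_MASK) != SG_INFO_OK || buf[1] != 0xb0) {
		fprintf(stderr, "INQUIRY for VPD page 0xB0 failed\n");
		return 1;
	}

	/*
	 * SBC-3 Block Limits VPD page layout (counts in logical blocks):
	 *   bytes  6-7   OPTIMAL TRANSFER LENGTH GRANULARITY
	 *   bytes  8-11  MAXIMUM TRANSFER LENGTH
	 *   bytes 12-15  OPTIMAL TRANSFER LENGTH
	 */
	printf("optimal transfer length granularity: %u blocks\n",
	       (buf[6] << 8) | buf[7]);
	printf("maximum transfer length:             %u blocks\n", be32(&buf[8]));
	printf("optimal transfer length:             %u blocks\n", be32(&buf[12]));

	close(fd);
	return 0;
}

Running that against /dev/sda should show whether the firmware really reports 0x7FFFFF blocks or whether the value is getting mangled on the way to sysfs.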

The other drives in the system look more like what I'd expect:

/sys/block/sdb/queue/hw_sector_size:512
/sys/block/sdb/queue/logical_block_size:512
/sys/block/sdb/queue/max_segment_size:65536
/sys/block/sdb/queue/minimum_io_size:4096
/sys/block/sdb/queue/optimal_io_size:0
/sys/block/sdb/queue/physical_block_size:4096

/sys/block/sdc/queue/hw_sector_size:512
/sys/block/sdc/queue/logical_block_size:512
/sys/block/sdc/queue/max_segment_size:65536
/sys/block/sdc/queue/minimum_io_size:4096
/sys/block/sdc/queue/optimal_io_size:0
/sys/block/sdc/queue/physical_block_size:4096

According to the manual, the ST900MM0026 has a 512-byte physical sector size.

Is this a drive firmware bug? Or a bug in the SAS driver? Or is there a valid reason for a single drive to report such a huge value?

Would it make sense for the kernel to do some sort of sanity checking on this value?
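
To make that question concrete, here's a rough userspace illustration of the kind of check I'm picturing; io_opt_is_sane() and the 32767 KB per-command figure are placeholders I made up for the example, not anything taken from sd.c:

/*
 * Hypothetical illustration only (userspace, not actual sd.c code) of a
 * possible sanity check: ignore a reported optimal I/O size that isn't a
 * multiple of the physical block size, or that is larger than the HBA can
 * move in a single command.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool io_opt_is_sane(uint64_t io_opt, uint32_t phys_block_size,
			   uint64_t max_hw_bytes)
{
	if (io_opt == 0)
		return false;	/* nothing reported, nothing to trust */
	if (io_opt % phys_block_size)
		return false;	/* not aligned to the physical sector size */
	if (io_opt > max_hw_bytes)
		return false;	/* bigger than a single command can transfer */
	return true;
}

int main(void)
{
	/* optimal_io_size and block size from the ST900MM0026 above; the
	 * 32767 KB per-command limit is a made-up example value. */
	uint64_t io_opt = 4294966784ULL;
	uint32_t phys = 512;
	uint64_t max_hw_bytes = 32767ULL * 1024;

	printf("optimal_io_size %llu: %s\n", (unsigned long long)io_opt,
	       io_opt_is_sane(io_opt, phys, max_hw_bytes) ? "keep" : "ignore");
	return 0;
}

Fed the value above, that would flag 4294966784 as something to ignore rather than pass along to partitioning tools and filesystems.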

Chris