Re: Read I/O starvation with writeback RAID controller

From: Chris Friesen
Date: Fri Feb 22 2013 - 15:58:51 EST


On 02/22/2013 02:35 PM, Jan Engelhardt wrote:

On Friday 2013-02-22 20:28, Martin Svec wrote:

Yes, I've already tried the ROW scheduler. It helped at some low iodepths,
depending on the quantum settings, but generally didn't solve the problem. I
think the key issue is that none of the schedulers can throttle I/O according
to, say, the average request round-trip time. Shaohua Li is right here:
https://lkml.org/lkml/2012/12/11/598 -- as long as there's free room in the
device's queue, they blindly dispatch requests to it.
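A minimal sketch of the missing feedback loop, assuming hypothetical fields
and helpers (avg_rtt_us, rtt_target_us, max_inflight are illustrative names,
not existing kernel APIs):

struct throttle_state {
        unsigned long avg_rtt_us;       /* EWMA of request round-trip time */
        unsigned long rtt_target_us;    /* acceptable latency, e.g. a few ms */
        unsigned int inflight;          /* requests currently at the device */
        unsigned int max_inflight;      /* hard cap, e.g. device queue depth */
};

/* Dispatch only while the device is fast, not merely while it has room. */
static int may_dispatch(const struct throttle_state *ts)
{
        if (ts->avg_rtt_us > ts->rtt_target_us)
                return 0;
        return ts->inflight < ts->max_inflight;
}

static void on_completion(struct throttle_state *ts, unsigned long rtt_us)
{
        /* EWMA with 1/8 weight, like TCP's SRTT estimator. */
        ts->avg_rtt_us = (7 * ts->avg_rtt_us + rtt_us) / 8;
        ts->inflight--;
}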

That blind dispatching is exactly what I see in the deadline scheduler's FIFO
queues: there are no read requests to schedule between the writes because all
the readers are starving. So the scheduler keeps dispatching writes using all
the remaining capacity of the device queue, which in turn worsens the read
starvation. A bigger queue depth and a bigger writeback cache mean a higher
chance of read starvation, even from a single writer.
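To make the starvation mechanism concrete, an illustrative deadline-style
dispatch loop (every helper here is made up; this is not the actual deadline
scheduler code):

/* With the read FIFO empty, every free slot in the device queue goes
 * to a write, so a read arriving later waits behind queue_depth
 * writes plus whatever the controller's writeback cache holds. */
static void dispatch_loop(struct device *dev)
{
        while (device_queue_has_room(dev)) {
                struct request *rq;

                if (!list_empty(&read_fifo))            /* starving readers */
                        rq = fifo_pop(&read_fifo);      /* rarely get here  */
                else if (!list_empty(&write_fifo))
                        rq = fifo_pop(&write_fifo);
                else
                        break;

                dispatch_to_device(dev, rq);
        }
}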

Sounds just like the bufferbloat problem in networking.
Waiting for codel for the block layer :)
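Half-seriously, a loosely CoDel-flavoured sketch for the block layer: watch
write completion ("sojourn") times and halve the permitted write depth
whenever they stay above a target for a whole interval. The constants are
borrowed from the CoDel paper, everything else is hypothetical, and real
CoDel drops packets and uses a different control law:

#define TARGET_US       5000UL          /* 5 ms, CoDel's default target */
#define INTERVAL_US     100000UL        /* 100 ms, CoDel's default interval */

struct codel_block_state {
        unsigned long first_above_us;   /* when latency first exceeded target */
        unsigned int write_limit;       /* writes allowed in flight now */
        unsigned int write_limit_max;   /* upper bound, e.g. queue depth */
};

static void codel_block_update(struct codel_block_state *cs,
                               unsigned long sojourn_us, unsigned long now_us)
{
        if (sojourn_us < TARGET_US) {
                cs->first_above_us = 0;         /* latency fine again */
                if (cs->write_limit < cs->write_limit_max)
                        cs->write_limit++;      /* recover additively */
        } else if (cs->first_above_us == 0) {
                cs->first_above_us = now_us;    /* start timing the excursion */
        } else if (now_us - cs->first_above_us > INTERVAL_US) {
                if (cs->write_limit > 1)
                        cs->write_limit /= 2;   /* persistent backlog: cut depth */
                cs->first_above_us = now_us;
        }
}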

Is there any way to have reads jump to the head of the queue in the disk controller?

Otherwise it seems like we might need to minimize use of the disk cache and do the scheduling in software.
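The nearest knobs that exist today seem to be the elevator's request count and
the SCSI device queue depth. A userspace sketch that shrinks both (sda is only
an example device, and the right values are workload-dependent):

#include <stdio.h>

static int write_sysfs(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f);
}

int main(void)
{
        /* Fewer requests queued in the elevator... */
        write_sysfs("/sys/block/sda/queue/nr_requests", "32");
        /* ...and fewer outstanding in the device/controller. */
        write_sysfs("/sys/block/sda/device/queue_depth", "4");
        return 0;
}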

Minimizing the cache usage effectively mirrors what the CoDel people are doing with tiny TX ring buffers to fight bufferbloat. The difference is that with a NIC all you have to do is make sure the buffer never runs empty and you get full speed, whereas with a disk, the more you stuff into the cache, the better it can schedule things.

Chris