Re: [PATCH] cfq: Fix starvation of async writes in presence of heavysync workload

From: Shaohua Li
Date: Mon Jun 20 2011 - 22:15:24 EST

2011/6/20 Vivek Goyal <vgoyal@xxxxxxxxxx>:
> In presence of heavy sync workload CFQ can starve async writes.
> If one launches multiple readers (say 16), then one can notice
> that CFQ can withhold dispatch of WRITEs for a very long time say
> 200 or 300 seconds.
> Basically CFQ schedules an async queue but does not dispatch any
> writes because it is waiting for existing sync requests in the queue to
> finish. While it is waiting, one or another reader gets queued up and
> preempts the async queue. So we did schedule the async queue but never
> dispatched anything from it. This can repeat for a long time, hence
> practically starving writers.
> This patch allows the async queue to dispatch at least one request once
> it gets scheduled, and denies preemption if the async queue has been
> waiting for sync requests to drain and has not been able to dispatch
> a request yet.
> One concern with this fix is how it impacts readers in the
> presence of heavy write activity.
> I did a test where I launch firefox, load a website and close
> firefox and measure the time. I ran the test 3 times and took
> average.
> - Vanilla kernel time ~= 1 minute 40 seconds
> - Patched kernel time ~= 1 minute 35 seconds
> Basically it looks like times have not changed much for this
> test. But I would not claim that it does not impact reader
> latencies at all. It might show up in other workloads.
> I think we anyway need to fix writer starvation. If this patch
> causes issues, then we need to look at reducing writer's
> queue depth further to improve latencies for readers.
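The anti-starvation rule described above could be sketched roughly as follows. This is a minimal illustration only: the struct and function names are hypothetical, not the actual CFQ data structures (the real code lives in block/cfq-iosched.c).

```c
#include <stdbool.h>

/* Illustrative per-queue state; field names are hypothetical,
 * not the real struct cfq_queue. */
struct queue_state {
    bool is_async;    /* async (buffered-write) queue? */
    bool scheduled;   /* queue has been selected for service */
    int  dispatched;  /* requests dispatched since selection */
};

/*
 * Deny preemption of an async queue that was scheduled but is still
 * waiting for in-flight sync requests to drain and has dispatched
 * nothing yet. Without this, each newly arriving reader preempts the
 * async queue before it dispatches anything, starving writes.
 */
bool allow_preempt(const struct queue_state *active)
{
    if (active->is_async && active->scheduled && active->dispatched == 0)
        return false;  /* let it dispatch at least one request first */
    return true;
}
```

Once the async queue has dispatched its first request, preemption by sync queues works as before.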
I'm afraid this can cause read latency, because cfq_dispatch_requests
doesn't check preempt: we will dispatch at least 4 requests instead of
just one. Can we add logic to force it to dispatch just one request?
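The suggestion amounts to capping the dispatch budget at one request for an async queue that was granted the anti-preemption protection, instead of the usual quantum. A minimal sketch, with a hypothetical helper name (the real path is cfq_dispatch_requests(), where cfq_quantum defaults to 4):

```c
#include <stdbool.h>

/*
 * Hypothetical dispatch-budget helper: an async queue that is being
 * protected from preemption gets to dispatch exactly one request,
 * rather than the full quantum, so readers are not held off any
 * longer than necessary to break the starvation cycle.
 */
int dispatch_budget(bool protected_async, int quantum)
{
    return protected_async ? 1 : quantum;
}
```

With a budget of one, the async queue makes forward progress but yields back to sync queues immediately afterwards.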
