Re: [patch]block: avoid building too big plug list

From: Jens Axboe
Date: Fri Jul 08 2011 - 02:17:27 EST


On 2011-07-08 03:59, Shaohua Li wrote:
> When I test a fio script with a big I/O depth, I found the total throughput drops
> compared to a relatively small I/O depth. The reason is that the thread accumulates
> big requests in its plug list, which causes some delays (surely this depends
> on CPU speed).
> I thought we'd better have a threshold for requests. When the threshold is reached,
> it means there is no request merging and queue lock contention isn't severe
> when pushing per-task requests to the queue, so the main advantages of block plugging
> don't exist. We can force a plug list flush in this case.
> With this, my test throughput actually increases and is almost equal to that of the
> small I/O depth case. Another side effect is that irq-off time decreases in
> blk_flush_plug_list() for big I/O depths.
> The BLK_MAX_REQUEST_COUNT is chosen arbitrarily, but 16 is effective at reducing
> lock contention for me. I'm open here, though; 32 is ok in my test too.

Thanks, I have wondered whether that would potentially cause an issue,
so this patch is quite fine with me; it's generally a good idea to cap it.
I'll queue it up with 16 for the max depth, which is still quite a decent
proportion of local to queued requests.

Thanks!

--
Jens Axboe
