[PATCH 1/1] Mention that I/O priorities also work on direct writes.

From: Martin Steigerwald
Date: Mon Nov 28 2011 - 10:19:23 EST


On Monday, 28 November 2011, Jens Axboe wrote:
> On 2011-11-28 15:42, Martin Steigerwald wrote:
> > Hi Jens and Vivek,
> >
> > Vivek, I cc'd you because you wrote the new cfq-iosched.txt.
> >
> >
> > In trying to understand how I/O priorities actually work, I tried
> > dd with
> >
> > rm nullen-id ; sync ; /usr/bin/time ionice -c3 dd if=/dev/zero
> > of=nullen-id count=500 bs=1M conv=fsync
> >
> > versus
> >
> > rm nullen-rl; sync ; /usr/bin/time ionice -c1 -n0 dd if=/dev/zero
> > of=nullen-rl count=500 bs=1M conv=fsync
> >
> > concurrently. No difference. At first I was puzzled, then I thought
> > direct I/O might make a difference, so I tried with oflag=direct.
> >
> > And it does.
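
(For reference, the direct I/O test was just the same pair of commands with
oflag=direct added, roughly like this; the filenames are only illustrative:

rm nullen-id ; sync ; /usr/bin/time ionice -c3 dd if=/dev/zero \
    of=nullen-id count=500 bs=1M oflag=direct conv=fsync

rm nullen-rl ; sync ; /usr/bin/time ionice -c1 -n0 dd if=/dev/zero \
    of=nullen-rl count=500 bs=1M oflag=direct conv=fsync

Run concurrently, these did show the expected difference between the two
priority classes.)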
> >
> > Then I actually read the documentation block/ioprio.txt (3.1 here):
> >> With the introduction of cfq v3 (aka cfq-ts or time sliced cfq), basic
> >> io priorities are supported for reads on files. This enables users to
> >> io nice processes or process groups, similar to what has been possible
> >> with cpu scheduling for ages. This document mainly details the current
> >> possibilities with cfq; other io schedulers do not support io priorities
> >> thus far.
> >
> > According to that, I/O priorities only work on reads. Is that correct?
> > I mean, they do work on reads, I tested that, but do they work *only*
> > on reads?
> >
> > From what I see here, they also work for direct I/O write requests.
> >
> > So what I conclude is that CFQ I/O priorities work for all requests
> > that are submitted synchronously by the issuing process, but not for
> > those submitted asynchronously, i.e. everything that goes through the
> > pagecache.
> >
> > Is that correct?
>
> Priorities work for reads AND direct writes. In other words, they do not
> work for buffered writes.
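
(Side note for anyone reproducing this: the priorities only take effect
when CFQ is the active scheduler on the device in question. Whether that is
the case, and what class and priority a process currently has, can be
checked like this; sda is just an example device name:

cat /sys/block/sda/queue/scheduler
ionice -p $$
)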
>
> > Vivek, one thing on cfq-iosched.txt: Could slice_idle=0 make sense on
> > SSDs? Later on you write that there are some SSD optimizations in
> > place that cut down idling already.
>
> It can still make a functional difference on SSDs, depending on your
> workload, even if the scope of idling is smaller on an SSD.
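
For completeness, this is how I would check and change that tunable on a
given device, assuming CFQ is the active scheduler there (sdX is a
placeholder; slice_idle only appears under iosched/ with CFQ selected, and
writing it needs root):

cat /sys/block/sdX/queue/rotational
cat /sys/block/sdX/queue/iosched/slice_idle
echo 0 > /sys/block/sdX/queue/iosched/slice_idle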