Re: performance "regression" in cfq compared to anticipatory, deadline and noop
From: Matthew
Date: Tue May 13 2008 - 15:24:26 EST
On Tue, May 13, 2008 at 8:40 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
>
> On Tue, May 13 2008, Jens Axboe wrote:
> > On Tue, May 13 2008, Matthew wrote:
> > > On Tue, May 13, 2008 at 3:05 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, May 13 2008, Matthew wrote:
> > > > > On Tue, May 13, 2008 at 2:20 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Sun, May 11 2008, Kasper Sandberg wrote:
> > > > > > > On Sun, 2008-05-11 at 14:14 +0100, Daniel J Blueman wrote:
> > > > > > > > I've been experiencing this for a while also; an almost 50% regression
> > > > > > > > is seen for single-process reads (i.e. sync) if slice_idle is 1ms or
> > > > > > > > more (e.g. the default of 8) [1], which seems phenomenal.
> > > > > > > >
> > > > > > > > Jens, is this the expected price to pay for optimal busy-spindle
> > > > > > > > scheduling, a design issue, a bug, or am I missing something totally?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Daniel
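Side note for anyone trying to reproduce Daniel's numbers: slice_idle is a
runtime CFQ tunable, so no rebuild is needed. A minimal sketch, assuming
the drive is sde and CFQ is the active scheduler:

  # see which scheduler is active and the current idle slice (in ms)
  cat /sys/block/sde/queue/scheduler
  cat /sys/block/sde/queue/iosched/slice_idle
  # disable idling, then re-run the read test
  echo 0 > /sys/block/sde/queue/iosched/slice_idle
  hdparm -t /dev/sde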
> > > > > [snip]
> > > > > ...
> > > > > [snip]
> > > > > > >
> > > [snip]
> > >
> > > ...
> > >
> > > [snip]
> > > > > well - back to topic:
> > > > >
> > > > > to run blktrace one needs to enable CONFIG_BLK_DEV_IO_TRACE, right?
> > > > > and blktrace itself can be obtained from your git repo?
> > > >
> > > > Yes on both accounts, or just grab a blktrace snapshot from:
> > > >
> > > > http://brick.kernel.dk/snaps/blktrace-git-latest.tar.gz
> > > >
> > > > if you don't use git.
> > > >
> > > > --
> > > > Jens Axboe
> > > >
> > > >
> > >
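For the archives, a minimal capture session could look like this (a sketch,
assuming the device under test is /dev/sde and debugfs is available):

  # requires CONFIG_BLK_DEV_IO_TRACE=y and root
  mount -t debugfs debugfs /sys/kernel/debug   # blktrace talks to debugfs
  blktrace -d /dev/sde -o sde -w 30 &          # trace for 30s into sde.blktrace.<cpu>
  hdparm -t /dev/sde                           # generate the read load
  wait                                         # let blktrace finish
  blkparse -i sde > sde.txt                    # decode the per-CPU binary traces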
[snip]
...
[snip]
> >
> > They seem to start out the same, but then CFQ gets interrupted by a
> > timer unplug (which is also odd) and after that the request size drops.
> > On most devices you won't notice, but some are fairly picky about
> > request sizes. The end result is that CFQ has an average dispatch
> > request size of 142KB, whereas AS is more than double that at 306KB.
> > I'll need to analyze the data and look at the code a bit more to see
> > WHY this happens.
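In case anyone wants to check those averages against their own trace: the
summary that blkparse prints at the end gives the dispatch count and total
size, so the average falls right out. Roughly (the figures below are made
up to match the 142KB average mentioned above):

  blkparse -i sde | tail -20
  # look for a line like:
  #   Read Dispatches:    1234,   175080KiB
  # average dispatch size = 175080KiB / 1234 =~ 142KB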
>
> Here's a test patch; I think we get into this situation because CFQ is
> a bit too eager to start queuing again. It's not tested yet, I still
> need to spend some testing time on it. But I'd appreciate some feedback
> on whether it changes the situation! The final patch will be a little
> more involved.
[snip]
...
[snip]
>
> --
> Jens Axboe
>
>
Unfortunately, that patch didn't help:
hdparm -t /dev/sde
/dev/sde:
Timing buffered disk reads: 178 MB in 3.03 seconds = 58.67 MB/sec
hdparm -t /dev/sdd
/dev/sdd:
Timing buffered disk reads: 164 MB in 3.00 seconds = 54.61 MB/sec
-> the first drive should be doing around 74 MB/sec, the second around 102 MB/sec
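For completeness, the numbers above can be compared across schedulers at
runtime without rebooting; a quick sketch for sde (same idea for sdd):

  for sched in noop anticipatory deadline cfq; do
      echo $sched > /sys/block/sde/queue/scheduler
      hdparm -t /dev/sde
  done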
Thanks
Mat
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/