Re: performance "regression" in cfq compared to anticipatory, deadline and noop

From: Matthew
Date: Tue May 13 2008 - 08:59:19 EST


On Tue, May 13, 2008 at 2:20 PM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
>
> On Sun, May 11 2008, Kasper Sandberg wrote:
> > On Sun, 2008-05-11 at 14:14 +0100, Daniel J Blueman wrote:
> > > I've been experiencing this for a while also; an almost 50% regression
> > > is seen for single-process reads (ie sync) if slice_idle is 1ms or
> > > more (eg default of 8) [1], which seems phenomenal.
> > >
> > > Jens, is this the expected price to pay for optimal busy-spindle
> > > scheduling, a design issue, bug or am I missing something totally?
> > >
> > > Thanks,
> > > Daniel
[snip]
...
[snip]
> >
> > This would appear to be quite a considerable performance difference.
>
> Indeed, that is of course a bug. The initial mail here mentions this as
> a regression - which kernel was the last that worked ok?
>
> If someone would send me a blktrace of such a slow run, that would be
> nice. Basically just do a blktrace /dev/sda (or whatever device) while
> doing the hdparm, preferably storing the output files on a different
> device. Then send the raw sda.blktrace.* files to me. Thanks!
>
> --
> Jens Axboe
>
>

Hi Jens,

I called this a "regression" since I wasn't sure whether it's a real bug
or just something introduced recently; I only recently started using cfq
as my main I/O scheduler, so I can't tell ...
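
In case it is useful for reproducing: the slice_idle value Daniel mentions
can be read and changed at runtime through sysfs, so no rebuild is needed
to compare settings. Roughly what I do (assuming cfq is the active
scheduler on /dev/sda - adjust the device as needed):

  # confirm cfq is the active scheduler (shown in brackets)
  cat /sys/block/sda/queue/scheduler
  # read the current idle slice in ms (default is 8)
  cat /sys/block/sda/queue/iosched/slice_idle
  # set it to 0 to disable idling, then re-run the read test
  echo 0 > /sys/block/sda/queue/iosched/slice_idle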

Testing 2.6.17 is unfortunately pretty much impossible for me (reiser4;
the hardware is too new - problems with jmicron).

google "says" that it seemingly already existed since at least 2.6.18
(Ubuntu DapperDrake) [see:
http://ubuntuforums.org/showpost.php?p=1484633&postcount=12]

well - back to topic:

For a blktrace one needs to enable CONFIG_BLK_DEV_IO_TRACE, right?
And blktrace itself can be obtained from your git repo?
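
If I have understood the procedure correctly, the capture would look
roughly like this (just my reading of your description - assuming
/dev/sda is the disk under test and /mnt/other sits on a different
device):

  # kernel built with CONFIG_BLK_DEV_IO_TRACE=y, debugfs mounted
  mount -t debugfs none /sys/kernel/debug
  # store the trace files on the other device
  cd /mnt/other
  # trace sda while the read test runs
  blktrace /dev/sda &
  hdparm -t /dev/sda
  # stop blktrace, then mail the sda.blktrace.* files
  kill -INT %1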

Thanks

Mat
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/