Re: [PATCH] io stalls (was: -rc7 Re: Linux 2.4.21-rc6)

From: Andrea Arcangeli (andrea@suse.de)
Date: Mon Jun 09 2003 - 17:19:50 EST


Hi,

On Mon, Jun 09, 2003 at 05:39:23PM -0400, Chris Mason wrote:
> Ok, there are lots of different problems here, and I've spent a little
> while trying to get some numbers with the __get_request_wait stats patch
> I posted before. This is all on ext2, since I wanted to rule out
> interactions with the journal flavors.
>
> Basically a dbench 90 run on ext2 rc6 vanilla kernels can generate
> latencies of over 2700 jiffies in __get_request_wait, with an average
> latency over 250 jiffies.
>
> No, most desktop workloads aren't dbench 90, but between balance_dirty()
> and the way we send stuff to disk during memory allocations, just about
> any process can get stuck submitting dirty buffers even if you've just
> got one process doing a dd if=/dev/zero of=foo.
>
> So, for the moment I'm going to pretend people seeing stalls in X are
> stuck in atime updates or memory allocations, or reading proc or some
> other silly spot.
>
> For the SMP corner cases, I've merged Andrea's fix-pausing patch into
> rc7, along with an altered form of Nick Piggin's queue_full patch to try
> and fix the latency problems.
>
> The major difference from Nick's patch is that once the queue is marked
> full, I don't clear the full flag until the wait queue is empty. This
> means new io can't steal available requests until every existing waiter
> has been granted a request.
>
> The latency results are better, with average time spent in
> __get_request_wait being around 28 jiffies, and a max of 170 jiffies.
> The cost is throughput; further benchmarking needs to be done, but I
> wanted to get this out for review and testing. It should at least help
> us decide if the request allocation code really is causing our problems.
>
> The patch below also includes the __get_request_wait latency stats. If
> people try this and still see stalls, please run elvtune /dev/xxxx and
> send along the resulting console output.
>
> I haven't yet compared this to Andrea's elevator latency code, but the
> stat patch was originally developed on top of his 2.4.21pre3aa1, where
> the average wait was 97 jiffies and the max was 318.
>
> Anyway, less talk, more code. Treat this with care, it has only been
> lightly tested. Thanks to Andrea and Nick whose patches this is largely
> based on:
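The allocation policy Chris describes above ("once the queue is marked full, I don't clear the full flag until the wait queue is empty") can be sketched as a small user-space simulation. This is a hypothetical, greatly simplified model, not the actual 2.4 `__get_request`/`__get_request_wait` code; the names and counters below are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the modified queue_full policy: once a task has to
 * sleep, the queue is flagged full, and the flag is cleared only
 * after the last existing waiter has been granted a request.  New
 * arrivals therefore cannot steal freed requests from older waiters. */

struct sim_queue {
	int free_requests;	/* requests still available */
	int nr_waiters;		/* tasks sleeping for a request */
	bool full;		/* set once somebody had to sleep */
};

/* Try to take a request; returns true on success, false if the
 * caller must sleep (it is then counted as a waiter). */
static bool sim_get_request(struct sim_queue *q)
{
	if (q->full || q->free_requests == 0) {
		q->nr_waiters++;
		q->full = true;
		return false;
	}
	q->free_requests--;
	return true;
}

/* A freed request is handed to one sleeping waiter.  The key
 * difference from the original queue_full patch: the full flag is
 * cleared only when the wait queue has fully drained. */
static void sim_grant_waiter(struct sim_queue *q)
{
	assert(q->nr_waiters > 0 && q->free_requests > 0);
	q->nr_waiters--;
	q->free_requests--;
	if (q->nr_waiters == 0)
		q->full = false;
}
```

With this policy a newcomer that arrives while requests are being freed still queues behind the existing waiters, which is what bounds the per-waiter latency at the cost of some throughput.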

I spent last Saturday working on this too. This is the status of my
current patches; it would be interesting to compare them. They're not
very well tested yet, though.

They would obsolete the old fix-pausing and the old elevator-lowlatency
patches (I was going to release a new tree today, but I delayed it so I
could fix uml first [tested with skas and w/o skas]).

Those back out the rc7 interactivity changes (the only one that wasn't
in my tree was the add_wait_queue_exclusive change, which IMHO would
better stay for scalability reasons).

Of course I would be very interested to know whether those two patches
(or Chris's, since you also retained the exclusive wakeup) are still
greatly improved by removing the _exclusive wakeups and going wake-all
(in theory they shouldn't be).
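The scalability argument for keeping the exclusive wakeup can be illustrated with a toy model. This is purely illustrative and not the real wait-queue code; it only models the count of scheduler wakeups under each policy, assuming one request is freed per wakeup round:

```c
/* Toy comparison of wake-all vs. exclusive wakeup on a wait queue.
 * Hypothetical model: each call corresponds to one freed request. */

struct sim_waitqueue {
	int nr_sleeping;	/* tasks sleeping on the queue */
};

/* wake-all: every sleeper is woken and races for the single freed
 * request; one wins, the rest go back to sleep.  Returns the number
 * of tasks woken (i.e. scheduled) by this round. */
static int sim_wake_all(struct sim_waitqueue *wq)
{
	int woken = wq->nr_sleeping;

	if (woken > 0)
		wq->nr_sleeping = woken - 1;	/* one winner leaves */
	return woken;
}

/* exclusive wakeup: exactly one sleeper is woken per freed request. */
static int sim_wake_exclusive(struct sim_waitqueue *wq)
{
	if (wq->nr_sleeping == 0)
		return 0;
	wq->nr_sleeping--;
	return 1;
}
```

Draining n sleepers costs n wakeups with the exclusive policy but n + (n-1) + ... + 1 wakeups with wake-all, which is the thundering-herd overhead the _exclusive variant avoids; the open question above is whether the latency behavior differs in practice.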

Andrea





-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Sun Jun 15 2003 - 22:00:22 EST