Re: [resent PATCH] Re: very slow parallel read performance

From: Daniel Phillips (phillips@bonn-fries.net)
Date: Sun Aug 26 2001 - 11:55:04 EST


On August 25, 2001 06:28 pm, pcg@goof.com (Marc A. Lehmann) wrote:
> On Sat, Aug 25, 2001 at 12:50:51PM -0300, Rik van Riel <riel@conectiva.com.br> wrote:
> > Remember NL.linux.org a few weeks back, where a difference of
> > 10 FTP users more or less was the difference between a system
> > load of 3 and a system load of 250 ? ;)
>
> OTOH, servers that use a single process or thread per connection are
> destined to fail under load ;)

There's an obvious explanation for the high loadavg people are seeing when
their systems go into thrash mode: when free memory is exhausted, every task
that fails to get a page in __alloc_pages sets PF_MEMALLOC and starts
scanning for pages to reclaim. The remaining tasks that are still running
soon also fail in __alloc_pages and jump onto the dogpile. This is a pretty
clear demonstration of why the current "self-help" strategy is flawed.
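To make the failure mode concrete, here's a minimal user-space sketch of
that pattern in pthreads terms. The names are illustrative, not kernel
code: try_alloc_page() stands in for __alloc_pages(), scan_and_reclaim()
for the PF_MEMALLOC scanning loop.

/*
 * "Self-help" dogpile sketch: every task that fails an allocation
 * starts scanning on its own behalf, so under memory pressure all
 * N workers end up runnable in the scanner at once.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

static bool try_alloc_page(void **page)
{
        *page = malloc(4096);           /* stand-in for __alloc_pages */
        return *page != NULL;
}

static void scan_and_reclaim(void)
{
        /* stand-in for the page reclaim scan */
}

static void *worker(void *arg)
{
        void *page;

        (void)arg;
        while (!try_alloc_page(&page)) {
                /*
                 * Self-help: this task joins the scan itself.  With
                 * many workers, loadavg spikes because every task
                 * blocked on memory is now busy scanning instead.
                 */
                scan_and_reclaim();
        }
        free(page);
        return NULL;
}

int main(void)
{
        pthread_t t[8];
        int i;

        for (i = 0; i < 8; i++)
                pthread_create(&t[i], NULL, worker, NULL);
        for (i = 0; i < 8; i++)
                pthread_join(t[i], NULL);
        return 0;
}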

Tasks that can't get memory should block instead, leaving one or two
threads to sort things out under predictable conditions; the blocked tasks
are then restarted as appropriate once reclaim has made progress.
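Here's the same sketch reworked along those lines, again with illustrative
user-space names rather than a proposed kernel interface: the first task to
fail becomes the lone scanner, and everyone else sleeps on a wait queue
until it finishes.

/*
 * Blocking alternative: failed allocators sleep on a condition
 * variable; a single designated task reclaims, then wakes them.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

static pthread_mutex_t memlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t mem_available = PTHREAD_COND_INITIALIZER;
static bool reclaim_in_progress;

static bool try_alloc_page(void **page)
{
        *page = malloc(4096);           /* stand-in for __alloc_pages */
        return *page != NULL;
}

static void scan_and_reclaim(void)
{
        /* one task's worth of reclaim, under predictable conditions */
}

void *alloc_page_blocking(void)
{
        void *page;

        while (!try_alloc_page(&page)) {
                pthread_mutex_lock(&memlock);
                if (!reclaim_in_progress) {
                        /* First task to fail does the scanning... */
                        reclaim_in_progress = true;
                        pthread_mutex_unlock(&memlock);
                        scan_and_reclaim();
                        pthread_mutex_lock(&memlock);
                        reclaim_in_progress = false;
                        pthread_cond_broadcast(&mem_available);
                } else {
                        /* ...everyone else blocks instead of scanning. */
                        while (reclaim_in_progress)
                                pthread_cond_wait(&mem_available, &memlock);
                }
                pthread_mutex_unlock(&memlock);
        }
        return page;
}

The point is that loadavg then reflects one scanner plus N sleepers,
instead of N+1 scanners all fighting over the same pages.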

--
Daniel


