Re: Increase dirty_ratio and dirty_background_ratio?

From: Linus Torvalds
Date: Wed Jan 07 2009 - 11:40:25 EST




On Wed, 7 Jan 2009, Peter Zijlstra wrote:
>
> > So the question is: What kind of workloads are lower limits supposed to
> > help? Desktop? Has anybody reported that they actually help? I'm asking
> > because we are probably going to increase limits to the old values for
> > SLES11 if we don't see serious negative impact on other workloads...
>
> Adding some CCs.
>
> The idea was that 40% of the memory is a _lot_ these days, and writeback
> times will be huge for those hitting sync or similar. By lowering these
> you'd smooth that out a bit.

Not just a bit. If you have 4GB of RAM (not at all unusual for even just a
regular desktop, never mind a "real" workstation), it's simply crazy to
allow 1.5GB of dirty memory. Not unless you have a really wicked RAID
system with great write performance that can push it out to disk (with
seeking) in just a few seconds.

And few people have that.
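To put rough numbers on it (assuming a single seek-bound disk managing
somewhere around 30-60MB/s of writeback, which is a hypothetical but
not unrealistic figure):

	1.5GB / 60MB/s ~= 25 seconds
	1.5GB / 30MB/s ~= 50 seconds

of solid writeback before a sync can complete.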

For a server, where throughput matters but latency generally does not, go
ahead and raise it. But please don't raise it for anything sane. The only
time it makes sense to up that percentage is for some odd special-case
benchmark that can otherwise fit the dirty data set in memory and never
syncs it (i.e. it deletes all the files after generating them).

In other words, yes, 40% dirty can make a big difference to benchmarks,
but it is almost never actually a good idea any more.

That said, the _right_ thing to do is to

(a) limit dirty memory by an absolute number of bytes (in addition to
having a percentage limit). Current -git adds support for that.

(b) scale it dynamically by your IO performance. No, current -git does
_not_ support this.

but just upping the percentage is not a good idea.
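If anyone wants to experiment with (a) on a current -git kernel, here is
a minimal userspace sketch that writes the new dirty_bytes and
dirty_background_bytes knobs (setting the *_bytes knobs makes the kernel
ignore the corresponding *_ratio percentages). The 200MB/100MB values
are purely illustrative, not a tuning recommendation:

	#include <stdio.h>
	#include <stdlib.h>

	/* Write a single value to a sysctl file under /proc/sys. */
	static int write_sysctl(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fprintf(f, "%s\n", val);
		return fclose(f);
	}

	int main(void)
	{
		/* Illustrative: 200MB hard limit, 100MB background limit. */
		if (write_sysctl("/proc/sys/vm/dirty_bytes", "209715200"))
			return EXIT_FAILURE;
		if (write_sysctl("/proc/sys/vm/dirty_background_bytes",
				 "104857600"))
			return EXIT_FAILURE;
		return EXIT_SUCCESS;
	}

The same can of course be done with a plain echo from a root shell; the
point is just that the limit is then expressed in bytes rather than as a
percentage of however much RAM the machine happens to have.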

Linus