Peter Chubb wrote:
>
>Some benchmarking done at Berkeley showed that for development loads,
>30 seconds was too short to avoid excessive writes.
>
>See Roselli, Lorch, and Anderson, "A Comparison of File System
>Workloads", in USENIX 2000.
>
>http://research.microsoft.com/~lorch/papers/fs-workloads/fs-workloads.html
>
>Their observations (summarised) were that most blocks die because of
>overwriting, not because of file deletes. Their workloads show that
>for NT, the write timeout needed to avoid committing blocks that will
>soon become dead is around a day; for typical Unix loads (web
>serving, research, software development), an hour is enough. To catch
>75%, a timeout of around 11 minutes is needed. 30 seconds worked only
>for web serving and undergraduate teaching workloads, and caught
>around 40% of rewrites there; for a research workload and NT file
>serving, 30 seconds catches only 10-20% of the rewrites.
>
>See especially figure 3 in that paper.
>
There is also a longer PhD thesis by Roselli. Ten minutes is about as
much work as I am personally willing to lose and have to redo from
memory. Avoiding 75% of writes instead of 20% is a substantial
performance gain, worth paying a cost for. Unfortunately it is not
easy to say whether it is worth that much cost, but I suspect it is.
An approach we are exploring is to let blocks reach disk earlier than
that when the device is not congested, on the grounds that if not much
IO is occurring, performance does not matter.
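
Roughly, the decision looks something like the sketch below. This is
only an illustration: device_congested() and the timeout constants are
made up for the example, not actual reiser4 or kernel code.

#include <stdbool.h>
#include <time.h>

#define EARLY_TIMEOUT_SECS   30         /* flush soon when disk is idle */
#define NORMAL_TIMEOUT_SECS  (10 * 60)  /* hold ~10 minutes when busy */

/* Hypothetical query: is the device's request queue busy right now? */
extern bool device_congested(void);

/*
 * Decide whether a dirty block should be written out.  When the
 * device is idle the write is nearly free, so flush early; when it
 * is busy, hold the block for the long timeout in the hope that it
 * is overwritten and dies before it ever has to be written.
 */
static bool should_flush(time_t dirtied_at, time_t now)
{
        time_t age = now - dirtied_at;

        if (!device_congested())
                return age >= EARLY_TIMEOUT_SECS;

        return age >= NORMAL_TIMEOUT_SECS;
}

The point is simply that the long timeout is only paid for when
delaying a write can actually save one.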
Hans