Re: multiply files in one (was GNU/Linux stance by Richard Stallman)

Stephen C. Tweedie (sct@redhat.com)
Thu, 1 Apr 1999 01:33:42 +0100 (BST)


Hi,

On Tue, 30 Mar 1999 06:11:36 -0700, yodaiken@chelm.cs.nmt.edu said:

>> - a modest read-ahead (hundreds of kBytes) of the inode blocks will
>> halve the number of seeks, and will make the remaining seeks
>> generally less expensive (no need to seek between two different
>> areas on the disc)

> Only if the directory contents are grouped together on the disk.

This is an important point. Sequential file accesses are normally
efficient, but that is precisely because the files are usually written
out all at once, and the filesystem can easily be optimised to lay
large sequential writes out contiguously on disc.
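
To illustrate the seek-ordering point from the quote above, here is a
rough user-space sketch (nothing ext2 does internally; the fixed-size
buffers and thin error handling are just to keep it short) which stats
a directory's entries in inode-number order, so the inode-table reads
run mostly front to back instead of bouncing all over the disc:

/* Hypothetical illustration only: stat a directory's entries in inode
 * order, so the inode-table reads are roughly sequential rather than
 * one long seek per file. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

struct entry {
    ino_t ino;
    char name[256];
};

static int by_inode(const void *a, const void *b)
{
    const struct entry *x = a, *y = b;
    return (x->ino > y->ino) - (x->ino < y->ino);
}

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : ".";
    DIR *d = opendir(dir);
    struct dirent *de;
    struct entry *list = NULL;
    size_t n = 0, max = 0, i;

    if (!d)
        return 1;
    while ((de = readdir(d)) != NULL) {
        if (n == max) {
            max = max ? max * 2 : 64;
            list = realloc(list, max * sizeof(*list));
            if (!list)
                return 1;
        }
        list[n].ino = de->d_ino;
        strncpy(list[n].name, de->d_name, sizeof(list[n].name) - 1);
        list[n].name[sizeof(list[n].name) - 1] = '\0';
        n++;
    }
    closedir(d);

    /* Sorting by inode number turns a random walk over the inode
     * table into something close to a single forward pass. */
    qsort(list, n, sizeof(*list), by_inode);

    for (i = 0; i < n; i++) {
        char path[4352];
        struct stat st;
        snprintf(path, sizeof(path), "%s/%s", dir, list[i].name);
        if (stat(path, &st) == 0)
            printf("%8lu  %s\n", (unsigned long)st.st_size, list[i].name);
    }
    free(list);
    return 0;
}

Once the accesses are roughly sorted like this, a modest read-ahead of
the inode blocks has a real chance of paying off, because most of the
stat()s hit blocks which are already in the cache.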

It is much harder to make good fragmentation guarantees for files which
are written out slowly over time, even if each write is sequential.
Fortunately, we can easily sidestep this by extending the file in large
chunks, and, if necessary, by rewriting the whole thing to a new
location when that will reduce fragmentation.
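
As a minimal sketch of what chunked extension looks like from user
space (posix_fallocate() and the arbitrary 1MB chunk size are just
convenient stand-ins here; the kernel would do this inside the block
allocator, not via these calls):

/* Illustrative sketch: append small records to a file, but reserve
 * space a whole chunk at a time so the allocator gets one large
 * request instead of a block-sized one per write. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)   /* illustrative: extend the file 1MB at a time */

static off_t reserved;        /* how far we have preallocated so far */

static int append_chunked(int fd, const char *buf, size_t len, off_t *offset)
{
    off_t end = *offset + (off_t)len;
    ssize_t n;

    if (end > reserved) {
        off_t want = ((end + CHUNK - 1) / CHUNK) * CHUNK;
        if (posix_fallocate(fd, reserved, want - reserved) != 0)
            return -1;
        reserved = want;
    }

    n = pwrite(fd, buf, len, *offset);
    if (n < 0 || (size_t)n != len)
        return -1;
    *offset = end;
    return 0;
}

int main(void)
{
    const char msg[] = "one small record written now and then\n";
    int fd = open("slowly-grown.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
    off_t off = 0;
    int i;

    if (fd < 0)
        return 1;
    for (i = 0; i < 10000; i++)
        if (append_chunked(fd, msg, sizeof(msg) - 1, &off) != 0)
            return 1;
    /* Give back the unused tail of the last chunk. */
    if (ftruncate(fd, off) != 0)
        return 1;
    return close(fd) != 0;
}

Reserving the space in one large request gives the allocator a decent
chance of handing back a contiguous run of blocks, and the final
ftruncate() returns whatever the last chunk did not use.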

There are serious tradeoffs involved if we want to do this
automatically every time we write a small file, but having a lazy
repacker for small files could undoubtedly benefit things greatly.
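
As a rough sketch of what such a repacker might do per file (the 64kB
"small" threshold, the temporary-file naming and the complete lack of
locking are all illustrative assumptions, not a design), the idea is
simply to read the file in one pass, rewrite it with a single large
write so the new copy gets a contiguous allocation, and rename it over
the original:

/* Illustrative user-space repack of one small file. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

#define SMALL_LIMIT (64 * 1024)   /* illustrative "small file" cutoff */

static int repack_small_file(const char *path)
{
    struct stat st;
    char tmp[4096];
    char *buf;
    int in, out;

    if (stat(path, &st) != 0 || !S_ISREG(st.st_mode) || st.st_size > SMALL_LIMIT)
        return -1;                 /* only bother with small regular files */

    in = open(path, O_RDONLY);
    if (in < 0)
        return -1;
    buf = malloc(st.st_size);
    if (!buf || read(in, buf, st.st_size) != st.st_size) {
        close(in);
        free(buf);
        return -1;
    }
    close(in);

    snprintf(tmp, sizeof(tmp), "%s.repack", path);
    out = open(tmp, O_WRONLY | O_CREAT | O_EXCL, st.st_mode & 0777);
    if (out < 0) {
        free(buf);
        return -1;
    }
    /* One large write gives the allocator the best chance of placing
     * the data contiguously. */
    if (write(out, buf, st.st_size) != st.st_size || fsync(out) != 0) {
        close(out);
        unlink(tmp);
        free(buf);
        return -1;
    }
    close(out);
    free(buf);

    /* Atomically replace the fragmented original. */
    return rename(tmp, path);
}

int main(int argc, char **argv)
{
    return (argc > 1 && repack_small_file(argv[1]) == 0) ? 0 : 1;
}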

--Stephen
