Re: ext2fs "performace"

Stephen C. Tweedie (sct@dcs.ed.ac.uk)
Thu, 27 Jun 1996 11:24:50 +0100


Hi,

In article <24847.835294404@drax.isi.edu>, Craig Milo Rogers
<rogers@isi.edu> writes:

>> A 1 GB file on a 1k block ext2 filesystem will have 4096 indirect
>> blocks and a few dindirect blocks. Deleting the file will involve
>> essentially doing a random-access seek and read of each of these
>> blocks, so if it takes 100 seconds you are getting over 40 seeks/reads
>> per second.

> I notice you said "random-access" seeks. Do you think it
> would be worthwhile to sort the inode and/or dindirect entries to make
> the seeks more sequential in nature?

They are already optimally placed. The seeks are not truly random:
the reads are sequential, but not contiguous, and that's the killer.
Each 1k indirect block holds 256 four-byte block pointers, so we're
effectively reading every 257th block from the disk.
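The arithmetic behind those numbers can be sketched out (this is an illustrative back-of-the-envelope calculation, not code from the mail; the 1k block size and 4-byte pointer size are the ext2 parameters discussed above):

```python
# Block counts for a 1 GB file on a 1k-block ext2 filesystem.
block_size = 1024                          # bytes per block
ptr_size = 4                               # bytes per block pointer
ptrs_per_block = block_size // ptr_size    # 256 pointers per indirect block

file_size = 1 << 30                        # 1 GB
data_blocks = file_size // block_size      # 1,048,576 data blocks

# Each indirect block maps 256 data blocks:
indirect_blocks = data_blocks // ptrs_per_block       # 4096
# Each doubly-indirect block maps 256 indirect blocks:
dindirect_blocks = indirect_blocks // ptrs_per_block  # 16

print(indirect_blocks)   # 4096 indirect blocks, as stated above
print(dindirect_blocks)  # a handful of doubly-indirect blocks

# One indirect block per 256 data blocks means metadata sits at
# roughly every 257th on-disk block: 256 data + 1 indirect.
```

Deleting the file touches all 4096 indirect blocks, so 100 seconds for the delete works out to the ~40 seeks/reads per second quoted above.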

We could try to keep these blocks together, but we'd have to know in
advance how long the file was going to be, and we'd lose performance
on sequential file reads and writes. The current layout is by far the
best overall option.

Cheers,
Stephen.

--
Stephen Tweedie <sct@dcs.ed.ac.uk>
Department of Computer Science, Edinburgh University, Scotland.