Re: atomicity [offtopic?]

Stephane Belmon (sbelmon@cs.ucsd.edu)
Sun, 6 Dec 1998 11:16:37 -0800 (PST)


On Sat, 5 Dec 1998, Tim Smith wrote:

>
> On Sat, 5 Dec 1998, Alan Cox wrote:
> > With ext2fs you should never need a defragmenter
> while file system not full
> create random small files
[...]
> Are there any file systems around that will manage to resist fragmentation
> if subjected to that?

Yes: a log-based FS should be immune to what you describe. Or rather,
that workload doesn't make much difference to these FSs: no matter
what, every write goes at the end of the log.
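
To make that concrete, here is a minimal sketch in C of that placement
policy (names like log_append are made up; this is not any real log
FS): every write, for every file, lands at the current head of the
log, so on-disk placement never depends on which blocks happen to be
free.

#include <stdio.h>
#include <string.h>

#define LOG_SIZE 4096

static char disk_log[LOG_SIZE]; /* the "disk": one contiguous log */
static size_t log_head;         /* next free byte: always the log's end */

/* Append data for (inode, data) at the log head and return where it
 * landed.  A real log FS would also update an inode map to point at
 * the new location; that part is omitted. */
static long log_append(int inode, const void *data, size_t len)
{
    (void)inode;                /* placement ignores which file it is */
    if (log_head + len > LOG_SIZE)
        return -1;              /* a real FS would clean/checkpoint here */
    memcpy(disk_log + log_head, data, len);
    log_head += len;
    return (long)(log_head - len);
}

int main(void)
{
    char buf[64] = "small file data";
    int i;

    /* Interleaved writes to three different "files" still land
     * back-to-back in the log: no free-block hunting, no holes. */
    for (i = 0; i < 8; i++)
        printf("inode %d -> log offset %ld\n",
               i % 3, log_append(i % 3, buf, sizeof(buf)));
    return 0;
}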

Not that I think log FSs are such a great idea for traditional uses.
The assumptions are different: the cache is supposed to be so big that
you almost never (as in: never ever) read; you need writes at the speed
of the raw disk; everything needs to be committed immediately; and
you're more or less assuming a "flow of transactions" workload. I don't
know about you, but that doesn't describe _my_ needs at all. I need
fast reads, because the process I'm interested in is frozen during a
read (there is no read equivalent of write-behind); writes can be
delayed a couple of seconds; and my average write load, smoothed over
10 seconds or so, is tiny.

Usual file systems, which correlate location on disk with files, will
all fragment under what you describe. The trick is to never let the FS
fill up _completely_. That's what Alan is referring to when he says you
should never need a defragmenter: a decent FS (ext2) that never goes
beyond 90% utilization doesn't fragment much.
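
If you want to measure how fragmented a file actually got, something
like the sketch below works on Linux: FIBMAP and FIGETBSZ are real
ioctls (FIBMAP needs root), and counting discontiguous runs of
physical blocks gives a rough extent count.  Error handling is minimal
and sparse files (holes map to block 0) aren't special-cased.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>           /* FIBMAP, FIGETBSZ */

int main(int argc, char **argv)
{
    int fd, bsz, blk, prev = -2, extents = 0;
    long i, nblocks;
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, FIGETBSZ, &bsz) < 0 || fstat(fd, &st) < 0) {
        perror("FIGETBSZ/fstat"); return 1;
    }

    nblocks = (st.st_size + bsz - 1) / bsz;
    for (i = 0; i < nblocks; i++) {
        blk = (int)i;           /* in: logical block; out: physical */
        if (ioctl(fd, FIBMAP, &blk) < 0) { perror("FIBMAP"); return 1; }
        if (blk != prev + 1)    /* a jump on disk starts a new extent */
            extents++;
        prev = blk;
    }
    printf("%s: %ld blocks in %d extent(s)\n", argv[1], nblocks, extents);
    close(fd);
    return 0;
}

A file that stays in one extent reads at full disk speed; the more
extents, the more seeks.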

--
Stephane Belmon <sbelmon@cse.ucsd.edu>
University of California, San Diego
Computer Science and Engineering Department
