Re: fsync on large files

Pavel Machek (pavel@bug.ucw.cz)
Fri, 19 Feb 1999 00:05:19 +0100


Hi!

> > >> About the *only* problem with ext2 right now is the long fsck times.
> > >No. The fact that you can lose data after a crash is. Long fsck
> > >times are a secondary concern; data integrity is the *primary*
> > >concern.
> > Journaling doesn't help here: it essentially only ensures metadata
> > consistency; it doesn't guarantee that actual file contents are
> > "up to date" (for example, part of a write might be visible, but
> > not necessarily all of it).
>
> Yes, I should have been clearer. I realize we can't ensure application
> level consistency from the OS level; what I'm mostly concerned about is
> filesystem consistency. I've lost count of the times I've had to poke
> around lost+found to recover the filesystem layout, or the times e2fsck
> has required manual intervention to complete its checking. :(
>
> The 2-hour e2fsck runs are purely a secondary concern, but they're
> still annoying.

The long fsck times are not a secondary concern everywhere, though. At
school there is an UltraSPARC running Linux which is dodgy (it runs an
experimental SMP kernel, it gets rather high load, it has far too many
disks, Linux's SCSI layer has problems with error recovery, etc.,
etc.) and which has 150G (do I remember correctly?) of disks.

It is an FTP mirror, and on an FTP mirror you don't care much about
losing data: if files are lost, they can simply be fetched again. But
the fsck times are killing that machine.

> It might be interesting to have a kernel level transaction facility
> available to applications...

I might be saying something very stupid, but isn't fsync() one way to
ensure consistency? Yes, its speed sucks...
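
For what it's worth, here is a minimal sketch of the classic
fsync()-based update pattern (the function name and error handling
are mine, just an illustration, not anything from this thread): write
the new contents to a temporary file, fsync() it, then rename() it
over the old file. rename() is atomic within one filesystem, so after
a crash you see either the complete old file or the complete new one,
never a half-written mixture.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int atomic_replace(const char *path, const char *data, size_t len)
{
	char tmp[1024];
	int fd;

	snprintf(tmp, sizeof(tmp), "%s.tmp", path);

	fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;

	if (write(fd, data, len) != (ssize_t) len)
		goto fail;

	/*
	 * Force the new contents to disk before the rename; this
	 * synchronous flush is the slow part.
	 */
	if (fsync(fd) < 0)
		goto fail;
	close(fd);

	/*
	 * Atomic within one filesystem: readers see the old file or
	 * the new one, never a mixture.
	 */
	if (rename(tmp, path) < 0) {
		unlink(tmp);
		return -1;
	}
	return 0;

fail:
	close(fd);
	unlink(tmp);
	return -1;
}

For full durability you would probably also want to fsync() the
directory containing the file after the rename, and of course every
update pays for a synchronous flush, which is exactly the speed
problem.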

Pavel

-- 
I'm really pavel@atrey.karlin.mff.cuni.cz. 	   Pavel
Look at http://atrey.karlin.mff.cuni.cz/~pavel/ ;-).
