Re: fsync on large files

jens@pinguin.conetix.de
Sun, 14 Feb 1999 20:31:03 +0100


On Mon, Feb 08, 1999 at 04:57:40PM +0000, Stephen C. Tweedie wrote:

>> Might this be related to my Linux 2.1.131ac10 needing >10sec for deleting
>> a ~700 MB file on a 4.3GB ext2fs partition?
>> (during which this partition is blocked - even "ls" stops)
> No, that's unrelated to fsync. At least part of the rm behaviour is
> simply due to the huge amount of random-access data which the kernel
> needs to read in during the delete: on a 1K filesystem there is one
> block of pointer indirection information for every 256 blocks of file

this was a 4k block partition. I am thinking of making an md (striping)
partition out of it; will this help? Or are reiserfs or LVM better in this
respect?

> content, so deleting a 700MB file has to complete nearly 3000 read IOs
> with a seek between each one just to locate all of the file's data. On
> any Unix-style filesystem, that will be expensive.
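Just to sanity-check those numbers against my 4k partition (rough
back-of-the-envelope only, and assuming 4-byte block pointers, so a 4k
indirect block maps 1024 data blocks vs. 256 for 1k blocks, ignoring the
double/triple indirect overhead):

    # ~700 MB file, 1k blocks: data blocks / pointers per indirect block
    echo $(( (700 * 1024 * 1024 / 1024) / 256  ))   # -> 2800, close to the ~3000 reads quoted
    # same file on 4k blocks
    echo $(( (700 * 1024 * 1024 / 4096) / 1024 ))   # -> 175 indirect blocks

So with 4k blocks the delete should need far fewer of those seeky reads, if
I am counting right, which makes the >10 seconds even more surprising.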

Hm. Can this be done in the background with low priority? I do all my
rm'ing with "&" for now, but can this be automated?

> Planned for 2.3's btree extensions for ext2 is the ability to compress
> the indirection information for continuous runs of blocks on disk, which
> will substantially improve the delete performance for these large files.

Well, I think it's a little too early to start thinking about 2.3 - we
haven't even got everyone addicted to 2.2 yet :)

Well, are projects like LVM or reiserfs (or md, for that matter) stable
enough to try on 'production-like' machines? This one is a server for the
home LAN and a desktop workstation for my personal use.

-- 
_ciao, Jens_______________________________ http://www.pinguin.conetix.de
    cat /dev/boiler/water | tea | sieve > /cup
    mount -t hdev /dev/human/mouth01 /mouth ; cat /cup >/mouth/gulp
