Re: blocksize > 4K in ext2 ?

Sieger Ralf AI (sieger@alpha.fh-furtwangen.de)
Wed, 20 May 1998 22:43:58 +0200 (MET DST)


On Wed, 20 May 1998, Peter Monta wrote:

> Gerhard Mack writes:
>
> > This brings to mind a question I asked myself the last time I saw a
> > thread along these lines: why doesn't someone make a filesystem just
> > for the cases that need large files? It seems to me there are quite a
> > few people who need it for various reasons. This would, imho, also
> > settle the speed-loss argument, since only the people who need the
> > larger files would be using it.
>
> Yes, I'd find this useful too. Ideally the granularity would be
> configurable up to at least 4 MB, so that a certain I/O bandwidth could
> be guaranteed even with a worst-case seek after every block transferred.
> Making the buffer cache optional would be great, too.
I am currently developing a filesystem which supports, among other things,
fragmentation. For better throughput I therefore want to use the largest
possible block size, so if someone has a solution to this limit I would
cheer. I'm not deep enough into the page functions to want to modify that
code myself.
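For context on the limit being discussed: ext2 records its block size in the superblock as a shift count (block size = 1024 << s_log_block_size), but the 2.x buffer cache cannot handle blocks larger than PAGE_SIZE, which is where the 4K ceiling on i386 comes from. A minimal sketch of that arithmetic (PAGE_SIZE_I386 is an assumed illustration constant, not a kernel identifier):

```c
/* Sketch of the 4K ceiling under discussion, assuming i386 Linux 2.x:
 * ext2 derives its block size from the superblock's s_log_block_size
 * field, but the buffer cache rejects blocks larger than the page size. */
#define EXT2_MIN_BLOCK_SIZE 1024
#define PAGE_SIZE_I386      4096   /* assumed target architecture */

/* Block size as stored on disk: 1024 shifted left by s_log_block_size. */
static long ext2_block_size(int s_log_block_size)
{
    return (long)EXT2_MIN_BLOCK_SIZE << s_log_block_size;
}

/* The buffer-cache constraint the thread is complaining about. */
static int block_size_usable(long block_bytes)
{
    return block_bytes <= PAGE_SIZE_I386;
}
```

So s_log_block_size values 0..2 (1K, 2K, 4K) are usable on i386, while anything larger trips over the page-size limit unless the page/buffer code is changed.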

Ralf

-------------------- *** --- *** --- *** --- *** -----------------------------
sieger@alpha.fh-furtwangen.de
