> On Wed, 20 May 1998, Rik van Riel wrote:
> > On Wed, 20 May 1998, Chris Wedgwood wrote:
> > > On Wed, May 20, 1998 at 08:58:48AM +0200, Harald Koenig wrote:
> > > > On May 19, Rik van Riel wrote:
> > > > > Currently, the buffercache only handles buffers up to 4k in
> > > > > size and the memory allocation algorithm doesn't handle
> > > > > the handing out of larger (>1 page) chunks very well.
> > > >
> > > > Then would it be possible to use 8k ext2 on AXP, as AXP uses 8k pages?
> > >
> > > or 32k on the ARM?
> > I don't think the buffer and memory subsystems will have
> > any difficulties handling this. (could be wrong about
> > the buffers though ;)
> > Then 'all that needs to be done' is adapting part of
> > ext2fs...
> This brings to mind a question I asked myself the last time I saw a
> thread along these lines. Why doesn't someone make a fs just for the
> cases that need large files? It seems to me there are quite a few
> people who need it for various reasons. IMHO this would also settle the
> speed-loss argument, since only the people who need the larger files
> would be using it.
OK, I'm currently working on a 64-bit fs, which at the moment is slightly
(10-20%) slower than ext2 but has a lot of other advantages. The only
problem is the 2GB limit imposed by a 32-bit integer on x86.
Well, I understand that we can't move to 8-byte off_t types across the
board, because an old i386 would go down in pain. But is there no way to
use such an 8-byte value when accessing such large files?
/ | ' \ Yup that's what I like ....
( ) 0 __
\_/-, ,----' | |
==== | |
/ \-'~; ___| |
/ __/~| / |
=( _____| (_________|
-------------------- *** --- *** --- *** --- *** -----------------------------
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com