Re: blocksize > 4K in ext2 ?

John G. Alvord (jalvo@cloud9.net)
Thu, 21 May 1998 03:43:06 GMT


On Wed, 20 May 1998 11:50:20 -0700 (PDT), Gerhard Mack <gmack@imag.net>
wrote:

>On Wed, 20 May 1998, Rik van Riel wrote:
>
>> On Wed, 20 May 1998, Chris Wedgwood wrote:
>> > On Wed, May 20, 1998 at 08:58:48AM +0200, Harald Koenig wrote:
>> > > On May 19, Rik van Riel wrote:
>> > > > Currently, the buffercache only handles buffers up to 4k in
>> > > > size and the memory allocation algorithm doesn't handle
>> > > > handing out larger (>1 page) chunks very well.
>> > >
>> > > Then, would it be possible to use 8k ext2 on AXP, as AXP uses 8k pages?
>> >
>> > or 32k on the ARM?
>>
>> I don't think the buffer and memory subsystems will have
>> any difficulties handling this. (could be wrong about
>> the buffers though ;)
>>
>> Then 'all that needs to be done' is adapting part of
>> ext2fs...
>>
>This brings to mind a question I asked myself last time I saw a thread
>along this line. Why doesn't someone make a fs just for cases that need
>large files? It seems to me there are quite a few people who need it for
>various reasons. This would imho settle the speed-loss argument, since
>only the people who need the larger files would be using it.

I remember someone posting about a rough fs just for large files like
that, maybe 3-4 months ago. He had a big collection of 200-400 MB files
that needed some processing.
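
For anyone who wants to experiment: the ceiling Rik describes is the
hardware page size, which you can check from userspace. A minimal
sketch, assuming only a POSIX libc -- nothing here is ext2-specific:

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          /* sysconf(_SC_PAGESIZE) reports the hardware page size:
           * 4k on i386, 8k on AXP.  With the current buffer cache
           * that is also the largest block size a filesystem could
           * sensibly use. */
          long pagesize = sysconf(_SC_PAGESIZE);
          printf("page size: %ld bytes\n", pagesize);
          return 0;
  }

On an AXP that prints 8192; mke2fs already takes -b to set the block
size, so "mke2fs -b 8192" would be the obvious thing to try once the
ext2 side is adapted -- whether the kernel will mount the result is
exactly the open question above.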

john alvord
