On Monday 16 July 2001 21:16, Hans Reiser wrote:
> Jussi Laako wrote:
> > Daniel Phillips wrote:
> > > We are not that far away from being able to handle 8K blocks, so
> > > that would bump it up to 32 TB.
> >
> > That's way too small. Something like 32 PB would be better... ;)
> > We need at least one extra bit in volume/file size every year.
>
> Daniel, if I was real sure that 64k blocks were the right answer, I
> would agree with you. I think nobody knows what will happen with
> reiserfs if we go to 64k blocks.
For 32 bit block numbers:
Logical Block Size Largest Volume
------------------ --------------
4K 16 TB
8K 32 TB
16K 64 TB
32K 128 TB
64K 256 TB
You don't have to go to the extreme of a 64K block size to get big
volumes. Anyway, with tailmerging there isn't really a downside to big
blocks, assuming the tailmerging code is fairly mature and efficient.
Maybe that's where we're still guessing?
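The table above is just 2^32 block numbers times the logical block
size (binary units); a quick sketch to reproduce it:

```python
# Largest addressable volume with 32-bit block numbers:
# 2**32 blocks times the logical block size (binary units).

def max_volume_bytes(block_size: int) -> int:
    """Return the largest volume size for 32-bit block numbers."""
    return block_size * 2**32

TB = 2**40
for kib in (4, 8, 16, 32, 64):
    size = max_volume_bytes(kib * 1024)
    print(f"{kib:>2}K -> {size // TB} TB")   # 4K -> 16 TB ... 64K -> 256 TB
```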
> It could be great. On the other
> hand, the average number of bytes memcopied with every small file
> insertion increases with node size. Scalable integers (Xanadu
> project idea in which the last bit of an integer indicates whether
> the integer is longer than the base size by an amount equal to the
> base size, chain can be infinitely long, they used a base size of 1
> byte, but we could use a base size of 32 bits, and limit it to 64
> bits rather than allowing infinite scaling) seem like more
> conservative coding.
Yes, I've used similar things in the past, but only in serialized
structures. In a fixed-size field it doesn't make a lot of sense.
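For the curious, a sketch of the scalable-integer scheme Hans
describes, with a 32-bit base word capped at 64 bits total. The exact
bit layout (extension flag in the low bit, so the short form carries
31 payload bits and the long form 63) is an assumption made for
illustration, not anything from Xanadu or reiserfs:

```python
# Scalable integer sketch: a 32-bit base word whose low bit flags
# whether a second 32-bit word follows, capping the chain at 64 bits.
# Layout (assumed): short form = 31 payload bits, long form = 63.

def scalint_encode(val: int) -> list[int]:
    """Encode val (< 2**63) into one or two 32-bit words."""
    if val < 2**31:
        return [val << 1]                       # flag bit 0: short form
    if val >= 2**63:
        raise ValueError("value exceeds 63 payload bits")
    return [((val << 1) | 1) & 0xFFFFFFFF,      # flag bit 1: extended
            val >> 31]                          # high 32 bits

def scalint_decode(words: list[int]) -> int:
    """Decode one or two 32-bit words back into the original value."""
    val = words[0] >> 1                         # low 31 payload bits
    if words[0] & 1:                            # extension flag set
        val |= words[1] << 31
    return val
```

Small values stay in one word, so the common case pays no size
penalty, which is the appeal over a flat 64-bit field.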
-- Daniel