Re: Transparent compression in the FS

From: Andreas Dilger
Date: Wed Oct 15 2003 - 12:40:28 EST


On Oct 15, 2003 13:19 -0400, Richard B. Johnson wrote:
> On Wed, 15 Oct 2003, Nikita Danilov wrote:
> > It could not if block-level compression is used. Which is the only
> > solution, given random-access to file bodies.
>
> Then the degenerate case is no compression at all. There is no
> advantage to writing a block that is 1/N full of compressed data.
> You end up having to write the whole block anyway.

In the ext2 compression code, they compress clusters of (say) 8 source
blocks into (hopefully) some smaller number of compressed blocks. Yes,
there is still a minimum block size, but you can save a reasonable
fraction of the total space (e.g. compressing 8 blocks down to 4.5
blocks' worth of data still needs 5 on-disk blocks, saving 3/8 = 37.5%,
although not the full 50%). Compression gets more efficient the more
source blocks you group together, although the "damage area" grows in
case of an error.
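For reference, the block-rounding arithmetic works out roughly like the
sketch below (illustrative C only, not the actual e2compr code; the
cluster size, BLOCK_SIZE and all names here are made up for the
example):

#include <stdio.h>

#define BLOCK_SIZE 4096	/* assumed filesystem block size */

/* blocks needed to store a compressed payload, rounded up to whole
 * blocks -- the block is still the minimum allocation unit */
static unsigned int blocks_needed(unsigned int compressed_bytes)
{
	return (compressed_bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
}

int main(void)
{
	unsigned int cluster_blocks = 8;	/* 8-block cluster */
	unsigned int compressed_bytes = BLOCK_SIZE * 9 / 2;	/* ~4.5 blocks */
	unsigned int used = blocks_needed(compressed_bytes);
	unsigned int saved = cluster_blocks - used;

	printf("cluster %u blocks -> %u on disk, saved %u (%.1f%%)\n",
	       cluster_blocks, used, saved,
	       100.0 * saved / cluster_blocks);
	return 0;
}

which prints "cluster 8 blocks -> 5 on disk, saved 3 (37.5%)" for the
4.5-block example above.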

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
