...
Actually (see my reply to Timothy Miller) I really want to do "compression" even if it does not reduce space: it is a matter of growing the per-bit entropy rather than of gaining space (see http://jsam.sourceforge.net). Moreover, I do not want to use sophisticated algorithms, because I need to be able to compute plain-text random distributions that ensure the compressed distributions will be uniform, which is very difficult with, e.g., zlib; in particular, having any kind of "meta-data", "signatures" or "dictionary" is a no-go for me. See details at the end of this post.
...
> A while ago I started working on a proof of concept kind of thing, that was
> a network block device server that compressed the data sent to it.

Would it be possible for you to point me to the relevant material?
...
> 2 - The compression layer should report a large block size upwards, and use
> a little block size downwards, so that compression is as efficient as
> possible. Good results are obtained with a 32kB / 512 byte ratio. This can
> cause extra read-modify-write cycles upwards.

I failed to understand; could you provide me with more details, please?
...

As I said earlier, my point is definitely not to gain space, but to grow the "per-bit entropy". I really want to encode my data even if this grows its length, as is done in http://jsam.sourceforge.net . My final goal is the following: for each plain block, first draw a chunk of random bytes, then compress the random bytes followed by the plain data with a dynamic Huffman encoding. The random bytes are _not_ drawn uniformly, but rather so that the distribution over Huffman trees (and thus over encodings) is uniform. This ensures (?) that an attacker really has no option other than brute force to decipher the data: each and every key is possible and, more precisely, each and every key is equiprobable.
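To make that concrete, here is a rough Python sketch of the per-block pipeline I have in mind. It is only a sketch: the names encode_block and salt_len are mine (not jsam's), it uses a simple two-pass per-block Huffman code as a stand-in for a truly dynamic coder, and it draws the random prefix uniformly with os.urandom, whereas the real scheme would bias that draw so that the distribution over the resulting trees is uniform.

import heapq
import os
from collections import Counter

def build_huffman_code(data):
    """Build a Huffman code (byte -> bit string) from the frequencies in data."""
    freq = Counter(data)
    # Heap entries are (frequency, tie_breaker, tree); a tree is either a
    # leaf byte value or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol block
        return {heap[0][2]: "0"}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        tie += 1
        heapq.heappush(heap, (f1 + f2, tie, (t1, t2)))
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

def encode_block(plain, salt_len=32):
    """Prepend salt_len random bytes to the block, then Huffman-code the whole
    thing, so that the random prefix perturbs the code tree of every block."""
    salt = os.urandom(salt_len)  # uniform draw here; the scheme described
                                 # above would bias this so that the *tree*
                                 # distribution, not the byte distribution,
                                 # is uniform
    block = salt + plain
    code = build_huffman_code(block)
    bits = "".join(code[b] for b in block)
    bits += "0" * (-len(bits) % 8)          # pad to a whole number of bytes
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

Note that a two-pass code like this one would have to ship the tree (or the byte counts) alongside every block to be decodable; a dynamic Huffman coder rebuilds the tree on the fly on both sides, which is what makes the no-meta-data constraint workable.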