On Fri, 20 June 2003 15:09:57 -0400, Jeff Garzik wrote:
>
> The big question is whether the bzip2 better compression is actually
> useful in a kernel context? Patches to do bzip2 for initrd, for
> example, have been around for ages:
>
> http://gtf.org/garzik/kernel/files/initrd-bzip2-2.2.13-2.patch.gz
>
> But the compression and decompression overhead is _much_ larger
> than gzip's. It was so huge at maximal compression that dialing
> back the compression level reached a point of diminishing returns
> rather quickly, when compared to gzip's memory usage and
> compression ratio.
>
> I talked a bit with the bzip2 author a while ago about memory usage.
> He eventually added the capability to only require small blocks
> for decompression (64K IIRC?), but there was a significant loss in
> compression factor.
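For reference, those knobs are visible in bzlib's one-shot interface.
A minimal sketch (untested, assuming stock bzlib; the memory figures
in the comments are from the bzip2 docs as I remember them, so treat
them as approximate):

/* Sketch only: compress with the smallest block size to cap memory,
 * then decompress in bzlib's low-memory ("small") mode.  Error
 * handling is trimmed to the bare minimum. */
#include <stdio.h>
#include <bzlib.h>

int main(void)
{
	static char src[100000];        /* pretend payload (zeroes) */
	static char comp[101000 + 600]; /* worst case per bzlib docs */
	static char back[100000];
	unsigned int clen = sizeof(comp), blen = sizeof(back);

	/* blockSize100k ranges 1..9: 1 means ~100k blocks and the
	 * lowest memory use, at the cost of compression ratio. */
	if (BZ2_bzBuffToBuffCompress(comp, &clen, src, sizeof(src),
				     1 /* blockSize100k */,
				     0 /* verbosity */,
				     0 /* workFactor: default */) != BZ_OK)
		return 1;

	/* small=1 cuts decompression memory (roughly 2.5x block size
	 * instead of 4x, per the docs), at about half the speed. */
	if (BZ2_bzBuffToBuffDecompress(back, &blen, comp, clen,
				       1 /* small */, 0) != BZ_OK)
		return 1;

	printf("%u -> %u bytes\n", (unsigned)sizeof(src), clen);
	return 0;
}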
You are puzzling me a bit. What exactly do you consider to be the
overhead? Code size? Memory consumption? CPU time?
When it comes to compression, what matters is that the combination of
compressed data and decompression code ends up smaller than the
uncompressed data; everything else is secondary. The secondary stuff
might still affect some users, but it is not a general problem.
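To put numbers on it (the figures below are invented purely for
illustration, not measurements):

    win condition:  size(compressed) + size(decompressor) < size(uncompressed)

    e.g. for a hypothetical 4096k image:
      gzip:  ~1800k data + ~10k decompressor = ~1810k
      bzip2: ~1500k data + ~80k decompressor = ~1580k

With numbers like these, both win, and bzip2 wins by more despite the
bigger decompressor.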
> So... even in 2003, I really don't know of many (any?) tasks which
> would benefit from bzip2, considering the additional memory and
> CPU overhead.
That overhead will become less important with time. And if we are
going to merge bzlib anyway, we may as well do it today.
But I do agree with you that some choices made by the maintainer were
rather stupid. And one of them is the reason for this thread, and it
appears to be the whole point behind your argument.
Jörn
--
The cost of changing business rules is much more expensive for
software than for a secretary.
-- unknown