Re: [PATCH] init: bzip2 or lzma -compressed kernels and initrds

From: Frans Meulenbroeks
Date: Mon Sep 15 2008 - 08:46:57 EST


2008/9/15 Rob Landley <rob@xxxxxxxxxxx>:
> On Sunday 07 September 2008 00:48:31 Willy Tarreau wrote:
>> Hi Alain,
>> > +config KERNEL_LZMA
>> > + bool "LZMA"
>> > + help
>> > + The most recent compression algorithm.
>> > + Its ratio is best, decompression speed is between the other
>> > + 2. Compression is slowest.
>> > + The kernel size is about 33 per cent smaller with lzma,
>> > + in comparison to gzip.
>>
>> isn't memory usage in the same range as bzip2 ?
>
> Last I checked it was more. (I very vaguely recall somebody saying 16 megs
> working space back when this was first submitted to busybox, but that was a
> few years ago...)
>
> A quick Google found a page that benchmarks them. Apparently it depends
> heavily on which compression option you use:
>
> http://tukaani.org/lzma/benchmarks
>

[...]

Apologies if I'm sidetracking the discussion, but I'd like to add a remark.

For a kernel/ramfs image etc. the best choice is usually the one with
the fastest decompression (the data on tukaani.org suggest gzip).
Rationale: the faster it decompresses, the faster the system boots.

Of course this only holds if the storage medium can hold that
image. For disk-based systems I assume this is not a problem at all,
but for embedded systems with all software in flash, a higher
compression ratio (e.g. lzma) can make the difference between
fitting and not fitting (so in those cases lzma could just make your day).

Side note: although I think the conclusion on the tukaani website
holds, the data themselves are questionable.
I guess the tests were run on the laptop's internal hard disk (this is
not specified). It would be better to run them from a ramfs, so the
results are not skewed by data already being in the buffer cache (or
not yet in it).
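To make that concrete, here is a rough sketch of how one could time
just the decompression step (my own illustration, not anything from
the benchmark page): read the compressed file completely into RAM
first, so disk speed and buffer-cache state cannot skew the result,
and only then time the decompression call. It assumes raw zlib-format
input and a known output size, and uses zlib's uncompress(); build
with -lz (and -lrt on older glibc for clock_gettime).

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

int main(int argc, char **argv)
{
	FILE *f;
	long csize;
	unsigned char *src, *dst;
	uLongf dlen;
	struct timespec t0, t1;
	double secs;
	int rc;

	if (argc != 3) {
		fprintf(stderr, "usage: %s file.z uncompressed_size\n", argv[0]);
		return 1;
	}
	f = fopen(argv[1], "rb");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fseek(f, 0, SEEK_END);
	csize = ftell(f);
	rewind(f);
	src = malloc(csize);
	if (fread(src, 1, csize, f) != (size_t)csize) {
		perror("fread");
		return 1;
	}
	fclose(f);	/* compressed data is now entirely in RAM */

	dlen = strtoul(argv[2], NULL, 0);
	dst = malloc(dlen);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	rc = uncompress(dst, &dlen, src, csize);	/* measured region */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("rc=%d, %lu bytes out in %.3f s\n", rc, (unsigned long)dlen, secs);
	return rc != Z_OK;
}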

Also, the actual time in the tests is spent on three things: reading
from disk, decompressing, and writing to disk. (I'll only talk about
decompression here; I guess an additional second or so for compression
is not that important.)
You can argue that the write time is a constant, as the same amount of
data is written either way, but the read time depends on the actual
amount of compressed data and the transfer rate of the device.
On slower devices, a higher compression ratio means noticeably less
data to read. If that reduction in read time is bigger than the
additional cost of the slower decompression, the net effect is still
a win when it comes to boot time.
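As a back-of-the-envelope illustration (every number below is made up
for the sake of the example, not a measurement), the total load time
is roughly compressed_size / media_rate + uncompressed_size /
decompression_rate, and the winner flips as the media gets slower:

#include <stdio.h>

struct algo {
	const char *name;
	double ratio;		/* compressed size / uncompressed size */
	double decomp_mbs;	/* decompression speed, MB/s of output */
};

int main(void)
{
	const double image_mb = 8.0;	/* uncompressed image size, made up */
	const struct algo algos[] = {
		{ "gzip", 0.45, 60.0 },	/* placeholder figures */
		{ "lzma", 0.30, 20.0 },
	};
	const double media_mbs[] = { 2.0, 40.0 };	/* slow flash, fast disk */
	int a, m;

	for (m = 0; m < 2; m++)
		for (a = 0; a < 2; a++) {
			double read = image_mb * algos[a].ratio / media_mbs[m];
			double dec = image_mb / algos[a].decomp_mbs;
			printf("%4.0f MB/s media, %s: %.2fs read + %.2fs decompress = %.2fs\n",
			       media_mbs[m], algos[a].name, read, dec, read + dec);
		}
	return 0;
}

With these made-up figures gzip wins on the fast medium and lzma on
the slow flash; only real measurements on the target device can tell
where the crossover actually is.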

And finally: I've seen substantial timing differences when comparing
these algorithms on different architectures (arm/mips/x86), so the
processor may also make a difference as to what is best (and so will
the compiler).

FM