Re: A packaged kernel

Illuminati Primus (vermont@gate.net)
Fri, 24 Jan 1997 11:14:29 -0500 (EST)


Well, the method I was hoping to inspire would affect the servers the
least, and leave the work of choosing the files and compiling the kernel
up to the client (besides, I don't think there would be enough servers
compiling kernels to keep up with the demand.. there are also a lot of
security issues to take into consideration)..

Someone recently mailed the list about a utility called "kprune".. I was
thinking it would be very easy to modify kprune to output a list of the
files NEEDED instead of the unneeded ones.. That output could then be
converted into a small script for ftp, and the files would be downloaded.
All this would need is some ftp sites with the latest stable & development
kernels untarred and available to the public in that form..
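
Something along these lines is roughly what I have in mind (untested; it
assumes the modified kprune prints one needed path per line on stdout,
and the site name and directory layout here are made up):

    # Sketch: turn a list of needed kernel files, one relative path
    # per line, into a command file for the stock ftp client.
    # Usage (all names hypothetical):
    #   kprune | python mkftp.py > kernel.ftp ; ftp -n < kernel.ftp
    import os, sys

    site = "ftp.example.org"                # any mirror with an untarred tree
    top  = "/pub/linux/kernel/v2.0/linux"   # hypothetical layout

    needed = [line.strip() for line in sys.stdin if line.strip()]

    print("open %s" % site)
    print("user anonymous me@my.host")
    print("binary")
    for path in needed:
        d = os.path.dirname(path)
        if d:
            os.makedirs(d, exist_ok=True)   # "get" needs the local dirs
        print("get %s/%s %s" % (top, path, path))
    print("bye")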

The problem with this is that whenever new files are added to the kernel,
the script wouldn't know it should download them. Maybe it could scan the
files it has already downloaded for dependencies in parallel with the
download process, so it would know to fetch more files as it discovers
them. Or it could just download everything except the files on the
"unneeded" list..
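
The scanning part might look something like this (untested, and only a
rough approximation -- real dependencies also hide in Makefiles and
#ifdefs, and <...> includes resolve against the include directories):

    # Sketch: after a file lands, scan it for local #include "..."
    # lines and report anything we don't have yet, so the downloader
    # can queue it.
    import re

    INCLUDE = re.compile(r'^\s*#\s*include\s+"([^"]+)"')

    def missing_includes(path, already_have):
        """Includes referenced by `path` that aren't fetched yet."""
        # NB: these paths are relative to the including file's
        # directory, so the caller still has to resolve them.
        found = []
        with open(path, errors="replace") as f:
            for line in f:
                m = INCLUDE.match(line)
                if m and m.group(1) not in already_have:
                    found.append(m.group(1))
        return found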

However, a much cleaner method would be to have the kernel organized into
"packages" based on config options on the ftp site, so that the script
wouldn't run into trouble trying to determine dependencies. It could then
also easily retrieve new documentation and any other files it otherwise
wouldn't know to download.

Maybe there is some way to generate a list of the files affected by the
various config options, and use that output to build the kernel
"packages"? That way the distribution site would only have to run the
script once on a newly released kernel, and the files would be organized
into a form ready to be made available to the public.

Could anyone help me with some code that will generate a list of files
affected by config options? I would greatly appreciate it.. A very rough
first stab is sketched below.
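
(Untested; it only understands the "ifeq ($(CONFIG_FOO),y)" Makefile
idiom, ignores nested conditionals, ifdef, and modules, and crudely
assumes foo.o comes from foo.c -- a starting point only:)

    # Sketch: walk the source tree and note which object files sit
    # inside "ifeq ($(CONFIG_FOO),y)" blocks in each Makefile,
    # printing one "package" (file list) per config option.
    import os, re, sys
    from collections import defaultdict

    IFEQ = re.compile(r'ifeq\s*\(\$\((CONFIG_\w+)\),\s*y\s*\)')
    OBJS = re.compile(r'\w*OBJS\s*[+:]?=\s*(.*)')

    packages = defaultdict(set)             # option -> source files

    for top, dirs, files in os.walk(sys.argv[1]):   # argv[1] = source root
        if "Makefile" not in files:
            continue
        option = None
        for line in open(os.path.join(top, "Makefile")):
            m = IFEQ.search(line)
            if m:
                option = m.group(1)
            elif line.startswith("endif"):
                option = None
            elif option:
                m = OBJS.search(line)
                if m:
                    for obj in m.group(1).split():
                        src = obj.replace(".o", ".c")
                        packages[option].add(os.path.join(top, src))

    for option in sorted(packages):
        print(option)
        for src in sorted(packages[option]):
            print("\t" + src)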

-vermont@gate.net

PS
Is there some way to specify in the ftp protocol that you would like to
use the same data connection for successive file transfers? I would think
this would help with the time needed to download many small files, since a
connection wouldn't have to be opened every time a new file comes along..
(which can sometimes take a while.. add that to the TCP slow-start
algorithm and it probably chews up a lot of the transfer time)
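
As far as I can tell from RFC 959, the usual stream mode needs the server
to close the data connection to mark end-of-file, so a fresh data
connection per file seems built into the protocol. The best I can see
doing is at least reusing one control connection and login for the whole
batch, something like (untested):

    # Sketch: fetch many files over a single control connection.
    # Each RETR still gets its own data connection, but the
    # connect/login overhead is paid only once.
    from ftplib import FTP
    import os

    def fetch_all(host, topdir, paths):
        ftp = FTP(host)                 # one control connection...
        ftp.login()                     # ...one anonymous login
        ftp.cwd(topdir)
        for path in paths:
            d = os.path.dirname(path)
            if d:
                os.makedirs(d, exist_ok=True)
            with open(path, "wb") as out:
                ftp.retrbinary("RETR " + path, out.write)
        ftp.quit()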

On Thu, 23 Jan 1997, Billy Harvey wrote:

> Another possibility involves having a custom kernel compiled and emailed
> to you. This would probably be implemented most easily via a web page
> interface, where a series of boxes could be checked to indicate what was
> needed. Indeed, an HTML interface for compilation selection options on
> your own hardware would be nice. What would need to be mailed, I assume,
> is the zImage and the appropriate header files. Is there anything else
> programs need to reference in the source tree? The question of why to do
> this versus allowing ftp access could probably be answered by measuring
> the load on the system. Additionally, since an average compile time could
> be estimated once the host hardware was identified, a response to the
> request could include an expected delivery time. Obviously, the compile
> could be 'niced' to cause minimal load on a system used for real work.
> The nice thing about that is that it could be limited to one compile at a
> time. Which takes less overall system time: compiling and emailing a
> custom kernel with any necessary support files, or the ftp session to
> download 7 MB? Is there anyone out there who would be willing to try
> this as an experiment? Someone with a fast Alpha or an unused Sun, maybe.
>
> Billy Harvey
> weh@magic.bunt.com
>