RE: proposal for generic interface to /proc text files

Rob Riggs (rriggs@tesser.com)
Mon, 30 Sep 1996 12:05:48 -0600 (MDT)


On 01-Oct-96 Keith Owens wrote:
>One problem that has reared its ugly head is reading newer format proc
>entries with older programs. The current plain text /proc files are not
>suited to easy upgrades. It is difficult enough to add a field at the end of
>an existing line, often user mode utilities will break. Adding a field to
>the middle is just not possible without some form of tagged text. I know the
>standard response is "well if you upgrade the kernel you just have to upgrade
>your utilities as well" but how many problems have we seen because this just
>does not happen?

I know of only one suite of utilities that broke due to changes in
/proc. The problem was that no attempt was made to ensure
backwards compatibility. A kernel command-line argument or a kernel
config option could have been used to address this problem. I'm
sure others can find even cleverer ways to have dealt with it.

>I would like to see a clean break to tagged text for *all* proc files. That
>way we only get caught once when the change is made, thereafter user mode
>utilities will simply ignore /proc fields they do not recognise, making for a
>much cleaner upgrade path.

How often do common /proc entries change? Very infrequently. Tagged
text for *all* proc files is overkill for such a rare problem. We
need to find a solution that is a little more subtle. These issues
can be addressed with less draconian measures than what you are
proposing. Heck, even a simple policy, such as "when a kernel change
affects a large enough user base, some means of backwards
compatibility must be maintained", would be immensely helpful. It
would definitely solve the problem you are attempting to address.
Fortunately, most developers follow those words of wisdom already.

Would a /proc/procversion file work for you? Each proc file
would have an entry, and whenever a file's format changes, the
procversion for that file is bumped. A program would parse this
file for the pages it wants to access, and if the version is
greater than it expects, it could issue a warning. If it then
dumps core, you at least have a good idea as to why it happened.
[This idea was plagiarized from someone describing ELF problems
on linux-gcc.]

>The other problem that needs to be addressed in /proc is handling output of
>more than 4K. The only indication the procinfo routine gets is the "offset"
>into the generated output, the procinfo code is expected to somehow
>reposition itself and pick up where it left off. Since the underlying tables
>are continually changing, this suffers from race conditions. Some of the
>work arounds are not very nice.

I will be looking into this. Which /proc entries have this problem?

>It does not seem possible to cleanly restart procinfo output with the current
>calling sequence. We come close but it is not perfect. I suggest that the
>driver be modified to pass a 4K page, if that is too small then instead of
>doing it in chunks with races, the procinfo routine returns a code saying
>"buffer too small". The proc driver then passes larger and larger buffers
>until the procinfo routine has enough space to display all the data in one
>go. Once the data has been generated, the driver can copy to user space in
>page chucks. High water marks would be saved for each procinfo routine that
>needed more than 4K (a small chain owned by the driver) so the algorithm
>would quickly discover how much space each procinfo routine needed.

Currently, the way *most* of the /proc routines work is to pass a
pointer to a 4K buffer and a char ** to the service routine. The general
assumption is that if the 4K page ain't big enough, the service routine
will kmalloc a big enough area and pass the pointer back through the
char ** variable.

The proc code has evolved quite a bit. There are lots of pieces
that do not take advantage of many of the new procfs features.

Let me know which /proc entries are using that screwy method of sending
more than 4K of data. And start writing that Proc_Style guide :-).

Rob
(rriggs@tesser.com)