>> So I must put significant work into a parser that will understand
>
> Well, this is a leading question, and should be rejected by the judge!
> I.e., the answer is yes, AND there isn't any significant work involved
> in writing a parser.
Ah, have I found a volunteer?
>> whatever crazy changes might happen (an AI parser), and get a
>> result that is very slow. No thanks. Dumb parsing is slow enough.
>
> Parsing is very fast, as any measurement will show. You are just
> ill-informed (and wrong) here. See, for example (with apologies, since
Do it 12000 times every second.
> Linus produced a good example of fast snappy parsing via a simple state
> machine when he did the dependencies speedup code.
Such a state machine would have failed here, because the changes
were too unpredictable.
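To make the point concrete: a Linus-style state machine is easy to sketch
as long as the format holds still. Something like the following handles the
"Key:\tvalue" lines of /proc/*/status (a sketch only; the function name and
the assumption of a fixed colon-and-whitespace layout are mine, not real
procps or kernel code) -- and it is exactly that fixed layout that keeps
changing under you:

```c
/* Sketch: state-machine scan of one "Key:\tvalue" line, as seen in
 * /proc/<pid>/status.  Illustrative only; not from any real parser. */
#include <stddef.h>

enum state { IN_KEY, SKIP_SEP, IN_VALUE };

/* Split one line into key and value buffers (caller-sized).
 * Returns 0 on success, -1 if either part is empty. */
static int parse_status_line(const char *line, char *key, char *val)
{
    enum state s = IN_KEY;
    size_t k = 0, v = 0;

    for (; *line && *line != '\n'; line++) {
        switch (s) {
        case IN_KEY:
            if (*line == ':')
                s = SKIP_SEP;       /* end of key */
            else
                key[k++] = *line;
            break;
        case SKIP_SEP:
            if (*line == ' ' || *line == '\t')
                break;              /* eat separator whitespace */
            s = IN_VALUE;
            /* fall through: this char starts the value */
        case IN_VALUE:
            val[v++] = *line;
            break;
        }
    }
    key[k] = '\0';
    val[v] = '\0';
    return (k && v) ? 0 : -1;
}
```

Fast, sure. But hard-wired to one layout.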
> Perhaps one thing people need is a dynamically configurable command
...
> dynamically configurable parser as a shell script. Looks like it works
Shell script? Parsing thousands of /proc files per second?
You also forgot the AI. I'm serious about the AI.
I really do not think you could have written a parser to handle
both Linux 2.0 and a (then future) Linux 2.1. Who would have guessed
that the "SigCgt" in /proc/*/status would turn into "SigCat"?
Who would have guessed that the signal info in /proc/*/stat would
change from 32-bit decimal to 64-bit hex?
If you can guess what strange changes will happen by Linux 2.4,
please tell me how to write a parser for them.
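Sure, once you know the change happened you can paper over it after the
fact. Here is a sketch of accepting a signal mask in either the old
32-bit decimal form or the newer zero-padded 64-bit hex form (the
detection heuristic and the function name are my own illustration, not
real procps code) -- but note this only works in hindsight:

```c
/* Sketch: accept a signal mask printed as 32-bit decimal (old
 * /proc/<pid>/stat) or as zero-padded 64-bit hex (newer kernels).
 * Heuristic, illustrative only: a field padded to 16 characters,
 * or containing a hex digit a-f, is treated as hex. */
#include <stdlib.h>
#include <string.h>

static unsigned long long parse_sigmask(const char *field)
{
    size_t len = strlen(field);
    int looks_hex = (len >= 16);    /* 64-bit masks are zero-padded */
    const char *p;

    for (p = field; *p && !looks_hex; p++)
        if ((*p >= 'a' && *p <= 'f') || (*p >= 'A' && *p <= 'F'))
            looks_hex = 1;

    return strtoull(field, NULL, looks_hex ? 16 : 10);
}
```

Guessing the *next* such change in advance is the part I'd like to see.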
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu