RE: Kernel Internals

James Mohr (jimmo@blitz.net)
Sun, 2 Feb 1997 11:33:43 +0100


> What does PID running in a specific range have to do with it? SCO has PIDs that run in a specific range *and* the size of the process table grows dynamically *and* the PID is the slot number in the process table. Also,
**
**I don't believe that is correct. I don't believe that has ever been true. The
**papers I have read on v7 unix indicate it was never true.

Within the fork.c source there is a function called newproc(). It is pretty complicated (IMHO), as it also addresses SMP issues, which I am unfamiliar with. However, this is where I draw the conclusion that the above statement is true.
The newproc() function contains a variable, starting at 1, to avoid even checking the sched process (the SCO process scheduler), which has PID==0.
The variable is incremented as the process table is searched for a free slot.
After a free slot is found, this value is assigned to the PID value within the process structure.
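
To sketch the idea (this is not the actual SCO source; NPROC, SNULL, p_stat, and p_pid are stand-ins I am assuming purely for illustration):

    #define NPROC 200        /* assumed size of the process table */
    #define SNULL 0          /* assumed "slot is free" state */

    struct proc {
        int p_stat;          /* slot state */
        int p_pid;           /* process ID */
    };

    struct proc proc[NPROC];

    /* Scan for a free slot, starting at 1 to skip sched (PID 0),
       and assign the slot number itself as the new PID. */
    int newproc_pid(void)
    {
        int i;

        for (i = 1; i < NPROC; i++) {
            if (proc[i].p_stat == SNULL) {
                proc[i].p_pid = i;    /* PID == slot number */
                return i;
            }
        }
        return -1;                    /* table full */
    }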

My assumption is that this is done in order to save time later. With this 1:1 mapping, you don't need to *search* the process table later. You have the PID, you simply access proc[PID].whatever to get the information you need or do what you need.

Ted mentioned that this is only done in places that are not time critical. So what? To me it makes sense to have the process table defined as an array to be able to "jump" and not "traverse." What advantages would traversing bring over jumping? Since the array is already being defined, why not use it?

It's understandable that you traverse the table with pointer++ rather than proc[PID++] or task[PID++]. However, you can still use both when you need to.
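
In sketch form, with the same assumed declarations as above (do_something() is just a hypothetical placeholder):

    void example(int pid)
    {
        /* "Jump": direct index, when you already have the PID. */
        struct proc *p = &proc[pid];
        do_something(p);

        /* "Traverse": pointer walk, when you must visit every slot. */
        struct proc *q;
        for (q = proc; q < &proc[NPROC]; q++)
            do_something(q);
    }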

Also, how do you know if a particular PID is used? The only way I can see is to search the *entire* table. When you do a fork, you find a free slot (the first one?) in the table. If that is *not* the PID, where does the PID come from? How else do you know if this PID is in use without searching the *entire* table? This kind of search is not looking for a positive, where on average you only search half the table before finding a match. You are looking for a negative and therefore have to search the *entire* table.
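
In the same sketch terms, this is what checking a PID would look like if the PID were *not* the slot number. Note that the positive answer can return early, but the negative one cannot:

    int pid_in_use(int pid)
    {
        int i;

        for (i = 0; i < NPROC; i++)
            if (proc[i].p_stat != SNULL && proc[i].p_pid == pid)
                return 1;    /* positive: can stop here */
        return 0;            /* negative: had to scan all NPROC slots */
    }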

**
**> isn't the task[]array the "process structure in the kernel." Looking through the kernel source, I only see references to tasks and not to processes. So, what structure is the "process structure"?
**
**The task array is an array of pointers to task structures. task==process.
**

Which is the same as the SCO proc[] array, which is the process table.

**> - Any time you need information about a specific PID.
**Signals are almost the only case of this.
**
What about the ps -p example? You mentioned that it is done in user space, but IMHO it's irrelevant where it is done. If you can simply access the info with proc[PID], you avoid searching the process table. The same applies to signals. Granted, if you send a signal to a parent, then you need to propagate the signal, which is why you have the list of pointers. However, to make that first connection to the parent, it is (IMHO) easier with task[PID] than with a traversal.
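
For what it's worth, here is roughly what the user-space side of "ps -p PID" amounts to on Linux: the PID names the /proc entry directly, with no searching (a minimal, stripped-down sketch, not the actual ps source):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char path[64], line[512];
        FILE *f;

        if (argc < 2)
            return 1;
        /* PID -> entry, directly, no table search */
        snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);
        f = fopen(path, "r");
        if (f == NULL) {
            perror(path);
            return 1;
        }
        if (fgets(line, sizeof(line), f))
            printf("%s", line);
        fclose(f);
        return 0;
    }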

**> Here again, that wasn't the question. I know when the process table entry is cleared, I know what is kept in the the process table entry after the process dies, and I know what happens when there is no parent waiting on the child. So, to put the question as clear as I can:
**>
**
**That cannot occur. Process 1 always exists, processes are reparented as in
**other unices. Init will clear them up.

You misunderstood what I was saying. "I know what happens when there is no parent waiting on the child." Here, I am saying I know that they are inherited by init. That's what the SCO source says as well. However, the original issue started from someone saying that it was the process *itself* that was responsible for cleaning up the process table, which didn't make sense. There would be too much baggage to carry around. If the process that did the fork() no longer exists, then the child is inherited by init; init becomes the parent and cleans up after its children. (Kinda like my house.)
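
This is easy to watch from user space; a small demo (nothing SCO- or Linux-specific about it):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {                  /* child */
            sleep(2);                    /* let the parent exit first */
            printf("child %d: parent is now %d\n",
                   (int)getpid(), (int)getppid());
            exit(0);
        }
        return 0;                        /* parent exits, orphaning the child */
    }

The child reports PID 1 as its parent: init inherited it and will reap it when it exits.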

**
**> In SCO, the virtual memory of each process between 3Gb-4Gb is for portions of the kernel that the process is using. So, when I am using a device, I have a particular driver loaded that ends up in the 3-4Gb range. Although two (or more) processes are waiting on the same event (i.e. input from the keyboard) and the WCHAN maps to the same function, the numeric WCHAN value is different. In Linux, both the numeric value and the address mapping is the same.
**
**Linux wait queues are not the same as the Unix sleep hashing and scheduling

Sorry, Alan, I'm missing the point. However, Tim gave me the answer to my question: Linux is waiting on the function, and other Unixes are waiting on data. Since the function is always at the same address, you get the same WCHAN in Linux. The data is at different addresses, hence a different WCHAN, as in SCO.
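
As a sketch of that distinction (driver-style fragments, not complete code; the tty field names are from memory):

    /* Traditional Unix driver: sleep on the address of the *data*
       being waited for, so that address is what shows up as WCHAN,
       and it differs from process to process. */
    sleep((caddr_t)&tp->t_rawq, TTIPRI);

    /* Linux 2.0 driver: sleep on a wait queue inside a kernel
       function, and the *function's* address is what WCHAN reports,
       the same for every waiter. */
    interruptible_sleep_on(&tty->read_wait);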

Regards and thanks to all,

jimmo