The traditional picture of the states of a process, as found in many elementary operating systems textbooks, is as follows:
                                Dead
                                 ^
                                 | Kill, Terminate
                                 |
              Schedule           |        Wait
       Ready ------------> Running ------------> Waiting
         ^ ^                     |                  |
         | |      Preempt,       |                  |
         | +---------------------+                  |
         |        Relinquish                        |
         |                     Signal               |
         +------------------------------------------+
         ^
         | Create
* Running
  Exactly one process is in the running state on each CPU in the system, never more and never less.
* Ready
  There is a queue of processes ready to use the CPU. This may be a FIFO queue or a priority queue.
Even if no process in the system needs the CPU, something must be running on the CPU! For this reason, operating systems always include some kind of background process to soak up the otherwise unused CPU cycles. Sometimes this is simply called the background process (as opposed to other background processes).
On early UNIX systems, one well-known use for the background process was computing more digits of pi. This is silly; there are more useful housekeeping jobs the background process can do. For example, it can run CPU and memory diagnostics, or it can manage garbage collection.
The ready queue may be implemented with any of a number of interesting data structures, but it always supports two basic operations: enqueue a process and dequeue a process. A FIFO queue gives a system where all processes have equal priority. A priority queue allows high-priority processes to have greater access to the CPU.
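The two queueing disciplines can be sketched as follows. This is a minimal Python simulation, not kernel code; a real ready queue holds process control blocks, and the process names and priority numbers here are invented for illustration:

```python
import heapq
from collections import deque

# FIFO ready queue: all processes have equal priority,
# so they are dequeued in arrival order.
fifo = deque()
fifo.append("A")
fifo.append("B")
assert fifo.popleft() == "A"

# Priority ready queue, sketched as a heap.
# In this sketch a lower number means higher priority.
pq = []
heapq.heappush(pq, (2, "low"))
heapq.heappush(pq, (1, "high"))
assert heapq.heappop(pq)[1] == "high"  # high priority dequeued first
```

Either structure fits behind the same Enqueue/Dequeue interface; only the dequeue order differs.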
Fixed priority processes are used in many real-time applications. UNIX introduced an interesting variable-priority scheme, in which the priority of a process rises when it voluntarily relinquishes the CPU (for example, to wait for I/O completion) and falls when it is preempted at the end of a time-slice. This favors interactive users over compute-bound processes.
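The variable-priority idea can be sketched as follows. This is a hypothetical Python simulation; the priority scale, bounds, and step size are invented for illustration, and the real UNIX scheduler computes priorities from accumulated, decaying CPU usage rather than a simple increment:

```python
def adjust_priority(priority, reason, lo=0, hi=10):
    # In this sketch a higher number means higher priority.
    if reason == "relinquish":   # gave up the CPU voluntarily (e.g. for I/O)
        return min(hi, priority + 1)
    elif reason == "preempt":    # used up its whole time-slice
        return max(lo, priority - 1)
    return priority

p = 5
p = adjust_priority(p, "relinquish")  # I/O-bound behavior: priority rises
assert p == 6
p = adjust_priority(p, "preempt")     # compute-bound behavior: priority falls
assert p == 5
```

The net effect is the one described above: processes that block for I/O drift toward the front of the ready queue, while processes that always exhaust their time-slice drift toward the back.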
A running process is preempted when it is forced into the ready state in order to give the CPU to some other process. This might be done, for example, by the real-time-clock interrupt service routine at the end of a time-slice in a timesharing system. This is illustrated in the following pseudocode:
Clock interrupt service routine:
   begin
      Disable Interrupts (if not already disabled)

      -- Save process state of interrupted process
      current.pc := interrupt-return.pc
      current.registers := interrupt-return.registers

      -- Change interrupted process state to ready
      Enqueue( current, ready queue )

      -- Make new process into the current process
      current := Dequeue( ready queue )

      -- Restore process state of new process
      interrupt-return.pc := current.pc
      interrupt-return.registers := current.registers

      Enable Interrupts (if necessary)
      return
   end
Note that, in addition to involuntary preemption, many systems include provisions for a running process to voluntarily move itself from running to ready. This is sometimes called voluntarily relinquishing the CPU, and the code for this is essentially the same as that given above for preemption, except for differences in the calling sequence.
In old systems, clock interrupts were periodic, for example, every 1/60 of a second. Modern hardware usually has a programmable timer, so it is possible for the scheduler to vary the length of the time-slice (interval between interrupts) depending on the behavior of the process being scheduled.
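One possible policy for such a programmable timer can be sketched as follows. This is an invented illustration, not any particular system's algorithm; the millisecond bounds and the doubling/halving rule are assumptions made for the sketch:

```python
def next_timeslice(used_full_slice, current_ms, min_ms=10, max_ms=200):
    # Compute-bound processes (which used their whole slice) get longer
    # slices, reducing context-switch overhead; processes that blocked
    # early (I/O-bound) get shorter slices.
    if used_full_slice:
        return min(max_ms, current_ms * 2)
    return max(min_ms, current_ms // 2)

assert next_timeslice(True, 50) == 100    # compute bound: slice grows
assert next_timeslice(False, 50) == 25    # I/O bound: slice shrinks
assert next_timeslice(True, 150) == 200   # growth is capped
```

The scheduler would load the returned value into the timer before dispatching the process.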
The Berkeley Timesharing System did this by the late 1960's, giving long time slices to jobs that appeared compute bound, in an effort to minimize context switching, while giving short timeslices to jobs that appeared I/O bound.
Interrupts must be disabled during the interrupt service routine so that some other interrupt doesn't cause scheduling activity when the ready queue is halfway through being updated or when the process state is only partially saved or restored.
* Waiting
  There may be many events for which processes may wait, and any number of processes may be waiting for each. Waiting processes are typically enqueued on a semaphore. A semaphore has two parts:
     A queue of waiting processes.
     An integer count.
  One semaphore is needed for each condition on which some process might wait.
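The two-part structure above can be written down directly. This is a minimal Python sketch of the data structure only (the class name and field names are chosen for this illustration):

```python
from collections import deque

class Semaphore:
    # One semaphore per condition a process might wait for.
    def __init__(self, count=0):
        self.count = count    # number of signals not yet consumed
        self.queue = deque()  # processes blocked on this condition

s = Semaphore(count=1)
assert s.count == 1
assert len(s.queue) == 0
```

The invariant worth noticing: the count and the queue are never both nonzero, since a signal either wakes a waiter or increments the count.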
Semaphores were invented by E. W. Dijkstra in the late 1960's, but they were not widely accepted until the early 1970's.
Prior to this, most operating systems had more ad hoc and frequently incorrect implementations of the waiting state.
UNIX, which was developed in the late 1960's, does not entirely reflect the post-Dijkstra view of processes. With the exception of some later versions (for example, System V UNIX), UNIX does not provide user-level semaphore services, so UNIX programmers who wish to write portable programs must avoid any such services that a particular version happens to provide.
The name P is Dijkstra's original name for the wait operation; it comes from the Dutch word which is the cognate of Pause in English. Here is some pseudocode that implements this on a uniprocessor:
Wait( sem ) or P( sem ):
   begin
      Disable Interrupts
      if sem.count > 0
         sem.count := sem.count - 1
      else
         -- Save process state of the calling process
         current.pc := procedure-return.pc
         current.registers := procedure-return.registers

         -- Change state to waiting
         Enqueue( current, sem.queue )

         -- Make new process into the current process
         current := Dequeue( ready queue )

         -- Restore process state of new process
         procedure-return.pc := current.pc
         procedure-return.registers := current.registers
      endif
      Enable Interrupts
      return
   end
Signal( sem ) or V( sem ):
   begin
      Disable Interrupts
      if Empty( sem.queue )
         sem.count := sem.count + 1
      else
         temp := Dequeue( sem.queue )
         Enqueue( temp, ready queue )
      endif
      Enable Interrupts
      return
   end
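The count/queue interplay of P and V can be exercised in a small Python simulation. This sketch models only the bookkeeping, not the saving and restoring of process state or the disabling of interrupts; the process names are invented for illustration:

```python
from collections import deque

ready_queue = deque()

class Semaphore:
    def __init__(self, count=0):
        self.count = count
        self.queue = deque()

def wait(sem, proc):
    # P: consume an outstanding signal if one exists,
    # otherwise block the calling process on the semaphore.
    if sem.count > 0:
        sem.count -= 1
    else:
        sem.queue.append(proc)

def signal(sem):
    # V: wake one waiter if any, otherwise bank the signal in the count.
    if sem.queue:
        ready_queue.append(sem.queue.popleft())
    else:
        sem.count += 1

s = Semaphore(count=1)
wait(s, "A")                       # consumes the initial count
assert s.count == 0
wait(s, "B")                       # no count left: B blocks
assert list(s.queue) == ["B"]
signal(s)                          # wakes B onto the ready queue
assert list(ready_queue) == ["B"]
```

Note that a signal never both wakes a process and increments the count, matching the pseudocode above.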
The above implementations are correct only if all processes have equal priority. If some have a higher priority, and if a low priority process issues a signal on a semaphore for which a high priority process is waiting, then the above code lets the low priority process continue running while the high priority process waits in the ready queue. This is wrong.
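One common repair, sketched below in Python, is to make Signal priority-aware: if the awakened process outranks the currently running one, the running process is preempted. This is only one possible fix, and the representation of processes as dictionaries with a priority field is an assumption of the sketch:

```python
from collections import deque

class Semaphore:
    def __init__(self, count=0):
        self.count = count
        self.queue = deque()

def signal_with_preemption(sem, current, ready_queue):
    # If the awakened process has higher priority than the running one,
    # the running process goes back on the ready queue and the awakened
    # process runs instead. Higher number = higher priority here.
    if not sem.queue:
        sem.count += 1
        return current
    woken = sem.queue.popleft()
    if woken["priority"] > current["priority"]:
        ready_queue.append(current)   # preempt the low-priority signaler
        return woken                  # high-priority waiter runs now
    ready_queue.append(woken)
    return current

s = Semaphore()
s.queue.append({"name": "hi", "priority": 9})
ready = deque()
cur = {"name": "lo", "priority": 1}
cur = signal_with_preemption(s, cur, ready)
assert cur["name"] == "hi"        # high-priority waiter now runs
assert ready[0]["name"] == "lo"   # low-priority signaler was preempted
```

For this to be complete, the enqueue on the ready queue must of course respect priority order, and the code that dispatches the next process must always take the highest-priority ready process.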