
Process states

Carefully reconsider the simple process implementation scheme of the previous section from the point of view of the individual programs, or tasks, being executed in each process. Do they need to know about their execution context, or about being repeatedly suspended and restarted by the OS? Of course not, as long as they can rely on the OS to provide the resources they need. The OS needs context information to perform its job of allocating resources among tasks according to a certain policy, and the application programmers who write the tasks usually don't want (and sometimes must not be allowed) to care about that policy. Moreover, as long as the OS represents faithfully and completely the state of a suspended task in its context, that task can be restarted at any time without ever noticing the past suspension.

So far we have considered only two possible conditions (or states) for a process, namely running and not-running (we'll use the term suspended in another well-defined sense later on): a running process has control of the machine's CPU and is executing its task, while a not-running process sits in the OS process list and will crunch some CPU time when the OS thinks it should. Among the reasons that can cause the OS to switch a running process into the not-running state, we have already noticed the need to wait for the completion of an I/O event and the expiration of the quantum of CPU time allotted to the process. We'll encounter other reasons further on.

It's useful, at this point, to identify the possible transitions between process states. We can, for example, draw (do it) a state transition diagram with four nodes: one for each of the states identified so far, plus one for a newly created process and one for a terminated process. We call admission the creation of a new process, which is put into the not-running state; dispatch its transition into the running state; pause its transition from running into not-running; and exit the termination of a running process. Dealing this way with the two transitions which do not connect the running and not-running states, namely admission and exit, apparently makes sense, since neither a newly created process nor a terminated one competes for the CPU.
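As a concrete illustration of this two-state model with its admission and exit transitions, here is a minimal sketch in C (the enum and function names, and the sample lifetime in main, are purely illustrative assumptions of this sketch, not part of the text):

    #include <stdio.h>

    /* Hypothetical sketch of the two-state model plus the new and
     * terminated bookkeeping states; the names are illustrative.   */
    typedef enum { NEW, NOT_RUNNING, RUNNING, TERMINATED } pstate;
    typedef enum { ADMIT, DISPATCH, PAUSE, EXIT_PROC } ptransition;

    /* Return the state reached from s via transition t, or -1 if the
     * model does not allow that transition.                         */
    static int next_state(pstate s, ptransition t)
    {
        switch (t) {
        case ADMIT:     return s == NEW         ? NOT_RUNNING : -1;
        case DISPATCH:  return s == NOT_RUNNING ? RUNNING     : -1;
        case PAUSE:     return s == RUNNING     ? NOT_RUNNING : -1;
        case EXIT_PROC: return s == RUNNING     ? TERMINATED  : -1;
        }
        return -1;
    }

    int main(void)
    {
        ptransition life[] = { ADMIT, DISPATCH, PAUSE, DISPATCH, EXIT_PROC };
        pstate s = NEW;
        for (unsigned i = 0; i < sizeof life / sizeof life[0]; i++) {
            int n = next_state(s, life[i]);
            printf("transition %d: state %d -> %d\n", life[i], s, n);
            if (n < 0) break;
            s = (pstate)n;
        }
        return 0;
    }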

The new and terminated states are worth a bit more explanation. The former refers to a process that has just been defined (e.g. because a user issued a command at a terminal), and for which the OS has performed the necessary housekeeping chores. The latter refers to a process whose task is not running anymore, but whose context is still being saved (e.g. because a user may want to inspect it using a debugger program).

A simple way to implement this process handling model in a multiprogramming OS would be to maintain a queue (i.e. a first-in-first-out linear data structure) of processes, put the current process at the end of the queue when it must be paused, and run the first process in the queue.
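A minimal sketch of such a queue-based scheme, assuming a fixed-size ring buffer and integer process identifiers (both assumptions made here only for illustration):

    #include <stdio.h>

    #define MAXPROC 8   /* arbitrary queue capacity for this sketch */

    /* A FIFO queue of process ids, kept in a fixed-size ring buffer. */
    static int queue[MAXPROC];
    static int head = 0, tail = 0, count = 0;

    static void enqueue(int pid)
    {
        if (count == MAXPROC)
            return;                      /* queue full: drop (sketch only) */
        queue[tail] = pid;
        tail = (tail + 1) % MAXPROC;
        count++;
    }

    static int dequeue(void)
    {
        int pid;
        if (count == 0)
            return -1;                   /* queue empty */
        pid = queue[head];
        head = (head + 1) % MAXPROC;
        count--;
        return pid;
    }

    int main(void)
    {
        int pid, slice;

        for (pid = 1; pid <= 3; pid++)   /* admit three processes */
            enqueue(pid);

        for (slice = 0; slice < 6; slice++) {
            pid = dequeue();             /* run the first process in the queue */
            if (pid < 0)
                break;
            printf("time slice %d: running process %d\n", slice, pid);
            enqueue(pid);                /* pause: back to the queue tail */
        }
        return 0;
    }

Each pass through the loop dispatches the process at the head of the queue and, when its slice ends, re-enqueues it at the tail, which is exactly the scheme described above.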

However, it's easy to realize that this simple two-state model does not work. Given that the number of processes an OS can manage is limited by the available resources, and that I/O events occur on a much larger time scale than CPU events, it may well be the case that the first process in the queue must still wait for an I/O event before being able to restart; even worse, it may happen that most of the processes in the queue must wait for I/O. In this condition the scheduler would just waste its time sifting through the queue in search of a runnable process.

A solution is to split the not-running process class according to two possible conditions: processes blocked waiting for an I/O event to occur, and processes that are paused but nonetheless ready to run when given a chance. A process would then go from the running to the blocked state on account of an event wait transition, from running to ready due to a timeout transition, and from blocked to ready due to an event occurred transition.
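In code form, the resulting three-state model might be sketched as follows (the state and transition names mirror the text; the table-like encoding is an assumption of this sketch):

    #include <stdio.h>

    /* Sketch of the three-state model; names follow the text, the
     * rule-table encoding is only an illustrative assumption.      */
    typedef enum { RUNNING, READY, BLOCKED } pstate;

    struct rule { pstate from; const char *event; pstate to; };

    static const struct rule rules[] = {
        { RUNNING, "event wait",     BLOCKED },  /* must wait for I/O     */
        { RUNNING, "timeout",        READY   },  /* quantum expired       */
        { BLOCKED, "event occurred", READY   },  /* awaited I/O completed */
        { READY,   "dispatch",       RUNNING },  /* scheduler picks it    */
    };

    int main(void)
    {
        static const char *name[] = { "running", "ready", "blocked" };
        for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++)
            printf("%-8s --%-15s--> %s\n",
                   name[rules[i].from], rules[i].event, name[rules[i].to]);
        return 0;
    }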

This model would work fine if the OS had a very large amount of main memory available and none of the processes hogged too much of it, since in that case there would always be a fair number of ready processes. However, because of the costs involved, this scenario is hardly possible, and again the likely result is a list of blocked processes all waiting for I/O.

Modern architectures rely on a hierarchical organization of the available memory, only part of which exists as silicon on the machine board, while the rest is kept on a high-speed storage peripheral like a hard disk. We will address this topic in detail later on; for now it will suffice to say that while the processor has the capability to address memory well beyond the amount of available RAM, it physically accesses the RAM only, and dedicated hardware is used to copy (or page) from the disk into RAM those pieces of memory which the processor refers to but which are not present in RAM, and to copy areas of RAM back onto disk when RAM space needs to be made available. In particular, the act of copying entire processes back and forth between RAM and disk is called swapping. As we'll see further on, swapping is often mandated by efficiency considerations even if paged memory is available, since the performance of a paged memory system can fall abruptly if too many active processes are maintained in main memory.

The OS could then perform a suspend transition on blocked processes, swapping them out to disk and marking their state as suspended (after all, if they must wait for I/O, they might as well do it out of costly RAM), load into main memory a previously suspended process, activating it into the ready state, and go on. However, swapping is an I/O operation in itself, so at first sight things might seem to get even worse this way. Again the solution is to carefully reconsider the reasons why processes are blocked and swapped, and to recognize that if a process is blocked because it waits for I/O, and is then suspended, the I/O event might occur while it sits swapped out on the disk. We can thus classify suspended processes into two classes: ready-suspended for those suspended processes whose restarting condition has occurred, and blocked-suspended for those which must still wait. This classification allows the OS to pick from the good pool of ready-suspended processes when it wants to replenish the queue in main memory. Provisions must be made for moving processes between the new states. In our model this means allowing for new transitions: activate and suspend between ready and ready-suspended, and between blocked-suspended and blocked as well (why also the activation?), and event-occurred transitions from blocked to ready, and from blocked-suspended to ready-suspended as well. The final scheme thus obtained is sketched in Fig. 2.

  
Figure 2: A 7-state process model
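The transitions of the 7-state model of Fig. 2 can be summarised in a small table; the following sketch encodes them as an adjacency matrix (the matrix encoding, and the placement of admit as new to ready and exit as running to terminated, are assumptions drawn from the earlier discussion, and the walk in main is only an example):

    #include <stdio.h>

    /* Sketch of the 7-state model of Fig. 2.  The adjacency-matrix
     * encoding, and the placement of admit (new -> ready) and exit
     * (running -> terminated), are assumptions of this sketch.      */
    typedef enum {
        NEW, READY, RUNNING, BLOCKED,
        READY_SUSPENDED, BLOCKED_SUSPENDED, TERMINATED,
        NSTATES
    } pstate;

    /* allowed[from][to] is 1 when the model permits that transition. */
    static const int allowed[NSTATES][NSTATES] = {
        [NEW][READY]                         = 1,  /* admit          */
        [READY][RUNNING]                     = 1,  /* dispatch       */
        [READY][READY_SUSPENDED]             = 1,  /* suspend        */
        [READY_SUSPENDED][READY]             = 1,  /* activate       */
        [RUNNING][READY]                     = 1,  /* timeout        */
        [RUNNING][BLOCKED]                   = 1,  /* event wait     */
        [RUNNING][TERMINATED]                = 1,  /* exit           */
        [BLOCKED][READY]                     = 1,  /* event occurred */
        [BLOCKED][BLOCKED_SUSPENDED]         = 1,  /* suspend        */
        [BLOCKED_SUSPENDED][BLOCKED]         = 1,  /* activate       */
        [BLOCKED_SUSPENDED][READY_SUSPENDED] = 1,  /* event occurred */
    };

    int main(void)
    {
        /* Walk one plausible process lifetime and check each step. */
        pstate path[] = { NEW, READY, RUNNING, BLOCKED, BLOCKED_SUSPENDED,
                          READY_SUSPENDED, READY, RUNNING, TERMINATED };
        unsigned i;
        for (i = 0; i + 1 < sizeof path / sizeof path[0]; i++)
            printf("%d -> %d : %s\n", path[i], path[i + 1],
                   allowed[path[i]][path[i + 1]] ? "allowed" : "not allowed");
        return 0;
    }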



Franco Callari