
Synopsis on concurrency

This article is a brief summary of the topics covered in the lectures on concurrency. The section on semaphores in UNIX summarizes the content of one of the tutorials. A PostScript version of these notes is also available.

Under the heading of ``Concurrency'' goes the analysis of the problems that may occur in a multiprogramming environment when processes share resources such as the contents of a shared memory area, files, or devices. In general, these problems are due to a lack of proper synchronization in the access to the shared resources, which may cause data corruption or pathological situations, known as deadlocks, in which processes stall waiting to access a shared resource and are never allowed to proceed. These problems are often quite subtle, in that their manifestation is unpredictable and depends on the actual interleaving of the operations performed by the processes; hence they typically defy common, source-oriented debugging techniques and require an overall view of the system's behaviour. It then follows that specific system design techniques must be employed in order to provide application programmers with tools by which these problems can be avoided.

A general treatment of concurrency problems starts with the definition of the ``critical section'' of a program: it is simply a code fragment in which a shared resource is accessed. Clearly encapsulating these code fragments in the source is of vital importance for the fruitful use of the tools mentioned above.
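
As a minimal illustration in C (a hypothetical sketch using POSIX threads, with made-up names), consider two threads that repeatedly increment a shared counter: the read-modify-write of the counter is the critical section, and without any protection the final result is typically smaller than expected, because the two critical sections interleave.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    long counter = 0;            /* the shared resource */

    void *worker(void *arg)
    {
        for (int i = 0; i < ITERATIONS; i++) {
            /* --- critical section: read-modify-write of the counter --- */
            counter = counter + 1;
            /* --- end of critical section --- */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Without mutual exclusion the result is usually < 2 * ITERATIONS. */
        printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
        return 0;
    }

Compiled with ``gcc -pthread'', repeated runs give different (and usually wrong) totals, which is exactly the unpredictable behaviour described above.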

The theory then requires that efficient solutions to these conflict problems be based on synchronization among processes: a process must wait before executing a critical section if another process is currently accessing the shared resource that critical section refers to. This kind of synchronization is named ``mutual exclusion'' (the processes are mutually excluded from running their critical sections at the same time).

The following design guidelines should be followed by any acceptable solution to the mutual exclusion problem (these are the classical requirements listed in Tanenbaum's text):

 1. No two processes may be simultaneously inside their critical sections.
 2. No assumptions may be made about the relative speeds of the processes or the number of CPUs.
 3. No process running outside its critical section may block other processes.
 4. No process should have to wait forever to enter its critical section.

Among the simplest ways to enforce mutual exclusion, we observe that a ``hard'' approach, such as a user program disabling interrupts when entering a critical section, is certainly a solution, since a process must be interrupted in order for another process to gain control of the CPU and possibly access the shared resource. But it is certainly a bad solution, since it leaves too much control to the user program itself: with interrupts disabled, not even the kernel can regain control of the CPU if something goes astray in the user program.

The use of simple shared variables or files to signal that a process is in a critical section is wrong in principle: such flags would themselves be new shared resources, and accessing them without proper synchronization would just add a new instance of the same problem.
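
A minimal sketch of why a plain lock variable fails (hypothetical names, two processes or threads sharing the flag): the test of the flag and the setting of the flag are two distinct steps, and a context switch between them lets both parties enter the critical section.

    /* Broken "lock variable" approach: checking and setting the flag
     * are not a single atomic action. */

    volatile int busy = 0;         /* shared flag: 1 while someone is inside
                                      (itself an unprotected shared resource) */

    void enter_critical_section(void)
    {
        while (busy)
            ;                      /* (1) busy-wait until the section looks free */
        /* A context switch here lets the other process also see busy == 0. */
        busy = 1;                  /* (2) claim it: too late, both processes
                                      may now be inside at the same time */
    }

    void leave_critical_section(void)
    {
        busy = 0;
    }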

A strict alternation among processes (a shared ``turn'' variable that forces the processes to enter their critical sections in a fixed order) is correct but highly inefficient: a process that is not even trying to enter its critical section can keep the other one waiting, and the waiting is done by busy-looping on the turn variable.
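
The strict alternation scheme for two processes, numbered 0 and 1, can be sketched as follows (hypothetical names): each process busy-waits until the shared turn variable designates it.

    /* Strict alternation between two processes, 0 and 1.
     * Correct, but a process outside its critical section can still
     * block the other one, and the waiting is a busy loop. */

    volatile int turn = 0;                 /* whose turn it is to enter */

    void enter_critical_section(int self)  /* self is 0 or 1 */
    {
        while (turn != self)
            ;                              /* busy-wait until it is our turn */
    }

    void leave_critical_section(int self)
    {
        turn = 1 - self;                   /* hand the turn to the other process */
    }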

A provably correct software solution to the mutual exclusion problem is Peterson's algorithm (see Tanenbaum's ``Modern Operating Systems'' for details).
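
For reference, here is a sketch of Peterson's algorithm for two processes, following Tanenbaum's presentation. Note that on modern processors with reordered memory accesses this raw form would additionally require memory barriers; it is shown only to illustrate the idea.

    /* Peterson's algorithm for two processes, 0 and 1 (after Tanenbaum). */

    #define FALSE 0
    #define TRUE  1
    #define N     2                     /* number of processes */

    volatile int turn;                  /* whose turn it is */
    volatile int interested[N];         /* all values initially FALSE */

    void enter_region(int process)      /* process is 0 or 1 */
    {
        int other = 1 - process;        /* the other process */

        interested[process] = TRUE;     /* declare interest in entering */
        turn = process;                 /* politely offer to go last */
        while (turn == process && interested[other] == TRUE)
            ;                           /* busy-wait while the other is inside */
    }

    void leave_region(int process)
    {
        interested[process] = FALSE;    /* we are out of the critical region */
    }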





Franco Callari