When all processes sharing a resource are on the same machine, mutual exclusion is easily assured by the use of a semaphore, spin-lock or other similar shared abstraction. When the processes involved are on different machines, however, mutual exclusion becomes more difficult.
Consider the following example: A number of machines in a network are competing for access to a printer. The printer is so constructed that every line of text sent to the printer must be sent in a separate message, and thus, if a process wants to print an entire file, it must obtain exclusive use of the printer, somehow, send all the lines of the file it wishes to print, and then release the printer for use by others on the network.
A trivial solution to this problem is to install a print spooler process somewhere on the net. That print spooler would gather lines of files provided by various applications processes, maintain a queue of completed files that are ready to print, and print one such file at a time. This works, but it introduces some problems: First, the spooler must have buffer capacity to hold the aggregate of all the files that have not yet been printed. Second, the spooler may become a bottleneck, limiting the performance of the entire system. Third, if the processor supporting the spooler is unreliable, the spooler may limit the reliability of the system.
In the printer example being used here, the problem of storage space in the spooler typically becomes acute with graphics printers. In such a context, it is desirable to block an applications process until the printer is ready to accept data from that applications process, and then let that process directly deliver data to the printer.
For example, an applications process may be coded as follows:
    Send request for permission to the spooler
    Await reply giving permission to print
    Loop
       Send data directly to printer
    End Loop
    Send notice to spooler that printing is done

If all users of the spooler use this protocol, the spooler is no longer serving as a spooler; it is merely serving as a mutual exclusion mechanism! In fact, it implements exactly the same semantics as a binary semaphore, but it implements them using a client-server model.
The process implementing a semaphore using message passing might have the following basic structure:
    Loop
       Await message from client
       Case message type of
          P: If count > 0
                Send immediate reply
                count = count - 1
             Else
                Enqueue identity of client
             End if
          V: If queue is empty
                count = count + 1
             Else
                Dequeue one blocked client
                Send a delayed reply
             End if
       End case
    End Loop

This requires a count and a queue of return addresses for each semaphore.
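The server loop above can be sketched concretely. This is a minimal illustration, assuming Python threads stand in for network clients and a queue stands in for the message channel; the message format and the helper names P and V are invented for this sketch:

```python
import collections
import queue
import threading

def semaphore_server(inbox, initial_count=1):
    """Implements a semaphore as a server: P and V arrive as messages."""
    count = initial_count
    waiting = collections.deque()          # queue of blocked clients' reply boxes
    while True:
        op, reply_box = inbox.get()        # await a message from a client
        if op == 'P':
            if count > 0:
                count -= 1
                reply_box.put('ok')        # immediate reply: client may proceed
            else:
                waiting.append(reply_box)  # enqueue the identity of the client
        elif op == 'V':
            if waiting:
                waiting.popleft().put('ok')  # delayed reply wakes one client
            else:
                count += 1
        elif op == 'stop':
            return

def P(inbox):
    """Client-side P: send the request, then block awaiting the reply."""
    reply = queue.Queue()
    inbox.put(('P', reply))
    reply.get()

def V(inbox):
    """Client-side V: no reply is needed, so the client never blocks."""
    inbox.put(('V', None))
```

A client would run the server in its own thread, then bracket its critical section with P(inbox) and V(inbox), exactly as it would with a local semaphore.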
The disadvantage of implementing semaphores using a server process is that the server becomes a potential source of reliability problems. If we can build a mutual exclusion algorithm that avoids use of a dedicated server, for example, by having the processes that are competing for entry to a critical section negotiate directly with each other, we can potentially eliminate the reliability problem.
One alternative is to arrange the competing processes in a ring and let them exchange a token. If a process receives the token and does not need exclusive use of the resource, it passes on the token to the next process in the ring. If a process needs exclusive use of the resource, it waits for the token and holds it until it is done with the resource, at which point it puts the token back in circulation.
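The token-passing scheme can be simulated in a few lines. This is a single-threaded sketch, with the process count, the set of processes that want the resource, and the function name all invented for illustration:

```python
def circulate(n, wants, rounds=2):
    """Pass a token around a ring of n processes; each process in
    'wants' holds the token once to use the resource, then releases it.
    Returns the order in which processes entered the critical section."""
    order = []
    token_at = 0
    pending = set(wants)
    for _ in range(rounds * n):
        if token_at in pending:
            order.append(token_at)     # holds the token: enters the critical section
            pending.discard(token_at)  # done with the resource; token moves on
        token_at = (token_at + 1) % n  # otherwise simply forward the token
    return order
```

Note that access order is determined by ring position relative to the token, not by who asked first.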
This is the exact software analog of a token ring network. In fact, in a token ring network, only one process at a time may transmit, and the circulating token is used to assure this, exactly as described.
This solution is not problem free. What if the token is lost? What if a process in the ring ceases transmission? Nonetheless, it is at the root of a number of interesting algorithms.
One decentralized algorithm in common use, for example in bakeries, is to issue numbers to each customer. Then, when the customers want to access the scarce resource (the clerk behind the counter), they compare the numbers on their slips and the user with the lowest numbered slip wins.
The problem with this is that there must be some way to distribute numbers, but this has been solved. A trivial solution is to introduce a very small server to distribute numbers, but this is recursive -- the server itself requires either mutual exclusion or message passing for its implementation.
Before going on to more interesting implementations for distributing numbers, note that clients of such a protocol may make extensive use of their numbers! For example, if the bakery contains multiple clerks, the clients could use their number to select a clerk (number modulo number of clerks). Similarly, in a FIFO queue implemented with a bounded buffer, the number modulo the queue size could indicate the slot in the buffer to be used, allowing multiple processes to simultaneously place values in the queue.
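These uses of the ticket number can be made concrete. In this sketch, itertools.count stands in for the centralized number server (or the hardware fetch-and-add mentioned below); the clerk count, queue size, and function names are invented for illustration:

```python
import itertools

next_number = itertools.count()          # trivial centralized "take a number" source

def take_number():
    return next(next_number)

NUM_CLERKS = 3
QUEUE_SIZE = 8

def clerk_for(ticket):
    return ticket % NUM_CLERKS           # ticket number selects a clerk

def buffer_slot_for(ticket):
    return ticket % QUEUE_SIZE           # ticket number selects a bounded-buffer slot
```

Because consecutive tickets map to distinct slots, several producers can fill different slots of the queue at the same time, as the text describes.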
There is a large literature of synchronization algorithms based on the "take a number" scheme. Much of this originates from work by Gottlieb at New York University, where a machine, the Ultracomputer, is being built with a hardware solution to the problem of distributing unique numbers to multiple processes with minimal contention.
Lamport's Bakery Algorithm provides a decentralized implementation of the "take a number" idea. As originally formulated, this requires that each competing process share access to an array, but later distributed algorithms have eliminated this shared data structure. Here is the original formulation:
For each process, i, there are two arrays, C[i] and N[i]:
        _ _ _ _ _ _ _ _ _ _ _
    C  |_|_|_|_|_|_|_|_|_|_|_|
        _ _ _ _ _ _ _ _ _ _ _
    N  |_|_|_|_|_|_|_|_|_|_|_|

    N[i] = 0 --> Process i is not waiting for the baker.
    C[i] = 0 --> Process i is not trying to pick a number.

    When N[i] = min( for all j, N[j] where N[j] > 0 ),
    process i is allowed into the critical section.

In Lamport's bakery algorithm, each process has two associated variables per mutual exclusion semaphore. C is used to help break ties if two processes try to pick numbers at the same time, and N is the number that process has chosen.
Instead of having a central source of numbers, processes pick numbers by examining the numbers held by all other processes, and instead of having the baker post the number of the process allowed to go next, processes decide among themselves who gets to go next.
Here is the algorithm used to take a number:
    C[i] := 1;
    N[i] := max( for all j, N[j] ) + 1;
    C[i] := 0;
In effect, the customer walks into the bakery, checks the numbers of all the waiting customers, and then picks a number one larger than the number of any waiting customer.
If two customers each walk in at the same time, they are each likely to pick the same number. Lamport's solution is to let them pick the same number, but to make sure that they notice that this has happened and break the tie in a sensible way.
To help the customers detect ties, each customer who is currently in the process of picking a number holds his hand up (by setting C[i] to 1). He pulls down his hand when he is done selecting a number -- note that selecting a number may take time, since it involves inspecting the numbers of everyone else in the waiting room.
A process does the following to wait for the baker:
    Step 1: while (for some j, C[j] = 1) do nothing;

First, wait until any process which tied with you has finished selecting a number.

    Step 2: repeat
               W := (the set of j such that N[j] > 0)
                  -- W is the set of indices of waiting processes
               M := (the set of j in W such that N[j] <= N[k] for all k in W)
                  -- M is the set of process indices with minimum numbers
               j := min(M)
                  -- j is in M and the tie is broken
            until i = j;

Second, wait until your ticket number is the minimum of all tickets in the room.

Note that if someone else tied with you when you picked a number, he may still be trying to pick his number when you are ready to wait your turn. Therefore, the first thing you do after picking your number is look around to see if anyone has their hand up. As soon as nobody has their hand up, you know that anyone who picked the same number as you picked has now determined what number that is.
This is inefficient, because you might wait for a process that is busy picking a number larger than yours, but for now, we'll accept this cost.
Once you know that any numbers tied with yours have been picked, you begin checking all the numbers in the room to see if any is smaller than your number, or, equivalently, you check all the numbers in the room and determine the identity of someone who holds the smallest number.
If you are not the person holding the smallest number, you start checking again. If you hold the smallest number, it is also possible that someone else holds the smallest number. Therefore, what you've got to do is agree with everyone else on how to break ties.
The solution shown above is simple. Instead of computing the value of the smallest number, compute the process ID of a process holding the smallest value. As long as all processes use the same deterministic algorithm to do this, they will arrive at the same conclusion -- at least, the processes tied for the smallest number will arrive at the same conclusion.
The process that computes its own index as the holder of the smallest number is the one that enters the critical section.
To return its ticket, a process simply executes the following:
    N[i] := 0;

When you return your ticket, if any other process was waiting, then on its next scan of the set of processes, some waiting process will find that it is holding the winning ticket.
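The whole algorithm can be exercised with a runnable sketch. This version uses Python threads as "processes" and relies on the CPython interpreter making simple loads and stores of list elements effectively atomic; a truly distributed version would replace the shared arrays with message passing, as discussed below. The process count and iteration count are arbitrary:

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads more often so the spin loops resolve quickly

N = 3                      # number of competing processes
choosing = [0] * N         # C[i]: 1 while process i is picking a number
number   = [0] * N         # N[i]: 0 means process i is not waiting

counter = 0                # shared resource protected by the bakery lock

def lock(i):
    choosing[i] = 1
    number[i] = 1 + max(number)            # take a number one larger than any seen
    choosing[i] = 0
    for j in range(N):
        while choosing[j]:                 # wait while j is still picking a number
            pass
        # wait while j holds a smaller ticket; ties broken by process index
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0                          # return the ticket

def worker(i, iterations):
    global counter
    for _ in range(iterations):
        lock(i)
        counter += 1                       # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                             # 3 processes x 100 increments = 300
```

If lock and unlock are replaced by no-ops, the unprotected counter += 1 updates can interleave and the final count can fall short, which makes this a convenient way to see the algorithm doing real work.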
A useful property of this algorithm is that process i only modifies its own N[i] and C[i], although process i must read the entries of all the others.
A distributed implementation of this algorithm can be produced directly by storing N[i] and C[i] locally with process i, and using message passing when any process wants to examine the values of N and C for any process other than itself.
This demonstrates that mutual exclusion can be done without any central authority! Each process must read the variables of all other processes a minimum of 3 times -- once to select a ticket number, once to see if anyone else is in the process of selecting a number, and once to see if it holds the minimum ticket.
It is possible to improve on these minima, but the demonstration that a centralized resource is not needed to assure mutual exclusion is important.
For each process contending for entry to the critical section, about 6N messages are exchanged, which is clearly not very good. Much better algorithms have been devised, but even this algorithm can be improved by taking advantage of knowledge of the network structure. On an Ethernet or on a tree-structured network, a broadcast can be done in parallel, sending one message to N recipients in only a few time units. On a tree-structured network, the reply messages can be merged on the way back to the root (the originator of the request), so that sorting and searching for the maximum N or the minimum nonzero N can be distributed efficiently.
On a ring-structured network, a token can be circulated indicating the state of the critical section. If a process wants to enter the section, it claims the token the next time the token comes by. If a process doesn't want to enter, it forwards the token every time it sees it. There are many variations on this kind of approach.
Another alternative is for anyone wishing to enter a critical section to broadcast their request; if everyone else agrees that it is OK, the requester enters the section.
If you are in a critical section and you hear others request entry, keep their name in mind, and when you leave, tell them that it's their turn, as far as you are concerned.
This sounds like a remarkably naive algorithm, but with point-to-point communications between N processes, it takes only 2(N-1) messages for a process to enter the critical section: N-1 requests and N-1 replies.
There are some subtle issues that make the result far from naive. For example, what happens if two processes each ask at the same time?
Ricart and Agrawala's mutual exclusion algorithm solves these problems. Each process has 3 significant states:
    * Outside the critical section:
      The process replies immediately to every entry request.

    * After requesting entry, awaiting replies:
      The process only replies to higher priority requests.

    * Inside the critical section:
      The process holds all replies until exit.

As with Lamport's bakery algorithm, this algorithm has no central authority. Nonetheless, the interactions between a process requesting entry to a critical section and each other process have a character similar to client-server interactions. That is, the interactions take the form of a request followed (possibly some time later) by a reply.
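The three states above boil down to a single decision each process makes on receiving a request. Here is a sketch of that decision rule, assuming priorities are (timestamp, process id) pairs where a lower pair means higher priority; the state names and function name are invented for illustration:

```python
OUTSIDE, WAITING, INSIDE = 'outside', 'waiting', 'inside'

def on_request(my_state, my_priority, their_priority):
    """Return True to reply immediately, False to defer the reply until exit."""
    if my_state == OUTSIDE:
        return True                  # outside the section: always reply at once
    if my_state == INSIDE:
        return False                 # inside the section: hold all replies until exit
    # waiting for replies ourselves: reply only to higher priority requests
    # (a lower (timestamp, id) pair wins, and the id breaks timestamp ties)
    return their_priority < my_priority
```

Because both processes apply the same rule to the same pair of priorities, exactly one of two simultaneous requesters defers to the other, which is how the tie question raised above is resolved.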
As such, this algorithm can be made fault tolerant by applying the same kinds of tricks as are applied in other client server applications. On receiving a request, a processor can be required to immediately send out either a reply or a negative acknowledgement. The latter says "I got your request and I can't reply yet!"
With such a requirement, the requesting process can wait for either a reply or a negative acknowledgement from every other process. If it gets neither, it can retry the request to that process. If it retries some limited number of times, it can assume that the distant process has failed and give up on it.
If a process receives two consecutive requests from the same process because acknowledgements have been lost, it must resend the acknowledgement. If a process waits a long time and doesn't get an acknowledgement, it can send out a message saying "are you still there", to which the distant process would reply "I got your request but I can't reply yet." If it gets no reply, it can retry some number of times and then give up on the server as being gone.
If a process dies in its critical section, the protocol above solves the problem and lets one of the surviving processes in. If a process dies outside its critical section, the protocol also works.
There are many ways to break ties between processes that make simultaneous requests; all of these are based on including the priority of each requesting process in the request message:
A unique process ID can be used. This is a static priority assignment and is almost always needed to break ties in any of the more complex cases. Typically, a process will append its statically assigned process ID to any more interesting information it uses for tiebreaking, thus guaranteeing that if two processes happen to generate the same interesting information, the tie will still be broken.
The number of times the process has previously entered the same critical section can be used; if processes that have entered the critical section more frequently are given lower priority, then the system will be fair, giving the highest priority to the least frequent user of the resource.
The time since last access to the critical section offers a similar opportunity to enforce fairness if the process that used the critical section least recently is given the highest priority.
There are many ways to assign process priorities for the purpose of breaking ties. What matters is that, prior to requesting entry to a critical section, a process must pick a unique priority. It broadcasts this priority with its request to enter the critical section, and it keeps that priority constant as it compares its priority with that of other requests.
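The priority schemes above can be written as small functions that build comparable tuples; the function names and bookkeeping arguments here are invented for illustration, and each scheme appends the static process ID as the final tiebreaker, as the text recommends:

```python
def static_priority(pid):
    return (pid,)                        # unique process ID alone

def fair_by_count(entry_count, pid):
    # Frequent users get LOWER priority: counts compare first, and the
    # appended pid breaks ties between processes with equal counts.
    return (entry_count, pid)

def fair_by_recency(last_access_time, pid):
    # Least-recent user wins: an earlier access time sorts first.
    return (last_access_time, pid)
```

A process computes its tuple once, before broadcasting its request, and the tuple then stays constant for the duration of that request, satisfying the uniqueness requirement above.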
In all of these distributed synchronization algorithms, each process needs to respond immediately to some interprocess messages, while it must block awaiting others. The easy way to do this is to divide each process into an application process, which initiates requests for critical section entry, and a local mutual exclusion server, which handles the need for immediate responses to distant processes. These must share the variables used by the synchronization algorithm, as illustrated:
            outgoing requests                  __________
     <----------------------------------------|          |
            incoming replies                  |          |
     ----------------------------------------->|  User   |
                 _________          ___       | Process  |
    incoming    |         |        |   |      |          |
    requests    |  Agent  |        | V |      |          |
    ----------->|         |========| a |======|          |
    <-----------|         |========| r |======|          |
    outgoing    |         |        | s |      |          |
    immediate   |         |        |   |      |          |
    replies     |_________|        |___|      |          |
                                              |          |
            outgoing deferred replies         |          |
     <----------------------------------------|__________|

The Agent process, which could be an interrupt service routine on a primitive system, always handles any immediate reaction to an incoming request (it can also handle the details of the fault tolerant protocol outlined in an earlier aside). It either replies immediately or it logs the request in the variables it shares with the user process.
The user process manipulates the variables, and it directly makes requests to the agents of remote processes and awaits the replies. The user process also sends deferred replies when the time comes to leave the critical section.
The following data structures are needed:
State -- the state of the user process: normal, awaiting replies, or in the critical section. State must change from normal to awaiting replies after Priority is set and before the request to enter is sent out.

Priority -- the priority of the user process.

Replies Needed -- how many replies have yet to arrive after sending out the requests (this is needed only by the user process, not the agent).

Pending Replies -- the set of requests received by the agent but not immediately replied to. State must be set to normal before any of these replies are sent.
It is instructive to look at the code for this mutual exclusion algorithm when it is packaged in the form of an agent process and user callable routines for entry and exit from a critical section -- try writing pseudocode for these.
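As one possible answer to that exercise, here is a sketch of the agent routines and the user-callable entry/exit routines, operating on the data structures listed above. It assumes, as the next paragraph requires, that the four routines execute in mutual exclusion (say, under a local lock); the class name, the send callback, and the (clock, id) priority scheme are all assumptions of this sketch:

```python
NORMAL, AWAITING, IN_CS = 'normal', 'awaiting', 'in_cs'

class MutexNode:
    def __init__(self, my_id, peers, send):
        self.my_id, self.peers, self.send = my_id, peers, send
        self.state = NORMAL
        self.priority = None
        self.replies_needed = 0
        self.pending_replies = []            # requesters whose replies are deferred

    def enter(self, clock):                  # user routine: request entry
        self.priority = (clock, self.my_id)  # pick a unique priority first
        self.state = AWAITING                # change state AFTER setting priority
        self.replies_needed = len(self.peers)
        for p in self.peers:
            self.send(p, ('request', self.priority))

    def on_reply(self):                      # agent routine: a reply arrived
        self.replies_needed -= 1
        if self.replies_needed == 0:
            self.state = IN_CS               # all replies in: enter the section

    def on_request(self, sender, their_priority):   # agent routine
        if self.state == NORMAL or \
           (self.state == AWAITING and their_priority < self.priority):
            self.send(sender, ('reply',))            # immediate reply
        else:
            self.pending_replies.append(sender)      # defer until exit

    def exit(self):                          # user routine: leave the section
        self.state = NORMAL                  # reset state BEFORE replying
        for sender in self.pending_replies:
            self.send(sender, ('reply',))
        self.pending_replies = []
```

The user process calls enter and exit; in a real system, incoming messages would be dispatched to on_reply and on_request by the agent, with the shared variables protected exactly as the question above asks you to work out.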
A good question to ask is: What mutual exclusion do the user and agent need with respect to the variables shown above? The code given in the book (in figure 6.9) says nothing about this because it assumes that the four procedures shown are executed in mutual exclusion.