Homework 10 Solutions

22C:116, Fall 1995

Douglas W. Jones
  1. Part A: Outline the structure of a decentralized clock synchronization protocol.

    One of many solutions: A simple decentralized clock synchronization protocol can be devised using averaging. Consider organizing the processes in a logical ring, with processes numbered 0 to N-1. If communication were instantaneous, each process i could simply execute:

         do
             C[i] := (C[i] + C[(i+1) MOD N]) DIV 2
         od
    
    where C[i] is obtained from the local clock and C[(i+1) MOD N] from the neighboring process's clock. Then the local clock could be sped up or slowed down as required. After some time, all clocks in the system would converge to a common value.

    Real communication is not instantaneous, so we can use the following algorithm based on the above. We assume that a message is timestamped with the local clock of the sender process.

         Process(i) periodically does the following:
            t0 := C(i)
            send request message to process (i+1) MOD N
            receive reply
            t1 := C(i)
            C' := (t0 + t1) DIV 2
            update C(i) to gain (reply.timestamp - C') time units.
    
    In addition, Process(i) sends immediate timestamped replies to all requests it receives.
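
    As a concrete illustration (not part of the original solution), here is a self-contained C sketch of one probe/reply round, with made-up clock and delay values; all of the numbers and variable names are illustrative assumptions.

         /* One probe/reply round of the protocol above, simulated with
            made-up numbers; all values are illustrative assumptions.      */
         #include <stdio.h>

         int main(void)
         {
             long request_delay = 5, reply_delay = 5;  /* symmetric delays     */
             long neighbor_offset = 40;                /* neighbor runs 40 fast */

             long t0 = 1000;                               /* C(i) at the request */
             long t1 = t0 + request_delay + reply_delay;   /* C(i) at the reply   */
             long stamp = t0 + request_delay + neighbor_offset;  /* reply.timestamp */

             long c_prime = (t0 + t1) / 2;       /* assumed time of the stamp    */
             long adjustment = stamp - c_prime;  /* how much C(i) should gain    */

             printf("adjustment = %ld\n", adjustment);    /* prints 40           */
             return 0;
         }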

    Part B: Consider the problem of running this clock synchronization protocol on a ring network. Would the resulting asymmetry introduce any systematic error in the protocol?

    The answer: Yes. The algorithm assumes that the outgoing and return delays are equal. On a ring network, if the logical ring used to couple processes for clock synchronization is modeled directly on the physical ring, the request delay is short (one hop) while the reply delay is long (N-1 hops). As a result, clock i+1 is actually polled earlier than the time assumed at station i, so clock i is averaged with an early reading. The net effect of this asymmetry is that all of the clocks around the ring remain synchronized (although not as well as they would be with symmetrical delays), but they are all systematically slowed down.

    If the logical ring used for clock synchronization is superimposed on the physical ring in reverse order, the result will be similar, with a systematic speedup instead of a systematic slowdown. If the logical ring is randomly superimposed on the physical ring, the expected systematic error is zero.
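
    To quantify the bias (a quick back-of-the-envelope check, not part of the original solution): if the request takes d1 time units and the reply takes d2, the neighbor's clock is actually read at local time t0 + d1, while the protocol assumes it was read at C' = t0 + (d1 + d2)/2. Each estimate is therefore biased by (d1 - d2)/2. On an N-station ring with unit hop delays this is (1 - (N-1))/2 = -(N-2)/2 time units per round, so every station repeatedly applies the same negative correction.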

  2. The Question: Which of Tanenbaum's distributed mutual exclusion algorithms does the following Ada code exploit?
    task SEMAPHORE is
       entry WAIT;
       entry SIGNAL;
    end SEMAPHORE;
    task body SEMAPHORE is
    begin
       loop
          accept WAIT;
             -- rendezvous has an empty body
          accept SIGNAL;
             -- rendezvous has an empty body
       end loop;
    end SEMAPHORE;
    
    The Answer: The given Ada implementation of a binary semaphore for mutual exclusion appears to be a centralized solution, since a single dedicated task controls the mutual exclusion. After calling SEMAPHORE.WAIT, a process waits until the semaphore task accepts the rendezvous at entry point WAIT. Other processes are then kept waiting until the process in its critical section calls SEMAPHORE.SIGNAL, because the semaphore task is suspended pending a rendezvous at entry point SIGNAL.

  3. The Question: The solution given to problem 4 of the midterm exam uses three semaphores, one for mutual exclusion. Why is the mutual exclusion semaphore needed, and what small change to client and server code allows the elimination of this mutual exclusion semaphore?

    A Solution: The mutual exclusion semaphore is needed so that each client's exchange with the server completes atomically, without interference from other clients. For example, when the server receives the "enqueue" command, the next message it receives is guaranteed to contain the data to be enqueued and to come from the same process that sent the enqueue command.

    The need for "mutex" can be removed by a number of methods. Perhaps the most elegant is to use two channels, one for client server requests, and one for the data transfer between client and server:

         repeat
             receive(request_channel, msg)
             if msg is enqueue then
                 receive(data_channel, msg)
                 put msg in queue
             elsif msg is dequeue then
                 send(data_channel, data from queue)
             endif
         forever
    
    The clients become:
         enqueue(item):
             P(space)
             send(request_channel, enqueue)
             send(data_channel, item)
             V(data)
    
         dequeue(item):
             P(data)
             send(request_channel, dequeue)
             receive(data_channel, item)
             V(space)
    

    This solution is closely related to the Ada binary semaphore given in the previous problem, in that it relies on the fact that all message exchanges are unbuffered (synchronous) to assure that no client will attempt to access the data channel until that client's request message has been delivered.

  4. The problem, part A: Propose a centralized solution to the problem of lost tokens in token ring mutual exclusion, using a designated station on the ring to regenerate lost tokens.

    One of many solutions: Designate one process as the token monitor. If it has not received the token for some time, it sends a probe message around the ring. If a process holds the token, it sets the "token-present" field in the message to true and passes the probe on; otherwise the probe message is simply passed on unchanged. When the probe returns to the monitor process, the monitor can determine whether the token has been lost by checking the "token-present" field of the message and, if necessary, regenerate the token.
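
    A minimal C sketch of the monitor's check follows. The ring is simulated as an array and the ring size, station numbering, and lost token are illustrative assumptions; a real implementation would forward the probe message station to station.

         #include <stdio.h>
         #include <stdbool.h>

         #define N 5                        /* illustrative ring size            */

         bool holds_token[N];               /* all false: the token is lost      */

         int main(void)
         {
             bool token_present = false;    /* field carried by the probe        */

             /* The probe visits every other station once around the ring; a
                station holding the token sets the field, otherwise it simply
                passes the probe along unchanged.                               */
             for (int station = 1; station < N; station++)
                 if (holds_token[station])
                     token_present = true;

             /* The probe has returned to the monitor (station 0).              */
             if (!token_present) {
                 holds_token[0] = true;     /* regenerate the lost token         */
                 printf("monitor regenerated the token\n");
             }
             return 0;
         }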

    The problem, part B: Propose a decentralized solution to this problem.

    One of many solutions: Assume that processes have unique IDs. If a process hasn't seen the token for some time period, it sends a probe message around the ring.

       Probe message:
          sender: the ID of the process which originally sent the probe
          token-present: set to false initially
          abort: initially set to false
    
    As before, if a process holds the token, it sets the "token-present" field to true and passes on the message, thus preventing regeneration. If two or more processes send probes around the ring, only one may regenerate the token! We use the abort field to ensure this; when a process is awaiting the return of a probe it has already sent around the ring and it receives a probe originated by some other process, it updates the abort field of the probe as follows before passing the probe on:
       abort := abort OR (originator's ID > my ID)
    
    In effect, an election is held among the processes attempting to recover from the lost token: the contender with the smallest ID is the only one whose probe comes back with abort still false, so it alone regenerates the token. (The OR keeps a later station from clearing an abort flag that an earlier station has already set.)

    Of course, if a process receives the token itself before its own probe comes back around the ring, it must ignore that probe when it returns!
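
    The following self-contained C sketch pulls the Part B rules together. The ring walk, station numbering, and the two contending stations are illustrative assumptions; a real implementation would forward each probe one hop at a time.

         #include <stdio.h>
         #include <stdbool.h>

         #define N 5

         typedef struct {
             int  sender;           /* ID of the station that sent the probe    */
             bool token_present;    /* set by any station holding the token     */
             bool abort;            /* set by a rival contender                 */
         } probe_t;

         bool holds_token[N];       /* all false: the token has been lost       */
         bool awaiting_probe[N];    /* true while a station's own probe is out  */

         /* What one station does when a probe arrives.                         */
         void handle_probe(int me, probe_t *p)
         {
             if (holds_token[me])
                 p->token_present = true;
             if (awaiting_probe[me] && p->sender != me)
                 p->abort = p->abort || (p->sender > me);   /* election rule    */
         }

         int main(void)
         {
             awaiting_probe[1] = awaiting_probe[3] = true;  /* two contenders    */

             for (int origin = 1; origin <= 3; origin += 2) {
                 probe_t p = { origin, false, false };
                 for (int hop = 1; hop < N; hop++)          /* once around ring  */
                     handle_probe((origin + hop) % N, &p);
                 if (awaiting_probe[origin] && !p.token_present && !p.abort) {
                     holds_token[origin] = true;            /* regenerate        */
                     awaiting_probe[origin] = false;
                     printf("station %d regenerated the token\n", origin);
                 }
             }
             return 0;
         }

    In this sequential simulation station 1, the lower-numbered contender, regenerates the token; station 3's probe then finds the regenerated token at station 1 and is discarded. In a real concurrent run station 3's probe might instead come back with its abort field set, but either way only station 1 regenerates.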