Homework 9 Solutions
22C:116, Spring 1997
Ioana M. Lungeanu
Problem 1 part A:
The ISO OSI protocol hierarchy addresses this problem at the
PRESENTATION layer.
Problem 1 part B:
In the context of the RPC relationship between client and server,
any data format conversion is done in the client and/or server
stubs. In practice, both stubs convert data from their own format
to a "standard" network format and back from the standard format
to their own.
Other strategies can be imagined; for example, the client stub
could first send a description of its data format, and the server
stub would then convert the incoming data to its own format.
Using a canonical format is computationally easier because each
stub need know only two formats, the local format and the
canonical format. If the local and canonical formats are the same,
the stub need do no work.
Problem 2 part A:
The clock drift rate is the rate at which a real clock's reading
deviates from the "ideal" clock's value. After one second, the
difference between two clocks that started with the same value
(assuming the drift rates are given in seconds per second) can be
as much as double the maximum drift rate, if the two clocks drift
in opposite directions.
Let us assume that the synchronization algorithm sets the two
clocks on a link to the mean of the two readings. Consider a
linear network with 11 processors and 10 links, where P0 has a
drift of -0.01 and P1 through P10 all have drifts of +0.01.
Assume that synchronization is done once per second in the order
P10-P9, P9-P8, ..., P1-P0. This is a bad case, because the
influence of P0's slow clock takes 10 seconds to reach P10. For
the first 9 seconds P10 runs in phase with P9, so no adjustment
is made to its clock value, since their drift rates are the same.
Using a simple simulation, the maximum error observed in this
situation was 0.04 seconds. A simple theoretical argument
suggests that the maximum observed error will grow as O(n) where
n is the diameter of the network.
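A sketch of such a simulation in C follows (not part of the
original write-up). It assumes one synchronization round per
second in the stated order and measures the error as the largest
skew between any two clocks just before each round; those details
are assumptions, so the number it prints need not match the 0.04
seconds quoted above.

    #include <stdio.h>

    #define NPROC   11        /* processors P0..P10 on a line   */
    #define SECONDS 30        /* how long to run the simulation */

    int main(void) {
        double clock[NPROC], drift[NPROC], max_err = 0.0;
        int i, t;

        drift[0] = -0.01;                             /* P0 runs slow  */
        for (i = 1; i < NPROC; i++) drift[i] = 0.01;  /* P1..P10 fast  */
        for (i = 0; i < NPROC; i++) clock[i] = 0.0;

        for (t = 1; t <= SECONDS; t++) {
            double lo, hi, mean;

            /* each clock advances one real second at its own rate */
            for (i = 0; i < NPROC; i++) clock[i] += 1.0 + drift[i];

            /* record the worst skew seen just before synchronizing */
            lo = hi = clock[0];
            for (i = 1; i < NPROC; i++) {
                if (clock[i] < lo) lo = clock[i];
                if (clock[i] > hi) hi = clock[i];
            }
            if (hi - lo > max_err) max_err = hi - lo;

            /* synchronize link by link: P10-P9, P9-P8, ..., P1-P0 */
            for (i = NPROC - 1; i > 0; i--) {
                mean = (clock[i] + clock[i - 1]) / 2.0;
                clock[i] = clock[i - 1] = mean;
            }
        }
        printf("maximum observed skew: %.3f seconds\n", max_err);
        return 0;
    }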
Problem 2 part B:
The drift of a composite clock is equal to the mean of the
individual drifts of the component clocks (sum of all the drifts
over the number of clocks).
Since the individual clock drift rates are randomly distributed
over the allowable range of drift rates, both in magnitude and in
sign (slower vs. faster clocks), the sum of the drifts of all 100
clocks is very close to 0. Dividing it by 100 makes it even
smaller, so we can expect very high accuracy from the composite
clock.
Recalling elementary statistics, if N independent measurements
of the same quantity are averaged, where each measurement has an
expected error of e, the expected error in the average is
e/sqrt(N) -- so the composite of 100 clocks, where each has an
expected drift of d, will have an expected drift of d/10.
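The small C program below (an illustration added here; the uniform
drift distribution and the 0.01 s/s bound are assumptions)
estimates this effect by averaging the drifts of many randomly
generated 100-clock ensembles; the RMS drift of the composite
comes out roughly ten times smaller than that of a single clock.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N      100        /* clocks per composite clock         */
    #define TRIALS 10000      /* number of composite clocks sampled */
    #define D      0.01       /* maximum individual drift (s/s)     */

    /* uniform random drift in [-D, +D] */
    static double random_drift(void) {
        return D * (2.0 * rand() / (double)RAND_MAX - 1.0);
    }

    int main(void) {
        double sumsq = 0.0;
        int t, i;

        srand(1);
        for (t = 0; t < TRIALS; t++) {
            double sum = 0.0, avg;
            for (i = 0; i < N; i++) sum += random_drift();
            avg = sum / N;           /* drift of the composite clock */
            sumsq += avg * avg;
        }
        /* A single uniform drift in [-D, D] has RMS value D/sqrt(3);
           averaging N of them divides that by sqrt(N), a factor of
           10 for N = 100.                                           */
        printf("RMS composite drift: %g (single clock RMS: %g)\n",
               sqrt(sumsq / TRIALS), D / sqrt(3.0));
        return 0;
    }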
Problem 3:
Each process i keeps local copies of the entire arrays C and N:
_ _ _ _ _ _ _ _ _ _ _
C |_|_|_|_|_|_|_|_|_|_|_|
N |_|_|_|_|_|_|_|_|_|_|_|
The main idea for reducing message traffic, compared to the original
algorithm, is analogous to the difference between polling and interrupts.
Instead of polling each process j for its state C[j] and N[j],
each process j broadcasts its state to all other processes only
when it modifies that state. To be sure that the messages are received
by their destinations, the sender waits for acknowledgements from all
receivers. In this way the traffic is moved from the evaluation of C and
N to the assignment of C and N. The new version is better because each
entry to the critical section costs exactly 6N messages, no more:
2N messages when C[i] := 1;
2N messages when N[i] := max(...)+1 and C[i] := 0 (broadcast together);
2N messages when N[i] := 0;
In the original version, C[j] and N[j] are checked in busy-waiting loops,
and each pass over the other processes involves on the order of 2N
messages (a request and a reply per process), so the total number of
messages may be very large.
The algorithm is otherwise the same, with the observation that when C[i]
or N[i] is modified, process i broadcasts the new values and waits
for the acknowledgements from all other processes. When a process
receives such a message, an interrupt routine is called which updates
the corresponding entries in the local arrays C and N and sends an
acknowledgement back to the sender.
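As an illustration of where these broadcasts fit, here is a minimal
C sketch (not part of the original answer). To keep it self-contained
and runnable it simulates the processes as threads sharing a single
copy of C and N, so broadcast_and_wait_acks() degenerates to the
store that precedes it; in a real distributed implementation that
routine would send the new value to all other processes and block
until every acknowledgement arrived. The thread and counter details
are assumptions made only for the demonstration.

    #include <stdio.h>
    #include <pthread.h>
    #include <stdatomic.h>

    #define NPROC  4          /* number of simulated processes        */
    #define ROUNDS 1000       /* critical-section entries per process */

    static atomic_int C[NPROC];  /* local copies of the "choosing" array */
    static atomic_int N[NPROC];  /* local copies of the "number" array   */
    static int counter = 0;      /* touched only inside critical section */

    /* In the modified algorithm, every assignment to C[i] or N[i] is
       followed by a broadcast of the new value to all other processes,
       and the sender waits for all acknowledgements (the 2N messages
       counted above).  In this shared-memory simulation the preceding
       store already made the value visible, so the routine is empty.  */
    static void broadcast_and_wait_acks(int i) { (void)i; }

    static void *process(void *arg) {
        int i = *(int *)arg, j, r;

        for (r = 0; r < ROUNDS; r++) {
            int max = 0;

            /* entry protocol (Lamport's bakery algorithm) */
            C[i] = 1;                  broadcast_and_wait_acks(i);
            for (j = 0; j < NPROC; j++) if (N[j] > max) max = N[j];
            N[i] = max + 1; C[i] = 0;  broadcast_and_wait_acks(i);
            for (j = 0; j < NPROC; j++) {
                while (C[j]) ;                     /* local reads only */
                while (N[j] != 0 &&
                       (N[j] < N[i] || (N[j] == N[i] && j < i))) ;
            }

            counter++;                             /* critical section */

            /* exit protocol */
            N[i] = 0;                  broadcast_and_wait_acks(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NPROC];
        int id[NPROC], i;

        for (i = 0; i < NPROC; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, process, &id[i]);
        }
        for (i = 0; i < NPROC; i++) pthread_join(t[i], NULL);
        printf("counter = %d (expected %d)\n", counter, NPROC * ROUNDS);
        return 0;
    }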
Problem 4 part A:
Mutual exclusion algorithms are needed whenever shared data is
involved, while election algorithms are run whenever a fault
causes a special authority process to fail.
Mutex algorithms are hard to design in the presence of faults
(most assume that faults do not occur), while election algorithms
are designed precisely for a faulty environment.
In mutex algorithms fairness is an important issue: a process
trying to enter the critical section must eventually succeed. In
election algorithms the initiator need not be the winner, and
fairness is not an issue.
In a mutex algorithm each process only needs to know whether or
not it is the winner; in an election algorithm all currently alive
processes must agree upon the same winner.
In a mutex algorithm only the processes interested in entering the
critical section participate (it may happen that all of them are),
while in an election algorithm all alive processes must participate.
Problem 4 part B:
A mutual exclusion algorithm based on a centralized authority cannot
be modified to serve the purpose of election, since a fault of the
"authority" makes the whole system fail for good.
Distributed mutual exclusion algorithms, in which every process runs
the same code, can be transformed to perform election. The algorithm
is initiated by a process that discovers the failure of the leader
(instead of a process that wants to enter the critical section). All
the other processes respond as if they all wanted to enter the critical
section. Finally, the winner lets every other process know its
identity. Apart from these three modifications the algorithm runs
as it did for the mutual exclusion problem.
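As a toy illustration of these three modifications (not part of the
original answer), the single-program C sketch below assumes a
Ricart-Agrawala-style mutex algorithm in which the request with the
smallest (timestamp, id) pair wins; since all election requests here
carry the same timestamp, the surviving process with the smallest id
ends up being announced as the leader. The process numbering and
failure pattern are made up for the example.

    #include <stdio.h>

    #define NPROC 6

    /* alive[i] is nonzero if process i has not crashed; process 0 is
       the old leader, which has just failed.                          */
    static int alive[NPROC] = { 0, 1, 1, 1, 0, 1 };

    /* Would process i "enter the critical section", i.e. win?  In a
       Ricart-Agrawala-style algorithm it wins when every other alive
       process defers to its (timestamp, id) request; with equal
       timestamps that means no alive process has a smaller id.        */
    static int wins(int i) {
        int j;
        if (!alive[i]) return 0;
        for (j = 0; j < i; j++)
            if (alive[j]) return 0;      /* j would not defer to i */
        return 1;
    }

    int main(void) {
        int initiator = 3;   /* modification 1: a process that noticed
                                the leader's failure starts the run    */
        int leader = -1, i;

        printf("P%d detects the leader's failure and starts the election\n",
               initiator);

        /* modification 2: every alive process behaves as if it wanted
           to enter the critical section, so all of them take part     */
        for (i = 0; i < NPROC; i++)
            if (wins(i))
                leader = i;  /* modification 3: instead of entering the
                                critical section, the winner announces
                                its identity to everyone               */

        printf("all processes learn the new leader: P%d\n", leader);
        return 0;
    }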