This exam is open-book, open-notes, closed neighbor!
NOTE: If you want your final grade sent to you by E-mail, put your E-mail address on the front of the exam book. If you want your graded exam mailed to you, put a postal address on the front of the exam book. Otherwise, graded exams will be available for pick-up during the first two weeks of next semester.
NOTE: Essentially all questions on this test can be answered in one short paragraph or less! There are no long essay questions here! Illegible and overly long answers are difficult to grade. Please leave margins around your answers!
This exam is worth 3/10 of the final grade (30 points; allocate 4 minutes per point).
Problem: Assume mark-sweep garbage collection is used. What roots must be checked by the marking phase? (2 points)
Problem: What layers in the ISO OSI protocol hierarchy must be present in the software running on the gateway machine? (2 points)
Problem A: Which of these services would be addressed using the server's widely known public capability, and which would be addressed to objects in the server's internal object table? (2 points)
Problem B: Explain the relationship between the system's limit on the number of threads per process and the number of semaphores this server can safely instantiate. Include an explanation of the problems that might arise if the server instantiates too many semaphores. (2 points)
On receipt of the message, all participants not contending for that lock reply immediately; any participant in a critical section using that lock replies on exiting that critical section; and all participants already awaiting responses to their own requests to enter that critical section reply immediately only if their unique ID is less than the unique ID of the initiator of the current request.
Each participant may accumulate a list of replies that are to be sent when it finally exits the critical section, in case there are many participants awaiting entry. With this algorithm, the set of replies that each process has deferred tells it the set of processes that are blocked awaiting its exit from the critical section.
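To make the reply-or-defer rule concrete, here is a minimal C sketch of the decision each participant makes for a single lock; the message transport is omitted, and every name in it (struct participant, handle_request, exit_critical_section, the deferred array) is invented for illustration rather than part of the system described above.

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_PEERS 16

    struct participant {
        int  id;                   /* this participant's unique ID            */
        bool in_cs;                /* currently inside the critical section   */
        bool requesting;           /* awaiting replies to its own request     */
        int  deferred[MAX_PEERS];  /* the deferred replies: IDs owed a reply  */
        int  n_deferred;           /* when this participant exits the CS      */
    };

    /* Decide how to handle an incoming request from 'initiator'. */
    void handle_request( struct participant *p, int initiator )
    {
        if (p->in_cs) {
            p->deferred[p->n_deferred++] = initiator;   /* reply on exit      */
        } else if (p->requesting && p->id > initiator) {
            p->deferred[p->n_deferred++] = initiator;   /* our ID is not less */
                                                        /* than theirs: defer */
        } else {
            printf( "reply immediately to %d\n", initiator );
        }
    }

    /* On exiting the critical section, send all deferred replies. */
    void exit_critical_section( struct participant *p )
    {
        int i;
        p->in_cs = false;
        for (i = 0; i < p->n_deferred; i++)
            printf( "deferred reply to %d\n", p->deferred[i] );
        p->n_deferred = 0;
    }

    int main( void )
    {
        struct participant p = { 5, false, true, {0}, 0 };  /* ID 5, awaiting replies   */
        handle_request( &p, 3 );     /* 5 is not less than 3, so this reply is deferred */
        handle_request( &p, 9 );     /* 5 is less than 9, so this reply is immediate    */
        p.in_cs = true;              /* later, participant 5 enters the critical section */
        p.requesting = false;
        exit_critical_section( &p ); /* ... and sends its deferred reply to 3 on exit   */
        return 0;
    }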
Problem A: How does this information differ from that presented by the traditional wait-for graph, as used in introductory discussions of deadlock detection algorithms? (1 point)
Problem B: Does this algorithm give a blocked process the information it needs to determine what process currently holds the lock it is awaiting? (1 point)
Problem C: Give a very high-level description of a deadlock detection algorithm suitable for use on this system, given your conclusions from parts A and B. Contrast this algorithm with the classical algorithm based on the wait-for graph. (1 point)
Problem D: Aside from the list of deferred replies stored with each process currently in a critical section, what other information must be stored in this network on behalf of blocked processes? Where is it stored? (1 point)
    #include <fcntl.h>     /* open, O_CREAT, O_EXCL */
    #include <unistd.h>    /* close, unlink, sleep  */

    void lock( char * L ) {
        int fd;
        fd = open( L, O_CREAT|O_EXCL, 0644 );
        while (fd < 0) {
            sleep( 1 );    /* delay 1 second */
            fd = open( L, O_CREAT|O_EXCL, 0644 );
        }
        close( fd );
    }

    void unlock( char * L ) {
        unlink( L );
    }
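For concreteness, a caller might use these routines as follows; the lock file name here is invented for illustration.

    lock( "/tmp/mailbox.lck" );    /* polls once per second until the file can be created */
    /* ... critical section: update the shared resource ... */
    unlock( "/tmp/mailbox.lck" );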
Problem A: Why the sleep command? What would happen if it was omitted? (1 point)
Problem B: What computational cost would you expect to be associated with critical sections that use this kind of lock? (1 point)
Problem C: Suppose code that uses this approach to locking is compiled on an Amoeba system. (In fact, this is usually the case when UNIX utilities such as mailers and web browsers are ported to Amoeba.) Given that the Amoeba file system is distributed, does this imply that Amoeba uses a sophisticated distributed atomic transaction protocol for elementary operations on files? Explain why or why not! (1 point)
All load transfers are initiated by overloaded machines. The overloaded machine seeks bids from machines willing to accept load from it, selects a target machine, and transfers an appropriate job to that machine.
All load transfers are initiated by underloaded machines. The underloaded machine seeks bids from overloaded machines, selects an appropriate job, and requests the transfer of that job to itself.
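As a rough sketch (not part of the exam's specification), the fragment below reduces each bid to a single load figure and each transfer decision to the choice of a machine index; the function names, selection criteria, and load numbers are all invented, and real bids would carry more information, which is what Problem A below asks about.

    #include <stdio.h>

    #define N_MACHINES 4

    /* Scheme 1: an overloaded machine collects bids from machines willing to
       accept load and, in this sketch, pushes a job to the least loaded bidder. */
    int sender_initiated( const double bids[], int n )
    {
        int target = 0;
        for (int i = 1; i < n; i++)
            if (bids[i] < bids[target])
                target = i;
        return target;             /* machine that will receive the job */
    }

    /* Scheme 2: an underloaded machine collects bids from overloaded machines
       and, in this sketch, pulls a job from the most heavily loaded bidder. */
    int receiver_initiated( const double bids[], int n )
    {
        int source = 0;
        for (int i = 1; i < n; i++)
            if (bids[i] > bids[source])
                source = i;
        return source;             /* machine that will give up a job */
    }

    int main( void )
    {
        double bids[N_MACHINES] = { 0.9, 0.2, 0.7, 0.4 };
        printf( "push a job to machine %d\n", sender_initiated( bids, N_MACHINES ) );
        printf( "pull a job from machine %d\n", receiver_initiated( bids, N_MACHINES ) );
        return 0;
    }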
Problem A: What kind of information does each load balancing scheme require to be included in a bid? (2 points)
Problem B: Are there situations where these two algorithms will lead to different load transfers even though the criteria for offloading jobs and accepting jobs are made as nearly identical as possible? Why? (2 points)
Problem C: Generally, is there a reason to prefer one of these algorithms? Why? (1 point)
Each activation object contains a CPU state (PC and other registers) plus a short C-list that we will refer to as the capability registers. Just as several of the registers in the CPU are specialized and essential to our model of a stored-program computer, so are the capability registers specialized. These include the DC, or domain capability, designating the C-list used to represent the activation's domain; the RC, or return capability, used to allow the running thread to return to its caller; and a pair of capabilities, PC1 and PC2, used for passing capability parameters.
There is an operation, activate, that creates an activation object with all access rights. There are operations for reading and writing the data and capability registers in an activation, assuming the required access rights are present, and there is an enter operation allowing the transfer of control to an activation.
The enter operation will, optionally, pass the values from the caller's parameter registers to the new activation, and it will optionally pass a newly created return capability that allows entry to the caller's activation. The enter operation takes a parameter indicating which of these are to be passed, and the entered activation contains a field indicating which it expects to be passed. These must agree or the operation will fail.
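Since DreadcOS ME is a fictional system, the sketch below is only a hypothetical C rendering of the objects just described; every type, field, flag, and function name is invented, and the flag encoding in particular is an assumption made for illustration.

    #include <stdint.h>

    typedef struct capability capability;   /* an opaque capability value */

    enum enter_flags {                       /* what an enter passes, and what an
                                                activation expects to be passed   */
        PASS_PARAMS = 0x1,                   /* pass the caller's PC1 and PC2     */
        PASS_RETURN = 0x2                    /* pass a newly minted return capability */
    };

    struct activation {
        uintptr_t   pc;                      /* CPU state: program counter   */
        uintptr_t   registers[16];           /* CPU state: other registers   */

        capability *DC;                      /* domain capability: the C-list
                                                representing this activation's domain */
        capability *RC;                      /* return capability: re-enter the caller */
        capability *PC1;                     /* capability parameter 1       */
        capability *PC2;                     /* capability parameter 2       */

        enum enter_flags expects;            /* what this activation expects on entry */
    };

    /* enter() transfers control to the activation named by 'target'.  It fails
       unless 'passing' agrees with the target activation's 'expects' field. */
    int enter( capability *target, enum enter_flags passing,
               capability *pc1, capability *pc2 );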
Like DreadcOS, DreadcOS ME is not inherently a distributed system, but the proud developers at Dreadco are convinced that it can be easily incorporated into distributed systems.
Problem A: The enter operation is a gate crossing operation. What operation in Amoeba comes closest to having similar semantics? (2 points)
Problem B: Consider the problem of implementing protected FIFO queues in DreadcOS ME. Obviously, you need to give potential users of such queues access to a capability that allows the user to create a new queue. What kind of object does this capability refer to, and how does the user use that object to create a queue? (2 points)
Problem C: Consider the problem of implementing protected FIFO queues in DreadcOS ME. How is a queue represented (what collection of DreadcOS objects must be involved, and how do they relate to each other), and what does the create queue operation return to the user? (2 points)
Problem D: Consider a network of DreadcOS ME systems, where a user program running on one machine needs to enter an activation on a different machine. To what local object must the user's capability refer, and how can we make actions on this local object have the net effect of transferring control to the remote activation? (2 points)
Problem E: Consider a network of DreadcOS ME systems, where we have solved the problem of implementing a remote enter operation. What kinds of parameters will be easy to pass, what kinds may be difficult to pass, and what kinds are likely to be impossible to pass? (2 points)