Midterm Solutions And Commentary

22C:116, Fall 2001

Douglas W. Jones


Exam Grade Distribution

Median = 8.0
                 X
                 X       X
_________X___X___X_______X_X_X_____X_X_X_________________________________X__
 2 . 3 . 4 . 5 . 6 . 7 . 8 . 9 . 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20


Solutions and Comments

  1. Part A: Which trap service routine dispatches kernel calls.
    A function call instruction can only cause a trap if the destination address is invalid, so we must conclude that the MMU trap service routine handles kernel call dispatching. 6 got this right, a very few gave complex but wrong answers, and the remainder received no credit.

    Part B: What other traps does this service routine handle.

    If kernel calls are handled by the MMU trap service routine, this routine may also handle illegal address traps and access rights traps, and if the system supports demand paged virtual memory, page fault traps. 6 said page fault traps, 4 said access rights traps, 2 gave correct answers in the context of wrong answers to part A.

    Part C: How does this service routine distinguish between kernel calls and the other traps it handles.

    We could reserve some virtual address for all kernel calls, or we could use a special value in the page-table entry to indicate that a page is being used for kernel calls. 5 got this much. We should also check to see that the bad virtual address is the same as the saved program counter; if not, the trap was caused by a load or store operation, not a jump or call to the special address, as specified in the problem statement. Only 2 noticed this.
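
    A minimal sketch of this check, assuming the first of these approaches (a reserved virtual address range) and using hypothetical names trap_va (the faulting virtual address), trap_pc (the saved program counter), KCALL_BASE and KCALL_COUNT (the reserved range), might look like this:

        /* inside the MMU trap service routine -- a sketch, not real code */
        if (trap_va >= KCALL_BASE
        &&  trap_va <  KCALL_BASE + KCALL_COUNT
        &&  trap_va == trap_pc) {
            /* an instruction fetch from the reserved range: a kernel call */
            dispatch_kernel_call( trap_va - KCALL_BASE );
        } else {
            /* a page fault, illegal address or access rights violation */
            handle_memory_fault( trap_va );
        }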

    Part D: How does the kernel-call trap service routine return. Hint: It does not simply do a return-from-trap, as is done on return from a page-fault-trap; the saved program counter is wrong!

    The calling program pushed its return address on its local stack (or saved its return address wherever this machine saves return addresses) prior to the attempted instruction fetch from the destination address. Therefore, the return from a kernel call involves looking into the saved state of the caller and emulating a return from the called function prior to the actual return from trap. We might, for example, pop a word from the caller's stack into the caller's saved program counter prior to the conventional return from trap.
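
    As a sketch, assuming a machine that pushes return addresses on the user stack and using hypothetical names for the saved-state fields and helper routines:

        /* emulate a function return on behalf of the caller -- a sketch */
        word_t ra;
        copy_from_user( &ra, saved->sp, sizeof ra );  /* read the return address */
        saved->sp += sizeof ra;                       /* pop it from the caller's stack */
        saved->pc  = ra;                              /* resume the caller there */
        return_from_trap( saved );                    /* conventional return from trap */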

  2. Part A: Give these unlink operations in the correct order.
    To execute "rmdir x" we unlink "x/." "x/.." (in any order) before we unlink "x". 5 did this, 2 forgot "x/.." and 4 forgot "x" itself.

    Part B: What actually causes the reclamation of the storage space occupied by the directory x?

    The unlink operation does two things. First, it removes the link from the directory to the file or directory being unlinked, and second, it decrements the reference count of the file or directory being unlinked. If this second operation produces a reference count of zero, the I-node and all disk sectors it references are reclaimed. 7 got this and 2 had odd partially correct answers.

    3 asserted that the space is not reclaimed until it is needed for a new file, but this confuses the issue of reuse (the actual overwriting of the data in the disk sectors of the old file) and reclamation (placing these sectors in the free-space data structure).
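
    In rough outline, the heart of unlink() might read as follows; the i-node field names and helper routines here are hypothetical:

        /* unlink( path ) -- a sketch of the reclamation logic */
        inode = remove_directory_entry( path );  /* drop the link itself */
        inode->link_count -= 1;                  /* one less reference */
        if (inode->link_count == 0) {
            free_data_sectors( inode );  /* put the sectors on the free list */
            free_inode( inode );         /* reclaim the i-node as well */
        }

    Note that free_data_sectors() only places the sectors in the free-space data structure; nothing is overwritten until those sectors are reused for some new file.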

  3. Part A: What deadlock model is applicable to interprocess communication using this service? (The notes for lecture 8 covered this subject best.)
    If the only blocking is done by the lockf() service, the applicable deadlock model is the AND model (implied by the statement that the locks are not claimed until all are available). 9 got this.

    Part B: Some kind of lock data structure must be attached to files. Where does this go?

    The lock data structure cannot be in the open file table of a process, because if it was there, other processes could not easily check it to see if some sector is locked. The data structure cannot be stored on disk because it is not permanent; it only applies to files that some process has open. Therefore, it should be in the in-memory data structure that is shared by all processes that have opened the same file; this is sometimes called the in-memory copy of the I-node. 3 got this right, 2 had half credit.

    Part C: The conceptual model suggested in the background is not an appropriate implementation. Suggest an appropriate lock data structure.

    One semaphore per byte of the file would require an outrageous amount of memory. Even one bit per byte of the file (as several suggested) would be outrageous and would not allow any deadlock detection, nor would it allow the bytes locked by an application to be unlocked when that application closes the file. The obvious thing to note is that the lock operation requests locks on ranges of bytes. Therefore, it would be reasonable to store, with the open file, a list of the ranges of bytes that each process has locked. Each range is described by its starting and ending byte number, plus the process ID of the process that has locked that range. 3 did well here, 3 had partial credit.
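
    A minimal sketch of one such record, with hypothetical field names, chained from the in-memory copy of the I-node:

        #include <sys/types.h>        /* off_t, pid_t */

        /* one locked range of bytes in an open file -- a sketch */
        struct lock_range {
            off_t  first;             /* first locked byte */
            off_t  last;              /* last locked byte */
            pid_t  owner;             /* process holding the lock */
            struct lock_range *next;  /* next locked range on this file */
        };

    A lockf() request then scans this list for overlapping ranges owned by other processes, and close() can delete every range whose owner field matches the closing process.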

  4. Part A: Suggest a representation, at a coarse level of detail, for a bounded buffer implementation of an interprocess message passing communication channel.
    A bounded buffer of capacity n could be described by a shared file containing storage for n items plus head and tail pointers. This file may be accessed using either mmap() or read() and write(); it will work either way. The lockf() primitive can be used for mutual exclusion on the head and tail pointers. 1 did well here, 1 forgot to worry about mutual exclusion, and 8 got a small amount of credit for very complex but unworkable proposals.
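
    A sketch of the layout of such a shared file, with hypothetical names and a fixed item size:

        /* layout of the shared channel file -- a sketch */
        #define CAPACITY  64
        #define ITEM_SIZE 128
        struct channel {
            long head;                        /* index of the next item to read */
            long tail;                        /* index of the next free slot */
            char items[CAPACITY][ITEM_SIZE];  /* the bounded buffer itself */
        };

    A sender uses lockf() to lock the region holding head and tail, checks that the buffer is not full, copies its message into items[tail], advances tail modulo CAPACITY, and releases the lock; the receiver does the symmetric operations on head.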

    Part B: Given the process ID of the receiving process, outline at a coarse level of detail how a process can send a message to the recipient.

    We could use the process ID of the receiving process to construct a textual file name unique to the receiver; this file holds the message channel for messages to that recipient, built as described in part A. 1 did well, 6 had complex but unworkable answers, usually involving a single shared channel with destination addresses stored with the messages in the channel.
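
    A sketch of the name construction, assuming a hypothetical directory /tmp/channels holds the channel files and that recipient_pid holds the given process ID:

        /* open the channel for messages to a given recipient -- a sketch */
        char name[64];
        sprintf( name, "/tmp/channels/%d", (int) recipient_pid );
        fd = open( name, O_RDWR | O_CREAT, 0666 );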

    Part C: Something is missing in the set of primitives that have been given. What? To find the answer, consider how a process awaits a message in the channel you have built.

    None of the primitives given in the exam allow a reasonable solution to the synchronization problem. Therefore, the producer must use busy waiting to await free space in the channel and the consumer must use busy waiting to await a message. This is workable but inefficient. 2 did well here and 4 mentioned semaphores without convincing me that they understood what the semaphores were needed for.
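
    A sketch of the receiver's busy-waiting loop, reusing the hypothetical channel layout suggested for part A:

        /* await a message -- a sketch using busy waiting */
        long head, tail;
        for (;;) {
            lseek( fd, 0L, SEEK_SET );
            lockf( fd, F_LOCK, 2 * sizeof (long) );  /* lock head and tail */
            lseek( fd, 0L, SEEK_SET );
            read( fd, &head, sizeof head );
            read( fd, &tail, sizeof tail );
            if (head != tail) break;                 /* a message is waiting */
            lseek( fd, 0L, SEEK_SET );
            lockf( fd, F_ULOCK, 2 * sizeof (long) ); /* nothing yet; release and retry */
        }
        /* copy the item at index head out of the file, advance head and
           write it back, then release the lock on head and tail */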