The exam was graded on an 8 point scale, one point for each part of each
problem. Grades were then multiplied by 2.5 and rounded to one place after
the point so that the maximum possible score was brought up to 20 points.
-
Background:
Recall the first study question. Our hypothetical system has one virtual
address map per process, and any number of processes may share any particular
page.
Consider the option of lazy map update. In this case, when a page is
moved from disk to main memory, only the map of the process that actually
caused the page-fault is updated. Other processes may have map entries that
still point to the page on disk.
Part A:
As defined here, lazy map update applies only when pages are moved from disk
to main memory. Why can't lazy update be used when pages are moved from main
memory to disk?
- Answer:
-
If invalid map entries are inconsistent with the current location of a page,
the page-fault-service routine will be called into action when they are used,
and this code can fix the inconsistency. This is why lazy update works for
bringing pages in from disk. On moving pages from main memory to disk,
however, there may be a number of valid map entries pointing to the frame
the page used to occupy, and because they are valid, the hardware will not
call on the software needed to correct them when they are used.
- Comment:
-
Some students were distracted by the clean-page versus dirty-page distinction
or by issues of page replacement policy. Checking the wording of the question
shows that these are clearly irrelevant.
It turns out that a complete implementation of lazy update involves
invalidating all map entries on map pages before those pages are moved to
disk. As a result, the only valid map entries will be on map pages that
are in main memory. These may be easily invalidated when a page is ejected
to disk.
Part B:
Outline the operation of a page-fault handler that employs lazy map update
when pages are brought from disk to main memory while still ensuring that
shared pages are correctly implemented from the user's viewpoint.
Focus on bringing new pages into memory and on the differences between
this modified handler and a handler which does prompt update.
- Answer:
-
When a page fault occurs, the handler must first determine if the page in
question is already in main memory. If it is, the current page fault is
a soft page fault, and all that needs to be done is to update the map entry
to point to the page's current location in main memory. If not, the page
fault can be handled conventionally, by first ejecting some other page from
main memory, and then bringing in the desired page and updating the map entry.
The difficult question is how to determine whether the desired page is
already in main memory. If the frame table contains, for each frame, the
disk address of the page currently occupying that frame, a search of the
frame table will suffice to answer this question.
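The handler outlined above can be sketched as follows. This is a minimal illustration, not a real kernel's code; all names (Frame, handle_page_fault, the stub policy routines) are invented for the example, and the replacement policy is deliberately trivial.

```python
class Frame:
    def __init__(self, disk_address=None):
        self.disk_address = disk_address  # disk address of the resident page

def choose_victim(frame_table):
    return 0  # trivial replacement policy, just for the sketch

def write_back_if_dirty(frame):
    pass  # placeholder: a real handler would check the dirty bit

def read_page_from_disk(disk_address, frame_number):
    pass  # placeholder for the actual disk transfer

def handle_page_fault(page_map, virtual_page, disk_address, frame_table):
    # Soft fault: another process already brought the page in; only this
    # process's map entry is stale, so just fix the entry.
    for frame_number, frame in enumerate(frame_table):
        if frame.disk_address == disk_address:
            page_map[virtual_page] = frame_number
            return "soft fault"
    # Hard fault: handle conventionally -- eject a page, bring in the new one.
    victim = choose_victim(frame_table)
    write_back_if_dirty(frame_table[victim])
    read_page_from_disk(disk_address, victim)
    frame_table[victim].disk_address = disk_address
    page_map[virtual_page] = victim
    return "hard fault"
```

Note that the only difference from a prompt-update handler is the initial search of the frame table; everything after the search is conventional.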
-
Background: Recall that semaphores are typically implemented as
a counter plus a linked list of blocked processes and that
a binary semaphore is one where the counter may never take
on values other than zero or one.
Part A: What information is missing from the implementation of
semaphores suggested above that would be needed to detect deadlocks in a
resource-centered system that used binary semaphores for mutual exclusion?
- Answer:
-
The list of blocked processes attached to each semaphore provides the needed
information to determine which processes are waiting for what resources,
but with conventional semaphore implementations, there is no record of
what semaphores have been claimed by which process.
The missing information could be stored as a pointer from the semaphore to
the process that had claimed it. This would be set by P to point to the
process that has been granted the semaphore and reset by V when that process
releases the semaphore.
The one difficulty with the available information is that the list of blocked
processes is stored with the semaphore -- that is, the pointers go in the
opposite direction of the links needed for deadlock detection on a graph.
The necessary information is available, it is just in the wrong form. Small
changes to the way the list of blocked processes is maintained can correct
this.
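A sketch of a binary semaphore extended with the missing holder pointer follows. The class and field names are invented for this illustration; a real implementation would block and wake actual processes rather than manipulate bare objects.

```python
from collections import deque

class Process:
    def __init__(self, name):
        self.name = name
        self.waiting_for = None      # semaphore this process is blocked on

class BinarySemaphore:
    def __init__(self):
        self.count = 1
        self.blocked = deque()       # conventional queue of blocked processes
        self.holder = None           # the missing information: current owner

    def P(self, process):
        if self.count == 1:
            self.count = 0
            self.holder = process    # set by P when the claim is granted
        else:
            process.waiting_for = self
            self.blocked.append(process)

    def V(self):
        if self.blocked:
            woken = self.blocked.popleft()
            woken.waiting_for = None
            self.holder = woken      # ownership passes to the woken process
        else:
            self.count = 1
            self.holder = None       # reset by V on release
```

With `holder` and `waiting_for` in place, the pointers needed for deadlock detection run in both directions: from a blocked process to the semaphore it waits on, and from the semaphore to the process that holds it.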
Part B:
What deadlock detection algorithm is
applicable, when should the algorithm be initiated, and what would you suggest
it do when it detects a deadlock?
- Answer:
-
The applicable model is a single resource model (the P operation blocks on
a single semaphore), so simply tracing through the graph by following pointers
from blocked process to semaphore to process to semaphore seeking a cycle
will suffice. The search should be initiated by a P operation before that
operation blocks. If the search ends up finding the initiating process,
there is a deadlock; if it finds a running process, there is no deadlock.
If a deadlock is detected, the easy solution is to abort the process that
initiated the P operation that completed the cycle. It would be nicer to
raise an exception in that process. It is very dangerous to abort a process
that is in a critical section, as this could leave the data structures
protected by that critical section in a corrupted state.
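The cycle trace described above can be sketched as follows, assuming each semaphore records its holder and each process records the semaphore it is blocked on (invented names; in a real system these pointers live in the kernel's existing data structures, so no graph is constructed).

```python
class Semaphore:
    def __init__(self):
        self.holder = None           # process currently holding the semaphore

class Process:
    def __init__(self, name):
        self.name = name
        self.waiting_for = None      # semaphore the process is blocked on

def would_deadlock(initiator, wanted):
    """Run by P before blocking: trace process -> semaphore -> holder -> ..."""
    current = wanted.holder
    while current is not None:
        if current is initiator:
            return True              # the chain leads back to us: deadlock
        if current.waiting_for is None:
            return False             # reached a running process: no deadlock
        current = current.waiting_for.holder
    return False                     # a free semaphore ends the chain
```

Because each blocked process waits on exactly one semaphore and each semaphore has one holder, the trace follows a single chain and needs no general graph search.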
- Comment:
-
No deadlock detection algorithms "construct the graph" as part of the
algorithm. The graph is already there, in the links between the various
data structures in the system. Initiation of the algorithm only when the
ready list is empty or when the CPU load falls below some threshold will not
detect all deadlocks. Periodic initiation of the algorithm is sufficient,
but the algorithm is more complex if this is done. The problem statement
makes it quite clear that this is not an and-model, as each process waits on
exactly one binary semaphore at a time.
-
Background:
Our hypothetical system has one virtual
address map per process, and any number of processes may share any particular
page. When a process opens a file, the pages of that file are inserted into
the address space of that process. The right to open a file is controlled
by an access control list attached to that file.
Part A:
What protection model does this system implement?
For the purpose of this part, focus only on processes and the pages they
currently share, and pay no attention to how pages come to be shared.
- Answer:
-
This system uses capability-based addressing. If you ignore how the pages
come to be shared and focus only on access to already shared pages, as
instructed, it is clearly based on capability lists, where the address map
of a process acts as a capability list.
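A toy illustration of the idea, with invented names: each map entry both locates a page and confers rights over it, so possession of the entry is the capability and no separate permission check is consulted at access time.

```python
class MapEntry:
    def __init__(self, frame, rights):
        self.frame = frame
        self.rights = rights         # e.g. {"read", "write"}

def translate(page_map, virtual_page, mode):
    """Possession of a map entry IS the capability; no ACL is consulted."""
    entry = page_map.get(virtual_page)
    if entry is None or mode not in entry.rights:
        raise PermissionError("no capability for this access")
    return entry.frame
```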
Part B:
Does the complete system implement some specific protection model? Please
take into account the file directory structure, the rights to open files,
and the rights to access pages of files already opened. If yes, what
model and why? If no, why not?
- Answer:
-
The use of capability-based addressing at one level, with access control
lists at a higher level, suggests that there is no coherent protection
model for the whole system. Students who pointed this out got full credit.
In fact, however, the access rights conferred by the capability lists are
derived from the rights conferred in the access control lists that govern
the right to open a file, so the overall protection model for the system is
given by the access control lists on the files. This is a subtle point!
-
Background:
Recall the second study question. Our hypothetical system includes processes
that may lock records in a database before inspecting and updating them.
A transaction proceeds by locking the requisite records, one at a time, then
inspecting and updating them, as needed, and then unlocking them, one at a
time.
Part A:
Outline, in graph theoretic terms, the structure of a deadlock detection
algorithm applicable to this system.
- Answer:
-
This is a single-resource model. The graph is bipartite, with at most a
single outgoing edge per vertex. The algorithm finds cycles in this
graph by following outgoing edges from a vertex and terminating when it
reaches a vertex it has already visited on the same path. (This runs in
at most O(n) time, where n is min(p,r), p being the number of processes
and r the number of lockable database records.)
Part B:
Suppose that the complete set of database records needed for a transaction
can be determined before any of them are locked. There is a simple deadlock
avoidance algorithm that applies in this case. Explain it.
- Answer:
-
Imposing an order on the records and claiming locks in that order will
completely eliminate the possibility of deadlock (as was mentioned in
class, for example by using bank account numbers to order the locks needed
on multi-account transactions).
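The ordering discipline can be sketched as follows, using the record number (for example, an account number) as the global ordering key. The names are invented for the example; the point is only that every transaction acquires its locks lowest-number-first.

```python
import threading

class Record:
    def __init__(self, number):
        self.number = number         # global ordering key, e.g. account number
        self.lock = threading.Lock()

def lock_in_order(records):
    """Acquire all of a transaction's locks, lowest record number first."""
    ordered = sorted(records, key=lambda r: r.number)
    for record in ordered:
        record.lock.acquire()
    return ordered

def unlock_all(records):
    for record in records:
        record.lock.release()
```

Deadlock is impossible under this discipline because a cycle of waits would require some process to wait for a lower-numbered lock while holding a higher-numbered one, which the acquisition order forbids.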