Exam 2: Final

Solutions and Commentary

Part of the homework for 22C:112, Spring 2012
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Grade Distributions

Exam 2

                   X   X           X     X
    0 . 2 . 4 . 6 . 8 . 10. 12. 14. 16. 18. 20. 22. 24.

Exams 1 plus 2

Mean   = 18.72
Median = 17.6        X X                     X
    4 . 6 . 8 . 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30. 32

Programming Assignments 1 to 6

Mean   = 18.62
Median = 17.0        X     X                   X   X
    4 . 6 . 8 . 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30

Homework Assignments, top 10 of 12

Mean   = 18.30
Median = 18.2
             X                                X
  ___________X_________X_X_X___X_X_X___ X_____X_____________
    4 . 6 . 8 . 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30

Total Scores

Mean   = 55.63
Median = 57.9
    28. 32. 36. 40. 44. 48. 52. 56. 60. 64. 68. 72. 76. 80
         - -    C    + + - -    B    + + - -    A    + +

Exam Solutions and Commentary

  1. Background: Consider these two variations on the posted solution to homework 12. These assume that shmid is properly established in advance; how to do this is outside the scope of these questions, but for review, the header for shmat() is quoted from the man page.
         /* void *shmat(int shmid, const void *shmaddr, int shmflg); */
    a-1  char * p = shmat( shmid, NULL, 0 );
    a-2  if (fork()) { /* parent */
    a-3          a( p );
    a-4          wait();
    a-5  } else { /* child */
    a-6          b( p );
    a-7          exit();
    a-8  }
    b-1  if (fork()) { /* parent */
    b-2          a( shmat( shmid, NULL, 0 ) );
    b-3          wait();
    b-4  } else { /* child */
    b-5          b( shmat( shmid, NULL, 0 ) );
    b-6          exit();
    b-7  }

    a) Which solution permits the operating system to assign different virtual addresses to the shared memory in the two processes? Explain briefly and concisely. (1 point)

    Solution b, because the shared segment is attached after the two processes are created.

    This was supposed to be a trivial question. 2/3 got it right.

    b) What are the consequences of the above if functions a() and b() use the shared memory segment to establish a shared heap? Explain briefly and concisely. (1 point)

    Solution b is hazard prone for a shared heap, because if the heap is shared at different addresses, pointers in the heap will not work.

    Just 1 student got this. 2/3 gave answers that were totally wrong.

    c) For the child process, fork() returns zero (false). For the parent process fork() returns a value interpreted as true. Explain this value. (1 point)

    The returned value (for the parent) is the process ID of the child.

    1/2 got this. 1/3 gave answers that were totally wrong.

    d) The most obvious implementation of the fork() operation makes a copy of the caller's page table. Which entries in the page table are easier to process during this operation: entries referring to shared pages or entries referring to private pages? Explain. (1 point)

    Shared page entries are easier because private pages must be duplicated.

    1/2 gave good answers. 1/4 of the class took a user perspective in their explanation, while the problem directly asks about the simplicity of the implementation -- that is, the system code.

  2. Background: Consider the mkdir() and rmdir() services in Unix/Linux. The following are quotations from the man page:
    int mkdir(const char *pathname, mode_t mode);
            mkdir() returns zero on success, or -1 if an error occurred
            (in  which case, errno is set, for example,
            errno = EEXIST means that pathname already exists.
    int rmdir(const char *pathname);

    Legacy Unix code frequently uses these for mutual exclusion, creating and destroying lock files (actually directories) to control access to a critical section. Assume the application uses /tmp/<pid> as the name of the lock file (the directory /tmp is available to all applications, and pid is the process ID of the current process, rendered as a character string). You can assume that this string has already been computed and stored in the string variable lockfile.

    If at any time you need to use a polling loop in any of your solutions, your loop body should call sleep(1) to avoid hogging the CPU.

    a) Give code to enter a critical section implemented using a lock file. (1 point)

    do {
         returncode = mkdir( lockfile, LOCKFILEMODE );
         if (returncode != 0) {
              if (errno != EEXIST) somethingseriousiswrong();
              sleep( 1 );
         }
    } while (returncode != 0);

    Note that there is an error in the problem statement involving the use of process IDs as part of the lock file name: because each process computes a different name, no two processes ever contend for the same lock file, so the code excludes nobody. The solution given above does not depend on this error.

    Half the class got this right. 1/4 earned no credit. Only one student noticed the bug in the problem statement (earning full credit).

    b) Give code to exit a critical section implemented using a lock file. (1 point)

    rmdir( lockfile );

    Almost everyone got this right.
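    Putting parts a) and b) together, a minimal self-contained sketch; the name /tmp/mylock and mode 0700 are assumptions, and per the bug noted in part a), the name should identify the shared resource rather than the calling process:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static const char *lockfile = "/tmp/mylock";   /* assumed name */

void enter_critical( void ) {
        /* mkdir is atomic: exactly one contending process succeeds */
        while (mkdir( lockfile, 0700 ) != 0) {
                if (errno != EEXIST) {         /* something serious is wrong */
                        perror( "mkdir" );
                        exit( EXIT_FAILURE );
                }
                sleep( 1 );                    /* poll politely */
        }
}

void leave_critical( void ) {
        rmdir( lockfile );
}
```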

    c) List the names of the files or directories the system must examine or modify in the process of successfully entering a critical section. For each, identify whether it is examined or modified and the nature of the change(s) made if it is modified. (1 point)

    /         -- the root directory, examined
    /tmp      -- modified, create entry referring to lock file
    /tmp/name -- created, including self-link and parent link

    Nobody did well; over half the class forgot to mention the root of the file system. 1/3 of the class earned no credit.

    d) I notice that I am the owner of some files with names like /tmp/<1195>. Not knowing what they are for, I delete them. What have I done? (1 point)

    You have released the locks on some critical sections.

    Over half got this right, 1/6 earned no credit.

  3. Background: Algorithms people love to point out that the problem of detecting deadlock in a computer system is simple and well formed: All you do is create a directed graph where each vertex in the graph is a process (numbered) or a resource (lettered), and each edge represents:
    -- for edges of the form 1 to 2, process 1 is waiting for an action to be performed by process 2.
    -- for edges of the form 2 to A, process 2 is waiting for resource A.
    -- for edges of the form A to 3, resource A is currently held by process 3.
    Given this graph, a deadlock exists if the graph contains a cycle.

    a) Given a system where all user-level synchronization uses lock files, as suggested in a previous problem, do any of these edges exist in a form that an observer could analyze for the purpose of deadlock detection? (1 point)

    If the presence of a lock file named A2 (or perhaps /tmp/A/2) means that process 2 holds a lock on resource A, then we have a representation for edges from resources to the processes that hold them. None of the other edges are represented in a system that uses lock files, so an observer cannot generally determine if there is a deadlock.

    1/6 did well. Only 1/3 earned partial credit. Generally, the wrong answers did not connect to the misstatement in the previous question. Common wrong answers grossly overestimated the amount of information available to the system and concluded that deadlock detection was feasible in this context.

    b) Given a system where all user-level synchronization uses classical semaphore operations, do any of these edges exist in a form that an observer could analyze for the purpose of deadlock detection? (1 point)

    Here, processes waiting for a resource are queued on the semaphore queue for that resource, so we have the edges from processes waiting for resources. Unfortunately, conventional semaphores do not keep a record of what process is entitled to signal a semaphore, so an observer cannot identify the edges from resources to the processes that own them or from process to process in the case where one process waits for a signal from another.

    1/3 did well. 1/6 earned partial credit. Wrong answers, as in the previous part, tended to overestimate what was possible.
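    For reference, the cycle test the background describes is a small piece of code. This sketch uses depth-first search with the usual three-color marking; the vertex numbering in the test (0 = process 1, 1 = process 2, 2 = resource A) is just an illustration:

```c
#include <string.h>

enum { WHITE, GRAY, BLACK };
#define NV 4                     /* vertex count chosen for this sketch */

int adj[NV][NV];                 /* adj[u][v] != 0 means an edge u -> v */
static int color[NV];

static int dfs( int u ) {
        color[u] = GRAY;
        for (int v = 0; v < NV; v++) {
                if (!adj[u][v]) continue;
                if (color[v] == GRAY) return 1;        /* back edge: cycle */
                if (color[v] == WHITE && dfs( v )) return 1;
        }
        color[u] = BLACK;
        return 0;
}

int has_cycle( void ) {
        memset( color, 0, sizeof color );              /* WHITE == 0 */
        for (int u = 0; u < NV; u++)
                if (color[u] == WHITE && dfs( u )) return 1;
        return 0;
}
```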

  4. A short question: Your heap runs from address 40 to address 80. You use the binary buddy system. Give the starting addresses and sizes of all the free blocks making up the heap before any blocks are allocated. (1 point)

    Block of size 8 from 40 to 47,
    block of size 16 from 48 to 63,
    block of size 16 from 64 to 79.

    1/3 did well. 1/2 earned no credit. Typical errors leading to partial credit included merging the blocks of size 16 into a free block of size 32 starting at address 48. Since 48 is not a multiple of 32, this is not a legal merger.

  5. Background: Consider a boundary tag heap manager where the allocation function uses first-fit allocation and prompt merger at deallocate time. The heap manager uses an unsorted free-list distinct from the sequence of boundary tags in memory.

    a) Explain the "first fit" policy in a bit more detail -- how does it decide if a free block should be allocated? When are blocks split? (1 point)

    The manager allocates the first free block it finds that is large enough to hold the requested size, and if that block is big enough to split off a new free block, it does so at allocation time.

    3/4 got this right. 1/5 only explained the first fit policy without mentioning when blocks are split. One earned no credit.

    b) Why does this storage manager do lots of extra work in a typical application? (1 point)

    Prompt merger implies that adjacent free blocks are merged immediately when blocks are deallocated. This means that if there is a popular block size, a common pattern will be to allocate and split, then deallocate and merge, with the result that many extra splits and merges are done.

    1/6 got this right. 1/3 only discussed extra splitting, while 1/4 only discussed extra mergers. 1/4 earned no credit.

    c) Why is it necessary in this heap manager, when given a pointer to some boundary tag, to be able to find both the previous and next tag in the heap? (Short answer follows from the givens.) (1 point)

    Prompt merger requires examining both the previous and the following blocks in the heap to see if either or both are free and therefore candidates for merger.

    1/2 did well. 1/3 earned no credit. It is apparent that too many students only learned one heap manager, that being the one they implemented for the machine problem, without studying any of the alternatives.

    d) Why is it necessary in this heap manager for the free list to be doubly linked? (The answer is short and follows directly from what is given.) (1 point)

    Prompt merger requires the ability to delete a block from anywhere in the free list when it is merged with the newly deallocated block (or with the block resulting from merger of the newly deallocated block with one of its neighbors). Deletion from the middle of a linked list is only easy if the list is doubly linked.

    1/6 did well. 1/6 earned partial credit, 2/3 earned no credit. It is apparent that many students have difficulty reasoning about lists and pointers.
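    A sketch of the operation in question: with both links present, deleting a node found anywhere in the list is two assignments, with no search from the head. The circular list with a sentinel head node is an assumption of this sketch, not part of the exam:

```c
#include <assert.h>

struct freenode { struct freenode *prev, *next; };

void list_init( struct freenode *head ) {
        head->prev = head->next = head;        /* empty circular list */
}

void list_insert( struct freenode *head, struct freenode *n ) {
        n->next = head->next;                  /* insert at the front */
        n->prev = head;
        head->next->prev = n;
        head->next = n;
}

void list_unlink( struct freenode *n ) {
        /* O(1) only because n knows both of its neighbors */
        n->prev->next = n->next;
        n->next->prev = n->prev;
}
```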

  6. Background: Consider a virtual memory system for a 32-bit computer. Both virtual and physical addresses are 32-bits each. The virtual address is organized as follows:
        |   10 bits   |   10 bits   |    12 bits    |
        |   segment   | page in seg | byte in page  |

    Memory is byte addressable, and the MMU uses a TLB holding enough information to translate the addresses of recently used pages. When there is a TLB miss, the MMU forces a trap, and the trap service routine updates the TLB from page and segment table data structures in RAM.

    a) Deep in the operating system, there is a storage manager for main memory. What kind of storage allocation scheme should it use? Fixed size blocks? Buddy system? Boundary tags? First fit? Justify your choice in terms of the information given! (1 point)

    Fixed-size blocks with 4K bytes per block are a natural fit for this MMU because that is the page size and also probably the size of the page table for a segment and the segment table for an address space.

    1/2 got this right. 1/6 gave poor reasoning to support the fixed block size. 1/3 earned no credit.

    b) How many words are in a hardware-defined segment? (approximately) (1 point)

    1 megaword (4 megabytes).

    1/3 got this right, and 1/3 said 4 megawords for partial credit. This was supposed to be easy, but 1/3 earned no credit.

  7. Background: The conventional model of demand-paged virtual memory uses a page-table to show, for each page, where that page is stored (in a page-frame in RAM or on some sector of disk). A classical MMU uses the page table to translate addresses. The conventional model also includes a frame table showing, for each frame, whether that frame is in use or not, and if in use, what its virtual address is. The classical page-replacement policies focus on the frame table. Real operating systems add significant complexity to this picture.

    a) If the operating system supports one page table per process, with no shared pages, what data would you add to each frame-table entry? (1 point)

    The process ID of the owner (or equivalent information) is needed to relate the frame to the correct page table.

    1/2 got this. 1/3 earned no credit.

    b) If the operating system supports one page table per process, with pages freely shared between processes, what additional problem does this pose for the designer of the frame table? (1 point)

    Each frame now needs to be associated with a list or set of page-table/virtual-address pairs, since it might be visible at a different address in each page table, and there is no bound on the number of page tables in which it might appear.

    Only one did well here. Partial credit was given for those who indicated some understanding of the issues, but 1/2 earned no credit.

    c) If the operating system supports a paged-segmented address space, where each page belongs to exactly one segment and it is segments that may be shared between processes, how does this change the nature of the frame table? (1 point)

    Now, each frame only needs to indicate what segment it appears in and what virtual address in that segment.

    One did well, 2/3 earned no credit. Given that this problem builds on the previous one, this is not surprising.

    d) If, as in Multics or as in Unix systems that support mmap(), segments may be mapped to disk files, we need to know when the page table for a segment may be deleted. This suggests some data structure analogous to the frame table that identifies which frames contain page tables for segments. For each frame containing the page table for a segment, what minimum information do we need to keep in order to know when it may be reclaimed? (1 point)

    The page table for each segment now needs to know which segment tables include it, and at what virtual address. This is exactly the same problem discussed in part b), but now it applies to segment tables, not individual page frames.

    Nobody got this right, but 1/3 got partial credit. As with part c), this was not unexpected in light of the difficulty of part b).

  8. Background: Consider a disk scheduler that uses the cyclic scan scheduling discipline, with requests serviced in order by ascending cylinders (an elevator that always takes an express trip down, providing local service on the way up).

    One implementation uses an array of fifo queues, one for each cylinder. Requests are enqueued in the queue for the requested cylinder, and dequeues are always done from the queue for the 'current' cylinder until that queue is empty.

    A second implementation uses a lexicographic tree of pending requests; requests are sorted into the tree, totally ordered by disk address, with all insertions to the left of any item with greater or equal address. Dequeues always move the 'current' pointer to the next item to the right as each item is removed.

    a) One of the above solutions is subject to starvation -- that is, there is a pattern of disk requests that could prevent other disk requests from ever being served. Which one is vulnerable, and what is the pattern of requests? (1 point)

    The array solution is subject to starvation. If requests keep arriving in the queue currently being served by the head, that queue will never empty and the heads will never move from that cylinder, so requests for service in other cylinders will never be handled.

    Just over 1/2 got this. The remainder earned no credit. Surprisingly, there was no partial credit.

    b) The other is not vulnerable to starvation. Why? (1 point)

    The tree solution is starvation free. Sequences of enqueues for the cylinder currently being served will be to the left of the request currently being served, guaranteeing that a sequence of dequeues will be able to move right and advance to the next cylinder.

    Just under 1/2 got this. 1/4 gave weak explanations, and 1/3 earned no credit.

    c) One of the above requires the interrupt service routine to do more work, on average, than the other. Which, why? (1 point)

    Dequeues are done by the interrupt service routine. With the array, the worst case time for a dequeue is O(n) in the number of cylinders, a search through n empty queues to find that there are no pending requests, or if there is just one pending request, a search through n-1 empty queues. In contrast, finding the right neighbor of a vertex in a binary tree takes O(log n) time on average, and if the tree is maintained in balanced form, O(log n) in the worst case.
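    The O(n) dequeue search for the array implementation looks roughly like this; NCYL and the per-cylinder pending counts are stand-ins for the real queues:

```c
#define NCYL 8   /* cylinder count chosen for this sketch */

/* starting at the current cylinder, scan upward (wrapping at the top,
   as cyclic scan does) for the next nonempty queue; every cylinder may
   be inspected before we learn there is no work, hence O(NCYL) */
int next_nonempty( int current, const int pending[NCYL] ) {
        for (int i = 0; i < NCYL; i++) {
                int c = (current + i) % NCYL;
                if (pending[c] > 0) return c;
        }
        return -1;   /* no pending requests anywhere */
}
```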

    1/6 ignored the worst case behavior of the array and focused on the worst case behavior of a tree, earning partial credit. 1/4 made correct assertions about the cost of enqueue, earning a little credit while effectively missing the central issue of the question. The majority earned no credit.

  9. Background: Consider this code fragment:
    write( 1, "a", 1 );
    if (fork()) {                                -- grandparent
            write( 1, "b", 1 );
            if (fork()) {                        -- grandparent
                    write( 1, "c", 1 ); wait();
            } else {                             -- child 1
                    write( 1, "d", 1 ); exit();
            }
            write( 1, "e", 1 ); wait();
    } else {                                     -- child 2
            write( 1, "f", 1 );
            if (fork()) {                        -- child 2
                    write( 1, "g", 1 ); wait();
            } else {                             -- grandchild
                    write( 1, "h", 1 ); exit();
            }
            write( 1, "i", 1 ); exit();
    }
    write( 1, "j", 1 );

    a) This program can easily produce the outputs abcdefghij or afhgibdcej; can it also produce the output abcfhdegij? If so, explain which call to wait() is signalled by which call to exit(). (You can use the letters output to refer to the wait() and exit() calls on the same line.) (1 point)

    Yes, and it is possible even if exit d signals wait c, exit h signals wait g and exit i signals wait e.

    1/2 got this. 1/3 earned no credit. This problem was supposed to be harder, with a sequence that could only happen if exit i signalled wait c, but there was a typo in the string of letters.

    b) If the programmer really intended that e not be output until both c and d had been completed, what kinds of changes should the programmer make? Explain, do not give code. (1 point)

    Each wait must somehow specify the process ID that it is waiting for. (The exact mechanism used to solve this problem is not part of the question, but note that there is a waitpid() system call to do this, and there are ways to do it with wait() plus auxiliary variables and loops.)

    Only one got this, and only 1/6 got partial credit. This was harder than expected because of the typo in part a, a typo that prevented the student from naturally finding the problem that was to be solved in this part.