# Final Exam

## Solutions and Commentary

#### Final Exam Scores

```
Mean   = 10.95
Median = 11.5

X             X
_____________X___X_X_X___X_X___X_____X_X______
0 . 2 . 4 . 6 . 8 . 10. 12. 14. 16. 18. 20
```

#### Composite Exams

```
Mean   = 22.91
Median = 23.3
X           X
___________X_X___X_______X___X___X_X___________X___X______________
10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30. 32. 34. 36. 38. 40
```

#### Homework Scores (top 10 out of 12)

```
Mean   = 31.78
Median = 32.4
X                     X X
_______________X___________________X___X___X___X_______X_X_X______
10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30. 32. 34. 36. 38. 40
```

#### Machine Problem Scores

```
Mean   = 15.01                            X
Median = 18.3                             X
X             X X
_________X_____________X_X_____X_______X_X____
0 . 2 . 4 . 6 . 8 . 10. 12. 14. 16. 18. 20
```

#### Overall Scores

```
Mean   = 69.70
Median = 72.6
X
_________X_X_____X___X_______X_____X_X___X___X_______X____________
40. 44. 48. 52. 56. 60. 64. 68. 72. 76. 80. 84. 88. 92. 96. 100
-    D    + -    C    + -    B    + -    A    +
```

## Exam Solutions

1. Background: Consider the problem of implementing macros -- whether in assembly language, a high-level language preprocessor, or a command language, it doesn't matter.

• i - One way to do this, discussed at length in the notes, is to store the macro body in RAM (for example, in the string pool). In this case, the macro symbol table associates the macro name with the memory address where the macro body is stored.

• ii - Another way to do this is to "bookmark" the source file at the point where the macro body begins, for example, by saving the offset in bytes from the start of the source file where the macro begins. In this case, the macro symbol table associates the name with this offset.

a) What difference in processing speed would you expect between these two, and why? (if it matters, assume lots of macro calls, no macro parameters and no disk cache) (1.5 points)

i will be faster, because reading the macro body from RAM each time it is expanded is faster than reading it from the file on disk.

7 did well here, 2 gave odd reasons for partial credit, and 2 earned no credit.

b) One of these models does not allow preprocessing of the macro body in order to speed the substitution of macro actual parameters for macro formal parameters. Which does not, and why? (1.5 points)

ii does not, because this requires reading the macro body directly from the source file each time it is expanded, while preprocessing requires making an intermediate copy.

5 did well here, 2 gave odd reasons for partial credit, 3 gave no reason, and one earned no credit.

2. In a buddy system heap manager, you need to compute 2^i frequently. What expression computes this efficiently in C, C++ or Java? (the answer is the same in all three.) (1 point)

1 << i does this. This was given in lecture, and it is also given in the course notes on the Buddy system. Students who attempted the final machine problem should have found this!

7 did well, 2 indicated use of a shift operator, and 2 earned no credit.

3. Background: Consider an operating system where, when a page fault occurs, the running process is simply stopped, and a message is delivered to that process's supervisor telling it that this happened (and nothing more). The supervisor of a process is just another process, but because it is the supervisor, it has the right to perform the following operations on the process it supervises:
• start it,
• stop it,
• peek into its memory address space,
• poke new values into its memory address space,
• attach a new uninitialized frame to a specific page of its address space,
• detach a frame from a specific page and make it free,
• inspect all of its registers, and
• modify any of its registers.

The supervisor of a process on this system can implement demand paged virtual memory, even though the supervisor cannot directly access the MMU or access any MMU registers.

a) With reference to the above operations and normal file system operations, how does the supervisor clean a page frame and transfer its contents to disk? (1.5 points)

peek to get the contents of the page, then write it to disk, and then detach the frame from that page.

3 did well here, 2 more earned strong partial credit, and 6 earned no credit.

b) What page replacement policies can the supervisor implement in response to a page fault message, and what feature of this system prevents it from implementing better policies? (1.5 points)

Random and FIFO replacement are practical alternatives. More interesting alternatives cannot be done because the documented interface provides no way to determine whether a page of the address space has or has not been referenced.

3 did well here, 2 earned partial credit, either by listing some feasible policies or by giving a correct explanation. 5 earned no credit because what they wrote was irrelevant or wrong. Two of these claimed that this interface allowed LRU replacement. It does not!

c) What does the supervisor do in order to determine the specific page in the process's address space that caused the fault? (1.5 points)

It has to emulate the fetch-execute cycle. To do this, it inspects the program counter and peeks to get the instruction that caused the fault, then computes the effective address. This may require inspecting index registers and peeking for the values of index displacements and indirect words.

2 did well here, 7 didn't mention the need for emulation, and 2 did poorly.

d) What does the supervisor do at the point where a normal page-fault service routine would return to the process that caused the fault? (1.0 point)

It starts the process that caused the fault.

4 did well here, one got partial credit for adding irrelevant other activity, and 6 earned no credit.

4. The expression (i ^ (i - 1)) >> 1 in C, C++ and Java gives a result with ones in each bit position where the variable i had trailing zeros, and zeros elsewhere. Where would this be useful in the implementation of a buddy-system heap initializer? (1.5 points)

During heap initialization, given a pointer to a word in the heap, the number of one bits in (i ^ (i - 1)) >> 1 tells us which free list we could put the block in. Adding one to the value of the expression gives the block size, the amount by which we increment the pointer to get the address of the next block. Those who had worked seriously on heap initialization for the final machine problem should have had a fairly easy time seeing how this was relevant to that problem.

1 did well, 2 had something relevant to say, 3 mixed errors in with their relevant comments, and 5 earned no credit.

5. Background: Consider a classical Unix file system where each file is described by an i-node. Assume all files and directories are small (1 sector), and assume that none of the relevant data is in RAM to start with.

a) Suppose the program executes int fd = open("/a/b.txt", O_RDONLY); how many disk accesses are required? (1.0 point)

5 disk accesses: one to read the i-node of the root directory, one to read the data in the root directory, one to read the i-node of the directory /a, one to read the data in that directory, and one to read the i-node of the file being opened.

Nobody did well here, with the popular answers being 2 and 3 disk accesses; 9 students gave one or the other of these, earning partial credit. 2 earned no credit.

b) Suppose the program then executes read(fd, buf, 1); how many additional disk accesses are required? (1.0 point)

One disk access, since the i-node of the file has already been read.

5 did well here, 3 had two accesses, possibly because they assumed that there was no need to read the i-node until the last moment, and 3 earned no credit.

c) When the user process does read(fd, buf, 1);, it only reads one byte of data. Does the DMA hardware directly read from disk to the user's buffer buf? If not, how does the data get there? (1.0 point)

No, the DMA transfer will read one sector to a system buffer, and then system software will copy one byte to the user's buffer.

1 did well here, one simply said no, with no explanation, 5 gave weird reasoning that was not worth any credit, and 5 said yes.

6. Background: Consider two different allocation policies for a heap manager:
• i - first fit, with no separate free-list and the search always starting from the beginning of the heap.

• ii - circular scan, with a separate free-list linking only free blocks.

a) Which policy is likely to improve the locality of reference in a demand-paged virtual memory environment for applications that make intensive use of the heap? Why? (1.5 points)

i - this policy will keep the used blocks in the heap near one end, improving locality of reference.

4 did well here, 2 gave odd reasons and 1 gave no reason, earning partial credit, and 4 earned no credit.

b) Which policy is likely to lead to faster searches for a large enough free block if there are no page faults, and what slows down the other policy? (1.5 points)

ii - the use of a separate unsorted free list makes the search for a free block faster, compared with policy i, where the heap ends up roughly sorted by block size, so all searches are likely to scan through a multitude of small blocks, used or free, before finding a block that is big enough.

5 did well here, 4 gave odd reasons and 1 gave no reason, for partial credit, and 1 gave no answer.

7. Background: Assume a disk with 12 sectors per track. With no interleaving, which is the same as one-way interleaving, these are in the order 0-1-2-3-4-5-6-7-8-9-A-B.

a) Aside from one, what interleaving factors make sense here? (1.5 points)

5, 7 and 11 (all the numbers smaller than the number of sectors per track, but with no common factors with the number of sectors per track).

6 did well, 3 earned partial credit, 2 earned no credit.

b) Show the order of the sectors with the smallest of these. (1.5 points)

0-5-A-3-8-1-6-B-4-9-2-7

9 did well here, correctly giving something close to a reasonable interpretation of the smallest interleaving factor they listed for part a.