Solutions and Commentary
Score distributions (histograms omitted; the summary statistics and scales were):
  Mean = 10.95, Median = 11.5 (scores from 0 to 20)
  Mean = 22.91, Median = 23.3 (scores from 10 to 40)
  Mean = 31.78, Median = 32.4 (scores from 10 to 40)
  Mean = 15.01, Median = 18.3 (scores from 0 to 20)
  Mean = 69.70, Median = 72.6 (scores from 40 to 100, with grade ranges D, C, B and A marked from low to high)
a) What difference in processing speed would you expect between these two, and why? (if it matters, assume lots of macro calls, no macro parameters and no disk cache) (1.5 points)
i will be faster, because reading the macro body from RAM each time it is expanded is faster than seeking back to re-read it from the source file on disk.
7 did well here, 2 gave odd reasons for partial credit, and 2 earned no credit.
b) One of these models does not allow preprocessing of the macro body in order to speed the substitution of macro actual parameters for macro formal parameters. Which does not, and why? (1.5 points)
ii does not, because under it the macro body is read directly from the source file each time it is expanded, while preprocessing would require making an intermediate copy of the body.
5 did well here, 2 gave odd reasons for partial credit, 3 gave no reason, and one earned no credit.
1 << i does this. This was given in lecture, and it is also given in the course notes on the Buddy system. Students who attempted the final machine problem should have found this!
7 did well, 2 indicated use of a shift operator, and 2 earned no credit.
The supervisor of a process on this system can implement demand paged virtual memory, even though the supervisor cannot directly access the MMU or access any MMU registers.
a) With reference to the above operations and normal file system operations, how does the supervisor clean a page frame and transfer its contents to disk? (1.5 points)
Peek to get the contents of the page, write it to disk, and then detach the frame from that page.
3 did well here, 2 more earned strong partial credit, and 6 earned no credit.
b) What page replacement policies can the supervisor implement, and what feature of this system prevents it from implementing better policies? (1.5 points)
Random and FIFO replacement are practical alternatives. More interesting alternatives cannot be done because the documented interface provides no way to determine whether a page of the address space has or has not been referenced.
3 did well here, and 2 earned partial credit, either by listing some feasible policies or by giving a correct explanation. 5 earned no credit because what they wrote was irrelevant or wrong; two of these claimed that this interface allowed LRU replacement. It does not!
c) What does the supervisor do in order to determine the specific page in the process's address space that caused the fault? (1.5 points)
It has to emulate the fetch-execute cycle. To do this, it inspects the program counter and peeks to get the instruction that caused the fault, then computes the effective address. This may require inspecting index registers and peeking for the values of index displacements and indirect words.
2 did well here, 7 didn't mention the need for emulation, and 2 did poorly.
d) What does the supervisor do at the point where a normal page-fault service routine would return to the process that caused the fault? (1.0 point)
It starts the process that caused the fault.
4 did well here, one got partial credit for adding irrelevant other activity, and 6 earned no credit.
During heap initialization, given a pointer i to a word in the heap, the number of one bits in (i ^ (i - 1)) >> 1 tells us what free list we could put the block in, and shifting one left by this number gives the block size, the amount by which we increment the pointer to get the address of the next block. Those who had worked seriously on heap initialization for the final machine problem should have had a fairly easy time seeing how this was relevant to that problem.
1 did well, 2 had something relevant to say, 3 mixed errors in with their relevant comments, and 5 earned no credit.
a) The program executes int fd = open("/a/b.txt", O_RDONLY); how many disk accesses are required? (1.0 point)
5 disk accesses: one to read the i-node of the root directory, one to read the data of the root directory, one to read the i-node of the directory /a, one to read the data in that directory, and one to read the i-node of the file being opened.
Nobody did well here, with the popular answers being 2 and 3 disk accesses; 9 students gave one or the other of these, earning partial credit. 2 earned no credit.
b) And then, the program executes read(fd, buf, 1); how many additional disk accesses are required? (1.0 point)
One disk access, to read the sector holding the data, since the i-node of the file has already been read.
5 did well here, 3 had two accesses, possibly because they assumed that there was no need to read the i-node until the last moment, and 3 earned no credit.
c) When the user process does read(fd, buf, 1);, it only reads one byte of data. Does the DMA hardware directly read from disk to the user's buffer buf? If not, how does the data get there? (1.0 point)
No, the DMA transfer will read one sector to a system buffer, and then system software will copy one byte to the user's buffer.
1 did well here, one simply said no, with no explanation, 5 gave weird reasoning that was not worth any credit, and 5 said yes.
a) Which policy is likely to improve the locality of reference in a demand-paged virtual memory environment for applications that make intensive use of the heap? Why? (1.5 points)
i - this policy will keep the used blocks in the heap near one end, improving locality of reference.
4 did well here, 2 gave odd reasons and 1 gave no reason, earning partial credit, and 4 earned no credit.
b) Which policy is likely to lead to faster searches for a large enough free block if there are no page faults, and what slows down the other policy? (1.5 points)
ii - the use of a separate unsorted free list makes the search for a free block faster. Under policy i, the heap ends up roughly sorted by block size, so all searches are likely to scan through a multitude of small blocks, used or free, before finding a block that is big enough.
5 did well here, 4 gave odd reasons and 1 gave no reason, for partial credit, and 1 gave no answer.
a) Aside from one, what interleaving factors make sense here? (1.5 points)
5, 7 and 11 (all the numbers greater than one and smaller than the number of sectors per track that share no common factor with the number of sectors per track).
6 did well, 3 earned partial credit, 2 earned no credit.
b) Show the order of the sectors with the smallest of these. (1.5 points)
9 did well here, correctly giving something close to a reasonable interpretation of the smallest interleaving factor they listed for part a.