[Grade distribution histogram omitted.] Mean = 8.7, median = 9.0, on a scale of 0 to 15. Approximate letter grades ranged from C- at the low end of the scale through A+ at the high end.
Part A: An application with data structures that occupy 2 megabytes will require 512 pages of 4K bytes each to store those data structures.
This question was intended to offer a free point to everyone; very few missed it!
Part B: If we dynamically allocate data in a typical heap supporting a Pascal or C programming environment, with block sizes varying by a factor of two, we expect, in the steady state, that the heap will be fragmented in such a way that the typical block, plus the adjacent free block, adds up to a block as big as the largest commonly allocated block size. As a result, if block sizes are linearly distributed, we expect the space lost to fragmentation to be roughly 1/3 of the total user demand, or equivalently, we expect the total heap required to be 4/3 of the total user demand.
Given this, and given a total user demand of 2 megabytes, we expect the total heap to occupy about 2.67 megabytes, or 683 pages.
Only one student gave this figure; many got partial credit by estimating that the size of the data segment would increase, but not by more than a factor of two.
Part C: Ancient UNIX programming advice suggested that users periodically traverse all of their data structures and, for each item, allocate a block of the same size, copy the contents of the old item into the new one, and then free the old copy; the effect of this, under the primitive boundary tag heap managers used in early UNIX systems, was to compact the heap, reducing the impact of fragmentation.
Most students got this; a few had very strange reasoning.
Part D: In general, fragmentation of a heap reduces the spatial locality of the data structures, and this reduction in spatial locality increases the page-fault rate.
A few students declared that fragmentation had no impact on locality. Others focused on execution locality (program counter based) or asserted that the problem was thrashing, an extreme statement that effectively dodges the question of correlation between the two scalar quantities, page-fault-rate and fragmentation.
A small number of students gave definitions of kernel calls without connecting their answers to the problem, or suggested using interrupts for kernel calls. Most, however, did well here.
Part B: Gate crossing on the example machine is accomplished by interrupts, traps, and the associated return from interrupt and return from trap instructions.
This was not quite a duplicate of part A but most did well here. Again some merely gave definitions without connecting their answers to the example machine.
Part C: If a user program passes an integer to the kernel by pushing it onto the stack, the kernel will have to go through the following steps to find the value of this parameter:
First, the kernel must make sure the page referenced by the user's stack pointer is in physical memory. It is unlikely but possible that, between the time the user pushed the parameter on the stack and the time of the kernel call, the user was preempted and the user's stack paged out to disk!
Second, the kernel must translate the user's stack pointer to an address in the kernel's address space. Recall that the user's stack pointer is a virtual address, but the kernel runs with the MMU turned off.
This problem was hard; many did not try to answer it, and many stopped short, merely saying that the kernel code must access the user's stack without indicating what is involved.
Some students assumed that the trap pushes the CPU state onto the same stack that the user uses for passing the parameter. This cannot be the case! The user's stack pointer refers to a stack in the user's virtual address space, and if a page of this stack is not in main memory, the trap mechanism might have to force a page fault trap in order to save the CPU state when responding to a trap. This would lead to an infinite loop inside the CPU's hardware! Therefore, the save area used by the interrupt and trap mechanisms must be in the kernel's physical address space and must ignore the user's stack!
Part D: If a pointer is passed to a kernel call, all of the problems dealt with in part C must be handled merely to get a copy of the pointer. Once this is done, the pointer itself must be converted to physical address. Note that a new problem arises here! Many pointers passed to kernel calls refer to buffers or other multiword objects. The buffer may span multiple pages of the user's address space, and for each of these pages, it is possible that that page may not be currently in main memory when the kernel needs it!
Few students seem to have understood the issues raised by this problem!
This question was intended to offer a free point, but a surprising number of students answered that pipes provided interprocess communication (an answer that was literally given in the question!) or that pipes solved the synchronization, mutual exclusion or readers-writers problems. Some of these latter problems are sufficiently close to the producer-consumer problem that partial credit was granted.
Part B: In implementing the UNIX kernel, where a good semaphore implementation is available, it would make sense to use two or three semaphores per pipe: one counting the data in the buffer, one counting the free space in the buffer, and possibly a third providing mutual exclusion on the buffer's head and tail pointers.
This was intended as a straightforward question, but many missed details, suggesting, for example, that binary semaphores be used, or that there was no need to count free space in the buffer.
Part C: After creating a pipe, the data structures you would expect to be created are an elaboration of the following:
array returned by pipe():
 ______
|_____o------> read descriptor --------\
|_____o------> write descriptor -------->  pipe
                                          ______
                                         |_head_|
                                         |_tail_|
                                         |_data_|\
                                         |_free_| > semaphores
                                         |_mutex|/
                                         |      |
                                         | 4096 |
                                         | byte |
                                         |buffer|
                                         |______|

Both descriptors refer to the same underlying pipe, and each descriptor is a data structure separate from the pipe itself.

A surprising number of students had severe problems with this, using ineffective tools such as C-style structure declarations to attempt to capture this information. Fully half the class left this blank, making no effort to show anything here! Very few seem to have understood that the read and write descriptors for the pipe must be separate data structures from the pipe itself!
Most students correctly stated that the translation must be before the request queue, but perhaps 1/3 thought that the translation could be done by the MMU hardware.
Part B: The page(s) containing the buffer may not be in (a) page frame(s) at the time the user requests I/O. In this case, the disk I/O routine must ask the page fault service routine to bring the page(s) into memory prior to computing the physical address(es).
Between the time the I/O request is enqueued and the time the I/O is done, it is possible that the page replacement policy could try to replace the page(s) containing the buffer. There must be some bit on the page frame that the I/O system can set to prevent that frame from being considered for replacement while I/O on that frame is pending.
Even if the disk sector size and the page size are exactly equal, it is possible that a user could request an I/O operation on a buffer that is not aligned on a page boundary. While the pages in question may be consecutive in the virtual address space of the process, they will not necessarily occupy consecutive frames of physical memory.
Many students said little or nothing here, and of those who earned credit, most did not consider the possibility of a buffer spanning a page boundary.
In fact, this question is very closely related to question 2, part D, and I had hoped that the incremental addition of difficulty from 2C to here would lead more students to reasonable answers.
Part C, D: Few good answers were given here, so I'll assign these parts as homework!