Homework 3

22C:116, Fall 2000

Due Friday Sep 8, 2000, in class

Douglas W. Jones

  1. A Problem: What problems would a programmer encounter in using Java to write a storage manager for main memory?

  2. Background: Consider a boundary tag storage manager using first-fit allocation, where the free list is maintained as something of a push-down list -- that is, where deallocated blocks are pushed onto the head of the free list, and the search for free blocks always begins from the list head.

    Note that there is an alternative free-list organization where the free list is arranged as a circular list with a single head pointer that "walks around" the circle. Deallocation leaves the newly freed block just behind the head, while allocation walks the head forward until a large enough free block is found.
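
    To make the two organizations concrete, here is a minimal sketch in C of the push-down variant. All of the names (struct block, freehead, allocate, deallocate) are illustrative only; boundary tags, block splitting and the roving-pointer variant are omitted for brevity.

    	struct block {
    		int size;               /* size of this free block */
    		struct block * next;    /* next block on the free list */
    	};

    	static struct block * freehead = 0;     /* head of the free list */

    	void deallocate( struct block * b )
    	{
    		/* push the newly freed block onto the head of the list */
    		b->next = freehead;
    		freehead = b;
    	}

    	struct block * allocate( int size )
    	{
    		struct block ** p = &freehead;
    		struct block * b;

    		/* first fit: scan from the head for the first block that
    		   is large enough (splitting of the block is not shown) */
    		while (*p != 0 && (*p)->size < size) p = &(*p)->next;
    		if (*p == 0) return 0;  /* no sufficiently large block */
    		b = *p;
    		*p = b->next;           /* unlink the block that was found */
    		return b;
    	}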

    Part A: As time passes, after many allocate and deallocate cycles for blocks of random sizes, what distribution of free block sizes would you expect in the free list? That is, would you expect the list to be sorted, or almost sorted in some way, or would the distribution of block sizes as a function of list position be random? What sorting would you expect for the alternative free list organization?

    Part B: Given your answer to part A, how would you expect these two free list organizations to differ for real applications where most allocate and deallocate operations involve only a few common block sizes, while other, more random-appearing block sizes are manipulated only occasionally?

  3. Background: Lazy merger of free blocks is bad for real-time performance, but prompt merger uses more total CPU time because it merges small blocks of popular sizes to make large blocks of unpopular sizes, which are highly likely to be re-split.

    Consider this alternative: Incremental lazy merger. Instead of waiting until a free block is needed to do any merger, we maintain a routine called merge-step that inspects one free block each time it is called, checking to see whether that block is adjacent to another free block and performing exactly one block merger if so. This merge routine needs persistent state from one call to the next, a pointer that is advanced down the free list one step per call. Code needs to be added to the allocate routine to preserve the validity of this pointer in the event that the block it points to happens to be removed from the free list (for allocation).
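
    A minimal sketch of this merge-step routine, in C, may help make the bookkeeping concrete. The block layout and the helper routines (successor, unlink_block) stand in for whatever the underlying boundary tag manager provides, and the fix-up code needed in the allocate routine is not shown; all of the names here are assumptions, not given code.

    	struct block {
    		int size;               /* size of this block */
    		int free;               /* nonzero if the block is on the free list */
    		struct block * next;    /* next block on the free list */
    	};

    	extern struct block * freehead;                      /* head of the free list */
    	extern struct block * successor( struct block * );   /* physically adjacent block */
    	extern void unlink_block( struct block * );          /* remove a block from the free list */

    	static struct block * scan = 0;    /* persistent pointer into the free list */

    	void merge_step( void )
    	{
    		struct block * s;

    		if (scan == 0) scan = freehead;   /* (re)start at the list head */
    		if (scan == 0) return;            /* the free list is empty */

    		s = successor( scan );
    		if (s != 0 && s->free) {
    			unlink_block( s );                   /* the neighbor leaves the free list */
    			scan->size = scan->size + s->size;   /* exactly one merger per call */
    		} else {
    			scan = scan->next;                   /* no merger here; advance one step */
    		}
    	}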

    The Question: Does incremental lazy merger guarantee that the resulting system will meet real-time constraints? How would you expect its performance to compare with prompt and purely lazy merger?

  4. Background: Consider the example thread package in http://homepage.cs.uiowa.edu/~dwjones/opsys/threads/. This thread manager supports only non-preemptive transfers of control between threads, but aside from that, conforms closely to the process manager model we have discussed in lectures.

    Part A: The code implementing thread_exit() contains notes that parts of this code are risky. Give a clear and concise explanation of the mechanism by which this risk could lead to failures!

    Part B: Consider this fragment of a FIFO queue package written to be used under this thread manager:

    	enqueue(item i, struct queue * q)
    	{
    		thread_wait( q->space );    /* wait for a free slot in the buffer */
    		thread_wait( q->mutex );    /* gain exclusive access to the queue */
    		q->buffer[q->tail] = i;     /* store the item at the tail */
    		q->tail = (q->tail + 1) % queuesize;    /* advance the tail pointer */
    		thread_signal( q->mutex );  /* release exclusive access */
    		thread_signal( q->data );   /* signal that data is now available */
    	}
    
    Explain why, under the example thread package, 2 of the 6 lines in the body of the above function can be safely deleted without any effect on the correctness of the code.

  5. Background: Refer to the discussion of the MIPS R2000 in the text (section 3.3.4). Also refer to section 3.3.5.

    A Question: Why is there a process identifier field in each entry of the MIPS R2000 MMU's address translation associative memory? What software cost would you expect to have to pay if this field were not present in the address translation hardware?
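
    For reference, the kind of lookup such an entry supports can be sketched as follows. The field names and the tlb_hit function are chosen for illustration only and are not taken from the R2000 manuals or from the text.

    	struct tlb_entry {
    		unsigned vpn;      /* virtual page number */
    		unsigned asid;     /* process (address space) identifier */
    		unsigned pfn;      /* physical frame number */
    		unsigned valid;    /* entry holds a usable translation */
    	};

    	/* an entry matches only if both the page number and the
    	   identifier of the currently running process agree */
    	int tlb_hit( struct tlb_entry * e, unsigned vpn, unsigned current_asid )
    	{
    		return e->valid && e->vpn == vpn && e->asid == current_asid;
    	}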