Note that there is an alternative free-list organization where the free list is arranged as a circular list with a single head pointer that "walks around" the circle. Deallocation leaves the newly freed block just behind the head, while allocation walks the head forward until a large enough free block is found.
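The allocation walk in this circular organization can be sketched as follows. This is a minimal sketch with hypothetical names (fblock, rover, alloc_walk); unlinking and splitting of the chosen block are omitted so that only the "walking head" behavior is shown.

```c
#include <stddef.h>

/* One node of the circular free list. */
struct fblock {
    size_t size;
    struct fblock *next;   /* circular link: the last block points back to the first */
};

static struct fblock *rover;   /* the single head pointer that walks around the circle */

/* Allocation: walk the rover forward until a large enough block is found.
   Returns that block, or NULL after one full lap of the circle. */
struct fblock *alloc_walk(size_t want) {
    struct fblock *start = rover;
    do {
        if (rover->size >= want) {
            struct fblock *hit = rover;
            rover = rover->next;   /* leave the head just past the chosen block */
            return hit;
        }
        rover = rover->next;
    } while (rover != start);
    return NULL;                   /* no free block is big enough */
}
```

Deallocation would simply link the freed block in just behind rover, so newly freed blocks are the last ones the head reaches on its next lap.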
Part A: As time passes, after many allocate and deallocate cycles for blocks of random sizes, what distribution of free-block sizes would you expect in the free list? That is, would you expect the list to be sorted, or almost sorted in some way, or would the distribution of block sizes as a function of list position be random? What ordering would you expect for the alternative free-list organization?
Part B: Given your answer to part A, how would you expect these two free-list organizations to differ for real applications, where most allocate and deallocate operations involve only a few common block sizes, while other, more random-appearing block sizes are manipulated only occasionally?
Consider this alternative: Incremental lazy merger. Instead of waiting until a free block is needed to do any merger, we maintain a routine called merge-step that inspects one free block each time it is called, checking to see whether that block is adjacent to another free block and performing exactly one block merger if so. This merge routine needs persistent state from one call to the next: a pointer that is advanced down the free list one step per call. Code must be added to the allocate routine to preserve the validity of this pointer in the event that the block it points to happens to be removed from the free list (for allocation).
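The merge-step routine described above might be sketched as follows, assuming (an assumption not stated in the problem) that free blocks are kept on an address-ordered, NULL-terminated list so that adjacency can be checked against the successor; all names here (fblock, free_list, merge_ptr, merge_step) are hypothetical.

```c
#include <stddef.h>

struct fblock {
    size_t addr, size;      /* block's start address and length */
    struct fblock *next;    /* address-ordered free list */
};

static struct fblock *free_list;  /* head of the free list */
static struct fblock *merge_ptr;  /* persistent state from one call to the next */

/* Inspect one free block, perform at most one merger, advance one step. */
void merge_step(void) {
    if (merge_ptr == NULL) {            /* wrap back to the start of the list */
        merge_ptr = free_list;
        if (merge_ptr == NULL) return;  /* empty free list: nothing to do */
    }
    struct fblock *n = merge_ptr->next;
    if (n != NULL && merge_ptr->addr + merge_ptr->size == n->addr) {
        merge_ptr->size += n->size;     /* absorb the adjacent successor ... */
        merge_ptr->next = n->next;      /* ... and unlink it from the list */
        /* (a real allocator would also reclaim n's header storage here) */
    }
    merge_ptr = merge_ptr->next;        /* exactly one step per call */
}
```

The code the problem asks for in the allocate routine would, before unlinking a block, check whether merge_ptr points at it and advance merge_ptr past it if so.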
The Question: Does incremental lazy merger guarantee that the resulting system will meet real-time constraints? How would you expect its performance to compare with prompt and purely lazy merger?
Part A: The code implementing thread_exit() contains comments noting that parts of this code are risky. Give a clear and concise explanation of the mechanism by which this risk could lead to failures!
Part B: Consider this fragment of a FIFO queue package written to be used under this thread manager:
enqueue(item i, struct queue * q)
{
    thread_wait( q->space );
    thread_wait( q->mutex );
    q->buffer[q->tail] = i;
    q->tail = (q->tail + 1) % queuesize;
    thread_signal( q->mutex );
    thread_signal( q->data );
}

Explain why, under the example thread package, 2 of the 6 lines in the body of the above function can be safely deleted without any effect on the correctness of the code.
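For context, the matching consumer for this fragment would plausibly look like the sketch below. The sketch assumes (these are assumptions, not part of the problem) that item is int, queuesize is a small constant, and the package's thread_wait/thread_signal act as P and V on counting semaphores; here the semaphores are stubbed as plain counters, passed by pointer, so the fragment can run single-threaded.

```c
#include <assert.h>

#define queuesize 4
typedef int item;                      /* assumed item type */
typedef int sem;                       /* stand-in for the package's semaphore type */
static void thread_wait(sem *s)   { assert(*s > 0); (*s)--; }  /* P: single-threaded stub */
static void thread_signal(sem *s) { (*s)++; }                  /* V */

struct queue {
    item buffer[queuesize];
    int head, tail;
    sem space, mutex, data;
};

/* The consumer matching the enqueue fragment quoted above. */
item dequeue(struct queue *q) {
    thread_wait(&q->data);             /* wait until at least one item is queued */
    thread_wait(&q->mutex);            /* enter the critical section */
    item i = q->buffer[q->head];
    q->head = (q->head + 1) % queuesize;
    thread_signal(&q->mutex);          /* leave the critical section */
    thread_signal(&q->space);          /* one more free slot for producers */
    return i;
}
```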
A Question: Why is there a process-identifier field in each entry of the MIPS R2000 MMU's address-translation associative memory? What software cost would you expect to pay if this field were not present in the address-translation hardware?
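As a reminder of the hardware involved, an R2000 TLB entry pairs a virtual page number and a 6-bit process identifier (in EntryHi) with a physical frame number and flag bits (in EntryLo). The sketch below models just the match rule; the struct layout and function names are illustrative, not the actual register format.

```c
#include <stdint.h>

/* Simplified model of one R2000 TLB entry. */
struct tlb_entry {
    uint32_t vpn;     /* virtual page number (EntryHi) */
    uint8_t  pid;     /* 6-bit process identifier (EntryHi) */
    uint32_t pfn;     /* physical frame number (EntryLo) */
    int      global;  /* G bit (EntryLo): match regardless of PID */
};

/* An entry hits only if the VPN matches and either the entry is global or
   its PID equals the currently running process's PID -- so translations
   belonging to several processes can sit in the TLB at the same time. */
int tlb_match(const struct tlb_entry *e, uint32_t vpn, uint8_t cur_pid) {
    return e->vpn == vpn && (e->global || e->pid == cur_pid);
}
```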