In general, in a program synchronized with semaphores, any thread or process may be able to signal any semaphore. Therefore, the problem is at least as complex as the or-model!
Part B: Would it be practical to add a general-purpose deadlock detection mechanism to the example thread manager, so that it would detect and report (with an error message) all deadlocked applications running under the thread manager? If so, explain what detection algorithm you would use. If not, explain why.
Because the or-model applies, deadlock detection relies on knot detection. Unfortunately, even this requires that we be able to determine, for each blocked thread, the set of threads that could potentially signal the semaphore on which it is blocked. Since the example package is based on C or C++, languages that have no protection mechanism, any thread may potentially access any variable. Therefore, if even one thread is not blocked, it is possible that that thread will eventually signal any semaphore. Short of simulating the future course of the computation through to the end of time, we cannot determine whether that thread will indeed do so. Therefore, the only automatic deadlock detection possible in this thread package is the mechanism already present -- we report a potential deadlock if all threads have blocked.
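This all-threads-blocked check is easy to build into the thread manager itself. The following is a minimal sketch, assuming a hypothetical cooperative thread manager; the names used here (runnable_count, thread_wait, struct semaphore) are illustrative and are not the example package's actual interface.

    /* Minimal sketch of the "everything is blocked" check, assuming a
       hypothetical cooperative thread manager.  All names here are
       illustrative, not the example package's actual interface. */

    #include <stdio.h>
    #include <stdlib.h>

    static int runnable_count;   /* set to the number of threads at startup,
                                    decremented when a thread blocks */

    struct semaphore {
        int count;
        /* queue of blocked threads omitted */
    };

    void thread_wait( struct semaphore *s )
    {
        if (s->count > 0) {
            s->count--;          /* no need to block */
            return;
        }
        runnable_count--;        /* this thread is about to block */
        if (runnable_count == 0) {
            /* every thread is blocked, so nothing inside the thread
               manager can ever signal a semaphore again; report the
               potential deadlock and give up */
            fprintf( stderr, "thread manager: potential deadlock, "
                             "all threads are blocked\n" );
            exit( EXIT_FAILURE );
        }
        /* otherwise, enqueue this thread on s and switch to some
           runnable thread ... */
    }

Note that this reports only a potential deadlock, since something outside the thread manager proper (an interrupt or signal handler, for example) could still signal a semaphore.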
The successive requests are for blocks of 12K, 10K and 9K; each policy places them as follows.

a) For first-fit allocation, assuming holes are tried in the order of their memory addresses:

   Holes:      10K   4K    20K   18K   7K    9K    12K   15K
   Blocks:     10K         12K   9K
   New holes:              8K    9K

b) For best-fit allocation:

   Holes:      10K   4K    20K   18K   7K    9K    12K   15K
   Blocks:     10K                           9K    12K
   (all three requests are exact fits, so no new holes result)

c) For worst-fit allocation:

   Holes:      10K   4K    20K   18K   7K    9K    12K   15K
   Blocks:                 12K   10K                      9K
   New holes:              8K    8K                       6K

d) For next-fit allocation, assuming holes are tried in the order of their memory addresses:

   Holes:      10K   4K    20K   18K   7K    9K    12K   15K
   Blocks:                 12K   10K         9K
   New holes:              8K    8K
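The four policies differ only in which hole they select from the free list. The following C sketch illustrates the selection rules; it is a simplification in that the free list is just an array of hole sizes, whereas a real allocator would also track hole addresses, split holes and merge neighbors.

    /* Illustrative sketch of the placement policies discussed above.
       The free "list" is just an array of hole sizes, a deliberate
       simplification of a real allocator's data structure. */

    #include <stdio.h>

    #define NHOLES 8

    /* first fit: the first hole big enough, scanning from hole 'start';
       plain first fit passes start = 0, next fit passes the position
       where the previous search left off */
    int pick_first( int hole[], int n, int req, int start )
    {
        int i;
        for (i = 0; i < n; i++) {
            int j = (start + i) % n;
            if (hole[j] >= req) return j;
        }
        return -1;               /* no hole is big enough */
    }

    /* best fit: the smallest hole that is still big enough */
    int pick_best( int hole[], int n, int req )
    {
        int i, best = -1;
        for (i = 0; i < n; i++)
            if (hole[i] >= req && (best < 0 || hole[i] < hole[best]))
                best = i;
        return best;
    }

    /* worst fit: the largest hole, provided it is big enough */
    int pick_worst( int hole[], int n, int req )
    {
        int i, worst = -1;
        for (i = 0; i < n; i++)
            if (hole[i] >= req && (worst < 0 || hole[i] > hole[worst]))
                worst = i;
        return worst;
    }

    int main( void )
    {
        int hole[NHOLES] = { 10, 4, 20, 18, 7, 9, 12, 15 };  /* sizes in K */
        int req[3] = { 12, 10, 9 };
        int i;

        for (i = 0; i < 3; i++) {          /* first fit, as in part a */
            int j = pick_first( hole, NHOLES, req[i], 0 );
            if (j < 0) {
                printf( "%dK request: no hole is large enough\n", req[i] );
                continue;
            }
            printf( "%dK request -> %dK hole, %dK left over\n",
                    req[i], hole[j], hole[j] - req[i] );
            hole[j] -= req[i];
        }
        return 0;
    }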
- The address of the first byte of the instruction.
- The number of instruction bytes fetched in the current instruction.
- The number of operand bytes loaded or stored by the current instruction.
The address bus would be copied and the two counts would be zeroed at the start of each instruction cycle. The counts would be incremented at the end of each successful memory cycle. When an MMU trap is requested, the address and the two counts would be moved from internal MMU registers to the interface registers that the trap service routine can inspect (typically, the virtual address that caused the trap would also end up in such an interface register).
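In pseudocode, this bookkeeping might look roughly like the sketch below; all of the names are invented for illustration, since the real registers and control signals are machine specific.

    /* Rough sketch of the MMU bookkeeping described above.  All names
       are invented for illustration; the real registers and control
       signals are machine specific. */

    struct mmu_state {
        unsigned long instr_addr;     /* address of the instruction's first byte */
        unsigned int  instr_bytes;    /* instruction bytes fetched so far */
        unsigned int  operand_bytes;  /* operand bytes loaded or stored so far */
    };

    struct mmu_state internal;        /* working registers inside the MMU */
    struct mmu_state interface;       /* registers visible to the trap handler */
    unsigned long    interface_fault_addr;

    /* at the start of each instruction cycle */
    void mmu_start_instruction( unsigned long pc )
    {
        internal.instr_addr    = pc;  /* copy the address bus */
        internal.instr_bytes   = 0;   /* zero both counts */
        internal.operand_bytes = 0;
    }

    /* at the end of each successful memory cycle */
    void mmu_memory_cycle_done( int is_instruction_fetch, unsigned int nbytes )
    {
        if (is_instruction_fetch)
            internal.instr_bytes += nbytes;
        else
            internal.operand_bytes += nbytes;
    }

    /* when the MMU requests a trap */
    void mmu_raise_trap( unsigned long faulting_virtual_addr )
    {
        interface            = internal;              /* expose the saved state */
        interface_fault_addr = faulting_virtual_addr; /* the faulting virtual
                                                         address is exposed too */
        /* ... assert the trap request line ... */
    }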
Part B: How would you use this data?
The address of the first byte of the instruction allows the trap service routine to find the opcode of the instruction that caused the trap. This plus the count of instruction bytes allows it to determine how much of the instruction had been fetched, while the count of operand bytes allows it to find out how far the execution of the instruction got before the trap occurred. Knowing these plus some basic details of the microcode -- these can be determined empirically -- it should be possible to figure out which side effects of the instruction have already taken place and which have not yet been executed.
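A page-fault service routine might act on this information along the lines of the sketch below; the names continue the invented ones from the previous sketch, and both instruction_length() and the side-effect fix-up are, of course, machine specific.

    /* Sketch of a trap service routine using the saved MMU state.  The
       names continue the invented ones above, and instruction_length()
       and the fix-up logic are machine-specific placeholders. */

    struct mmu_state {                    /* as in the previous sketch */
        unsigned long instr_addr;
        unsigned int  instr_bytes;
        unsigned int  operand_bytes;
    };

    extern struct mmu_state interface;           /* saved by the MMU on trap */
    extern unsigned long    interface_fault_addr;

    /* machine-specific table lookup, a placeholder here */
    extern unsigned int instruction_length( unsigned char opcode );

    void page_fault_handler( void )
    {
        /* the saved address lets the handler find the opcode */
        unsigned char opcode = *(unsigned char *) interface.instr_addr;

        if (interface.instr_bytes < instruction_length( opcode )) {
            /* the trap occurred while fetching the instruction itself,
               so none of its side effects have happened yet */
        } else {
            /* the trap occurred during operand access; operand_bytes,
               combined with knowledge of the microcode's ordering of
               memory cycles, tells which side effects have already
               happened and which have not */
        }

        /* bring in the page containing interface_fault_addr, undo or
           complete any partially done side effects, then retry or
           resume the instruction */
    }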