Assignment 4, due Feb 22

Part of the homework for 22C:112, Spring 2008
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Always, on every assignment, please write your name legibly as it appears on your University ID and on the class list! All assignments will be due at the start of class on the day indicated (usually a Friday). The only exceptions to this rule will be by advance arrangement unless there is what insurance companies call "an act of God" - something outside your control. Homework must be turned in on paper and in class! Late work may be turned in to the teaching assistant's mailbox, but see the late work policy. Never push late work under someone's door!

  1. Background: The main program of an X application (that is, an application running under the Unix X-windows system) generally follows this outline:
    	main() {
    		create various window objects, some nested within others
    		call XtAddEventHandler() for each object to add its event handlers
    		for (;;) {
    			XEvent event;
    			XtAppNextEvent( ... , &event );
    			XtDispatchEvent( &event );
    		}
    	}
    

    The window library handles dispatching events to the right handler by determining which window object the event occurred in and then transferring control to that object. Pending X-events are queued in the X-event queue. X-events include such things as keypress events and mouse-click events. The X server process is responsible for merging keypress and mouse-click events into a single stream of events.

    a) Propose (in vague outline) the means by which input events on two different devices could be merged into a single stream of events for delivery to your X application. (0.5 points)

    b) Propose (in similar vague outline) how a subroutine registered as an event handler for an X object could be called by the XtDispatchEvent routine. This has several parts: how would you find the right X object, how would you store the handler's identity in that object, and how would you transfer control to the handler? (0.5 points)

  2. Background: Consider a computer where each device has an interrupt-enable bit and a ready bit in one of its device control and status registers. There is also a global interrupt-enable bit in the CPU's state register. Device X will request an interrupt if it is ready and its own enable bit is set and the global enable bit is set.

    There is no hardware support for the concept of interrupt priorities. There is no hardware support for interrupt vectoring. That is, all interrupts transfer control to a single interrupt service routine that must scan the ready bits of the various devices to see which device requested the interrupt.

    The CPU automatically resets the global interrupt-enable bit as it transfers control to an interrupt service routine. There is a special return from interrupt instruction that enables interrupts as it returns.

    This problem deals with the idea of having the software create a notion of interrupt priority, such that higher priority interrupts could interrupt lower priority interrupt service routines.

    a) In what order should the interrupt service routine scan the ready bits of the various devices in order to create a notion of interrupt priority? (0.3 points)

    b) If higher priority devices are allowed to interrupt lower priority interrupt service routines, the global interrupt-enable bit will need to be set by the interrupt service routine. What must it do first in order to prevent itself or lower priority devices from requesting interrupts? (0.3 points)

    c) For some devices, the device interrupt-enable bit is used to effectively turn off that device. For example, for the printer port, if there is no pending output to the printer, the printer interrupt-enable bit is reset. When an application enqueues a character to print, the printer's interrupt is enabled. How does this complicate the activity required in part b? (0.4 points)

  3. Background: Starting around 1970, some computers were marketed with a machine instruction that caused the CPU to cease its fetch-execute cycle until a device asserted an interrupt request. Let's call this instruction sleep to distinguish it from halt, the usual name for an instruction that simply stopped the CPU. When a CPU is sleeping, its power consumption goes way down compared to when it is running its fetch-execute cycle.

    Consider a system with interrupt-driven input-output and a single-threaded application that communicates with devices through queues. Application access to queues is through predicates to test queue empty and queue full, plus routines to enqueue and dequeue data.

    a) In general, when should the sleep instruction be executed? Your answer should be given in terms of queue states, application activity and device activity. (0.5 points)

    b) Consider the specific case of an application reading characters from an input queue. Where does the sleep instruction go in this code:

    	while (empty(inputqueue)) /* do nothing */;
    	ch = dequeue(inputqueue);
    
    (0.5 points)