Assignment 11 Solutions

Part of the homework for 22C:112, Spring 2012
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

  1. Background: The Unix signal() kernel call (now part of the standard library because lower-level primitives are available in the kernel) can be used to install a handler. For example, signal(SIGALRM, handler) sets things up so that the function handler() is called when the SIGALRM signal is delivered. Use man signal and man sigvec for more information.

    The setitimer() kernel call can be used to set the ITIMER_REAL timer; when this timer expires, the SIGALRM signal is delivered.

    A problem: Write code to install a handler in your program that, once started, will increment the global variable mytimer once per second, doing this independently of all other activity in your program. In effect, you can use the SIGALRM signal and its handler as if they were interrupt service routines. (1 point)

    #include <signal.h>   /* signal(), SIGALRM */
    #include <sys/time.h> /* setitimer(), struct itimerval, ITIMER_REAL */

    int mytimer; /* initial value not specified */

    void ding( int sig ) { /* signal handler */
            mytimer ++; /* this is the whole point of the exercise */
    }

    int main() {
            struct itimerval periodic;
            periodic.it_interval.tv_sec = 1;
            periodic.it_interval.tv_usec = 0; /* the period is 1.0 seconds */
            periodic.it_value.tv_sec = 1;
            periodic.it_value.tv_usec = 0;    /* initial wait is 1.0 second */
            signal( SIGALRM, ding );          /* attach the handler first */
            setitimer( ITIMER_REAL, &periodic, NULL ); /* then set the timer */
            for (;;) { /* an unspecified main program doing anything */
            }
    }
    Note: The above code is tested and works perfectly under Linux. You can add output statements in ding() or in the body of the main program to demonstrate that it is working.

  2. One approach to exploiting a multicore processor with user-level threads is to begin the parallel application by creating a shared-memory segment, building a new heap in that segment, allocating the stack of each thread from that heap, and then forking multiple replica processes.

    The number of replica processes should be no more than the number of CPUs or CPU cores available. It may be fewer if you want to leave some processors free to do other things.
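    The setup above can be sketched in a few lines. This is a minimal illustration, not the assignment's required code; it assumes the shared segment is created with an anonymous MAP_SHARED mapping and that fork() creates the replicas, and it uses a single shared counter in place of a full heap:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROC 4   /* no more than the number of CPU cores */

    int main() {
            /* create the shared segment that will hold the new heap */
            char *shared = mmap( NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_ANONYMOUS, -1, 0 );
            if (shared == MAP_FAILED) { perror( "mmap" ); exit( 1 ); }

            int *counter = (int *)shared; /* one object on the shared heap */
            *counter = 0;

            int i;
            for (i = 0; i < NPROC; i++) {
                    if (fork() == 0) { /* a replica process */
                            /* an update visible to all replicas */
                            __sync_fetch_and_add( counter, 1 );
                            _exit( 0 );
                    }
            }
            for (i = 0; i < NPROC; i++) wait( NULL );
            printf( "%d\n", *counter ); /* all 4 increments are visible */
            return 0;
    }
    ```

    Because the mapping is MAP_SHARED, the increments made by the children are visible in the parent; had the counter been an ordinary global variable, each child would have incremented its own private copy.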

    a) Under the above scheme, which of the following are shared (or may be shared) and which are private?

    (0.5 points)

    Code will naturally be shared because it is read-only and therefore trivial to share. Global variables are not shared. Because the new heap is in the shared segment and the stacks for the threads are allocated on that heap, local variables of functions called by threads are inherently sharable. Dynamically allocated objects on the default heap are not shared, but objects on the new heap are shared.

    The most common answer was that global variables (the static segment) were shared -- but the problem statement made it clear that the only shared variables were in a newly allocated shared heap.

    The most common answer was that local variables were private -- but the problem statement made it clear that the stack of each thread was allocated in the shared heap, and therefore, local variables are in shared memory and may be shared.

    The most common answer was that objects dynamically allocated beforehand are shared -- yet they must have been allocated on the old heap in the static segment, and since the static segment is not shared, they are not shared.

    The most common answer was that objects dynamically allocated afterward are private -- yet the problem statement made it clear that the shared segment is used for a new heap, and therefore, allocations made in this heap are potentially shared.

    b) Under the above, if a thread calls, for example, the read() kernel call, and the call blocks, what are the consequences for the other threads? (0.5 points)

    Calling read() (or any other kernel call that blocks the calling process) from within a thread will block the process running that thread, but will leave the other processes unblocked, so the threads they are running will continue to run.
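    This can be demonstrated with processes (a hypothetical stand-in for the replica processes running the threads). One replica blocks on a read() from an empty pipe while a sibling replica runs to completion unhindered:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
            int fd[2];
            pipe( fd ); /* nothing is written until the parent says so */

            pid_t blocker = fork();
            if (blocker == 0) { /* this replica's thread calls read() */
                    char c;
                    read( fd[0], &c, 1 ); /* blocks: the pipe is empty */
                    _exit( 0 );
            }
            pid_t worker = fork();
            if (worker == 0) { /* a sibling replica */
                    _exit( 42 ); /* runs to completion regardless */
            }

            int status;
            waitpid( worker, &status, 0 ); /* the worker finishes first */
            printf( "worker done: %d\n", WEXITSTATUS( status ) );
            write( fd[1], "x", 1 ); /* only now unblock the reader */
            waitpid( blocker, NULL, 0 );
            printf( "blocker done\n" );
            return 0;
    }
    ```

    The worker exits while the blocker is still stalled in read(); had the threads been multiplexed on one process, the blocking call would have stalled them all.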

  3. Background: Suppose you have a thread package whose only synchronization primitives are blocking enqueue and dequeue operations on queues, where dequeue blocks the calling thread until data is available, and no others.

    a) How can a user of this thread package implement the semaphore operations wait(s) and signal(s)? Your suggested implementation must, of course, include a representation for semaphores. (Hint: the answer is almost trivial; no if statements or loops are required.) (0.5 points)

    Each semaphore is represented by a queue. Waiting on a semaphore is done by dequeueing from the queue representing that semaphore (the data dequeued is ignored), while signalling a semaphore is done by enqueueing one (nonsense) pointer on the queue.
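    As a sketch, assuming the package's primitives are named thread_enqueue() and thread_dequeue() (hypothetical names; the real names come from the thread package). The stub queue below does not block, whereas the package's dequeue blocks on an empty queue, which is what makes wait(s) block when the count is zero:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Stub of the thread package's queue, with hypothetical names.
     * In the real package, thread_dequeue() blocks until data arrives;
     * this stub only demonstrates the data flow. */
    typedef struct { void *slots[64]; int head, tail; } thread_queue;

    void thread_enqueue( thread_queue *q, void *p ) {
            q->slots[q->tail++ % 64] = p;
    }
    void *thread_dequeue( thread_queue *q ) {
            return q->slots[q->head++ % 64];
    }

    /* The semaphore implementation: a semaphore is just a queue. */
    typedef thread_queue semaphore;
    static char token; /* nonsense data; only its arrival matters */

    void my_signal( semaphore *s ) { thread_enqueue( s, &token ); }
    void my_wait( semaphore *s )   { (void)thread_dequeue( s ); } /* ignored */

    int main() {
            semaphore s = { .head = 0, .tail = 0 };
            my_signal( &s ); /* one item queued: count is 1 */
            my_signal( &s ); /* count is 2 */
            my_wait( &s );   /* would not block: data is available */
            my_wait( &s );
            assert( s.head == s.tail ); /* queue empty: count back to 0 */
            printf( "ok\n" );
            return 0;
    }
    ```

    The count of the semaphore is exactly the number of items currently in the queue, so no if statements or loops are needed.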

    Many wrong answers tried to use one queue as a ready list and associate a queue of waiting processes with each semaphore. This does not work and cannot be made to work!

    b) From the above, can you infer an appropriate implementation of queues that is likely to be used in the implementation of this thread package? Describe, at minimum, the data structure used. (0.5 points)

    There is no hint that the queue can fill, suggesting some kind of linked list. This, in turn, requires that the queue object itself have, for example, a head and tail pointer, plus a mutual exclusion semaphore and a semaphore to count the number of data items in the queue. Dequeue would wait for data before claiming the lock on the queue. Enqueue would signal that data has been enqueued after releasing the lock on the queue.
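    The data structure just described can be sketched as follows. This is a single-threaded illustration with assumed names: the semaphore here is a bare counter standing in for the package's real semaphores, whose wait operation would block the calling thread at zero instead of asserting:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Single-threaded stand-in for real semaphores: the real my_wait()
     * would block the calling thread while value is 0. */
    typedef struct { int value; } semaphore;
    void my_wait( semaphore *s )   { assert( s->value > 0 ); s->value--; }
    void my_signal( semaphore *s ) { s->value++; }

    typedef struct node { struct node *next; void *item; } node;

    typedef struct {
            node *head, *tail;
            semaphore mutex; /* initially 1: guards head and tail */
            semaphore count; /* initially 0: items in the queue */
    } queue;

    void enqueue( queue *q, void *item ) {
            node *n = malloc( sizeof *n );
            n->item = item;
            n->next = NULL;
            my_wait( &q->mutex );   /* claim the lock */
            if (q->tail) q->tail->next = n; else q->head = n;
            q->tail = n;
            my_signal( &q->mutex ); /* release the lock first, */
            my_signal( &q->count ); /* then announce the new data */
    }

    void *dequeue( queue *q ) {
            my_wait( &q->count );   /* wait for data before locking */
            my_wait( &q->mutex );
            node *n = q->head;
            q->head = n->next;
            if (q->head == NULL) q->tail = NULL;
            my_signal( &q->mutex );
            void *item = n->item;
            free( n );
            return item;
    }

    int main() {
            queue q = { NULL, NULL, { 1 }, { 0 } };
            enqueue( &q, "first" );
            enqueue( &q, "second" );
            char *a = dequeue( &q );
            char *b = dequeue( &q );
            printf( "%s %s\n", a, b );
            return 0;
    }
    ```

    Note the ordering argued above: dequeue waits on count before claiming mutex, and enqueue signals count only after releasing mutex, so a consumer is never blocked on the lock while holding no data and no producer deadlocks against a waiting consumer.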

    Many students suggested a queue implementation but did not realize that it also needed to include synchronization.