Assignment 12, due Apr. 20

Solutions

Part of the homework for CS:3620, Spring 2018
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

On all assignments, your name must be legible as it appears on your University ID card! Assignments are due at the start of class on the day indicated (usually Friday). Exceptions will be by advance arrangement unless there is what lawyers call "an act of God" (something outside your control). Homework must be turned in on paper, either in class or in the teaching assistant's mailbox. Never push late work under someone's door!

  1. Background: The Unix/Linux fork() kernel call returns the process ID of the child process to the parent process, and the wait() kernel call returns the process ID of the process that terminated. waitpid() allows you to wait for a specific child to terminate.

    a) Write code to fork a process into parent and child, then have the parent wait for that specific child using waitpid(). (0.5 points)

    pid_t pid = fork();
    if (pid) { /* parent */
    	waitpid( pid, NULL, 0 );
    } else {   /* child */
    	... whatever ...
    }
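    The fragment above can be fleshed out into a complete, compilable program; the exit code 42 below is just a stand-in for the child's real work, and a check for fork() failure is added for good measure:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <assert.h>

    int main( void ) {
    	pid_t pid = fork();
    	if (pid < 0) {          /* fork failed */
    		perror( "fork" );
    		exit( EXIT_FAILURE );
    	} else if (pid) {       /* parent */
    		int status;
    		pid_t who = waitpid( pid, &status, 0 );
    		assert( who == pid );                  /* that specific child */
    		assert( WIFEXITED( status ) );
    		assert( WEXITSTATUS( status ) == 42 ); /* the child's exit code */
    	} else {                /* child */
    		exit( 42 );     /* stand-in for useful work */
    	}
    	return 0;
    }
    ```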
    

    b) Suppose you wanted to use waitpid() but you only had wait(). Write code for waitpid() using a call to wait(). This is not trivial because it is possible that the user calls waitpid(A) and then waitpid(B), but process B terminates before process A. (0.5 points)

    First, we'll ignore the options and ignore the second sentence:

    pid_t waitpid( pid_t pid, int *status, int ignored ) {
    	while( pid != wait( status ) ) { /* do nothing */ }
    	return pid;
    }
    

    The above is inadequate because if you have two children, a and b, and you do waitpid(a,NULL,0) but process b terminates first, the fact that b terminated will be forgotten and a call to waitpid(b,NULL,0) will block forever. So, we need to keep an eye on all of the processes that we learn are done in order to wait for the one. Here's an outline of the code:

    typedef struct pid_record {
    	struct pid_record * next;
    	pid_t pid;
    	int stat;
    } pid_record;
    
    pid_record * done = NULL;
    
    pid_t waitpid( pid_t pid, int * status, int ignored ) {
    	pid_record ** prev = &done;
    	pid_record * thisrec = done;
    	while (thisrec != NULL) { /* loop exits by return or by NULL */
    		if (thisrec->pid == pid) {
    			if (status != NULL) *status = thisrec->stat;
    			*prev = thisrec->next; /* unlink the used-up record */
    			free( thisrec );
    			return pid;
    		}
    		prev = &thisrec->next;
    		thisrec = thisrec->next;
    	}
    	for (;;) { /* loop exit is by return */
    		int thisstat;
    		pid_t thispid = wait( &thisstat );
    		if (thispid == pid) {
    			if (status != NULL) *status = thisstat;
    			return pid;
    		} else {
    			pid_record * newrec = malloc( sizeof( pid_record ) );
    			newrec->next = done;
    			newrec->pid = thispid;
    			newrec->stat = thisstat;
    			done = newrec;
    		}
    	}
    }
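    Here is a runnable check of the out-of-order scenario from the problem statement. The function is renamed my_waitpid only to avoid clashing with the library version, and the child exit codes are invented for the test; the list-based logic matches the outline above.

    ```c
    #include <assert.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    typedef struct pid_record {
    	struct pid_record * next;
    	pid_t pid;
    	int stat;
    } pid_record;

    static pid_record * done = NULL;

    static pid_t my_waitpid( pid_t pid, int * status ) {
    	pid_record ** prev = &done;
    	for (pid_record * r = done; r != NULL; prev = &r->next, r = r->next) {
    		if (r->pid == pid) {            /* already reaped earlier */
    			if (status) *status = r->stat;
    			*prev = r->next;        /* unlink the record */
    			free( r );
    			return pid;
    		}
    	}
    	for (;;) {                              /* reap until we see pid */
    		int thisstat;
    		pid_t thispid = wait( &thisstat );
    		if (thispid == pid) {
    			if (status) *status = thisstat;
    			return pid;
    		}
    		pid_record * n = malloc( sizeof( pid_record ) );
    		n->next = done; n->pid = thispid; n->stat = thisstat;
    		done = n;                       /* remember it for later */
    	}
    }

    int main( void ) {
    	pid_t a = fork();
    	if (a == 0) { sleep( 1 ); exit( 1 ); }  /* child a finishes last */
    	pid_t b = fork();
    	if (b == 0) { exit( 2 ); }              /* child b finishes first */

    	int stat;
    	assert( my_waitpid( a, &stat ) == a );  /* reaps b first, remembers it */
    	assert( WEXITSTATUS( stat ) == 1 );
    	assert( my_waitpid( b, &stat ) == b );  /* found on the done list */
    	assert( WEXITSTATUS( stat ) == 2 );
    	return 0;
    }
    ```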
    

  2. Background: Recall that a set of processes is deadlocked if every process in that set is blocked waiting for some action that can only be done by some other process in that set.

    For each set of conditions discussed below, is automatic deadlock detection possible? If yes, how? In this case, don't give an algorithm, but give enough detail that you can convert the problem to one where there are known algorithms that solve the problem. If no, why not?

    a) All memory is shared between all processes, with processes using semaphores for mutual exclusion and synchronization. (0.5 points)

    Semaphores are objects in memory, so the shared memory implies that any process can potentially signal any semaphore. As a result, the system cannot tell what process is waiting for what other process to do something, so it cannot detect deadlocks except in the ultimate case where all processes are waiting and none are ready or running.
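    A small illustration of the point, with threads standing in for processes (all names here are invented). Each thread holds one semaphore and then waits on the other's, in opposite orders; sem_timedwait() is used only so this demo can observe the deadlock and still terminate, since all the real system can see is that everyone is waiting:

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <pthread.h>
    #include <semaphore.h>
    #include <time.h>
    #include <unistd.h>

    static sem_t s1, s2;
    static int timeouts = 0;

    static void * worker( void * arg ) {
    	sem_t * first  = arg ? &s1 : &s2;  /* opposite orders in each thread */
    	sem_t * second = arg ? &s2 : &s1;
    	sem_wait( first );                 /* grab one semaphore */
    	sleep( 1 );                        /* let the other thread grab its first */
    	struct timespec t;
    	clock_gettime( CLOCK_REALTIME, &t );
    	t.tv_sec += 2;                     /* give up after 2 seconds */
    	if (sem_timedwait( second, &t ) == -1 && errno == ETIMEDOUT) {
    		__sync_fetch_and_add( &timeouts, 1 );
    	}
    	return NULL;
    }

    int main( void ) {
    	pthread_t a, b;
    	sem_init( &s1, 0, 1 );
    	sem_init( &s2, 0, 1 );
    	pthread_create( &a, NULL, worker, (void *) 1 );
    	pthread_create( &b, NULL, worker, (void *) 0 );
    	pthread_join( a, NULL );
    	pthread_join( b, NULL );
    	assert( timeouts == 2 );  /* both sides stuck: a deadlock */
    	return 0;
    }
    ```

    Nothing in the semaphores themselves records that only the other thread could have done the missing signal, which is exactly why the system cannot build a wait-for graph here.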

    b) Memory is not shared. Interprocess communication is through shared files, with the Unix/Linux flock kernel call being used for mutual exclusion and synchronization. Ignore the LOCK_NB option that allows for non-blocking operations (polling), so locks have only 3 states: unlocked, shared or exclusive. (0.5 points)

    The system knows which processes have each file open, so if any process is blocked while attempting to lock a file, the system knows that only the processes that hold locks on this file can unblock the waiting process. (Either the one process holding an exclusive lock or all of the group holding shared locks.) This means that the system has all of the information needed to detect deadlocks.
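    The three lock states can be seen directly (the filename below is invented). Note that flock() locks attach to the open file description, so two separate open() calls on the same file conflict even within one process; and while the problem says to ignore LOCK_NB, it is used here only to probe who would block on whom without actually blocking:

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main( void ) {
    	int fd1 = open( "/tmp/flock_demo", O_CREAT | O_RDWR, 0600 );
    	int fd2 = open( "/tmp/flock_demo", O_RDWR );
    	assert( fd1 >= 0 && fd2 >= 0 );

    	/* an exclusive lock excludes everyone else, shared or exclusive */
    	assert( flock( fd1, LOCK_EX ) == 0 );
    	assert( flock( fd2, LOCK_EX | LOCK_NB ) == -1 && errno == EWOULDBLOCK );
    	assert( flock( fd2, LOCK_SH | LOCK_NB ) == -1 && errno == EWOULDBLOCK );

    	/* shared locks coexist with each other */
    	assert( flock( fd1, LOCK_SH ) == 0 );   /* downgrade to shared */
    	assert( flock( fd2, LOCK_SH | LOCK_NB ) == 0 );

    	/* but any shared holder blocks a would-be exclusive holder */
    	assert( flock( fd1, LOCK_EX | LOCK_NB ) == -1 && errno == EWOULDBLOCK );

    	flock( fd1, LOCK_UN );
    	flock( fd2, LOCK_UN );
    	close( fd1 ); close( fd2 );
    	unlink( "/tmp/flock_demo" );
    	return 0;
    }
    ```

    Since the kernel mediates every one of these transitions, it can record, for each blocked locker, exactly which lock holders it is waiting on -- the edges of a wait-for graph.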

  3. Background: Consider the following (invented) tools for client-server interaction on a shared-memory system:

    reply_handle = send_request( object_handle, server_ID )
    sends the indicated object as a request to the indicated process, which must act as a server and then await a reply. After the server replies, send_request returns the object that was included in the reply. Requests are queued and held until the server is ready to receive them.

    (object_handle, request_ID) = await_request()
    awaits a request from any client. The return value includes both the object handle sent with the request and the ID of the requesting process. (OK, the notation is a bit invented, but this function returns two distinct values.)

    send_reply( reply_handle, request_ID )
    sends a reply to the indicated client. The client must have sent a request to this process, and the value of request_ID must be a value returned by await_request(). The reply_handle will be sent to the requesting process and returned by send_request() to that process.

    In this system, from a user perspective, each process may be either running, awaiting a request, or awaiting a reply. There may be additional process states that matter to the system in implementing these primitives. Assume that the implementation of these operations is done using semaphores. All semaphores involved in implementing these primitives go in the process description record.

    a) How many semaphores are required in each process description, who signals each of them, and who waits on each of them? Don't forget that there is a mutual exclusion problem hiding in the above description. (0.5 points)

    Each process needs a semaphore it uses to wait for a request (the counting semaphore on that process's incoming request queue). This is signalled by the process sending the request.

    There must be a mutual exclusion semaphore on each process's request queue to prevent that queue from being corrupted by two processes trying to do something at once (two processes making simultaneous requests or one making a request while the recipient is trying to dequeue a request). Any process trying to enqueue or dequeue from the request queue must wait on this, then signal it when done with the operation.

    A process that sends a request must then wait for a reply, so it needs a semaphore to wait on. Call it the reply semaphore. The server signals this semaphore and the client waits on it.

    b) What would be a sensible way to implement the queue of processes that have sent requests to a server that have not yet been delivered to that server? Hint: Think about how many requests a process may have outstanding at any time and how many processes may be requesting service from any particular server. (0.5 points)

    Since no process has more than one outstanding request, the request queue can be a queue of process descriptions. That is to say, we don't need to introduce any new classes into our operating system.
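    Both answers can be sketched together, with threads standing in for processes and plain integers for object handles (all the names here are invented). Each process record carries the three semaphores from part (a), and, per part (b), the request queue is threaded through the process records themselves:

    ```c
    #include <assert.h>
    #include <pthread.h>
    #include <semaphore.h>
    #include <stddef.h>

    typedef struct process {
    	sem_t request_sem;     /* counts queued requests; server waits, client signals */
    	sem_t queue_mutex;     /* guards the request queue; en/dequeuers wait then signal */
    	sem_t reply_sem;       /* client waits here for its reply; server signals */
    	struct process * next; /* link in some server's request queue */
    	int request;           /* the object handle sent with the request */
    	int reply;             /* the object handle sent back */
    	struct process * queue_head, * queue_tail;  /* this process's incoming queue */
    } process;

    static process procs[2];   /* [0] = client, [1] = server */

    static void proc_init( process * p ) {
    	sem_init( &p->request_sem, 0, 0 );
    	sem_init( &p->queue_mutex, 0, 1 );
    	sem_init( &p->reply_sem, 0, 0 );
    	p->queue_head = p->queue_tail = NULL;
    }

    static int send_request( int handle, process * server, process * self ) {
    	self->request = handle;
    	self->next = NULL;
    	sem_wait( &server->queue_mutex );   /* enqueue self on server's queue */
    	if (server->queue_tail) server->queue_tail->next = self;
    	else server->queue_head = self;
    	server->queue_tail = self;
    	sem_post( &server->queue_mutex );
    	sem_post( &server->request_sem );   /* one more request pending */
    	sem_wait( &self->reply_sem );       /* await the reply */
    	return self->reply;
    }

    static process * await_request( process * self, int * handle ) {
    	sem_wait( &self->request_sem );     /* await some request */
    	sem_wait( &self->queue_mutex );     /* dequeue the requester */
    	process * client = self->queue_head;
    	self->queue_head = client->next;
    	if (self->queue_head == NULL) self->queue_tail = NULL;
    	sem_post( &self->queue_mutex );
    	*handle = client->request;
    	return client;                      /* stands in for request_ID */
    }

    static void send_reply( int handle, process * client ) {
    	client->reply = handle;
    	sem_post( &client->reply_sem );
    }

    static void * server_main( void * arg ) {
    	int handle;
    	process * client = await_request( &procs[1], &handle );
    	send_reply( handle + 1, client );   /* reply: the handle plus one */
    	return NULL;
    }

    int main( void ) {
    	proc_init( &procs[0] );
    	proc_init( &procs[1] );
    	pthread_t srv;
    	pthread_create( &srv, NULL, server_main, NULL );
    	int reply = send_request( 41, &procs[1], &procs[0] );
    	pthread_join( srv, NULL );
    	assert( reply == 42 );
    	return 0;
    }
    ```

    Because a blocked client sits in exactly one server's queue, the enqueue costs one pointer field per process record, not a separately allocated queue node.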