Assignment 9, Solutions

Part of the homework for 22C:116, fall 2002
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

  1. Background: Problem 2 on the midterm, part B, can be restated in the following terms: Consider a system using demand paged virtual memory. When a page is in main memory, the page table entry is marked as valid and it contains the physical address of the page. When a page has been swapped out to disk, the page table entry is marked as invalid and it contains the disk sector number holding the page.

    A Problem: If each process has its own virtual address map, does this make it difficult to implement shared pages? Why, and why does your answer change when you segment the address map and share entire segments?

    Note: One problem arises equally with shared pages and with shared segments: under either scheme, a block of shared memory may be assigned different virtual addresses in the two processes' address maps, and this leads to trouble with linked lists or other data structures containing shared pointers. Clearly, this is not the issue!

    If each process has its own virtual address map, where some map entries hold disk addresses and others hold page frame numbers, then when one process copies a shared page from disk to main memory, it has no way of updating the maps of other processes that share that page! This is a serious difficulty!

    In contrast, if we share segments, the page table for a shared segment is not duplicated; it exists once and is pointed to by each sharer's address map. A change to the page table entry for any page of that segment, such as marking it valid when the page is brought into main memory, is therefore seen immediately by all processes that share the segment.
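
    To make the contrast concrete, here is a minimal sketch of the two kinds of maps in C. The type names and field layouts are invented for illustration and are not taken from any particular system.

      /* Illustrative declarations only; names and sizes are made up. */
      #include <stdint.h>
      #include <stdbool.h>

      typedef struct {
          bool     valid;    /* true: where holds a page frame number    */
          uint32_t where;    /* false: where holds a disk sector number  */
      } page_table_entry;

      /* Flat scheme: each process owns a private copy of every entry.
         If two processes share a page, bringing it into memory updates
         only the copy in the process that took the page fault. */
      typedef struct {
          page_table_entry page[1024];
      } flat_address_map;

      /* Segmented scheme: the per-process segment table holds pointers
         to page tables.  A shared segment's page table exists once, so
         marking one of its pages valid is seen by every sharer. */
      typedef struct {
          page_table_entry *page_table;   /* possibly shared      */
          unsigned          pages;        /* length of that table */
      } segment_table_entry;

      typedef struct {
          segment_table_entry segment[64];
      } segmented_address_map;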

  2. Background: The notes for Lecture 22 include an example of protocol bloat where transmitting an 8-bit character through the session layer requires transmission of 286 bits over the physical communications medium.

    Note that the packet sizes imposed by the formats for the session, transport and network layers are limited to 256 bytes because of the use of 1-byte message-size fields. The link layer uses reserved STX, ETX and EOT characters (presumably, it also uses some kind of escape codes when these special characters are present in the data from upper layers, but you can ignore that detail here).

    For this problem, we define the utilization of the physical layer to be the ratio of actual user data sent (from above the session layer) to total bits transmitted over the physical layer.

    Problem: What is the maximum utilization possible, given the limits on packet size set by each of the layers? Assume that there are only two users of the network, and that these users exchange packets alternately and as fast as possible. Ignore the time taken by all computations.

    The block handed to the network layer is limited to 256 bytes by the network layer's 1-byte message-size field. The network layer adds 5 bytes of header and the link layer adds 5 more, for a total of 266 bytes sent over the link; at 11 bits per byte, that is 2926 bits on the physical medium.

    Above the network layer, the transport layer adds 9 bytes of header and the session layer adds 6 more, for a total of 15 bytes. Subtracting this from the 256-byte block sent through the network layer leaves 241 bytes as the block size at the presentation or application layer. At 8 bits per byte, this is 1928 bits of user data per block.

    The maximum utilization is therefore 1928/2926, or about 0.66.
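
    The arithmetic can be checked mechanically; the following few lines of C simply restate the numbers used above.

      /* Recompute the utilization figure quoted above. */
      #include <stdio.h>

      int main(void) {
          int network_payload = 256;                /* limit from 1-byte size field  */
          int link_bytes = network_payload + 5 + 5; /* network + link layer overhead */
          int wire_bits  = link_bytes * 11;         /* 11 bits per byte on the link  */

          int user_bytes = network_payload - 9 - 6; /* strip transport and session   */
          int user_bits  = user_bytes * 8;          /* user data at 8 bits per byte  */

          printf("%d / %d = %.2f\n", user_bits, wire_bits,
                 (double) user_bits / wire_bits);   /* prints 1928 / 2926 = 0.66 */
          return 0;
      }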

  3. Background: Consider the problem of implementing the Demos model of interprocess communication under our example thread manager. Assume that messages are passed by copying from the sender's buffer to the receiver's buffer and that the sender blocks until the message has been copied to the recipient (this is called synchronous message passing).

    Part A: Synchronous message passing involves the sender and receiver both blocking until the message has been conveyed. Given that our thread package has semaphores, these must be the tools we use to block. Give a protocol allowing the sender and receiver of a message to block at the right times; in giving this, you must specify what semaphores are used, what components they are associated with, and where the wait and signal operations go relative to the code for copying the buffers.

    The sender must enqueue a message notice in the receiver's queue. This notice will include material related to the message, plus a semaphore (owned by the sender) that the receiver will signal when the message has been received.

    The sender's semaphore can be a local variable of the send routine, as can the message notice itself, since these are only needed during the time the sender is blocked.

    The receiver's semaphore is associated with the receiver's message queue and counts the notices waiting there. The sender will place the message notice in the receiver's queue, signal the receiver's semaphore, and then wait on the sender's semaphore. The receiver will wait on the receiver's semaphore, dequeue the message notice, and copy the message from the sender's buffer to the receiver's buffer before signalling the sender's semaphore.

    Part B: Present an appropriate data structure (or appropriate data structures) for representing a request by a sender to send a message to a receiver. This may contain some combination of data, pointers to data, links, pointers to links, and semaphores or pointers to semaphores.

    The request contains:
    A next field (pointer to message) used to make queues of messages.
    The sender's buffer address.
    The sender's buffer size in bytes.
    The sender's semaphore, to be signalled when the message is received.

    Note that the semaphore was included directly in the request, instead of including a pointer to a semaphore. This was feasible because we assumed that the semaphore was transient, allocated and initialized anew for each send and deallocated as soon as the signal indicating message receipt was delivered. In fact, the entire request can be safely stored in the send routine's activation record.
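
    The following sketch in C shows both the request record of Part B and the wait/signal protocol of Part A. POSIX semaphores (sem_t) stand in here for the semaphores of our example thread package, and the queue operations are written out inline; the function and type names are invented for this sketch.

      /* Sketch of Parts A and B.  sem_t stands in for the thread
         package's semaphore type; with a preemptive thread package the
         queue operations below would also need mutual exclusion. */
      #include <semaphore.h>
      #include <stddef.h>
      #include <string.h>

      /* Part B: one send request, built in the sender's activation record. */
      typedef struct message_notice {
          struct message_notice *next;  /* link for the receiver's queue       */
          const char *buffer;           /* sender's buffer address             */
          size_t      length;           /* sender's buffer size in bytes       */
          sem_t       done;             /* signalled when the copy is complete */
      } message_notice;

      /* Initialize with head = tail = NULL and sem_init(&available, 0, 0). */
      typedef struct mailbox {
          message_notice *head, *tail;  /* queue of pending notices            */
          sem_t           available;    /* counts notices waiting in the queue */
      } mailbox;

      static void enqueue(mailbox *m, message_notice *n) {
          n->next = NULL;
          if (m->tail) m->tail->next = n; else m->head = n;
          m->tail = n;
      }

      static message_notice *dequeue(mailbox *m) {
          message_notice *n = m->head;
          m->head = n->next;
          if (m->head == NULL) m->tail = NULL;
          return n;
      }

      /* Part A: the sender blocks on its own (local) semaphore ... */
      void send_message(mailbox *to, const char *buf, size_t len) {
          message_notice n;             /* lives in this activation record */
          n.buffer = buf;
          n.length = len;
          sem_init(&n.done, 0, 0);

          enqueue(to, &n);              /* 1. post the notice              */
          sem_post(&to->available);     /* 2. wake the receiver            */
          sem_wait(&n.done);            /* 3. block until the copy is done */
          sem_destroy(&n.done);
      }

      /* ... and the receiver signals that semaphore only after the copy. */
      size_t receive_message(mailbox *from, char *buf, size_t maxlen) {
          sem_wait(&from->available);        /* 1. wait for a notice       */
          message_notice *n = dequeue(from); /* 2. take it off the queue   */
          size_t len = n->length < maxlen ? n->length : maxlen;
          memcpy(buf, n->buffer, len);       /* 3. copy sender to receiver */
          sem_post(&n->done);                /* 4. unblock the sender      */
          return len;
      }

    Note that the notice and its semaphore are locals of send_message, exactly as argued above, so nothing needs to be allocated from a heap.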

    Notice: This is all in preparation for an upcoming programming problem.

  4. Problems from the Text:
    Problem 22 on page 580.
    Imagine a remote procedure that executes a call to open() and then tries to return the open file descriptor to the caller. The open() runs on the server, so it opens a file there, not on the client's machine, and the descriptor it returns is meaningless in the client's process. If the client does a read() or write() using that descriptor, the result is entirely unpredictable, although it is easy to predict that the result will have nothing to do with this open operation!

    What we must do is guarantee that each system call that is supposed to be executed on the client's machine really is executed there. This means that the client must also offer a server to execute such calls, so that the remote server can be passed remote-procedure-call handles into the client's machine and use them to invoke those system calls where they belong.
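
    The following C fragment models the idea with ordinary function pointers rather than a real RPC package; the names are invented, and in a real system the callback structure would hold reverse-RPC handles into a small server that the client itself runs.

      /* Model of the problem and the fix; illustration only. */
      #include <fcntl.h>
      #include <unistd.h>

      /* Calls the remote server may direct back at the client; in a real
         system these would be remote procedure calls into a server that
         the client offers. */
      typedef struct {
          int     (*open_on_client)(const char *path, int flags);
          ssize_t (*read_on_client)(int fd, void *buf, size_t len);
      } client_callbacks;

      /* Broken: open() runs on the server, so the descriptor it returns
         names a file open in the SERVER's process and means nothing to
         the caller back on the client. */
      int remote_open_broken(const char *path) {
          return open(path, O_RDONLY);
      }

      /* Repaired: the server asks the client's own server to do the
         open, so the descriptor is valid where it will actually be used. */
      int remote_open_fixed(const client_callbacks *client, const char *path) {
          return client->open_on_client(path, O_RDONLY);
      }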

    Problem 32 on page 581.

    First, we could defer closing the connection to the web server while we examine the web page to see if there are any further materials needed from that server. On finding any, we could then use the already-open connection to pull these additional materials from the server, thus saving the cost of closing and reopening a session from client to server. Of course, we must modify the server to accept additional requests over an already open link.
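
    A rough sketch of the first idea, in C: the host name and resource paths are placeholders, and the response handling is deliberately simplified (a real client would parse Content-Length or chunked encoding to find the end of the first response, and would examine that HTML before deciding what to fetch next).

      /* Reuse one TCP connection for the page and an embedded resource. */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <netdb.h>
      #include <sys/socket.h>

      static int connect_to(const char *host, const char *port) {
          struct addrinfo hints = { 0 }, *res;
          hints.ai_socktype = SOCK_STREAM;
          if (getaddrinfo(host, port, &hints, &res) != 0) return -1;
          int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
          if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
              close(fd);
              fd = -1;
          }
          freeaddrinfo(res);
          return fd;
      }

      int main(void) {
          int fd = connect_to("www.example.com", "80");  /* placeholder host */
          if (fd < 0) return 1;

          /* First request; keep-alive asks the server to hold the
             connection open for further requests. */
          const char *req1 = "GET /index.html HTTP/1.1\r\n"
                             "Host: www.example.com\r\n"
                             "Connection: keep-alive\r\n\r\n";
          write(fd, req1, strlen(req1));

          /* ... here the client would read the page and scan it for
             images or style sheets served by the same host ... */

          /* Second request for such a resource, over the SAME connection,
             saving a connection teardown and setup. */
          const char *req2 = "GET /logo.gif HTTP/1.1\r\n"
                             "Host: www.example.com\r\n"
                             "Connection: close\r\n\r\n";
          write(fd, req2, strlen(req2));

          char buf[4096];
          ssize_t n;
          while ((n = read(fd, buf, sizeof buf)) > 0)
              fwrite(buf, 1, n, stdout);   /* both responses, unparsed */
          close(fd);
          return 0;
      }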

    Second, we could have the server examine the HTML it serves to clients. If it finds a reference in that HTML to any graphics or style sheets on the same server, it could push those over the network to the client in anticipation of the client's need. Of course, the client would need to be modified to cache these unrequested materials and check that cache before going back to the server.