# Assignment 12, due Nov. 22

## Solutions

On all assignments, your name must be legible as it appears on your University ID card! Assignments are due at the start of class on the day indicated (usually Friday). Exceptions will be by advance arrangement unless there is what lawyers call "an act of God" (something outside your control). Homework must be turned in on paper, either in class or in the teaching assistant's mailbox. Never push late work under someone's door!

1. Background: Consider the buddy system implementation distributed in the skeleton solution to MP6, along with the mymalloc code that converts between the malloc()/free() API and the requirements of the buddy system.

a) A design choice was made here: the free-list index is passed to the buddy system interface routines instead of the size of the desired block. This makes mymalloc() do extra work computing the log2 of the block size. Does this lead to an overall savings in computational effort, or does it merely move work from buddyalloc() to mymalloc()? Explain. (0.5 points)

It leads to an overall savings. If buddyalloc() were passed a size parameter, each recursive call would have to compute the log of the size. Doing this just once in mymalloc() eliminates the repeated computation.
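For concreteness, the computation mymalloc() does once can be sketched like this (a sketch only; buddy_index() and MIN_LOG2 are illustrative names, not identifiers from the MP6 skeleton):

```c
#include <stddef.h>

#define MIN_LOG2 5  /* assumed smallest block: 2**5 = 32 bytes */

/* smallest i with (1 << i) >= size; mymalloc() would compute this
 * once and pass i down to the buddy system interface routines */
static int buddy_index( size_t size ) {
    int i = MIN_LOG2;
    while (((size_t)1 << i) < size) i++;
    return i;
}
```

Each recursive call inside buddyalloc() can then work with i directly, splitting blocks by passing i - 1 downward, with no further logarithm computations.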

b) In buddyfree() the code contains a loop searching the free list. Why wasn't the loop written like this?

```c
/* search for buddy in free list */
b = freelists[i];
for (;;) { /* loop exit by break */
    if (b == NULL) {

        /* put p at start of list */
        *(void **)p = freelists[i];
        freelists[i] = p;

        /* quit loop */
        break;
    }
    if (((intptr_t)p) == (((intptr_t)b) ^ (1 << i))) {

        ... etc ...

        /* free result of merger */
        buddyfree( p, i + 1 );
        break;
    }
    /* walk down list */
    b = *(void **)b;
}
```
(0.5 points)

The elided code (replaced above with ... etc ...) included code to unlink the block at address b from the free list. If we know only the address b, and not the address of the pointer to b, then unlinking the block from the free list is difficult, to say the least. Using a pointer to a pointer makes this much simpler, at the admitted cost of making the code harder to read.
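The pointer-to-a-pointer idiom can be sketched in isolation (list_remove() is a hypothetical demonstration function, not the actual buddyfree() code; it assumes, as in the buddy system, that each free block stores its link in its first word):

```c
#include <stddef.h>

/* pb always holds the address of the pointer that currently points at
 * the block under inspection, so unlinking is a single assignment */
static void *list_remove( void **head, void *target ) {
    void **pb = head;
    while (*pb != NULL) {
        if (*pb == target) {
            *pb = *(void **)target;  /* unlink: bypass the block */
            return target;
        }
        pb = (void **)*pb;  /* advance to the next block's link field */
    }
    return NULL;  /* not found */
}
```

With only a block pointer b, removal would require re-walking the list to find b's predecessor; with pb, the predecessor's link field is already in hand.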

2. Background: Consider these alternative ways to link two UNIX processes where one process produces data that is consumed by the other:
• The processes each use mmap() to map a shared disk file into their address spaces, and then they build a FIFO bounded-buffer queue in that shared virtual memory, using a pair of named semaphores that they connect to using sem_open() to track the amount of data and free space in the queue.

• The parent process uses mmap() to map an anonymous block of shared memory for the queue and its two semaphores, uses sem_init() to initialize the semaphores, and then calls fork() to create the other process that will communicate through this queue.

• One process uses write() to enqueue data items on the queue while the other uses read() to dequeue them. The buffer sizes used by these read() and write() calls must match the size of the objects being pushed through the queue. The shell that was used to launch both processes is responsible for connecting them with a pipe, created using pipe().

a) Which of these options give (or gives) the programmer control over the size of the bounded buffer? (0.5 points)

The two solutions using mmap() give the programmer control of the buffer size -- because they require the programmer to implement the buffer. The solution using a pipe leaves the implementation of the bounded buffer in the hands of the operating system.

b) Which of these options has the potential to be more efficient, in terms of the number of system calls required to move the data? Explain. (Count only calls used to move data after the initial communication channel has been set up.) (0.5 points)

Named semaphores are guaranteed to require a system call for each semaphore operation, because the man pages say that named semaphores are actually stored in the file system under the /dev/shm directory. Moving data to and from the queue in an mmap()ed file may involve page faults, but it should not involve system calls.

Anonymous (unnamed) semaphores probably involve system calls, but the man pages do not tell us enough to be sure. An anonymous shared memory region should be no more or less efficient than an mmap()ed file.

Pipes combine data movement with synchronization, so with pipes, a single system call suffices to move a block of data from the producer onto the queue or from the queue to the consumer.

In conclusion, we do not have enough information to draw a conclusion that definitively suggests that one of these options will be more efficient.

3. Background: The Unix kernel was largely developed before semaphores were invented, and the semaphore services in the Unix kernel were added as afterthoughts. Many important Unix utilities such as sendmail were developed before these afterthoughts were added. Named pipes are also an afterthought; in the original Unix, for two processes to communicate through a pipe, that pipe must have been created by an ancestor that forked to run the two in parallel.

Classical Unix did, however, have the following services that are relevant here:

• pause()
• setitimer()
• open( fname, O_EXCL | O_CREAT | O_RDWR , S_IRWXU )
• unlink( fname )

Classical Unix applications (still, to this day) use what are called lock files for critical section synchronization. A lock is considered to be claimed (indicating that a process is in the associated critical section) if the lock file exists. Busy waiting is used to enter critical sections, with the CPU demand reduced by throttling back the polling rate using timers.

a) Write a C function called delay1() that causes the calling process to wait one second. The only kernel calls required are mentioned above. (0.4 points)

```c
#include <sys/time.h>   /* setitimer(), struct itimerval */
#include <unistd.h>     /* pause() */

void delay1() {
    struct itimerval one_second;
    one_second.it_value.tv_sec = 1;
    one_second.it_value.tv_usec = 0;
    one_second.it_interval.tv_sec = 0;
    one_second.it_interval.tv_usec = 0;

    setitimer( ITIMER_REAL, &one_second, NULL );
    pause(); /* wait for SIGALRM signal */
}
```

The above code is good enough for full credit on the assignment, but in fact, it doesn't work. The problem is that the default action for the SIGALRM signal is to kill the process. You need to install a handler to make the code work. Here is the minimal version that actually does the job.

```c
#include <signal.h>     /* sigaction(), sigemptyset() */
#include <sys/time.h>   /* setitimer(), struct itimerval */
#include <unistd.h>     /* pause() */

static void on_timer_expire( int signum ) {
    /* do nothing -- returning is all that matters */
}

void delay1() {
    struct itimerval one_second;
    one_second.it_value.tv_sec = 1;
    one_second.it_value.tv_usec = 0;
    one_second.it_interval.tv_sec = 0;
    one_second.it_interval.tv_usec = 0;

    struct sigaction timer_action;
    timer_action.sa_handler = on_timer_expire;
    sigemptyset( &timer_action.sa_mask );  /* sa_mask is a sigset_t, not an int */
    timer_action.sa_flags = 0;

    sigaction( SIGALRM, &timer_action, NULL );
    setitimer( ITIMER_REAL, &one_second, NULL );
    pause(); /* wait for SIGALRM signal */
}
```

The above code is just barely good enough -- it sets the timer, after having set up an empty timer handler, and then it pauses to wait for the timer to expire. A better version would make sure that the pause was awakened by the timer signal and not by some other signal, and it would save any previous value of the timer and restore it when the timer expires. It would also save the previous handler's identity and restore that when the timer expires. That code is too complex for this assignment, and the complexity of the whole thing is evidence that the Unix timer services are not particularly well designed.

b) Give a fragment of C code to enter a critical section guarded by the lock file /usr/mail/lock -- your code should call your delay1() routine each time it polls the lock. The only kernel calls required are mentioned above. (0.4 points)

```c
for (;;) { /* polling loop waiting for the lock */
    int fd;
    fd = open( "/usr/mail/lock", O_EXCL | O_CREAT | O_RDWR, S_IRWXU );
    if (fd >= 0) break;
    delay1(); /* delay a second to allow other processes to run */
}
```

The above code is adequate for this assignment, but in practice, it should also call close(fd) if the file was successfully opened.

c) Give a fragment of C code to exit the critical section guarded by the lock file /usr/mail/lock. The only kernel calls required are mentioned above. (0.2 points)

```c
unlink( "/usr/mail/lock" );
```
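The fragments from parts (b) and (c) combine naturally into an acquire/release pair. In this sketch the lock file path is a parameter so it can be tried outside /usr/mail, and sleep() stands in for part (a)'s delay1() to keep the sketch self-contained; the function names are illustrative:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* stand-in for part (a)'s delay1() */
static void delay1( void ) { sleep( 1 ); }

void lock_acquire( const char *lock_path ) {
    for (;;) { /* poll until the exclusive create succeeds */
        int fd = open( lock_path, O_EXCL | O_CREAT | O_RDWR, S_IRWXU );
        if (fd >= 0) { close( fd ); return; } /* lock claimed */
        delay1();
    }
}

void lock_release( const char *lock_path ) {
    unlink( lock_path ); /* removing the file releases the lock */
}
```

The mutual exclusion rests entirely on O_EXCL | O_CREAT being atomic in the kernel: only one of several concurrent open() calls can create the file.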