Homework 4 Solutions

22C:116, Spring 1997

Ioana M. Lungeanu
Problem 1 Part A

    interrupt_service_routine_for_page_fault
    {
        disable all interrupts that may cause process scheduling
        save state of process P that caused fault
           (must include info needed to get addr
           of referenced page in this state!)
        mark process as blocked awaiting signal to P.fault_sem
        pass P to page fault handler process
        V(page_fault_process)
        reenable disabled interrupts
        return
    }

    page_fault_handler_process
    {
        Repeat
            P(page_fault_process)
            get P for process that caused fault
            if reference is to page N, a page not in the TLB
                if reference is to page in memory
                    soft page fault for TLB update
                else
                    select frame F containing page R, the page to replace
                    mark it not valid
                    if frame F dirty
                        enqueue write of page R for disk I/O
                        enable(disk_interrupt)
                        P(P.io_sem)
                    endif
                    enqueue read of page N for disk I/O
                    enable(disk_interrupt)
                    P(disk_request_semaphore)
                    update TLB and map
                endif
                V(P.fault_sem)
            else
                access violation
                kill P
            endif
            enable(page_fault_interrupt)
    }
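
    To make the handoff between the two routines concrete, here is a
    rough C sketch of the pattern, using POSIX semaphores (sem_wait
    standing in for P, sem_post for V).  The process record fields, the
    fault_queue_head list and the routine names are assumptions made
    only for this illustration, not part of the required answer.

        #include <semaphore.h>

        /* hypothetical process record; fields follow the pseudocode above */
        struct process {
            sem_t fault_sem;     /* process blocks here until its fault is served */
            sem_t io_sem;        /* private semaphore for the process's disk I/O */
            void *fault_addr;    /* saved address of the reference that faulted */
            struct process *next;
        };

        sem_t page_fault_process;          /* counts faults awaiting the handler */
        struct process *fault_queue_head;  /* faults passed from ISR to handler */

        /* interrupt side: record the fault and wake the handler process */
        void page_fault_isr(struct process *p, void *addr)
        {
            /* interrupts that may cause scheduling are assumed disabled here */
            p->fault_addr = addr;            /* save state needed to find the page */
            p->next = fault_queue_head;      /* pass P to the handler's list */
            fault_queue_head = p;
            sem_post(&page_fault_process);   /* V(page_fault_process) */
        }

        /* handler side: one iteration of the Repeat loop */
        void handle_one_fault(void)
        {
            sem_wait(&page_fault_process);   /* P(page_fault_process) */
            struct process *p = fault_queue_head;
            fault_queue_head = p->next;
            /* ... bring the page in, updating the TLB and map as above ... */
            sem_post(&p->fault_sem);         /* V(P.fault_sem) unblocks P */
        }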

Problem 1 Part B

    An easy way to allow multiple faults to be outstanding at the same
    time would be to queue them for the page fault process.  When the
    interrupt routine is called, it enqueues the request (containing the
    process ID, which gives access to everything else that matters) in
    the fault service process's queue.  The logic shown above allows
    this fairly simply.

    The problem with this solution is that it forces page faults to be
    served in the order in which they are encountered, thus eliminating
    the potential of the disk scheduler to speed service.  A better
    approach would be to include the semaphore P.fault_sem in the I/O
    request for the read of the page that caused the fault, so that the
    disk interrupt service routine could unblock the process that caused
    the fault.  This would allow faults to be served in the order that
    their disk I/O completes.

    The difficult aspect of this second solution is that it requires that
    the disk interrupt service routine be responsible for the final update
    to the memory address maps on completion of I/O scheduled as part of fault
    service.
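
    One way to picture this second approach is to carry the faulting
    process's semaphore, and the work left to do, in the disk request
    itself.  The record below is a hedged C sketch of that idea; the
    field names and the finish callback are assumptions, and sem_post
    again stands in for V.

        #include <semaphore.h>

        /* hypothetical disk request record for the second approach */
        struct disk_request {
            int    block;             /* disk block holding the wanted page */
            void  *frame;             /* physical frame to read the page into */
            sem_t *done_sem;          /* &P.fault_sem of the faulting process */
            void (*finish)(struct disk_request *);  /* map/TLB update to run
                                                        when the I/O completes */
            struct disk_request *next;
        };

        /* disk interrupt service routine, on completion of one request */
        void disk_isr_complete(struct disk_request *r)
        {
            if (r->finish)
                r->finish(r);         /* final update of the address map */
            sem_post(r->done_sem);    /* unblock the process that faulted */
        }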

Problem 2

    There is a queue of requests pending for service by the
    comm_line_routine.  Each request is for a block.  All blocks have
    the same size: N bytes of data.  Requests don't mix with one
    another (only one is active at any given time).

    Requests have the specified 3 fields.  The routine needs to maintain
    a request queue, a counter (the number of bytes received so far for
    the active request) and an accumulator for the checksum of the block
    being received.  These are system variables initialized to 0.
    Head_request is the request at the head of the request queue.
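
    As a hedged illustration, the request record and the driver's
    variables might look like the following C sketch; the field names,
    the PENDING value and the block size of 512 are assumptions made
    only for this example.

        #include <semaphore.h>

        #define N 512                    /* block size in bytes (assumed) */

        enum status { PENDING, OK, ERR };

        struct request {                 /* the three fields of a request,   */
            unsigned char buffer[N];     /*   plus a link for the queue      */
            enum status   status;        /* set to OK or ERR when complete   */
            sem_t         semaphore;     /* signalled when the block is done */
            struct request *next;
        };

        /* driver state; static variables start out zero / empty */
        static struct request *head_request;   /* head of the request queue */
        static int             count;          /* bytes received so far     */
        static unsigned char   checksum;       /* accumulated checksum      */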

    comm_line_input_interrupt_routine
    {
        disable(communication_interrupt)
        if count < N-1 
            count ++
            store byte read in head_request.buffer[count]
            accumulate checksum for byte read
            enable(communication_interrupt)
        elseif count = N-1 
            if byte_read not equal to accumulated checksum
                try to correct it
                if successful
                    set head_request.status = OK
                else
                    set head_request.status = ERR
                endif
            else
                set head_request.status = OK
            endif
            count = 0
            checksum accumulator = 0
            signal head_request.semaphore
            dequeue head_request from request queue
            if request queue not empty
                -- setup to handle next I/O request
                enable(communication_interrupt)
            else
                -- leave interrupt disabled
                -- customer must enable interrupt on enqueue of request
            endif
        endif
    }
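
    The comment at the end of the routine implies a customer-side
    routine that enqueues a request and re-enables the interrupt.
    Building on the request record sketched above, a rough C version of
    that caller might look like this; the queue and interrupt helpers
    are hypothetical names, not a real API.

        /* hypothetical helpers assumed from the surrounding driver code */
        extern void disable_comm_interrupt(void);
        extern void enable_comm_interrupt(void);
        extern void append_to_queue(struct request *r);

        /* customer side: queue a request for one block and wait for it */
        void read_block(struct request *r)
        {
            r->status = PENDING;
            sem_init(&r->semaphore, 0, 0);   /* signalled by the ISR when done */

            disable_comm_interrupt();        /* protect the queue from the ISR */
            append_to_queue(r);
            enable_comm_interrupt();         /* also restarts input if the ISR
                                                had left the interrupt disabled */
            sem_wait(&r->semaphore);         /* block until status is OK or ERR */
        }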

Problem 3

    The purpose here is to minimize the number of rotations, since
    the rotational latency is a big factor in the total access time
    for a disk.

    In the case of requests for different sectors, the order in which
    they are serviced is very important.  If interleaving is present,
    then accessing sectors in increasing order of their sector number
    minimizes the number of rotations.  So the queue of requests
    should be ordered by increasing sector number, with duplicate
    sectors deferred to the next rotation.
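
    As one possible reading of this, a C sketch of an insertion routine
    that keeps the pending queue sorted by sector number is shown below;
    a duplicate sector is simply placed after the copy already queued,
    so it ends up being served on the following rotation.  The types and
    names are assumptions for illustration.

        #include <stddef.h>

        struct sector_request {
            int sector;                       /* sector number within the track */
            struct sector_request *next;
        };

        /* insert r so the queue stays in increasing sector order; a
           duplicate goes after the existing entry for that sector */
        void enqueue_sector(struct sector_request **head, struct sector_request *r)
        {
            struct sector_request **p = head;
            while (*p != NULL && (*p)->sector <= r->sector)
                p = &(*p)->next;
            r->next = *p;
            *p = r;
        }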

    When the requests are for whole tracks, the order in which we
    service them does not matter since each request will take the same
    time to service (a whole rotation).

    Another solution is to employ a track-at-a-time caching technique.
    In this approach there is no difference between requests for whole
    tracks and requests for individual sectors; we spend a whole
    rotation either way.  This is effective when there are many requests
    for a track or for sectors in the same track.  We can still choose
    to cache first the track with the most requests for it or for its
    sectors, hoping that by the time we are done we will have more
    requests for some other track.
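
    A minimal C sketch of the track-at-a-time idea appears below; the
    geometry constants and the read_whole_track routine are assumptions,
    and a real driver would also have to handle writes and invalidation
    of the cached track.

        #include <string.h>

        #define SECTORS_PER_TRACK 32          /* assumed disk geometry */
        #define SECTOR_SIZE       512

        static unsigned char cached_track[SECTORS_PER_TRACK][SECTOR_SIZE];
        static int cached_track_no = -1;      /* no track cached yet */

        /* hypothetical routine that reads an entire track in one rotation */
        extern void read_whole_track(int track,
                                     unsigned char (*buf)[SECTOR_SIZE]);

        /* satisfy a read of one sector, caching its whole track on a miss */
        void read_sector(int track, int sector, unsigned char *buf)
        {
            if (track != cached_track_no) {
                read_whole_track(track, cached_track);   /* one full rotation */
                cached_track_no = track;
            }
            memcpy(buf, cached_track[sector], SECTOR_SIZE);
        }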

Problem 4

    A RAM disk works best when accesses are spread over many different
    disk blocks.  It needs enough memory allocated to it to hold all of
    the disk blocks.  Using the request queue as a cache works best when
    there is frequent read or write activity on the same disk block; it
    greatly reduces the number of disk accesses needed by eliminating
    redundant requests.
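
    The difference can be sketched roughly in C: a RAM disk read is a
    single copy from an array preloaded at startup, while the cache
    approach first scans the pending queue for a request on the same
    block.  The sizes and names below are assumptions for illustration.

        #include <stddef.h>
        #include <string.h>

        #define BLOCKS      1024              /* assumed disk size, in blocks */
        #define BLOCK_SIZE  512

        /* RAM disk: every block is preloaded, so a read is just a copy */
        static unsigned char ram_disk[BLOCKS][BLOCK_SIZE];

        void ramdisk_read(int block, unsigned char *buf)
        {
            memcpy(buf, ram_disk[block], BLOCK_SIZE);   /* no disk access */
        }

        /* request-queue cache: reuse a pending request for the same block */
        struct disk_req {
            int block;
            unsigned char data[BLOCK_SIZE];
            struct disk_req *next;
        };

        struct disk_req *find_pending(struct disk_req *head, int block)
        {
            for (; head != NULL; head = head->next)
                if (head->block == block)
                    return head;              /* redundant request eliminated */
            return NULL;                      /* must queue a real disk read */
        }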

    In the case where the queue works best, comparable performance from
    the two can be expected.  Even though this is the worst case for a
    RAM disk, not needing any disk access at all probably keeps up with
    the best of the request queues.

    Otherwise, if resources permit a RAM disk to exist, it will perform
    better because all disk blocks are pre-fetched into the RAM disk at
    startup time and no real disk activity will occur from that time
    onward.

    However!  If fault tolerance is an issue, most caches will write data
    back to disk eventually, and the disk request queue based cache will
    write everything back to the disk whenever user processes are idle even
    briefly.  In contrast, RAM disks usually only copy back to the real
    disk when the system is shut down in an orderly way.  The potential
    for data loss is therefore greater with most RAM disk implementations.