22C:116, Lecture Notes, Sept. 22, 1995

Douglas W. Jones
University of Iowa Department of Computer Science

  1. Resource Protection and Security

    Protection mechanisms are used to control access to information and resources.

    Security policies dictate who should have access to resources and information, and what limits, if any, are placed on the uses made of these resources and information.

    Not all protection mechanisms can enforce all security policies, and not all security policies successfully achieve their goals.

  2. Security Issues

    Secrecy and Privacy are two related goals a security policy may seek to achieve. Who may keep secrets? What may be held as secret? What control is there over information which has been disclosed?

    Privacy is trivial when everything is secret. Privacy problems arise when a user discloses something to another user. Once information is disclosed, it is desirable to prevent other users from misusing the disclosed information. Such misuse is at the heart of most threats to privacy.

    Authentication is a key problem that must be faced in implementing a security policy. Just because a man says that his name is Fred Smith does not mean that he is indeed Fred Smith. This applies not only to human users, but to programs as well. Just because a program is in a file called passwd does not imply that it is indeed the program for changing passwords.

    Integrity is another problem that plays an important role in some security policies. Is the data I received from you the same as the data you sent me? This is an issue both when errors are possible and when there is a need to guard against malicious users making unauthorized modifications or substitutions.

    Availability is not usually considered a security problem, but it is tightly connected to security in many ways. The simplest attack on a system is to attempt to destroy or temporarily disable it. Furthermore, mechanisms that protect against such threats also protect against a large number of accidents that can also limit system availability.

  3. Threats to Security

    A leak occurs when data is transmitted by an authorized user to an unauthorized recipient.

    The old military doctrine of "need to know" reduces the threat of leaks by compartmentalizing access to information. This doctrine states that, even if you have the security clearance required for access to some piece of data, you cannot have it unless you need to know it for your job. Authorized users have access only to the information they need to do their job and nothing more. Thus, the amount of information one user can expose is limited.

    A browser or eavesdropper is a user (perhaps an unauthorized user) who explores stored information or communications channels to gain access to selected information.

    The danger of browsing is minimized by reducing the size of the publicly accessible area. Browsing can be defeated by encryption of data that is publicly accessible.

    Inference techniques can be used to extract information that should not have been available from information that was disclosed to some user.

    Inference is involved when information from different sources is combined to draw a conclusion, and it also covers code breaking attacks on encrypted data.

    A trojan horse attack involves inserting unauthorized software into a context where unwary users may execute it. A virus is an example of such a threat, but there are many more, for example, programs that emulate the login sequences of systems and make copies of the user names and passwords people type in.

  4. Deliberate versus Accidental Attacks

    Security and reliability are inextricably intertwined. In general, what one user may do by accident, another may do with malice aforethought. This is wonderfully illustrated by two network failures in recent years, both of which are now well known: Morris's Internet worm and the Christmas Virus (which was not technically a virus!)

    The Christmas Virus was an accident. It began as a cute idea for a Christmas card by a relatively new user of a German BITNET site. BITNET used IBM mainframe-based communications protocols, and many BITNET sites at the time ran on such mainframes. Instead of merely sending the message as "ASCII art" (more properly, EBCDIC), the user encoded it as a command language script, and he put the message "Don't read me, EXEC me" in the title. Unfortunately, at the end of the script (which generated a cute Christmas card on the recipient's screen), he put the commands for sending the card; furthermore, these commands sent the card to whoever the recipient had recently corresponded with (a list that the IBM mainframe-based E-mail system maintained in a convenient form for such use).

    Because the Christmas Virus always transmitted itself to recent correspondents of an unwary recipient who EXEC'd it, the new recipients tended to let their guard down. It appeared in their in-box as mail from someone they were likely to trust, and a cursory glance at the contents of the message didn't reveal anything that was likely to do damage if it was EXEC'd (the dangerous parts were at the end).

    The Christmas Virus didn't cause big problems on BITNET because there were enough hosts that were not IBM mainframes that the Christmas card could not be executed by most recipients. Unfortunately for IBM, a copy or copies of the card made it into IBM's internal network, where most of the machines were compatible IBM mainframes. The result was a quick descent into chaos as thousands of copies of the Christmas Virus clogged every incoming mailbox and communications channel in the network.

    Morris's Internet Worm was a deliberately crafted bit of software that a student released on the Internet. Once the program was running on any particular machine, it searched in local network configuration files for the names of other machines, then attempted to log in on the other machines and run itself. In addition to obvious attacks, such as simply issuing an "rlogin" command and hoping the remote machine had the appropriate permissions set, the worm program also exploited a bug in the UNIX finger daemon (the process that responds to finger requests from other machines on the net) and a bug in the UNIX mailer. One of these bugs allowed remote users to send shell commands to a remote machine for execution, and the other bug allowed remote users to download machine code. Depending on how the target machine was configured, these bugs sometimes allowed the remote user to gain access to the "superuser" or privileged mode of execution.

    Morris may not have intended his worm to cause damage -- it contained defective code that, if corrected, would have limited the replication of the worm. Also, while it frequently managed to run in superuser mode, the worm did not take advantage of its privilege to do anything malicious.

    Nonetheless, Morris's worm brought many UNIX systems on the Internet to a standstill, flooding those machines that were vulnerable to its attack with multiple copies of the worm's code and jamming network channels with multiple messages from the infected machines to other machines. Some sites survived the attack because they were isolated from the rest of the net by gateway machines that resisted the attack, but other Internet sites were completely shut down.

  5. The Orange Book

    The United States National Security Agency and the United States Department of Defense have developed a useful taxonomy of secure computer systems. In summary, this taxonomy identifies a hierarchy of categories, ranging from D (minimal protection) through C1 and C2 (discretionary protection) and B1, B2 and B3 (mandatory, labeled protection) up to A1 (verified design).

    By default, UNIX systems are at roughly level C1 -- they provide users the ability to control access to their own files (access control is discretionary -- at the users' own discretion).

    Most vendors' UNIX systems contain mechanisms for auditing access to key resources (login-logout, network resources, etc). If these are all used properly, the systems can qualify at the C2 level.

    Non-discretionary security labels are things such as SECRET, TOP SECRET, and so on. If each file has a label automatically attached to it, and if each user has a clearance level assigned by the system administration, then the classic military scheme can be used: no user may read a file whose label is at a higher level than that user's clearance, and every file written by a user is given that user's clearance as its label.

    This, plus discretionary controls, produces an ad-hoc system at level B1. Some vendors have added such labeling to UNIX systems, but it is clearly an afterthought.
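
    As a minimal sketch of the labeling scheme described above (not part of the original notes; the level names, structures, and function names are all invented for illustration), the two rules reduce to a comparison on read and an assignment on write:

       /* Hypothetical sketch of non-discretionary (mandatory) labels.
          The level names and all identifiers are invented.            */

       #include <stdio.h>

       enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

       struct file { enum level label;     };
       struct user { enum level clearance; };

       /* no read up: a user may read only files labeled at or below
          that user's clearance */
       int may_read(const struct user *u, const struct file *f)
       {
           return f->label <= u->clearance;
       }

       /* every file written by a user is labeled with that user's
          clearance */
       void label_on_write(const struct user *u, struct file *f)
       {
           f->label = u->clearance;
       }

       int main(void)
       {
           struct user alice = { SECRET };
           struct file plan  = { TOP_SECRET };
           struct file report;

           label_on_write(&alice, &report);  /* report is labeled SECRET */
           printf("alice may read plan:   %d\n", may_read(&alice, &plan));
           printf("alice may read report: %d\n", may_read(&alice, &report));
           return 0;
       }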

    Higher levels in the security hierarchy require that there be a formal model and that the system be formally verified as an implementation of this model.

  6. Covert Channels

    The Orange Book identifies a useful distinction, that between the overt communications channels that are designed into a system and documented in the user's manual, and any covert communications channels that are not documented. Covert channels may be deliberately inserted into a system, but most such channels are accidents of the system design.

    B3-level systems are designed on the basis of a particular security model, a model supporting the notion of domains. Access control lists are usually used to satisfy this requirement, but capability lists should do just as well. MULTICS (as sold by Honeywell), PRIMOS, and the IBM AS/400 series of machines have all been able to satisfy this level's requirements, but there is little hope for compatible UNIX systems satisfying this. Incompatible UNIX-like systems such as the old HP/Apollo Domain operating system may satisfy this level.

    A-level designs require not only the construction of a system based on a formal security model, but also the verification of the design.

    At level C and above, the Orange Book requires not only that system designs have clearly defined formal communication channels, but also that the system designers identify and document what are called "covert channels". These are ways that one user can communicate with another outside the official channels documented in the user's manual. At level B, such channels must be blocked or audited.

    For example, a user might browse through newly allocated files or newly allocated memory regions looking for information that was left there by previous users. This channel can be blocked by zeroing newly allocated memory regions. One effective way to do this is to return newly deallocated memory to a background process that runs a memory diagnostic on it before returning the memory to the free-space pool.
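
    A rough sketch of this fix follows (the allocator interface is invented for illustration, not taken from any particular system); the essential point is that storage is scrubbed before it can carry one user's data to the next:

       /* Hypothetical sketch: scrub storage so that a newly allocated
          region can never carry the previous owner's data.            */

       #include <stdlib.h>
       #include <string.h>

       void *protected_alloc(size_t n)
       {
           return calloc(1, n);     /* the caller sees only zeroed memory */
       }

       void protected_free(void *p, size_t n)
       {
           if (p != NULL) {
               /* a real system might instead hand the region to a
                  background process that runs a memory diagnostic
                  before returning it to the free-space pool */
               memset(p, 0, n);
               free(p);
           }
       }

       int main(void)
       {
           char *p = protected_alloc(64);    /* arrives zeroed           */
           p[0] = 's';                       /* pretend it held a secret */
           protected_free(p, 64);            /* erased before reuse      */
           return 0;
       }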

  7. An Example of a Covert Channel

    Given a multiprocessing system that allows user programs to monitor the time of day and wait for specific times, a clever program can send data as follows:

       Sender:
          repeat
             get a bit to send
             if the bit is 1
                wait one second (don't use CPU time)
             else
                busy wait one second (use CPU time)
             endif
          until done
    
    In effect, the sender modulates the CPU utilization level with the data stream to be transmitted. A program wishing to receive the transmitted data merely needs to monitor the CPU utilization.
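
    As a concrete sketch of the receiving side (not from the original notes; the one-second bit period, the threshold, and the assumption of a single otherwise-idle CPU are all arbitrary choices), a receiver can estimate the load by measuring how much CPU time it receives during each second:

       /* Hypothetical receiver for the CPU-load channel sketched above.
          Assumes one CPU, an otherwise idle machine, a one-second bit
          period, and a 75% threshold -- all arbitrary choices.          */

       #include <stdio.h>
       #include <time.h>

       int main(void)
       {
           for (;;) {
               /* spin through one second of wall-clock time, measuring
                  how much CPU time this process actually received */
               time_t second = time(NULL);
               clock_t start = clock();
               while (time(NULL) == second)
                   ;                                    /* busy wait */
               double cpu = (double) (clock() - start) / CLOCKS_PER_SEC;

               /* a sleeping sender (bit 1) leaves most of the second to
                  us; a busy-waiting sender (bit 0) takes about half */
               putchar(cpu > 0.75 ? '1' : '0');
               fflush(stdout);
           }
       }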

    This is a noisy communication channel -- other processes also modulate the CPU utilization unintentionally, but if the sender and receiver agree on appropriate error-correcting and error-detecting codes, they can achieve any desired degree of reliability.

    This channel is very hard to plug! If the receiver has any way of monitoring the CPU load, the receiver can use it. The channel can be made less reliable by making load monitoring less accurate (for example, by having the system service that reports the load deliberately introduce random errors into the report).

    The channel can be plugged at great cost by denying all processes access to any information about the real-time clock or the current CPU loading. It can also be plugged by giving each user a different CPU, but the same style of attack can then be used to communicate through any other shared allocatable resource (alternately allocating and deallocating a large block of memory, for example).

    Use of this channel can be audited! For example, any program that makes frequent checks of the real-time clock or of the CPU load can be reported to the administration for investigation.

  8. Protection Mechanisms

    The mechanism-policy distinction emerged with the work of Anita K. Jones on the Hydra system at Carnegie Mellon University.

    Prior to this, protection was frequently managed on an ad-hoc basis, or mechanisms were implemented to support a specific policy. The policy supported was frequently inadequate, and the fact that the mechanism could be generalized was usually accidental.

    As a rule, those who "own" information should control the policy used to protect that information, and the system should allow for any reasonable policy.

  9. Crude Protection Mechanisms

    Crude protection mechanisms are still very common! The most common such mechanism is the 2-state system.

    In User state, the use of instructions that operate on input/output devices and the memory management unit is forbidden. If a forbidden instruction is executed, there is a trap.

    System state allows all instructions to be executed. This state is entered when a trap occurs, and the instruction sequence to return from the trap typically resets the system to user state.

    Such 2-state systems are typical of the marketplace. Far more sophisticated systems are possible -- for example, the Intel 80286 (and higher) chips support fully general mechanisms, which, unfortunately, are not fully exploited by commercially available operating systems for the IBM PC.

    All unprivileged user code runs in user state on a 2-state system. The current state (system or user) is saved with the program counter when there is an interrupt or trap, and then system state is entered as part of the control transfer to the interrupt service routine. Return from interrupt (or return from trap) restores the state of the interrupted program.
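
    As a toy illustration of this save and restore (the structure and constants below are invented and do not describe any real processor), the saved program counter and the saved state bit travel together:

       /* Toy model of a 2-state processor's trap mechanism; the
          structure and addresses are invented.                     */

       #include <stdio.h>

       enum cpu_state { USER_STATE, SYSTEM_STATE };

       struct psw {              /* program counter plus state bit */
           unsigned       pc;
           enum cpu_state state;
       };

       static struct psw current = { 0x1000, USER_STATE };
       static struct psw saved;

       void take_trap(unsigned handler_pc)
       {
           saved = current;              /* save PC and state together */
           current.pc    = handler_pc;   /* enter the service routine  */
           current.state = SYSTEM_STATE; /* ... in system state        */
       }

       void return_from_trap(void)
       {
           current = saved;              /* restore the interrupted program */
       }

       int main(void)
       {
           take_trap(0x200);
           printf("in handler: pc=%#x state=%d\n", current.pc, current.state);
           return_from_trap();
           printf("resumed:    pc=%#x state=%d\n", current.pc, current.state);
           return 0;
       }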

    Any memory management unit can be used, although the variety of policies that can be implemented depends on the unit. A simple MMU that merely limits the range of allowed addresses on fetch and store operations will effectively protect disjoint users from each other. A complex MMU that allows paged addressing will allow complex information sharing between users.

    If there is a paged MMU, either the page holding the interrupt service routine must be in the user's address space, marked read/execute only, or there must be a system address space supported by the MMU -- for example, the current state (user or system) can be given to the MMU as an extra high-order bit of the address.
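
    The following toy translation routine (invented for illustration; the page size and field widths are arbitrary) shows how the state bit can act as an extra high-order bit that selects between a user page table and a separate system page table:

       /* Toy address translation in which the user/system state bit
          selects one of two page tables; sizes are arbitrary.        */

       #include <stdio.h>
       #include <stdint.h>

       #define PAGE_BITS 12                  /* 4 KB pages             */
       #define VPN_BITS   8                  /* 256 pages per space    */

       enum state { USER = 0, SYSTEM = 1 };

       /* one frame number per (state, virtual page number) pair */
       static uint32_t frame_of[2][1 << VPN_BITS];

       uint32_t translate(enum state s, uint32_t vaddr)
       {
           uint32_t vpn    = (vaddr >> PAGE_BITS) & ((1u << VPN_BITS) - 1);
           uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
           /* the state bit extends the page number, giving the system
              an address space that user-state code cannot name */
           return (frame_of[s][vpn] << PAGE_BITS) | offset;
       }

       int main(void)
       {
           frame_of[USER][3]   = 0x10;   /* user page 3 -> frame 0x10   */
           frame_of[SYSTEM][3] = 0x80;   /* same page, separate space   */
           printf("user   0x3004 -> %#x\n", (unsigned) translate(USER,   0x3004));
           printf("system 0x3004 -> %#x\n", (unsigned) translate(SYSTEM, 0x3004));
           return 0;
       }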

  10. System Calls

    On systems with no protection mechanisms, system calls are merely procedure calls. Conceptually, this remains true when there is a protection mechanism, but with protection mechanisms, the implementation of system calls is more complex.

    On systems with protection mechanisms, system calls typically involve not only a transfer of control from the user program to the code of the system, but they also involve a change of operating state from user state to system state.

    Typically, the transfer of control from user code to system code is accomplished by deliberately using an unimplemented or privileged instruction, causing a trap. This both transfers control and changes the state of the system. The trap service code that intercepts system calls gives new, software defined meanings to unimplemented or privileged instructions that are used for system calls.

    In an unprotected system, system calls are usually done with the standard calling sequence and standard parameter passing mechanisms used by user programs. The system code and all user code typically share a single memory address space, and aside from the changes made to the library to prevent the loading of duplicate copies of the system code, linkage between user code and system code is done by the standard linkage editor.

    In a protected system, the semantics of a system call are the same as in an unprotected system. The only difference lies in the reaction of the system to errors (does it detect the error quickly, or does chaos emerge slowly as the consequences of the error propagate through the system until the whole system crashes?).

    Aside. To hardware, a trap looks like an interrupt. To software, a trap is a deliberate call to a system procedure from user code. An interrupt, on the other hand, is an asynchronously initiated context switch.

    To software, code executing in a trap service routine is viewed as part of the process that caused the trap. Thus, that code may call on scheduling primitives such as P and V, with all of their effects on the running process.

    In a demand paged system, it is usual to consider the page fault trap to be part of the user process on whose behalf it is called! The page-fault-service routine must schedule the reading and writing of pages through the I/O system, and while the I/O is in progress, the program that caused the page fault must wait. This is easily implemented by the same read and write logic that is used to block user code while I/O is in progress, and while the process that caused the page fault waits, it is natural to schedule other processes.
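
    The following toy sketch (POSIX threads standing in for processes, a semaphore standing in for the scheduler's P and V; all names are invented) shows the shape of this arrangement: the page-fault service code blocks on the same "I/O done" signal that the read and write code would use, and the rest of the system keeps running until the simulated disk interrupt performs the V:

       /* Toy model: one thread plays the faulting process, another plays
          the disk interrupt; sem_wait and sem_post stand in for P and V.
          Compile with the POSIX thread library (e.g. cc ... -lpthread). */

       #include <pthread.h>
       #include <semaphore.h>
       #include <stdio.h>
       #include <unistd.h>

       static sem_t io_done;            /* signaled by the "disk interrupt" */

       /* runs as part of the faulting process */
       static void page_fault_service(void)
       {
           puts("fault: queue the disk read, then wait like blocked I/O");
           sem_wait(&io_done);          /* the P operation; process sleeps  */
           puts("fault: page is in, resume the faulting instruction");
       }

       static void *faulting_process(void *arg)
       {
           (void) arg;
           page_fault_service();
           return NULL;
       }

       static void *disk_interrupt(void *arg)
       {
           (void) arg;
           sleep(1);                    /* the transfer takes a while       */
           sem_post(&io_done);          /* the V operation                  */
           return NULL;
       }

       int main(void)
       {
           pthread_t user, disk;
           sem_init(&io_done, 0, 0);
           pthread_create(&user, NULL, faulting_process, NULL);
           pthread_create(&disk, NULL, disk_interrupt, NULL);
           pthread_join(user, NULL);
           pthread_join(disk, NULL);
           return 0;
       }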

    From a software perspective, interrupt service routines are quite different. These are usually viewed as separate logical processes, but instead of running under the authority of the software process scheduler, they run under the authority of the external device that requested the interrupt (which is usually viewed as being at a higher priority than any priority level established by software).

    One result of this is that the normal scheduling services such as P and V are not available to the interrupt service routines. Parts of the scheduler are implemented by interrupt service routines, but if an interrupt service routine wants to wait, it must do a return from interrupt, and if it wants to do a signal operation, it usually may not relinquish control through the scheduler the way a normal user process would.

  11. Implementing System Calls

    In UNIX, the trap instructions used to issue system calls are hidden from the user. If you read a user program and find a call to a system routine, such as the read service, it looks just like a procedure call. Inside the system code, you'll find the corresponding procedure, and it is easy to imagine that the user directly calls this procedure.

    In fact, the situation is quite different! When a user calls read, this actually calls a "system call stub" that is part of the standard system library. This stub is a tiny procedure with a body that typically contains just one trap instruction. The trap service routine is responsible for determining, from the trap instruction that was executed, what system call was requested, then making the parameters accessible in system state and calling the appropriate system procedure. This is illustrated here:

       USER STATE              /  SYSTEM STATE
                              /
           Read( F, Buf )    /
           {                /
             TRAP ---------/---> Trap Service Routine
           }              /      {
                         /         Make F, Buf addressable;
                        /          Read( F, Buf );
                       /         }
    
    The user's code and the internal code for Read may all be in a high-level language (such as C). The only low-level language code needed is the code to actually cause the trap and the code to transfer the arguments across the "firewall" between user state and system state. Other machine code may be needed to actually do the input/output and service interrupts, but again, this is usually carefully encapsulated so that only a few hundred lines of machine code are needed in the entire operating system.
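
    As an entirely hypothetical sketch of this structure, the following single program simulates the stub, the trap, and the dispatch in ordinary C; on a real machine the stub's body would be one trap instruction, and the two halves would execute in different states:

       /* Hypothetical, all-in-one simulation of a trap-based system
          call; on a real machine Read's body would be a single trap
          instruction and trap_service would run in system state.      */

       #include <stdio.h>

       #define SYS_READ 3                    /* invented call number */

       /* ---- system state side ------------------------------------ */

       static int sys_read(int f, char *buf)
       {
           buf[0] = 'x';                     /* stand-in for real input */
           return 1;
       }

       static int trap_service(int callnum, int arg1, void *arg2)
       {
           /* make the arguments addressable in system state, then
              dispatch on the call number identified by the trap */
           switch (callnum) {
           case SYS_READ: return sys_read(arg1, (char *) arg2);
           default:       return -1;         /* unknown system call */
           }
       }

       /* ---- user state side: the system call stub ----------------- */

       int Read(int f, char *buf)
       {
           return trap_service(SYS_READ, f, buf);   /* "TRAP" */
       }

       int main(void)
       {
           char buf[1];
           int n = Read(0, buf);
           printf("Read returned %d, buf[0] = '%c'\n", n, buf[0]);
           return 0;
       }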