22C:116, Lecture Notes, Feb. 22, 1995

Douglas W. Jones
University of Iowa Department of Computer Science

  1. Resource Protection and Security

    Protection mechanisms are used to control access to information and resources.

    Security policies dictate who should have access to resources and information, and what limits, if any, are placed on the uses made of these resources and information.

    Not all protection mechanisms can enforce all security policies, and not all security policies successfully achieve their goals.

  2. Focus on Security Issues

    Secrecy and Privacy are two related goals a security policy may seek to achieve. Who may keep secrets? What may be held as secret? What control is there over information that has been disclosed?

    Privacy is trivial when everything is secret. Privacy problems arise when a user discloses something to another user. Once information is disclosed, it is desirable to prevent other users from misusing the disclosed information. Such misuse is at the heart of most threats to privacy.

    Authentication is a key problem that must be faced in implementing a security policy. Just because a man says that his name is Fred Smith does not mean that he is indeed Fred Smith. This applies not only to human users, but to programs as well. Just because a program is in a file called passwd does not imply that it is indeed the program for changing passwords.

    Integrity is another problem that plays an important role in some security policies. Is the data I received from you the same as the data you sent me? This is an issue both when errors are possible and when there is a need to guard against malicious users making unauthorized modifications or substitutions.
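
    To make the integrity question concrete, here is a minimal sketch (not part of the original notes) of a simple checksum that sender and receiver can each compute; if the two values differ, the data was changed along the way. A plain checksum like this catches only accidental errors -- guarding against deliberate substitution requires a cryptographic scheme that an attacker cannot recompute.

       /* Sketch: a simple, non-cryptographic checksum for detecting
          accidental changes to data.  The name and the 16-bit sum are
          illustrative choices, not part of any standard. */
       #include <stdio.h>
       #include <stddef.h>

       unsigned checksum(const unsigned char *data, size_t len)
       {
           unsigned long sum = 0;
           size_t i;

           for (i = 0; i < len; i++)
               sum += data[i];              /* accumulate every byte */

           return (unsigned)(sum & 0xFFFF); /* fold to 16 bits */
       }

       int main(void)
       {
           const unsigned char msg[] = "meet at noon";

           /* The sender transmits this value with the data; the
              receiver recomputes it and compares. */
           printf("checksum = %u\n", checksum(msg, sizeof msg - 1));
           return 0;
       }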

    Availability is not usually considered a security problem, but it is tightly connected to security in many ways. The simplest attack on a system is to attempt to destroy or temporarily disable it. Furthermore, mechanisms that protect against such threats also protect against a large number of accidents that can also limit system availability.

  3. Focus on threats to security

    A leak occurs when data is transmitted by an authorized user to an unauthorized recipient.

    The old military doctrine of "need to know" reduces the threat of leaks by compartmentalizing access to information. Authorized users have access only to the information they need to do their job and nothing more. Thus, the amount of information one user can expose is limited.

    A browser or eavesdropper is a user (perhaps an unauthorized user) who explores stored information (in the former case) or communications channels (in the latter case) to gain access to selected information.

    The danger of browsing is minimized by reducing the size of the publicly accessible area. Browsing can be defeated by encryption of data that is publicly accessible.

    Inference techniques can be used to extract information that should not have been available from information that was disclosed to some user.

    Inference is involved when information from different sources is combined to draw a conclusion, and it also covers code breaking attacks on encrypted data.

    A Trojan horse attack involves inserting unauthorized software into a context where unwary users may execute it. A virus is an example of such a threat, but there are many more: for example, programs that emulate the login sequences of systems and make copies of the user names and passwords people type in.

  4. Deliberate attack versus accidental damage

    Security and reliability are inextricably intertwined with each other. In general, what one user may do by accident, another may do with malice aforethought. This is wonderfully illustrated by two network failures in recent years, both of which are now well known: Morris's Internet worm and the Christmas Virus (which was not technically a virus!).

    The Christmas Virus was an accident. It began as a cute idea for a Christmas card by a relatively new user of a German BITNET site. BITNET used IBM mainframe-based communications protocols, and many BITNET sites at the time ran on such mainframes. Instead of merely sending the message as "ASCII art" (more properly, EBCDIC), the user encoded it as a command language script, and he put the message "Don't read me, EXEC me" in the title. Unfortunately, at the end of the script (which generated a cute Christmas card on the recipient's screen), he put the commands for sending the card; furthermore, these commands sent the card to whoever the recipient had recently corresponded with (a list that the IBM mainframe-based E-mail system maintained in a convenient form for such use).

    Because the Christmas Virus always transmitted itself to recent correspondents of an unwary recipient who EXEC'd it, the new recipients tended to let their guard down. It appeared in their in-box as mail from someone they were likely to trust, and a cursory glance at the contents of the message didn't reveal anything that was likely to do damage if it was EXEC'd (the dangerous parts were at the end).

    The Christmas Virus didn't cause big problems on BITNET because there were enough hosts that were not IBM mainframes that the Christmas card could not be executed by most recipients. Unfortunately for IBM, a copy or copies of the card made it into IBM's internal network, where most of the machines were compatible IBM mainframes. The result was a quick descent into chaos as thousands of copies of the Christmas Virus clogged every incoming mailbox and communications channel in the network.

    Morris's Internet Worm was a deliberately crafted bit of software that a student released on the Internet. Once the program was running on any particular machine, it searched in local network configuration files for the names of other machines, then attempted to log in on those machines and run itself there. In addition to obvious attacks, such as simply issuing an "rlogin" command and hoping the remote machine had the appropriate permissions set, the worm program also exploited a bug in the UNIX finger daemon (the process that responds to finger requests from other machines on the net) and a bug in the UNIX mailer. One of these bugs allowed remote users to send shell commands to a remote machine for execution, and the other allowed remote users to download machine code. Depending on how the machine was configured, these bugs sometimes allowed the remote user to gain access to the "superuser" or privileged mode of execution.

    Morris may not have intended his worm to cause damage -- it contained defective code that, if corrected, would have limited the replication of the worm. Also, while it frequently managed to run in superuser mode, the worm did not take advantage of its privilege to do anything malicious.

    Nonetheless, Morris's worm brought many UNIX systems on the Internet to a standstill, flooding those machines that were vulnerable to its attack with multiple copies of the worm's code and jamming network channels with multiple messages from the infected machines to other machines. Some sites survived the attack because they were isolated from the rest of the net by gateway machines that resisted the attack, but other Internet sites were completely shut down.

  5. The Orange Book

    The United States National Security Agency and the United States Department of Defense have developed a useful taxonomy of secure computer systems. In summary, this identifies the following common categories of systems:

       D -- insecure systems
    
       C1 -- systems with discretionary access control.
       C2 -- systems with access control and auditing.
    
       B1 -- add non-discretionary security labels.
       B2 -- use a formal security policy model.
       B3 -- Security domains.
    
       A -- Verified B3 level designs.
    
    By default, UNIX systems are at roughly level C1 -- they provide users the ability to control access to their own files (access control is discretionary -- at the users' own discretion).
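
    As a concrete illustration (a sketch, not part of the original notes; the file name is made up), a UNIX user or program exercises this discretionary control through the file mode bits, for example with the chmod system call:

       /* Sketch: discretionary access control under UNIX.  The owner of
          a file decides who may use it by setting the file's mode bits. */
       #include <sys/stat.h>
       #include <stdio.h>

       int main(void)
       {
           const char *file = "notes.txt";      /* illustrative file name */

           /* Let the owner read and write; deny group and others. */
           if (chmod(file, S_IRUSR | S_IWUSR) != 0) {
               perror("chmod");
               return 1;
           }
           return 0;
       }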

    Most vendors' UNIX systems contain mechanisms for auditing access to key resources (login-logout, network resources, etc). If these are all used properly, the systems can qualify at the C2 level.

    Non-discretionary security labels are things such as SECRET, TOP SECRET and so on. If each file has a label automatically attached to it, and if each user has a clearance level assigned by the system administration, then the classic military scheme can be used: no user may read a file that has a label at a higher level than that user's clearance, and all files written by that user are given, as a label, that user's clearance.

    This, plus discretionary controls, produces an ad-hoc system at level B1. Some vendors have added such labeling to UNIX systems, but it is clearly an afterthought.
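
    The labeling rule just described can be stated compactly in code. The sketch below (the label values and function names are invented for illustration) enforces it: a read is allowed only if the file's label does not exceed the reader's clearance, and anything a user writes is labeled at that user's clearance.

       /* Sketch of the classic military labeling rule described above.
          The labels and function names are illustrative only. */
       typedef enum { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET } label;

       /* A user may read a file only if the file's label is no higher
          than the user's clearance ("no read up"). */
       int may_read(label clearance, label file_label)
       {
           return file_label <= clearance;
       }

       /* Every file a user writes is labeled with that user's clearance,
          so information never flows down to a lower level. */
       label label_for_write(label clearance)
       {
           return clearance;
       }

    Under this rule, a user cleared to SECRET may read CONFIDENTIAL files but not TOP_SECRET ones, and everything that user writes is labeled SECRET.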

    Higher levels in the security hierarchy require that there be a formal model and that the system be formally verified as an implementation of this model.

  6. Covert Channels

    The Orange Book identifies a useful distinction, that between the overt communications channels that are designed into a system and documented in the user's manual, and any covert communications channels that are not documented. Covert channels may be deliberately inserted into a system, but most such channels are accidents of the system design.

    B3 level systems are designed on the basis of a particular security model, a model supporting the notion of domains. Access control lists are usually used to satisfy this requirement, but capability lists should do just as well. MULTICS (as sold by Honeywell), PRIMOS, and the IBM AS/400 series of machines should all be able to satisfy this level's requirements, but there is little hope for compatible UNIX systems satisfying this. Incompatible UNIX-like systems such as the HP/Apollo Domain operating system can satisfy this level.
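
    To illustrate the access control list idea mentioned above (a sketch with invented types, not a description of any of the systems just named): each protected object carries a list of (user, rights) pairs, and every access is checked against that list.

       /* Sketch: an access control list is a per-object list of
          (user, rights) pairs, and every access is checked against it.
          The types and rights bits are illustrative only. */
       #include <string.h>

       #define ACL_READ  01
       #define ACL_WRITE 02

       struct acl_entry {
           const char *user;
           int rights;
       };

       /* Return nonzero if 'user' holds all of the 'wanted' rights. */
       int acl_allows(const struct acl_entry *acl, int n,
                      const char *user, int wanted)
       {
           int i;
           for (i = 0; i < n; i++)
               if (strcmp(acl[i].user, user) == 0)
                   return (acl[i].rights & wanted) == wanted;
           return 0;                /* no entry means no access */
       }

    A capability list turns the same table sideways: each user carries a list of the objects and rights that user holds.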

    Level A designs require not only the construction of a system based on a formal security model, but also the verification of the design.

    At level C and above, the Orange Book requires not only that system designs have clearly defined formal communication channels, but also that the system designers identify and document what are called "covert channels". These are ways that one user can communicate with another outside the official channels documented in the user's manual. At level B, such channels must be blocked or audited.

    For example, a user might browse through newly allocated files or newly allocated memory regions looking for information that was left there by previous users. This channel can be blocked by zeroing newly allocated memory regions. One effective way to do this is to return newly deallocated memory to a background process that does memory diagnostics on it before putting the memory in the free-space pool.
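
    A minimal sketch of that blocking technique (the function names are invented here): storage is scrubbed when it is released, so a later allocation can never expose a previous user's data.

       /* Sketch: block the "browse newly allocated storage" channel by
          scrubbing memory before it goes back to the free pool.  A real
          system might instead hand freed pages to a background diagnostic
          process, as described above.  Names are illustrative. */
       #include <stdlib.h>
       #include <string.h>

       void *secure_alloc(size_t size)
       {
           /* calloc guarantees the caller sees only zeroes, never old data. */
           return calloc(1, size);
       }

       void secure_free(void *p, size_t size)
       {
           if (p != NULL)
               memset(p, 0, size);  /* scrub before the space is reused */
           free(p);
       }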

  7. An example of a covert channel

    Given a multiprocessing system that allows user programs to monitor the time of day and wait for specific times, a clever program can send data to an accomplice using the following approach:

       Sender:
          repeat
             get a bit to send
             if the bit is 1
                wait one second (don't use CPU time)
             else
                busy wait one second (use CPU time)
             endif
          until done
    
    In effect, the sender modulates the CPU utilization level with the data stream to be transmitted. A program wishing to receive the transmitted data merely needs to monitor the CPU utilization.
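
    The pseudocode above can be fleshed out into a rough C sketch (the one-second slot and the iteration-count threshold are arbitrary illustrative choices): the sender sleeps or spins for each bit, and the receiver guesses the bit by seeing how much of the CPU it can grab during the same interval.

       /* Sketch of the covert timing channel described above: the sender
          modulates the CPU load, and the receiver measures it.  The slot
          length and threshold are arbitrary illustrative choices. */
       #include <unistd.h>
       #include <time.h>

       /* Sender: one bit per one-second slot. */
       void send_bit(int bit)
       {
           time_t end = time(NULL) + 1;
           if (bit)
               sleep(1);                    /* 1: leave the CPU idle      */
           else
               while (time(NULL) < end)
                   ;                        /* 0: busy-wait, load the CPU */
       }

       /* Receiver: spin for one second and count iterations; a low count
          means some other process (the sender) was using the CPU. */
       int receive_bit(long threshold)
       {
           long count = 0;
           time_t end = time(NULL) + 1;

           while (time(NULL) < end)
               count++;
           return count > threshold;        /* mostly idle CPU => bit was 1 */
       }

    The sender calls send_bit once per bit of the message, and the accomplice calls receive_bit during the same one-second slots.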

    This is a noisy communication channel -- other processes also modulate the CPU utilization unintentionally, but if the sender and receiver agree on appropriate error correcting and detecting codes, they can achieve any desired degree of reliability.

    This channel is very hard to plug! If the receiver has any way of monitoring the CPU load, the receiver can use it. The channel can be made less reliable by making load monitoring less accurate (for example, by having the system service that reports the load deliberately introduce random errors into the report).

    The channel can be plugged at great cost by denying all processes access to any information about the real-time clock or the current CPU loading. It can also be plugged by giving each user a different CPU, but the same approach can be used to communicate through any shared allocatable resource (alternately allocating and deallocating a large block of memory, for example).

    Use of this channel can be audited! For example, any program that makes frequent checks of the real-time clock or of the CPU load can be reported to the administration for investigation.