Chapter 18, Security and Protection
Reliable and safe use of shared resources
the notes for 22C:50, Introduction to System Software
When a system has multiple resources shared by multiple users, a new problem arises, that of resource protection. Resources must be protected from both accidental and intentional misuse. Protection mechanisms contribute to both system security and system reliability, since malicious misuse of system resources is a security problem while detection of user errors contributes to program reliability. This problem has been intentionally ignored in the previous sections because the complexity it introduces into computer systems frequently obscures many of the conceptual structures which underlie input/output and interprocess communication.
In general, protection deals with the control of which users of a system may access what resources. Files were the first resources to be protected in any systematic way; most systems built in the 1960's included some kind of security mechanism for disk files. In the broadest sense, the resources controlled by a protection mechanism may include words in memory, physical devices, and procedures as well as files. Similarly, although it is natural to view protection mechanisms as protecting resources from misuse by human users, programs, processes, and procedures are also resource users.
System security involves many aspects other than protection. In general, security is a policy issue. Security policies are expressed in terms of the access each user should have to each resource. In implementing a security policy, many mechanisms are used. For example, users may be required to show identification badges to building entrance guards, users may be issued keys to terminal rooms, and users may be required to enter a password to access the computer. Only the last of these examples involves system software.
It is common to talk about the granularity of a protection mechanism.
A coarse grained protection mechanism protects relatively large objects
from relatively large users or groups of users, while one with a fine grain
protects the same resources in terms of small objects and small users or user
groups.
For example, consider two possible file system protection schemes:
A relatively coarse grained protection system might protect directories
against access by groups of users. In such a system, the access granted
to a file depends on the directory holding the file and the group to which the
user belongs. A finer grained system might protect individual files on a
per-user basis.
Clearly, very coarse grained protection mechanisms can be used to enforce
only a limited set of security policies, while very fine grained protection
mechanisms can be complex and expensive.
As an illustration of the problems raised by protection systems, consider a password-based file protection mechanism typical of many early systems. Under such a mechanism, users gain access to files by providing passwords to the open operation; if the wrong password is given, the open operation fails. The many problems with this approach are detailed in the following paragraphs.
If a user must access a group of files, that user must remember a number of
passwords. This is inconvenient, and typical users react by either
writing down the passwords or making them all the same. Both of these
reduce the effectiveness of the system. In the first case, someone might find
the paper on which the passwords are written;
in the second case, cracking one password gives
access to the entire group of files. Furthermore, if many users wish to share a file, they must share the password to that file. Unfortunately, the more people who know a secret, the more likely that secret will be accidentally disclosed. In summary, password based protection mechanisms place a large part of the security burden on the human users.
Password security is probabilistic! Any password may be cracked by simple trial and error if enough time is available. On many systems, such trials can even be automated, since trial passwords may be generated and tried by a program. One approach to reducing this problem is to make trials expensive, for example, by imposing a delay of one second whenever a bad password is tried. Prior to the widespread availability of microcomputers, a second approach to preventing automated password cracking involved having the system directly request passwords from the human user, bypassing the user program. Finally, instead of preventing password cracking, an attempt can be made to detect it, for example, by monitoring the frequency with which users try incorrect passwords, or searching for patterns in repeated incorrect password submissions by users.
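The arithmetic behind such delays is easy to sketch. The following illustration (the function names are invented, not part of any real system) computes the size of the trial space and the worst-case time for an exhaustive search:

```c
#include <assert.h>

/* Hypothetical illustration: effort needed to crack a password by
   pure trial and error, assuming an alphabet of `symbols` characters
   and passwords of exactly `length` characters. */
double trial_space(int symbols, int length)
{
    double n = 1.0;
    for (int i = 0; i < length; i++)
        n *= symbols;
    return n;
}

/* Seconds needed to try every possible password at `rate` trials
   per second. */
double worst_case_seconds(int symbols, int length, double rate)
{
    return trial_space(symbols, length) / rate;
}
```

Four lower-case letters give 26^4 = 456,976 candidates; at a thousand automated trials per second the whole space falls in under eight minutes, while a forced one-second delay per trial stretches the same search past five days.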
If a security violation leads to material loss or injury, it is usually desirable to hold someone responsible. In a password based system, it is natural to consider those who have been given the password as the initial list of suspects, but because passwords can be broken without the cooperation of a password holder, such a list is not very useful. A legitimate password holder who violates security can easily distract an investigation of the problem by suggesting that the password has been cracked by an outsider, while if an outsider does crack the password, legitimate password holders may be held responsible for things they did not do.
Whenever user programs may open files by submitting a name and password to the file system, a Trojan horse approach to cracking system security is possible.
During the Trojan war, the Greeks made a large statue of a horse and left it outside the city of Troy and then made a great show of departing. The Trojans took the horse inside the walls of the city, not knowing it was filled with Greek soldiers; as a result, Troy lost the war. In the field of computer security, a Trojan horse attack is an attack on a security system made by a program which performs (or at least appears to perform) a desirable function and so comes to be used by the victim. As the program is used by the victim, it has access to the victim's resources and makes improper use of them. For example, a malicious user might make a new text editor available to the public that not only edits files but steals the passwords of all files edited.
If some file is to be protected from all access except via a particular program, then use of passwords implies that the program must contain a password in its text. This is typical of database manipulation applications, where no user may directly manipulate the database, but all may access it through the database management program. Unfortunately, when passwords are encoded in the source text of a program, those passwords are only as secure as the program text itself. Furthermore, there is frequently a demand by users of such programs for an independent audit of the program to certify that it does what is claimed and does not perform any Trojan horse functions. Such an audit must involve the disclosure of the source text to the auditors, which implies disclosure of the passwords!
Many password based systems in current use allow each file to have multiple passwords, each of which conveys different access rights. For example, in early versions of the Primos file system, each file could have both an owner and a nonowner password. Typically the owner password conveys unrestricted access to a file, while the nonowner password conveys read only access. Although such additions to a password based protection system make it more flexible, they do not overcome any of the basic shortcomings of password based protection.
The consequences of misuse of main memory by a carelessly or maliciously written program can be quite severe. A program which modifies the operating system can easily bring the computer system to a halt, and if the system is shared by multiple users, a program which modifies memory allocated to a different user will cause considerable upset. Main memory protection mechanisms have been included in most modern computer systems in order to prevent such problems.
There are two basic problems solved by main memory protection mechanisms, protecting user programs from each other, and protecting the operating system from user programs. These two issues may be completely separated, or the same mechanisms may be used to solve both problems. It should be noted that many of the older solutions to these problems make it quite difficult for legitimate communication to take place between users or between a user and the system.
Conceptually, it is possible to view the user-system interface as a standardized "plug-and-socket", where each time the scheduler changes user programs, it "unplugs" the one which was running and "plugs in" the next. This is suggested graphically by Figure 18.1.
             ________
            | system |
            |  ____  |
            |_|    |_|

   ____        ____        ____
 _|    |_    _|    |_    _|    |_
| user A |  | user B |  | user C |
|________|  |________|  |________|
Figure 18.1. A plug-and-socket view of the user-system interface.
In this context, the problem of protecting the system from the user is related to the information the plug allows to flow between the two and how that information is passed. Similarly, the problem of protecting users from each other is related to how the operations of plugging and unplugging are accomplished.
Main memory protection mechanisms usually require special hardware. This is because the protection rules must be enforced with each memory reference made by a running program. Strictly speaking, memory protection mechanisms are not usually part of the system software, but the system software is responsible for acting on any violations the hardware detects. As with interrupts in the context of input/output, the particular mechanisms provided by the hardware have a great effect on the software structure of a system.
Some systems rely on software for main memory protection. There are two
ways this can be done: First, all user code can be run by an interpreter;
in this case, user programs are never reduced to machine code. The other
alternative is to use compilers which guarantee that user programs will never
access anything illegal. On such a system, users may never directly write
machine code, and the supported languages must all prevent any improper
manipulation of pointers. Many of the classic machines built by Burroughs (now
Unisys) rely on this approach to protection.
When a user program is running, it must be prevented from being able to modify the operating system. This may be accomplished by classifying each bit in memory as belonging to either the system or the user, and by enforcing the rule that user programs may only access user bits, while system programs may access any bit. This is an easy rule to state, and it allows very fine-grained protection, but at a high cost. A coarse grained variant of this rule involves partitioning all of memory into two segments, one for the system and one for the user.
In some early systems, the system and user segments were distinguished by the
most significant bit of the memory address. The disadvantage of this is that
the sizes of the two segments are fixed; if
the system grows to be one word larger than the half of memory allocated to it, it cannot be protected. Later systems had a special register holding the address of the last word of the system segment or the first word of the user segment. Although there is no standard name for such a register, it is sometimes called a system protect register. All addresses less than that stored in this register refer to system words, while all greater refer to user words. An illustration of such a system is given in Figure 18.2.
                         Address   Memory
                                  --------
                           0000  |        |
                                 | System |
System Protect Register     X    |        |
       --------                  |        |
      |   X    |---------------->|--------|
       --------            X+1   |        |
                                 |  User  |
                                 |        |
                                  --------
Figure 18.2. The use of a system protect register.
When the program counter is less than the system protect register, the hardware can infer that a system program is running, and thus that operand addresses less than the system protect register are legal. A common alternative is to add an extra bit to the processor status that indicates whether addresses less than the protect register are legal. The hardware must, of course, limit the right of user programs to use instructions that manipulate the system protect register, or a user program could change the protect register and gain system status. The hardware usually also prevents user programs from performing input/output, since this is a system function.
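The check implied by a system protect register can be sketched in C; this is a minimal illustration, and the structure and field names are invented:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A sketch of the check implied by a system protect register:
   addresses below the register's value are system words, legal
   only in system state. */
struct cpu {
    uint32_t protect_reg;   /* first address of the user segment */
    bool     system_state;  /* the extra processor-status bit    */
};

bool reference_legal(const struct cpu *c, uint32_t addr)
{
    if (c->system_state)
        return true;                 /* system code may touch anything */
    return addr >= c->protect_reg;   /* users stay above the register  */
}
```

A real processor makes this comparison in hardware on every memory reference, in parallel with the rest of the memory cycle.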
When the hardware detects a user program attempting an illegal operation, the execution of the current instruction must be terminated and the operating system must be informed of the violation by a protection violation trap. Like a page fault trap, this transfer of control is similar to the response to an interrupt. As with page-fault trap service routines, the protection violation trap service routine is free to inspect the instruction which caused the trap and to take whatever action it deems appropriate. The obvious action is to abort the program which caused the violation, but we will discuss other appropriate actions later.
An alternate approach to distinguishing the system from user areas in memory is to tag ranges of addresses to indicate which area they belong to. This approach could be used with a very fine grain, for example, by tagging each bit in memory, but it is usually done at a fairly coarse grain by tagging fixed size blocks of words. A simple tagged system might have a 1 bit tag on each block indicating whether that block is a system block or a user block. A small system based on this scheme is shown in Figure 18.3.
 addresses   tag  memory
             ------------
 0000-0FFF  | 0 |        |
            |---| System |
 1000-1FFF  | 0 |        |
            |---|--------|
 2000-2FFF  | 1 |        |
            |---|  User  |
 3000-3FFF  | 1 |        |
             ------------
Figure 18.3. The use of tagged memory.
In this example, each block holds 4096 memory addresses, and a tag of 0 indicates those belonging to the system, while a tag of 1 indicates those belonging to the user. The advantage of this approach over the system protect register approach is that it allows the user and system areas to be discontiguous. This can be very important if both system and user programs use dynamic storage allocation from a common pool of free space. In this context, the size of the tagged blocks directly determines the minimum size block which may be allocated.
When the most recent instruction was fetched from a segment with a tag of 0, the hardware can infer that a system program is running, but most systems avoid this work by adding 1 bit to the processor status to indicate whether the program is in system state or user state. Protection violations are detected when a program running in user state attempts to reference operands or instructions from system locations, or when it tries to change its own state or perform input/output.
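The tag check of Figure 18.3 can be sketched the same way; the tag table and names here are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A sketch of tagged-memory protection: one tag bit per
   4096-address block, 0 = system, 1 = user, as in Figure 18.3. */
#define BLOCK_SHIFT 12
#define NBLOCKS     4

unsigned char block_tag[NBLOCKS] = { 0, 0, 1, 1 };

/* In user state, only blocks tagged 1 may be referenced. */
bool user_reference_legal(uint16_t addr)
{
    unsigned block = addr >> BLOCK_SHIFT;
    return block < NBLOCKS && block_tag[block] == 1;
}
```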
The IBM 360 is the most widely known system which uses this
approach; on it, each block of 2K bytes in memory is tagged (later systems
also allowed 4K byte blocks).
In the terminology used with the 360, the tags are called locks, and the
processor status contains a key which will unlock (allow access to) any
block of memory with a matching lock.
From the point of view of protecting the system from users,
the important rule is
that a key equal to zero is the master key to open all locks; nonzero keys
are distributed to users.
Finally, it is possible to distinguish the user and system regions from each other by listing the addresses belonging to each region. Although this could be done on a word by word basis, it is usually done on a block by block basis, where the blocks are called pages or segments. This approach is usually combined with virtual address translation using the simple rule that the only pages or segments allowed in the address translation tables of a program are those the program is allowed to access.
Unlike the approaches described in the above sections, this approach adds a level of indirection to each memory reference. Thus, the virtual address used by a user program to describe some data structure such as an input/output buffer need not bear any relation to the address used by the operating system for the same structure. The possibility of implementing demand paged virtual memory and the flexibility of this protection mechanism largely compensate for the complexity and run-time overhead introduced by virtual address translation.
A small system based on virtual address translation is shown in Figure 18.4.
System Map        Memory       User Map
 --------        --------
|    ----|----->| system |
|--------|      |--------|     --------
|    ----|---   |  user  |<---|----    |
 --------   |   |--------|    |--------|
            --->| system |  --|----    |
                |--------|  |  --------
                |  user  |<--
                 --------
Figure 18.4. Memory protection using virtual address translation.
In this example, the user has access to two pages and the system has access
to the other two pages; neither can address a page used by the other.
The address maps themselves are typically stored in locations accessible to
the system, so the system can change either map but the user cannot.
As with locks and keys, mapped memory protection allows convenient dynamic
memory allocation from a shared free space pool, but it has the additional
advantage that, from the point of view of a program, the allocated pages
may appear contiguous even though they may have unrelated physical addresses.
An additional advantage of this approach is that it is absolutely impossible
for a program to address a page which is not included in its map! Thus,
protection becomes a matter of giving a program the ability to access the
pages it needs instead of trying to prevent access to locations it shouldn't
touch.
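This rule can be sketched as a translation function; the map size, page size, and names below are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* A sketch of map-based protection: a virtual address can only
   reach pages present in the process's address map. */
#define PAGE_SHIFT 12
#define MAP_SIZE   4
#define NO_FRAME   (-1)

struct addr_map {
    int frame[MAP_SIZE];    /* physical frame number, or NO_FRAME */
};

/* Translate a virtual address; -1 signals a protection violation
   trap (the page is simply not in this program's address space). */
long translate(const struct addr_map *m, uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    if (page >= MAP_SIZE || m->frame[page] == NO_FRAME)
        return -1;
    return ((long)m->frame[page] << PAGE_SHIFT) | offset;
}
```

Note that no explicit protection check appears anywhere: a page absent from the map has no translation at all, so the reference cannot be made.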
With all of the above approaches to representing which memory locations belong to the user and which to the system, two problems remain: How does the system know which protection rules apply to the currently running program, and how does a user program call on a system service? All of these mechanisms typically involve the addition of a bit to the processor status to distinguish between two states, system state and user state. With a memory protect register system, this bit signifies whether or not locations below that pointed to by the protect register may be referenced. When locations are tagged by hardware locks, this new bit acts as a key, and when address maps are used, this bit determines which map is used. Note that on some machines, the terms monitor mode, supervisor state, or privileged state are used as synonyms for system state.
The protection mechanisms described above all establish a wall of sorts to separate the protected part of the system from that turned over to the user. When a user program calls a system service, it must, in effect, find a gate in this wall so that it can transfer control to the system. In a secure system, each gate in the wall corresponds to the entry point of some system service; insecure systems may have gates which provide admission to parts of the system which are not intended as system services. The calling sequence for system services must make use of some gate-crossing mechanism to actually transfer control to the system.
The gate-crossing analogy was originally used by the developers of the Multics system in the late 1960's at MIT, project MAC.
When a user program tries to perform an operation which only the system is allowed to perform, the system must gain control. This transfer of control is called a trap, and is one of the most common gate-crossing mechanisms. Typically, the central processor will have one or more trap registers holding the entry points of system routines. When a trap occurs, the machine enters system state, stores the old program counter, fetches the new program counter from the appropriate trap register, and resumes execution.
It is common to think of traps as detecting illegal conditions for which the appropriate action is to print a diagnostic message and abort the user program; traps in this category include arithmetic overflow, access to illegal memory locations (such as following a null pointer), or attempts to execute illegal instructions. On many machines, a special group of instruction codes is left intentionally undefined and associated with a special trap service routine that is intended as an operating system entry point for system services; for example, the SVC (supervisor call) instructions on the IBM 360 and 370, or the TRAP and EMT (emulator trap) instructions on the DEC PDP-11 and VAX. In fact, the system may define any trap to perform a useful service; for example, the illegal memory reference trap service routine could check to see what instruction caused the trap. If the instruction was a procedure call instruction with the address of a system service as an operand, the trap service routine could simulate the effects of the procedure call instruction and then transfer control to the system service; otherwise, the trap service routine might print a warning message and abort the offending program.
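The dispatch performed by such a trap service routine might be sketched as follows; the trap numbering and the stand-in services are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* A sketch of trap-based entry to system services: the trap number
   indexes a table of entry points, and anything outside the table
   aborts the offending program. */
#define NTRAPS 4
#define ABORT  (-1)

typedef int (*service_fn)(int arg);

int svc_read(int arg)  { return arg + 1; }   /* stand-ins for real */
int svc_write(int arg) { return arg + 2; }   /* system services    */

service_fn trap_table[NTRAPS] = { svc_read, svc_write, NULL, NULL };

/* Entered by hardware, in system state, after a trap occurs. */
int trap_handler(int trap_no, int arg)
{
    if (trap_no < 0 || trap_no >= NTRAPS || trap_table[trap_no] == NULL)
        return ABORT;          /* undefined trap: abort the program */
    return trap_table[trap_no](arg);
}
```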
Another approach to gate-crossing is to explicitly mark the entry points
of system service routines and enforce the rule that, in user state, jumps
to system locations are illegal unless those locations are marked as gates.
As with lock based protection, this is typically implemented by
associating a bit with each block of memory to indicate whether the first word in that block is a gate. The Multics system uses this approach to gate-crossing in conjunction with locks and keys for protecting the system from user programs. The Multics lock-and-key system uses 3 bit locks and keys, with the rule that a lock with value n could be unlocked by a key with value k only if k <= n. This establishes a hierarchy of protection levels which is usually described as the Multics ring system. The innermost ring holds the kernel of the system, and outer rings hold successively less privileged code. Rings 0 through 2 are used by the system, while rings 3 to 7 are available to users wishing to protect parts of their programs from other parts (user programs usually start in ring 4).
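The simplified ring rule and the gate restriction can be sketched as follows; the names are invented, and the real Multics ring-bracket rules are considerably more elaborate:

```c
#include <assert.h>
#include <stdbool.h>

/* A sketch of a Multics-style ring rule: a key (the ring the
   processor is running in) opens a lock (the ring assigned to a
   block) only if the key is numerically no larger; an inward jump
   is legal only if the target block is marked as a gate. */
bool ring_access_legal(unsigned key, unsigned lock)
{
    return key <= lock;   /* lower ring numbers are more privileged */
}

bool jump_legal(unsigned key, unsigned lock, bool target_is_gate)
{
    if (ring_access_legal(key, lock))
        return true;          /* outward or same-ring jump */
    return target_is_gate;    /* inward only through a gate */
}
```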
The simplest approach to protecting users from each other is to apply the rule that "no two user programs or their data may simultaneously occupy main memory." This rule is easy to enforce on batch systems by simply allowing one job to finish before the next begins. It may also be enforced on timesharing systems by swapping. A swapping system uses backing storage, typically disk (drum in older systems), to hold the entire memory image of all user processes which are not currently in the running state. When the scheduler switches from one user to another, it first copies all data belonging to the first user from main memory to disk, and then copies all data belonging to the second user from disk to main memory.
Swapping allows user processes to be protected from each other by the disk protection software, since the only way the running process can access data belonging to a waiting or ready process is to access it on disk. Swapping does not require any hardware to protect users from each other, but it can waste a large amount of time copying data to and from disk. Swapping systems can perform very well if one process can run while another is being copied to or from disk, but this requires additional protection mechanisms to prevent the running process from accessing data belonging to the one being swapped in or out.
When a system uses swapping, direct input/output involving buffers in the
user's address space is difficult, since these buffers are only accessible
when the user is in main memory. Because of this, systems which use
swapping almost always hide the actual input/output buffers from the user.
Each input/output request on such a system involves copying between the user
and system buffers as well as copying between the system buffers and the
device.
A simple extension of the hardware protect register mechanism used to protect the system from user processes can be used to protect user processes from each other as long as each process is limited to one contiguous region. All that is needed is a pair of registers indicating the high and low bounds of the region. When the system is executing in user state, the hardware only allows memory references between these two bounds; in system state, all memory addresses are legal. Whenever the system schedules a new process, it must load these bounds registers to indicate the range of memory locations accessible by the new process.
The use of limit registers describing the memory region allocated to a process makes it difficult for a process to dynamically request more memory. Such a request can be satisfied if there happens to be a fragment of free space adjacent to the region occupied by the process, but this is difficult to guarantee. If processes could be moved around in memory, it would be possible to compact the free space and guarantee memory availability, but simple limit registers do not allow a running process to be relocated.
A minor variation on limit register hardware can be used to allow
run-time relocation of running processes. All that needs to be done is
to have the hardware add the contents of the lower limit register to each
memory address issued by the process. In this case, the lower limit register
is usually called a base register, and it is formally correct to describe
the resulting system in terms
of virtual address translation; thus, the physical address is computed by adding the virtual address to the contents of the base register. When a base register is used, the upper limit register usually holds the largest virtual address allowed and is called a bound register; this allows the comparison with the bound register to be done in parallel with the addition of the base register to speed up the memory access time. It is worth noting that there are many similarities between the use of base and bound registers and the implementation of disk files using contiguous allocation as described in Sections 12.1 and 12.2.
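Base-bound translation can be sketched as follows; the structure and names are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* A sketch of base-bound address translation: the bound register
   holds the largest legal virtual address, and the base register
   is added to every address the process issues. */
struct relocation {
    uint32_t base;    /* physical address of the region's start */
    uint32_t bound;   /* largest legal virtual address          */
};

/* Returns the physical address, or -1 to signal a protection trap. */
long relocate(const struct relocation *r, uint32_t vaddr)
{
    if (vaddr > r->bound)          /* in hardware, this comparison   */
        return -1;                 /* runs in parallel with the add  */
    return (long)r->base + vaddr;
}
```

Relocating a process is now just a matter of copying its memory image and loading a new value into the base register.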
The most well known machines with base and bound registers are those designed
by Seymour Cray, including the CDC Cyber series of machines and the Cray 1.
All of these make use of swapping, and because of the run-time relocation
provided by the use of base registers, there is no guarantee that, when a
program is swapped in, it will run in the same physical locations that it
used when it was last in main memory. When a program is swapped out, the
memory it used becomes free, to be allocated as needed between other
programs in memory or being swapped into memory. If a program needs more
memory than it was initially allocated, it can be swapped out to wait for
a sufficiently large block of memory.
Locks and keys can also be used to protect user programs from each other. All that is needed is a variety of keys, with one key assigned to each user, and a "master key" assigned to the system. The IBM 360 uses exactly this scheme. Keys on the 360 are 4 bits each, and a 4 bit lock is associated with each block of memory. Since the key valued 0 is the master key reserved for the system, this protection mechanism allows up to 15 users to share memory. At the time the 360 was designed in the mid 1960's, 15 user processes was a large number; even today, it is not a severe restriction on a batch machine, since there are very few situations where loading more than 15 batch jobs will lead to any performance improvement. On a timesharing machine, however, it is a severe restriction. This partially explains why the 360 family of machines has never been famous for timesharing applications.
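The lock-and-key check itself is trivial to sketch (the function name is invented):

```c
#include <assert.h>
#include <stdbool.h>

/* A sketch of 360-style locks and keys: a 4-bit lock on each block
   of memory, a 4-bit key in the processor status.  Key 0 is the
   system's master key; any other key must match the lock exactly. */
bool key_opens(unsigned key, unsigned lock)
{
    return key == 0 || key == lock;
}
```

With 4-bit keys, the 15 nonzero values are what limit the number of user processes that can be kept in memory and protected from each other at one time.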
It is, of course, possible to build a machine with a huge key space; extra bits can be easily added to the processor status and to the key registers on each block of memory. This has not been done because of the problems with lock and key systems. The primary problem is that locks and keys provide no mechanism for relocating user code. Furthermore, user programs frequently need contiguous groups of blocks to hold large data structures. The inability to relocate makes storage compaction difficult, but dynamic allocation of large contiguous runs of blocks seems to require compaction.
Early versions of the OS operating system for the IBM 360, known as OS/MFT,
provided a fixed set of memory partitions, each able to hold one process
(task). When a user submitted a program to be run, the user was required
to specify the maximum memory the program was allowed to request, and this
was used by the system to select a partition in which the program would be run.
A later and significantly more complex version of OS, known as OS/MVT,
supported a dynamically variable number of partitions, but the basic use of
partitions was essentially similar to that in OS/MFT. Short of introducing a
virtual addressing mechanism, as was done with the 360 model 67 and the entire
IBM 370 series, there is no convenient way to avoid locking a running program
into a particular partition of the physical address space until it terminates.
As already suggested by the use of base-bound register pairs, virtual address
translation hardware may be used to protect user processes from each other.
In a paged or segmented addressing system, all that need be done is to provide
each user with a different page or segment table. Depending on the hardware
provided, and depending on how the system is protected from users, the details
of this approach can vary.
If the address translation mechanism is used to protect the system from users, a minimum of two sets of address mapping registers are required, one for the system and one for the current user. Each user process must have an associated data structure describing the pages or segments in its virtual address space. When the system schedules a process, it must load the user address mapping registers with information from this data structure. As a result, whenever a user process is running, the user address map describes the pages or segments which that user can access, and not the pages or segments belonging to other processes.
This approach to memory mapping is very common on minicomputer systems such as the DEC PDP-11 family. The problem with this approach is that, if the granularity of the protection system is fine, the address maps become very large. The maps on the PDP-11 family of machines contain only 8 segments of up to 4K bytes each, so they are easy to load. When larger maps are involved, the time it takes to load the address mapping registers can easily become a major overhead in context switching. For example, an address map for the DEC VAX family of computers can contain as many as 8 million page descriptions of 32 bits each. When address maps are this large, the central processor cannot conveniently hold the entire map in mapping registers, so a special "map pointer register" is used. When this is done, every memory reference requires first a fetch from the address map, followed by a fetch from the appropriate page. This sounds slow, but special cache memory hardware may be added between main memory and the central processor to speed the process. A similar approach is used by the IBM 370 virtual memory hardware.
The Multics system on the Honeywell 6000 family of machines used a ring system to protect the system from the user. The hardware of this machine supported only one virtual address space, shared by the system and the current process. When the scheduler switched from one process to another, it did so by editing the address map to include the segments belonging to the new process and exclude those of the previous process. While making these changes, the segments belonging to the system remained in the virtual address space unchanged. The Primos system on the Prime family of computers used a very similar mechanism, and a somewhat simpler but comparable mechanism is used by the MVS system on the IBM 370 (more properly, OS/MVT-VS release 2 and later on IBM's Enterprise Server family).
It should be noted that, where paged or segmented address mapping is used,
it is common to
associate access rights with each map entry. Thus, a user is not
simply granted the right to use an address, but is given a list of specific
operations allowed for that address. Typically, the entry for each
page in the address map includes one bit for each access that may be
allowed. At a minimum, read and write access are controlled,
allowing read-only, read-write, and unused pages.
Many systems also provide control over the ability to execute programs
from a page, thus allowing the system to halt programs that
accidentally jump into their data areas.
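The per-page rights can be modelled as a few bits in each map entry; the bit assignments and the use of MemoryError to signal a violation are assumptions of this sketch:

```python
# Per-page access bits in an address map entry; a reference is legal
# only if every bit it needs is set in the entry. Bit layout invented.
READ, WRITE, EXECUTE = 0b001, 0b010, 0b100

page_map = [
    READ | EXECUTE,   # page 0: code, no writes allowed
    READ | WRITE,     # page 1: data, read-write
    0,                # page 2: unused, all references trap
]

def check_access(page, needed):
    if (page_map[page] & needed) != needed:
        raise MemoryError("access violation on page %d" % page)
```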
All of the approaches described can completely protect user processes from each other, but many pose problems when user processes have legitimate reasons to communicate. With all of the lock-and-key based systems which have been built, for example, a memory block cannot have more than one lock on it, so the only way for two users to share a block is to give them the same key or to change the lock at the end of each timeslice on a uniprocessor. As a result, users wishing to share anything must usually share everything.
When base and bound registers are used for protection, limited sharing
is possible. The system could set up two processes so that their
memory regions overlap; for example, the last few words of the memory
region assigned to one process could overlap the first few words of the
region assigned to another. Few systems have allowed this,
but it was allowed on the classic RC4000 system. This system
allowed a process to allocate any part of its own memory
to subprocesses; as a result, the parent had access to all memory of its
subprocess, and the parent could create overlapping subprocesses
if it wanted.
The vast majority of systems which have been built using either locks and keys or base and bound registers have not allowed any memory sharing between users. Thus, from the point of view of memory management, each process on such systems could just as well be executing on a personal computer. For traditional scientific applications, inter-user communication is of limited utility, but for most other applications, it is essential. For example, many database and office automation applications require that changes made by one user be immediately visible to others. Lacking direct communication paths, indirect paths must be used; for example, users may communicate through the file system, or the operating system may provide a virtual communications network connecting user processes. Unix pipes are a simple example of the latter.
Both segmented and paged virtual memory address translation mechanisms allow much more flexible sharing of information between processes. The key to sharing in such an environment is the observation that, although it is natural to think of each page or segment as having a clearly defined owner, nothing in the basic mechanism prevents a page or segment from being included in more than one virtual address map. Thus, it is perfectly reasonable to think of two user processes sharing a segment through which they communicate.
Under the Unix operating system, each process logically has 2 segments, a code segment, and a data segment (the latter is usually subdivided into a stack and a static segment). Most versions of Unix allow all users of a particular program to share the same code segment, while each user is given a different data segment. Because the code segments are constant, users cannot use them to communicate, but this sharing has a significant performance advantage: Heavily used code segments such as the code of the text editor tend to remain in main memory. As a result, the overhead of swapping users is usually limited to the time it takes to copy data segments to and from disk.
Several modern versions of Unix include additional primitives which allow pages within the data segment to be shared between processes. These primitives allow a sequence of pages in the data segment to be marked as shared prior to a 'fork' operation. When the process forks, the new process and its parent will share the marked pages as well as the program segment, while the new process will have copies of the other pages in the data segment. As a consequence of this, shared pages will always occupy the same virtual address in each address space where they are visible. A disadvantage of this approach is that it limits sharing to processes which have a common ancestor; usually, all of the processes involved are associated with a single user of the system. The Unix System V standard includes a set of optional shared memory services that allow unrelated processes to share segments, where each segment has an integer for a name, and the right to share a segment is protected by a mechanism similar to that used for protecting Unix files.
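The fork-time sharing described above can be demonstrated with an anonymous shared mapping, assuming a POSIX system; this uses the modern mmap interface rather than the System V primitives:

```python
import mmap
import os

# A page mapped shared before fork() is visible to parent and child at
# the same virtual address, like the marked pages described above.
PAGE = mmap.PAGESIZE
shared = mmap.mmap(-1, PAGE)        # anonymous mapping, shared on fork

pid = os.fork()
if pid == 0:
    shared.seek(0)
    shared.write(b"hello from child")   # child writes the shared page
    os._exit(0)
else:
    os.waitpid(pid, 0)                  # wait for the child to finish
    shared.seek(0)
    message = shared.read(16)           # parent sees the child's write
```

An ordinary variable changed by the child would not be visible to the parent, since the rest of the data segment is copied at the fork; that is exactly the distinction between shared and private pages.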
AT&T developed the Unix System V interface definition in an effort to bring conformity to a growing number of Unix variants developed by various groups. The other dominant Unix standard has grown from work at the University of California at Berkeley, and is known as Berkeley Unix or BSD Unix. Most modern Unix variants conform to the System V standard with extensions allowing almost full conformance to the BSD standard.
On the Multics system, a more flexible approach to shared memory is used.
When a Multics process opens a file, the sectors of the file are mapped to
the pages of a segment of the virtual address space of that process. Thus,
the file on disk becomes backing storage for that segment. When two users
have the same file open, they share the associated segment. The file system
protection mechanisms indirectly determine which processes can share memory
by determining which files each process can open. A problem with both this
and the Unix System V approach
is that two processes can open the same file at different virtual addresses;
as a result, data structures within a shared segment cannot contain pointers
to other locations in the segment, since these may be interpreted differently
in the address space of each process sharing the segment.
This situation is illustrated in Figure 18.5, where a shared page is addressed
as page 1 by user A and as page 0 by user B.
      User A             Memory             User B
       Map              --------             Map
    --------     ----->|        |         --------
   0|      |-----      |--------|     ----|      |0
    |------|     ----->| shared |<----    |------|
   1|      |-----      |--------|     ----|      |1
    --------           |        |<----    --------
                        --------
Figure 18.5. Shared and private pages in a virtual memory system.
The mechanisms discussed above share a number of faults. With the exception of the Multics ring mechanism, for example, all use the process as the unit of protection. This prevents the protection mechanism from being used to protect parts of a program from other parts, or parts of the operating system from other parts. Even the Multics mechanism is limited; as long as the security policies dictate a hierarchy, it is sufficient, but it cannot enforce non-hierarchical policies.
As an example of a policy which Multics cannot support, consider two mutually distrustful programmers who must cooperate in writing a single process to solve some problem. Each programmer has proprietary data which must be used to solve the problem, and each suspects that the other will try to steal that data. The desired policy is one where the code segments written by one programmer can access that programmer's proprietary data, but not the other programmer's proprietary data; the ring mechanism of Multics allows one programmer to be protected from the other but not both from each other.
The mutual distrust problem shows up at many levels in the Multics
system. Within the system, for example, there is no reason that
either the scheduler or the file system should have the right to modify
data belonging to the other,
but one or the other must typically be in an inner ring.
The author of a proprietary library can arrange for that
library to be in a ring inner to that of its users, so the users cannot
inspect the code, but this prevents the users from protecting themselves
from malicious code in the library.
The problems pointed out in the above sections were mostly known by the late 1960's. By this time, it was clear to many system designers that a theoretical model of protection was needed. The mechanisms which had been built prior to that time solved specific problems quite well, but they were difficult to generalize. The role of theory, in this context, is to provide a tool for describing any set of protection policies, without regard to the mechanisms used to implement them.
The classic multilevel military security model is an example of such a theory. In this model, each document is classified as belonging to some level in a hierarchic series of security levels, for example, classified, secret, and top-secret. Each user has a security clearance corresponding to one of these levels. A user cleared at a given level can read documents classified at that level and lower levels, but not documents classified at higher levels. This model is clearly theoretical; it does not describe how the classification levels for documents or the security clearances for users are determined, and it does not describe any enforcement mechanism.
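The read rule of the model reduces to a comparison of levels; the numeric encoding of the levels below is an assumption of this sketch:

```python
# The multilevel read rule: a user may read a document only when the
# user's clearance is at or above the document's classification.
LEVELS = {"classified": 0, "secret": 1, "top-secret": 2}

def may_read(clearance, classification):
    return LEVELS[clearance] >= LEVELS[classification]
```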
Unfortunately, the classic military security model is not sufficient to describe all possible demands; specifically, it cannot describe a solution to the mutual distrust problem. The access matrix model solves this problem. As is shown in Figure 18.6, an access matrix has users as labels on each row, and resources as labels on each column.
                      resources
          |   W   |   X   |   Y   |   Z   |
   u  ----+-------+-------+-------+-------|
   s   A  |  all  | read  | write |       |
   e  ----+-------+-------+-------+-------|
   r   B  |       | write | read  |  all  |
   s  ------------------------------------
Figure 18.6. An access matrix.
The entry at the intersection of a row and column gives the rights that one user has for one resource. The set of all resources which a given user may access is called the domain of that user and is given by a row of the matrix. The example in Figure 18.6 shows two users, A and B, and 4 resources, W, X, Y and Z. The domain of user A consists of the 3 resources W, X and Y, while that of B consists of X, Y and Z. Each user has exclusive and unlimited use of one resource, and each has limited rights to two other resources.
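The matrix of Figure 18.6 can be written down directly, one row per user; the right names and the helper functions are conveniences of this sketch:

```python
# The access matrix of Figure 18.6 as a dictionary of rows; an absent
# cell means the user has no rights at all to that resource.
ALL = {"read", "write"}

matrix = {
    "A": {"W": ALL, "X": {"read"}, "Y": {"write"}},
    "B": {"X": {"write"}, "Y": {"read"}, "Z": ALL},
}

def allowed(user, resource, operation):
    return operation in matrix.get(user, {}).get(resource, set())

def domain(user):
    # the domain is simply the set of resources named in the user's row
    return set(matrix.get(user, {}))
```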
The access matrix idea is quite old; for example, they were used in the Soviet Union during the second world war to control access by political prisoners to the rooms in which they were assigned clerical work. Prison guards with paper copies of the matrix provided an enforcement mechanism. Butler W. Lampson was the first to apply access matrices to the study of computer systems in 1971.
The story of the Soviet prison is due to Vladimir A. Kostelovsky, who found copies of the access matrices when he moved into Moscow office space previously used as a work-house for political prisoners.
The users in an access matrix need not be people; they can be programs, activation records, or any other active users of resources. The choice between these depends on the granularity of the protection mechanism. Similarly, the resources can be devices, files, procedures, pages, objects or individual memory words. The access rights which a user may be granted for an object correspond to the methods applicable to that object. For typical memory resources, the operations are obviously read, write, and execute. Other resources have other operations; for example, a directory may be read, and directory entries may be created and deleted.
As an extended example consider using an access matrix to describe the protection rules for a large corporation's database system. The users include programmers, data-entry clerks, database users, and the system administrator. The resources are the production database, the production database management programs, the test database, and the test database management programs. An access matrix for these is shown in Figure 18.7:
          Resources | Production | Production |    Test    |    Test    |
                    |  Database  | Management |  Database  | Management |
 Users              |            |  Programs  |            |  Programs  |
 -------------------+------------+------------+------------+------------|
 Manager            |    R W     |   R W X    |    R W     |    R X     |
 -------------------+------------+------------+------------+------------|
 Database Users     |     R      |     X      |            |            |
 -------------------+------------+------------+------------+------------|
 Clerks             |     W      |     X      |            |            |
 -------------------+------------+------------+------------+------------|
 Programmers        |            |     R      |    R W     |   R W X    |
 -----------------------------------------------------------------------

 Access rights:  R = read,  W = write,  X = execute
Figure 18.7. An example application of an access matrix.
The programmers develop new features of the test database management programs; thus, as shown in Figure 18.7, they need read, write, and execute access to these programs. When the administrator approves enhanced versions of the programs, the administrator installs them as production programs; to do this, the administrator must be able to run the test programs, read them, and overwrite the old production programs with the new code. Typically, the administrator is not a programmer and thus should never modify the test database management programs.
In addition to updating the production programs, the
administrator is a user and must occasionally
fix mistakes in the database itself; thus the administrator has all of the
rights of a clerk and of a database user.
Much of the information in the database is personal
information, so security is important; specifically, programmers and clerks
should not be able to read the production database. For debugging, there
is a test database maintained by the administrator and programmers; this is
similar to the production database except that all
of the names and personal information have been changed.
The access matrix model does not describe how the protection rules of a system are to be enforced. It would, of course, be possible to store the matrix itself as a two dimensional array and check it with every access to every resource, but this would be inefficient for a number of reasons. Most entries in the matrix are likely to be empty (that is, most users usually have access to only a small fraction of the resources); as a result, direct storage of the matrix would waste memory. Adding one memory access to check each use of a resource is expensive for memory resources; it would be much better to allow access rights to be checked once before a sequence of many accesses to a resource. Finally, centralized storage of access control information creates an obvious target for someone interested in cracking the security of a system.
One way of efficiently storing the information in the access matrix involves
giving each user a list of all of the resources they can access and the
rights they have to those resources. Such a list is called a capability
list, and each entry in the list is called a capability. A capability
identifies a resource and holds a set of rights to that resource. Although
capability lists are associated with users, they are maintained by the system
and cannot be arbitrarily modified by users. When a
user wishes to perform some operation on some resource, the system checks
the user's capability list to see if the user has a capability for that
resource which permits the desired operation. This basic idea was originally
proposed in 1966 by Jack Dennis and Earl Van Horn, before the access matrix
model was formally applied to computer systems. Figure 18.8 shows a
capability list that describes the access matrix in Figure 18.6.
         A                                         B
    ------------                              ------------
   | all   | ---|---> [ W ]         [ Z ] <---|--- all    |
   |-------+----|                             |----+------|
   | read  | ---|---> [ X ] <-----------------|--- write  |
   |-------+----|                             |----+------|
   | write | ---|---> [ Y ] <-----------------|--- read   |
    ------------                              ------------

Figure 18.8. Capability lists describing Figure 18.6.
As originally proposed, the capability list mechanism suggests the need to search a user's capability list with each use of a resource. This would be inefficient, but there is an alternative: Resources can be addressed indirectly, through the capability list. This scheme, called capability based addressing, is most easily understood in the context of memory addressing. Consider a paged virtual memory addressing system. When a program wishes to access some page, it does not directly specify the physical address of the page; rather, it specifies a page table entry containing the physical address. If this page table entry contains a set of allowed operations, the page table entry can be viewed as a capability, and the entire table can be viewed as the capability list for the current user. In this context, a capability can be viewed as a tuple containing a set of access rights and a pointer, as illustrated in Figure 18.8.
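Capability based addressing can be sketched by making every memory operation go through the current capability list; the names and the error type here are assumptions of the model:

```python
# Capabilities as (rights, pointer) pairs; a program names an index in
# its capability list, never a resource directly, so the rights check
# and the address translation happen in the same step.
memory = {"seg0": bytearray(b"hello"), "seg1": bytearray(b"world")}

c_list = [
    ({"read"}, "seg0"),            # capability 0: read-only segment
    ({"read", "write"}, "seg1"),   # capability 1: read-write segment
]

def load(index):
    rights, ref = c_list[index]
    if "read" not in rights:
        raise PermissionError("no read right")
    return bytes(memory[ref])

def store(index, data):
    rights, ref = c_list[index]
    if "write" not in rights:
        raise PermissionError("no write right")
    memory[ref][:len(data)] = data
```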
Any virtual memory system which assigns a different address map to each user can be viewed as a capability system, but most such systems use capabilities to protect only one class of resources, pages or segments. A few systems have been constructed using capabilities as the primary protection mechanism for all resources, including files and other resources created by software. Among these are research systems such as the CAP system at Cambridge University and the Hydra, Medusa, and StarOS systems at Carnegie-Mellon University. A small number of commercial systems have also been built with capabilities as the basis of their protection mechanism, including the IBM AS/400 and the Plessey System 250.
Some file systems can be viewed as being capability based. For example,
the Unix file system allows multiple links to a data file from different
directories. As a result, each directory can be viewed as a capability list,
and each directory entry can be viewed as a capability. This possibility
is not fully exploited on Unix, but it has been explored extensively on the
Cambridge CAP filing system and on a number of the network based file systems
built by Xerox corporation.
As an alternative to the use of capability lists, each column of the access matrix can be stored with the associated resource. In this context, the columns are known as access control lists, and each access control list entry names a user and the rights they have to the associated resource. The access control list representation of the protection policy described by Figure 18.6 is given in Figure 18.9.
      W            X            Y            Z
  ---------    ---------    ---------    ---------
 | A | all |  | A |read |  | A |write|  | B | all |
  ---------   |---------|  |---------|   ---------
              | B |write|  | B |read |
               ---------    ---------
Figure 18.9. Access control lists describing Figure 18.6.
The advantage of access control lists is that it is easy to determine who has access to any particular resource, while with capability lists, this would require a search of all capability lists on the system to see which ones include capabilities for that resource. On the other hand, access control lists make it very difficult to determine the set of resources in a user's domain; this is trivial with capability lists.
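The two representations hold the same information, rows versus columns; transposing one yields the other, as this sketch (with invented names) shows:

```python
# Capability lists are the rows of the access matrix and access control
# lists are its columns, so each is the transpose of the other.
capability_lists = {
    "A": {"W": "all", "X": "read", "Y": "write"},
    "B": {"X": "write", "Y": "read", "Z": "all"},
}

def to_acls(c_lists):
    acls = {}
    for user, caps in c_lists.items():
        for resource, rights in caps.items():
            acls.setdefault(resource, {})[user] = rights
    return acls
```

Listing who can reach resource X is one lookup in the transposed form but a scan of every row in the original, which is exactly the asymmetry described above.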
Access control lists cannot be combined with address translation, as was done with capability based addressing. Thus, the user must present the actual address or name of the resource to the system before any access checking can be done. Furthermore, once the appropriate access control list is found, the list must be searched for the user's identity to determine the associated rights. As a result of this overhead, access control lists have never been employed in their full generality for applications such as main memory protection.
Had virtual memory not attracted the attention of system designers in the mid 1960's, the IBM 360 lock and key system could well have been generalized to allow multiple locks on each block of memory; in this case, the set of locks on each memory block could be viewed as an access control list.
Access control lists have been widely used in the context of file system protection. The reason for this is that the cost of searching an access control list when a file is opened is small compared with the other work which must be done when opening a file, and once the file is open, no further checking need be done. In fact, the list of open files associated with a running program may be viewed as a capability list, so most systems using access control lists are best viewed as hybrid systems.
The most well known use of access control lists is in the Multics file system. All Multics users have not only a user identifier, but a group identifier, and a Multics access control list entry can specify either an individual user or a group. The final entry in a Multics access control list provides the default access for those users who are not specifically listed. As a result, most access control lists on Multics are very short; the access control list for a typical file containing a program might allow the owner to have unlimited access, other members of the owner's group might be given read/execute access, and the general public might be given execute-only access. The Primos file system is quite similar.
The Unix and the VAX VMS file systems use limited forms of
access control lists. A Unix access control list, for example, always
consists of 3 entries, one for the owner of the file, one for other
members of the owner's group, and one for the public. Since the owner
and group associated with the file are stored elsewhere in the file
description, and since there are only 3 bits needed to specify the 3 legal
operations on a Unix file (read, write, and execute), the access control
list is only 9 bits long. The VMS system is similar, but it includes
an additional access control list entry for the system, and an additional
access right (allowing deletion).
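The three-entry Unix list can be checked in a few lines; the helper below is a model of the usual rule (owner entry first, then group, then public), not kernel code:

```python
# The 9-bit Unix access control list: three 3-bit entries, selected by
# comparing the requesting user against the file's owner and group.
R, W, X = 4, 2, 1

def unix_allowed(mode, file_uid, file_gid, uid, gid, wanted):
    if uid == file_uid:
        bits = (mode >> 6) & 7      # owner entry
    elif gid == file_gid:
        bits = (mode >> 3) & 7      # group entry
    else:
        bits = mode & 7             # public entry
    return (bits & wanted) == wanted
```

Note that only one entry is consulted; an owner denied an operation by the owner entry is not rescued by a more permissive public entry.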
In the simple form described up to this point, neither the access matrix model nor any of the protection mechanisms describe how users come by their rights. In the context of simple examples such as the database example of Figure 18.7, the rights are static and do not change during the life of the system. In more complex systems, where resources and users dynamically come and go, this becomes a severe problem. When a new human user is given access to a system, how is the domain of that user determined; when a new file is created, how is the set of users that can access the file determined? Most systems include a set of rights transfer mechanisms which allow programs to manipulate the state of the protection system under carefully controlled rules.
Most early proposals for rights transfer mechanisms included some concept of resource ownership. The owner of a resource could grant access to the resource to any other user, and ownership was defined outside the scope of the access control mechanism. The Unix, VAX VMS, and Multics systems all include such a notion of ownership. In later systems, it was realized that the right to modify the access rights could itself be controlled by the access control mechanism. For example, in the Primos system, an access control list entry may grant a user the right to edit that access control list. It should be noted that the notion of ownership is frequently used in accounting for resource usage, where it is natural to charge the owner for the cost of maintaining a resource. Even though this economic relationship does not have anything to do with access control, fairness suggests that users should not pay for resources they cannot use.
Each time a user of a system is removed, all references to that user in any
access control list must be deleted;
this poses only limited problems on systems where
access is granted to human users, but it limits the utility of access
control lists when access is to be granted with a finer granularity,
for example, on a process by process basis.
Because of this, fine grained systems must usually use a capability based mechanism.
With capability systems, it is natural to delete a resource only when the last capability for it is deleted. This policy requires some kind of garbage collection system to determine when a resource is no longer accessible by anyone; this may appear expensive, but experience with the CAP file system suggests that the cost is reasonable. A more serious problem is that this policy can make it difficult to reclaim resources which are unneeded but still referenced by some capability. This problem has been solved in some capability systems by providing revoke operations on some classes of objects; the revoke operation makes some or all of the capabilities for an object invalid, speeding its reclamation.
When a capability system allows programs to transfer capabilities from one capability list to another, the capability lists themselves must be considered to be resources. A capability for a capability list can allow capabilities to be read from that list, written into that list, or deleted. In such a system, each user's domain is described by a capability list, but there may be many capability lists which do not describe domains. Such capability lists are typically referenced by capabilities in one or more domains, and thus, the capabilities they contain may be moved into those domains. At this point, the term domain may need clarifying, and it is sometimes useful to distinguish between the 'immediate domain' of a user, that is, the set of resources directly accessible from the capability list describing that user's domain, and the 'indirect domain', consisting of those resources accessible from all capability lists which the user can access.
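The distinction can be made concrete by following list-to-list capabilities; the naming convention that marks an entry as a capability list is invented for this sketch:

```python
# The immediate domain is one capability list; the indirect domain is
# everything reachable by following capabilities for other lists.
c_lists = {
    "A":      ["file1", "list:shared"],   # A's own capability list
    "shared": ["file2", "file3"],         # a list A reaches indirectly
}

def immediate_domain(name):
    return set(c_lists[name])

def indirect_domain(name):
    seen, work = set(), [name]
    while work:
        for entry in c_lists[work.pop()]:
            if entry.startswith("list:"):
                target = entry[len("list:"):]
                if target not in seen:
                    seen.add(target)        # the list is itself a resource
                    work.append(target)
            else:
                seen.add(entry)
    return seen
```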
When capabilities may refer to other capability lists, a number of comparisons can be made with directory structures in a hierarchic file system. The home or working directory of a file system user describes the immediate domain of that user; files in the immediate domain are referenced by simple file names. Subdirectories accessible from the working directory correspond to capability lists which do not (usually) describe the domain of any user, but the set of all files and directories accessible in the working directory and all of its subdirectories is the indirect domain. Files in the indirect domain must usually be referenced by giving a path name describing how to get to that file from the working directory. The primary differences between capability systems and most file systems are that capability structures are not constrained to be tree structured, and capability systems store access rights with the pointer to the resource.
Rights transfers within capability systems are frequently much more complex than simply copying capabilities between capability lists. It is frequently useful to reduce the rights associated with a copy of a capability, but it is sometimes necessary to add rights which were not there in the original. Operations which add rights to a capability in a capability system are called amplification operations, and must be carefully controlled if the integrity of the protection mechanism is to be preserved.
To understand the need for amplification, consider a system where the descriptors for open files are represented by capabilities that point to segments holding the associated buffer and other information. If the system policy is that user programs must not access the data referenced by the descriptor, then it is obvious that the access rights associated with the descriptor in the user's capability list must not permit either read or write operations. If the system input/output routines are to make any use of descriptors, they must be able to read or write them. This leads to the following amplification rule: When a capability representing a descriptor is passed by a user program to an input/output routine, the read and write access rights are added to that capability.
Amplification mechanisms have been built in two basic ways.
The first involves extending the memory model so that each resource
has a type; the amplify operation takes a capability and a type as
arguments, and if the type matches the type of the resource referenced by
the capability, new rights are added to the capability. Types are themselves
a new kind of resource, so different domains may have access to different
types. Note that access to a type is different from access to a resource of
that type. Access to a type confers the right to amplify the access rights
to resources of that type. In this context, the input/output problem can
be solved by putting a capability for the type 'file descriptor' in the
domain of each input/output routine. This allows the input/output routines
to amplify their rights to any file descriptors passed by users.
If users do not have the type 'file descriptor' in their domains,
they cannot abuse any file descriptors.
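A minimal sketch of type-based amplification follows, with classes invented for the model; holding the type object here stands in for holding a capability for the type:

```python
# Amplification succeeds only when the caller can present the type of
# the resource; ordinary users, lacking the type, cannot amplify.
class Resource:
    def __init__(self, rtype):
        self.rtype = rtype

class Capability:
    def __init__(self, resource, rights):
        self.resource, self.rights = resource, set(rights)

def amplify(cap, held_type, new_rights):
    if cap.resource.rtype is not held_type:
        raise PermissionError("no capability for this type")
    return Capability(cap.resource, cap.rights | set(new_rights))

# The i/o routines hold the file-descriptor type; a user holds only a
# rights-free capability for each open file descriptor.
file_descriptor_type = object()
descriptor = Resource(file_descriptor_type)
user_cap = Capability(descriptor, [])
io_cap = amplify(user_cap, file_descriptor_type, ["read", "write"])
```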
The other basic approach to amplification involves the introduction of a new type of resource, the locked object. Two new operations are introduced, lock and unlock. The lock operation takes two capabilities as arguments, one for the data to be locked, and another called the key; it returns a new capability for a locked object. The unlock operation takes two capabilities as arguments, one for a locked object, and another called the key. If the key given to unlock matches the key used to lock the object, unlock returns the capability for the data that was locked. In this context, the input/output problem can be solved by having user copies of input/output descriptors represented by locked objects, the key for which is held only in the domains of the input/output routines. When a user passes a file descriptor to an input/output routine, the routine uses this key to unlock it.
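The lock and unlock operations can be sketched as follows; representing the key as an opaque object identity is an assumption of the model:

```python
# A locked object hides its contents; only a caller presenting the
# matching key capability can get the original capability back.
class Locked:
    def __init__(self, data, key):
        self._data, self._key = data, key

def lock(data, key):
    return Locked(data, key)

def unlock(locked, key):
    if locked._key is not key:
        raise PermissionError("wrong key")
    return locked._data

io_key = object()                         # held only by the i/o routines
handed_to_user = lock("descriptor", io_key)
```

A user holding handed_to_user can pass it around but cannot look inside; an input/output routine recovers the descriptor with unlock(handed_to_user, io_key).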
The above discussion of amplification assumes that capabilities can be passed
between protection domains in the process of calling an input/output service
routine. If each domain is bound to a process, this implies that the service
routines are in a different process from the user code, and that interprocess
communication is required for each input/output operation. Although systems
have been built this way, most work on capability systems has allowed a domain
change to take place when a procedure is called, using a special protected
procedure call instruction. The details of such instructions vary considerably
between systems, but they all allow a new domain to be constructed for the
called procedure. This new domain holds an activation of
the procedure along with
capabilities for all global objects the procedure should be able to access.
As a result of introducing such mechanisms into capability systems, there is
a strong parallel between the protection rules that can be enforced by the
resulting system and the scope rules of modern languages such as Ada and Java.
Neither the access matrix model, capability lists, nor access control lists can completely ensure the security of a system because they do not describe how the identities of external users are determined. Specifically, nothing has been said about how newly arrived human users are bound to a particular domain in the access control system. A user claiming to be the system administrator should not be allowed to operate in the system administrator's domain on a secure system without some additional test. This is an example of the user authentication problem. Passwords have been used for user authentication since the early days of computing, but they have only limited utility because short passwords are easily cracked while long passwords are easily forgotten.
More secure user authentication systems have been developed. For example, consider the approach taken with automatic teller machines, which are considered to be a special kind of computer terminal attached to an electronic funds transfer network. Because it is important that the identity of each user be clearly established before the terminal dispenses cash, each automatic teller machine has a card reader, and each user has a card that, among other things, identifies the user. To prevent the misuse of a lost or stolen card, each user also has a password, called a personal identification number.
Some systems which are accessible through the public telephone network use what is called 'call-back' security. A call-back system maintains, for each user, a list of the telephone numbers where that user is allowed to work; when users want to use the system, they call the system and indicate who and where they are. At this point, the system and user hang up the phone. If the user claimed to be at an authorized number, the system calls that number and a normal session begins. Even then, a password is requested, in case someone else answers at the authorized telephone.
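The call-back protocol amounts to a short decision procedure. In the sketch below, the user names, telephone numbers, and passwords are all invented for illustration, and the actual dialing steps are reduced to comments.

```python
# Toy model of call-back security.  All users, numbers, and passwords
# are invented; real systems would keep these in protected storage.

AUTHORIZED_NUMBERS = {"alice": {"319-555-0100"}, "bob": {"319-555-0199"}}
PASSWORDS = {"alice": "rosebud", "bob": "sesame"}

def call_back_login(claimed_user, claimed_number, password):
    # Step 1: the user calls in, claims who and where they are,
    # and both sides hang up.
    if claimed_number not in AUTHORIZED_NUMBERS.get(claimed_user, set()):
        return "no call back"            # unknown user or unlisted number
    # Step 2: the system calls the claimed number back, so a session can
    # only take place at an authorized telephone.
    # Step 3: a password is still demanded, in case someone else answers.
    if password != PASSWORDS[claimed_user]:
        return "call back, login refused"
    return "session begins"

print(call_back_login("alice", "319-555-0100", "rosebud"))  # session begins
print(call_back_login("alice", "555-0000", "rosebud"))      # no call back
```

The design combines two factors: something the user knows (the password) and somewhere the user is (the authorized telephone); an attacker must defeat both.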
The access rights which an enforcement mechanism can explicitly control, and which are intended by the system designer, establish a set of possible overt
communication channels between users. In addition to these overt channels,
most systems contain a number of covert channels which clever users can use
to communicate despite the barriers imposed by the enforcement
mechanism. For example, on some systems, newly allocated memory contains whatever the previous user of that memory left there; a program can use this channel by repeatedly allocating blocks of memory and inspecting them to search for interesting data. This covert channel can be blocked by having the memory manager clear memory before allocating it.
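The uncleared-memory channel, and the fix of clearing memory before allocation, can both be illustrated with a toy allocator. A real memory manager is far more elaborate; the names and block sizes below are invented for illustration.

```python
# Toy allocator illustrating the uncleared-memory covert channel.
# A real memory manager tracks sizes and coalesces blocks; this model
# only shows block reuse (and, toy-style, ignores size mismatches).

class Allocator:
    def __init__(self, clear_on_alloc=False):
        self.free_blocks = []
        self.clear_on_alloc = clear_on_alloc

    def alloc(self, size):
        if self.free_blocks:
            block = self.free_blocks.pop()     # reuse a freed block
        else:
            block = bytearray(size)
        if self.clear_on_alloc:
            block[:] = bytes(len(block))       # blocked channel: zero it first
        return block

    def free(self, block):
        self.free_blocks.append(block)         # contents are NOT erased

leaky = Allocator()
secret = leaky.alloc(8)
secret[:6] = b"secret"        # the previous user leaves data behind
leaky.free(secret)
spy = leaky.alloc(8)          # another user allocates and inspects
print(bytes(spy[:6]))         # b'secret' -- the data leaked

safe = Allocator(clear_on_alloc=True)
s = safe.alloc(8); s[:6] = b"secret"; safe.free(s)
print(bytes(safe.alloc(8)[:6]))   # b'\x00\x00\x00\x00\x00\x00'
```

Clearing on allocation costs one pass over each block, which is why some systems instead clear lazily or only for memory that crosses a protection boundary.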
A pair of cooperating processes can establish a subtle covert channel on a uniprocessor by careful use of the real-time clock. The sending program sends a one by becoming processor bound for some time interval, and it sends a zero by waiting or relinquishing the processor for an interval. The receiving program reads this data by repeatedly relinquishing the processor and then reading the time of day; if it immediately gets a new time slice, the other program must be sending a zero, while if there is a long interval between time slices, the other process must be sending a one. This covert channel is hard to block because many programs have legitimate reasons to inspect the time of day. The capacity of this channel can be reduced by adding random error to the result of each request for the time of day, or it can be blocked by limiting the number of time requests a program is allowed to make.
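A deterministic simulation makes the timing channel concrete. The quantum size and the strict sender-then-receiver round-robin schedule below are invented simplifications; a real attack would also have to cope with scheduler noise from other processes.

```python
# Deterministic simulation of the clock-based covert channel on a
# uniprocessor with round-robin scheduling.  The quantum and the
# scheduling model are invented simplifications for illustration.

QUANTUM = 10  # ticks a process may run before being preempted

def transmit(bits):
    clock = 0
    last_seen = 0       # simulated time of the receiver's previous slice
    received = []
    for bit in bits:
        # Sender's turn: to send a 1 it stays compute-bound for its whole
        # quantum; to send a 0 it relinquishes the processor immediately.
        clock += QUANTUM if bit else 0
        # Receiver's turn: it reads the time of day, infers the bit from
        # the gap since its last slice, and relinquishes at once
        # (its own running time is treated as negligible here).
        received.append(1 if clock - last_seen >= QUANTUM else 0)
        last_seen = clock
    return received

message = [1, 0, 1, 1, 0, 0, 1]
print(transmit(message))  # [1, 0, 1, 1, 0, 0, 1]
```

At one bit per scheduling round, the channel is slow, but it crosses every barrier the access control mechanism erects, which is exactly what makes covert channels troublesome.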
Finally, nothing in the theory of protection deals with the physical security of the computer system. No matter what the operating system or hardware try to do to enforce the protection policies of a system, if someone can take a disk or tape from the system and mount it on another system, they can read every bit. Similarly, if someone taps a communications line, they can copy everything sent over it. Encryption of stored or transmitted data is the primary tool used to ensure privacy in this context.
Copy protection of floppy disks is a related problem; at one time, this was commonly solved by using a nonstandard layout of tracks and sectors for protected data. Part of the disk usually remained in standard format so that the software needed to read the remainder could be loaded by the system loader. This only worked on unprotected systems where applications were able to replace the disk drivers with their own special drivers.
A related physical security problem has to do with 'foreign' tapes or disks, those written on some other system. These can pose a threat if they contain system code or data structures. On large systems, the most common threat of this type is posed by backup copies of file systems, which contain not only the files themselves but also access control lists or capability lists. If a foreign tape or disk of this sort is mounted, the system can be fooled into granting undesired access rights to users; this can even be a problem with backup copies created on the same system, if users have come and gone since the time the backup was made.
Foreign floppy disks can introduce programs called 'viruses' into personal
computers. A virus is a program which replicates itself using the resources
of a host computer system. Viruses on personal computers make use of the
fact that most floppy disks contain a copy of the operating system in their
first few tracks so that the system can be started from any disk. A disk
that is 'infected' with a virus has a modified operating system that
automatically copies the modification into the operating system portions of
all floppy disks it reads or writes. Such a virus can quickly 'infect' all
disks owned by a community of microcomputer users who frequently share disks
with each other. Viruses can be quite dangerous; for example,
viruses have been created which, after a preset date, automatically delete
all files on every disk accessible to the infected operating system.
Although most architectural features of a computer have little or no impact
on the design of operating systems for that computer, the hardware support
provided for protection has an extremely large impact. For example, only
minor details of a system design are likely to depend on whether a system has
a single accumulator or general purpose registers, but the presence or absence
of features such as a protected system state or hardware supported capability
lists can have a great impact.
Most systems have been designed to operate only on one particular family of processors; when this is done, the structure of the system can be directly matched to the protection mechanisms of the hardware. The protection facilities of many major families of processors were designed in cooperation with the operating system designers. This was the case for the IBM 360 family, which was designed in parallel with the OS/360 system, for the GE 600 (later Honeywell 6000) family, which was designed in parallel with the Multics system, and for the DEC VAX family, which was designed in parallel with the VMS system. More recently, it is clear that Intel never intended the 8086 to support secure systems, and as the 80x86 family came to dominate the market, there was a period of groping, when Intel was adding random-seeming support for security mechanisms that Microsoft never utilized. Since the release of Windows NT, however, this situation seems to have changed.
When hardware and the operating system are designed in parallel, the designers must first determine the desired granularity of the protection mechanism. Will it protect procedure activation records, processes, human users, or something else? If a very fine granularity is chosen, transfers between domains must be very fast; this requires that many of the operations related to protection be performed directly by hardware. If a coarse granularity is chosen, interdomain transfers will be much less common, so software can perform more of the work.
If the operating system designers are given a fixed hardware design, the particular protection mechanisms in the hardware dictate a minimum granularity. For example, the first version of the Unix system was implemented on a DEC PDP-7; this small machine had a supervisor state and a user state, and swapping was the only available tool for protecting users from each other. The use of swapping made it impractical to support shared memory between domains, the overhead of swapping made it impractical to associate domains with anything smaller than a process, and main memory was too small for multiple processes to share a domain. Since user processes were intended to be created and deleted with relative ease, and because access control lists were the best known approach to file system protection at the time, the right to open a file could not be efficiently protected on a process by process basis, but only at the coarser granularity of human users.
The available main memory protection mechanisms can have a significant influence on the design of apparently unrelated parts of an operating system. For example, the use of swapping in early versions of Unix required that input/output operations use system-provided buffers instead of buffers in the user address space. This implied a copy operation to or from user buffers in addition to the actual input/output operation. As a result, all input/output operations are more expensive than they would have been had they directly used the user buffers, but, since a copy is required in any case, the additional expense of device-independent blocking and deblocking primitives is small. Had the developers of Unix not used swapping, the expense of these device independent primitives might have precluded their incorporation into the system.
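The double-copy read path described above can be sketched as follows. The block size, the 'disk' contents, and the routine names are illustrative, not Unix's actual internals.

```python
# Sketch of the double-copy read path forced by swapping: the device
# fills a system buffer one whole block at a time, and the requested
# bytes are then copied out to the user.  Names and sizes are invented.

BLOCK = 512  # an illustrative device block size

def device_read_block(disk, block_no, system_buffer):
    """The 'device driver': transfers one whole block into a system buffer."""
    start = block_no * BLOCK
    system_buffer[:] = disk[start:start + BLOCK]

def sys_read(disk, offset, count):
    """Device-independent read: deblocks from system buffers so the user
    may ask for any byte range, at the cost of one extra copy per chunk."""
    result = bytearray()
    system_buffer = bytearray(BLOCK)
    while count > 0:
        block_no, skip = divmod(offset, BLOCK)
        device_read_block(disk, block_no, system_buffer)  # device -> system buffer
        chunk = system_buffer[skip:skip + count]          # system buffer -> user
        result += chunk
        offset += len(chunk)
        count -= len(chunk)
    return bytes(result)

disk = bytes(range(256)) * 8          # a 2048-byte 'disk', four blocks
print(sys_read(disk, 510, 4))         # a read that crosses a block boundary
```

The payoff for the extra copy is exactly the device independence described above: the user sees a byte stream and never needs to know the block size.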
One of the reasons that Unix has proven to be so easy to move to different machines is that it makes such simple demands on the underlying hardware protection mechanism. Conversely, a problem with the growing acceptance of Unix as a standard operating system is that it is unable to fully exploit many of the interesting protection mechanisms which have been developed since the basic design decisions for Unix were made in 1969.
The reader should be familiar with the following general terms after reading this section:
protection, traps, security, gate crossing, reliability, hierarchic protection structures, granularity, capabilities, password based protection, domain, Trojan horse attacks, access matrix, swapping, access rights, virtual address mapping, capability lists, pages, base and bound registers, access control lists, access keys, rights transfer mechanisms, system protect registers, amplification, system state, foreign objects, user state, user authentication
a) Each user has an identity badge which must be shown to a guard to enter a secure area. Each guard has a list of who is allowed in the secure area that guard controls.
b) Each secure room has a lock, and each user has a collection of keys. No key opens more than one lock.
a) What protection problem remains unsolved by this mechanism?
b) How many bits of additional memory are needed to represent the protection constraints for a memory consisting of 64k words of 16 bits each?
a) From the user's viewpoint, what protection mechanism does this implement?
b) What problems with lock-and-key hardware does this overcome, and which remain unsolved?
An introductory discussion of many of the issues discussed here is given in Section 7.1 of
Operating System Elements by Peter Calingaert (Prentice-Hall, 1982).
A more advanced discussion (in the context of file systems) is contained in Section 5.4 of
Operating Systems by Andrew Tanenbaum (Prentice-Hall, 1987);
this section also contains some good examples of security failures. Tanenbaum's book contains a detailed discussion of the Unix mechanisms, including executable code.
Good summaries of many of the issues in capability based addressing are contained in
"Capability-Based Addressing" by R. S. Fabry, in Communications of the ACM, 17, 7 (July 1974).
"Fault Tolerant Operating Systems" by Peter Denning, in Computing Reviews, 8, 4 (December 1976).
The original description of the use of capabilities was in the paper
"Programming Semantics for Multiprogrammed Computations" by Jack Dennis and Earl Van Horn, in Communications of the ACM, 9, 3 (March 1966).
Capability-Based Computer Systems by Henry Levy (Digital Press, 1984)
contains descriptions of many experimental capability based systems, along with a discussion of the general idea and a summary of the problems raised by these architectures.
The shared memory services of System V compatible Unix systems are described in Chapters 9 and 11 of
System V Interface Definition, published by AT&T, Volume I
Chapter 4 of the same document describes the access rights applying to files, and the documentation of the chmod service in Chapter 6 describes how these are used.
The resource sharing and protection mechanisms of the Multics system are described in detail in
The Multics System: An Examination of Its Structure by Elliott I. Organick (MIT press, 1972).
Every chapter of Organick's book but the last covers material related to the subject matter of this chapter. This book also covers the early history of Multics from 1964 to the date of publication, but it does not cover the commercialization of Multics by Honeywell between 1973 and the demise of the last Multics system in 2000.