34. Security in Distributed Systems

Part of the 22C:116 Lecture Notes for Fall 2002
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Introduction

The classical way to build secure systems has centered on the use of an enforcement mechanism to limit the domain of an application process. The most widely studied enforcement mechanism for this purpose has been the capability list.

A capability names a specific resource and holds the access rights that will be granted to the holder of that capability when it is used to access the resource. For example:
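In rough C terms, a capability can be pictured as the pairing of a resource name with a set of rights; the struct and the particular rights shown here are illustrative only, not taken from any specific system:

      /* a sketch of a capability; field names and rights are illustrative */
      typedef struct capability {
          int          object_id;   /* names the specific resource                */
          unsigned int rights;      /* bit vector of rights granted to the holder */
      } capability;

      #define RIGHT_READ   0x1      /* hypothetical right: may read the resource  */
      #define RIGHT_WRITE  0x2      /* hypothetical right: may write the resource */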

Access Rights on Demos Links

The Demos interprocess communication model clearly fits this description of capabilities. In the case of Demos, each interprocess communication link is a capability allowing the holder of that link to send messages to the recipient designated by the link. Among the special access rights Demos supported on links were the following:

      copyable -- the link may be copied or passed on to other processes
      reusable -- the link may be used to send more than one message

When a new link is created, it has all access rights; typically, links for sending requests to servers are reusable and copyable. Whenever a process passes a link in a message or copies a link from one link-table entry to another, it may restrict the set of rights on the copy. Return links passed with a request to a server are typically non-copyable, single-use links; this guarantees that even a misbehaving server will be unable to send multiple replies to a single request from a client.

As with classical capability-based systems, the Demos enforcement mechanism requires that the link table be held by the kernel, securely outside the memory address space of the process. The only way a process may manipulate its links is through kernel calls, and these are written to guarantee that the user cannot forge links, but can only create new links to the user's own incoming queues, send links using links it already holds, and receive links sent to it by processes to which it has already sent links. These restrictions enforced by the kernel are essential to guaranteeing the integrity of Demos's security.

Alternatives to Capability Lists

The classical approach to capability lists has a high cost. Conceptually, a capability is a pointer to an object, with added access rights, and we are used to being able to incorporate pointers into any data structure. Forcing users to hold all of their pointers in a C-list and manipulate them only using indices into that C-list imposes awkward structures on user programs. Thinking in C syntax, for example, the impact is comparable to forcing programmers to rewrite a.x->y as clist[a.xi]->y. Not only does this force the programmer to replace pointers with indices into the capability list, but it also forces the programmer to manage the free space in the C-list independently of space on the heap.
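As a hypothetical illustration of this cost, compare ordinary pointer traversal with the same traversal routed through a C-list; the names, array sizes, and layout below are invented for this sketch:

      /* Direct pointers: the object graph lives entirely in user space. */
      struct node { struct node *next; int value; };

      int follow( struct node a ) {
          return a.next->value;                  /* a.x->y in the text's notation */
      }

      /* C-list style: each "pointer" is an index into a capability list kept
         out of the program's reach.  The list is modelled as a plain array so
         the sketch is self-contained; in a real system it would be held by
         the kernel and reached only through kernel calls. */
      struct cnode { int next_index; int value; };

      #define CLIST_SIZE 64
      struct cnode *clist[ CLIST_SIZE ];         /* stand-in for the kernel's C-list */
      int clist_free_slot[ CLIST_SIZE ];         /* free slots must be managed       */
                                                 /* separately from the heap         */
      int follow_clist( struct cnode a ) {
          return clist[ a.next_index ]->value;   /* clist[a.xi]->y in the text */
      }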

Tagged Architectures

Hardware designers have attacked this problem by building tagged architectures. The classical tagged architectures were built by the Burroughs Corporation in the 1960s. Each word in memory had a tag field added, giving the type of the value in that word. This allowed the hardware to prevent programmers from, for example, adding an integer to a pointer. In their most developed form, the Burroughs architectures were, in effect, strongly typed machine languages in which type rules as strict as those of Java were enforced by the hardware.

This idea naturally extends to capability based addressing, using tag bits to distinguish capabilities from other data types. Once such tag bits are present, we can have one set of operations permitted on capabilities, and a completely different set of operations permitted on other data objects. This allows users to store capabilities anywhere in their address space instead of segregating them from other data, thus allowing capabilities to be used exactly like pointers.

Today, the best known example of this approach is the IBM System/38 (ancestor of the IBM AS/400). In that machine, each memory location had a tag bit. Normal store operations to memory always set the tag bit of the destination to zero. Capabilities occupy multiple locations, and all locations making up a capability must have their tag bits set to one for that capability to be valid. The only way to get ones in the tag bits is by moving an existing capability or by using the create-capability operation.
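The effect of the tag bit can be sketched in software; the types and functions below are invented purely to illustrate the rule that ordinary stores clear the tag while moves of existing capabilities preserve it:

      /* Each word carries a tag bit alongside its value. */
      typedef struct word {
          int  tag;          /* 1 only if the word holds part of a capability */
          long value;
      } word;

      /* An ordinary store always clears the tag of the destination, so data
         written by a program can never masquerade as part of a capability. */
      void store( word *dest, long v ) {
          dest->tag = 0;
          dest->value = v;
      }

      /* Moving an existing capability word preserves its tag. */
      void move_cap_word( word *dest, const word *src ) {
          dest->tag   = src->tag;
          dest->value = src->value;
      }

      /* A capability spanning n words is valid only if every word is tagged. */
      int capability_valid( const word *cap, int n ) {
          for (int i = 0; i < n; i++)
              if (!cap[i].tag) return 0;
          return 1;
      }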

Cryptographic Protection

If we do not have hardware help, and we do not want to segregate capabilities from other data, we must find some algorithmic way to protect capabilities even though they are mixed with other data. The key to this is the use of cryptographic methods.

The fundamental tools of cryptography are functions for encryption, decryption and computing checksums. The classic model of the use of encryption is:

     cyphertext = encrypt( plaintext, encrypt-key )
     plaintext = decrypt( cyphertext, decrypt-key )

The terms encypher and encrypt are synonyms. The meaning of the cyphertext is obscured by the encryption, but it can be recovered by any user with the appropriate decryption key. When classical or secret-key cyphers are used, the encrypt and decrypt keys are equal. Public-key cryptography uses encrypt and decrypt keys that differ.

Checksums become essential because, without them, the user may be unable to determine whether a message has been correctly decoded. For example, the most secure known secret-key cypher, the one-time pad, uses the exclusive-or function for both encryption and decryption. This has the disadvantage that the secret key must be the same size (in bits) as the message, but it has the interesting property that any message of that same length may be derived from the cyphertext by use of an appropriately selected key. Suppose you wanted to create evidence that someone was trafficking in pornography. Just capture some cyphertext they'd sent, exclusive-or it with the appropriate pornographic image, and then claim that image as the plaintext content of the message, using the key you derived as proof -- of course, you claim it was the key the victim had used, and you claim you got that key by cracking their cypher.
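A minimal sketch of this exclusive-or cypher and of the frame-up just described; the function names and buffers are made up for the illustration:

      #include <stddef.h>

      /* Exclusive-or is its own inverse, so one routine serves for both
         encryption and decryption; the key must be as long as the message. */
      void xor_crypt( unsigned char *out, const unsigned char *in,
                      const unsigned char *key, size_t n ) {
          for (size_t i = 0; i < n; i++)
              out[i] = in[i] ^ key[i];
      }

      /* The frame-up: given intercepted cyphertext and any fake plaintext of
         the same length, compute a "key" under which the cyphertext decrypts
         to that fake plaintext. */
      void forge_key( unsigned char *fake_key, const unsigned char *cyphertext,
                      const unsigned char *fake_plain, size_t n ) {
          for (size_t i = 0; i < n; i++)
              fake_key[i] = cyphertext[i] ^ fake_plain[i];
      }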

So, using the [] operator to mean concatenation, what a user should do is:

     cypher = encrypt( plain [] checksum(plain), encrypt-key )

     plain [] c = decrypt( cypher, decrypt-key )
     plain is valid only if c = checksum(plain)

This still allows anyone to check the correctness of a particular candidate for the encryption key, so we go one step further, using a cryptographic checksum algorithm:

     cypher = encrypt( plain [] checksum(plain, check-key), encrypt-key )

     plain [] c = decrypt( cypher, decrypt-key )
     plain is valid only if c = checksum(plain, check-key)

Here, the checksum algorithm itself uses a key; in the simplest form, this might just be the initial value of the accumulator used to accumulate the checksum, but in the most complex forms, this becomes a cryptographically secure signature of the plaintext. Only a user possessing the appropriate checksum keys can verify that the checksum is correct.

Getting back to the original issue, we can imagine giving users free access to capabilities by encrypting them, with an appropriate checksum, whenever the kernel gives a user a capability, and by decrypting and verifying the capability whenever the user gives the capability back to the kernel. This works, but there is a danger: if a user ever figures out the kernel's encryption keys, all is lost!
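A toy sketch of this kernel-side sealing follows; the "cypher" and checksum here have no cryptographic strength, and every name and constant is invented for the illustration:

      /* Toy kernel-side sealing of a capability; not cryptographically strong. */
      typedef struct sealed_cap {
          unsigned long object_id;   /* encrypted resource name   */
          unsigned long rights;      /* encrypted access rights   */
          unsigned long check;       /* encrypted keyed checksum  */
      } sealed_cap;

      static const unsigned long encrypt_key = 0x9E3779B9UL;   /* kernel secrets:  */
      static const unsigned long check_key   = 0xC2B2AE35UL;   /* arbitrary values */

      static unsigned long keyed_checksum( unsigned long id, unsigned long rights ) {
          return (id * 2654435761UL) ^ (rights * 40503UL) ^ check_key;
      }

      /* Seal a capability before handing it to a user process. */
      sealed_cap seal( unsigned long id, unsigned long rights ) {
          sealed_cap c;
          c.object_id = id ^ encrypt_key;                      /* toy "encryption" */
          c.rights    = rights ^ encrypt_key;
          c.check     = keyed_checksum( id, rights ) ^ encrypt_key;
          return c;
      }

      /* Check a capability handed back by a user; returns 0 on a forgery. */
      int unseal( sealed_cap c, unsigned long *id, unsigned long *rights ) {
          *id     = c.object_id ^ encrypt_key;
          *rights = c.rights    ^ encrypt_key;
          return (c.check ^ encrypt_key) == keyed_checksum( *id, *rights );
      }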

Server-Side Protection

The server-side model of protection was the first successful application of cryptography to the protection of capabilities in distributed systems. The best example of the use of this model lies in Andrew Tanenbaum's Amoeba system.

With server-side protection, the kernel no longer tries to guarantee that capabilities are valid. It merely helps detect the most blatant violations, and since most programmer errors are blatant, this model detects most of them very quickly. The server-side model puts the final responsibility for protection on the server processes themselves!

The basic idea is simple: on receipt of a message, a process is responsible for determining, using the content of that message, that the message was sent using an intact, valid capability, that is, one that has not been corrupted or forged. The tool we use for this purpose has come to be called a cookie.

The term cookie was apparently coined by the developers of the C input/output stream package. Specifically, the first known use of the term occurs in the documentation for the fseek() function, which was defined as returning a cookie, an opaque value that only had meaning when passed back to fseek() to position the same stream back to the same position. Since that time, the term cookie has come to refer to any opaque value or token given to a user, to be held by the user and passed back to its originator when the user wants some related service performed.

When a process creates a capability to be used for sending messages back to that process, it includes, in the capability, a cookie that is used to authenticate the capability. When the capability is used to send a message to the originating process, the recipient must check the cookie. Possession of the cookie itself guarantees that the sender had the right to send the message!

Server-Side Authentication in the Spirit of Demos

If we try to apply this idea for server-side authentication to a Demos-style system, we get the following structure for each link:

      link = < process-id, queue-number, cookie >

Each process has a queue table containing incoming message queues, and each process has a cookie table holding the cookie associated with each incoming message queue. As each process begins, it initializes its cookie table with random numbers:

      for each q, cookies[ q ] = random();

The make-link operator is now trivial:

      make_link( q ):
          return < caller's Id, q, cookies[ q ] >

The send operation over a link must pass to the receiving process not only the queue number but also the cookie from the link used to send the message; after receipt of a message, the recipient must check that the cookie is the correct one, for example, as follows:

      < bytecount, cookie > = receive( q, buf, len )
      if cookie != cookies[q]
          sender used an invalid link
      else if bytecount > len
          receiver's buffer was too small
      else
          we received all of a genuine message

At any time, a process using this scheme may invalidate all links referring to some particular incoming message queue by assigning a new cookie for that queue!
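Pulling these pieces together, a minimal C sketch of the whole cookie scheme might look as follows; the table size is arbitrary, and random() is only a stand-in (see the discussion of random number quality below):

      #include <stdlib.h>

      #define NQUEUES 16                /* arbitrary number of incoming queues  */

      typedef struct link {
          int  process_id;              /* the process that owns the queue      */
          int  queue_number;            /* which incoming queue at that process */
          long cookie;                  /* must match the owner's cookie table  */
      } link;

      long cookies[ NQUEUES ];          /* this process's private cookie table  */

      void init_cookies( void ) {       /* run once, as the process starts      */
          for (int q = 0; q < NQUEUES; q++)
              cookies[q] = random();    /* see below: random() is too weak      */
      }

      link make_link( int my_id, int q ) {
          link l = { my_id, q, cookies[q] };
          return l;
      }

      /* Called by the receiver on each incoming message. */
      int link_is_valid( int q, long cookie ) {
          return cookie == cookies[q];
      }

      /* Revoke every outstanding link to queue q by changing its cookie. */
      void revoke_links( int q ) {
          cookies[q] = random();
      }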

With this scheme, the kernel need not keep a link table on behalf of each process. So long as the processes can trust the memory protection mechanisms that prevent other processes from peeking at their data structures, processes may store their links and cookie arrays anywhere within their own data structures.

If a process tries to forge a link to an incoming message queue it is not authorized to use, it must successfully guess the cookie that the owner of that queue has assigned to that queue. If it cannot guess this, the most damage it can do is to send masses of easily ignored invalid messages to that particular recipient.

Of course, there is a chance that some process will pass one of its cookies, stripped from the link it came with, to some other process, where that process will then use that cookie to construct a forged link. This is not a serious security threat, since the process that passed just the cookie could have passed the entire link. Therefore, this kind of abuse does not involve any new security risks.

Note that this Demos-like model does not include any access rights associated with links! The solution to including access rights within capabilities protected by server-side authentication requires a more complex model, the topic of the next lecture.

Probabilistic Versus Absolute Protection

Note that all server-side authentication rests on a probabilistic model of security. When we segregated capabilities from data using capability lists maintained by the kernel, or when we used tag bits to prevent programs from corrupting the representations of capabilities, we had an absolute guarantee that a user program could not corrupt or forge capabilities. When we protect the capability by including a cookie within its representation, there is always a chance that a user will accidentally create the correct value.

Therefore, the strongest guarantee we can make with server-side authentication is that a user will have, on any particular trial, only a certain probability of success with a forgery. If we know that, given the speed of the server, the user can only make t trials per second, and if we want our system to stand up to an attack for s seconds, then we must use:

      bits-per-cookie > log2( t * s )

Thus, for example, if our server can handle 1,000,000 trials per second, where each trial involves receipt of and discarding one illegal message, and if we want the system to survive an attack lasting 1,000,000 seconds (over 11 days), we need at least 40 bit cookies. Seen from another perspective, 40 bit cookies on this system will be more than adequate to protect objects with lifetimes shorter than 11 days, but they may be inadequate to protect longer-lived objects.
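The arithmetic in this example can be checked directly with a few lines of C, using the same numbers as above:

      #include <math.h>
      #include <stdio.h>

      int main( void ) {
          double t = 1e6;                /* forgery trials per second          */
          double s = 1e6;                /* seconds the attack must withstand  */
          double bits = log2( t * s );   /* minimum cookie size, in bits       */
          printf( "need more than %.1f bits per cookie\n", bits );   /* 39.9 */
          return 0;
      }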

Of course, in a real system, if some client started saturating some server with trials, we would probably notice that the service offered by this server was being degraded, and we might start investigating and find the attacker. A real attacker seriously intent on cracking a particular server's security could not generally afford to saturate that server with trials for an extended period. Therefore, in reality, our 40 bit cookie is probably good for a significantly longer period than 11 days.

A clever server that wanted to prevent such random trials might defend itself in several ways. For example, it might track the return addresses sent with recent rejected messages. If it receives more than some number of invalid messages from the same process, it could report a probable attack. This would force the attacker to create multiple processes for the attack, increasing the total load placed by the attack on the system and slowing the attacker by the time taken to create these new processes.

Another serious threat is that an adversary process could guess the cookie on one link from known cookies on other links. The key to this attack is the fact that genuine random numbers are hard to come by, while pseudorandom numbers, generated by an algorithm, are easy to produce, so servers are tempted to use them.

If the sender knows what pseudorandom number generator was used by a server to assign cookies to its links, and the sender knows the cookie on one link, it may be able to infer the seed used with that pseudorandom number generator, and from that seed, it may be able to infer the cookies assigned to the other links serviced by that server. This danger is very real if a simple pseudorandom number generator is used by the server (something like the old Unix rand() function).

Therefore, the server should use what is known as a cryptographically secure pseudorandom number generator to generate the cookies. Cryptographically secure generators have the property that, where P(i) is the ith pseudorandom number returned in sequence by the generator, knowing P(i) and the algorithm used by the generator does not help an observer guess P(i+1). With a classical multiplicative congruential pseudorandom number generator such as the old Unix rand() function, exactly the opposite was true! Each P(i) uniquely determined the next P(i+1).

One easy way to increase the security of a pseudorandom number generator is to make sure that the generator generates many more bits than required. Thus, if we wanted 40 bits, we might use a generator that maintained an 80-bit seed, and so had a period of about 2^80. This generates a very predictable sequence of 80-bit pseudorandom numbers, but if we return only the least significant n bits, then each number has 2^(80-n) possible successors. This is not enough, though! Some widely used pseudorandom number generators have very predictable patterns in their least significant bits, and the old Unix rand() was one of them. A full treatment of pseudorandom number generation algorithms is far beyond what we can give here!
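On a Unix-like system, one practical way to sidestep these pitfalls is to draw cookie bits from the operating system's own cryptographically strong source; this sketch assumes a system that provides /dev/urandom:

      #include <stdio.h>
      #include <stdint.h>

      /* Fill a cookie with bits from the system's own cryptographically
         strong source; returns 0 on failure (no /dev/urandom, short read). */
      int strong_cookie( uint64_t *cookie ) {
          FILE *f = fopen( "/dev/urandom", "rb" );
          if (f == NULL) return 0;
          size_t got = fread( cookie, sizeof *cookie, 1, f );
          fclose( f );
          return got == 1;
      }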