5. Discrete Event Simulation

Part of CS:2820 Object Oriented Software Development Notes, Spring 2017
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

 

The Discrete-Event Simulation Algorithm

The basic discrete-event simulation algorithm has been known since the mid 1960s, and the algorithm applies to just about any discrete event model. The central feature of the model is a data structure, the pending event set. This is, as its name suggests, a set of pending events, that is, events that have been caused by events in the past but have not yet occurred.

The basic operations on the pending event set are:

schedule e
The event e is inserted into the pending event set; e.time is the time at which the event is scheduled to occur.

e = getNext
The event e is extracted from the pending event set; e.time is less than or equal to the times of all other events in the set. If several events are scheduled to occur at exactly the same time, which of them is extracted first is not specified, so the model may be nondeterministic.

Some discrete-event models allow previously scheduled events to be deleted. Some higher level models consider events that extend over time, but we will ignore these. (Events that take time can be modeled by a sequence of instantaneous events, for example, one that starts the high-level event and another that marks its end).
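One way to realize the pending event set in Java is with a priority queue ordered by event time. The sketch below is only illustrative; the Event class, its time field, and its trigger method are assumptions made here for the sake of the example, not part of any particular assignment.

import java.util.PriorityQueue;

// a hypothetical event record; subclasses override trigger() to simulate the event
abstract class Event {
    public final float time;          // the time at which this event is to occur
    public Event( float time ) { this.time = time; }
    public abstract void trigger();   // simulate this event; may schedule new events
}

// one possible pending event set, ordered so that getNext() returns an event
// whose time is less than or equal to the times of all other events in the set
class PendingEventSet {
    private final PriorityQueue<Event> queue =
        new PriorityQueue<Event>( (a, b) -> Float.compare( a.time, b.time ) );

    public void schedule( Event e ) { queue.add( e ); }       // insert e into the set
    public Event getNext()          { return queue.poll(); }  // extract the earliest event
    public boolean isEmpty()        { return queue.isEmpty(); }
}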

The basic algorithm can be stated in a Java-like language as:

// for all x from the set of initial events
eventSet.schedule( x );

while (true) {
    Event e = eventSet.getNext();
    // simulate event e at time e.time
    // this may involve scheduling new events
}

There are two ways to terminate such a simulation: Either one of the initially scheduled events is a "terminate" event marking the end of time, or the model runs until the event set is empty.
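Using the isEmpty operation from the sketch above, the second termination style, running until the event set is empty, might look like this:

// run the model until no pending events remain
while (!eventSet.isEmpty()) {
    Event e = eventSet.getNext();
    e.trigger();   // simulate e at time e.time; this may schedule new events
}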

In either case, within any iteration of the loop, e.time is the current time, and time lurches forward from event to event in an irregular manner. This is quite different from simulation models in which time advances in uniform-sized ticks; that type of model is more commonly used to deal with systems that are described by differential equations.

Note however that within the simulation of any event, the model may use arbitrarily complex code to predict the times of future events. In extreme cases, this prediction might involve running other simulation models, for example, involving differential equations.
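For example, a model might predict a completion time by drawing a random service time; the exponential distribution and the helper class below are assumptions chosen here purely for illustration.

import java.util.Random;

class ServiceTimes {
    private static final Random prng = new Random();

    // draw an exponentially distributed service time with the given mean and
    // return the future time at which service will complete
    static float completionTime( float now, float meanServiceTime ) {
        double u = 1.0 - prng.nextDouble();   // in (0, 1], so the logarithm is safe
        return now + (float) ( -meanServiceTime * Math.log( u ) );
    }
}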

A Classic Example

Consider the problem of modeling a bank. We begin with an informal description of one customer's visit to the bank: You arrive at the bank, wait in line for an available teller, interact with that teller for a while, and then leave. Tellers may need to do some paperwork or computer work after a customer departs before they are ready for the next customer. From this description, we can determine that our model will have the following objects and classes: customers, tellers, and the line, a queue in which customers wait for an available teller.

A little more thought allows us to refine this by identifying events: a customer arrives at the bank, a customer begins service with a teller, a customer's service ends and the customer leaves, and a teller finishes any after-service work and becomes available again.
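As a sketch of how one of these events might be coded, here is a possible customer-arrival event, built on the hypothetical Event and PendingEventSet classes above; the Customer, Teller, and Bank classes are likewise illustrative assumptions, not a prescribed design.

import java.util.LinkedList;
import java.util.Queue;

// minimal stand-ins for the model's objects
class Customer { }
class Teller  { boolean busy = false; }

// a hypothetical holder for the model's shared state
class Bank {
    static final PendingEventSet eventSet = new PendingEventSet();
    static final Queue<Customer> line = new LinkedList<Customer>();
    static final Teller[] tellers = new Teller[ 3 ];   // an assumed number of tellers

    // return an idle teller, or null if all are busy
    static Teller freeTeller() {
        for (Teller t : tellers) if (t != null && !t.busy) return t;
        return null;
    }
}

// one possible coding of the customer-arrival event
class Arrival extends Event {
    private final Customer customer;

    public Arrival( float time, Customer customer ) {
        super( time );
        this.customer = customer;
    }

    public void trigger() {
        Teller t = Bank.freeTeller();
        if (t != null) {
            t.busy = true;   // a free teller: service begins immediately,
                             // so an end-of-service event would be scheduled here
        } else {
            Bank.line.add( customer );   // no free teller: the customer joins the line
        }
    }
}

The other events follow the same pattern: each one updates the state of the model and schedules whatever future events it causes.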

Ternary Logic

There is a proof, first given in the late 1950s, that base 3, ternary, is the optimal number base. Actually, the proof is that base e is optimal, where e, Euler's number, is approximately 2.71828. Of course, we want an integer number base, and 3 is the closest integer to e. Base 2 is the second closest, so binary is not a horrible choice. The entire proof is somewhat suspect because it is based on assumptions that may not be correct. Nonetheless, there has been longstanding interest in base 3 computers, either as jokes or as practical machines.

Between 1959 and 1965, the Russians put a ternary computer into what then counted as mass production: 50 machines of the Setun design, developed at Moscow State University, were built. A successor, the Setun-70, was designed in 1970, and various ternary computers have been proposed in the decades that followed.

Boolean logic, as formulated by George Boole, uses the values false and true. Ternary logic, as formulated by Stephen Kleene in 1950, uses three values: false, unknown, and true.

With Boolean logic, the general approach is to represent false as 0 and true as 1. With Kleene logic, there are two sensible representations: an unbalanced one, where false is 0, unknown is 1, and true is 2, and a balanced one, where false is -1, unknown is 0, and true is +1.

Both of these are the basis of number systems. The idea of a digit having a negative value is strange, but it works perfectly well. Setun used the balanced ternary number system. The unbalanced number system is more familiar: digits with values ranging from zero up to the radix minus one are what we use in both binary and base ten.
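Both representations read a string of ternary digits in exactly the same way; only the set of allowed digit values differs. A small sketch (the class and method names are only illustrative):

// interpret an array of ternary digits, most significant digit first
class Ternary {
    // unbalanced digits come from {0, 1, 2}; balanced digits come from {-1, 0, +1}
    static int valueOf( int[] digits ) {
        int value = 0;
        for (int d : digits) value = value * 3 + d;
        return value;
    }

    public static void main( String[] args ) {
        System.out.println( valueOf( new int[]{ 1, 2, 0 } ) );    // unbalanced 120 is 15
        System.out.println( valueOf( new int[]{ 1, -1, 0 } ) );   // balanced 1,-1,0 is 6
    }
}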

There are a huge number of potential ternary logic functions, but only a few of them are needed to construct any possible logic function.
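Whatever small set is chosen, Kleene's own connectives give the flavor of coding ternary logic functions. In the balanced representation, negation is a sign flip, conjunction is the minimum, and disjunction is the maximum; note that these three by themselves are not a functionally complete set, so the sketch below is only illustrative.

// Kleene connectives in the balanced representation: false = -1, unknown = 0, true = +1
class Kleene {
    static int not( int a )        { return -a; }                 // negation is sign flip
    static int and( int a, int b ) { return Math.min( a, b ); }   // conjunction is minimum
    static int or ( int a, int b ) { return Math.max( a, b ); }   // disjunction is maximum
}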

A mathematician is happy with the concept of a logic function, but electrical engineers who design computers need to worry about the speed of light and about the speed of logic. Logicians think of logical functions as establishing equations where time is not an issue. Electrical engineers think in terms of building devices that evaluate logic functions.

We call a device that evaluates a logic function a gate. Each gate has inputs and an output, where the hardware of the gate sets the value of its output as a function of the values of its inputs. Unlike the mathematician's idealized functions, however, gates have time delays. If an input to the gate changes, the output will only change some time later. In the general case, the delay can depend on which input changed and how, but for most digital circuit design, it suffices to say that the delay between input changes and output changes does not depend on which input changed or what the changed value was.

In slightly more sophisticated models, the delay from input to output of a gate has a slight random factor (simulating the effect of electrical noise on the circuit), and if two output changes would occur too closely together, the intermediate output value that the second change cancels is never seen. We'll ignore this complexity.
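In a discrete-event model of a logic circuit, this behavior is coded naturally: when an input changes, the gate computes its new output and schedules an output-change event one gate delay in the future. The sketch below reuses the hypothetical Event and PendingEventSet classes from earlier; the Simulator and Gate classes and their fields are assumptions made for this example.

// a hypothetical holder for the model's pending event set
class Simulator {
    static final PendingEventSet eventSet = new PendingEventSet();
}

// a sketch of a 2-input gate in a discrete-event logic simulator
class Gate {
    float delay;                        // time from an input change to the output change
    int[] inputValues = new int[ 2 ];   // current values of this gate's inputs
    int outputValue;                    // current value of this gate's output

    // the gate's logic function, here the Kleene "and" (minimum) from the sketch above
    int logicFunction() {
        return Math.min( inputValues[ 0 ], inputValues[ 1 ] );
    }

    // called when input number i changes to value v at the given time
    void inputChange( float time, int i, int v ) {
        inputValues[ i ] = v;
        int newOutput = logicFunction();
        if (newOutput != outputValue) {
            // the output will change, but only one gate delay later
            Simulator.eventSet.schedule( new OutputChange( time + delay, this, newOutput ) );
        }
    }
}

// the corresponding output-change event
class OutputChange extends Event {
    private final Gate gate;
    private final int newValue;

    public OutputChange( float time, Gate gate, int newValue ) {
        super( time );
        this.gate = gate;
        this.newValue = newValue;
    }

    public void trigger() {
        gate.outputValue = newValue;
        // here the simulator would notify the wires driven by this gate's output
    }
}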

Finally, in real digital logic, physical wires of some kind connect the outputs of gates to the inputs of other gates. These wires do not transmit signals instantly, but rather, signals injected into one end of a wire take a finite but small time to reach the other end of the wire. Modern computers are fast enough that the speed of light (about 1 foot per nanosecond) does not look infinite. Electrical signals traveling through buried conductors inside a silicon integrated-circuit chip travel at speeds significantly slower than the speed of light, and modern microprocessor chips can be big enough that a signal cannot necessarily cross the chip in the time it takes for a single instruction to execute.
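Wire delay fits the same event-driven pattern: a change at the source end of a wire is only delivered to the destination gate's input after the wire's propagation delay. A sketch, under the same assumptions as the gate example above:

// a sketch of a wire from one gate's output to one gate's input
class Wire {
    float delay;        // propagation delay along the wire
    Gate destination;   // the gate whose input this wire drives
    int inputNumber;    // which input of the destination gate it drives

    // called when the source gate's output changes at the given time
    void outputChanged( float time, int newValue ) {
        // the destination sees the change only after the wire delay
        Simulator.eventSet.schedule(
            new InputChange( time + delay, destination, inputNumber, newValue ) );
    }
}

// the event that delivers the changed value to the destination gate's input
class InputChange extends Event {
    private final Gate gate;
    private final int inputNumber, newValue;

    public InputChange( float time, Gate gate, int inputNumber, int newValue ) {
        super( time );
        this.gate = gate;
        this.inputNumber = inputNumber;
        this.newValue = newValue;
    }

    public void trigger() {
        gate.inputChange( this.time, inputNumber, newValue );
    }
}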