5. Simulation, Digital Logic and Operators
Part of
CS:2820 Object Oriented Software Development Notes, Spring 2019
In a discrete-event simulation model, the state of the system being simulated is described by a set of state variables: for example, variables describing where the vehicles are in a highway network, or variables describing the state of each wire in a digital computer as either true or false. Any change to these variables is described as an event. For example, when a vehicle arrives at an intersection, that is an event, and when a logic gate changes its output from true to false, that is an event.
In a discrete-event model, events are instantaneous and nothing at all happens between events. This is obviously a matter of abstraction or simplification. In the real world, all changes above the quantum level take time.
The basic discrete-event simulation algorithm has been known since the mid 1960s, and the algorithm applies to just about any discrete event model. The central feature of the model is a data structure, the pending event set. This is, as its name suggests, a set of pending events, that is, events that have been caused by events in the past but have not yet occurred.
The basic operations on the pending event set are schedule(e), which adds the event e to the set along with the time at which it should occur, and getNext(), which removes and returns the pending event with the earliest time.
Some discrete-event models allow previously scheduled events to be deleted. Some higher level models consider events that extend over time, but we will ignore these. Events that take time can be modeled by a sequence of instantaneous events, for example, one marking the start of a long event and another to mark its end, and if an event may need to be cancelled, this can be indicated by adding a state variable to indicate whether that event should do its usual work or do nothing.
Before the simulation starts, initial conditions are set up. These include the initial state of the model, and they include initial events. Without the initial events, nothing would happen ever. Initial events might, for example, mark the arrival of vehicles from the outside world and, for intersections with stop lights, start those lights on their red-green cycles.
The basic algorithm for discrete-event simulation can be stated in a Java-like language as follows:
// initialize the model
eventSet.schedule( x ); // for all x in the set of initial events
// run the simulation
while (!eventSet.isEmpty()) {
    Event e = eventSet.getNext();
// simulate event e at time e.time
// this may involve scheduling new events
}
There are two ways to terminate such a simulation: Either one of the initially scheduled events is a "terminate" event marking the end of time, or the model runs until the event set is empty.
In either case, within any iteration of the loop, e.time is the current time, and time lurches forward from event to event in an irregular manner. This is quite different from simulation models in which time advances in uniform-sized ticks; that type of model is more commonly used to deal with systems that are described by differential equations.
Note however that within the simulation of any event, the model may use arbitrarily complex code to predict the times of future events. In extreme cases, this prediction might involve running other simulation models, for example, involving differential equations.
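The loop above can be fleshed out into runnable Java using java.util.PriorityQueue as the pending event set. The class and method names here (Event, Simulator, schedule, run) are illustrative assumptions, not a required design:

```java
import java.util.PriorityQueue;

// One pending event: a time plus the action to perform at that time.
class Event implements Comparable<Event> {
    final float time;           // when this event occurs
    final Runnable action;      // what happens at that time

    Event(float time, Runnable action) {
        this.time = time;
        this.action = action;
    }

    // order events by time so the queue's head is the earliest event
    public int compareTo(Event other) {
        return Float.compare(this.time, other.time);
    }
}

class Simulator {
    // the pending event set, ordered so poll() returns the earliest event
    private static final PriorityQueue<Event> eventSet = new PriorityQueue<>();

    static void schedule(float time, Runnable action) {
        eventSet.add(new Event(time, action));
    }

    // run until the event set is empty
    static void run() {
        while (!eventSet.isEmpty()) {
            Event e = eventSet.poll();   // getNext(): earliest pending event
            e.action.run();              // simulate event e at time e.time
        }
    }
}
```

To use this, schedule the initial events, then call Simulator.run(); events scheduled during the run are handled in time order, however irregularly spaced they are.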
Consider the problem of modeling a bank. We begin with an informal description of one customer's visit to the bank: You arrive at the bank, wait in line for an available teller, interact with that teller for a while, and then leave. Tellers may need to do some paperwork or computer work after a customer departs before they are ready for the next customer. From this description, we can determine that our model will have the following objects and classes:
- customers, who arrive, possibly wait in line, are served, and then depart;
- tellers, who serve one customer at a time and may need follow-up time between customers;
- the line itself, a queue of customers waiting for an available teller.
A little more thought allows us to refine this by identifying events:
- a customer arrives at the bank;
- a customer begins service with a teller;
- a customer finishes service and departs;
- a teller finishes any follow-up work and becomes available for the next customer.
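The bank model can be sketched as a small self-contained simulation. Everything here is an illustrative assumption, not a required design: two tellers, a fixed service time, no follow-up time, and customers identified by integers:

```java
import java.util.ArrayDeque;
import java.util.PriorityQueue;
import java.util.Queue;

// One pending event: a time plus the action to perform at that time.
class Ev implements Comparable<Ev> {
    final float time;
    final Runnable act;
    Ev(float time, Runnable act) { this.time = time; this.act = act; }
    public int compareTo(Ev o) { return Float.compare(time, o.time); }
}

// A toy bank: arrivals either begin service or join the line; departures
// free a teller, who then takes the next waiting customer, if any.
class Bank {
    final PriorityQueue<Ev> events = new PriorityQueue<>();
    final Queue<Integer> line = new ArrayDeque<>();  // waiting customers
    int freeTellers = 2;                             // assumed teller count
    int served = 0;

    // event: customer c arrives at the bank at time t
    void arrival(float t, int c) {
        if (freeTellers > 0) beginService(t, c);
        else line.add(c);
    }

    // event: customer c begins service; schedule the departure
    void beginService(float t, int c) {
        freeTellers--;
        final float done = t + 5.0f;   // assumed fixed service time
        events.add(new Ev(done, () -> departure(done, c)));
    }

    // event: customer c departs; the teller takes the next customer, if any
    void departure(float t, int c) {
        served++;
        freeTellers++;
        if (!line.isEmpty()) beginService(t, line.remove());
    }

    // run until the event set is empty
    void run() {
        while (!events.isEmpty()) events.poll().act.run();
    }
}
```

A realistic model would draw arrival and service times from random distributions, but the event structure would be the same.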
We need a subject to use for the large-scale simulation exercise that will motivate the bulk of the machine problems assigned this semester. While, in the lectures, we will focus on the simulation of a road network, the machine problems will focus on something else.
This semester, we will focus on conventional boolean digital logic, the kind that underlies the computers inside just about everything these days. In previous semesters, we have worked with ternary (3-valued) logic and neural networks. In some previous offerings of this course, we worked with digital logic, but don't worry, the logic simulator we write this semester will be totally incompatible with the simulators from those offerings.
All solutions to programming assignments from my prior offerings of this course are available on line. Feel free to study them.
Boolean logic is called that because it was formulated by George Boole (1815-1864). In 1854, Boole published The Laws of Thought. In this work, he demonstrated that logic obeyed algebraic laws and was, therefore, properly considered to be a branch of mathematics. Previously, logic had been seen purely as a branch of philosophy. In medieval times, the seven liberal arts were music, arithmetic, geometry and astronomy (the quadrivium or scientific arts), plus grammar, logic and rhetoric (the trivium or humanities). Boole's algebra moved logic from the humanities to the sciences. We get the word trivial from the trivium.
Boolean logic was not considered to have any practical engineering uses until Claude Shannon showed, in 1937, how it could be used to analyze systems of electrical relays. Dial telephones based on relay technology had been around since the 1890s, but the design of relay-based systems was ad hoc until Shannon, using the work of Boole, demonstrated that relays implement logical operations and therefore are subject to mathematical analysis.
You all know a good amount of boolean logic, since this is taught in Discrete Structures, a prerequisite for this course, and since boolean variables and expressions are a common feature of programming languages. Nonetheless, a bit of review is appropriate. The two basic boolean values are false and true.
Ternary logic adds a third value, unknown, and quaternary logic divides unknown into two values, overdefined meaning neither true nor false, and undefined meaning could be either true or false. These may be interesting, but we'll ignore them this semester.
George Boole represented false as 0 and true as 1; and he noticed that, with these two values, the logical and operator behaved like multiplication, and the logical or operator behaved sort of like addition if you take any nonzero result and replace it with 1. We continue to use essentially the same conventions in modern computer languages such as C, C++ and Java, but nothing requires us to do so.
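Boole's arithmetic reading of the two operators can be sketched in a few lines of Java, treating the values as the integers 0 and 1 (the class and method names are illustrative):

```java
// Boole's arithmetic view of logic on the values {0, 1}.
class BooleanArithmetic {
    // 'and' behaves exactly like multiplication on {0, 1}
    static int and(int a, int b) { return a * b; }

    // 'or' behaves like addition, with any nonzero sum replaced by 1
    static int or(int a, int b) { return (a + b) > 0 ? 1 : 0; }
}
```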
If we limit ourselves to two-argument operators, where each argument has 2 possible values, any boolean function f(a,b) can be described by a truth table with the following structure:
 a b | f(a,b)
-----+-------
 0 0 |   w
 0 1 |   x
 1 0 |   y
 1 1 |   z
In the above table, the outputs are given as unknowns, w, x, y and z. Since there are 4 values in the output column and each value may take exactly 2 values, there are 16 possible boolean functions of 2 inputs. We can enumerate all 16 of these functions and give many of them names:
 wxyz | name        wxyz | name
------+------      ------+-------
 0000 | false       1000 | nor
 0001 | and         1001 | equ
 0010 |             1010 | not b
 0011 | a           1011 |
 0100 |             1100 | not a
 0101 | b           1101 |
 0110 | xor         1110 | nand
 0111 | or          1111 | true
The above table only lists the boolean functions that digital logic designers commonly name. Logicians like to name additional functions such as implication (1101).
The difference between the boolean value true and the boolean function true(a,b) may not be obvious. This function has the value true regardless of the values of its arguments. Similarly, the function a(a,b) has the value of its first argument, regardless of the value of the second argument.
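Because the column wxyz completely determines a two-input function, all 16 functions can be evaluated by one routine that takes the 4-bit code as a parameter. This is just a sketch; the names and the bit-ordering convention (w is the leftmost bit, the f(0,0) row) are assumptions chosen to match the table above:

```java
// Evaluate any two-input boolean function from its 4-bit truth-table
// column wxyz, written as a binary literal such as 0b0110 for xor.
class LogicFn {
    static boolean apply(int code, boolean a, boolean b) {
        int row = (a ? 2 : 0) + (b ? 1 : 0);   // 0..3, the row of the table
        // w, the first row's output, is the leftmost (highest) bit of code
        return ((code >> (3 - row)) & 1) == 1;
    }
}
```

For example, apply(0b1101, a, b) computes the logicians' implication mentioned below, and apply(0b0001, a, b) computes and.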
Note that there are some alternative ways of describing logic functions. In threshold logic, for example, the basic logic operator is described by its threshold t and polarity p. If t or more inputs are true then the output is p; otherwise, the output is not(p). Threshold gates may have any number of inputs. So, for example, a 3-input gate with a threshold of 2 and polarity true will output true if 2 or more of its inputs are true and false otherwise.
If we limit ourselves to two-input logic functions, threshold logic can compute 6 of the 16 logic functions in the table above.
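The threshold-gate rule described above is easy to state in code. This sketch handles any number of inputs; the class and method names are illustrative:

```java
// A threshold gate: output is p when at least t inputs are true,
// otherwise not(p).
class ThresholdGate {
    static boolean output(int t, boolean p, boolean... inputs) {
        int count = 0;
        for (boolean in : inputs) if (in) count++;   // count true inputs
        return (count >= t) ? p : !p;
    }
}
```

With two inputs, threshold 2 and polarity true gives and, threshold 1 and polarity true gives or, and flipping the polarity gives nand and nor.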
A mathematician is happy with the concept of a logic function, but electrical engineers who design computers need to worry about the speed of light and about the speed of logic. Logicians think of logical functions as establishing equations where time is not an issue. Electrical engineers think in terms of building devices that evaluate logic functions.
We call a device that evaluates a logic function a gate. Each gate has inputs and an output, where the hardware of that gate sets the value of its output as a function of the values of its input. Unlike the mathematician's idealized functions, however, gates have time delays. If an input to the gate changes, the output will only change some time later. In the general case, the delay can depend on which input changed and how.
For most design of digital circuitry, we can use a simpler model that assigns a fixed delay to each gate. This delay does not depend on the values that change: if an input to a gate changes in a way that will cause the output of that gate to change, that change becomes visible only after the delay.
In slightly more sophisticated models, the delay from input to output of a gate has a slight random factor that simulates the effect of electrical noise, and the gate has a minimum response time so that, if the output would go from true to false to true or from false to true to false in a time shorter than the gate delay, the intermediate value is never seen and an outside observer looking at the output of the gate never sees any change. To start with, we'll ignore this, but note that this complexity begins to matter when there is feedback in a digital circuit.
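In discrete-event terms, the fixed-delay model means that when a gate's input changes, the new output value is scheduled to appear one gate delay later. This self-contained sketch shows the idea for a two-input and gate; the names and the delay value are assumptions, and note that this naive version happily propagates pulses shorter than the delay, the very behavior the more sophisticated models suppress:

```java
import java.util.PriorityQueue;

// Fixed-delay model of a two-input and gate, in discrete-event style.
class DelaySim {
    static class Ev implements Comparable<Ev> {
        final float time;
        final Runnable act;
        Ev(float time, Runnable act) { this.time = time; this.act = act; }
        public int compareTo(Ev o) { return Float.compare(time, o.time); }
    }

    final PriorityQueue<Ev> events = new PriorityQueue<>();

    boolean in1, in2, out;      // the state of the wires
    final float delay = 2.0f;   // assumed fixed gate delay

    // an input changes at time t; the and of the inputs reaches the
    // output only one gate delay later
    void inputChange(float t, int which, boolean v) {
        if (which == 1) in1 = v; else in2 = v;
        final boolean newOut = in1 && in2;
        events.add(new Ev(t + delay, () -> out = newOut));
    }

    // run until the event set is empty
    void run() {
        while (!events.isEmpty()) events.poll().act.run();
    }
}
```

If in1 goes true at time 0 and in2 goes true at time 1, the output becomes true at time 3, two time units after the last input change.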
Finally, in real digital logic, physical wires of some kind connect the outputs of gates to the inputs of other gates. These wires do not transmit signals instantly, but rather, signals injected into one end of a wire take a finite but small time to reach the other end of the wire. Modern computers are fast enough that the speed of light (about 1 foot per nanosecond) does not look infinite. When an electrical signal travels down a straight perfectly conducting wire surrounded by vacuum with no other conductors nearby, it travels at the speed of light.
None of the wires in a computer are infinitely far from other conductors, and none of them are perfect conductors. As a result, signals travel slower than the speed of light. In typical cables, signals travel at about 2/3 to 1/2 the speed of light. For signals traveling on traces on the surface of a printed circuit board, 1/2 the speed of light is a good guess. Signals traveling through buried layers are slower, and signals traveling on buried layers within a silicon integrated circuit chip can be much slower.
This is not a digital logic course. You will not be expected to be able to design digital logic circuits nor to understand how they are used to build computers! You will, however, be building a logic simulator. To do this, you will have to extract the relevant details from the description above and convert them to classes and objects in much the way we are describing road networks using classes and objects.