One way to take advantage of parallelism in an architecture is to directly expose that parallelism to the programmer. This is most common in DSP systems, particularly those with what are known as VLIW or Very Long Instruction Word architectures.
The basic idea is to simply allow each instruction to directly utilize each of a number of functional units within the CPU. Consider, for example, the following instruction format:
     _____ _____ _____ ______ _____ _____ _____ ______
    | aS1 | aS2 | aOP | aDST | bS1 | bS2 | bOP | bDST | ...
    |_____|_____|_____|______|_____|_____|_____|______|
    |          ALUa          |          ALUb          |

          ___ _____ _____ _________ _____ _____ _____ _________
     ... |mOP| mR  | mX  |  mDISP  | cOP | cR  | cX  |  cDISP  |
         |___|_____|_____|_________|_____|_____|_____|_________|
         |         memory          |          control          |

    aOP, bOP: add, subtract, multiply, divide, and, or, etc.
              S1 and S2 are source registers
              DST is the destination register
    mOP:      load, store, load immediate
              R is the register to load or store
              X is the index register
    cOP:      branch, call, branch if positive, branch if zero, etc.
              R is the register to test for conditional ops
              X is the index register

Here, each instruction has 4 major fields: The ALUa and ALUb fields control the function of two ALUs. In one instruction cycle, each ALU may take two operands (S1 and S2), combine them using an operation OP, and deliver the result to a destination DST.
Each instruction may also perform a memory operation, loading or storing the contents of register R in a memory location computed by adding the contents of index register X to the displacement DISP. Other memory field operations might include load immediate, using the combination X|DISP as an immediate constant.
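The address computation just described can be sketched in a few lines of Python (the register file and memory shown here are illustrative stand-ins, not part of the architecture definition):

```python
# Sketch of the memory functional unit described above: the effective
# address is the contents of index register X plus the displacement DISP.

def effective_address(regs, x, disp):
    # X names an index register; DISP is a constant from the instruction.
    return regs[x] + disp

def mem_load(regs, mem, r, x, disp):
    # load: R = M[X + DISP]
    regs[r] = mem[effective_address(regs, x, disp)]

def mem_store(regs, mem, r, x, disp):
    # store: M[X + DISP] = R
    mem[effective_address(regs, x, disp)] = regs[r]
```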
It takes a fair number of registers to allow efficient use of an architecture such as this. Assuming the register fields are all 6 bits each, allowing 64 registers, that the op fields are all 4 bits each, and that the addressing displacements are all 12 bits, we have 10×6 bits for registers, 4×4 bits for operation specification, and 2×12 bits for displacements, or 100 bits per instruction! This is big, but that is what the name VLIW suggests!
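As a quick check of that arithmetic, totaling the field widths assumed above:

```python
# Width of the example instruction, from the field sizes assumed above.
register_bits = 10 * 6       # aS1 aS2 aDST bS1 bS2 bDST mR mX cR cX, 6 bits each
opcode_bits = 4 * 4          # aOP bOP mOP cOP, 4 bits each
displacement_bits = 2 * 12   # mDISP and cDISP, 12 bits each
instruction_bits = register_bits + opcode_bits + displacement_bits
```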
Note that each instruction in the example format may read from as many as 8 registers (aS1, aS2, bS1, bS2, mR on store, mX, cR, cX) and each instruction may update as many as 4 registers (aDST, bDST, mR on load, cR on call, assuming that a call instruction saves the return address in the designated register). Thus, we need an extraordinary number of parallel accesses to the registers in the CPU, but building register files with this many ports is a solved problem!
This instruction set also allows every instruction to be a memory reference instruction, although it is likely that there will be no-op opcodes in each functional unit's field of the instruction word. Therefore, we will only achieve peak performance if we can perform two simultaneous memory references for each instruction: one to fetch the instruction word itself and one for the data.
Several aspects of VLIW instruction set design can be evaluated in exactly the same way as a conventional architecture. For example, the question of how many registers should be included works out in the same way, as do questions about the division of the register set between general purpose registers and special purpose registers such as index registers, floating point registers, and so on.
The one aspect of a VLIW instruction set that requires new approaches to evaluation is the question of how many distinct functional units should be included in the machine!
With conventional architectures, the presence of anything like a functional unit is hidden from the programmer; such units may lurk inside the machine, but their number may change from one implementation of the architecture to the next. With a VLIW architecture, their presence is exposed, so their number must be fixed for all implementations of the architecture.
It is clear that not all functional units will be needed in every instruction. Therefore, there must be at least one "no-op" operation available to each functional unit. If this is an explicit no-op, evaluation is somewhat simplified, but pointless computations on otherwise unneeded instructions are effective no-ops, as are branches to the next instruction in sequence.
The question an evaluator can ask is: in well-optimized code, what fraction of the instructions contain a no-op in each functional unit's field? If the fraction for some unit is high, the instruction set is out of balance. If the fraction is low, that functional unit is fully utilized. If the no-op fractions are roughly uniform across the different functional units, the architecture is well balanced.
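This measurement is easy to automate. The sketch below counts the no-op fraction per functional-unit field over a stream of decoded instructions; the trace representation (one dict per instruction) is an assumption made for illustration:

```python
# For each functional-unit field, count the fraction of instructions
# in a trace whose field holds a no-op.

FIELDS = ("ALUa", "ALUb", "memory", "control")

def noop_fractions(trace):
    counts = {f: 0 for f in FIELDS}
    for instruction in trace:
        for f in FIELDS:
            if instruction[f] == "nop":
                counts[f] += 1
    return {f: counts[f] / len(trace) for f in FIELDS}
```

A balanced design would show these fractions roughly equal; a field whose fraction sits far above the others marks an over-provisioned unit.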
If the fraction of no-ops in the ALU fields is high, this is probably because our design has too many ALUs relative to the other classes of functional units.

If the fraction of no-ops in the memory field is high, this may be because the design has too few ALUs; sequences of instructions are being used instead of single instructions, leaving the memory field idle in most of them.

If the fraction of no-ops in the control field is high, this may be because the design has too few ALUs, as above.
The set of functions each functional unit is able to perform may be more varied than an initial inspection suggests. For example, the memory reference functional unit of the example VLIW architecture will certainly include the operations already listed in the format: load, store, and load immediate, along with a no-op.
The branch functional unit is also likely to include indexed addressing, because a general indexed branch instruction can do the work of many operations, including case-select and function return.
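To see why, consider how one indexed branch covers both uses; the sketch below assumes a hypothetical jump-table layout and register contents:

```python
# An indexed branch computes its target as X + DISP, just like the
# memory unit's effective address.  Two idioms fall out directly:
#
#   Case select: X holds the case number scaled by the table entry
#   size, and DISP is the base address of the branch table.
#   Function return: X holds the saved return address, and DISP is zero.

def branch_target(regs, x, disp):
    return regs[x] + disp
```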
Consider the following problem:
    t = 0
    for i = 1 to 10 do
        t = t + a[i]*b[i]

Here, t is the vector dot product of the 10-vectors a and b. Translating this to our VLIW instruction set is not easy! A first effort might be:
         ALUa            ALUb   memory        control
         -               -      Rt = 0;       -
         -               -      Ri = 1;       -
    LP:  -               -      Ra = a[Ri];   -
         -               -      Rb = b[Ri];   -
         Ra = Ra * Rb;   -      -             -
         Rt = Rt + Ra;   -      -             -
         Ri = Ri + 1;    -      -             -
         Ra = Ri + -10;  -      -             -
         -               -      -             if Ra <= 0, goto LP;

Here, we simply did a brute-force literal translation of the original to machine code, and in the process, we never used more than one functional unit in any instruction. Optimization of this kind of code is difficult but can have a huge payoff! Consider the following equivalent code:
         ALUa            ALUb            memory        control
         -               -               Rt = 0;       -
         -               -               Ri = 1;       -
         -               -               Rj = 10;      -
         -               -               R1 = 1;       -
         -               Rj = Rj - R1;   Ra = a[Ri];   -
         -               Ri = Ri + R1;   Rb = b[Ri];   -
    LP:  Rc = Ra * Rb;   Rj = Rj - R1;   Ra = a[Ri];   -
         Rt = Rt + Rc;   Ri = Ri + R1;   Rb = b[Ri];   if Rj > 0, goto LP;
         Rc = Ra * Rb;   -               -             -
         Rt = Rt + Rc;   -               -             -

In the above, we added a new loop control variable, Rj, used to count from 10 down to 0. This is because testing for zero is a very common feature of conditional branches, while comparison with an immediate constant is difficult. The second thing we did was to "pipeline" the iteration, so that each iteration of the main loop fetches the operands from one vector element while it multiplies the previous vector elements and adds them to the sum.
This reduces the loop to just two instructions, with only one no-op! Note that the 4 functional units operate in parallel! Therefore, the results from a computation in one functional unit are not available to another functional unit until the next instruction. Incrementing Ri, for example, in the same instruction as a fetch from b[Ri], will use the old value for indexed addressing while computing a new value for use in the next iteration.
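The register-read timing can be made concrete with a small simulation of one instruction from the loop (register names follow the example code; the dict-based register file is an illustration only):

```python
# All fields of one VLIW instruction read their source registers
# before any field writes back, so ALUb's increment of Ri does not
# disturb the memory field's use of the old Ri in the same instruction.

def execute_instruction(regs, b):
    # Instruction:  ALUb: Ri = Ri + R1;   memory: Rb = b[Ri]
    old_ri = regs["Ri"]              # every field sees this value
    new_ri = old_ri + regs["R1"]     # ALUb result
    new_rb = b[old_ri]               # memory fetch indexes with the OLD Ri
    regs["Ri"], regs["Rb"] = new_ri, new_rb   # write-back happens together
```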
Because of the pipelined execution of the loop, with Ra and Rb serving as "interstage registers" for this software pipeline, we needed to add a pair of instructions to the loop prologue to start the first iteration, filling the pipeline before the loop begins, and we had to add a pair of instructions making up a loop epilogue to finish up what was in the pipeline after the loop terminates.
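The prologue/kernel/epilogue structure can be rendered in ordinary Python as a sketch (0-based indexing, unlike the 1-based example, and not a cycle-accurate model):

```python
# Software-pipelined dot product mirroring the VLIW code above: the
# prologue fetches the first operand pair into the interstage registers
# Ra and Rb, each kernel pass multiplies the previous pair while
# fetching the next, and the epilogue drains the last product.

def pipelined_dot(a, b):
    n = len(a)                  # assume n >= 2, as in the 10-element example
    rt, r1 = 0, 1
    ri, rj = 0, n
    # prologue: fill the pipeline
    rj -= r1; ra = a[ri]
    rb = b[ri]; ri += r1
    # kernel: two "instructions" per iteration, all four units busy
    while rj > 0:
        rc = ra * rb; rj -= r1; ra = a[ri]
        rt += rc; rb = b[ri]; ri += r1
    # epilogue: finish the product still in flight
    rc = ra * rb
    rt += rc
    return rt
```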
In a typical program, there would be many blocks of code like this; when splicing one such block of code to the next, it is usually possible to overlap the loop epilogue of one block with the loop prologue of the next. The above example illustrates this possibility nicely, because the set of functional units used in the prologue does not overlap the set of functional units used in the epilogue.
Our example architecture included no provisions for small immediate constants as operands for the two ALUs, so the amount by which the array index and loop counter are adjusted for each iteration is stored in a register, R1, holding the constant 1 for the duration of the loop. A set of ALU operations that interpreted one of the operand register select fields as a small constant instead of a register number would have been more elegant.
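The suggested refinement might decode as follows; the immediate-mode bit and the sign convention are assumptions for illustration, not part of the example format:

```python
# Reinterpret the 6-bit S2 field as a small signed constant when a
# (hypothetical) immediate-mode bit is set in the ALU opcode.

def alu_operand2(regs, s2_field, immediate_mode):
    if immediate_mode:
        # sign-extend the 6-bit field, giving the range -32 .. 31
        return s2_field - 64 if s2_field >= 32 else s2_field
    return regs[s2_field]        # otherwise S2 names a register
```

With a decoding like this, Ri = Ri + 1 and Rj = Rj - 1 would need no register dedicated to holding the constant 1.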
Also, notice in the above that one of the ALUs was used for loop control variable updates, an operation that is likely to involve short integers, while the other ALU is used for all computations involved in producing the actual result, operations likely to involve long integers or floating point variables. This strongly suggests that it would be quite reasonable to give the two ALUs different word lengths and different sets of operators! Of course, it would be wrong to judge the architecture on just this one problem! The decision to move forward with such a design should rest on examination of a large suite of programs!
This example illustrates an important problem with all high performance architectures! Naive machine-language code rarely makes full use of the parallelism inherent in the CPU design. Clear, easy-to-read assembly language code for such machines rarely performs well, while code that performs well is frequently convoluted and hard to understand and maintain. As a result, it is quite common to find that the output of a good optimizing compiler outperforms hand-crafted assembly language, because the compiler is under no obligation to generate clear and easy-to-maintain object code, while the assembly language programmer is almost always under this obligation.