11. Floating Point
Part of
CS:2630, Computer Organization Notes
The Hawk architecture includes two instructions reserved for communication with coprocessors. A coprocessor is a special purpose processor that operates in conjunction with the central processor. Coprocessors may be physically separate from the central processor, on a separate chip, or they may be integrated on the same chip with it. It is the logical separation that is essential. Coprocessors are commonly used for floating point, but they have also been used for graphics, cryptography, and other specialized operations.
The Hawk coprocessor instructions, COSET and COGET, move data between general purpose registers and registers inside the selected coprocessor. Our focus is on coprocessor number one, the floating point coprocessor.
In these instructions, dst always refers to a CPU register, while x refers to a register in the active coprocessor, as selected by the value in coprocessor register zero, the coprocessor status register, COSTAT. Fields of this register determine which coprocessors are enabled (powered up), which coprocessor is currently selected, and in some cases, the operating mode of that coprocessor. See the Hawk manual for the full details of the fields in COSTAT. Assuming that there is a USE directive for the float.h file, so that the right definitions are available, the following code enables the floating point coprocessor for short (32-bit) floating point operands:
        LIL     R1, FPENAB + FPSEL
        COSET   R1, COSTAT
Once the floating point coprocessor is enabled and selected, addressing coprocessor registers 1 to 15 refers specifically to registers inside the floating point coprocessor and not any other. When operating in short format, there are only two useful registers in the floating-point coprocessor, floating-point accumulators zero and one, FPA0 and FPA1, which correspond to coprocessor registers 2 and 3. Coprocessor register 1 will be ignored, for now. It is used for access to the least significant halfword of long (64-bit) floating point operands.
Floating point coprocessor registers 4 to 15 are not really registers. Rather, setting one of these pseudo-registers triggers an operation on one of the floating point accumulators. Another way to think about this is to say that the least significant bit of the x field is the coprocessor register select bit, while the high 3 bits of the x field are the coprocessor operation code.
The available operations are floating point add, subtract, multiply and divide, as well as square root and integer to floating conversion. Setting the even registers 4 to 14 operates on FPA0, while setting the odd registers 5 to 15 operates on FPA1. For example, setting coprocessor register number 5 converts the integer stored in a general purpose register into a floating point value in FPA1. The complete set of short (32-bit) floating point operations on FPA0 is illustrated below using R3 as a source operand; the same operations are available on FPA1.
        COSET   R3, FPA0          ; 2    FPA0 = R3
        COSET   R3, FPINT+FPA0    ; 4    FPA0 = (float) R3
        COSET   R3, FPSQRT+FPA0   ; 6    FPA0 = sqrt( R3 )
        COSET   R3, FPADD+FPA0    ; 8    FPA0 = FPA0 + R3
        COSET   R3, FPSUB+FPA0    ; 10   FPA0 = FPA0 - R3
        COSET   R3, FPMUL+FPA0    ; 12   FPA0 = FPA0 * R3
        COSET   R3, FPDIV+FPA0    ; 14   FPA0 = FPA0 / R3
Floating point operations do not directly set the condition codes. Instead, when the coprocessor get instruction COGET is used to get the contents of a floating point accumulator, it sets the N and Z condition codes to show whether the floating point value is negative or zero. In addition, the C condition code is used to report floating point values that are infinite or not numeric. So, the following code gets the contents of FPA0 into R3 and checks that it is valid:
        COGET   R3, FPA0          ; R3 = FPA0
        BCS     INVALID           ; -- go deal with invalid result
Exercises
a) Give appropriate defines for the symbols FPA0, FPA1, FPSQRT, FPADD, FPSUB, FPMUL and FPDIV that are used as operands on the COSET instruction.
b) Given 32-bit floating point values x in R4 and y in R5, give Hawk code to enable the coprocessor, compute sqrt(x² + y²), put the result in R3 and then disable the coprocessor.
To fully specify our floating-point coprocessor, we must define not only the operations but also the data formats it uses. The Hawk, like most modern computers, uses the floating point format defined by the Institute of Electrical and Electronics Engineers (IEEE). This follows a general outline very similar to most floating point formats developed since the early 1960s, but it has some eccentric features.
Binary floating point numbers are closely related to decimal numbers expressed in scientific notation. Consider the number 6.02 × 10²³, Avogadro's number. This number is composed of a mantissa, 6.02, and an exponent, 23. The number base of the mantissa, 10, is also the base to which the exponent is applied, and the mantissa in scientific notation is always normalized, as shown below:
      60221419.9  × 10¹⁶   not normalized
      60221.4199  × 10¹⁹   not normalized
      60.2214199  × 10²²   not normalized
      6.02214199  × 10²³   normalized
      0.602214199 × 10²⁴   not normalized
Of these, only 6.02... × 10²³ is considered to be properly in scientific notation. The rule is that the mantissa must always be a decimal number from 1.000 to 9.999... The only exception is zero. When a number has a mantissa that does not satisfy this rule, we normalize it by moving the point and adjusting the exponent until it satisfies this rule.
The IEEE standard includes both 32 and 64-bit floating point numbers. Here,
we will ignore the latter and focus on the 32-bit
format. As in scientific notation, binary floating point numbers have
exponent and mantissa fields, but they are in binary, so the mantissa
is in base two and the exponent is a power of two.
 31  30 . . . . . . 23  22 . . . . . . . . . . . . . . . . . . . . 00
| S |    exponent     |                 mantissa                     |
In the IEEE floating point formats, like most others,
the most significant bit holds the sign
of the mantissa, with zero meaning positive. The mantissa is stored in signed
magnitude form. The
magnitude of the mantissa of a 32-bit floating-point number is given to 24 bits
of precision, while the exponent is stored in the 8 remaining
bits. Notice that this adds up to 33 bits of sign, exponent and mantissa,
evidence of some exceptional trickery.
IEEE double-precision numbers differ from the above in that each number is
64 bits. This allows 11 bits for the exponent instead
of 8 bits, and 53 bits for the mantissa, including one extra bit
obtained by the same kind of trickery.
The IEEE format gets an extra bit for the mantissa as a consequence
of the mantissa normalization rule it uses. The mantissa
in an IEEE format number is a binary fixed point number
with one place to the left of the point. The
normalization rule is similar to that used for scientific notation, so
the smallest normalized mantissa value is 1.0, while the largest normalized
value is 1.1111...₂, a value just less than 2.0.
This means that all normalized mantissas have
a most significant bit that is one. In general, if a bit
is always the same, there is no point in storing it since we can take
the constant value for granted. We call such a bit that does not need to be
stored a hidden bit. Consider the IEEE floating point value
represented as 12345678₁₆:
      12345678₁₆  =  0 00100100 01101000101011001111000
                     S exponent        mantissa

      sign      =  0, so the value is positive
      exponent  =  00100100₂ = 36, so the unbiased exponent is 36 − 127 = −91
      mantissa  =  1.01101000101011001111000₂, including the hidden bit

      value     =  1.0110100...₂ × 2⁻⁹¹  ≈  5.69 × 10⁻²⁸
The IEEE format supports non-normalized mantissas only for the smallest possible exponent. Zero is represented by the smallest possible exponent with a non-normalized mantissa of zero. Conveniently, this is represented as 00000000₁₆.
The Biased Exponent

Most floating point formats designed since the 1960s have used a biased representation for the exponent. In the case of the IEEE format, the bias used is an odd one, 127, and two exponent values have been singled out for special treatment. Exponents in the range from 00000001₂ to 11111110₂ are interpreted as biased values, so 00000001₂ means −126 and 11111110₂ means +127. The exponent represented as 00000000₂ is special. It also means −126, but it implies that the hidden bit is zero, which is to say, the mantissa is not normalized. The exponent 11111111₂ is reserved for values that the IEEE calls NaNs, where NaN stands for not a number. The value of the mantissa tells what kind of NaN. The hardware uses a mantissa of zero to mean infinity; software may use nonzero values for other meanings.
Because of the odd bias of 127 for exponents, an exponent of
one is represented as 10000000₂, zero is
01111111₂, and negative one is
01111110₂. There is a competing but equivalent
explanation of the IEEE format that presents the bias as 128 and places
the point in the mantissa one place farther to the right.
The different presentations of the IEEE system make no difference in the
number representations, but they can be confusing when comparing
different textbook descriptions of the system.
The following table shows a sampling of IEEE floating-point numbers,
given in binary, along with their interpretations:

      S exponent  mantissa
      0 11111111 11111111111111111111111   NaN (not a number)
      0 11111111 00000000000000000000000   infinity
      0 10000000 00000000000000000000000   2.0
      0 01111111 10000000000000000000000   1.5
      0 01111111 00000000000000000000000   1.0
      0 01111110 00000000000000000000000   0.5
      0 00000001 00000000000000000000000   2⁻¹²⁶
      0 00000000 10000000000000000000000   2⁻¹²⁷
      0 00000000 00000000000000000000001   2⁻¹⁴⁹
      0 00000000 00000000000000000000000   0.0
In the above table, note that 2⁻¹²⁶ is the smallest positive normalized value, with a mantissa that is all zero except the hidden bit. Everything smaller is non-normalized, with a zero exponent implying a hidden bit of zero.
Software Floating Point

Some computers do not have floating point hardware. Floating point arithmetic on those machines is done by software. Writing such software forces us to examine floating point arithmetic algorithms, and it gives an extended implementation example. Here, we ignore compatibility with the IEEE floating point format until the end, so we can focus on clear code with simple data representations.
The interface specification for a class should list all of the methods applicable to objects of that class, and for each method, it should specify the parameters, constraints on those parameters, and the nature of the result. The class implementation adds the details of object representation and the specific algorithms for each method. It is good practice to write the interface specification so it serves as a user manual for the class.

For our floating-point class, we will define add, subtract, multiply and divide, plus tools for conversion to and from integers. Of course, other operations can be added. Since users of this class need not know how numbers are represented, we pass object handles (pointers) as parameters. Thus, users never need to know how many registers a number might occupy. Finally, the interface specification for a class should indicate how to allocate storage for variables in the class; the only thing a user of an object needs to know is the size of the object. Our interface specification will assume that each floating point number is stored in two words of memory, enough for an exponent and a mantissa of one word each, although the user need not know how the words are used.
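In C terms, such an interface might look like the following sketch; every name in it is our own invention, standing in for the assembly-language original in float.h:

    #include <stdint.h>

    /* sketch of the interface; the representation is hidden from users */
    typedef struct myfloat myfloat;

    void fp_add(myfloat *a, const myfloat *b);   /* *a = *a + *b */
    void fp_sub(myfloat *a, const myfloat *b);   /* *a = *a - *b */
    void fp_mul(myfloat *a, const myfloat *b);   /* *a = *a * *b */
    void fp_div(myfloat *a, const myfloat *b);   /* *a = *a / *b */
    void fp_float(myfloat *a, int32_t i);        /* *a = (myfloat) i */
    int32_t fp_int(const myfloat *a);            /* truncate *a to an integer */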
A floating point representation

The simplest floating point representation from a software perspective is a record containing two fields of one word each, the exponent and the mantissa. Of course, we must specify which word is which and how each is represented. Since our computer uses two's complement arithmetic, it makes sense to use that for both the exponent and the mantissa. We can represent zero using a mantissa of zero; in this case, the exponent does not matter, but by convention, we will set it to the smallest (most negative) value. Note that this format has no special cases for non-normalized numbers or non-numeric values.
The more difficult question is, where is the point in the
mantissa? We could put it anywhere, but
there are two good choices: The mantissa could be an integer,
or the point could be just right of the
sign bit. We will do the latter, with the mantissa normalized
so the bit just right of the point is always one. Note that this format
has no hidden bit.
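As a concrete sketch, the representation just described might be declared in C as follows, completing the opaque myfloat type from the interface sketch above; this layout is an illustration, not the package's actual spelling:

    #include <stdint.h>

    /* sketch: the two-word representation, one word per field */
    struct myfloat {
        int32_t exponent;   /* two's complement exponent */
        int32_t mantissa;   /* two's complement, point just right of the sign bit */
    };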
Normalizing a floating point number

Many operations on floating point numbers produce unnormalized results. If we do not normalize them before performing the next operation, we risk loss of precision. This is why classical scientific notation requires all values to be normalized after each operation. To normalize a number, we must first check for zero, since zero cannot be normalized. Our decision to use two's complement for the mantissa requires a special case for negative numbers; this is why many hardware floating-point systems use signed magnitude for their mantissas. The normalize routine is not part of the public interface to our floating point package; rather, it is a private component, used as the final step of most floating point operations. Therefore, we can write it with the assumption that operands are passed in registers instead of using object handles. We will use R3 and R4 to hold the exponent and mantissa of the number to be normalized, on both entrance and exit.
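The routine itself is in Hawk assembly, with the operand in R3 and R4 as just described; the following C sketch stands in for it here, using the myfloat record from above. The name fp_normalize is an assumption, and two's complement shift behavior is taken for granted, as on the Hawk:

    #include <stdint.h>

    /* sketch: normalize in place; zero gets the most negative exponent */
    static void fp_normalize(myfloat *f) {
        int32_t m = f->mantissa;
        if (m == 0) {                    /* zero cannot be normalized */
            f->exponent = INT32_MIN;
            return;
        }
        if (m < 0)
            m = ~m;   /* one's complement: same bit 30 test as for positives */
        while ((m & 0x40000000) == 0) {  /* shift left until bit 30 is one */
            m <<= 1;
            f->mantissa <<= 1;
            f->exponent -= 1;
        }
    }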
The assembly version uses the BITTST macro to test bit 30 of the mantissa. This macro moves the indicated bit to one of the condition codes with a shift instruction that depends on the bit being tested. Because BITTST sets different condition codes depending on the bit tested, it is paired with two macros, BBS and BBR, that expand to the appropriate conditional branches to check whether the bit was set or reset. In C, C++ or Java, bits are usually tested by anding with a constant that has just that bit set. To normalize negative numbers, we convert to one's complement form; in this form the test is the same as for positive numbers, since bit 30 is one exactly when the number is normalized. It would have been just as easy to negate the number, normalize it as a positive value, and then negate it again.
Integer to Floating Conversion

Conversion from integer to floating is easy: we simply set the exponent field to 31 and then normalize. This is because our mantissa representation can be thought of as an integer in units of 2⁻³¹. The code below is optimized: it makes use of the known register use of NORMALIZE and it saves the return address in a register.
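In C terms, under the same assumptions as the earlier sketches, the conversion amounts to this; the register-level optimization just mentioned has no C counterpart:

    /* sketch: the integer is a mantissa in units of 2^-31, so set the
       exponent to 31 and normalize */
    void fp_float(myfloat *f, int32_t i) {
        f->exponent = 31;
        f->mantissa = i;
        fp_normalize(f);
    }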
Floating to Integer Conversion

Conversion of floating-point numbers to integer is a bit more complex, but only because we have no pre-written denormalize routine to set the exponent field to 31; we must write this ourselves. Where the normalize routine shifted the mantissa left and decremented the exponent, floating to integer conversion shifts the mantissa right and increments the exponent until the exponent reaches 31. This leaves open the question of what happens if the initial value of the exponent was greater than 31. In that case, the integer part of the number is too large to represent in 32 bits, and we should raise an exception or, lacking a model of how to write exception handlers, set the overflow condition code. Here, this is left as an exercise for the reader.
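A C sketch of this denormalize loop, again with assumed names; the overflow case just described is left unhandled, as in the text:

    /* sketch: denormalize toward exponent 31, truncating low bits */
    int32_t fp_int(const myfloat *f) {
        int32_t e = f->exponent;
        int32_t m = f->mantissa;
        while (e < 31 && m != 0 && m != -1) {  /* 0 and -1 are shift fixed points */
            m >>= 1;                           /* arithmetic shift truncates */
            e += 1;
        }
        return m;                              /* now in units of 2^0 */
    }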
Floating Point Addition

We can now explore the implementation of more interesting operations. These follow naturally from the standard rules for working with numbers in scientific notation. Consider the problem of adding 9.92 × 10³ to 9.25 × 10¹. We begin by denormalizing the numbers so that they have the same exponents; this allows us to add the mantissas, after which we renormalize the result and round it to the appropriate number of places:
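Worked through with the numbers above:

      9.92 × 10³  +  9.25 × 10¹
    = 9.92 × 10³  +  0.0925 × 10³      denormalize the smaller operand
    = 10.0125 × 10³                    add the mantissas
    = 1.00125 × 10⁴                    renormalize
    ≈ 1.00 × 10⁴                       round to the operands' precision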
The final rounding step is one many students forget, particularly in the age of pocket calculators. In scientific notation, we have the convention that the number of digits given indicates the precision of the measurements from which a number was taken. As a result, if two numbers are given in scientific notation and then added or subtracted, the result should not be expressed to greater precision than the least precise operand. When throwing away the less significant digits of the result, we always round to minimize the information loss and avoid introducing systematic errors.

An important question is: which operand do we denormalize before adding? The answer is that we never want to lose the most significant digits of the sum, so we always increase the smaller of the exponents while shifting the corresponding mantissa right. To do this, we need to sort the arguments into a standard order before proceeding. There are many ways to exchange the contents of two registers. We will use the most straightforward approach, setting the value in one register aside, moving the other register, and then moving the first value into its final place. This takes three move instructions and a spare register. There are ways to do this without an extra register; the most famous and cryptic of these uses the exclusive or operator: a=a⊕b; b=a⊕b; a=a⊕b.

Also note that when we add two numbers, there will sometimes be a carry into the sign bit, which is to say, an overflow. On pencil and paper, we stop propagating the carry before it changes the sign and instead wedge in an extra bit and note the need to normalize the result. In software, we must undo the damage done by the overflow and then normalize. The following floating point add subroutine solves these problems:
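The subroutine itself is in Hawk assembly; the following C sketch captures the same logic using the myfloat record from above. The name fp_add, the convention that the result replaces the first operand, and the reliance on two's complement wraparound and arithmetic shifts are all assumptions of the sketch:

    #include <stdint.h>

    /* sketch: *a = *a + *b, normalized on exit */
    void fp_add(myfloat *a, const myfloat *b) {
        int32_t ea = a->exponent, ma = a->mantissa;
        int32_t eb = b->exponent, mb = b->mantissa;
        int32_t t, sum;

        if (ea < eb) {            /* exchange so that ea is the larger exponent */
            t = ea; ea = eb; eb = t;
            t = ma; ma = mb; mb = t;
        }
        while (eb < ea) {         /* denormalize the smaller operand */
            mb >>= 1;
            eb += 1;
        }
        sum = (int32_t)((uint32_t)ma + (uint32_t)mb); /* may overflow into the sign */
        if (((ma < 0) == (mb < 0)) && ((sum < 0) != (ma < 0))) {
            /* overflow: shift right, bump the exponent, then repair the sign */
            sum = (int32_t)(((uint32_t)(sum >> 1)) ^ 0x80000000u);
            ea += 1;
        }
        a->exponent = ea;
        a->mantissa = sum;
        fp_normalize(a);
    }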
Most of this code follows simply from the logic of adding that we demonstrated with scientific notation, but some points in the assembly version are worthy of note. It saves R1 in its activation record to allow the call to NORMALIZE, and it saves R8 so that it can be used as a temporary during the exchange operations. It does not adjust the stack pointer before and after the call to NORMALIZE because that routine does not use its activation record. Finally, there is the issue of dealing with overflow during addition. After addition, when the sign bit is wrong, it still holds the correct most significant bit of the magnitude, as if there were an invisible sign bit to its left. Therefore, after a signed right shift to make space for the new sign bit (incrementing the exponent to compensate), we can complement the sign by adding one to it. There are many ways to do this; the approach used here, based on the ADJUST instruction, has the advantage that it uses no extra registers.
Floating Point Multiplication

Starting with a working integer multiply routine, floating point multiplication is simpler than floating point addition. This simplicity is apparent in the algorithm for multiplying in scientific notation: add the exponents, multiply the mantissas, and normalize the result, as illustrated below:
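For example, with numbers of our own choosing:

      (2.5 × 10³) × (4.0 × 10⁵)
    = (2.5 × 4.0) × 10³⁺⁵        multiply the mantissas, add the exponents
    = 10.0 × 10⁸                 not normalized
    = 1.0  × 10⁹                 normalized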
Unlike addition, we need not denormalize anything before the operation. The one new issue we face is the matter of precision. Multiplying two 32-bit mantissas gives a 64-bit result. We will assume a signed multiply routine that delivers this result, with the following calling sequence:
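The original calling sequence is register based; for the C sketches in this chapter, a widening multiply stands in for that routine, and the name multiplyl is our own:

    #include <stdint.h>

    /* hypothetical stand-in: the full 64-bit signed product of two mantissas */
    int64_t multiplyl(int32_t a, int32_t b) {
        return (int64_t)a * (int64_t)b;
    }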
If the multiplier and multiplicand have 31 places after the point in each, then the 64-bit product has 62 places after the point. Therefore, to normalize the result, we will always shift it one place. If the multiplier and multiplicand are normalized to have minimum absolute values of 0.5, the product will have a minimum absolute value of 0.25. Normalizing such a small product will require an additional shift, but never more than one. We must use 64-bit shifts for these normalize steps in order to avoid loss of precision, so we cannot use the normalize code we used with addition, subtraction and conversion from binary to floating point. As a result, most of the following is concerned with normalization.
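Here is a C sketch of the multiply routine built on that assumption; fp_mul is an assumed name, and, like the code discussed next, this version is oversimplified:

    #include <stdint.h>

    /* sketch: *a = *a * *b, using the 64-bit product */
    void fp_mul(myfloat *a, const myfloat *b) {
        int64_t p = multiplyl(a->mantissa, b->mantissa); /* 62 places after the point */
        int32_t e = a->exponent + b->exponent;           /* add the exponents */
        uint32_t hi;

        p <<= 1;                            /* always shift once: 62 -> 63 places */
        hi = (uint32_t)((uint64_t)p >> 32);
        if ((((hi >> 31) ^ (hi >> 30)) & 1) == 0) { /* bits 31, 30 agree: not normalized */
            p <<= 1;                        /* never more than one extra shift */
            e -= 1;
        }
        a->mantissa = (int32_t)((uint64_t)p >> 32);
        a->exponent = e;
    }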
This code is oversimplified: What if the product is zero? In this case, we should set the exponent to the most negative value. Also, this code does not detect overflow or underflow in the exponent field.
Other Operations

Multiply and divide are not enough. Our commitment to strong abstraction discourages users from examining the representations of floating point numbers. Floating point hardware designers do not face this constraint; they advertise the exact number format and encourage users to take advantage of it. Since we hide the format, we must provide tools for operations that would otherwise be trivial, such as testing the sign of a number or testing for zero. We also need to support import and export of floating point numbers, since users need to convert to and from textual and IEEE standard formats. We convert from our format to IEEE format by picking off the sign, aligning the point in the mantissa, and then dealing with the exponent. Our 32-bit exponent has an extraordinary range, so overflow and underflow are problems. The following somewhat oversimplified C code does this:
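Here is a sketch along those lines; fp_to_ieee and its conventions are assumptions, and, as discussed below, it truncates rather than rounds:

    #include <stdint.h>

    /* sketch: export a normalized myfloat as IEEE single format bits */
    uint32_t fp_to_ieee(const myfloat *f) {
        int32_t e = f->exponent;
        int32_t m = f->mantissa;
        uint32_t sign = 0;

        if (m == 0) return 0;                /* zero is all zero bits */
        if (m < 0) {                         /* pick off the sign */
            sign = 0x80000000u;
            m = -m;                          /* (a mantissa of exactly -1.0 is ignored) */
        }
        if (e > 128) {                       /* overflow: return infinity */
            return sign | (255u << 23);
        } else if (e >= -125) {              /* normalized range, bias of 126 */
            uint32_t frac = ((uint32_t)m << 2) >> 9; /* 23 bits below the hidden bit */
            return sign | ((uint32_t)(e + 126) << 23) | frac;
        } else if (e >= -148) {              /* denormalized range */
            return sign | ((uint32_t)m >> (-118 - e));
        } else {                             /* underflow to zero */
            return sign;
        }
    }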
The bias of the IEEE format is 127, yet in the above code, we used a bias of 126. This is because the code is doing two things with one operation. First, it is decrementing the original exponent by one to account for the fact that our numbers were normalized in the range 0.5 to 1.0, while IEEE numbers are normalized in the range 1.0 to 2.0. Second, it is adding 127, the actual bias. The range check on the exponent in the above code uses the bounds –148, –125 and 128 for denormalized and normalized numbers instead of the bounds –149, –126 and 127 specified for the IEEE format. Again, this difference is because of the change in normalization rule between the IEEE model and our word-based model. The above code is not perfect. It truncates, discarding the insignificant bits instead of rounding to the nearest IEEE floating point value. Rounding is preferred because it halves the maximum error in conversion.
This code makes extensive use of shifting to clear fields within the number. Thus, instead of writing n&0xFFFFFF00, we write (n>>8)<<8. This trick is useful on many machines where loading a large constant is significantly slower than a shift instruction. By doing this, we avoid both loading a long constant into a register and using an extra register to hold it. On the Hawk, this optimization is effective only for shift counts of 16 or less; shifting by a greater amount takes two consecutive shift instructions, in which case it may be faster to load a long constant using LIL followed by an AND instruction. The assembly version uses another trick to set the implicit one bit: instead of or-ing a long constant with the mantissa, it uses a subtract instruction to set the carry bit, and then it uses an ADJUST instruction to or the carry bit into the most significant bit.
Conversion to Decimal

A well designed floating point package will include a complete set of tools for conversion to and from decimal textual representations, but our purpose here is to use the conversion problem to illustrate the use of our floating point package. Therefore, we will write our conversion code as user-level code, making no use of any details of the floating point abstraction that are not described in the header file for the package.

First, consider the problem of printing a floating point number using only the operations we have defined, ignoring the complexity of assembly language and focusing on the algorithm. We can begin by taking the non-rounded integer part of the number and printing that, followed by a point. The question is, how do we continue from there, printing the digits after the point? To print the fractional part of a number, we take the integer part of the number and subtract it from the number, leaving just the fraction; this is equivalent to taking the number mod one. Multiplying the fractional part by ten brings one decimal digit of the fraction above the point. This can be repeated to extract each successive digit. This is not particularly efficient, but it works. Here is C code for this:
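A sketch of that C code, using C's native float type; print_float and its digit-count parameter are inventions for the sketch, and negative numbers are ignored:

    #include <stdio.h>

    /* sketch: print a non-negative value with digits places after the point */
    void print_float(float num, int digits) {
        int i = (int)num;            /* the non-rounded integer part */
        printf("%d.", i);            /* print it, followed by the point */
        num = num - (float)i;        /* the fraction: num mod one */
        while (digits > 0) {
            num = num * 10.0f;       /* one decimal digit moves above the point */
            i = (int)num;
            putchar('0' + i);
            num = num - (float)i;    /* keep just the fraction */
            digits = digits - 1;
        }
    }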
Before writing assembly code, we will rewrite this so the code passes handles to floating point numbers, not the numbers themselves, and so it calls the right subroutines to do arithmetic.
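Recast that way, the code might look like this sketch; every name is an assumption about the package interface, following the earlier sketches (fp_sub comes from the interface sketch):

    #include <stdio.h>

    /* sketch: the same algorithm using handles and package calls only;
       assumes *num is non-negative and may be clobbered */
    void print_float(myfloat *num, int digits) {
        myfloat ten, digit;
        int i;

        fp_float(&ten, 10);          /* ten = (float) 10; no float constants here */
        i = fp_int(num);
        printf("%d.", i);            /* the integer part, then the point */
        fp_float(&digit, i);
        fp_sub(num, &digit);         /* num = num - (int) num */
        while (digits > 0) {
            fp_mul(num, &ten);       /* one digit moves above the point */
            i = fp_int(num);
            putchar('0' + i);
            fp_float(&digit, i);
            fp_sub(num, &digit);     /* keep just the fraction */
            digits = digits - 1;
        }
    }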
The above code shows some of the problems we forced on ourselves by insisting on having no knowledge of the floating-point representation when writing our print routine. Where a C or Java programmer would write 10.0, relying on the compiler to create the floating point representation and put it in memory, we have been forced to use the integer constant 10 and then call the integer to floating conversion routine. This is a common result of strict object oriented encapsulation, although we could have the interface definition export a compile-time or assembly-time macro to process constants into their internal representations.

If we deny ourselves knowledge of the size of the representation of floating point numbers, we cannot allocate space in our activation records for local floating point variables the way we have up to this point. We solve this by having the interface definition in float.h export the size of floating point numbers as the constant FLOATSIZE. We can adopt the general convention of defining a symbol for the size of each object, record or structure.
Before this point, we manually added the sizes of the fields in the
activation record when we built our activation records, but we can use
the assembler to do this clerical work.
To do this, we begin by setting the
activation record size to zero, and then we increment it by the
object size as we define each field.
The need to save registers 8 to 10 was discovered only after writing the code on the next page. This code could have been written using fewer registers and more local variables in the activation record, but it is more efficient to use registers, and our standard calling conventions require that if we use registers 8 and up, we must save and then restore whatever was in them. Of course, if any of the subroutines called by our code use registers 8 to 10, those routines must save and restore that data for us.
The code on the next page contains one significant optimization.
With all of the subroutine calls, we could have incremented and decremented
the stack pointer many times. Instead, we increment it just once at the
start of the print routine and decrement it just once at the end; in between,
we always subtract ARSIZE from every displacement into the activation
record in order to correct for this.