11. Floating Point
Part of
22C:60, Computer Organization Notes
The Hawk architecture includes two instructions reserved for communication with coprocessors. A coprocessor is a special purpose processor that operates in conjunction with the central processor. Coprocessors may be physically separate from the central processor, on a separate chip, or they may be integrated on the same chip with it. It is the logical separation that is essential. Coprocessors are commonly used for floating point, but they have also been used for graphics, cryptography, and more.
The Hawk coprocessor instructions, COSET and COGET, transfer data between the general purpose registers and specialized registers inside one or more coprocessors. Here, we will only discuss coprocessor number one, the floating point coprocessor.
In these instructions, dst always refers to a CPU register, while x refers to a register in the active coprocessor, as selected by the value in coprocessor register zero, the coprocessor status register, COSTAT. See the Hawk manual for details of the fields in COSTAT. Assuming that there is a USE directive for the float.h file, so that the right definitions are available, the following code enables the floating point coprocessor for short (32-bit) floating point operands:
        LIL     R1, FPENAB + FPSEL
        COSET   R1, COSTAT
Once the floating point coprocessor is enabled, addressing coprocessor registers 1 to 15 refers specifically to registers inside the floating point coprocessor. When operating in short format, there are only two useful registers in the floating-point coprocessor, floating-point accumulators zero and one, FPA0 and FPA1, which correspond to coprocessor registers 2 and 3. Coprocessor register 1 will be ignored, for now. It is used for access to the least significant halfword of long (64-bit) floating point operands.
Floating point coprocessor registers 4 to 15 are not really registers. Rather, the operations that set these pseudo-registers are used to operate on the floating point accumulators. The available operations are floating point add, subtract, multiply and divide, as well as square root and integer to floating conversion. Setting even floating point registers 4 to 14 causes operations on FPA0 and setting odd registers 5 to 15 operates on FPA1. For example, setting coprocessor register number 5 converts the integer stored in a general purpose register into a floating point value in FPA1. The complete set of short (32-bit) floating point operations on FPA0 is illustrated below; the same operations are available on FPA1.
        COSET   R1, FPA0          ;  2  FPA0 = R1
        COSET   R1, FPINT+FPA0    ;  4  FPA0 = (float) R1
        COSET   R1, FPSQRT+FPA0   ;  6  FPA0 = sqrt( R1 )
        COSET   R1, FPADD+FPA0    ;  8  FPA0 = FPA0 + R1
        COSET   R1, FPSUB+FPA0    ; 10  FPA0 = FPA0 - R1
        COSET   R1, FPMUL+FPA0    ; 12  FPA0 = FPA0 * R1
        COSET   R1, FPDIV+FPA0    ; 14  FPA0 = FPA0 / R1
Floating point operations do not directly set the condition codes. Instead, when the coprocessor get instruction COGET is used to get the contents of a floating point accumulator, it sets the N and Z condition codes to show whether the floating point value is negative or zero. In addition, the C condition code is used to report floating point values that are infinite or non-numeric.
Exercises
a) Give appropriate defines for the symbols FPA0, FPA1, FPSQRT, FPADD, FPSUB, FPMUL and FPDIV that are used as operands on the COSET instruction.
b) Given 32-bit floating point values x in R4 and y in R5, give Hawk code to enable the coprocessor, compute sqrt(x² + y²), put the result in R3 and then disable the coprocessor.
To fully specify our floating-point coprocessor, we must define not only the operations but also the data formats it uses. The Hawk, like most modern computers, uses the floating point format defined by the Institute of Electrical and Electronics Engineers (IEEE). This follows a general outline very similar to most floating point formats developed since the early 1960's, but it has some eccentric features.
Binary floating point numbers are closely related to decimal numbers expressed in scientific notation. Consider Avogadro's number, 6.02 × 10²³. This number is composed of a mantissa, 6.02, and an exponent, 23. The number base of the mantissa, 10, is also the base to which the exponent is applied, and the mantissa in scientific notation is always normalized, as shown below:
        60221419.9  × 10¹⁶    not normalized
        60221.4199  × 10¹⁹    not normalized
        60.2214199  × 10²²    not normalized
        6.02214199  × 10²³    normalized
        0.602214199 × 10²⁴    not normalized
Of these, only 6.02... × 10²³ is considered to be properly in scientific notation. The rule is that the mantissa must always be a decimal number from 1.000 to 9.999... The only exception is zero. When a number has a mantissa that does not satisfy this rule, we normalize it by moving the point and adjusting the exponent until it satisfies this rule.
The IEEE standard includes both 32 and 64-bit floating point numbers. Here,
we will ignore the latter and focus on the 32-bit
format. As in scientific notation, binary floating point numbers have
exponent and mantissa fields, but they are in binary, so the mantissa
is in base two and the exponent is a power of two.
        31   30 . . . . . . 23   22 . . . . . . . . . . . . . . . . . . . . . 00
         S        exponent                         mantissa
In the IEEE floating point formats, like most others,
the most significant bit holds the sign
of the mantissa, with zero meaning positive. The mantissa is stored in signed
magnitude form. The
magnitude of the mantissa of a 32-bit floating-point number is given to 24 bits
of precision, while the exponent is stored in the 8 remaining
bits. Notice that this adds up to 33 bits of sign, exponent and mantissa,
evidence of some exceptional trickery in this floating
point representation.
IEEE double-precision numbers differ from the above in that each number is
64 bits. This allows 11 bits for the exponent instead
of 8 bits, and 53 bits for the mantissa, including one extra bit
obtained by the same kind of trickery.
The IEEE format gets an extra bit for the mantissa as a consequence
of the mantissa normalization rule it uses. The mantissa
in an IEEE format number is a binary fixed point number
with one place to the left of the point. The
normalization rule is similar to that used for scientific notation, so
the smallest normalized mantissa value is 1.0, while the largest normalized
value is 1.1111...₂. This means that all normalized mantissas have
a most significant bit that is one. In general, if a bit
is always the same, there is no point in storing it since we can take
the constant value for granted. We call such a bit that does not need to be
stored a hidden bit. Consider the IEEE floating point value
represented as 12345678₁₆:
        0001 0010 0011 0100 0101 0110 0111 1000   = 12345678₁₆

        sign     = 0, so the number is positive
        exponent = 00100100₂ = 36, so the unbiased exponent is 36 − 127 = −91
        mantissa = 1.01101000101011001111000₂, with the hidden bit restored

so the value is approximately 1.41 × 2⁻⁹¹.
The IEEE format supports non-normalized mantissas only for the smallest possible exponent. Zero is represented by the smallest possible exponent and a non-normalized mantissa of zero.
The second odd feature of the IEEE format is that the exponent is given as a biased signed integer with the eccentric bias of 127. The normal range of exponents runs from 00000001₂, meaning -126, to 11111110₂, meaning +127. The exponent represented as 00000000₂ also means -126, and is used for unnormalized mantissas. In this case, the hidden bit is zero. The exponent 11111111₂ is reserved for values that the IEEE calls NaNs, where NaN stands for not a number. The value of the mantissa tells what kind of NaN. The hardware uses a mantissa of zero for infinity. Software may set nonzero values.
Because of the odd bias of 127 for exponents, an exponent of
one is represented as 10000000₂, zero is
01111111₂, and negative one is
01111110₂. There is a competing but equivalent
explanation of the IEEE format that presents the bias as 128 and places
the point in the mantissa one place farther to the right.
The different presentations of the IEEE system make no difference in the
number representations, but they can be confusing when comparing
different textbook descriptions of the system.
The following table shows IEEE floating-point numbers,
given in binary, along with their interpretations.
Infinity and NaN
        0 11111111 00000000000000000000000  =  Infinity
        1 11111111 00000000000000000000000  = -Infinity
        0 11111111 00000000000010000000000  =  NaN
        1 11111111 00000010001111101000110  =  NaN

Normalized numbers                             Sign  Exp.    Mant.
        0 10000000 00000000000000000000000  =  +1 × 2¹    × 1.00₂ =  2
        0 01111111 00000000000000000000000  =  +1 × 2⁰    × 1.00₂ =  1.0
        0 01111111 10000000000000000000000  =  +1 × 2⁰    × 1.10₂ =  1.5
        0 01111111 11000000000000000000000  =  +1 × 2⁰    × 1.11₂ =  1.75
        1 01111111 11000000000000000000000  =  −1 × 2⁰    × 1.11₂ = −1.75
        0 00000001 00000000000000000000000  =  +1 × 2⁻¹²⁶ × 1.00₂ =  2⁻¹²⁶

Unnormalized numbers
        0 00000000 10000000000000000000000  =  +1 × 2⁻¹²⁶ × 0.10₂ =  2⁻¹²⁷
        0 00000000 01000000000000000000000  =  +1 × 2⁻¹²⁶ × 0.01₂ =  2⁻¹²⁸
        0 00000000 00000000000000000000000  =  +1 × 2⁻¹²⁶ × 0.00₂ =  0
        1 00000000 00000000000000000000000  =  −1 × 2⁻¹²⁶ × 0.00₂ =  0
Exercises
c) Give the 32-bit IEEE representation for 100₁₀.
d) Give the 32-bit IEEE representation for 1000₁₀.
e) Give the 32-bit IEEE representation for 0.1₁₀.
f) Give the 32-bit IEEE representation for 0.01₁₀.
g) Consider the following infinite loop on the Hawk:
        LIL     R3, FPENAB + FPSEL
        COSET   R3, COSTAT
        LIS     R3, 1
L:
        COSET   R3, FPINT+FPA0
        COGET   R3, FPA0
        BR      L

Describe the sequence of values in R3, with an emphasis on the value after many iterations. An approximate upper bound on the value of R3 is a strong answer to this question.
Some computers do not have a floating point coprocessor. Floating point arithmetic on those machines must be done by software. This lets us examine the algorithms used for floating point arithmetic, and it gives an extended example of a complicated class implementation. Initially, we will ignore compatibility with the IEEE floating point format so we can focus on clear code with simple data representations. At the end, we will discuss conversion to and from IEEE format.
The interface specification for a class should list all of the methods applicable to objects of that class, and for each method, it should specify the parameters, constraints on those parameters, and the nature of the result. The implementation of the class adds the details of object representation and the specific algorithms for each method. It is good practice to write the interface specification so it serves as a manual for users of the class as well as a formal interface.
For our floating-point class, the set of operations is fairly obvious: At minimum, add, subtract, multiply and divide, plus tools for conversion to and from integers. We will ignore other operations for now.
In most object-oriented programming languages, a strong effort is made to avoid copying objects from place to place. Instead, objects sit in memory and object handles are used to refer to them. The handle for an object is a pointer to the object, that is, a word holding the address of the object. We will use this approach, so parameters to our floating point operators will be the addresses of their operands.
Finally, the interface specification for a class should indicate how to allocate storage for variables in the class. The only thing a user of an object needs to know is the size of the object. The following interface specification assumes that each floating point number is stored in two words of memory, enough for an exponent and a mantissa of one word each, although the user need not know how the words are used.
        TITLE   float.h, interface specification for float.a

FLOATSIZE = 8   ; size of a floating point number, in bytes.
                ; the user gets no other information about the
                ; number representation

; for all calling sequences here:
;   R1 = return address
;   R2 = pointer to activation record
;   R3-7 = parameters and temporaries
;   R8-15 = guaranteed to be saved and restored
; functions that return floating values use:
;   R3 = pointer to place to put return value
;   the caller must pass this pointer!

        EXT     FLOAT   ; convert integer to floating
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = integer to convert

        EXT     FLTINT  ; convert floating to integer
                        ; on entry, R3 = pointer to floating value
                        ; on exit,  R3 = integer return value

        EXT     FLTCPY  ; copy a floating point number
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to floating operand

        EXT     FLTTST  ; test sign and zeroness of floating number
                        ; on entry, R3 = pointer to floating value
                        ; on exit,  R3 = integer -1, 0 or 1

        EXT     FLTADD  ; add floating-point numbers
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to addend
                        ;           R5 = pointer to augend

        EXT     FLTSUB  ; subtract floating-point numbers
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to subtrahend
                        ;           R5 = pointer to minuend

        EXT     FLTNEG  ; negate a floating-point number
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to operand

        EXT     FLTMUL  ; multiply floating-point numbers
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to multiplicand
                        ;           R5 = pointer to multiplier

        EXT     FLTDIV  ; divide floating-point numbers
                        ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to dividend
                        ;           R5 = pointer to divisor
Exercises
h) Write a main program that uses a separate common block of size FLOATSIZE to hold each floating point variable used in the computation of the floating point representation of 0.1, computed by converting the integer constants 1 and 10 to floating point and then dividing one by the other. This should call FLOAT twice and then call FLTDIV.
i) Write a separately compilable subroutine called SQUARE that takes two pointers to floating point numbers as parameters and returns the square of the second number in the first. Also write the interface specification and comment everything appropriately.
The simplest floating point representation from a software perspective is a record containing two fields of one word each, the exponent and the mantissa. This is not enough detail. Which word is which? How is each represented? What is the range of exponent values? How do we represent the sign of the exponent? How is the mantissa normalized? How do we represent non-normalized values such as zero?
Since our computer uses two's complement arithmetic, it makes sense to use that for the exponent and mantissa. We can represent zero using a mantissa of zero; technically, when the mantissa is zero, the exponent does not matter, but for zero, the exponent will always be the smallest (most negative) value.
The more difficult question is, where is the point in the
mantissa? We could put it anywhere, but
there are two good choices: The mantissa could be an integer,
or the point could be just right of the
sign bit. We will do the latter, with the mantissa normalized
so the bit just right of the point is always one.
        exponent  00000000000000000000000000000000
        mantissa  01000000000000000000000000000000    +0.5 × 2⁰ = 0.5

        exponent  00000000000000000000000000000001
        mantissa  01000000000000000000000000000000    +0.5 × 2¹ = 1.0

        exponent  00000000000000000000000000000001
        mantissa  01100000000000000000000000000000    +0.75 × 2¹ = 1.5

        exponent  00000000000000000000000000000001
        mantissa  10100000000000000000000000000000    −0.75 × 2¹ = −1.5

        exponent  11111111111111111111111111111111
        mantissa  01000000000000000000000000000000    +0.5 × 2⁻¹ = 0.25

        exponent  11111111111111111111111111111101
        mantissa  01000000000000000000000000000000    +0.5 × 2⁻³ = 0.0625

        exponent  11111111111111111111111111111101
        mantissa  01100110011001100110011001100110    ≈ 8/10 × 2⁻³ ≈ 0.1
Exercises
j) In this number system, what is the largest possible positive value (in binary)?
k) In this number system, what is the smallest possible positive nonzero normalized value?
l) In this number system, how is 10.0₁₀ represented?
m) In this number system, how is 100.0₁₀ represented?
Many operations on floating point numbers produce results that are unnormalized, and these must be normalized before performing additional operations on them. If this is not done, there will be a loss of precision in the results. Classical scientific notation is always presented in normalized form for the same reason. To normalize a floating point number, we must distinguish some special cases: First, is the number zero? Zero cannot be normalized! Second, is the number negative? Because we have opted to represent our mantissa in two's complement form, negative numbers are slightly more difficult to normalize; this is why many hardware floating-point systems use signed magnitude for their floating point numbers.
The normalize subroutine is not part of the public interface to our floating point package, but rather, it is a private component, used as the final step of just about every floating point operation. Therefore, we can write it with the assumption that operands are passed in registers instead of using pointers to memory locations. We will code this here using registers 3 and 4 to hold the exponent and mantissa of the number to be normalized, and we will use this convention both on entrance and exit.
        SUBTITLE normalize

NORMALIZE:      ; normalize floating point number
                ; link through R1
                ; R3 = exponent on entry and exit
                ; R4 = mantissa on entry and exit
                ; no other registers used
        TESTR   R4
        BZR     NRMNZ           ; if (mantissa == 0) {
        LIL     R3,#800000
        SL      R3,8            ;   exponent = 0x80000000;
        JUMPS   R1              ;   return;
NRMNZ:
        BNS     NRMNEG          ; } /* else */ if (mantissa > 0) {
NRMPLP:                         ;   while
        BITTST  R4,30
        BBS     NRMPRT          ;   ((mantissa & 0x40000000) == 0) {
        SL      R4,1            ;     mantissa = mantissa << 1;
        ADDSI   R3,-1           ;     exponent = exponent - 1;
        BR      NRMPLP          ;   }
NRMPRT:
        JUMPS   R1              ;   return;
NRMNEG:                         ; } /* else if */ { /* mantissa < 0 */
        ADDSI   R4,-1           ;   mantissa = mantissa - 1;
                                ;   /* mantissa now in one's complement form */
NRMNLP:                         ;   while
        BITTST  R4,30
        BBR     NRMNRT          ;   ((mantissa & 0x40000000) != 0) {
        SL      R4,1            ;     mantissa = mantissa << 1;
        ADDSI   R3,-1           ;     exponent = exponent - 1;
        BR      NRMNLP          ;   }
NRMNRT:
        ADDSI   R4,1            ;   mantissa = mantissa + 1;
                                ;   /* mantissa now in two's complement form */
        JUMPS   R1              ;   return;
                                ; }
We used BITTST to test bit 30 of the mantissa above. This macro moves the indicated bit to one of the condition codes with a shift instruction that depends on the bit being tested. BITTST sets different condition codes depending on the bit being tested, so it defines two macros, BBS and BBR, as the appropriate conditional branches to check if the bit was set or reset. In C, C++ or Java, bits are usually tested by anding with a constant with just that bit set. Note also that we converted to one's complement form to normalize negative numbers. In this form, bit 30 is always zero when the number is normalized.
Exercises
n) The above code does not detect underflow. Decrementing the exponent below the smallest legal value gives the highest legal value. Fix this error.
Conversion from integer to floating is easy: we simply set the exponent field to 31 and then normalize. This works because our mantissa representation can be thought of as an integer in units of 2⁻³¹.
; format of a floating point number stored in memory
EXPONENT  = 0
MANTISSA  = 4
FLOATSIZE = 8

        SUBTITLE integer to floating conversion

FLOAT:          ; on entry, R3 = pointer to floating result
                ;           R4 = integer to convert
        MOVE    R5,R1           ; R5 = return address
        MOVE    R6,R3           ; R6 = pointer to floating result
        LIS     R3,31           ; exponent = 31; /* R3-4 is now floating */
        JSR     R1,NORMALIZE    ; normalize( R3-4 );
        STORES  R3,R6           ; result->exponent = exponent;
        STORE   R4,R6,MANTISSA  ; result->mantissa = mantissa;
        JSRS    R5              ; return; /* uses saved return address! */
Conversion of floating-point numbers to integer is a bit more complex, but only because we have no pre-written denormalize routine that will set the exponent field to 31. Instead, we need to write this ourselves. Where the normalize routine shifted the mantissa left and decremented the exponent until the number was normalized, the floating to integer conversion routine will have to shift the mantissa right and increment the exponent until the exponent has the value 31.
This leaves open the question of what happens if the initial value of the exponent was greater than 31. The answer is, in that case, the integer part of the number is too large to represent in 32 bits. In this case, we should raise an exception, or, lacking a model of how to write exception handlers, we could set the overflow condition code. Here, this is left as an exercise for the reader.
        SUBTITLE floating to integer conversion

FLTINT:         ; on entry, R3 = pointer to floating value
                ; on exit,  R3 = integer result
        LOADS   R4,R3           ; R4 = argument->exponent
        LOAD    R3,R3,MANTISSA  ; R3 = argument->mantissa
FINTLP:                         ; while
        CMPI    R4,31
        BGE     FINTLX          ; (exponent < 31) {
        SR      R3,1            ;   mantissa = mantissa >> 1;
        ADDSI   R4,1            ;   exponent = exponent + 1;
        BR      FINTLP          ; }
FINTLX:
        ; unchecked error condition: exponent > 31 implies overflow
        JUMPS   R1              ; return denormalized mantissa
Exercises
o) The above code for floating to integer conversion truncates the result in an odd way for negative numbers. If the floating point input value is -1.5, what integer does it return? Why?
p) The above code for floating to integer conversion truncates the result in an odd way for negative numbers. Fix the code so that it truncates the way a naive programmer would expect.
q) The above code for floating to integer conversion truncates, but sometimes, it is desirable to have a version that rounds a number to the nearest integer. Binary numbers can be rounded by adding one in the most significant digit that will be discarded, that is, in the 0.5's place. Write code for FLTROUND that does this.
r) The above code for floating to integer conversion could do thousands of right shifts for numbers with very negative exponents! This is an utter waste. Modify the code so that it automatically recognizes these extreme cases and returns a value of zero whenever more than 32 shifts would be required.
We are now ready to explore the implementation of some of the floating point
operations. These follow quite naturally from the standard rules for working
with numbers in scientific notation. Consider the problem of adding
9.92×10³ to 9.25×10¹.
We begin by denormalizing the numbers so that they have the same exponents;
this allows us to add the mantissas, after which we renormalize the result
and round it to the appropriate number of decimal places:
        given           9.92 × 10³  +  9.25 × 10¹
        denormalized    9.92 × 10³  +  0.0925 × 10³
        rearranged      (9.92 + 0.0925) × 10³
        added           10.0125 × 10³
        normalized      1.00125 × 10⁴
        rounded         1.00 × 10⁴
The final rounding step is one many students forget, particularly in this
era of scientific calculators. For numbers given in scientific notation,
we have the convention that the number of digits given is an indication of the
precision of the measurements from which the numbers were taken. As a result,
if two numbers are given in scientific notation and then added or
subtracted, the result should not be expressed to greater precision than the
least precise of the operands! When throwing away the less significant digits
of the result, we always round in order to minimize the loss of information
and introduction of systematic error that would result from truncation.
An important question arises here: Which number do we denormalize prior to adding? The answer is, we never want to lose the most significant digits of the sum, so we always increase the smaller of the two exponents while shifting the corresponding mantissa to the right. To do this, we will exchange the arguments into a standard order before proceeding.
There are many ways to exchange the contents of two registers. We will use the most straightforward approach, setting the value in one register aside, moving the other register, and then moving the set-aside value into its final resting place. This takes three move instructions and a spare register. There are ways to do this without an extra register. The most famous and cryptic of these uses the exclusive or operator: a=a⊕b; b=a⊕b; a=a⊕b.
Also note that when we add two normalized numbers, there will sometimes be a carry into the sign bit, which is to say, an overflow. With pencil and paper, we stop propagating the carry before it changes the sign and instead, wedge in an extra bit and note the need to normalize the result. In software, we must undo the damage done by the overflow and then fix the normalization of the result. The following floating point add subroutine solves these problems:
        SUBTITLE floating add

; activation record format
RETAD  = 0      ; return address
R8SAVE = 4      ; place to save R8

FLTADD:         ; on entry, R3 = pointer to floating sum
                ;           R4 = pointer to addend
                ;           R5 = pointer to augend
        STORES  R1,R2           ; save return address
        STORE   R8,R2,R8SAVE    ; save R8
        MOVE    R7,R3           ; R7 = saved pointer to sum
        LOADS   R3,R4           ; R3 = addend.exponent
        LOAD    R4,R4,MANTISSA  ; R4 = addend.mantissa
        LOAD    R6,R5,MANTISSA  ; R6 = augend.mantissa
        LOADS   R5,R5           ; R5 = augend.exponent
        CMP     R3,R5
        BLE     FADDEI          ; if (addend.exponent > augend.exponent) {
        MOVE    R8,R3
        MOVE    R3,R5
        MOVE    R5,R8           ;   exchange exponents
        MOVE    R8,R4
        MOVE    R4,R6
        MOVE    R6,R8           ;   exchange mantissas
FADDEI:                         ; }
        ; assert (addend.exponent <= augend.exponent)
FADDDL:                         ; while
        CMP     R3,R5
        BGE     FADDDX          ; (addend.exponent < augend.exponent) {
        ADDSI   R3,1            ;   increment addend.exponent
        SR      R4,1            ;   shift addend.mantissa
        BR      FADDDL
FADDDX:                         ; }
        ; assert (addend.exponent = augend.exponent)
        ADD     R4,R6           ; add mantissas
        BVR     FADDNO          ; if (overflow) { /* we need one more bit */
        ADDSI   R3,1            ;   increment result.exponent
        SR      R4,1            ;   shift result.mantissa
        SUB     R0,R0,R0        ;   set carry bit in order to ...
        ADJUST  R4,CMSB         ;   flip sign bit of result (overflow repaired!)
FADDNO:                         ; }
        JSR     R1,NORMALIZE    ; normalize( result )
        STORES  R3,R7           ; save result.exponent
        STORE   R4,R7,MANTISSA  ; save result.mantissa
        LOAD    R8,R2,R8SAVE    ; restore R8
        LOADS   R1,R2           ; restore return address
        JUMPS   R1              ; return!
Most of this code follows simply from the logic of adding that we demonstrated with the addition of two numbers using scientific notation. There are some points, however, that are worthy of note.
The above code uses its activation record to save R1 allowing the call to NORMALIZE, and to save R8, freeing it for use as a temporary during exchange operations. NORMALIZE does not use an activation record, so this code has been optimized by eliminating stack pointer adjustment before and after the call.
Finally, there is the issue of dealing with overflow during addition. After addition, when the sign is wrong, interpreted as a sign bit, it does have the correct value as the most significant bit of the magnitude, as if there were an invisible sign bit to the left of it. Therefore, after a signed right shift to make space for the new sign bit (incrementing the exponent to compensate for this) we can complement the sign by adding one to it, for example, using the ADJUST instruction.
Exercises
s) The floating point add code given here is rather stupid about shifting. It should never right-shift the lesser of the two addends more than 32 times! Fix this!
t) Fix this code so that the denormalize step rounds the lesser of the two addends by adding one to the least significant bit just prior to the final right shift operation.
Starting with a working integer multiply routine, floating point multiplication is simpler than floating point addition. This simplicity is apparent in the algorithm for multiplying in scientific notation: Add the exponents, multiply the mantissas and normalize the result, as illustrated below:
        given           1.02 × 10³  ×  9.85 × 10¹
        rearranged      (1.02 × 9.85) × 10⁽³ ⁺ ¹⁾
        multiplied      10.047 × 10⁴
        normalized      1.0047 × 10⁵
        rounded         1.00 × 10⁵
Unlike addition, we need not denormalize anything before the operation. The one new issue we face is the matter of precision. Multiplying two 32-bit mantissas gives a 64-bit result. We will assume a signed multiply routine that delivers this result, with the following calling sequence:
MULTIPLYS:      ; link through R1
                ; on entry, R3 = multiplier
                ;           R4 = multiplicand
                ; on exit,  R3 = product, low bits
                ;           R4 = product, high bits
                ; destroys R5, R6
                ; uses no other registers
If the multiplier and multiplicand have 31 places after the point in each, then the 64-bit product has 62 places after the point. Therefore, to normalize the result, we will always shift it one place. If the multiplier and multiplicand are normalized to have minimum absolute values of 0.5, the product will have a minimum absolute value of 0.25. Normalizing such a small product will require an additional shift, but never more than one. We must use 64-bit shifts for these normalize steps in order to avoid loss of precision, so we cannot use the normalize code we used with addition, subtraction and conversion from binary to floating point.
        SUBTITLE floating multiply

; activation record format
RETAD   = 0     ; return address
PRODUCT = 4     ; pointer to floating product

FLTMUL:         ; on entry, R3 = pointer to floating product
                ;           R4 = pointer to multiplier
                ;           R5 = pointer to multiplicand
        STORES  R1,R2           ; save return address
        STORE   R3,R2,PRODUCT   ; save pointer to product
        LOADS   R6,R4           ; R6 = multiplier.exponent
        LOADS   R7,R5           ; R7 = multiplicand.exponent
        ADD     R7,R6,R7        ; R7 = product.exponent
        LOAD    R3,R4,MANTISSA  ; R3 = multiplier.mantissa
        LOAD    R4,R5,MANTISSA  ; R4 = multiplicand.mantissa
        LIL     R1,MULTIPLYS
        JSRS    R1,R1           ; R3-4 = product.mantissa
        ; assert (R3-4 has 2 bits left of the point)
        SL      R3,1
        ADDC    R4,R4           ; shift product.mantissa 1 place
        ; assert (R3-4 has 1 bit left of the point)
        BNS     FMULN           ; if (product.mantissa > 0) {
        BITTST  R4,30
        BBS     FMULOK          ;   if (product.mantissa not normalized) {
        SL      R3,1
        ADDC    R4,R4           ;     shift product.mantissa 1 place
        ADDSI   R7,-1           ;     decrement product.exponent
        BR      FMULOK          ;   }
FMULN:                          ; } else { negative mantissa
        ADDSI   R3,-1
        BCS     FMULNC
        ADDSI   R4,-1           ;   decrement product.mantissa
FMULNC:                         ;   mantissa is now in one's complement form
        BITTST  R4,30
        BBR     FMULNOK         ;   if (product.mantissa not normalized) {
        SL      R3,1
        ADDC    R4,R4           ;     shift product.mantissa 1 place
        ADDSI   R7,-1           ;     decrement product.exponent
FMULNOK:                        ;   }
        ADDSI   R3,1
        ADDC    R4,R0           ;   increment product.mantissa
FMULOK:                         ; } mantissa now normalized
        LOAD    R5,R2,PRODUCT
        STORES  R7,R5           ; store product.exponent
        STORE   R4,R5,MANTISSA  ; store product.mantissa
        LOADS   R1,R2           ; restore return address
        JUMPS   R1              ; return
Most of the above code is involved with normalizing the result, and even so, it is oversimplified! What if the product is zero? Our normalization rule says that a mantissa of zero should be paired with the most negative possible exponent. This code also does not test for overflow or underflow, that is, it makes no test for an exponent out of bounds.
Exercises
u) Fix this code so it returns zero on underflow and so it returns the largest possible value, with the right sign for overflow.
v) Fix this floating point multiply code so it correctly handles the value zero.
w) Write code for a floating point divide routine.
Multiply and divide routines do not finish the story. Our commitment to strong abstraction means that users of our floating point numbers may not examine their representations. The designers of floating point hardware do not face this constraint. They advertise the exact format they use and users are free to use this information. If we do not disclose such detail, we must provide tools for comparing numbers, for testing the sign of numbers, for testing for zero, and other operations that might otherwise be trivial.
Another issue we face is the import and export of floating point numbers. We need tools to convert numbers to and from textual and IEEE standard formats. The routine to convert from our eccentric format to IEEE format begins by dealing with the range of exponent values, since our 32-bit exponent field has a far greater range than the 8-bit IEEE exponent field. Second, it converts the exponent and mantissa to the appropriate forms, and finally, it packs the pieces together. The following somewhat oversimplified code does this:
unsigned int ieeepack( int exponent, int mantissa )
{
    int sign = 0;

    /* first split off the sign */
    if (mantissa < 0) {
        mantissa = -mantissa;
        sign = 0x80000000;
    }

    /* put the mantissa in IEEE normalized form */
    mantissa = mantissa >> 7;

    if (exponent > 128) { /* convert overflow to infinity */
        mantissa = 0;
        exponent = 0x7F800000;
    } else if (exponent < -125) { /* convert underflow to zero */
        mantissa = 0;
        exponent = 0;
    } else { /* conversion is possible */
        mantissa = mantissa & 0x007FFFFF;
        exponent = (exponent + 126) << 23;
    }
    return sign | exponent | mantissa;
}
Note in the above code that the advertised bias of the IEEE format is 127, yet we used a bias of 126! This is because we also subtracted one from the original exponent to account for the fact that our numbers were normalized in the range 0.5 to 1.0, while IEEE numbers are normalized in the range 1.0 to 2.0. This is also why we compared with 128 and -125 instead of 127 and -126 when checking for the maximum and minimum legal exponents in the IEEE format. We have omitted one significant detail in the above! All underflows were simply forced to zero when some of them ought to have resulted in denormalized numbers.
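The bias arithmetic can be checked directly. A value normalized our way is f × 2^e with 0.5 ≤ f < 1.0; the same value in IEEE terms is (2f) × 2^(e-1) with 1.0 ≤ 2f < 2.0, so adding the IEEE bias of 127 to e-1 is the same as adding 126 to e. The sketch below verifies the identity using the standard C library's ldexp; the function name forms_agree is ours, not part of any package.

```c
#include <math.h>

/* Check that f * 2^e (our form, 0.5 <= f < 1.0) equals
   (2*f) * 2^(e-1) (the IEEE form, 1.0 <= 2*f < 2.0).
   Both sides scale a double by a power of two, so no rounding occurs
   and the comparison is exact. */
int forms_agree( double f, int e ) {
    return ldexp(f, e) == ldexp(2.0 * f, e - 1);
}
```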
        SUBTITLE unpack an IEEE-format floating point number

FLTIEEE:        ; on entry, R3 points to the return floating value
                ;           R4 is the number in IEEE format
                ;           R5 is used as a temporary
        MOVE    R5,R4           ; R5 = exponent
        SL      R5,1            ; throw away the bit left of the exponent
        SR      R5,12
        SR      R5,12           ; pull the exponent field all the way right
        ADDI    R5,R5,-126      ; unbias the exponent
        STORES  R5,R3           ; save converted exponent
        MOVE    R5,R4           ; R5 = mantissa
        SL      R5,9            ; push mantissa all the way left
        SR      R5,1            ; and then pull it back for missing one bit
        SUB     R0,R0,R0        ; set carry
        ADJUST  R5,CMSB         ; and use it to put missing one into mantissa
        TESTR   R4
        BNR     FIEEEPOS        ; if (number < 0) {
        NEG     R5,R5           ;   negate mantissa
FIEEEPOS:                       ; }
        STORE   R5,R3,MANTISSA  ; save converted mantissa
        JUMPS   R1              ; return
Conversion from IEEE format to our eccentric software format is fairly easy because our exponent and mantissa fields are larger than those of the single-precision IEEE format. Thus, we can convert with no loss of precision. The code presented above ignores the possibility that the value might be a NaN or infinity.
This code makes extensive use of shifting to clear fields within the number. Thus, instead of writing n&0xFFFFFF00, we write (n>>8)<<8. This trick is useful on many machines where loading a large constant is significantly slower than a shift instruction. By doing this, we avoid both loading a long constant into a register and using an extra register to hold it. We used a related trick to set the implicit one bit, using a subtract instruction to set the carry bit and then adding this bit into the number using an adjust instruction.
A well designed floating point package will include a complete set of tools for conversion to and from decimal textual representations. Our purpose here, however, is to use the conversion problem to illustrate the use of our floating point package, so we will write our conversion code as user-level code, making no use of any details of the floating point abstraction beyond those described in the header file for the package.
First, consider the problem of printing a floating point number using only the operations we have defined, ignoring the complexity of assembly language and focusing on the algorithm. We can begin by taking the integer part of the number and printing that, followed by a point, but the question is, how do we continue from there, printing the digits after the point?
To print the fractional part of a number, we will take the integer part of the number and subtract it from the number, leaving just the fraction; this is equivalent to taking the number mod one. Multiplying the fractional part by ten brings one decimal digit of the fraction above the point. This can be repeated to extract each successive digit. This is not particularly efficient, but it works.
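The digit-extraction idea can be tried out with native doubles before committing to the package's subroutine calls. The function fracdigits below is our own illustrative name; it assumes a non-negative argument and writes the requested number of fraction digits into a caller-supplied buffer.

```c
/* Extract 'places' fraction digits of num (assumed non-negative):
   repeatedly strip the integer part and multiply the remaining
   fraction by ten to bring one decimal digit above the point. */
void fracdigits( double num, int places, char *buf ) {
    int inum = (int)num;            /* integer part, printed separately */
    int i;
    for (i = 0; i < places; i++) {
        num = (num - inum) * 10.0;  /* fraction * 10: next digit emerges */
        inum = (int)num;
        buf[i] = (char)('0' + inum);
    }
    buf[places] = '\0';
}
```

For example, three fraction digits of 3.25 come out as the characters 2, 5, 0.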
void fltprint( float num, int places )
{
    int inum; /* the integer part */

    if (num < 0) { /* make it positive and print the sign */
        num = -num;
        dspch( '-' );
    }

    /* first put out integer part */
    inum = fltint( num );
    dspnum( inum );
    dspch( '.' );

    /* second put out digits of the fractional part */
    for (; places > 0; places--) {
        num = (num - (float)inum) * 10.0;
        inum = fltint( num );
        dspch( inum + '0' );
    }
}
Before writing assembly code, we will rewrite this so the code passes handles to floating point numbers, not the numbers themselves, and so it calls the right subroutines to do arithmetic.
void fltprint( float *pnum, int places )
{
    float num;  /* a copy of the number */
    float tmp;  /* a temporary floating point number */
    float ten;  /* a constant floating value */
    int inum;   /* the integer part */
    int i;      /* loop counter */

    float( &ten, 10 );
    if (flttst( pnum ) < 0) { /* make it positive, print the sign */
        fltneg( &num, pnum );
        dspch( '-' );
    } else {
        fltcpy( &num, pnum );
    }

    /* first put out integer part */
    inum = fltint( &num );
    dspnum( inum );
    dspch( '.' );

    /* second put out digits of the fractional part */
    while (places > 0) {
        float( &tmp, inum );
        fltsub( &num, &num, &tmp );
        fltmul( &num, &num, &ten );
        inum = fltint( &num );
        dspch( inum + '0' );
        places = places - 1;
    }
}
The above code shows some of the problems we forced on ourselves by insisting on having no knowledge of the floating-point representation when writing our print routine. Where a C or Java programmer would write 10.0, relying on the compiler to create the floating point representation and put it in memory, we have been forced to use the integer constant 10 and then call float() to do the conversion. This is a common result of strict object oriented encapsulation, although we could have the interface definition export a compile time or assembly time macro to process constants into their internal representations.
If we deny ourselves knowledge of the size of the representation of floating point numbers, we cannot allocate space in our activation records for local floating point variables. We solve this by having the interface definition in float.h export the size of floating point numbers as the constant FLOATSIZE. We have adopted a general convention here: For each object, record or structure, we always have a symbol defined to hold its size.
Second, we can use the assembler to sum up the sizes of the fields of the activation record instead of adding them by hand, as we have up to this point. To do this, we begin by setting the activation record size to zero, and then we increment it by the object size as we define each field.
        TITLE fltprint.a -- floating print routine
        USE "float.h"
        INT FLTPRINT

        MACRO   LOCAL X, =SIZE
X       =       ARSIZE
ARSIZE  =       ARSIZE + SIZE
        ENDMAC

INTSIZE =       4               ; size of an integer

; activation record format
ARSIZE  =       0               ; initial size of activation record
        LOCAL   RETAD, INTSIZE  ; return address
        LOCAL   NUM, FLOATSIZE  ; copy of the floating point number
        LOCAL   TMP, FLOATSIZE  ; a temporary floating point number
        LOCAL   TEN, FLOATSIZE  ; the constant ten
        LOCAL   R8SAVE, INTSIZE ; save area for register 8
        LOCAL   R9SAVE, INTSIZE ; save area for register 9
Had we allowed ourselves to use knowledge about the size of a floating point number, we could have defined NUM=4, TMP=12 and TEN=20. If we did this, changes in the floating point package might require us to rewrite the code. If we had not defined the macro LOCAL, each local variable declaration would have required two lines of code. For example, the declaration of the local variable NUM would begin with NUM=ARSIZE, and then it would add to the activation record size with ARSIZE=ARSIZE+FLOATSIZE.
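The offsets the LOCAL macro computes can be restated in C for checking. The enum below assumes FLOATSIZE is 8 (one exponent word plus one mantissa word), which is exactly the assumption the text warns us not to build into the assembly code; here it serves only to confirm the hand-computed values NUM=4, TMP=12 and TEN=20.

```c
/* The size-symbol convention in C terms: each field's offset is the
   running total of the sizes declared before it, just as the LOCAL
   macro accumulates ARSIZE. */
#define INTSIZE   4
#define FLOATSIZE 8   /* assumed: exponent word + mantissa word */

enum {
    RETAD  = 0,
    NUM    = RETAD + INTSIZE,    /* 4 */
    TMP    = NUM + FLOATSIZE,    /* 12 */
    TEN    = TMP + FLOATSIZE,    /* 20 */
    R8SAVE = TEN + FLOATSIZE,
    R9SAVE = R8SAVE + INTSIZE,
    ARSIZE = R9SAVE + INTSIZE
};
```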
The local variables for saving registers 8 and 9 were allocated so that the integer variables in our code can use these registers over and over again instead of being loaded and stored in order to survive each call to a routine in the floating point package. Of course, if those routines need registers 8 and 9, they will be saved and restored anyway, but we leave that to them.
The following code contains one significant optimization. With all of the subroutine calls, we could have incremented and decremented the stack pointer many times. Instead, we increment it just once at the start of the print routine and decrement it just once at the end; in between, we always subtract ARSIZE from every displacement into the activation record in order to correct for this.
FLTPRINT:       ; on entry: R3 = pointer to floating point number to print
                ;           R4 = number of places to print after the point
        STORES  R1,R2
        STORE   R8,R2,R8SAVE
        STORE   R9,R2,R9SAVE    ; save return address, R8, R9
        MOVE    R8,R3           ; R8 = pointer to number
        MOVE    R9,R4           ; R9 = places
        ADDI    R2,R2,ARSIZE    ; from here on, R2 points to end of AR

        LEA     R3,R2,TEN-ARSIZE
        LIS     R4,10
        LIL     R1,FLOAT
        JSRS    R1,R1           ; float( &ten, 10 );

        MOVE    R3,R8
        LIL     R1,FLTTST
        JSRS    R1,R1
        TESTR   R3
        BNR     FPRNNEG         ; if (flttst( pnum ) < 0) {
        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R8
        LIL     R1,FLTNEG
        JSRS    R1,R1           ;   fltneg( &num, pnum );
        LIS     R3,'-'
        LIL     R1,DSPCH
        JSRS    R1,R1           ;   dspch( '-' );
        BR      FPRABS
FPRNNEG:                        ; } else {
        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R8
        LIL     R1,FLTCPY
        JSRS    R1,R1           ;   fltcpy( &num, pnum );
FPRABS:                         ; }

        ; /* first put out the integer part */
        LEA     R3,R2,NUM-ARSIZE
        LIL     R1,FLTINT
        JSRS    R1,R1
        MOVE    R8,R3           ; R8 = inum = fltint( &num );
        LIL     R1,DSPNUM
        JSRS    R1,R1           ; dspnum( inum );
        LIS     R3,'.'
        LIL     R1,DSPCH
        JSRS    R1,R1           ; dspch( '.' );
FPRLP:
        TESTR   R9
        BLE     FPRLX           ; while (places > 0) {
        LEA     R3,R2,TMP-ARSIZE
        MOVE    R4,R8
        LIL     R1,FLOAT
        JSRS    R1,R1           ;   float( &tmp, inum );
        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R3
        LEA     R5,R2,TMP-ARSIZE
        LIL     R1,FLTSUB
        JSRS    R1,R1           ;   fltsub( &num, &num, &tmp );
        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R3
        LEA     R5,R2,TEN-ARSIZE
        LIL     R1,FLTMUL
        JSRS    R1,R1           ;   fltmul( &num, &num, &ten );
        LEA     R3,R2,NUM-ARSIZE
        LIL     R1,FLTINT
        JSRS    R1,R1
        MOVE    R8,R3           ;   R8 = inum = fltint( &num );
        ADDI    R3,R3,'0'
        LIL     R1,DSPCH
        JSRS    R1,R1           ;   dspch( inum + '0' );
        ADDSI   R9,-1           ;   places = places - 1;
        BR      FPRLP
FPRLX:                          ; }

        ADDI    R2,R2,-ARSIZE
        LOAD    R8,R2,R8SAVE
        LOAD    R9,R2,R9SAVE
        LOADS   R1,R2           ; restore return address, R8, R9
        JUMPS   R1              ; return
Exercises
x) Write a floating print routine that produces its output in scientific notation, for example, using the format 6.02E23, where the E stands for "times ten to the". To do this, you will first have to do a decimal normalize, counting the number of times you have to multiply or divide by ten in order to bring the mantissa into the range from 1 to just under 10; then you will have to print the mantissa (using the floating print routine we just discussed), and finally print the exponent.