Lecture 34, Knitting Revisited

Part of the notes for 22C:196:002 (CS:4908:0002)
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

A Tiny Example

Consider this code fragment:

    widget: type ...
    widp: type @widget
    x: var widget
    y: var widp

This is in an anonymous block. One attribute of each block is a list of bindings between identifiers and their attributes.

Recall that the lexical analyzer classifies lexemes, and that lexemes of class identifier have an attribute that is a string handle. How string handles work is not an issue here. The string pool gives out string handles in exchange for the actual text of the string passed to it by the lexical analyzer, and later, if you give the string pool the handle and a file name, it can emit the string into that file.

Outside the string pool, string handles are an abstract data type. Inside the string pool, string handles may be interpreted as indices into an array of strings, or as pointers to the first characters of strings, or some such thing. All we care about is that string handles uniquely identify strings and may be compared at very low cost. For our purposes, since we are outside the lexical analyzer and string pool, bindings can relate string handles to their attributes.
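As a concrete sketch of this idea, a block's bindings might be a linked list of (string handle, attributes) pairs. The names here (string_handle, binding, lookup) are illustrative, not the actual compiler's code; the only property we rely on from the text is that handles compare cheaply.

```c
#include <stddef.h>

typedef int string_handle;      /* opaque outside the string pool */

struct binding {
    string_handle name;   /* handle for the identifier's text */
    void *attr;           /* whatever attributes the identifier denotes */
    struct binding *next; /* next binding in this block's list */
};

/* string handles compare at very low cost: an integer comparison */
struct binding *lookup(struct binding *list, string_handle h) {
    for (; list != NULL; list = list->next)
        if (list->name == h)
            return list;
    return NULL;
}
```

Because the handle is the key, lookup never touches the string text itself.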

Given the above caveat, a straight parse-tree approach to representing the list of bindings in the above block might look like this as a concrete data structure:

In this data structure, the type of widp is given by a type object, and this type object has a kind field set to pointer and a type field (the type of object referred to by the pointer) pointing to a second type object. This type object, in turn, says the type is defined by the type identifier widget.

Similarly, the type of the variable x is a new type object. This type object, in turn, says the type is defined by the type identifier widget. The variable binding also declares that x is stored at offset 8 from the base address of whatever block instance is currently being addressed.
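The un-knitted structure described above can be sketched in C; the enum values and field names are hypothetical. For widp, a pointer type object's type field refers to a second type object that merely records the identifier widget:

```c
#include <stdlib.h>

typedef int string_handle;

enum type_kind { TYPE_IDENT, TYPE_POINTER };

struct type_object {
    enum type_kind kind;
    string_handle name;       /* meaningful when kind == TYPE_IDENT   */
    struct type_object *type; /* meaningful when kind == TYPE_POINTER */
};

/* widp: type @widget -- build a pointer type object referring to a
   second type object that just names the identifier widget */
struct type_object *pointer_to_ident(string_handle widget) {
    struct type_object *ident = malloc(sizeof *ident);
    ident->kind = TYPE_IDENT;
    ident->name = widget;
    ident->type = NULL;

    struct type_object *ptr = malloc(sizeof *ptr);
    ptr->kind = TYPE_POINTER;
    ptr->name = 0;
    ptr->type = ident;
    return ptr;
}
```

Note that x's declaration would build yet another kind-identifier object naming widget, distinct from the one widp's pointer refers to; that duplication is exactly what knitting removes.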

Knitting

"Knitting the parse tree," as applied to this tree, could produce something like the following:

Knitting does not need to be a separate operation. For example, when the parser encounters a type identifier while parsing a type definition, instead of generating a new type object of kind identifier as the return value of the parse_type() routine, it can return the type object to which the identifier is bound. The result is that all variables of the same type are bound to the same type object.
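A minimal sketch of knitting at parse time, with illustrative names: when a type identifier is parsed, look it up in the current environment and return the bound type object itself, so every declaration naming that type shares one object.

```c
#include <stddef.h>

typedef int string_handle;

struct type_object { int kind; };

struct binding {
    string_handle name;
    struct type_object *type;  /* the type object this name denotes */
    struct binding *next;
};

static struct binding *lookup(struct binding *env, string_handle h) {
    for (; env != NULL; env = env->next)
        if (env->name == h)
            return env;
    return NULL;
}

/* Knitting: instead of building a new kind-identifier object,
   return the type object already bound to the identifier. */
struct type_object *parse_type_identifier(struct binding *env,
                                          string_handle name) {
    struct binding *b = lookup(env, name);
    return b ? b->type : NULL;  /* NULL signals an undefined identifier */
}
```

Two calls with the same identifier return pointer-identical type objects, which is what makes later type comparisons cheap.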

There is a potential downside to knitting the parse tree. Information about the source code is lost that might be of value in error messages. An error message that might have mentioned the type name must now be content to mention attributes of the type. It is possible to retain the symbolic information by a simple expedient: Instead of replacing the symbolic name with the type denoted by that name, we can supplement the symbolic name by the type it denotes.
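One way to realize this supplementing, sketched here with hypothetical names: a kind-named type object carries both the identifier's string handle (for error messages) and the type object it denotes (for type checking, which skips past the name wrappers).

```c
#include <stddef.h>

typedef int string_handle;

enum type_kind { TYPE_NAMED, TYPE_RANGE };

struct type_object {
    enum type_kind kind;
    string_handle name;          /* valid when kind == TYPE_NAMED */
    struct type_object *denotes; /* the type the name stands for  */
};

/* For type checking, skip past any name wrappers; for an error
   message, the wrapper's name handle is still available. */
struct type_object *underlying(struct type_object *t) {
    while (t != NULL && t->kind == TYPE_NAMED)
        t = t->denotes;
    return t;
}
```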

Storage Allocation

A key attribute of each type is the size of an object of that type. Looking at the types defined in the Falcon standard prologue, the sizes might be the following:

type     size (bytes)
boolean  1
char     1
int8     1
uint8    1
int16    2
uint16   2
int32    4
uint32   4

Of course, most architectures impose a performance penalty on non-aligned memory operations, so by default, storage for blocks (activation records, records, objects) should be aligned. The Pascal programming language even allows type declarations to be prefixed with the keyword packed; if not prefixed, Pascal compilers would align all the fields of a record or array, while if packed, the compiler would generate more compact but slower data structures.
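Alignment amounts to rounding a displacement up to the next multiple of the field's alignment before allocating it. A sketch, assuming alignments are powers of two:

```c
/* round disp up to the next multiple of align; align must be a
   power of two, so the bit mask ~(align - 1) clears the low bits */
unsigned align_up(unsigned disp, unsigned align) {
    return (disp + align - 1) & ~(align - 1);
}
```

For example, a uint32 field following a char at displacement 5 would be placed at displacement 8, wasting three padding bytes; a packed layout would place it at 5 and pay the access penalty instead.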

During the declaration of any block, there are two jobs that must be done: First, the displacement of every variable in the block must be computed relative to the base of the block, and second, the size of the block, as a whole, must be computed, for use when allocating instances of the block (records, objects, stack frames). This computation is straightforward.

Initially, the size of a block (with no fields declared) is determined by the default attributes of every block. Optimization could eliminate some of these attributes, but in the absence of such optimization, every Falcon block has an up-link used for up-level addressing.

As each variable in a block is declared, the displacement of that variable within the block is the size of the block, prior to adding that variable to the block, and the size of the block is then incremented by the size of the variable -- that is, the size associated with the type of the variable. This is the information needed by the code generator to generate references to the variable.
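The computation described above fits in a few lines. This sketch ignores alignment and assumes a 4-byte up-link as the block's only default attribute (an assumption; the exact default size depends on the compiler):

```c
struct block_desc {
    unsigned size;  /* bytes allocated so far in this block */
};

/* Declare one variable: its displacement is the block's current
   size, and the block then grows by the size of the variable's type. */
unsigned declare_var(struct block_desc *blk, unsigned type_size) {
    unsigned disp = blk->size;
    blk->size += type_size;
    return disp;
}
```

Starting from a size of 4 (the up-link), declaring an int32 yields displacement 4, and a following char yields displacement 8, leaving the block 9 bytes in all.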

A Complication

On the ARM and many other computers, the stack grows down, and in the C code we have examined for the ARM, the frame pointer points to the highest address in the frame. This means that displacements in stack frames are negative. In contrast, the heap manager, when asked to allocate an object of size x, returns a pointer to the first byte of that object, and displacements from that pointer to fields of the object are positive. This is disturbing, but it is not a new problem.

The PDP-11 memory management unit had to deal with variable-sized segments. Each segment could be a program segment, a static data segment, or a stack segment. Because stacks on the PDP-11 grew down, an attempt to resize a stack segment had to keep the top of the segment at a fixed address while growing the segment downward. In contrast, resizing static data segments had to hold the bottom of the segment at a fixed address while growing the segment upward. To solve this, the PDP-11 MMU included a bit in each segment descriptor indicating whether the segment was to grow up or down.

A compiler for the ARM can deal with this by including in the definition of each block a bit indicating which way that block grows. Blocks that make up the bodies of subroutines (procedures or functions) can be marked as growing down, while records can be marked as growing up. This adds complexity (and opportunities for error) but does not pose insurmountable problems.
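With such a growth-direction bit, the declaration logic from the previous section splits into two cases; the structure and names here are illustrative. Downward-growing blocks accumulate negative displacements from the frame pointer, while records keep positive displacements from their base:

```c
#include <stdbool.h>

struct block_desc {
    int size;         /* bytes allocated so far */
    bool grows_down;  /* true for activation records on a downward stack */
};

int declare_field(struct block_desc *blk, int type_size) {
    if (blk->grows_down) {
        blk->size += type_size;
        return -blk->size;     /* below the frame pointer */
    } else {
        int disp = blk->size;  /* above the record's base */
        blk->size += type_size;
        return disp;
    }
}
```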

Another option is to change the way the frame pointer register is used. In our examination of the cc compiler output, we saw the frame pointer being initialized to point to the topmost address of the activation record, being a simple copy of the stack pointer at the time the function is entered. We are free to do otherwise because each procedure or function saves its caller's frame pointer on entry, sets its own during its own execution, and then restores the caller's frame pointer. It is never necessary to actually use the frame pointer of the caller, and a subroutine can rest assured that functions it uses will never use its frame pointer. Therefore, alternative entry and return sequences for subroutines can set the frame pointer to the lowest address in the activation record in order to allow all displacements to be positive.

Built-in Types

Consider the built-in types int32 and uint32. What data structure should the compiler produce for these? And for that matter, when the expression evaluator encounters the lexeme 15, what is its type? The answers to these questions are all interrelated.

The standard prologue defines int32 and uint32 as range types:

    int32: type -16#80000000 .. 16#7FFFFFFF;
    uint32: type 0 .. 16#FFFFFFFF;

This implies that they are both subrange types. The attributes of a subrange type are its upper bound, its lower bound, and its base type. So, the pre-definition of these types might occur with code sort of like this:

    int32 = new range_type;
    int32->upper = 0x7FFFFFFF;
    int32->lower = -0x80000000;
    int32->base = generic_integer;

The above code will only work if the compiler has a larger range of integers than 32 bits. In general, to compile for an N-bit machine, you either have to have an N+1 bit word size, or you need to make special cases of all predefined types that require N bits. Unfortunately, the latter is the course of least resistance in our environment.
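The problem is easy to see in C, where a compiler targeting a 32-bit machine can hold the bounds in a 64-bit type. The struct and variable names below are illustrative; the point is that uint32's upper bound, 16#FFFFFFFF, does not fit in a signed 32-bit field:

```c
#include <stdint.h>

/* bounds held in an (at least) N+1-bit type for an N-bit target */
struct range_type {
    int64_t lower, upper;
};

/* the standard prologue's predefinitions, as 64-bit constants */
struct range_type falcon_int32  = { -0x80000000LL, 0x7FFFFFFFLL };
struct range_type falcon_uint32 = { 0, 0xFFFFFFFFLL };
```

A compiler whose own integers were only 32 bits wide would overflow representing falcon_uint32's upper bound, which is why those types become special cases.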

And, of course, the generic_integer type -- the type of integer constants when the lexical analyzer encounters them -- must itself be a special case.