As a simple example, consider a function to add up the elements of a list.
(defun f (x) (let ((s 0)) (dolist (y x s) (setf s (+ s y)))))

Applying the interpreted function to a list of 1000 numbers produces
> (def x (iseq 1 1000))
X
> (time (dotimes (i 10) (f x)))
The evaluation took 2.10 seconds

With compilation we get
> (compile 'f)
F
> (time (dotimes (i 10) (f x)))
The evaluation took 0.20 seconds

This is an improvement of a factor of 10 in speed. Optimized native C code using a native C array of integers can be another 100 times faster, so there is room for further improvement, but the byte code compilation has made a substantial difference. There are examples where byte code compilation improves performance even more, but typically improvements are a bit more modest, on the order of a factor of two to five.
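For a sense of what the native C comparison involves, a minimal sketch of the same computation over a plain C array of integers is given below; the function and variable names are illustrative assumptions, not code taken from the Lisp-Stat sources. A C compiler can keep the accumulator and loop index in registers and needs no run-time type dispatch or list traversal, which is where most of the additional factor of 100 comes from.

#include <stdio.h>

/* Sum the elements of a native C integer array. */
long sum_ints(const int *x, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

int main(void)
{
    int x[1000];
    for (int i = 0; i < 1000; i++)
        x[i] = i + 1;                     /* the analogue of (iseq 1 1000) */
    printf("%ld\n", sum_ints(x, 1000));   /* prints 500500 */
    return 0;
}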
In any high-level language for scientific and statistical computing there will always be a need to implement some code in a lower-level language like C to obtain acceptable performance. But developing code in the higher-level language itself is usually much faster, and the resulting code is typically easier to understand and maintain. It is therefore advantageous to push the point where performance demands the use of a low-level language as far down as possible. Byte code compilation helps significantly in this respect.
It is possible to translate the byte codes produced by the compiler into C code, which can then be compiled with a native C compiler and linked into the system dynamically or statically. The naive approach does indeed produce an additional speedup of about a factor of two by eliminating the byte code interpretation overhead, but there is a significant cost: the resulting object code is huge. As a result this option has not been pursued. However, a more sophisticated approach based on a higher-level intermediate representation appears promising and will be considered as part of the development of a new virtual machine discussed in Section 4.1.
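To illustrate why the naive translation buys only about a factor of two, the toy sketch below contrasts a byte code interpreter loop with a direct translation of the same instruction sequence into C. The opcodes and helper functions are hypothetical stand-ins, not the actual Lisp-Stat byte codes or runtime: the translation removes the fetch-and-dispatch loop, but every instruction still expands into the same runtime calls, which is also why the translated object code for a whole system grows so large.

#include <stdio.h>

/* A toy stack machine; opcodes and helpers are illustrative only. */
enum { OP_PUSH, OP_ADD, OP_RET };

static double stack[64];
static int sp;

static void push(double v) { stack[sp++] = v; }
static double pop(void)    { return stack[--sp]; }

/* Byte code interpretation: a fetch-and-dispatch loop around runtime helpers. */
static double interpret(const int *code, const double *consts)
{
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH: push(consts[code[pc++]]); break;
        case OP_ADD:  { double b = pop(), a = pop(); push(a + b); } break;
        case OP_RET:  return pop();
        }
    }
}

/* Naive translation of the same byte code sequence into C: the dispatch
   loop is gone, but each instruction still becomes the same runtime calls. */
static double translated(const double *consts)
{
    push(consts[0]);
    push(consts[1]);
    { double b = pop(), a = pop(); push(a + b); }
    return pop();
}

int main(void)
{
    int code[] = { OP_PUSH, 0, OP_PUSH, 1, OP_ADD, OP_RET };
    double consts[] = { 2.0, 3.0 };
    printf("%g %g\n", interpret(code, consts), translated(consts));  /* 5 5 */
    return 0;
}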