3 -- A brief history of operating systems
22C:112 Notes, Spring 2012
Part of the 22C:112, Operating Systems Notes
In the late 1940's, computers were a new idea. Typical general purpose computer designs in this era were closely modeled on the paper design presented at the Princeton summer school in 1946. That means a single-accumulator machine with 40 bits per word, with two 20-bit instructions packed into each word. Memory addresses were typically 12 bits, allowing addressing of 4K words of 40 bits each. Half-word addressing was supported in only the most minimal way by having two distinct jump instructions, jump to the high halfword and jump to the low halfword.
Subroutine call instructions had not yet been invented. To call a subroutine took, in the typical case, 3 instructions: the caller would load the return address into the accumulator, store it into the address field of the jump instruction at the end of the subroutine (self-modifying code), and only then jump to the subroutine's entry point.
With machines like this, it was hard to imagine any kind of operating system.
The 1950's saw a number of innovations in computer architecture. These included index registers and subroutine call instructions, both crucial to the development of modern ideas of programming. This was also the decade when the first compilers were developed, most notably Fortran.
In the 1950's, magnetic tape drives became commonplace. Typical tape drives for computers stored data as a sequence of records. Each record was typically 80 characters (based on the record format of punched cards) or 120 characters (the number of characters per line on many early line printers), although the hardware did not limit the record format.
Computers of the 1950's rapidly standardized on 6 bits per character, after the IBM 701 computer introduced this standard (the most common word size in this era was 36 bits, but 48-bit words were also used on some computers). The standard magnetic tape format that emerged in this era, also introduced by IBM, stored data in 7 tracks on 1/2 inch wide tape. Seven tracks were used so that the 6 bits of one character could be stored in parallel along with a parity bit.
Data was typically stored at just 100 characters per inch (higher density tapes came out later, first 200 characters per inch and then 400), and tape reels typically held 1200 feet of tape. At 100 characters per inch, a full 1200-foot reel could therefore hold at most about 1.4 million characters (1200 feet times 12 inches times 100 characters), and in practice considerably less, because an inter-record gap separated each pair of records.
The tape drive could not read or write fractional records, and the drive hardware automatically computed (on output) or checked (on input) the parity of each character. The drive also computed or checked the checksum of the entire block, which was stored at the end of the block.
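As a concrete illustration, here is a minimal C sketch of the two checks described above: an even parity bit computed per 6-bit character, and a simple exclusive-or check character over the whole record. The use of exclusive-or as the block check is an assumption for illustration; real drives varied in exactly how the end-of-block check character was formed.

    #include <stdio.h>

    /* Even parity bit for one 6-bit character: 1 when the data bits
       have an odd number of ones, so that data plus parity is even. */
    static unsigned parity_bit(unsigned ch)
    {
        unsigned bit = 0;
        for (int i = 0; i < 6; i++)
            bit ^= (ch >> i) & 1;
        return bit;
    }

    /* A simple longitudinal check character: the exclusive-or of all
       the characters in the record, written after the data. */
    static unsigned block_check(const unsigned char *rec, int len)
    {
        unsigned sum = 0;
        for (int i = 0; i < len; i++)
            sum ^= rec[i] & 077;          /* only 6 bits per character are recorded */
        return sum;
    }

    int main(void)
    {
        unsigned char record[80] = "HELLO WORLD";     /* one 80-character card image */
        printf("parity bit for 'H': %u\n", parity_bit('H' & 077));
        printf("block check character: %02o\n", block_check(record, 80));
        return 0;
    }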
Typical tape drive commands were: rewind, read one record, write one record, skip forward one record, skip backward one record, and write an end-of-file mark.
By mid-decade, computers with whole banks of tape drives became common. In such configurations, one tape drive was frequently designated the system drive, and the tape on that drive contained system programs. Main memories were still small, but it was common for a small loader to be permanently resident in main memory, able to load and run the n'th file from tape drive d (you'd jump to the loader with the parameters n and d loaded in agreed-upon memory locations or registers).
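A loader of this sort is small enough to sketch. The C below is only an illustration of the logic; the tape primitives (tape_rewind, tape_skip_file, tape_read_record), the fixed load address, and the parameter locations loader_n and loader_d are hypothetical names standing in for whatever a particular machine actually provided, and a real resident loader would have been a few dozen machine instructions, not C.

    /* Hypothetical tape primitives, standing in for the machine's I/O instructions. */
    extern void tape_rewind(int drive);
    extern void tape_skip_file(int drive);                /* skip forward to the next file mark */
    extern int  tape_read_record(int drive, char *buf);   /* characters read; 0 at a file mark */

    #define LOAD_ADDR ((char *) 0x0100)   /* made-up fixed address where programs are loaded */

    int loader_n;   /* parameter: which file on the tape (agreed-upon memory location) */
    int loader_d;   /* parameter: which tape drive       (agreed-upon memory location) */

    void loader(void)
    {
        char *mem = LOAD_ADDR;
        int len;

        tape_rewind(loader_d);
        for (int i = 0; i < loader_n; i++)     /* skip over the first n files */
            tape_skip_file(loader_d);

        while ((len = tape_read_record(loader_d, mem)) > 0)
            mem += len;                        /* copy records into memory until the file mark */

        ((void (*)(void)) LOAD_ADDR)();        /* jump to the program just loaded */
    }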
This was a sufficient foundation for something users would begin to call a tape operating system. At the end of execution, typical programs would exit by jumping to the loader, asking it to load and run the command language interpreter from the system tape. The command language interpreter would remember (in just a few words of dedicated memory) the drive number from which it was reading a command file. Commands in the command file would set up parameters to programs and then launch them.
These systems were very fragile. Any program that accidentally damaged the loader would force a complete system restart. Any program that accidentally damaged the memory location used to remember the current input file could lead to wild and unpredictable system actions.
Nonetheless, these systems were flexible enough to support assemblers and compilers, linkers and subroutine libraries. Fortran grew in such an environment, and the developers thinking about the new languages of 1960, Cobol and Algol, had extensive experience with such systems.
Rotating magnetic memory was also important in this era. Not disk drives, but rather, drums. Some low performance computers had drums for main memory. Typically, drum main memory stored data in word parallel form, so a computer with a 40 bit word would have a 40 track drum, or perhaps 44 tracks, so it could include a parity bit and some tracks for addressing (typically, one track for counting words, and a track holding a start mark so it could tell where on each revolution the word count should be reset to zero).
File systems on drum computers had yet to be developed, but subroutine libraries typically included routines to read a block of n words of data in from drum address d to main memory address m, or to write a block of data back out to the drum. Clever programmers could use these to move subroutines or data structures out of main memory when they were not needed, reading them back in only when needed. This was called overlay management, and it was very difficult to get it right.
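The overlay pattern is easier to see in code. The sketch below assumes hypothetical library routines drum_read() and drum_write() taking the n, d and m just described; the overlay size and drum addresses are made up for illustration, and real overlay schemes were written in assembly language against a particular drum's addressing.

    /* Hypothetical drum transfer routines: n words between drum address d and memory address m. */
    extern void drum_read (int n, long d, void *m);   /* drum -> main memory */
    extern void drum_write(int n, long d, void *m);   /* main memory -> drum */

    #define OVERLAY_WORDS          512     /* size of the shared overlay region (assumed) */
    #define SORT_OVERLAY_ADDR   04000L     /* where the sort phase lives on the drum (assumed) */
    #define PRINT_OVERLAY_ADDR  05000L     /* where the print phase lives on the drum (assumed) */

    static long overlay_area[OVERLAY_WORDS];  /* one region of memory shared by all overlays */
    static long resident = -1;                /* drum address of the overlay currently loaded */

    /* Bring the requested overlay into the shared region (if it is not
       already there) and then jump into it. */
    static void call_overlay(long drum_addr)
    {
        if (resident != drum_addr) {
            drum_read(OVERLAY_WORDS, drum_addr, overlay_area);
            resident = drum_addr;
        }
        ((void (*)(void)) overlay_area)();
    }

    void run_pass(void)
    {
        call_overlay(SORT_OVERLAY_ADDR);    /* the sort phase overwrites the print phase ... */
        call_overlay(PRINT_OVERLAY_ADDR);   /* ... and vice versa, in the same few hundred words */
    }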
During the 1960's, the major developments in computer architecture were condition codes, byte addressing, memory management units, and various forms of parallel processing. Parallel processing ranged from attaching multiple co-equal CPUs to a single memory, through the use of small dedicated general-purpose processors for input-output, to special-purpose coprocessors. The first graphics coprocessors emerged in this era, but the most common use of coprocessors was to speed input-output to the newly developed high performance moving-head disk drives.
The first real operating systems emerged in the 1960s. Some of these were very crude. The acronym DOS, standing for disk operating system, first emerged in this era.
A typical DOS involved just one change to the tape operating system described above. The system subroutine library included a file system, so that programs could use a disk drive as if it were multiple tape drives, where disk files each had textual names. The command language interpreter could read commands from a disk file, launch programs from disk files, and for each program launched, tell it what files to use.
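To make the idea concrete, here is a sketch of the kind of record-oriented interface such a file system library might have exposed. Every name in it (dos_open, dos_read_record and so on) is invented for illustration and is not taken from any real DOS; the point is only that a file named in a directory behaves like a private tape drive.

    /* Hypothetical record-oriented file interface: tape-like, but named. */
    typedef struct dosfile DOSFILE;                 /* opaque handle, like a drive number */

    extern DOSFILE *dos_open(const char *name);     /* look the name up in the directory */
    extern int   dos_read_record(DOSFILE *f, char *rec);        /* next 80-character record, 0 at end */
    extern int   dos_write_record(DOSFILE *f, const char *rec);
    extern void  dos_rewind(DOSFILE *f);            /* back to the first record */
    extern void  dos_close(DOSFILE *f);

    /* A program copies one named "tape" to another, record by record. */
    void copy(const char *from, const char *to)
    {
        DOSFILE *in  = dos_open(from);
        DOSFILE *out = dos_open(to);
        char rec[80];

        while (dos_read_record(in, rec) > 0)
            dos_write_record(out, rec);

        dos_close(in);
        dos_close(out);
    }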
In the DOS era, tapes were used for backups and for files that were too large to fit on the disk drives.
The first networking efforts emerged in this era, and the dial-up modem came into being.
Memory management units were introduced very early in the 1960's by the Ferranti corporation on their Atlas computer. The Atlas system had paged virtual memory, and the Atlas operating system used it for both memory protection and to create the illusion of a large address space implemented using a small main memory and what was, at the time, a large magnetic drum.
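The mechanism behind paged virtual memory is simple enough to sketch. The C fragment below shows the translation a paged memory management unit performs on every memory reference; the page size and table size are made-up parameters chosen for illustration, not a claim about the Atlas's actual geometry.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_WORDS  512u     /* words per page (assumed) */
    #define NUM_PAGES  2048u     /* virtual pages  (assumed) */

    struct page_entry {
        bool     present;        /* is this page in main memory right now? */
        uint32_t frame;          /* physical frame number, if present */
    };

    static struct page_entry page_table[NUM_PAGES];

    /* Translate a virtual word address to a physical one.  On a miss,
       the operating system would bring the page in from the drum and
       retry the reference; here we simply report failure. */
    static bool translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t page   = vaddr / PAGE_WORDS;
        uint32_t offset = vaddr % PAGE_WORDS;

        if (page >= NUM_PAGES || !page_table[page].present)
            return false;                             /* page fault */

        *paddr = page_table[page].frame * PAGE_WORDS + offset;
        return true;
    }

    int main(void)
    {
        uint32_t pa;
        page_table[3].present = true;   /* pretend virtual page 3 sits in physical frame 7 */
        page_table[3].frame   = 7;

        if (translate(3 * PAGE_WORDS + 42, &pa))
            printf("virtual %u -> physical %u\n",
                   (unsigned) (3 * PAGE_WORDS + 42), (unsigned) pa);
        return 0;
    }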
By the end of the decade, IBM would release a computer, the IBM 360 model 67, that supported this technology, but several manufacturers got there first, including Scientific Data Systems (the SDS 940), General Electric (the GE 600) and Digital Equipment Corporation (the PDP-10). All of the latter virtual memory systems were built by customers (the University of California at Berkeley, MIT and Bolt, Beranek and Newman), but were then sold commercially.
Memory management units required real operating systems. The University of California at Berkeley developed the Berkeley Timesharing System for the SDS 940. General Electric, in conjunction with Bell Labs and MIT, developed Multics for the GE 600, and BBN developed TENEX for the PDP-10. IBM's first attempt at an operating system for the 360/67, TSS/360, was a failure; they never got it working. Their followup, however, led to success in the 1970's.
Multics was, undoubtedly, the single most important operating system developed in the 1970's. After Honeywell bought GE's computer division, it became the flagship operating system of the H 6000 series of mainframes, the successor to the GE 600. Multics introduced the following ideas: the hierarchical file system, with directories that may contain other directories; access control lists on files; memory-mapped files (the single-level store); dynamic linking; and rings of protection.
Multics also built on some ideas that had first come to market in the Berkeley Timesharing System.
Multics was definitely not modern in some ways: The GE 600 and the H 6000 had 36-bit words, and the machine supported two character sets, GE's 6-bit code, packed 6 characters per word, and 7-bit ASCII, usually packed 4 characters per word, meaning that (at least internally) 9 bits could be allocated per character.
What most of the world saw in the 1970's computer market was a steep plunge in the price of computing. This actually began with the introduction of minicomputers in the 1960's, with the least expensive general purpose computer systems selling for under $10,000 by 1970, but the trend quickly accelerated, until by the mid 1970's, a fully functional microcomputer kit could be purchased for under $1000 (the Altair 8800).
When Bell Labs quit the Multics project, some of the programmers working on that project decided to take the best ideas they'd encountered in that project and scale them down, building a little operating system suitable for a departmental timesharing system running on a minicomputer. The result was Unix. Even the name is a pun on Multics.
It is fair to say that Unix had only one new idea -- the SUID and SGID bits on files. Everything else had been done before. What Unix did was do all of it better, integrating a number of really good ideas from multiple sources (but mostly Multics and the Berkeley Timesharing System) into one system and doing it very well.
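The set-user-ID idea is easy to demonstrate on any modern Unix descendant. The short program below uses the standard POSIX stat() call and the S_ISUID and S_ISGID mode bits to report whether a file's set-user-ID and set-group-ID bits are on.

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char *argv[])
    {
        struct stat st;

        if (argc != 2 || stat(argv[1], &st) != 0) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        printf("%s: set-user-ID %s, set-group-ID %s\n", argv[1],
               (st.st_mode & S_ISUID) ? "on" : "off",
               (st.st_mode & S_ISGID) ? "on" : "off");
        return 0;
    }

Run against a file such as /usr/bin/passwd on a typical system, it reports set-user-ID on; that bit is exactly what lets an ordinary user run a program that updates the protected password file with the file owner's privileges rather than the user's own.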
Another system from the 1970's is largely forgotten outside of corporate datacenters. That is VM/370, the system that IBM developed out of the ashes of the TSS/360 project. What VM did that had never been done before was to virtualize everything, so that a user could run any operating system they wanted as a user program. The idea of being able to run, say, the horrible old DOS/360 as a user program, without threatening any other user of that computer, was extraordinary. The IBM 360 was the first 32-bit computer of any consequence, and IBM's current 64-bit z/Architecture machines remain compatible with the 360 and 370.
Networking became commonplace in the 1970's. Most of the larger computer science departments were linked by the ARPANET (an experimental defense department network linking research projects funded by ARPA, the Advanced Research Projects Agency). Many Unix sites joined an informal network of Unix machines linked by dial-up UUCP connections, the network out of which Usenet grew.
Personal computers such as the Apple II and the first IBM PCs came with systems that were typical of the early disk operating systems. IBM even called its system PCDOS, and many PC users just called it DOS, as if no other system had ever had this name. Eventually, it emerged that this was a Microsoft product, although IBM originally sold it without this identification.
In the 1980's, Unix was pried free from Bell Labs and AT&T. The University of California developed BSD Unix, originally under AT&T license, but they reimplemented enough of it that, as time passed, BSD Unix was wrested free of AT&T. In the early 1990's, Linus Torvalds, a Finnish hacker, developed another Unix clone, Linux. Meanwhile, many manufacturers, under AT&T license, commercialized their own Unix variants: IBM developed AIX, HP developed HP-UX, and Sun developed Solaris.
Unix was adapted to run on multiprocessors by two competing vendors in this decade, Sequent and Encore. In general, Unix worked very well on machines with on the order of 16 CPUs. Earlier operating systems from Burroughs Corporation, the University of Michigan and Carnegie Mellon University had demonstrated similar performance in earlier decades.
Independently of all this, Carnegie Mellon University developed a system called Mach that was supposed to be used as a replacement kernel under Unix, but was in fact far more. Mach would be seen as an academic curiosity for many years, but eventually BSD was rebuilt on top of a Mach kernel, and Apple would choose BSD/Mach as the foundation for MacOS X.
Window managers, first developed in the 1970's, came into maturity in the 1980's. Window managers don't need to rest on sophisticated operating system technology. Prior to Windows NT and 95 from Microsoft and MacOS X from Apple, the dominant commercial window managers sat on top of rather primitive disk operating systems. In contrast, however, the X window system from MIT, developed under BSD Unix, took complete advantage of the available operating system technology.
In the 1980's, many of the existing networks merged into the Internet. By this time, operating systems such as Unix provided a good suite of network access primitives.
In the 1990's, the personal computing field finally rose above the level of DOS. Windows NT emerged after Microsoft hired away the core of DEC's VAX VMS operating system development group. Windows 95 was a reaction to this, but both are real operating systems in the sense that emerged in the 1960's. MacOS X is another solid operating system to reach the desktop in this era. These systems were finally mature enough to incorporate decent support for network connectivity.
By the end of the 1990s, the operating systems running on typical desktop computers were as complex as any mainframe operating system of the 1960's. Essentially all of the innovative ideas from systems such as Multics were to be found on desktop and laptop computers, complete with full support for networking and window management.
As with the minicomputer revolution and the microcomputer revolutions before, the mobile communications revolution created a wide open niche in which, at least initially, competition flourished. Initially, each cellphone and PDA vendor based their product on proprietary systems, but as the market grew and the function of PDAs and Cellphones began to merge, two systems emerged as the primary competitors in this new niche, Windows CE and Android.
Neither of these represents any kind of revolutionary new approach to operating systems. Windows CE is a direct descendant of Windows, freed from dependency on the Intel x86 family and stripped of the baggage of "integration" with a full suite of office productivity tools. Android, in turn, is based on Linux, stripped of the assumption that the shell or a window manager will be the primary application launcher.
A third thread has woven into both of these systems, and that is the desire of many major players to lock down the system, controlling what applications the user is permitted to launch and what files the user is permitted to store, and, where control is not possible, allowing for pervasive monitoring of the actions taken by users of mobile platforms. This has led to innovations such as trusted platform modules, but it also raises serious questions about the relationship between operating system developers and civil liberties. Is it ethical for programmers to write code that permits pervasive surveillance of cellphone users? Probably not.