Interface Evolution (English Version)


Imprint and notes on further use

Interface Evolution

The History of the Computer From the Perspective of its User Interface.

Text: Felix Winkelnkemper

Editor of the original German version: Maria Scherzog

I would like to thank everyone who contributed to the success of this book with hints and suggestions!

The text of this book can be used under the conditions of the Creative Commons Share Alike Licence (CC BY-SA 2.0). Please note that some pictures and illustrations are not subject to this licence!

The copy and distribution rights of the images remain with the respective authors. Images without indication are subject to a Creative Commons Share Alike Licence (CC BY-SA 2.0). Please credit Felix Winkelnkemper as the author.

If you want to re-use parts of this work, I would be very pleased if you sent an e-mail to winkelnkemper@googlemail.com!

Visit the website for the book!

On www.computergeschichte.net you can find additional content, including all references to videos and emulators mentioned in this book - and a few more.

Preface

I am pleased that you are interested in this book and thus in my view of computer history. Maybe this is your first introduction to the history of computers, or maybe you are already a real connoisseur and are now wondering why there needs to be another book about it. Aren’t there enough of them already? To answer this question, let me talk a little about the motivations behind this book.

The idea for this book arose in the context of seminars and lectures on digital media, the history of interactive systems and software ergonomics that I have held at the University of Paderborn in recent years. There I dealt with inquisitive, interested students, but, as students tend to be, they were all quite young. The students of 2021 came into the world around the year 2000 and have never experienced a time without ubiquitous computers. In their world, there have always been PCs, laptops and probably mobile phones and smartphones. For these young people, it is quite normal that these devices have elaborate, graphical user interfaces that largely hide the underlying technology from them. If a computer only shows white text on a black background after it has been started, this means for them that something must be wrong with the computer. Now I don’t want to misrepresent our students or make fun of them. Of course, they know that today’s user interfaces did not always exist. Most of them have even heard of something like punched cards, but when punched cards were actually in use, what programming with punched cards meant and how complicated it was - this they usually did not realise.

There are many arguments - not only for our students - for taking a look at computer history. It is almost always helpful to know why things have become the way they are today. Not only does one then appreciate the achievement of those who have been instrumental in the development and improvement of modern computer technology; it also becomes possible to assess whether current user interfaces actually exploit the potential available to them or whether they fall short of the possibilities. So being interested in computer history is a worthwhile thing. But where should you start and what should you focus on?

Among the large number of books on computer history is one by Michael Sean Mahoney with the intriguing title “Histories of Computing”1. That is not a typo. Mahoney deliberately uses the word “histories” in the plural here. In fact, there is no single coherent history of computing that one could tell, but very different aspects that one can look at. One approach, for example, is to tell company stories, such as the stories of IBM, Apple or Microsoft. Another approach is to illuminate the history of the protagonists of computer history, such as Konrad Zuse, Bill Gates or Linus Torvalds, to pick three protagonists relatively arbitrarily. When looking at computer history, one can of course also take a purely technical perspective and tell the story of the devices, the development of memory technology, processor technology and software technology. You will find many examples of all these approaches on the market. Also worthwhile, and in my opinion very interesting, would be a look at the visions and hopes that were associated with computer technology. Why, for example, did people still talk about the electronic brain in the 1960s, but no longer today? This quite remarkable perspective is unfortunately quite rare in today’s computer history literature.

Many popular accounts of computer history, whether in book form, as television documentaries or as internet videos, are not just sober, scholarly looks at the past, but often very subjective collections of stories. Stories and legends surrounding the historical events are taken up, retold and interwoven with verifiable historical facts. Many of these often exciting stories were created by the protagonists of the developments themselves, others originate from the marketing departments of the computer and software manufacturers.

If one looks around the market of documentaries on computer history, one can roughly identify three genres:

Historical overviews try to cover many of the aforementioned historical perspectives at the same time in order to give as good an overall view as possible. Of course, these overviews can never be complete, but they are certainly a good introduction. Worth reading here are those historical accounts that manage to present larger strands of development and their backgrounds alongside the storytelling. Recommended are, for example, the books by Paul Ceruzzi (“A History of Modern Computing” and “Computing: A Concise History”) and the ZDF documentary “Secrets of the Digital Revolution”, which you can also find online on YouTube.

Detailed documentation examines individual computers or particular developments selectively and in great depth. Examples are books about ENIAC[^eniac_book] or about the early history of the internet2. Such documentation is often well worth reading and watching, as it frequently dispels popular misconceptions. My personal tip with this genre is to avoid books penned by directly involved protagonists, or at least to bear in mind when reading them that someone is looking back on their own exploits.

A large part of the popular books and video documentaries available on the market belong to the nostalgic retrospectives. They mostly deal with the home computers of the 1980s and 1990s. Often the focus is on the devices’ suitability as gaming platforms and on the authors’ own experiences with these computers in their youth.

This book takes yet another perspective, one on which I have not yet found a work in this form. I am not looking at the protagonists or the companies, nor primarily at the technical advances in computing technology, but am concentrating on the evolution of the interfaces, i.e. the user interfaces. More generally, in this book I am interested in how the way of using computers has developed over time and what was behind these developments. The term “evolution” in the title is not chosen lightly. In biology, evolution is not simply an arbitrary development, but a noticeably better adaptation of a life form to the environmental conditions and the dangers arising from them. If you want to explain the evolution of a species, you always have to consider its environment as well. Of course, it must be clear that the evolution described here cannot behave in quite the same way as in biology, because technical devices are not alive and do not change of their own accord, but are further developed by their users and producers or adapted to user needs. Nevertheless, I think the term evolution is appropriate, because what the development of the user interfaces of computers has in common with biological development is its interaction with the environment. In the case of biological evolution, it is the natural environment of a species that must be considered; in the case of the evolution of user interfaces, it is the state of computing technology on the one hand and the user requirements placed on the computer and its operation on the other.

Completely incomplete

In this book, I give you a historical overview of computer history from my specific, technically oriented perspective of the user interface. I try to do that as correctly as possible; please point out any errors you find. One claim that I cannot fulfil here is that of completeness. Not every important device in computer history and not every important piece of software is interesting from my point of view and therefore worth mentioning in this book. So please don’t be angry with me if your favourite device or software doesn’t get a mention. Some prominent examples that are not considered here, or at best only in passing, are:

  • The operating system OS/2 was first developed by Microsoft and IBM together and later by IBM alone as a successor to MS-DOS and Windows. Late versions have a very interesting user interface, but the number of users was vanishingly small compared to Windows.
  • Markets outside the USA and Germany. Britain had its very own ecosystem of computers in the 1980s and 1990s, with companies like Sinclair, Amstrad and Acorn. The most enduring product of the latter company is not a user interface, but a processor architecture: an Acorn RISC Machine (ARM) can be found in almost every smartphone today. Japan also had an interesting computer scene of its own.

There is something else that I am quite explicitly not doing here: many people interested in computer history seem to be driven by the question of which computer was the first or who was the first to have a certain idea. These questions seem nonsensical to me, because history is not a competition! In the quest to find the first, what made the devices, the technical innovations or the interface ideas special is quickly lost sight of. But that’s not all: identifying the first usually does not work at all. In retrospect, it is almost always possible to unearth a device that, under certain circumstances, can be described as even earlier, to find a document in which someone hinted at an idea years before it was implemented, or even to track down a visionary who, decades before a development, predicted, usually alongside a lot of nonsense, exactly what then came to pass. You can always find a predecessor or precursor to every first. Worse still, the search for a first suggests a kind of singularity. The first specimen appears to be a stroke of genius, as if someone had thought of something almost unthinkable and then actually realised it. However, developments as complex as the computer always have an equally complex history and do not just appear out of nowhere as a flash of inspiration. At this point, I can recommend a visit to the Heinz Nixdorf MuseumsForum in Paderborn. The museum describes itself as the largest computer museum in the world. The really exciting thing about the museum, however, is not its sheer size or the number of exhibits on display, but that at least half of the exhibition tells the history of computing before the first computers appeared, thus embedding the computer in the millennia-long history of calculating, writing, technical communication aids and many other aspects.

What is a user interface?

In this book I describe to you the evolution of the user interface, but what actually is a user interface? When we try to define it, Friedrich Nietzsche gets in our way, who quite rightly stated in his “Genealogy of Morals”:

All concepts in which an entire process is semiotically summarised elude definition; only that which has no history is definable.

Terms like “user interface” and “usage interface” have a history. So I will probably not succeed in clearly defining them. Different developers, scientists, journalists and users have read very different things into these terms. Therefore, I cannot and do not want to give you the ultimate definitions here, but rather explain my view of the concept of the “user interface” very briefly.

Bruce Tognazzini, in his 1992 book “Tog on Interface”3, describes the user interface of the Apple Macintosh as a “fanciful illusion”. He writes that the world of the Macintosh’s interface is quite different from that of the architecture of the underlying operating system. What Tognazzini explained here for the Macintosh is by no means only valid for it. His idea applies to interactive user interfaces in general. Let’s take a brief look at this “fanciful illusion”. What is it about? When you look at the usage interface of a computer, you are not dealing with an interface for the technical architecture of the computer. You do not see a visualisation of the processor operations, have no direct insight into the working memory and cannot send commands directly to connected devices. What you see on the screen when you operate a file manager such as Windows Explorer or a Finder on a Mac, for example, is quite different from this technical reality of the device. The computer creates its own world for you on the screen through its programming. In this world, the world of use, there are, for example, files that are displayed on the screen in the form of small images. You can select and manipulate these objects on the screen, for example rename or even delete them. The files in the Explorer or Finder window are objects that exist only through the usage interface. If you were to take the computer apart, you would not find any files, even if it were possible for you to directly perceive the magnetisations on a hard disk or the states of the flash memory of an SSD.

The operating system and the file manager software lie as an intermediate layer between the user and the technical reality of the machine. This intermediate layer, the user interface, ensures that users do not have to worry about something like controlling the hard disk and that stored texts are not addressed by specifying a cryptic address, but are named with readable names and, in the case described, can even be selected spatially. The usage interface also ensures that such a text object can be viewed by opening it on the screen. The user does not have to enter a programme bit by bit into the working memory or copy it from the hard disk into the memory. Nor do you have to set a programme counter to the start address of the programme and start it manually. All these actions are not part of the world of use and are therefore not (any longer) accessible to the user. On one side of the interface, towards the outside, there are only the objects of the world of use, and on the inward side there are only the physical realities of the technical world. The generated world of use is Tognazzini’s “fanciful illusion”, that is, if you will, an illusory world, but for the users it is the world with which the computer presents itself to them - the only true world. Such “illusory worlds” of the various user interfaces are what I am concerned with in this book. How did they come into being? What were the usage requirements behind their design? What technical problems had to be solved? What was the background behind user interfaces that seem curious today, and to what extent do design decisions from past decades still have an impact today on the way we use our computers? I will explore questions of this kind in the coming chapters. I hope you will follow me on this!

From the ENIAC to the Minicomputer

Compared to other technical devices we use in everyday life, such as lifts, cars, torches or pens, computers are a very recent phenomenon. The first electric and electronic computers were built in the late 1930s and 1940s. It took another forty years before computers could be found in every office and many households. This first section of the book looks at the development of the computer and its usage interface from its beginnings to the advent of personal computers in the late 1970s. You will see that the personal computer was not a revolution that came completely out of nowhere, as some computer histories suggest, but that PCs and their user interfaces stand in the tradition of an increasing miniaturisation of computing and storage technology and an increasingly direct interaction of the user with virtual objects generated by the user interface.

The Early Computers

When technology stories are told, as I indicated in the introduction, there seems to be a need to identify the first specimen of a particular device. This desire is not unknown in the world of computers and calculating machines either. In a society characterised by competition, it seems important to find out which computer was the first of all. Unfortunately, this question cannot be answered easily, because one would first have to know what a computer is in the first place. That a computer is something that calculates can perhaps still be agreed on fairly quickly. But this description also applies to an abacus, a slide rule or an old mechanical cash register from the museum. However, these objects and devices are not usually called computers. So what characteristics does a device need to have in order to be called a computer and enter the race for the first computer? Depending on how you answer this question, you can identify a different device as the first computer. For example, it is quite understandable to regard the American ENIAC (Electronic Numerical Integrator and Computer) from 1945 as the first computer. However, the German engineer Konrad Zuse had already designed and built automatic calculating devices years earlier. If we leave aside his mechanical calculators, which were not yet really operational, Zuse’s computer Z3 from 1941 can just as well be considered the first computer. However, unlike the ENIAC, the Z3 was not electronic, but electrical or electromechanical1, because it worked with telephone relays. Nor was the Z3 designed as a Turing-complete machine. “Turing-complete” is a term from theoretical computer science. In simple terms, it means that, given enough memory and enough time, you can calculate with the computer everything that you can calculate with a modern PC. With computers that are not Turing-complete, some calculations cannot be performed. The Zuse Z3, for example, had no conditional command execution. You could therefore not automatically calculate anything with it that required a case distinction.

In a way, the Z3 was really the first, but (only) the first electromechanical, programmable, fully automatic computer, while the ENIAC was also the first, but just the first electronic, universally programmable computer. Other computing devices are also in the running for the first computer, such as the British Colossus from 1943, which, like the ENIAC, worked with tubes and was therefore electronic. However, this machine was not a universal computer, but a special-purpose computer for cracking encrypted text messages. It was used during the Second World War to decode messages of the German High Command. Its programming possibilities were very limited. Nevertheless, the Colossus was the first computer, namely the first electronic, partially programmable, digital special-purpose computer.

Which computer is described as the first, furnished with the respective attributes, often depends - absurdly enough - on which country the person making the attribution comes from. In the USA, only the ENIAC was considered the first computer for a long time; in Germany, the Zuse Z3 was more likely to be given that title; and in England, since the end of secrecy about the cracking of the German codes during the Second World War, the Colossus has been celebrated as the first computer. I don’t want to play referee in this question and choose a first computer, because in the end it is quite irrelevant which device you give the title to, and the obvious connection of the search for the first with national pride makes the question entirely unappealing to me. From the competition over who was first, we can draw a perhaps conciliatory conclusion: in the late 1930s and early 1940s, independent efforts were made in several places around the world to build fully automatic, electronic computing systems. The Second World War played a central role in these developments, but apparently - regardless of this - the time was simply ripe for the invention of the computer.

Programming through wiring

If you have ever seen or read anything about the operation of early computers, you may have seen a picture of the American ENIAC. The computer was built for the American military from 1943 to 1945 and was used, among other things, to perform complex calculations for ballistic trajectories. The computer weighed thirty tonnes, filled an entire hall and had a power consumption of no less than 150 kW. But its most striking peculiarity was probably that it was programmed by wiring and that input values for the calculations were entered by setting rotary switches, among other things.

ENIAC - Image: Public Domain (US Army Photo)

Above you can see a typical view of the ENIAC. On the left side you see the programme in the form of the wiring of the modules of the computer. On the right side, arrangements of rotary switches mounted on mobile racks can be seen, with which numerical values could be set. With ENIAC in its original configuration, shown here, programming meant something quite different from what we think of today. Even the “programme” at ENIAC was not at all comparable to what was later called a programme and is still called that today. The ENIAC was only hardware and the programme was also part of this hardware. Without the cables plugged in, the ENIAC was simply a collection of modules such as accumulators (adders), multipliers, dividers, adjustment panels as well as printers, punch card readers and corresponding punches for input and output. The connections between the modules formed the programme. Programming the ENIAC meant connecting the modules together according to what you wanted to calculate. Nowadays, a programme is generally understood to be a sequence of instructions used to control the computer. The programme is processed by the computer instruction by instruction. With ENIAC, you couldn’t say that. The computer did not process the programme and the programme did not control the computer, but was part of the computer. The ENIAC was a kind of room-filling kit from which the programmer assembled a new computer for each problem to be solved. The ENIAC that was able to solve problem A was, strictly speaking, not the same computer as the one that was capable of solving problem B.

An extract from an ENIAC programme - Image: Public Domain (US Army Photo)

A programme for the ENIAC, i.e. its wiring for solving a special, usually complex, computing task, was planned on paper. Above is a section of such a “panel diagram”. The creation of such plans often took weeks and the subsequent programming of the computer by plugging in cables then took several days again. The actual calculation was then done within a few minutes or at most hours, provided the computer was working properly and no mistakes were made in the planning and wiring.

Pictures of the ENIAC are often shown to demonstrate how far computer technology has advanced since then; after all, the ENIAC looks so beautifully primitive and unusual. But this somewhat contemptuous view does not really do justice to either the ENIAC or the computers that followed. The way the ENIAC was programmed did have an advantage, namely that of comparatively high computing speed. Large parts of the computing process ran in parallel on the ENIAC. In addition, the computer could start the calculation immediately after being started, because the programme was completely available in the form of the wiring and the positions of the adjustment elements. It did not have to be read in at length, as was necessary with many later computers. Both made the ENIAC extremely fast for its time. However, this advantage was countered by the very difficult programming. It quickly became apparent that a serialised and thus slower mode of operation could well be accepted in favour of better programmability.

Programmes as independent artefacts

A programme of the ENIAC was not a sequence of instructions to the computer in today’s sense, but the current hardware state of the computer, i.e. the current wiring and settings. It was not a separate physical artefact that could be fed to the computer from the outside. Reprogramming the ENIAC meant completely reconfiguring and rewiring it. As far as I know, the ENIAC is pretty much alone in this way of programming. All the later computers I know of, but also earlier ones like Konrad Zuse’s, did not have to be completely rewired for programming. Instead, programmes existed as artefacts in their own right, fed to the computer as input from outside. The actual hardware of the computer remained unchanged when the programme was changed. As a borderline case, one can look at some IBM devices that did have programmes on plug-in boards and thus in the form of wiring. Here IBM continued the tradition of its machines that could tabulate and process data from punched cards. These units were configured by plugging in cables. Cable connections determined, for example, which column of a data set should be added up and what should be done with the result at the end. Early IBM computers adopted this configuration option to some extent, but were not committed to necessarily specifying programmes in this way.

Let’s look at a computer that was developed almost simultaneously with the ENIAC: pictured below is Konrad Zuse’s Z4 calculator. Development of this computer began in 1942 and was completed in 1945. Before that, Zuse had built the Z3 computer mentioned briefly above, which was completed in 1941. But the Z3 was quite limited by today’s standards. Although it was of course quite large, it was more comparable to a kind of automatable pocket calculator than to a computer in the modern sense, because the calculator lacked something quite fundamental without which we cannot even imagine computers and programming today. The instruction set of the Z3 did not contain any conditional instruction execution, as already indicated above. The consequence: the programmes of the Z3, called “calculation plans” by Zuse, could perform the same calculation steps for different inputs, but could not select different calculation paths based on intermediate results. The Z4 was very similar to the Z3 in terms of basic operation, but unlike its predecessor, it did have conditional command execution.

Zuse Z4 in the Deutsches Museum - Image: floheinstein (CC BY-SA 2.0) on flickr
Punched tape - Image: TedColes (CC0)

On Zuse’s computers Z3 to Z11, programmes were supplied on punched tape [^Z3_hole tape]. The illustration on the right shows a short piece of punched tape. Punched tape of this type was used as an input medium for computers until the 1970s. It is a simple strip of paper onto which rows of holes were punched. Each such row was a binary coding of a character or a number. Binary means that it is an encoding with two states: each number corresponds to a sequence of yes and no, 1 and 0, or here hole and no hole. Typical punched tape usually had either 5 or 8 hole positions per row, the code accordingly comprising 5 or 8 bits.
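
To make the idea of a row of holes as a binary code a little more concrete, here is a small illustrative sketch in Python. The 5-bit code table is a made-up stand-in for demonstration purposes only, not the actual coding used on teleprinter tape or by Zuse.

```python
# Illustrative sketch: encoding characters as rows of holes on a punched tape.
# The 5-bit code table below is invented for demonstration; real teleprinter
# tapes used codes such as Baudot/CCITT-2.
CODE_TABLE = {"A": 0b00011, "B": 0b11001, "C": 0b01110, "D": 0b01001}

def punch_row(char: str) -> str:
    """Return one tape row: 'O' for a hole, '.' for no hole (5 channels)."""
    bits = CODE_TABLE[char]
    return "".join("O" if bits & (1 << i) else "." for i in range(4, -1, -1))

for ch in "ABCD":
    print(ch, punch_row(ch))
```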

These punched tapes were not invented for the computer; they had long been used for teleprinters in communications technology. In principle, a teleprinter was nothing more than a typewriter that could be connected to another typewriter - another teleprinter. This connection could be made via dedicated telegraph lines or via a telephone line using a modem. As soon as two teleprinters were connected to each other, everything that was typed on one of them also appeared on the other device. Teleprinters thus allowed the simultaneous transmission of text messages over long distances - in other words, what we call “chatting” today. Many teleprinters had punched tape readers and punched tape punches. If a punch was switched on when a message was received, the characters received and entered were not only printed on paper, but also punched onto the strip in the corresponding code. Via a reader, a teleprinter could read in a punched tape electronically and then behaved as if the coded characters were being typed in at that very moment. Punched tapes were used to store texts temporarily in order to be able to send them several times or to pass on a text received from one remote station to another. We will meet teleprinters again in the chapter Time-Sharing. For the moment, their storage medium, the punched tape, is enough for us.

Zuse did not use the punched tape to store natural-language texts, but to store a “calculation plan”, i.e. what we would call a “programme” today. A calculation plan consisted of a series of quite simple commands, possibly enriched with numerical values, which were converted into bit sequences and stored on the punched tape. I don’t want to go into too much detail here about these commands. It is sufficient at this point to know that they consisted largely of simple arithmetic operations, i.e. addition, subtraction, multiplication, division, square roots and so on. In addition, there were commands to store numbers in the computer’s memory or to load them from the memory. The computer had two so-called “registers”. These registers were special memories to which the computing unit had direct access. So there were always only two numbers directly available for calculations. If a calculated number was needed again later in the course of the programme, it had to be stored in the working memory and then loaded into a register again. One of the most important commands in the instruction set of the Z4 and its successors was conditional instruction execution. It checked, for example, whether the current intermediate result in one of the registers was greater than zero. If this was the case, the subsequent command on the punched tape was executed, otherwise it was ignored.

Zuse’s computers read the programme from a punched tape reader command by command and executed it directly. The Z3 had only one such programme reader. The computers from the Z4 onwards had two punched tape readers. By means of a special programme command, a programme could cause the computer to switch between the two readers. The addition of the second reader allowed programmers to programme so-called loops. Loops are a very basic technique in programming. You need them whenever you are dealing with a large amount of information, such as a list, that needs to be processed, or more generally, whenever something has to be repeated until a certain condition is met. In such cases, the programme must execute a whole set of commands over and over again until all data has been processed or the desired condition has occurred. The expression “loop” could be taken quite literally with the Z4. To create a loop, the punched tape inserted in the second reader was simply tied into a loop (or actually into a ring), so that the same programme commands were read again and again from the beginning. In the programme read by the first punched tape reader, it was possible to switch to the second reader, on which the loop ran. In order to prevent the computer from running this loop for all eternity, the part of the programme tied into the loop had to contain a conditional command that at some point switched back to the first punched tape or ended the programme run altogether.
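
The following minimal sketch simulates this style of working under a few assumptions: two registers, a conditional-execution command that ignores the following command if the test fails, and a second tape reader tied into a ring. The command names are invented for illustration and do not reproduce Zuse’s actual notation; in this sketch the loop simply ends the run instead of switching back to the first tape.

```python
# Illustrative simulation of a Z4-style calculation plan (command names invented).
# Tape 1 holds the set-up; tape 2 is "tied into a ring", i.e. repeats from the start.
# A conditional command ignores the following command unless the test succeeds.

def run(tape1, tape2, memory):
    r1 = r2 = 0.0                      # the two registers of the arithmetic unit
    tape, pos, ring = tape1, 0, False
    skip_next = False
    while True:
        op, *args = tape[pos % len(tape)] if ring else tape[pos]
        pos += 1
        if skip_next:                  # a skipped command is read but not executed
            skip_next = False
            continue
        if op == "LOAD":    r1 = memory[args[0]]
        elif op == "LOAD2": r2 = memory[args[0]]
        elif op == "SUB":   r1 = r1 - r2
        elif op == "STORE": memory[args[0]] = r1
        elif op == "IF_POSITIVE":      # execute the next command only if r1 > 0
            skip_next = not (r1 > 0)
        elif op == "SWITCH_TO_RING":   # switch to the second (ring) tape reader
            tape, pos, ring = tape2, 0, True
        elif op == "PRINT": print("typewriter:", r1)
        elif op == "STOP":  return

# Example: repeatedly subtract 3 from memory cell 0 until the result is no longer positive.
memory = {0: 10.0, 1: 3.0}
tape1 = [("SWITCH_TO_RING",)]
tape2 = [("LOAD", 0), ("LOAD2", 1), ("SUB",), ("STORE", 0),
         ("IF_POSITIVE",), ("SWITCH_TO_RING",), ("PRINT",), ("STOP",)]
run(tape1, tape2, memory)
```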

In the architecture of Zuse’s computers, one can recognise the characteristics of the problems that Zuse wanted to solve. In his earlier work as an aeronautical engineer, Zuse found that the same calculations had to be performed over and over again with different values. To simplify this tedious and repetitive work, he wanted to develop a machine. This basic characteristic - same calculation steps, different values - was reflected in the user interface of his machines. On a Zuse computer, and here the computers from Z3 to Z11 do not differ at all, work was always carried out as follows:

  • The punched tapes on which “the calculation plan” was stored were threaded into the punched tape readers.
  • In the first steps of the calculation plan, the initial values were “keyed in” via the keyboard and stored in memory. Although Zuse’s computers worked internally in binary, i.e. with 0 and 1, the input was in the engineer-friendly decimal system. The conversion took place directly after the input.
  • Once all values were entered, the actual calculation started. The computer now processed the commands stored on the punched tape one after the other. Output from the programme was printed on an electric typewriter.

The operation of the Zuse Z4

Zuse’s Z4 was a computer for engineers. These engineers usually sat at the computer themselves and carried out their calculations. What is important here - we will see in a moment that it can also be quite different - is that entries were made directly on the computer and directly before the programme was executed. The user was present during the calculation and could control the programme flow at the computer’s control console, interrupt it in case of an error and execute commands manually, independently of the programme. The Z4 could read in numbers from a keyboard as well as from a special punched tape called a number strip, and it was the keyboard input that was given a lot of consideration in terms of the user interface. An instruction manual for the calculator from 19532, for example, describes:

The keyed-in number appears in the lamp field for checking. If this check is not correct: press the “Error” key and correct the entry. If the check is correct: press the “Done” key; this transfers the number. The number is then entered into the arithmetic unit, and any correction is then impossible.

[...]

The request to the operator to key in a number (when calculating with a calculation plan inserted) is a red flashing signal, or a protocol lamp lights up.

The protocol lamp mentioned here assisted the engineer at the computer in entering the many values for a calculation. It was part of the so-called protocol field, which indicated which value was expected as input. The operating instructions describe:

The protocol field is used to facilitate the entry of many numbers in a certain arrangement (matrix). In each case, the number of the protocol entry under which the lamp lights up is to be keyed in. For this purpose, the numbers must be written down on a protocol form, which is placed on the ground glass. A protocol form can, of course, only be used in conjunction with a specific calculation plan.

For a calculation plan that required a whole series of input values, a kind of input form was thus supplied. The calculator was thereby able to indicate which value should be entered next.

Since the input commands of the Z4 stopped the computing unit and the computer also had conditional command execution, an interactive mode of operation would have been possible in principle with the Z4, in which the users would have been prompted during the course of the programme to enter further values to which the programme could then have reacted. However, this way of working was not common. All explanations and programme examples for the Z4 that can be found envisage having all the data entered at the beginning and then processing it without the need for any further input. A programming manual by Zuse from 19453, for example, explicitly recommends:

In the case of longer calculation plans, one does not only store those values whose storage is absolutely necessary, but first enters all the initial values into the memory unit and only then carries out the actual calculation. This has the following advantages:

1.) The structure of the calculation plan is simpler.
2.) In the practical calculation, the keying in of the initial values takes place in quick succession at the beginning. After that, the machine can be left to its own devices.
3.) The values can be keyed in in the logical order.
4.) It can be checked afterwards whether the machine has calculated with the correct values.

That interactive use was not what Zuse had in mind for his computer was already apparent from the arrangement of the input and output devices (called input and output in the manual). The protocol field, keyboard and numeric field, i.e. the elements for convenient data input, were located in close proximity to each other, while the typewriter for output stood at some distance from them. Admittedly, this typewriter also had a keyboard; however, it did not serve as an input device for the computer.

The convenient data entry can definitely be seen as an advantage of the Z4. Its biggest disadvantage, which it shared with all of Zuse’s early computers up to and including the Z11, was its low processing speed. The computers worked with telephone relays, which were naturally limited in speed due to their electromechanical mode of operation. Above all, however, the speed was limited by the way the programme was executed, with the programme being read in by the computer command by command as it was processed. Although it was possible to speed up the reading of punched tape to a certain extent, there was an upper limit to what was possible here because, after all, a strip of paper had to be unrolled from a roll, and the paper was not allowed to tear or crumple. Even leaving this aside, there was a fundamental problem: a programme that was read in sequentially could of course only be processed sequentially. There was no way to jump to another place on the punched tape in one step and continue the programme there. However, jumping to other places in the programme is absolutely necessary for more complex programmes. Zuse’s computers, unlike the computers I will introduce to you later in the book, did not have a jump instruction but a skip instruction, which skipped all instructions as the programme was read in until a certain code appeared. Through the tricky combination of this skip command with the previously explained technique of the loop, it was nevertheless possible to create programmes with several sub-programmes.

A sub-programme is used in programming when certain sequences of commands are needed repeatedly at different points in a programme. Instead of repeating them each time, one lets the computer jump to the place in the programme where these commands are written down and then jump back to the actual “main programme”. The use of sub-programmes not only makes programmes shorter, but also more maintainable, because improvements now only have to be made at one point rather than several times. Since the Zuse computers did not have a jump instruction, jumping to such a sub-programme always involved skipping many programme instructions, which nevertheless had to be read in[^Z4_sub-programmes]. The computing unit had to wait during this time until it could continue. This way of working was neither fast nor particularly practical for programming.
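
To make the difference between a jump and a skip concrete, here is a small hedged sketch; the instruction names and the marker mechanism are invented for illustration. The point is that a skip has to read and discard every command until a marker code appears on the tape, so even “calling” a sub-programme still costs the time of reading all the skipped commands.

```python
# Illustrative comparison (invented instruction names): a jump would move the read
# position in one step, a SKIP must read every command until a marker appears.

program = [
    ("SKIP_TO", "sub"),                   # "call": skip forward until marker "sub"
    ("ADD", 1), ("ADD", 2), ("ADD", 3),   # commands that are read but ignored
    ("MARKER", "sub"),
    ("MUL", 10),                          # the sub-programme itself
    ("STOP",),
]

def run_with_skip(program, value=5):
    commands_read = 0
    skipping_to = None
    pc = 0
    while True:
        op, *args = program[pc]
        pc += 1
        commands_read += 1                # even skipped commands must be read in
        if skipping_to is not None:
            if op == "MARKER" and args[0] == skipping_to:
                skipping_to = None
            continue
        if op == "SKIP_TO":  skipping_to = args[0]
        elif op == "MARKER": pass
        elif op == "ADD":    value += args[0]
        elif op == "MUL":    value *= args[0]
        elif op == "STOP":
            return value, commands_read

result, reads = run_with_skip(program)
print(result, "computed after reading", reads, "commands")   # 50 after 7 reads
```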

Stored Program - The programme in the computer

Reproduction of Hollerith’s punch card devices, on top the counting clocks, on the table on the left a punching device, on the right the device for scanning the punch cards - Image: Adam Schuster (CC BY 2.0), detail, cut out

Much faster programme execution and easier jumping within a programme became possible when the programme was no longer read in command by command during execution, but was already completely present in memory. A computer that makes this possible is called a “stored programme computer”. In German, the term “speicherprogrammierbar” (programmable in memory) is often used for this, but, strictly speaking, it does not quite describe the same thing, as it suggests not only that the programme is in memory, but also that it can be created and edited in memory. However, I do not want to go into this possibility at this point (yet).
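
A minimal sketch of the stored-programme idea follows, assuming a toy instruction format of my own invention: the whole programme already sits in memory, so the control unit can fetch any instruction by its address, and a jump is nothing more than changing the programme counter.

```python
# Minimal stored-programme sketch (toy instruction set, not a real machine):
# programme and data live in the same memory; a jump just changes the counter.

memory = {
    0: ("LOAD", 100), 1: ("SUB", 101), 2: ("STORE", 100),
    3: ("JUMP_IF_POSITIVE", 0),            # jump back in a single step
    4: ("HALT",),
    100: 12, 101: 4,                       # data cells
}

accumulator = 0
pc = 0                                     # programme counter
while True:
    op, *args = memory[pc]
    pc += 1
    if op == "LOAD":    accumulator = memory[args[0]]
    elif op == "SUB":   accumulator -= memory[args[0]]
    elif op == "STORE": memory[args[0]] = accumulator
    elif op == "JUMP_IF_POSITIVE":
        if accumulator > 0:
            pc = args[0]
    elif op == "HALT":
        break

print(memory[100])   # 0: 12 minus 4, repeated until no longer positive
```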

Running a stored programme computer meant, of course, that the programme had to be read in completely first. What was such a programme like? What was the storage medium? You have already got to know punched tape as an input medium. A stored programme computer could also basically be fed with punched tape. The British EDSAC of 1949, for example, one of the first computers of note to work according to the stored programme concept, used this simple input medium - for both the programme and the data.

Another common storage medium for computers of the time was the punched card. The history of punched cards, like that of punched tape, is far older than the history of digital computers. The first punched cards were used as early as 1890 for the semi-automatic evaluation of the US census. The data of the individual citizens were punched onto cards with a special device. A hole in a certain place stood for the gender of the citizen, another for religious affiliation, a third indicated the profession. The punched cards produced in this way could be fed into counting machines. Depending on the perforation, counting clocks were then advanced one step further. Herman Hollerith, who invented the punch card technology for the census, founded companies based on this technology. His companies built both input devices to fill punch cards with data and processing devices that could, for example, aggregate and tabulate the data from a set of punch cards. Hollerith’s companies merged in 1911 into the Computing Tabulating Recording Company, which was renamed International Business Machines in the 1920s. This company, IBM, will be mentioned frequently in the following years of computer history.

Card punch - Image: Mwaelder (CC BY-SA 3.0)
A standard punch card - image: Mutatis mutandis (CC-SA 3.0)

IBM’s standard punch cards were now also used for computer programmes and for data entry into a computer. The principle of a punched card was basically very similar to that of a punched tape. A punch card reader read a stack of punch cards card by card. Punch cards were written using card punches like the one shown above. A single card usually corresponded to one data set or, in the case of programming, to one programme instruction.
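
As an illustration of the idea that one card corresponds to one data set, here is a hedged sketch of a fixed-column record on an 80-column card. The column assignments and field names are invented for demonstration; real layouts differed from application to application.

```python
# Illustrative sketch: one 80-column punch card = one fixed-column data record.
# The column layout below is invented for demonstration; real layouts varied.
CARD_WIDTH = 80

def make_card(name: str, year: str, occupation: str) -> str:
    # columns 1-30: name, 31-34: year, 35-54: occupation, rest blank
    card = f"{name:<30}{year:<4}{occupation:<20}"
    return card.ljust(CARD_WIDTH)[:CARD_WIDTH]

def read_card(card: str) -> dict:
    return {"name": card[0:30].strip(),
            "year": card[30:34].strip(),
            "occupation": card[34:54].strip()}

card = make_card("SMITH, JOHN", "1890", "FARMER")
print(len(card), read_card(card))
```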

Almost all computers built and developed after ENIAC were stored programme computers. The ENIAC was also modified so that it could be called a stored program computer4. It then ran at only a sixth of the previous speed, depending on the calculation, because the parallelism could no longer be exploited so well, but this loss of performance during the calculations was more than compensated for by the considerably easier programmability. As a third option, the ENIAC could also read in the programme directly from punch cards after the conversion. This was, of course, even slower and limited programmability, as the programme was not stored in memory but processed step by step. However, it had the advantage that no more hardware changes had to be made and programmes became easily exchangeable and thus also improvable.

How do you use a stored programme computer like the EDSAC?

In order to use a typical stored programme computer, both the programme and all the input data had to be available before the programme ran. So at the very least, all data had to be prepared on punched cards or punched tapes. When a new programme had to be written and executed, it was done in a lengthy process:

  • The programme was written down on paper in a code that corresponded to the computer’s instruction set. This type of programme code is called assembly language5. In assembly language, the commands correspond directly to those of the computer architecture. Commands, however, do not have to be entered as bit patterns or numbers, but are noted using more easily understandable abbreviations, so-called “mnemonics”. Instead of 01001011, one writes in assembly language, for example, ADD for the add command. High-level programming languages were also possible, but did not appear until the early 1960s; more on this in the following chapter.
  • From the assembler code, the programme had to be recoded into machine language. Commands consisting of short letter sequences, such as JMP for the jump command, thus became numerical values again that the computer could process directly (a small sketch of this recoding step follows after this list).
  • This machine language programme now had to be punched onto punched cards or punched tape.
  • The punched cards or tapes with the programme and all input data were handed over to an operator. The operator managed a queue of programmes that were to be processed before the one just submitted.
  • When it was the programme’s turn, the operator had the programme read in, put the input data into the punched tape or punched card reader and started the programme.
  • Output from the programme was produced on a printer.
  • The operator placed the programme, the input data and the printed outputs of the programme in an output tray where they could be collected by the user.
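
The recoding step from mnemonics to numeric machine code mentioned in the list above can be illustrated with a very small assembler sketch. The opcode numbers, the mnemonics and the instruction format are invented here; every real machine had its own encoding.

```python
# Tiny illustrative assembler: mnemonics -> numeric machine code.
# Opcodes and the instruction format are invented for demonstration purposes.
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "JMP": 0x4, "HLT": 0x0}

def assemble(lines):
    """Turn lines like 'ADD 17' into 16-bit words: 4-bit opcode, 12-bit address."""
    words = []
    for line in lines:
        mnemonic, *operand = line.split()
        address = int(operand[0]) if operand else 0
        words.append((OPCODES[mnemonic] << 12) | (address & 0xFFF))
    return words

source = ["LOAD 100", "ADD 101", "STORE 102", "HLT"]
for word in assemble(source):
    print(f"{word:016b}")   # the bit patterns that would then be punched onto cards
```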

It’s quite complicated to programme something with such a computer and then run the programme! Characteristic of the mode of operation outlined here was that users or programmers - in most cases probably one and the same person - did not even come into contact with the computer itself. How high do you think the probability was that a programme created in this way was completely correct at the first attempt? It had to have been programmed correctly in assembly language on paper, transferred error-free into machine code and then punched just as correctly. I don’t want to speak for everyone, but I at least would certainly need several runs until my programme was correct. This programming problem was not the only consequence of the disconnect between the provision of programme and data and the processing of the programme by the computer. The lack of interactivity was also problematic. With this method of use, it was not even possible to write a programme that required the user to make a decision while the programme was running. Nor could the user intervene if the programme “ran amok”, i.e. calculated pointlessly for a long time, went into an endless loop or produced masses of nonsensical output. The user was not present at all. All eventualities had to be considered beforehand. This is explicitly formulated in a paper that is quite famous among computer scientists and was published under the name of John von Neumann. This “First Draft of a Report on the EDVAC”6 from 1945 states:

An automatic computing system is a (usually highly composite) device, which can carry out instructions to perform calculations of a considerable order of complexity-e.g. to solve a non-linear partial differential equation in 2 or 3 independent variables numerically. The instructions which govern this operation must be given to the device in absolutely exhaustive detail. They include all numerical information which is required to solve the problem under consideration: Initial and boundary values of the dependent variables, values of fixed parameters (constants), tables of fixed functions which occur in the statement of the problem. These instructions must be given in some form which the device can sense: Punched into a system of punchcards or on teletype tape, magnetically impressed on steel tape or wire, photographically impressed on motion picture film, wired into one or more fixed or exchangeable plugboards-this list being by no means necessarily complete. All these procedures require the use of some code to express the logical and the algebraical definition of the problem under consideration, as well as the necessary numerical material.

Once these instructions are given to the device, it must be able to carry them out completely and without any need for further intelligent human intervention. At the end of the required operations the device must record the results again in one of the forms referred to above. The results are numerical data; they are a specified part of the numerical material produced by the device in the process of carrying out the instructions referred to above. (emphasis not in the original)

Von Neumann thus describes a computer in which programmes run “without any need for further intelligent human intervention”. If one accepts this definition as it stands, one can deduce that computers do not need a user interface for users and programmers at all. Of course, von Neumann was also aware that actual computers definitely needed some controls, because after all, they were machines and machines had to be controlled. For example, a computer needed at least buttons to switch it on and off, to start and interrupt operation and to read in the programme and data from the punched card or punched tape reader. In addition, of course, it had to have some kind of output device, such as a teleprinter or an electric typewriter. This interface would be the absolute minimum. The control panels of the computers of the time usually had considerably more displays and buttons, ranging from displays of the activity of individual components to extensive representations of the memory contents and lamps indicating alarm conditions. In fact, however, there was no user interface for the programme itself. In keeping with von Neumann’s description, the programme ran entirely without human intervention.

This way of working may seem impractical and untenable to you. In fact, however, this type of operation was the standard in many areas until well into the 1970s. The separation of user and machine was even reinforced, as you will see in the next chapter.

Jobs and Batches

Large computer systems, such as those found in universities, research institutions and some companies from the 1960s onwards, were very powerful compared to early computers such as the ENIAC or the Z4, but they were also very expensive, both to buy and to maintain. Operating the computers themselves required expertise, extensive training and a lot of experience. The few people who had this knowledge, usually called operators, faced a large number of users who wanted to use the computer for their calculations and for other forms of data processing. The high demand for computer use and the need to utilise the expensive, powerful computer as efficiently as possible led to the mode of operation already explained in the previous chapter, in which the computer users had no contact at all with the computing machine.

  • The users created the programme and had all the necessary data available in the form of punched tape or punched cards. The programme and data together formed a computing job.
  • The punch cards or punch tapes were handed in at the computer centre. There they were sorted into a kind of queue while the computer was still processing the jobs of other users.
  • When it was the turn of the corresponding job, the programme was first read in from the punched cards or from the punched tape and copied into the memory command by command. It was then available as a stored programme and could now be started.
  • The programme read its input from the punched cards or punched tapes in the course of its processing. It was possible for the programme to first read the data completely and transfer it to memory, but this limited the possible amount of data due to the limited working memory.
  • During processing, the programme produced output in the form of printouts using a high-speed printer or electric typewriter, or in the form of new punched cards or punched tapes.
  • After running the programme, the punched tapes or punched cards that one had submitted were placed in a return tray along with the output produced, from which they could be collected by the user on occasion.

This way of working, called job-based, remained the same from the user’s point of view on large computers at universities, research institutes and most companies for many years. Behind the scenes, however, optimisations were made, because the way of working described above wasted valuable resources of the computer, especially if it was a computer with a fast arithmetic unit. Ideally, the computer’s processing unit should calculate without interruption during the entire operating time of the computer. However, if the computer was operated as described above, it often had to wait for quite a long time and could not continue. The main reason for this was the reading in of punched tapes or punched cards. Even when so-called high-speed readers were used for this purpose, they were as slow as a snail compared to the processing unit of a large university computer. Valuable processor time was therefore wasted waiting for the programme to be read in. The same problem occurred when reading in the data during the runtime of the programme and also when generating the outputs. Large organisations used high-speed printers for this, but of course even such a high-speed printer was many times slower than the computing unit of a mainframe computer.

The UNIVAC I, control panel in front, tape drives in the background - Image: United States Census Bureau (Public Domain)

This problem was mitigated by no longer using slow devices such as punch card readers or punch tape readers for input and printers and punches for output, but a much faster medium. The medium of choice here was initially magnetic tape, which could be read and written much more quickly. Computers that read only from magnetic tapes and wrote only to magnetic tapes had to wait much less for read and write operations, so the computing unit was utilised significantly better. The UNIVAC I from 1951, pictured above, was a computer that relied exclusively on magnetic tape as its input and output medium. But now a new problem arose: how do the programme and data get onto the tape, and how do the results get back off the tape? Since the users of such a computer could neither write to the tapes themselves nor read the output stored on them, a number of external additional devices had to be used. There were units for copying stacks of punched cards onto magnetic tape and others for punching data stored on magnetic tape back onto punched cards or sending it to a printer.

Batch processing

Of course, the use of magnetic tape technology only had a real advantage for the utilisation of the computer if as little time as possible was wasted changing tapes. Copying each individual job onto a magnetic tape first, then inserting it into the computer, waiting for the output, removing the output magnetic tape and printing or punching out the results did not really make sense, because there were still long waiting times due to changing the tapes. Magnetic tape technology only showed its real advantage when the computer could process one job after the other without changing the tape. It was precisely in this direction that optimisations were made. Jobs were no longer fed to the computer one after the other, but in so-called “batches”.

In “batch processing”, or in German “Stapelverarbeitung”, the job data, i.e. the programmes and data of the computer users, were copied from punched cards and punched tape onto a magnetic tape. There was not just one job on a tape, but a multitude. Such a collection of jobs was called a “batch”. A batch was fed to the computer as a whole. If the computer had, for example, two magnetic tape drives for reading in the jobs, waiting times due to tape changes could be almost completely avoided, provided there were enough jobs. The computer in batch mode worked through the programmes one after the other. The outputs were written to an output magnetic tape one after the other in the same way as the job data. When this tape was full, it was taken from the computer and put into an auxiliary machine, which printed or punched out the output produced, according to the user’s wishes. All slow input and output operations were thus decoupled from the main computer and no longer put any strain on it.
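
A hedged sketch of the batch idea described above follows: slow card decks are copied to a fast tape offline, the main computer then works through the whole batch without tape changes, and all output goes to a second tape that is printed offline afterwards. All names and the stand-in “computation” are invented for illustration.

```python
# Illustrative batch-processing sketch (all names and numbers invented):
# offline: card decks -> input tape; main computer: input tape -> output tape;
# offline again: output tape -> printer.

def copy_jobs_to_tape(card_decks):
    """Peripheral device: copy each job's cards onto one input tape (a batch)."""
    return [{"job": name, "cards": cards} for name, cards in card_decks]

def run_batch(input_tape):
    """Main computer: process one job after another, no tape changes in between."""
    output_tape = []
    for job in input_tape:
        result = sum(job["cards"])              # stand-in for the real computation
        output_tape.append(f"{job['job']}: result {result}")
    return output_tape

def print_output_tape(output_tape):
    """Peripheral printer, again decoupled from the main computer."""
    for line in output_tape:
        print(line)

batch = copy_jobs_to_tape([("JOB-A", [1, 2, 3]), ("JOB-B", [10, 20]), ("JOB-C", [7])])
print_output_tape(run_batch(batch))
```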

With the introduction of batch processing of this kind, one of the last opportunities for intervention that had previously existed through human operators was lost. A human operator could easily change the order of the computational jobs, prioritising them, and could even stop processing a job if a more important one came in that required immediate attention. This no longer worked in this form, because the programmes came one after the other from the magnetic tape and there was no one left to make such a decision. This shortcoming was of course recognised and eliminated over time. The computing facilities were further developed so that they could manage the computing times of the computer users and decide according to priority lists which programme should be executed and when. The management system automatically terminated programmes that took too long to calculate or used other resources excessively. It was now even possible, when a high-priority job arrived, to interrupt a programme that was currently running, bring the important programme forward and continue the interrupted programme at the point where it had been interrupted. For all this to work, however, a few technical and organisational prerequisites were needed.

From then on, it was necessary to specify in advance for each programme what priority it had and how many resources it would take up. This information had to be available to the computer system for all pending jobs in a batch so that it could select one of them for execution. Of course, this did not work as long as the information only sat at the beginning of each job on a magnetic tape that had to be read in bit by bit. Either this job information had to be provided additionally at the beginning of the tape, or the complete job data had to be copied by the computer into a memory that could be accessed without much time delay and without major rewinding. Such storage systems with so-called random access (mostly hard disks) came onto the market in the mid-1950s.

In addition to this hardware requirement, there was of course a very important software requirement: many tasks that had previously been carried out by an operator now had to be carried out by the machine itself. This was done by software, a so-called monitor system or operating system. From now on, this system took care of loading job data and providing and assigning output data, among other things, instead of a human operator. Another important task of operating systems was (and still is) the handling of errors in the programmes. If a programme encountered an error situation during its execution, this was not allowed to put the computer into an alarm state and stop all further operations, as had been common before, because after all there was still a long queue of other jobs that also had to be processed. Instead, corresponding outputs had to be generated and the processing of the batch had to be continued with another job. A further important task of the operating systems was the management of priorities and runtime accounts. Student jobs at universities, for example, were given low priority and little computing time. If this computing time was used up, processing was automatically aborted. If, during the processing of low-priority jobs, one with the highest priority came in, the current processing was interrupted if necessary and the more important job was brought forward.
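
To make the bookkeeping of such an early operating system a little more tangible, here is a minimal sketch in Python - my own illustration, not modelled on any particular historical monitor. Jobs carry a priority and a time budget; an arriving high-priority job preempts the current one, and a job that exceeds its budget is aborted:

class Job:
    def __init__(self, name, priority, budget, length):
        self.name, self.priority = name, priority   # smaller number = more urgent
        self.budget, self.length = budget, length   # allowed vs. actually needed steps
        self.done = 0

def schedule(arrivals):
    """arrivals: dict mapping a time step to the jobs submitted at that step."""
    waiting, time = [], 0
    while waiting or any(t >= time for t in arrivals):
        waiting += arrivals.get(time, [])
        if waiting:
            waiting.sort(key=lambda job: job.priority)   # most urgent job first
            job = waiting[0]
            job.done += 1
            if job.done >= job.length:
                print(f"t={time}: {job.name} finished")
                waiting.remove(job)
            elif job.done >= job.budget:
                print(f"t={time}: {job.name} exceeded its time budget, aborted")
                waiting.remove(job)
        time += 1

schedule({0: [Job("student job", priority=9, budget=3, length=5)],
          2: [Job("payroll run", priority=1, budget=10, length=2)]})

In this made-up example, the urgent payroll run arriving at step 2 preempts the student job, and the student job is later aborted after using up its three-step budget.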

Even when reading data from tape drives and hard disks, computing time was still wasted. While these were of course much faster than punched tape or punched cards, the speed difference to a fast computing unit remained. More and more technical tricks, however, achieved an ever better utilisation of the computer. Computing systems from IBM, for example, could hold several programmes in memory at the same time. This meant that while one programme was busy with comparatively slow input/output operations, the operating system could switch to another programme and keep the processor working. This early form of multitasking was called “multiprogramming”. The price for the increased throughput provided by this technique was, of course, a system that had to have a lot of RAM and was accordingly very expensive.

All the optimisations by means of magnetic tapes, hard disks and operating systems mentioned here happened behind the scenes as far as the users of a computer were concerned, because they still had nothing to do directly with the processes in the computer centre and did not get to see the computer itself at all.

Coding sheet with part of a COBOL programme

The complete programming process still took place beforehand and only with analogue, i.e. mechanical, means. Initially, programming was done with a pen on paper, on so-called “coding sheets”. Above is a coding sheet for an IBM computer system. When this phase of programming was finished, the code had to be transferred to punched cards. When programming with punched cards, one card usually corresponded to one programme line. In addition, there were data cards that were either created in the same way or received as output from a previous programme. As an instruction for the operating system, each job was preceded by a job control card. It served on the one hand to separate jobs, but also to specify the programming language - more on this in a moment - as well as other process details such as priority and the maximum permitted runtime. The programmer then handed in this deck of punched cards - preferably without dropping it first - at the computer centre. Everything that happened from then on eluded the user. The time from submitting the programme to getting the result, the so-called “turnaround time” or “job dwell time”, could be very long in some circumstances. The University of Münster described this impressively in the newsletter “inforum” of the university computer centre⁷:

Every order for the execution of programmes (in short: job) that is handed over to the computer must be managed by the operating system. This task is performed by the operating system component HASP (Houston Automatic Spooling Priority System). HASP reads the jobs from the card readers in the data centre and at the terminals. […] Each job is sorted into one of the following queues according to CPU time and memory requirements:

Queue - Source: "inforum" of the University Computer Centre of the University of Münster, April and July 1977.

[…]

The user can specify time and memory requirements of his job on the JOB card. For example, the job card //ABC99XY JOB (ABC99,0020,Z23),A,USER,REGION=155K requests 20 minutes of CPU time in 155 KBytes of main memory. HASP therefore sorts the job into queue H. Specifying CLASS=H instead of REGION=155K has the same effect.

The average time a job is in the machine from reading to printing according to its class and reading time (job dwell time) can be taken from the following table for the current machine operation[^Job dwell times].

[^Job dwell times]: Source: “inforum” of the University Computer Centre at Münster University, April and July 1977.

The execution of the job given in the example of the University of Münster, with twenty minutes of computing time and 155 KB of required main memory, thus took more than one day on average in March 1977. How critical this long time was could vary. If you had an already polished programme that you used over and over again and for which you only adjusted the data, you could probably live with the long wait for the result. However, in the science business in particular, it was often the case that new, quite specialised calculations had to be made, so new programmes had to be written for most tasks. These programmes were not reused often, but served to solve exactly one problem. Once it was solved, there were other tasks that required other programmes, which then had to be written from scratch again.

Especially during the development period of such a programme, the long turnaround time was a major problem. This can be easily understood by anyone who has ever programmed. To all others, be assured: a programme is almost never correct on the first try. Almost always, there is a series of error messages during the first attempts at execution, or one finds out that the programme is syntactically correct, i.e. correctly written in the programming language, but unfortunately does not do what one had imagined. So it almost always takes several correction steps before a programme works correctly. Now imagine that with programming on paper, with manual transfer to punched cards and, above all, with an answer that only comes hours later or the next day. Error correction or programming through step-by-step refinement, as is common today, thus took hours to days. The only remedy that made programming computers in batch mode at least somewhat more comfortable was the advent of so-called “high-level programming languages”.

High-level programming languages

Using a computer in the 1950s, 1960s and 1970s usually meant that users had to write their own programme to solve a problem, for example to perform a calculation. Since the computer itself was not accessible to its users, the programming language with which it could be programmed was the only thing that came at all close to a user interface. Until the end of the 1950s, computers could only be programmed in machine code or in assembly code very close to the machine. Programmers who wanted to solve a task with a computer had to know the architecture of the machine quite well. For example, they had to know how many registers a machine had, how the memory was organised and with which instruction set the machine was to be programmed. Programming in this way is still possible today and is still done now and then when the highest performance is required. However, the majority of programming has long since been done not this close to the machine, but in so-called “high-level programming languages”. The first of these languages appeared in the late 1950s.

Let us first look at the machine-oriented programming of a computer using a simple example. The process of programming in a machine-oriented programming language is always accompanied by a decontextualisation of the problem. All clues to the meaning of what is being programmed are lost in the process. A solution to a problem in the form of a computer programme is thus always very difficult to understand. The example I would like to explain to you here is a small programme fragment that determines whether someone is “broke”. In natural language, we want to define, slightly simplifying, “being broke” as: “If someone spends more than he takes in, he is broke”. Mathematically formulated, one could write this down as "broke: a > e". This definition looks very formal, and it is. However, some hints of human-understandable semantics have survived. The choice of the word broke and the choice of the variable names a for expenditure (from the German “Ausgaben”) and e for income (“Einnahmen”) are not part of the mathematical formalism, but refer to the problem domain. One could also choose completely different identifiers without changing anything in the formal specification. However, human comprehensibility would suffer if we left out these clues to meaning. Unfortunately, this is exactly what we have to do when we write down the problem as a machine programme. The result is then a programme like the following[^RAM Explanations]:

0: LOAD 2
1: SUB 1
2: JGTZ 5
3: LOAD=1
4: JUMP 6
5: LOAD=0
6: STORE 3
7: STOP

The “problem” to be solved was broken down into a series of instructions that were written down line by line when transferred into the computer programme. The values for income and expenditure are expected in two registers of the machine. A register can be thought of as a storage location for exactly one value. The value for expenditure is stored as a number in register 1, that for income in register 2. Register 3 contains a 1 at the end of this mini-programme if the person is broke. Otherwise, a 0 is stored there. The machine being programmed here has a special memory location called an “accumulator”. All arithmetic operations of the machine are applied to this accumulator. The programme starts in line 0 and then runs as follows:

  • In line 0, the value of register 2 is loaded into the accumulator. “Loading” means nothing other than that the value is copied there. The value in the accumulator is now the same as that in register 2, i.e. the value of the income.
  • In line 1, the value from register 1 - the expenses - is subtracted from the value in the accumulator. The accumulator then contains the result of this calculation, the value of the income minus the value of the expenditure.
  • Line 2 is a conditional jump. The value in the accumulator is checked to see if it is greater than 0 (JGTZ for Jump if Greater Than Zero). If this is the case, continue in line 5.
  • Let us assume that the value in the accumulator is less than 0, i.e. the person is broke. In this case, the condition does not apply. The jump is therefore not executed, but it is continued in line 3. In this line, the value 1 is written to the accumulator.
  • Line 4 is again a jump command, but in this case a jump without condition. So it continues in line 6.
  • If the check in line 2 had proceeded differently, the programme would not have continued in line 3, but in line 5. Here, too, a value would have been loaded into the accumulator, but here it would have been a 0.
  • In line 6, the value from the accumulator, i.e. a 1 or a 0, is stored in register 3.
  • In line 7 the programme ends.

You have probably noticed: In order to be able to write this programme at all, you have to know a lot about the architecture of the computer. You have to know what registers are and that the computer has an accumulator. The commands must also be known. You also have to remember many things, such as which value is stored in which register. Above all, you have to manage to programme without any hints of human-understandable semantics. In practice, one could help oneself with comments that one wrote down in addition to the programme commands. However, this did not change the fact that the programme itself was completely devoid of references to meaning.
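
If you want to retrace the walkthrough, the little machine can be imitated in a few lines of a modern language. The following Python sketch is my own illustration, not a historical tool; it executes exactly the eight instructions listed above:

def run(program, registers):
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":    acc = registers[arg]         # copy a register into the accumulator
        elif op == "LOAD=": acc = arg                    # load a constant
        elif op == "SUB":   acc -= registers[arg]        # subtract a register from the accumulator
        elif op == "STORE": registers[arg] = acc         # copy the accumulator into a register
        elif op == "JGTZ":  pc = arg if acc > 0 else pc  # conditional jump
        elif op == "JUMP":  pc = arg                     # unconditional jump
        elif op == "STOP":  return registers

broke_check = [("LOAD", 2), ("SUB", 1), ("JGTZ", 5), ("LOAD=", 1),
               ("JUMP", 6), ("LOAD=", 0), ("STORE", 3), ("STOP", None)]

# register 1: expenditure, register 2: income, register 3: result
print(run(broke_check, {1: 120, 2: 100, 3: 0}))   # spends more than earned
print(run(broke_check, {1: 80, 2: 100, 3: 0}))    # earns more than spent

With expenditure 120 and income 100 the programme leaves a 1 in register 3; with expenditure 80 and income 100 it leaves a 0 - just as the walkthrough describes.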

In programming, two very different ways of working seem to be irreconcilably opposed to each other. In the sphere of the user, the meaning of what is to be solved plays a major role. It is important what the numbers used in the calculation stand for and what the result means. The computer’s way of working, on the other hand, is completely devoid of meaning and is based purely on the form and arrangement of signs. These characters control the computer, which then adds, compares, loads or jumps to another position in the programme accordingly. The computer does this completely independently of what the programme means to the person using it. High-level programming languages try to bridge precisely this contrast. On the one hand, they abstract from the internal structure of the machine, which relieves programmers of the necessity of knowing overly specific technical details that have nothing to do with the problem to be solved, but exclusively with the device. That is already a great help. But it is perhaps the linguistic aspects of programming languages that are even more important for good programmability.

High-level programming languages, in fact, fulfil two roles at the same time: on the one hand, they are languages that, unlike natural languages, are completely formally specified. This is the only reason why they can be used to control the operations of a computer. Nevertheless, they offer basic structures of natural language, so that a “text” is created that is also somewhat understandable for humans. This natural-language character starts with the keywords and constructs of the language itself. Instead of confusing jumps to programme lines, there are constructs such as if, then and else, and a function semantics with parameters and return values that is modelled on mathematics. Furthermore, the possibility of giving programme constructs their own names makes programming considerably easier. For the automatic processing of a programme by a computer, the name of a variable (a stored value) or of a sub-programme is irrelevant. However, it helps the programmer to understand the meaning of what a programme construct does. It makes a considerable difference for understanding whether one has to refer to something as abstract as “register 2” or “x”, or whether the understandable word expenditure appears in the programme.

The first high-level programming languages with these features appeared in the late 1950s and were increasingly used in the 1960s. The new languages had different characteristics depending on their intended field of application, from Fortran (Formula Translation) and Algol (Algorithmic Language) for scientific calculations to COBOL (Common Business Oriented Language) for business data processing. The programme explained above looks as follows in COBOL and Fortran:

COBOL:

IF expenditure GREATER income
    MOVE 1 TO broke
ELSE
    MOVE 0 TO broke
.

Fortran 66⁸:

IF (expenditure > income) THEN
  broke = 1
ELSE
  broke = 0
END IF

Programmes in a higher programming language cannot be executed directly by a computer. Special programmes called “compilers” first translate the programme code into the native machine code, which is then executable. For this reason, when using a computer in batch mode, programmers had to specify the programming language used on the job control card of their jobs. The operating system then automatically executed first the corresponding compiler and then the programme generated by it in machine language.

Programming languages are not usually associated with user interfaces. In the context of today’s systems, this is certainly justified, because hardly any users today have to programme their computers themselves, and if they do, it is seen as something other than using the device. A relatively clear distinction is made between use and programming. At the time, however, when a computer user was always also a computer programmer, the nature of a programming language was essential to how efficiently the computer could be used. The characteristics of high-level programming languages anticipated something that we will observe again and again in the further course of user interface development. They enabled the user of the computer to think in a way that was very different from the technical reality of the machine. In a high-level programming language, users deal with variables and functions that can be addressed by name instead of registers and jump commands. The structures inside the computer remain hidden from them. This creation of a world of use by the computer system, which has completely different characteristics than the technical structure, is a basic characteristic of interactive user interfaces, which I will talk about intensively later. First, however, I would like to show you that the batch use described here, with its great distance between man and machine, was by no means without alternative.

Early Real-Time Systems

In the previous chapter you learned how computers were used in batch mode. This revealed problems caused by the very long waiting times between submitting the programme and receiving the result, and by the need to programme on paper with punched cards and tapes, without any contact with the computer. When the history of the computer is told, it is often presented as if there had been no alternative to batch operation and as if more direct computer use was only invented relatively late, sometime in the 1970s. However, this is not true in such absolute terms. Although batch operation was certainly dominant, it was already possible at that time to operate computers in so-called “real-time operation”. The terms “on-line operation” and “dialogue operation” were also widespread at the time.

In universities and research institutes in particular, computing systems that operated in batch mode dominated for quite a long time. This type of computer use was so widespread here mainly because there was a very large number of users who wanted to use computer services. Due to the requirements of scientific calculations, only a well-equipped computer with a lot of memory and the ability to calculate quickly came into question. Since such computers were large purchases and therefore expensive, a single computer had to be sufficient to process a large number of jobs from a large number of users. Batch operation was most suitable here, despite the disadvantages described. However, even then there were usage scenarios in which high computing power was not the most important thing, in which there were only a few users who wanted to use a computer, or in which only a small number of rarely changed programmes were used. For these areas of application, computers were available early on, one could almost say from the beginning, which were used in real-time operation. Let’s take a look at some such systems with very different characteristics below.

Land consolidation: Zuse Z11

In the first chapter you already learned something about Konrad Zuse’s computers. The focus was on the Z4. We now look at a slightly later machine, the Z11. This Z11 was the first computer from Zuse KG to be mass-produced. The idea behind Konrad Zuse’s computer development was, as already explained, his experience that engineers have to perform the same calculations over and over again with different values. From this thought it follows quite directly that Zuse did not have machines in mind that were constantly being equipped with new programmes. He rather assumed that there were a few “calculation plans” that were used again and again for new calculations with different starting values, but were themselves rather rarely changed.

Control panel of the Zuse Z11 - Image: Dr. Bernd Gross (CC BY-SA 4.0)

Shown here is part of the Z11’s control panel. At its centre is a keyboard for decimal numbers. This keyboard was used by the engineer to enter the values of his calculation. A button on the console was then used to start the calculation. The results were output on a connected electric typewriter. A typical operational scenario for the Z11 was land consolidation⁹. To carry out such consolidations, a series of calculations had to be made over and over again for different cases. Thus, only a small number of programmes were needed for the tasks of land consolidation, and these were used very frequently. Since the programmes were so typical for the purpose and required so little variation, the most important of them were even permanently installed, so they no longer had to be read in from punched tape. The punched tape, however, naturally allowed for more advanced calculations and the use of the computer in other fields of application.

The Z11 was a very pleasant computer, especially for the employees in the surveying offices, as it required little knowledge of computer technology from the users and also delivered results quickly. The programmes could be started at the control panel by pressing a button. Then only the numbers had to be “keyed in” in a fixed way and the calculation started. Immediately after receiving the results, the machine was available again for the next calculation. There was no need to bother with computer internals such as number coding, registers or jump commands when using this method.

The Zuse Z11 was a rather slow computer, even for its time in the mid-1950s. Just like the Z3 and Z4, it worked with telephone relays and, like them, did not have a stored programme in the sense of a programme in which any point could be jumped to. In scenarios like land consolidation, however, this was not a bad thing. Speed was not the decisive criterion for this computer. The aim was neither to keep the computer running permanently nor to optimise computing speed and throughput. The advantage of the computer was rather to be available for its users at any time. Although the computer was slow as such, it delivered results to the users much faster than if they had used a fast computer in batch mode and then always had to wait a long time for the results.

Accounting: IBM 305 RAMAC and IBM 1401

Other typical areas where the requirement to quickly read in many different programmes and satisfy the diverse needs of different users was not a priority were accounting, warehousing and similar administrative processes. Such operations are characterised by the fact that there are large amounts of data that need to be processed. Each storage and retrieval in a high-bay warehouse, for example, is a data item of its own and therefore, at that time, corresponded in principle to its own punched card that had to be processed. In this context of use, it is still the case today that the number of different programmes is limited and the existing programmes are very stable, which means that they only need to be changed very rarely. Even before the beginning of the computer age in the narrower sense, machines that processed punched cards were used for administrative purposes of this kind. The IBM company, for example, offered devices that could perform calculations on data stored on punched cards or output it as tables. In the tradition of these punched card systems stood the IBM computers optimised for this kind of data processing, two of which we will look at here:

IBM 305 RAMAC

In 1956, IBM launched the IBM 305 RAMAC system. The abbreviation RAMAC stands for Random Access Method of Accounting and Control. “Random access” means that the stored data can be accessed directly, in any order, without having to go through stacks of punched cards or search sequentially for the data on tapes. For this random access, a new data storage device, the hard disk, was specially developed by IBM. You can see this hard disk in the picture as a stack of magnetic disks behind the lady, diagonally to the left under the lettering “RAMAC”. The target audience for the 305 RAMAC was not scientists or other users with complex computing needs, but the companies that had previously used IBM punched card machines and printers to sort, tabulate and accumulate data.

IBM 305 RAMAC - image courtesy of IBM

Users of the IBM 305 RAMAC could select and run a programme via a programme selection wheel, much like the Zuse Z11. The programme was either hardwired into the machine via cable connections or was located on the machine’s drum memory. Such a drum memory was not unlike a hard disk in principle. It was a rapidly rotating roller that was provided with a magnetisable layer. Several fixed read-write heads were distributed along the length of the drum. They are located in the illustration behind the clearly visible connection cables, which are led out to the front. The stored data rotated away under the read-write heads. The storage space on such a drum was relatively limited. The drum memory of the RAMAC could store a total of 3200 characters on 32 tracks. The magnetic drum formed the programme and working memory of the computer. A big advantage of this technology compared to today’s memory modules was its persistence. Modern RAM modules lose their memory content as soon as they are no longer supplied with power. If a computer with magnetic drum memory was switched off, however, the memory contents were retained. So the next time the unit was used, the previously loaded programmes could be started again directly and did not have to be read in again. All variables and configurations stored on the drum were also retained.

Magnetic drum storage - Image: Robert Freiberger from Union City, CA, USA (CC BY 2.0)

The RAMAC’s programmes processed input data in the form of punched cards, but also had access to the data on the hard disk and on the magnetic drum memory. Output could be made on a printer, on punched cards or on a console typewriter, which was really nothing more than an electric typewriter with a computer connection. Unlike university mainframe computers, a RAMAC system did not process data continuously. The highlight of the system was rather to process data in the form of punched cards directly when they arrived, instead of doing this in one large programme run only once a day. The current total state of the data was persistent, i.e. permanently accessible, on the disk stack. If, for example, up-to-date information about stock levels was needed, the corresponding programme could be selected or, if necessary, read in from punched cards. The programme then generated the report based on the data on the disk stack, without large amounts of input data having to be supplied again via punched cards. Data punch cards were thus no longer the basis of all data processing, but only served to record changes to the data, which was stored on the hard disk and thus permanently accessible.

The improvements in data storage are certainly interesting and undoubtedly an advance in computer technology, but they are not in themselves the reason why I am dealing with them here. The exciting thing about this computer system, at least from my perspective, is the console (called “das Konsol” in German IBM-speak), consisting of a keyboard and, if necessary, an additional connected typewriter. In the picture above, the lady is sitting at this console. The console allowed for some form of interactivity. There were no interactive programmes in which users could make inputs and thus control the course of the programme - programmes ran quite classically without human intervention. What was possible, however, was to read out and manipulate data stored in the drum memory. This functionality was mainly of interest for troubleshooting and corrections. Even more interesting was the ability to retrieve data records from the hard disk and have them output directly. However, this output was still independent of any programmed user interface and therefore had to be done very close to the machine. The user had to know the physical location of the stored data on the disk. By specifying this location, the data could then be output on the console typewriter.

The 305 RAMAC was not a particularly computationally powerful machine. Complex calculations would certainly have been possible with it, but they would have been quite slow. Using a magnetic drum as working memory was also quite slow. But that didn’t matter, because speed and computing power were not the focus of this machine either. Typical accounting and administration programmes did not perform complex calculations, and since the computer was not operated at full capacity, working speed was not a limiting factor.

IBM 1401

The IBM 1401 appeared just three years after the 305 RAMAC. Introduced in 1959, IBM’s 1400 series could be ordered and used in a variety of configurations. For example, it could be used, quite classically, in batch mode. In many universities, 1401 computers were used to prepare and read out the magnetic tapes of larger computer systems, i.e. to copy data from punched cards onto the tapes and to punch out and print existing data from the tapes. In addition to these operating possibilities, the computers also allowed a mode of operation similar to that of the 305 RAMAC, but went even further at a decisive point. The 1400 series computers allowed data to be entered via a special console, which was then processed by the running programme. The operating instructions for the IBM 1401¹⁰ describe working with the console (with the internal designation 1407) as follows:

When an inquiry is to be made, the operator presses the request enter key-light. As soon as the system is free to act on the request, the enter light comes ON and the operator can type the message and enter it into 1401 core storage.

When the system completes the processing of the inquiry, it is transferred to the inquiry station by the stored program. The message is typed, and the operator may act on the reply.

The advantage of this way of working is also highlighted:

An account record or stock-status record needed by management can be requested by the operator and made available in a short time. Thus, management can, at a moment’s notice, request information from the 1401 system and have an answer almost instantaneously.

IBM 1407 Control Inquiry Station - Picture: Reference Manual IBM 1401 Data Processing System. IBM. 1962.

To retrieve data from the 1401, it was no longer necessary to specify physical memory addresses. Rather, the computer allowed “messages” to be entered in a special mode. The messages - today we would perhaps rather say “requests” or “commands” - could be anything. If we assume warehousing, for example, messages to request stock levels or changes to stock data would be obvious. So the IBM 1401 allowed a kind of command mode and thus a fairly interactive way of working. However, it was not yet an interactive programme as we imagine it today. A programme did not prompt the user for input. Instead, the user had the option of interrupting the running programme to make a “request”. This input was written to a fixed area of the computer’s memory, from where the programme could read and process it. This may seem somewhat cumbersome to us today in terms of use and programming, but the system already fulfilled the most important potential of interactive interfaces, namely responsiveness: it enabled an immediate reaction to a user request. The instructions make this clear. A request can be made “at a moment’s notice”, the answer comes “almost instantaneously”.
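
The mechanism can be pictured roughly as follows. This is only a sketch of the idea in Python, my own illustration and of course not IBM code; the item number and the wording of the message are made up:

inquiry_area = None                     # stands in for the fixed core-storage area

def console_types(message):             # what pressing "request enter" amounts to here
    global inquiry_area
    inquiry_area = message

def main_program(steps):
    global inquiry_area
    for step in range(steps):
        # ... normal processing of punched-card data would happen here ...
        if step == 2:                   # at some point the operator makes an inquiry
            console_types("STOCK LEVEL OF ITEM 4711?")
        if inquiry_area is not None:
            print(f"step {step}: reply typed at the console for '{inquiry_area}'")
            inquiry_area = None

main_program(5)

The programme is not driven by the input; it merely checks the fixed area between its normal processing steps and answers if something is there.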

Flight booking: SABRE

Another important example of early real-time applications is American Airlines’ SABRE (Semi-automated Business Research Environment) system. What is particularly interesting about this system is the US-wide networking of input terminals with a central computer. Running as a prototype from 1961 and in full use from 1964, this system managed all bookings for American Airlines. When a customer of the airline booked a flight by phone or came into one of the airline’s service points, customer service staff could contact a central computer via special terminals, which accepted and processed the requests in real time. There were over a thousand of these terminal stations spread throughout the country. The terminals were connected to the central computer by telephone line. They could be used to book a flight directly. The system gave the airline’s service agent information about available seats and processed the customer’s passenger data without any significant delays. Missing information on a booking was immediately corrected by the system. The terminals could also be used to change or cancel existing bookings. By entering the name of the customer and the flight booked, an earlier booking could be retrieved and then changed. Since it was a networked system in which all terminals had access to the same database, a change did not have to be made at the same terminal, but could also be made in a completely different city, for example, if a traveller wanted to rebook the return flight at the destination.

SABRE terminal - single image from "An Introduction to SABRE", American Airlines, 1961.

The illustration above shows a terminal by means of which service staff could make flight bookings. The central element was an electric typewriter. To the left of the typewriter was a recess in the table for so-called “Air Information Cards”. These cards contained both readable information for the service employee and machine-readable punch holes so that the terminal could identify a card. The cards were placed in a holder above the typewriter. With buttons on the left and above, fields (rows and columns) could be selected on the card. To the right of the typewriter, next to a dial for telephone calls, was the so-called “director”. With this, commands for booking, cancellation and for specifying seat requests etc. could be executed directly. The typewriter itself was used to output messages from the central computer and to enter textual data such as names, addresses and telephone numbers.

The SABRE scenario also has a very different characteristic from university and scientific computer use, but it is also different from the accounting scenario. At SABRE, all users work with the same programme, the flight booking programme. The programme is fixed for them. You don’t have to programme or load it first and you can’t change it either. The programme is changed rather rarely, and when it is, it is done by people other than the customer service staff. It was completely different in the scientific field: each user came along with his or her own programme that was supposed to process very different kinds of data. Also, SABRE was not a system that needed high computing power, because hardly anything worth mentioning was calculated during a flight booking that would have gone beyond counting free seats.

Science and technology: LGP-30

In the fields of science and technology, too, it could of course be very advantageous to have a computer on site and to be able to operate it without waiting times and without the administrative effort of a mainframe with batch processing. The advantages could definitely outweigh the compromises in computing speed.

LGP-30 - Image courtesy of the Computer Museum of Informatics of the University of Stuttgart

Computers that matched this profile were the LGP-30 from the Librascope company, dating from 1956, and its successor, the LGP-21, from the mid-1960s. In Germany, the machines were built under licence by the company Schoppe & Faeser. The functioning computer pictured above is in the Computer Museum of the Computer Science Department at the University of Stuttgart. The actual computer is the right-hand one of the two boxes. It is relatively compact by the standards of the time, having about the volume of a smaller freezer.

As with the IBM 305 RAMAC, the LGP-30 was a computer with drum memory. The drum was the central element of this machine, as there was no other mass storage. As the drum was magnetic, the memory content was retained after switching off. So once you had loaded a programme onto the drum, you did not have to reload it after switching the machine off; it could be executed immediately after the computer was switched on again. The drum could store 4,096 words of 31 bits each. If the term “word” means nothing to you in this context, it doesn’t matter. I don’t want to go into too much technical detail at this point, so I’ll give you a rough outline to give you an idea: If you wanted to fill the drum completely with text characters and estimate six bits per text character, there would be room for about 20,500 to 21,000 characters. That is about twelve A4 pages. So you can imagine it roughly as if you had a computer with a working memory of twenty kilobytes - not gigabytes, not megabytes, but kilobytes - and these twenty kilobytes of working memory are also the hard disk of your computer.
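
The rough estimate can be retraced with a few lines of arithmetic. The lower figure corresponds to packing five whole six-bit characters into each 31-bit word, the upper one to counting every single bit; the figure of roughly 1,800 characters per A4 page is my own assumption:

words, bits_per_word, bits_per_character = 4096, 31, 6
print(words * (bits_per_word // bits_per_character))   # 20480 characters: five whole characters per word
print(words * bits_per_word // bits_per_character)     # 21162 characters: counting every single bit
print(20480 / 1800)                                    # roughly 11.4 A4 pages at about 1800 characters each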

Rebuilt typewriter as console of the LGP-30 - Picture: Marcin Wichary from San Francisco, U.S.A. (CC BY 2.0)

The central input and output device of the LGP-30 was a converted Friden Flexowriter typewriter, itself a converted IBM typewriter, to which a punched tape reader and punch had been added. The most important extension of this machine was an interface that passed characters typed at the keyboard or read by the punched tape reader on to the computer, and printed characters coming back from the computer as if they had been typed directly on the machine. In addition, there were control keys by means of which the computer could be set in motion and with which it was possible to switch between manual input and input from the punched tape. A lamp in the middle of the machine above the keyboard indicated that the computer was expecting input from the user. Finally, some of the machine’s keys were replaced with light-coloured ones. These light-coloured keys correspond to the commands of the machine code and were thus set off visually.

Using inputs from the typewriter, values could be written directly into the computer’s memory. In this way, it was possible to programme the computer directly or at least to start a programme already present on the magnetic drum. To do this, a jump command to the start address of the corresponding programme had to be entered, this command executed and the computer then switched to automatic operation. New programmes could be put on the drum by hand input or by punched tape. A simple operating system had routines that made it relatively easy to load programmes onto the drum and also to run programmes from a given memory address. By means of a compiler for the simple high-level language ACT-III, more complex programmes could be developed without having to resort to the machine language of the computer.

Interesting from the perspective of the user interface was the “input by hand” setting on the console typewriter. If this key was pressed, the computer did not read input from the paper tape reader, but directly from the keyboard of the typewriter. The computer thereby enabled programmes that required values to be entered via the keyboard during their execution and thus also made it possible to control the progress of a programme through user input. So the computer could be used interactively in a very modern sense. Even simple games were possible this way. An interactive editor, i.e. a programme that would have made it possible to keep the source code of a high-level language programme in memory and edit it directly on the computer, had not yet been implemented. Although this should have been possible in principle, the small memory of the computer probably made practical use impossible, because the drum would have had to hold the editor, the programme code being edited, the compiler and the compiled machine code, not to mention any data of the programme. This computer was therefore also programmed in the way described several times before, i.e. first with a pen on a coding sheet and then by punching holes in punched tape. Since it was a computer that was used locally and did not work in batch mode, it was at least possible to change the programmes often and then try them out again and again. In batch mode, this would have taken several days.

The LGP-30 was a computer with relatively low computing and memory speed and only a small memory. In return, it was relatively cheap. In 1956, it cost $47,000, which, adjusted for inflation, is equivalent to almost half a million dollars today. Of course, that is a lot of money. However, a typical computer of the time, such as the UNIVAC I, built from 1951 to 1954, cost almost thirty times as much at $1.25 to $1.5 million.

Interactivity on large computers - UNIVAC I

You were briefly introduced to the UNIVAC in the previous chapter in connection with tape drives. There, it was about improving the utilisation of expensive computers by reading and storing programmes and data on magnetic tapes. Of course, data entry via keyboard runs totally counter to this kind of optimisation. It is interesting to note, however, that a UNIVAC could basically be operated in a similar way to an LGP-30, because the computer’s control console allowed values to be entered into a programme and the computer’s instruction set allowed values to be queried from or output to the console. The computer’s operating instructions describe this¹¹:

The programmer may arrange to use the Supervisory Control as an input-output device. When this is done certain options are available depending on the switch settings. These options are explained in “Supervisory Control Operations”. The two programmed instructions which may be used for input or output in connection with the Supervisory Control are listed.

INSTRUCTIONS

10m: Stop UNIVAC operations and produce a visual signal. Call for one word to be typed from the Supervisory Control keyboard into memory location m. UNIVAC operations are resumed after the “word release button” on the Supervisory Control has been actuated.

50m: Print one word from memory location, m, onto the printer associated with the Supervisory Control. UNIVAC operations are resumed automatically after m has been transferred to an intermediate output storage location prior to printing.

This way of operating the UNIVAC was possible, but it often stopped the computer in its processing and thus wasted valuable computing time. In contrast to the operating instructions for IBM’s 1401, this form of use is not advertised, but the disadvantages are explicitly pointed out:

The ability to type into or print out of any desired memory location during the processing of a problem permits a very flexible control of that problem. However, it is well for the programmer to remember that the time required to execute these instructions is relatively great especially for a type-in instruction which is a human operation and an added source of error.

It is worth noting that the manual does not only point out that humans are very slow in executing inputs; it also problematises human input as an additional source of error. With the UNIVAC, this hint may be interpreted to mean that it is better to avoid this mode of operation. In fact, interactive use of a computer results in an important change in programming. Whereas previously it could be assumed that input data was by and large correct and well-formatted, now a fair amount of effort had to be put into input handling and the processing of input errors. The emphasis shifted: all of a sudden, a large part of the programming effort went not into the actual programme logic, but into its user interface.
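
The shift is easy to illustrate with the “broke” example from the chapter on programming languages. In the following Python sketch - a modern illustration, not period code - the actual logic is a single comparison, while most of the lines deal with asking for, checking and re-requesting the input:

def read_amount(prompt):
    while True:
        text = input(prompt).strip().replace(",", ".")
        try:
            value = float(text)
        except ValueError:
            print("Please enter a number, e.g. 1250.50")
            continue
        if value < 0:
            print("The amount must not be negative.")
            continue
        return value

income = read_amount("Income: ")
expenditure = read_amount("Expenditure: ")
print("broke" if expenditure > income else "not broke")   # the actual programme logic

In batch operation this validation work was largely unnecessary, because the input data arrived on punched cards that had usually been prepared and checked beforehand.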

Shared Time

In the application scenarios presented in the previous chapter, such as the RAMAC computer and the LGP-30, the advantages of real-time processing outweighed the disadvantages of a computer that was not fully utilised and was also quite slow. Real-time use was an option here because few users needed the computer and the problems to be solved with it were clearly circumscribed. Thus, there was no need for particularly well-equipped, fast computers. However, if many users wanted computing services and brought with them a large number of different programmes whose completion required a well-developed computing system, the devices just described and hands-on operation of the computer by the users themselves were no longer an option. Annoyingly, however, it was precisely for these users, who often had to create new, extensive programmes for complex calculations, that batch mode was particularly troublesome, because it made programming very cumbersome, time-consuming and error-prone. Programmes had to be hand-coded and punched, and feedback on almost inevitable programming errors only came after hours or even days. The way out of batch use for large, powerful computers came with the invention of “time-sharing”. The first conceptual and theoretical preliminary work on this began as early as the 1950s, but the first systems that were more than pure experiments were not put into operation until the mid-1960s.

Teletype ASR-33 - Image: AlisonW (CC BY-SA 3.0)

In batch computing, high utilisation of the computer was achieved by the fact that the computer was almost never in a waiting state; there was always a programme being worked on. With time-sharing, too, the goal was to utilise the computer as much as possible. However, this was no longer guaranteed by feeding jobs to the computer at a high rate to be processed one after the other. With time-sharing, the programmes of many users shared the computer’s computing time. They ran quasi-simultaneously, or rather alternated so quickly that it seemed as if they did. With time-sharing, the user now also came into contact with the computer itself. However, this did not happen directly in the computer room, which was still taboo, but by means of so-called “terminals” connected to the computer. Typical terminals for early time-sharing systems were simple teleprinters like the one pictured above. Users could make inputs at the teleprinter and get the outputs of their programme printed directly while the programme was running. A user of a time-sharing system did not have to wait at his teleprinter until another user’s programme had come to an end, because time-sharing systems worked through the users’ programmes in a round-robin procedure. The programmes were switched back and forth in such rapid succession that each individual user had the illusion of having the computer all to himself.
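
The round-robin idea itself fits into a few lines. The following Python sketch is my own illustration of the principle, not the code of any historical system; each programme gets a short time slice in turn and then goes to the back of the queue:

from collections import deque

def time_share(programs, slice_size=2):
    """programs: dict mapping a user name to the number of work units still needed."""
    ready = deque(programs.items())
    while ready:
        user, remaining = ready.popleft()
        work = min(slice_size, remaining)
        remaining -= work
        print(f"{user}: ran {work} unit(s), {remaining} left")
        if remaining > 0:
            ready.append((user, remaining))   # back to the end of the queue

time_share({"alice": 5, "bob": 3, "carol": 4})

Everyone sees steady progress long before the longest programme has finished - which is exactly the illusion of having the computer to oneself.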

The user experience of a time-sharing system with a teleprinter was fundamentally different from that of using a computer in batch mode. Instead of handing in stacks of punched cards and receiving punched cards and printouts back hours later, commands were now given to the computer by typing them in at the teleprinter. The computer in turn responded with text that the teleprinter printed onto continuous paper. Instead of waiting hours for results as before, the results appeared within minutes or even a few seconds. This way of working is very similar to the SABRE system, where terminals were connected to a computer and inputs could be made to which the central computer immediately responded. With the SABRE system, however, there was only one running programme that served all terminals with the same functionality. Time-sharing systems, on the other hand, were designed to make the full flexibility of individual programming available. Each computer user could create their own programmes, run them on the time-sharing system and get a result promptly.

For the performance and utilisation of a computer system, time-sharing naturally had its price:

Working memory requirements: Since many programmes were in memory at the same time and additional data was needed to manage the logged-in users and the running programmes, time-sharing systems had to be equipped with more RAM than would have been necessary to perform the same calculations in batch mode.

Performance: The organisation of the time-sharing operation itself caused a so-called “overhead”. If twenty users were logged in at the same time and each ran a programme, then it took longer overall than if the twenty programmes had been run one after the other, because computing time was constantly lost in switching between the programmes. This wasted time increased the more users there were on the system. The delay, however, was generally bearable: since the job handling, the punching and, above all, the long wait for results - so particularly inconvenient for the user - were eliminated, things were significantly faster for each individual user despite the overall slower processing.

Utilisation: Batch operation ensured a steady utilisation of the computer even at times when few new jobs were received. During peak hours, jobs piled up and ended up in a queue. This queue could then be worked through at quieter times. With time-sharing, on the other hand, processing became very slow when many users were using the system, while computing time was wasted on a large scale when only a few users were running programmes. In order to nevertheless ensure good capacity utilisation, time-sharing and batch operation were often combined. The computer’s operating system served users at their teleprinters in interactive mode, but at the same time also processed classic jobs. At times when there was little interactive use, more priority could be given to batch operation accordingly. Some time-sharing systems combined the two modes of operation even more cleverly. For example, they allowed users to edit and test their programme in real time and then have the execution of the programme, possibly with a large amount of data, queued in a batch queue to be processed in due course. M. V. Wilkes describes this¹² for the Cambridge Time-Sharing System. There, programme files could be prepared with an editor and then enqueued in a classical job queue by entering RUNJOB followed by the file name. The outputs of these programmes were either printed out in the traditional way or again made available as a file.

Time-sharing changed the way programmes were created and edited. It was now no longer necessary to do this task on paper and then punch the programme manually. The computer could now finally be used for programming itself. For this to work, however, a special programme was needed to edit the programme code, i.e. an editor. We will devote ourselves to this editor in a moment. Before doing so, however, you should be aware of some consequences of time-sharing for the user interface.

Virtual objects

The advent of time-sharing was also accompanied by the transition from consoles for controlling and monitoring the machine to user interfaces created for the user. Previous computers did not need an interface for the user of the computer, because computer and user did not even physically come into contact with each other. This was now different: a user sitting at the teleprinter was directly connected to the computer on which his programmes were running. So he had to have a way to select and start the programmes, create them or delete them if necessary. For this, it was not enough to simply provide a textual implementation of what was previously possible with the operator consoles. These consoles were very complex and provided direct access to the computer’s hardware, internal states and memory registers. However, a user at a teleprinter was not interested in machine states and memory addresses, but needed an interface that related to the data and programmes he was processing. The interface provided by the computer had to offer the programmes and the data to the user as something they could relate to. This already began with the basic system of the time-sharing system, the command interpreter, with which programmes could be started, for example.

One of the first time-sharing systems that was not purely experimental was the Dartmouth Time Sharing System (DTSS) at Dartmouth College in the USA. It was put into operation in 1964. When logging into the system, one first had to identify oneself with a user number. Once this was done, one could either start programming directly - more on this later - or manage already existing programmes. A number of commands were available for this purpose:

  • LIST output the listing of the currently loaded programme.
  • NEW created a new programme. A programme name was requested after entering NEW.
  • OLD reloaded a previously saved programme. After entering OLD, the programme name of the saved programme was requested.
  • SAVE saved the programme under the current programme name.
  • UNSAVE deleted the programme with the current programme name.
  • SCRATCH deleted the programme but kept the programme name. The user could therefore start from the beginning.
  • CATALOG output a list of all programmes stored for the logged-in user.

What did these commands refer to? It was no longer a matter of registers, memory locations or sectors on mass storage devices as in classical programming. The commands did refer to programmes, but not in the sense of the individual instructions of a programming language; they referred to programmes as a whole, to something the user could address by name. I call such entities, which a user can refer to in an interactive user interface and which are generated and made accessible to the user by this interface, virtual objects.
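
A toy command interpreter makes the point clear. The following Python sketch is my own illustration and has nothing to do with how DTSS was actually implemented; it only shows that the user deals with whole, named programmes rather than with registers or disk sectors:

saved = {}                      # the user's catalogue of saved programmes
name, text = None, ""           # the programme currently being worked on

def execute(command, argument=""):
    global name, text
    if command == "NEW":        name, text = argument, ""
    elif command == "OLD":      name, text = argument, saved.get(argument, "")
    elif command == "SAVE":     saved[name] = text
    elif command == "UNSAVE":   saved.pop(name, None)
    elif command == "SCRATCH":  text = ""
    elif command == "LIST":     print(text)
    elif command == "CATALOG":  print(", ".join(sorted(saved)))

execute("NEW", "PAYROLL")
text = "10 PRINT \"HELLO\""     # stand-in for typing in programme lines
execute("SAVE")
execute("CATALOG")              # prints: PAYROLL

How the interpreter stores these programmes internally is invisible at this level - which is precisely what makes them virtual objects.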

The word “virtual” is, in my opinion, defined very aptly in the German Wikipedia:

Virtuality is the property of a thing not to exist in the form in which it appears to exist, but to resemble in its essence or effect a thing existing in that form. Virtuality thus means an imagined entity that is present in its functionality or effect.

The things I call “virtual objects” here do not exist in the real world as tangible, physical objects. For computer use, however, virtual objects have the properties of actual objects: you can refer to them, create them, destroy them and, of course, manipulate them.

Incidentally, using the term “object” in this context is my deliberate choice of words, in line with the perspective I want to present to you in this book. The term object is also sometimes used in this way in older descriptions and instructions for computer systems and programming languages. With the advent of so-called “object-oriented programming”, the concept of objects in computer science became tied to a particular programming paradigm; the term was, so to speak, annexed. However, I am not concerned here with programming, but with the world created for the user by the user interface. What I call an “object” here is a responsive and manipulable entity for the user. How a programmer realises this object behind the scenes, whether there is something there that makes the object appear as a unit, or whether it is a collection of variables or even data streams, is not relevant from the perspective of the user interface.

The time-sharing application par excellence: The editor

The ability to edit programmes directly on the computer, leaving behind the inconveniences of programming with punched cards and punched tape, was one of the main driving forces behind the development of time-sharing systems. The fact that these systems also made it possible to write programmes that could themselves be controlled interactively was recognised, but was not necessarily the main focus and, as you will see in a moment, was not possible in every time-sharing system from the beginning. The main application was usually the editor functionality.

If you think of an editor as a programme that looks something like a text editor under Windows or MacOS, or even like a modern programming environment, then you have the wrong idea. The operation of an editor of that time was quite different from the use of modern editors. This is due to the characteristics of the teletypewriter as an input and output device, which did not quite fit the task of an editor. Since teleprinters put characters on a sheet of paper in a completely analogue way, you could only ever add something with a teleprinter, as with a typewriter, i.e. write something new. The task of an editor, however, is to edit, i.e. change, a text that is in the computer, such as a programme source text. But you cannot change what is already written on a piece of paper. There was no deleting or backspace in today’s sense with a teleprinter. Teleprinters such as the ASR-33 pictured earlier did have a “rubout” button that could be pressed to announce that the previously transmitted character was to be “wiped out”. Of course, the previous character was not erased from the recipient’s paper. When such a character was sent to a computer, the computer deleted the input internally and sent back a character - often the dollar sign - to confirm that the deletion had been registered. This type of input and output may still be sufficient for deleting a wrong character that has just been entered, but it reached its limits at the latest when trying to insert text between existing words. More complex tasks such as inserting text could only be done by giving the editor commands that described how the text should be changed.

If you have access to a Linux or Unix system (including MacOS), you can still reproduce this functionality of early editors today. The line editor “ed” available in these systems dates from the early days of the Unix operating system in the early 1970s, a time when many computers were still operated by teletype.

If the editor is started by entering ed in the command line, nothing happens at first except that a line feed is triggered or, on a modern screen terminal, the cursor moves to the next line. Now enter H and P in succession, each followed by Enter. These two commands ensure that messages are issued when errors occur and that a * is displayed whenever a command can be entered. The first step is to load a file. Let us assume that a file “textfile.txt” already exists. By entering r textfile.txt, the file is read in after confirmation with Enter. The editor answers with the number of bytes read, in our example 86. Since the file is not large, you can output it in full length. This is done by entering the command ,l (this is a lowercase L, not the number 1).

$ed
H
P
*r textfile.txt
86
*,l
This is the heading.$
The text starts hree. There may be many important things to say.$
As you can see, this is a simple text consisting of two lines. The dollar sign stands for the end of the line. You can now edit this text by entering commands. In the example, we will firstly insert a line with plus signs below the heading to make it stand out better, and secondly replace the word “hree”, probably a typo, with the correct word “here”.

To enter the plus signs, you must tell the editor that you want to insert a new line before the current line 2. This is done with the command 2i. Now you can enter the new text. To complete the entry, write a single dot on a line of its own and finish with Enter:

*2i
+++++++++++++++++++++
.

The former line 2 should now have become line 3 due to the insertion of the new line. You can check this by outputting line 3 with the command 3l.

*3l
The text starts hree. There may be many important things to say.$

Now use the substitute command s to replace the first occurrence of “hree” with “here” in line 3 and then print the complete corrected text again with ,l.

*3s/hree/here/
*,l
This is the heading.$
+++++++++++++++++++++$
The text starts here. There may be many important things to say.$

This concludes the intended amendments. Finally, you can save the improved text as “better.txt” with w better.txt. The editor again acknowledges this by indicating the bytes written. Entering the command q then exits the editor.

*w better.txt
108
*q

Could you picture the back and forth at a teleprinter while following this example? Editing a text in this way is very cumbersome, because you only edit the text indirectly. The text is present in the computer as an editable object, but you cannot see it as text and edit it on the spot. Instead, you must give commands for editing and repeatedly request the current state of the text, in whole or in sections. You can compare this way of working with phoning someone who has the text in front of him and reads it out to you in parts. You could tell this person the changes you want to make and then ask him or her again and again what the current state of the text is.

Virtual objects: Interactive object manipulation

In the editor, the text does not appear as a bit stream or as a long character string, which it in fact is. If “ed” had no concept of lines as addressable objects, using the editor would be even more cumbersome, because then one could not refer to lines but would have to address bytes within a data stream, descending much closer to the level of the computer architecture. “Ed”, on the other hand, creates objects that are understandable, addressable, perceptible and manipulable for the user through its user interface. Every sensible interface for real-time systems creates such virtual objects to which the user can refer. With “ed”, these objects were lines and characters. At the level of the control programme, the command line interpreter, they are programmes and files. If you use a programme to manage appointments, they are calendar entries. In all these cases, you refer to an object in the world of the user interface instead of to address ranges and machine operations. In all these cases, however simple the implementation may be, these are user interfaces that are created by the computer, rather than just being the interface for the computer. Programmes for real-time operation create both the controls with which they can be operated and the objects because of which the software exists in the first place. What is on the other side of the user interface, i.e. the technical implementation of the software, is not relevant to the user - at least in this view - and remains invisible to him.
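To give you a rough idea of what “lines as addressable virtual objects” can mean behind the scenes, here is a hypothetical miniature line editor in Python. The command letters are borrowed from “ed” (i, s and l); the internal representation is my own simplification and says nothing about how the real “ed” is implemented:

import re

# A miniature "ed"-style editor: the buffer is a list of lines, and every
# command addresses lines by number, never by byte offset in a data stream.
buffer = []

def execute(command):
    insert = re.match(r"(\d+)i$", command)                 # e.g. "2i": insert before line 2
    if insert:
        line = int(insert.group(1))
        while True:
            text = input()
            if text == ".":                                # a single dot ends the insertion
                return
            buffer.insert(line - 1, text)
            line += 1
    substitute = re.match(r"(\d+)s/(.*?)/(.*?)/$", command)   # e.g. "3s/hree/here/"
    if substitute:
        line, old, new = substitute.groups()
        index = int(line) - 1
        buffer[index] = buffer[index].replace(old, new, 1)
        return
    if command == ",l":                                    # list the whole buffer
        for text in buffer:
            print(text + "$")

With execute(",l") the buffer is listed, with execute("2i") new lines are typed in until a single dot, and execute("3s/hree/here/") corrects the typo from the example above - in every case the user talks about line objects, not about bytes.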

The following picture shows Dartmouth College students together with mathematics professor John G. Kemeny. They are sitting at a row of teleprinters connected to a central computer. The Dartmouth Time Sharing System already mentioned above, one of the first time-sharing operating systems, went into operation here on 1 May 1964.

BASIC co-inventor John G. Kemeny with some students at teleprinters connected to the DTSS - image courtesy of Dartmouth College Library

For the system, a programming language called BASIC (Beginner’s All-purpose Symbolic Instruction Code) was developed by the aforementioned John G. Kemeny together with Thomas E. Kurtz and Mary Kenneth Keller. BASIC, as the name suggests, was aimed at use as a beginner’s language, especially in educational institutions. The aim was to enable not only budding technicians and engineers, but all students and staff at the college and beyond, to learn programming. About fifteen years later, the language became one of the most widely used programming languages: virtually all home and personal computers of the 1980s were equipped with a BASIC interpreter. Consider the following small programme that outputs the numbers from 1 to 10.

10 FOR I=1 TO 10
20 PRINT I
30 NEXT I
40 END

As you can see, each line begins with a line number. Line numbers are an integral part of the BASIC language. They serve two very different functions. Purely functionally, the line numbers are needed to be able to jump to a particular place in the programme. A GO TO 100 (in later BASIC dialects usually written without the space, i.e. GOTO 100), for example, continues the programme flow at line 100. However, the line numbers also have a function in the user interface, as they allow existing lines to be redefined and new lines to be inserted between them. If you want to change the above programme so that it does not output the numbers from 1 to 10, but from 2 to 20 in steps of two, you can simply redefine line 20:

20 PRINT 2*I

If you have been clever enough to leave larger gaps between the line numbers, you can also easily insert lines in between by simply using line numbers between the existing ones, for example:

15 PRINT "A NUMBER",
35 PRINT "READY!"
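The editing mechanics behind this convention can be pictured as follows: a BASIC system of this kind can keep the programme as a mapping from line number to statement, so that entering a line with an existing number replaces it, a new number inserts it, and a listing simply sorts by number. The little Python illustration below is my own sketch of this idea, not the actual DTSS implementation:

# A BASIC programme as a mapping from line number to statement text.
programme = {}

def enter(typed):
    """Feed one typed line into the 'editor', e.g. '20 PRINT 2*I'."""
    number, _, statement = typed.partition(" ")
    if statement:
        programme[int(number)] = statement   # new number inserts, old number replaces
    else:
        programme.pop(int(number), None)     # a bare number deletes the line
                                             # (a convention in many later BASICs)

def list_programme():
    for number in sorted(programme):
        print(number, programme[number])

for typed in ["10 FOR I=1 TO 10", "20 PRINT I", "30 NEXT I", "40 END",
              "20 PRINT 2*I",                       # redefines line 20
              '15 PRINT "A NUMBER",', '35 PRINT "READY!"']:
    enter(typed)
list_programme()                                    # lines appear sorted: 10, 15, 20, ...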

Although the first version of Dartmouth BASIC had an interactive editor through the DTSS, it was not possible to write programmes with the language that were themselves interactive, because there was no way to prompt the user for input. All input data had to be known in advance, as in batch processing. BASIC also lacked a way to read data from files, because DTSS did not support any data files. So all the data had to be given in the programme itself. In the following example, you can see that the earlier batch way of thinking with programme punch cards and data punch cards still shines through.

100 PRINT "INVOICE"
200 PRINT
300 PRINT "QUANTITY", "PRICE"
400 PRINT "------------------------------------"
500 LET SUM=0
600 READ QUANTITY
700 IF QUANTITY=0 THEN 1100
800 READ PRICE
850 PRINT QUANTITY, PRICE
900 LET SUM = SUM + QUANTITY * PRICE
1000 GO TO 600
1100 PRINT
1200 PRINT "SUM:"; SUM
1300 REM
1400 REM INVOICE DATA
1500 REM
1600 DATA 5,1.90
1700 DATA 1,2.70
1800 DATA 2,1.39
1900 DATA 7,0.99
2000 DATA 0
9000 END

This programme prints an invoice. For this purpose, data is read in lines 600 and 800. These data are given in the lower part of the programme from line 1600 onwards. The location of these DATA lines is irrelevant. They could just as well appear at the beginning of the programme or be distributed throughout it. The data here, according to the logic of the programme, are given in groups of two consisting of a quantity and a price. For BASIC, however, this grouping is unimportant; only the order matters. All the data could, for example, just as well be defined within a single line, separated by commas. In line 2000 there is a 0 as the data value. This zero is used in the programme logic as an indicator that no further data follows. If this zero is read in line 600, the programme jumps from line 700 to line 1100 and continues with the output of the grand total. If the zero were left out, the programme would try to continue reading data and abort with an error message. If one started the programme, one received an output of the following type:

USER NO. 163405 PROBLEM NAME: INVOICE 6 SEPT. 1963 TIME: 19:34

INVOICE

QUANTITY PRICE
------------------------------------
5 1.9
1 2.7
2 1.39
7 0.99

SUM: 21.91


TIME: 1 SECS.
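The READ/DATA mechanism that gives the programme its batch-like character can be pictured as a single flat list of values with a read cursor; the DATA statements merely feed this list, which is why their position in the programme does not matter. The following Python lines are my own illustration of this behaviour, including the sentinel zero:

# READ/DATA as a flat list of values with a read cursor: only the order of
# the DATA values counts, not where the DATA lines stand in the programme.
data = [5, 1.90, 1, 2.70, 2, 1.39, 7, 0.99, 0]   # all DATA values, flattened
cursor = 0

def read():
    global cursor
    value = data[cursor]
    cursor += 1
    return value

total = 0
while True:
    quantity = read()
    if quantity == 0:            # the sentinel zero marks the end of the data
        break
    price = read()
    total += quantity * price
print("SUM:", round(total, 2))   # -> SUM: 21.91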

BASIC was intensively developed further, among others by students of the college. The third version of the language, in 1966, introduced the INPUT command, which made interactive programmes possible. When the command was executed, the system output a question mark; the user could then make an entry, which was terminated with a line feed. With this INPUT command, programmes like the following became possible:

 1 100 PRINT "GUESS A NUMBER BETWEEN 1 AND 100."
 2 110 LET X = INT(100*RND(0)+1)
 3 120 LET N = 0
 4 150 PRINT "YOUR TIP";
 5 160 INPUT G
 6 170 LET N = N+1
 7 180 IF G = X THEN 400
 8 190 IF G < X THEN 250
 9 200 PRINT "TOO BIG!"
10 210 GO TO 150
11 250 PRINT "TOO SMALL!"
12 260 GO TO 150
13 400 PRINT "GUESS RIGHT!"
14 410 PRINT "YOU HAVE USED"; N; "TOEGE"
15 420 END

This programme is a simple game. The player’s task is to guess a number between 1 and 100. After each guess, the computer indicates whether the guessed number is too big or too small, until the number is finally guessed. You can see from some features of this listing how thoroughly BASIC is designed for teletype-based time-sharing systems. This already begins with the choice of words for the commands: “PRINT” does not neutrally mean “write”, but is exactly what a teleprinter does, namely print. The INPUT command creates an input facility typical of teleprinter operation: a prompt character is output to signal to the user that input is expected, the computer processes the input line by line, and the line feed thus becomes the confirmation of the input. The functionality of the editor is also well adapted to the medium. Programme parts can be changed by defining new lines and overwriting existing ones, and the programme can be inspected in full or in part with the LIST command. This way of improving and supplementing the programme suits teletype operation and the disadvantages that come with it. The problem remains, as with “ed” above, that you can only get an overview of the programme by listing it out. If you then edit it, the change is of course not visible in the previously printed listing. As a programmer, you must therefore keep the current state of the programme code in mind or frequently request new listings.

What has remained?

The time-sharing system DTSS was rather limited in its range of functions. It merely provided its users with a programming environment. In addition to BASIC, the system could also be programmed in the scientific programming languages ALGOL and FORTRAN. Later time-sharing systems that emerged over the years were far more extensive. They offered the user access to a wide range of system programmes and had sophisticated editors and compilers for a variety of programming languages. Added to this were administrative functions for the users of the system. Separate areas for storing programmes and data had to be provided for each user and protected from access by other users. Time-sharing systems in large organisations such as universities also often had facilities for limiting computing time and disk space (so-called “quotas”) to ensure smooth operation with many users.

What has remained of the time-sharing systems of the early days, which seem rather antiquated today? It may surprise you, but in fact many of the peculiarities of today’s operating systems can be traced back to the early time-sharing systems. Very clear lines of development can be discerned here. Multics (Multiplexed Information and Computing Service), developed from 1963 onwards, for example, had a major influence. Many of the innovations of this operating system, such as a hierarchical file system, found their way into the operating system UNIX, which was developed at the American telephone monopolist AT&T from 1969. UNIX, also a time-sharing system, had a strong influence on the PC operating system MS-DOS and thus also on Windows. Finally, the free operating system Linux is also based on the concepts of UNIX.

Another influential early time-sharing system was TOPS-10 from 1967, which ran on mainframes from the Digital Equipment Corporation, or DEC. The operation of this system, very user-friendly for its time, was the basis for the OS/8 operating system for DEC’s PDP-8 minicomputer. OS/8 in turn inspired Gary Kildall in the development of the operating system CP/M, which was very widely used on early microcomputers. CP/M in turn was the template for Microsoft’s MS-DOS, and MS-DOS remained the technical basis of Windows for a long time. Even today, the command line interpreter of Windows, the “command prompt”, corresponds to the functioning of the MS-DOS command line. In this prompt you can enter the command dir to list files - a command you can trace all the way back through MS-DOS, CP/M and OS/8 to TOPS-10.

Even if you disregard these concrete commands for a moment, the command lines of today’s operating systems are tools that are very similar to their grandfathers and great-grandfathers from the time-sharing era. Open the Windows command prompt or a terminal under MacOS or Linux and try out a few commands. With the constant back and forth between the input line and the outputs of the system, and with a little imagination, you can almost hear the teleprinter rattling…

Terminals Instead of Teleprinters

Let me summarise once again: Teleprinters connected to computers in time-sharing mode made it possible to use computers interactively while meeting the high demands of throughput and computing power. Turnaround times dropped from days and hours to minutes and seconds, and by means of editors, computers could be used as tools for writing programmes, which could then be run on the same computer straight away. The teleprinter or the electric typewriter as an input and output device had its disadvantages, however, because teleprinters and typewriters produced an analogue medium by inscribing characters on continuous paper. Inside the computer, the interactive software created virtual objects that could be changed very dynamically by the user, but to the outside world, these objects could not be represented with the same dynamic, manipulable properties. For example, you could change a text stored in the computer, but you could not use the teleprinter to change the text already printed on the paper. The consequence of this gap between the flexible world inside the computer and the statically inscribed world of the user interface was that changes to the virtual objects in the computer could only be made by describing the changes in language. The resulting changes could not be perceived directly, but only indirectly, via a request from the user and a response in the form of a printout from the system.

The command-oriented way of working described here can be compared to ordering a pizza. Imagine calling a pizzeria whose handy fold-out paper menu you do not have at home. You want to order pizza there. Let us further imagine a very unimaginative pizzeria employee who does exactly what you tell him and nothing more. In front of his nose, this pizzeria employee has the complete menu, a piece of paper and a pen to write down what you would like to have. You see none of this, of course, because you only have him on the phone. How do you get your food now?

  • First of all, you have to ask what you can order in the first place, i.e. what the basic groups of dishes are: pasta, pizza, fish dishes and so on. By asking repeatedly, you can eventually work out the contents of the menu and choose something.
  • Now place some orders. Of course, for many of the dishes you don’t know exactly what ingredients they have, such as which pizza has which toppings. So once again - and probably more often - you have to ask.
  • You may not like what you hear about one of the dishes you chose. Broccoli? Couldn’t that be replaced with peppers? Again, you have to ask and, if necessary, give an appropriate instruction.
  • What have you ordered so far? Have you perhaps forgotten something? Has your counterpart always understood you correctly? Have you perhaps misspoken? Since you cannot see the note your counterpart is writing on the other end, your only option is to have the order read back to you.
  • Hearing the order read back, it occurs to you that the pasta bake might be a bit heavy after all. You ask for it to be cancelled and order a Pizza Margherita instead. Has the correct dish been removed? You guessed it - you have to ask again.

Is it really that complicated when you order something? Probably not, because you may have an app or a menu from the pizzeria at home where you can see what is available and what adjustments you can make to the dishes. In addition, your counterpart naturally knows the pitfalls of orders of this kind and thinks along with you, repeats roughly what he has understood and, towards the end of the order, gives a summary of his own accord. When using the computer, you are not in this comfortable situation. You have no overview of the complete set of choices the system offers you, and the command line interpreter is unfortunately - or perhaps thankfully - very bad at thinking along with you and recognising when you have done something by mistake.

Terminals and spatial objects

ADM-3A - Image: FreeImages.com/Konrado Fedorczyko

The problems of this indirect, language-based interaction with the computer could be overcome with the introduction of terminals with screen and keyboard, because a terminal allows a permanent, spatial representation and manipulation of the virtual objects of the user interface.

Terminal development tends to be an understudied area in computer history. One could easily write entire books just on the development of terminals and their influence on user interfaces, because later terminals were equipped with their own processors, their own operating systems and storage media and thus became quasi personal computers. For the time being, however, we will not go that far and will limit ourselves to simple devices such as the ADM-3A by Lear Siegler pictured here. This unit is not a computer, but only a terminal. Such a device could be connected to a computer instead of a teletypewriter and then initially used just like one. Instead of a printout, the characters were output on the screen. If the screen was full, the characters automatically slid upwards. Terminals with additional memory even allowed scrolling back up to view what had previously been output, as with paper printouts. The ADM-3A shown here could display 24 lines of 80 characters each and, in contrast to the version without the A at the end, supported not only upper case letters but also lower case. 80 characters per line meant that a full line of text in US letter or DIN A4 format could be displayed.

Terminals of this type were often initially used as a replacement for a teleprinter, but otherwise just like one. Operation was quieter, not as much paper was used, and a terminal was also considerably smaller than a teleprinter. However, if one stuck to this teletype-like mode of operation, the potential of the new device went unused. The term “glass teletype” was used jokingly for this, because the terminal was then functionally identical to a teletype, i.e. a teleprinter, except that the paper was made of glass. The real innovative advantage of terminals of this kind was not in saving paper, but in the possibility of not only outputting characters but also deleting and overwriting them, and thus above all in being able to position the cursor freely on the screen. This positioning technique allowed the characters to be arranged on the screen and the arrangement to be updated flexibly. This type of terminal use, in which the screen could be cleared with control characters and the input and output cursor could be freely positioned, was thus the basis for continuously updated status displays, on-screen forms, menus, editors in which the edited text is permanently visible and editable on the screen, and many other things that we take for granted in today’s user interfaces.
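The decisive new capability - clearing the screen and positioning the cursor with control characters - is easy to try out today. The ADM-3A had its own control codes; modern terminal emulators understand ANSI escape sequences, which play the same role. The following Python sketch uses those ANSI sequences and therefore assumes a terminal that interprets them, which practically every current terminal does:

import time

ESC = "\x1b"

def clear_screen():
    print(ESC + "[2J", end="")                   # ANSI: erase the whole screen

def move_cursor(row, column):
    print(f"{ESC}[{row};{column}H", end="")      # ANSI: cursor to row;column (1-based)

# A continuously updated status display - impossible on a teleprinter,
# trivial on a screen terminal with free cursor positioning.
clear_screen()
move_cursor(1, 30)
print("STATUS DISPLAY", end="", flush=True)
for seconds in range(5):
    move_cursor(3, 1)
    print(f"Elapsed time: {seconds} s", end="", flush=True)
    time.sleep(1)
move_cursor(5, 1)
print()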

Dumb and smart terminals

The ADM-3A - ADM, by the way, stands for American Dream Machine - was a device that was called a “Dumb Terminal”. The manufacturer Lear Siegler even used this expression in its own advertising for the device, with the twist that using such a dumb terminal was actually a very intelligent thing to do. If there were “dumb terminals”, there must of course have been other terminals that were less dumb. Indeed there were, and quite logically they were called “intelligent terminals”.

The difference between the two types of terminal is easy to understand with an example: Imagine a programme in which you can fill out a form, let’s say for a delivery. With a dumb terminal, the programme filled the terminal screen with the characters of the input mask, then positioned the cursor, for example, at the position after “Last name:” and waited for input from the user. The programme remembered that the input field “Last name” was now being filled in. If the user typed an “F”, the computer processed this input, updated the input variable and wrote an F on the screen. If a backspace followed, the computer processed this too and deleted the F from memory and from the screen. This continued for every character entered. Only when the input was finally completed with ENTER did the programme copy the accumulated input into a programme variable, position the cursor after “First name:”, and the game began again. You will have noticed that the programming effort, and thus also the processing effort during use, was high with such a programme, because the system had to interpret every single keystroke, convert it into output, position the cursor and so on.
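To make this division of labour tangible, here is a hypothetical Python rendition of the host computer’s side of such a field input: every keystroke arrives individually, and the programme itself must echo it, handle the backspace and maintain the input variable. Real keyboard handling is simplified here to a prepared stream of keystrokes so that the sketch stays portable:

# The host computer's side of a "dumb terminal" dialogue: every keystroke
# arrives individually and the programme must echo, correct and store it itself.
BACKSPACE = "\b"
ENTER = "\n"

def read_field(keystrokes):
    """Process one input field from a stream of single keystrokes."""
    value = ""
    for key in keystrokes:
        if key == ENTER:                 # input finished: hand back the value
            break
        if key == BACKSPACE:
            if value:
                value = value[:-1]
                print("\b \b", end="")   # wipe the character from the screen
        else:
            value += key
            print(key, end="")           # echo the typed character back
    print()
    return value

# Simulated keystrokes: the user types "Fz", corrects the z, then "ischer".
keys = iter("Fz" + BACKSPACE + "ischer" + ENTER)
print("Last name: ", end="")
last_name = read_field(keys)             # the screen shows "Fischer"; so does the variable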

When using an “intelligent terminal”, the effort at this level was much lower. If, for example, an ADM-1 or a Datapoint 3300 was used, a complete screen mask was transmitted by the programme, which also contained information about which areas could be manipulated by the user and which could not. The terminal itself took over the input handling within the defined fields. When a user typed something into a multi-line “remarks” field, he could make corrections in it at will, insert characters or even whole lines, without the computer programme having anything to do with this editing. Only when a send button, often called “data release”, was pressed was the data transmitted to the time-sharing system. Intelligent terminals thus saved considerable effort in application programming and “computing time” in programme execution, as many of the standard tasks of processing text entry were handed over to the terminal itself. You can compare this with programming an input form for a website. Here, too, users use a kind of intelligent terminal, namely a web browser. As a website programmer, you do not have to deal with details such as the user’s input at the level of individual keystrokes, but can rely on the browser and operating system to provide the appropriate functions. You only deal with the user’s finished input again.

The use of intelligent terminals seems to make a lot of sense. Dumb terminals were nevertheless much more widespread and did more to advance user interfaces. Why was that? On the one hand, these terminals were considerably cheaper, which was of course an advantage not to be underestimated, especially if many terminals were to be used in an organisation. The second advantage concerned the areas of application of the terminals. Intelligent terminals developed their greatest advantage with classic masks, i.e. input forms with many data fields. Dumb terminals, on the other hand, were not limited to this, because they could display and process everything that was conceivable with dynamic text input and output, and were not restricted to the input formats that the terminal imposed. For example, you can hardly implement an editor such as the “vi” presented below with the functionalities of an intelligent terminal. Of course, you could also operate a smart terminal like a dumb terminal, but then all its advantages would be lost. Conversely, you can simulate the functionality of an intelligent terminal quite well in software, provided you have enough computing power.

Direct manipulation: Visual editors

We looked a few pages earlier at how to edit a text with the Unix editor “ed”. Even today, its usage paradigm is designed for teleprinters and the simplest terminals. Equipped with a terminal like the ADM-3A, we now dare to take the step to a visual editor.

From “ed” to “vi”

The Unix editor vi in insert mode

The Unix editor “vi” from 1976 shown above - “vi” stands for “visual” - is basically not at all dissimilar to the editor “ed” in the way it functions. In contrast to “ed”, however, with “vi” you permanently see a section of the text on the screen. “vi” also allows you to position a cursor in the text, then switch to an insert mode and insert new text at the location of the input cursor. In a command mode, “vi” allows you to enter commands in a command line at the bottom of the screen. The result of these commands, for example replacing one word with another, is immediately displayed in “vi” as a change to the displayed text. The operation of “vi” is cryptic and complicated by today’s standards, but the editor, in keeping with its name, realises the potential of characters that are permanently visible on the screen and can also be edited there. Although the editor still allows commands to be entered, they no longer have to be used for inserting text, and it is no longer necessary to know line numbers in order to display the text. “vi” still supports the manipulation of texts by command. Fans of “vi” often like the editor precisely because of this possibility to perform text manipulations by command. In fact, this can be very fast, especially with complex operations and if you know your way around, because “vi” allows text transformations via regular expressions. To the uninitiated, however, these commands often look as if someone had fallen on the keyboard.

If one compares the original version of “vi” with the functionality of today’s text editors, one notices that a very basic feature was still missing: The great advantage of text editing at terminals was that the displayed text itself could be edited directly on the spot. Instead of a command of the type “Insert a comma after the 4th word in line 20”, the cursor could be navigated to this position and the comma inserted. Every modern editor supports this way of working - even “vi”. What was not possible with “vi”, however, was the spatial marking of a section of text and the application of a manipulation command to this selection. If you type vi on a Linux or Unix-based system today, an editor opens that you can use exactly as described before. However, it is usually no longer the “vi” from the 1970s, but an extended version called “vim” (for “vi improved”). At the end of the 1980s, the classic editor was expanded in many ways. Among other things, there is now a mode that allows a spatial selection of parts of the text. The selected text parts are displayed inverted. This selection under “vim” works as follows:

  • Ensure that you are in command mode. If necessary, exit insert mode by pressing ESC,
  • position the cursor at the beginning of the section,
  • press SHIFT+v to select whole lines, CTRL+v to select a rectangular block, or v to start a character-wise selection,
  • extend the selection to the end of the section by moving the cursor,
  • enter d (delete) to cut the selection or y (yank) to copy it,
  • navigate to the target position with the cursor,
  • enter p (paste) to insert the selection there.

The possibility of spatial selection of the elements displayed on the screen enables direct manipulation here. Classically, this term “direct manipulation” is associated with mouse interaction and graphical representations13. In principle, however, a text terminal is sufficient insofar as objects can be spatially displayed and also spatially selected and manipulated. This is the case with “vi”. “Direct manipulation” means that the space of action and the space of perception are coupled. Objects are displayed at a location on the screen in “vi”, are selected at that very location and manipulated on the spot. However, when “vi” is used in command mode, this is not so, because then the manipulations are entered in a command line but affect objects elsewhere.

Experimental Graphical Systems

The interactive computer systems described up to this point with connected teleprinters or later with text terminals were limited in their interface to purely textual inputs and outputs or at best to text characters with which a kind of pseudo-graphic could be generated.

In the “vim” example in the previous chapter, a cursor was spatially positioned to select objects on the screen. Such a selection via cursor keys is practicable within texts, but in many cases it is quite cumbersome and indirect. From smartphones today, you all know a considerably more direct method, namely pointing at an object with your finger. It is quite astonishing that something quite similar was already conceived and used in the 1950s. Although pointing still required an additional device at that time, this additional device could already be pointed directly at the screen. The beginnings of this direct spatial selection and graphical representation of information lie, as often in computer history, with the military.

Whirlwind and SAGE

The first Lightpen - Image: Mitre Corporation

In the mid-1940s to early 1950s, the Massachusetts Institute of Technology (MIT) developed a real-time computer for the US Navy. It was initially planned as a flight simulator, but was later redirected towards the requirements of airspace surveillance and air defence. This “Whirlwind” computer was technically noteworthy in many ways, if only because it was a real-time computer that continuously processed input, displayed results and awaited new input. Among other things, Whirlwind could display memory contents on the screen of an oscilloscope. That in itself was not extraordinary. Even the British EDSAC of 1949 already had such a device, on which memory contents of the computer could be displayed as dots on a screen. One innovation, however, was the development of a pen-like device called a “lightpen”. The simple instrument is pictured above. This pen could be used to point to dots on the screen. The computer could recognise the position of the pen on the screen and thus identify the memory bit that was pointed at.

The lightpen was an ingeniously simple design that made use of the way monitors with picture tubes worked. Pens of this type consisted in principle only of a simple photocell attached to the tip of the pen. The pen itself could therefore only detect whether there was light at its tip or not. In a monitor with a cathode ray tube, i.e. also in the old, bulky televisions, an image was produced by an electron beam causing dots on a fluorescent layer on the front of the tube to glow. The image was composed like a written text, line by line, from left to right and from top to bottom. However, this happened so quickly that one could not see the image being built up with the naked eye. If one now pointed a lightpen at a spot on an illuminated screen, the photocell was brightly illuminated at a specific moment. By precisely measuring this moment and comparing it with the knowledge of when the image began to be drawn in the upper left corner, it was possible to calculate which screen position the lightpen was currently pointing to. However, the lightpen of the Whirlwind computer was not yet intended as an input device for the actual functionality of the computer, but was used for troubleshooting purposes.
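The position calculation hinted at here is simple arithmetic: if you know when the beam started drawing the frame and how long it needs per line and per picture element, the moment at which the photocell fires tells you the row and column. The numbers in the following sketch (a 640 x 480 raster at 60 frames per second, blanking intervals ignored) are purely illustrative assumptions and not the parameters of any real device:

# Recovering the screen position from the moment the lightpen's photocell
# fires, measured from the start of the frame (blanking intervals ignored).
COLUMNS, ROWS = 640, 480            # illustrative raster dimensions
FRAME_TIME = 1 / 60                 # one complete image every 1/60 second
LINE_TIME = FRAME_TIME / ROWS       # time the beam spends on one line
PIXEL_TIME = LINE_TIME / COLUMNS    # time the beam spends on one picture element

def pen_position(elapsed):
    """elapsed: seconds between the start of the frame and the photocell firing."""
    row = int(elapsed // LINE_TIME)
    column = int((elapsed % LINE_TIME) // PIXEL_TIME)
    return row, column

# Example: the photocell fires 8.6 milliseconds into the frame.
print(pen_position(0.0086))         # -> roughly (247, 435)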

A Weapons Director Console - image courtesy of the Computer History Museum

The Whirlwind computer was the basis for the SAGE system built by MIT and IBM in the 1950s. SAGE stands for Semi-Automatic Ground Environment. At the heart of SAGE were two huge computers manufactured by IBM. The two systems worked in parallel to rule out computer errors and failures as far as possible. The task of the SAGE system was airspace surveillance. Approaching Soviet aircraft were to be detected and, if necessary, defensive measures initiated. Simple radar surveillance of the airspace could not do this well, since a typical radar installation could only ever monitor quite small areas, and there were already many completely harmless military and civil aircraft movements at that time.

Lightgun - image courtesy of the Computer History Museum

A suspicious radar echo could thus easily be lost in the abundance of information. To improve airspace surveillance, the data from many distributed radar stations were combined in the SAGE computer. This data was cross-checked with existing information on reported aircraft movements. Anything deemed harmless was filtered out, leaving only information about deviant, potentially hostile aircraft. The airspace was monitored with a large number of special consoles. Shown above is a Weapons Director Console. At its centre was a screen, a so-called “view scope”, borrowed from radar technology. It displayed the data calculated by the computer, such as a map of the outline of the terrain and the positions of any enemy aircraft. The soldiers sitting at these consoles could select objects on the screen using a pointing device. They used a so-called “light gun” to do this, which they held up to the screen and whose trigger they “pulled”. The many buttons around the screen made it possible, among other things, to call up functions related to the marked object, for example to display the previous course of the flying object or to extrapolate its current course into the future. Finally, targets could be marked in this way for interception by missiles.

TX-0 and TX-2

During the Cold War, civilian and military research often inspired each other. MIT’s Lincoln Lab, where Whirlwind was built and which was involved in the SAGE system, developed an experimental transistor-based version of the Whirlwind computer called TX-0 (Transistorized Experimental Computer Zero) from 1955 to 1956. The system was used to explore new approaches to human-machine interaction. Like the Whirlwind, it had a graphic output by means of a screen based on radar technology. However, it was now no longer only used for troubleshooting, but also as a regular output option. As with the SAGE system, a device could be used for spatial input at this screen. In the non-military context, the term “lightpen” was now used again for this.

The TX-0 had two direct successors. Firstly, another experimental computer, the TX-2, was developed at Lincoln Lab and put into operation in 1958. Secondly, the findings from the TX-0 and some influences of the TX-2 gave rise to the first commercial computer with graphic input and output, the PDP-1 from the company Digital Equipment (DEC), which will be discussed again later. The three systems, the TX-0, the TX-2 and the PDP-1, were used at MIT in the 1950s and 1960s to work on such advanced ideas as handwriting recognition, graphical text editors, interactive debuggers (programmes for finding and removing programme errors), chess programmes and other early projects of so-called “artificial intelligence”.

One of the projects that pioneered the development of user interfaces with spatial-graphical display and object manipulation was the “Sketchpad” system, conceived and implemented in 1963 by Ivan Sutherland as part of his doctoral thesis. The photo below shows an MIT scientist at the Sketchpad system running on the TX-2. He has a lightpen in his hand. With this pen, graphic objects could be created and manipulated. This was done by pointing to a spot and pressing one of the keys on the keyboard on the left. In this way, line segments or circles could be drawn. Let us limit ourselves here to line segments as an example. To draw a line, the pen was moved to the desired starting point and a key on the keyboard was pressed. Then the pen was moved to the end point of the segment. During this process, the system continuously drew a straight line between the starting point and the current position of the pen on the screen. Pressing the key again fixed this end point, which in turn became the starting point for the next segment. The process could be cancelled by removing the pen from the screen.

All points of lines and of the other geometric figures could also be edited afterwards. For this, a point first had to be selected. This was done by pointing at it with the lightpen. The point did not have to be hit exactly, which could have been difficult. The system supported the selection by treating the immediate surroundings of a point as belonging to it. A selection cursor displayed under the pen position jumped to the point in this case to indicate that the following manipulations would refer to it. So even if the pen was pointed slightly beside the point or the scanning was not accurate, a point could be selected precisely. Once a point was selected, it could be put into a dragging state by pressing a key, which in turn was ended by pressing a key or by lifting the pen off the screen. In this operation, too, the drawing was continuously updated during the entire manipulation, so that the Sketchpad user was always informed about the consequences of his manipulation.
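The selection aid described here - snapping to a point even if the pen only lands near it - boils down to a simple rule: take the point with the smallest distance to the pen position, provided that distance lies below a tolerance. The following Python lines are my own rendering of that rule and, of course, not Sutherland’s actual data structure:

import math

def select_point(points, pen, tolerance=10.0):
    """Return the drawing point nearest to the pen position,
    if it lies within the tolerance radius; otherwise None."""
    best, best_distance = None, tolerance
    for point in points:
        distance = math.dist(point, pen)
        if distance <= best_distance:
            best, best_distance = point, distance
    return best

points = [(100, 100), (200, 150), (220, 160)]
print(select_point(points, pen=(205, 152)))   # -> (200, 150): a near miss still selects
print(select_point(points, pen=(400, 400)))   # -> None: nothing close enough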

Timothy Johnson uses Sketchpad on a TX-2 - Image: Computer Sketchpad, National Education Television, MIT 1964

This direct manipulation presupposes a number of technical conditions that were anything but self-evident at the time of Sketchpad.

  • Objects had to be made permanently and stably visible. This required a screen and a display controller that could draw the characters or graphics in such rapid succession that they appeared to the human eye as stable objects.
  • It had to be possible to select the objects spatially. A spatial input device was therefore needed that could refer to coordinates on the screen; in Sketchpad, this role was taken by the lightpen of the TX-2. In addition, software was needed that could assign these coordinates to the objects present at that screen location.
  • Spatial manipulations had to be immediately translated into manipulation commands, which in turn immediately ensured an update of the representation, because only fast and immediate processing could create the impression of direct manipulation. If there had been delays in such processes, precise work would no longer have been possible. In order to achieve this extremely high degree of responsiveness, Sutherland developed a performant data structure which, in combination with the high computing power of the machine, allowed the corresponding processing speed.

Incidentally, a whole branch of computer science is concerned with developing algorithms (calculation rules) with which a calculation is not only possible at all, but can also be carried out as quickly as possible.

For Sketchpad, in addition to the clever data structures that allowed direct manipulation, Sutherland developed a variety of other exciting concepts that can still be found in today’s CAD systems (Computer Aided Design). All of these would certainly deserve extensive consideration, but can only be listed here as examples and very briefly. It is worth noting, for example:

  • When drawing geometric structures, constraints could be defined. For example, a draughtsman could specify that the angle between two lines must be ninety degrees, that two lines should be of equal length, or that lines must be parallel to each other.
  • Sketchpad not only offered its users several virtual drawing sheets that could be switched between. Each of these drawing sheets was also considerably larger than the representation on the screen. Sketchpad allowed the user to move the visible section and to change the scale of the display, i.e. to zoom.
  • With Sketchpad, objects could be composed of sub-objects. An object created on one of the virtual drawing sheets could be inserted into another sheet at any size and rotation. The objects inserted in this way were not copies but references to the original object. Changes to one of the sub-objects were thus automatically applied to all composite objects. For example, a Sketchpad user could draw a window on one drawing sheet, a door on the second and a house on the third. For the house, he used the previously designed components. If he now returned to the drawing sheet of the door and refined the drawing by adding a lock and a handle, the house on the third sheet also had a lock and a handle in its door. (A small sketch of this referencing principle follows after this list.)
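This referencing principle - instances point to a shared master object instead of copying it - is easy to express in a few lines. The following Python sketch is my own illustration of the idea, not Sketchpad’s internal structure:

# Instances reference a shared master object: refining the master once
# automatically changes every drawing that uses it.
class Master:
    def __init__(self, name, parts):
        self.name, self.parts = name, list(parts)

class Instance:
    def __init__(self, master, position):
        self.master, self.position = master, position     # a reference, not a copy
    def describe(self):
        return f"{self.master.name} at {self.position}: {', '.join(self.master.parts)}"

door = Master("door", ["frame", "panel"])
house = [Instance(door, (2, 0)), Instance(door, (8, 0))]   # two doors in the house

door.parts += ["lock", "handle"]          # refine the master drawing once...
for part in house:
    print(part.describe())                # ...and both doors now have lock and handle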

The legacy of SAGE, Sketchpad and Co.

The technical features of the TX-2, which made software like Sketchpad possible, were outstanding at the time. It was an experimental system on which exactly such tests could be carried out. For a long time, the rest of the computer world was very different from these computers with their direct user interaction.

DEC PDP-1 in the Computer History Museum - Image: Alexey Komarov (CC BY-SA 4.0)

The development from the Whirlwind via SAGE and the TX-0 to the TX-2 and correspondingly innovative software led in 1959 to the PDP-1, the first minicomputer that could be purchased. The photo above shows a fully equipped PDP-1. The actual computer is the cabinet-like structure on the left. In the lower area of the computer you can see the control panel. Above it was a punched tape reader. DEC used punched tape with a special fold; you can see some of these punched tapes in the holder above the reader. Just like the TX-0 and the TX-2, the PDP-1 focused on interaction. Inputs and outputs were therefore not only made via the control panel, but also via a connected electric typewriter or via screen and lightpen. You can see both in the picture. Here, too, the proven technology from the radar sector was used. With 53 units built, the PDP-1 was not yet a widely sold computer. With a price of 120,000 dollars (about 1.1 million dollars in 2021 terms), the computer was not exactly affordable either. However, this was to change with the new class of minicomputers opened up by the PDP-1. The PDP-8, which you will get to know in the next chapter, was one of the most successful representatives of this class of devices, whose features were an important step towards personal computers.

Minicomputers

Minicomputers are a sometimes overlooked class of device in popular computer history. In the German version of the American documentary “Triumph of the Nerds”, broadcast by ZDF as “The Brief History of the PC”, it is claimed in connection with the “first PC”, the Altair 8800, which you will get to know in the next chapter: “Previously, computers had filled entire rooms, hidden away in large corporations, government offices or institutes.” It is of course true that before the advent of personal computers in the late 1970s, most people did not come into direct contact with computers and certainly could not call a computer their own, but the claim that computers always filled entire rooms before PCs is simply not true. Relatively small computers existed fifteen to twenty years earlier. Even the LGP-30 from 1956, which you got to know in the chapter “Early real-time systems”, and the PDP-1 from 1960 from the previous chapter were already quite compact computers.

A faithful, functional replica of a LINC I - picture: HNF

Another interesting and very compact computer was also created at MIT’s Lincoln Lab in the early 1960s. The Laboratory Instrument Computer, LINC for short, pictured above, was a cabinet-sized computer with casters underneath. The computer was thus mobile and could always be moved to the place in a laboratory where it was needed. The computer was also otherwise very much tailored to laboratory needs. This already applied to the computing unit itself, which in the LINC included a floating-point unit. In practice, this meant that the LINC was good at calculating with numbers with many decimal places. This was not the case with all computers, and often it was not necessary: in accounting and flight booking, for example, one usually did not get beyond the two decimal places of penny or cent amounts. However, when it came to precise measured values in the laboratory, these naturally also had to be able to be stored and processed in the computer. The computer was also remarkable in terms of its user interface. An interesting feature of the system was the tape drive with the so-called LINCTape technology, which was later used by the Digital Equipment Corporation (DEC) in other minicomputers under the name DECTape. You can see two LINCTape drives on the table on the right in the photo. Unlike the tapes of mainframes, where high read and write speed was the primary concern and the data on the tapes was not changed again, the DECTape was designed as a random-access system. A DECTape had a directory of contents and thus files that could be addressed and manipulated by name. Sophisticated error correction ensured that the tapes were not only reasonably fast to access, but also very reliable.

The first version of the PDP-8 - image courtesy of the Computer History Museum

For a laboratory computer, it was particularly important to be able to process analogue signals in the form of voltages directly and also to generate analogue output signals. There were also analogue components in the user interface hardware: input values could not only be entered via the keyboard, but also set with rotary knobs. In addition to operation via an electric typewriter, the LINC could also be used via keyboard and screen. You can see the keyboard and screen on the table on the left in the photo. A central and very interesting programme here was a screen editor, with which programmes could be displayed on the screen as text and edited using the keyboard.

The LINC project was continued by the company Digital Equipment, which combined the technology with its own computers and also integrated individual components into minicomputers that were less directly oriented towards laboratory operation. The epitome of the minicomputer was DEC’s PDP-8. You can see the first version of this computer in the picture above. It was one of the first computers to fit on a desk, apart from small computers for purely educational purposes. The computer cost $18,500 in 1965, which is equivalent to about $157,500 in today’s purchasing power (2021). Compared to mainframes of the time, this was a very cheap purchase. With improvements in transistor and later microchip technology, many versions of this computer were produced over the years. Already in the following year, a functionally identical but much slower PDP-8/S followed, which was the first serious computer costing less than 10,000 dollars. The very common PDP-8/E from 1970 then cost only 6,500 dollars.

The PDP-8 versions after the original were considerably smaller than the unit shown here. The complete technology could be housed in the lower box; the large superstructure on top was no longer necessary. A distinctive feature of all versions of the PDP-8 was the front panel with its lights and switches, which could be used to enter values into memory, read them back and control the operation of the unit. The unit could also be programmed via these same switches. Programming via the front panel meant entering a programme into memory bit by bit. Once this was done, the programme counter could be set to the start address and the programme started. Of course, this programming option was not the only one. If, for example, a punched tape reader or a DECTape drive was connected, in case of doubt only a small loader programme had to be entered by hand; the actual programme was then loaded from the medium. In practice, it was relatively rare to have to enter programmes by switch on the PDP-8, because once entered, they remained in memory even after the computer was switched off. This was due to the memory technology the PDP-8 used. We already got to know magnetic drums as persistent working memory in connection with the IBM 305 RAMAC in the chapter “Early real-time systems”. However, these drums were clunky and quite limited in terms of storage capacity. Moreover, the technology was very slow. A different technology was therefore chosen for the PDP-8.
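As a toy model of what front-panel programming amounts to, here is a short Python sketch: 12-bit words are set on a switch register and deposited into consecutive memory locations, after which the programme counter is pointed at the start address. The switch names follow common descriptions of the PDP-8 panel, but the details, such as the automatic address increment, should be read as my assumption for illustration:

# A toy model of front-panel programming: 12-bit words are "deposited"
# into consecutive memory locations before the programme is started.
memory = [0] * 4096
address = 0

def load_address(value):
    global address
    address = value & 0o7777            # a 12-bit address

def deposit(value):
    global address
    memory[address] = value & 0o7777    # store one 12-bit word...
    address = (address + 1) & 0o7777    # ...and step on to the next location

load_address(0o0200)                    # deposit a few example 12-bit words from 0200
for word in [0o7300, 0o1205, 0o7402]:
    deposit(word)
programme_counter = 0o0200              # finally, point the programme counter here
print([oct(memory[a]) for a in range(0o0200, 0o0203)])   # -> ['0o7300', '0o1205', '0o7402']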

Detailed view of a magnetic core storage element - Image: Original: Konstantin Lanzet, derivate work: Appaloosa (CC BY-SA 3.0)

The working memory of the PDP-8 was a magnetic core memory. Such a memory consists of arrays of small iron rings threaded onto wires. A wire running diagonally through the cores is used to read out the memory. Each ring can be magnetised individually. Since there are two different directions in which this magnetisation can be carried out, i.e. two possible polarisations, this results in two possible memory values, corresponding to a 1 or a 0 - one bit. When writing a bit, if the bit “flips”, a voltage is induced in the read wire, the diagonal wire. A bit is thus read by attempting to set it. Reading the memory therefore destroyed the stored data, so to retain the information, the bit value had to be written back after reading. A magnetic core memory was fast, but above all expensive, because the rings were threaded by hand. Its great advantage was that, like the magnetic drum, it was based on magnetism and storage was therefore not dependent on electricity. The memory retained its information even without voltage applied. The basic version of the PDP-8 had a memory of 4,096 12-bit words. With a character encoding of 6 bits, this meant that there was room for 8,192 characters in the memory. By adding a memory expansion controller, the memory could be expanded to up to 32,768 words.
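The read-restore cycle described in this paragraph can be summed up in a few lines of code. The following is a deliberately simplified toy model of a single core, written by me for illustration; the real electronics are of course analogue:

# Core memory's destructive read: a bit is read by trying to set it to 0 and
# watching whether it "flips" - and must therefore be written back afterwards.
class CoreBit:
    def __init__(self, value=0):
        self.value = value

    def read(self):
        flipped = (self.value == 1)   # a flip induces a pulse on the sense wire
        self.value = 0                # the read has destroyed the content...
        if flipped:
            self.value = 1            # ...so the controller immediately rewrites the 1
        return 1 if flipped else 0

bit = CoreBit(1)
print(bit.read(), bit.read())         # -> 1 1: the value survives thanks to the rewrite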

A magnetic core storage element - Image: Original: Konstantin Lanzet, derivate work: Appaloosa (CC BY-SA 3.0)

From 1968, a simple programming language called FOCAL (Formulating On-Line Calculations in Algebraic Language) was available for the PDP-8. The language was roughly comparable to BASIC, but had some advantages over it. The following example of a FOCAL programme can be found in a DEC promotional brochure from 1969.

01.10 ASK "HOW MUCH MONEY DO YOU WANT TO BORROW ?",PRINCIPAL
01.20 ASK "FOR HOW MANY YEARS ?",TERM
01.30 FOR RATE=4.0,.5,10;DO 2.0
01.40 QUIT

02.10 SET INTEREST=PRINCIPAL*(RATE/100)*TERM
02.20 TYPE "RATE",RATE," ", "INTEREST",INTEREST,!

A visible difference to BASIC here, besides different keywords such as ASK instead of INPUT and TYPE instead of PRINT, is the support for blocks, which shows up in the line numbering. This simple example has two blocks. The lines of the first block begin with “01.”, those of the second with “02.”. A block can be referenced as a whole. This happens here in line 01.30. The line is a loop that increments the variable RATE from 4 to 10 in steps of 0.5. However, the actual loop body, i.e. the commands that are to be executed several times, does not follow directly at this point, as it would have to in BASIC. Instead, DO 2.0 is called for each iteration, i.e. the second block is executed. When the programme is started, the result looks something like this:

HOW MUCH MONEY DO YOU WANT TO BORROW ?:100
FOR HOW MANY YEARS ?:5
RATE= 4.0000 INTEREST= 20.0000
RATE= 4.5000 INTEREST= 22.5000
RATE= 5.0000 INTEREST= 25.0000
RATE= 5.5000 INTEREST= 27.5000
RATE= 6.0000 INTEREST= 30.0000
RATE= 6.5000 INTEREST= 32.5000
RATE= 7.0000 INTEREST= 35.0000
RATE= 7.5000 INTEREST= 37.5000
RATE= 8.0000 INTEREST= 40.0000
RATE= 8.5000 INTEREST= 42.5000
RATE= 9.0000 INTEREST= 45.0000
RATE= 9.5000 INTEREST= 47.5000
RATE= 10.0000 INTEREST= 50.0000
*
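If you want to retrace the arithmetic of the FOCAL example in a modern language, the following minimal sketch in Python reproduces the same simple-interest table. It is an illustration only; the variable names merely mirror those of the FOCAL listing.

# Minimal sketch: the same simple-interest table as the FOCAL example above.
principal = 100.0   # amount entered at the first ASK prompt
term = 5.0          # number of years entered at the second ASK prompt
rate = 4.0
while rate <= 10.0:                                        # 01.30 FOR RATE=4.0,.5,10
    interest = principal * (rate / 100) * term             # 02.10
    print(f"RATE= {rate:7.4f} INTEREST= {interest:7.4f}")  # 02.20
    rate += 0.5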

Interactive programming in FOCAL was of course not the only way to use the PDP-8. A number of operating systems were developed for the computer. The most popular and widespread of these was the OS/8 system, which was heavily inspired by the TOPS-10 operating system for DEC’s PDP-10 series of mainframes. TOPS-10 was a time-sharing operating system that was valued for its fairly simple operation. For the considerably simpler and less well-equipped PDP-8, however, the system could and of course had to be slimmed down. The PDP-8 was not a computer that was well suited for time-sharing. Under OS/8, therefore, only one user used the system at a time. Accordingly, there was no need for the administrative apparatus typical of time-sharing: no user administration, no quotas and no complex management of external mass storage. What remained was a simple operating system that had a command line interpreter and an integrated help system and that remembered the command line parameters of earlier calls of a programme. The latter was and is definitely something special! The command DIR, for example, output the table of contents of the files on a DECTape in OS/8. DIR /A /N displayed the files in alphabetical order with a readable modification date. Once these parameters had been entered, the directory was from then on listed in the described way even after a simple DIR; the parameters were automatically applied again. This functionality was particularly handy when switching back and forth between an editor and a compiler, for example. File names and parameters only had to be specified once. After that, it was sufficient to call the editor and the compiler without the additional information, because the appropriate parameters and file names were added automatically.

The operating system OS/8 and its big brother TOPS-10 were the inspiration for the operating system CP/M, the first widely used operating system for personal computers. Its developer Gary Kildall worked with the DEC operating systems himself and used them for the development of his CP/M. For its part, CP/M’s user interface was the template for Microsoft’s MS-DOS, whose command line interpreter, hardly changed, can still be found in Windows. The command dir, which you can still use today to output a directory listing in the Windows command prompt, can be traced back to TOPS-10 via MS-DOS, CP/M and OS/8.

Personal Computers

With minicomputers like the PDP-8, we have come pretty close to personal computers. Some later minicomputers even look like personal computers. Pictured below is a VT78 from DEC. This unit came onto the market in 1977. It is technically a PDP-8, now shrunk to chip size, built into a VT52 terminal. This compact unit ran a customised version of OS/8 and DEC’s WPS-8 word processor. The computer was based on the architecture of a proven minicomputer and an equally proven VT52 terminal. So is it a compact minicomputer? Or rather an intelligent terminal? Or is that already a personal computer?

DEC VT78 Video Data Processor - Image: Frotz at English Wikipedia (CC BY-SA 3.0) (optional)

You see, once again, it is not at all easy to say what constitutes a personal computer. The term “personal computer”, especially in its abbreviation PC, today mostly stands for the IBM 5150, i.e. the IBM Personal Computer from 1981, and its direct successors and compatibles. The architecture brought onto the market at that time by IBM is probably one of the most enduring computer architectures ever. Almost all desktop PCs and laptops used today, in 2021, are successors to this computer and are still fundamentally compatible with computers of that time and with their software1. In fact, however, the IBM PC architecture with its processor compatible with the Intel 8086 is by far no longer the most common computer architecture. There are now far more smartphones and tablets than PCs and laptops, and these devices are based on a different technical foundation - but we won’t get to smart devices until the next section of the book.

Here we will first deal with the personal computer, but of course not only with the IBM PC, because to restrict the term “personal computer” in this way would not allow us to follow the lines of development. There were considerably more machines that were called “personal computers”, “home computers” or even “desktop computers” at the time. The fact that IBM came up with the idea of releasing a PC and that Microsoft later came up with the idea of creating a working environment called “Windows” has a history that needs to be explored.

But there is still the question of what a personal computer is. How does the personal computer relate to the minicomputer? First, there is one major commonality: a personal computer, like a minicomputer, is a local resource, so it has local memory and local input and output devices. Although one might have used a teleprinter or terminal to operate a minicomputer or early personal computer, the computer was usually located only a few metres away from the terminal and not far away in a computer centre. One consequence of these local resources and local use was that the computer did not serve multiple users in time-sharing, but was used by only one user at any given time2.

Because of the aforementioned similar characteristics of personal computers and minicomputers, computers such as the PDP-8 and the LINC are referred to by some enthusiasts as “early personal computers”. However, if you compare minicomputers at their peak in the early 1970s with personal computers from 1975 onwards, you can definitely see differences. Minicomputers emerged at the time of transistor-based computers, but before the advent of integrated circuits and especially before the advent of the microprocessor. Although many later minicomputers were increasingly miniaturised and also used microchips, their central processing units long remained built from several components. Personal computers, on the other hand, emerged at the same time as the first microprocessors. Miniaturisation, mass production of components and the tinkering efforts of early PC pioneers made it possible to produce computers at considerably lower prices. Another important difference between a minicomputer and a personal computer lies in the name. Of course, a name in itself does not mean much, but the “personal” in personal computer is not unjustified. In most cases, a personal computer is assigned to a user or a family. You own your own personal computer or share one in the family, and even in the office you have a personal computer at your individual disposal, even though it may not belong to you directly. Minicomputers, on the other hand, were usually not assigned to a person, but rather belonged to the equipment of a workshop or laboratory.

Related to personal computers are also the computers called “workstations”. Here, it is even more difficult to draw a line; it lies above all in the cost and thus in the performance. While personal computers are found in home, school and office environments, the more powerful workstations tend to be used in technical and scientific environments. We will not look at typical workstations such as the Sun Microsystems computers in the context of this history narrative. In the chapter “Desks and windows”, however, we take a look at the Alto and Star computers from Xerox, which could probably be assigned to this device class.

In the following table you can see typical characteristics of each unit class including a typical representative. I have divided the personal computer here into hobby computer, home computer and office computer. You will see in the course of the chapter that there were differences in terms of performance, equipment and also the user interface.

| | Mainframe Computer | Minicomputer | Hobby Computer | Home Computer | Office Computer | Workstation |
| Example | DEC PDP-10 (1966) | DEC PDP-8 (1965) | Altair 8800 (1975) | C64 (1982) | IBM PC (1981) | Sun-1 (1982) |
| Size | Room | Cupboard, chest | Case | Shoe box | Case | Case |
| CPU | Transistor | Transistor | Micro | Micro | Micro | Micro |
| Price | very high | high | low | moderate | high | |
| Performance | high | low | low | medium | high | |
| Resources | remote | local | local | local | networked | |
| Users | many | few | 1 | 1 | few | few |
| Personal | no | no | yes | yes | yes | partial |
| Area of use | administration, science | science, technology | | home | office | science, technology |

We will return to the question of what constitutes a personal computer in the history of the user interface in the chapter “Small computers in the office”. You will then see that, perhaps more important than the differences discussed above, it is the software and its exploitation of the potential of interactive user interfaces that makes the small but subtle difference that qualitatively sets personal computers apart from minicomputers.

On to the Altair!

Just as it seems to be a need of many to be able to name the first computer ever, the first personal computer is also often sought and named. But finding the first one here is almost more futile and even more hopeless than with the computer itself, because of course it was not the case that inspiration suddenly struck someone who then invented something completely new and unprecedented in the personal computer, without borrowing from previous computers. In this chapter, I introduce you to the Altair 8800, the computer that is perhaps most often given the honour of having been the first personal computer. You will see that the function and working methods of existing computers, especially the popular minicomputers, were clearly the inspiration for the design of the Altair.

With minicomputers like the PDP-8 from the previous chapter, computers became relatively small and cheap, making them affordable for small organisations for the first time. For a company with an electrical laboratory, for example, a PDP-8/E from 1970 with its $6,800 (2021: $47,000) was quite reasonably priced. For private individuals, however, the situation was different, as there were also costs for terminals and storage devices, such as a DECTape device. Until about the mid-1970s, computers were therefore something that existed in companies, in the military, in universities and colleges, something that private citizens might have heard of, but which was not directly accessible to them. Computers were thus surrounded by a somewhat mystical aura. For left-wing thinkers in particular, they belonged to the military-industrial complex. The desire of many hobbyists to build their own computers at home thus had a political aspect as well as a purely technical one. Building your own computer instead of being at the mercy of computer technology can be interpreted as an act of liberation and self-determination. Many of the American computer pioneers of Silicon Valley emerged from this mood. A corresponding tone can still often be found in their public self-presentation today, which holds a certain irony, because the role of companies like Apple has of course changed greatly over the decades. But let us not get political and run ahead of the story; let us instead come back to the situation in the 1970s, when the first personal computers appeared.

At the beginning of the 1970s, the Intel company brought the first fully integrated microprocessor onto the market. This processor, with the designation 4004, and the 8008, which appeared one year later, were used, for example, in pocket calculators, desktop calculating machines and partly also in early video games. Hobbyists also took an interest in the processors and constructed their own small, experimental computer systems based on them. But they were not suitable for really serious computing. This changed in 1974 with the introduction of the Intel 8080, an 8-bit microprocessor with a 16-bit address bus that could address 64 KB of RAM.

You have certainly heard of specifications in bits, in this case 8 bits, in relation to a processor. But what does that actually mean? I should perhaps explain some terms and ways of speaking at this point.

8-bit processor?

That a processor is an 8-bit processor means that it is designed to operate with sequences of 8 bits each. For example, a processor can add two numbers, each binary coded in 8 bits, in one step, because it has a built-in 8-bit adder. With 8 bits, only 256 different values can be represented, for example the numbers from 0 to 255. The consequence of this, however, is not that one could only process small numbers with such a processor. It is easily possible to process much larger numbers that require more bits, but it becomes more complicated. You can make this clear by imagining a processor that calculates in the usual decimal system, which is a little easier to picture than the ones and zeros of the binary system, which are unmanageable for humans. Our imaginary processor can add two numbers, but can only process three digits of a number at a time. 54 + 92 can be calculated without problems, but what do you do with 12742 + 7524? It is quite simple: you first calculate only with the last three digits. 742 + 524 = 1266. With 4 digits, this result is too long! The processor can only “store” the 266 and has an overflow of 1. So “carry the one”, as we used to say at school. This carry must also be processed when the front parts of the numbers are added. For the front part, we now have to calculate 1 + 12 + 7. This calculation has to be done in two steps, because the processor can only add two numbers at a time. So: 1 + 12 = 13 and 13 + 7 = 20. The front part of the result is therefore 20, the back part 266, and the total result of our addition is 20266. As you can see, we were able to do the calculation, but instead of adding once, we had to add three times. On top of that, there was the effort of coordinating the process.
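The same principle can be sketched in a few lines of code. The following minimal Python sketch is an illustration under the assumptions of this section, not real 8080 code: it adds two 16-bit numbers using an “adder” that only ever handles 8 bits at a time and carries the overflow between the two halves.

# Minimal sketch: adding two 16-bit numbers with an adder that can only
# handle 8 bits at a time, carrying the overflow between the two halves.
def add_16bit_with_8bit_adder(x, y):
    lo = (x & 0xFF) + (y & 0xFF)        # add the low bytes
    carry = lo >> 8                     # did the low bytes overflow?
    hi = (x >> 8) + (y >> 8) + carry    # add the high bytes plus the carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

print(add_16bit_with_8bit_adder(12742, 7524))   # 20266, as in the decimal example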

Address bus?

The address space of the working memory of a system is also specified in bits. What does the bit number mean here? For simplicity’s sake, think of a computer’s RAM as a cupboard with lots of drawers. You can put a certain number of bits in each drawer. This bit sequence is called a “word”. How long such a word is, i.e. how many bits it contains, has often changed in the course of computer history. In keeping with the era, let us take 8 bits once again. 8 bits are usually also called a byte. The size of the working memory in bytes is then the number of drawers in the cupboard. A processor must be able to tell the working memory which drawer it needs. Think of it as having a number on each drawer. A programme can therefore request the byte from drawer number 245 or write a new byte value into it. The drawers are selected via the so-called address bus, which simply consists of a number of electrical lines. The question now is how many lines are needed. If the address bus consisted of only one line, only one memory cell - when there is no voltage on the line - or one other - when there is voltage on the line - could be addressed. So there would only be two different memory locations (0 and 1) that could be addressed. The result would be a working memory of 2 bytes. With two lines there would already be 2² = 4 memory locations (00, 01, 10, 11), with three lines 2³ = 8 and so on. If the address bus consisted of 8 lines, i.e. a bus width of 8 bits, 2⁸ = 256 different memory locations could be directly addressed. This could be used to build a pocket calculator, but of course it is not enough for a serious computer. If one now uses 16-bit-long addresses, as with the Intel 8080, there are 2¹⁶ bytes, i.e. 65,536 bytes or 64 KB3. A processor with a 16-bit address space can therefore address 64 KB of memory directly. With techniques such as bank switching, however, it is certainly possible to address more memory.
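The relationship between the number of address lines and the number of addressable memory locations can be checked with a one-line calculation; the following minimal Python sketch simply prints the values used in this and the following sections.

# Minimal sketch: number of addressable memory locations for n address lines.
for lines in (1, 2, 3, 8, 14, 16):
    print(f"{lines:2d} address lines -> {2 ** lines:6d} addressable locations")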

Bank Switching?

The restriction to an address bus width of 16 bits does not mean that one could really only ever use 64 KB with such a processor, because there are techniques with which the memory size can in principle be expanded as desired. Let us return to the cupboard with the drawers and expand the picture a little. Imagine a wall. Against this wall, place cupboards with drawers directly next to each other until the wall is full. The drawers still form the directly addressable memory cells of the system. You can always access the drawers in the cupboards on the wall. If you now need to store more data, you can help yourself by swapping out entire cupboards. You have a large warehouse next door with lots of additional cupboards and, if necessary, you can move one or more of the cupboards from the wall into the warehouse and place other cupboards from the warehouse against the wall instead. This technique is called “bank switching”. Commodore’s C128 from 1985 is an example of a computer that used this technique to manage 128 KB of memory despite a 16-bit address bus. The disadvantage of bank switching is, of course, the additional effort. You have a lot of memory, but you cannot address all of it directly; you always have to make sure that the right cupboard is on the wall first. This can make programming complex, depending on how close to the machine you are programming. In any case, switching memory banks always takes time.
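How bank switching looks from the point of view of a programme can be sketched in a few lines. The following minimal Python sketch is an illustration of the principle only; the window position, the number of banks and the function names are freely chosen and not modelled on any particular machine.

# Minimal sketch of bank switching: a 64 KB address space in which the
# upper 16 KB window can be mapped to one of four switchable banks.
WINDOW_START = 0xC000                             # the last 16 KB of the address space
fixed_memory = bytearray(WINDOW_START)            # the 48 KB that are always visible
banks = [bytearray(0x4000) for _ in range(4)]     # four switchable 16 KB banks
current_bank = 0                                  # which bank is mapped into the window

def select_bank(n):
    # corresponds to writing to a bank-select register
    global current_bank
    current_bank = n

def read(address):
    if address < WINDOW_START:
        return fixed_memory[address]
    return banks[current_bank][address - WINDOW_START]

def write(address, value):
    if address < WINDOW_START:
        fixed_memory[address] = value
    else:
        banks[current_bank][address - WINDOW_START] = value

# The same address refers to different bytes depending on the selected bank:
select_bank(0); write(0xC000, 11)
select_bank(1); write(0xC000, 22)
select_bank(0); print(read(0xC000))   # 11
select_bank(1); print(read(0xC000))   # 22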

Intel’s early 8008 and 8080 processors were 8-bit processors. The 8008 had a 14-bit address bus, so it could address 16 KB of memory directly. With the 8080, as already mentioned above, the bus width was 16 bits, and thus 64 KB of working memory could be addressed directly. That was little compared to mainframes, but not bad at all compared to minicomputers. A PDP-8 could not address 64 KB even in its largest configuration. The maximum here was 32 K words of 12 bits each, which corresponds to 48 KB of memory in purely mathematical terms. In practice, however, this conversion is misleading: 32,768 memory locations of 12 bits each contain exactly as many bits as 49,152 memory locations of 8 bits each, i.e. 48 KB, but of course they cannot be used in this way, because the bits cannot be addressed byte by byte and are not available in this “denomination”.

The Altair 8800 - The wonder box

“Make your own computer” was a popular theme of (US) electronics hobby magazines in the mid-1970s. One notable example was the Mark-8 hobbyist computer presented in the magazine “Radio-Electronics” in 1974. The magazine explained how it worked and provided building instructions for the computer, which was based on an Intel 8008 processor. You could not order the computer completely assembled or even buy it in a shop, but had to reach for the soldering iron yourself. Of course, that was far from everyone’s cup of tea. The greatest influence of the Mark-8 was not the computer itself, but that it prompted the editors of Popular Electronics magazine to introduce a computer of their own, the Altair 8800, in 1975.

Altair 8800 - Image: Ed Uthman from Houston, TX, USA (CC BY-SA 2.0)

This computer was based on the modern 8080 CPU mentioned above. Those interested in computers could order the computer from MITS as a kit and assemble it themselves, but even those who could not or did not want to do so had the opportunity to get their hands on the computer, because for “only” 621 dollars, which corresponds to about 3,100 dollars in today’s purchasing power (2021), the computer could be purchased completely assembled with a case. For the 621 dollars, you got the basic configuration of the computer, which was quite sparse. In this configuration, the computer had no external storage options whatsoever and only a whopping 256 bytes of RAM. This frugal basic equipment did not prevent the authors at Popular Electronics from describing the computer euphorically:

The era of the computer in every home - a favourite topic among science-fiction writers - has arrived! It’s made possible by the POPULAR ELECTRONICS/MITS Altair 8800, a full-blown computer that can hold its own against sophisticated minicomputers now on the market. And it doesn’t cost several thousand dollars.

So here the beginning of a new era was described exuberantly, an era known from science fiction literature. More sober is the comparison of the computer with already familiar devices: the Altair could take on the minicomputers. On the cover of the magazine, the Altair is announced as “World’s First Minicomputer Kit to Rival Commercial Models…”. Popular Electronics promoted a computer for the home, but the computer one had at home corresponded to the computers of the time, the minicomputers - a minicomputer for the home. This background also explains the appearance of the computer. From today’s expectation of what PCs look like, the design of the Altair is very unusual, but if you take a minicomputer like DEC’s PDP-8 as a benchmark, the design of the Altair 8800 is quite logical: it is a box with a front panel on which there is a row of LEDs and a whole lot of toggle switches. Someone who knew minicomputers and how to operate them immediately felt at home here.

A comparison with the ten-year older architecture of the PDP-8 reveals further parallels. In its basic state, the PDP-8 also had to be operated via the front panel without any other connected input and output devices. If a small loader programme was entered into the memory, the PDP-8 could read in programmes from an external medium, such as a punched tape. Connecting a teleprinter or other terminal to a PDP-8 enabled interactive use of the system. For the PDP-8, DEC provided the simple interpreted programming language FOCAL described in the previous chapter. Later, with OS/8 and some parallel developments, the transition to command-line oriented operating systems followed, which were based on widespread time-sharing systems in terms of operation, but did not have the same complexity. Almost the same development steps could now, ten years later, be retraced with the Altair 8800. Here, too, the potential of digital systems was realised through the development of the user interfaces. It is worth taking a closer look at this development, because the products that emerged here were not limited to the Altair, but were instrumental in the further development of personal computers, even to this day.

Front panel programming - bits to touch

Getting a factory-fresh Altair 8800 to do something useful was not so easy, because it meant programming the computer via the front panel, entering values and reading off results. With the characteristic front panel of light-emitting diodes and toggle switches, memory locations could be filled with values and read out accordingly, the programme counter set, the programme started and interrupted or, for the purpose of troubleshooting, executed in individual steps. If you wanted to program the Altair 8800, you had to know the machine code of the Intel 8080 and enter the commands one after the other, byte by byte, into the memory via the toggle switches. Once this was done, it was advisable to check the values in the memory again carefully, because errors could easily have crept in. If the programme was in order, the programme counter was set to the first command of the programme and the switch labelled RUN was pressed to run the programme. When the programme had come to an end, or when the programme sequence was interrupted, the content of the memory and thus also a possible calculation result could be read, again via the front panel.

The following programme is a modified form of a programme from the Operator’s Manual of the Altair. This is a very simple programme that adds two numbers stored in memory locations 128 and 129 and stores the result in memory location 130.

00 111 010    Load into the accumulator the contents
10 000 000    of memory location 128
00 000 000    (2nd address byte, since addresses are 16 bits long)

01 000 111    Copy the accumulator into register B

00 111 010    Load into the accumulator the contents
10 000 001    of memory location 129
00 000 000

10 000 000    Add register B to the accumulator

00 110 010    Store the accumulator contents
10 000 010    in memory location 130
00 000 000

01 110 110    Stop

The actual programme here consists only of the bit sequences given at the beginning of the lines. The texts behind them are only commentary and serve you, the reader of the programme, as an aid to understanding. In order to be able to enter the programme, one first had to switch on the computer, of course. It was then in a random initial state: all memory locations and registers of the computer had random values. Pressing RESET eliminated this unfortunate state and set all values to 0. Now the commands were entered one after the other in binary via the toggle switches and confirmed by means of the DEPOSIT key or the DEPOSIT NEXT key. DEPOSIT NEXT was a convenience function. Instead of having to select each address before saving a value, DEPOSIT NEXT automatically jumped to the next memory address each time, so that the commands could be entered one after the other. Next, the two numbers to be added had to be entered. This was done by setting the toggle switches to 128 (binary 10 000 000) and pressing EXAMINE. The current memory content, i.e. 0, was now indicated by the fact that no LED lit up. A value could now be entered and saved with DEPOSIT NEXT. Memory location 129 was then automatically displayed. A value could now be saved here in the same way. Now that the programme and data had been entered completely, the programme could be started. For this, the programme counter was set to the beginning of the programme. Since the programme had been entered from memory address 0, the programme counter had to be set to 0 again in order to let the programme run from this address. To do this, all toggle switches were turned down and then EXAMINE was pressed. Now the programme could be started by pressing RUN. The extremely short programme, of course, had no running time worth mentioning. Immediately after triggering RUN, the programme was already finished, which was indicated by the WAIT LED lighting up. One could now read the result from memory location 130. This was done by entering 130 (binary 10 000 010) and triggering EXAMINE.
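To make the effect of these few instructions more tangible, here is a minimal sketch in Python that interprets just the five operation codes used in the listing above. It is an illustration of the principle, not an emulation of the Altair: the memory is shortened to 256 bytes and the two input values are simply set in code instead of being deposited via the switches.

# Minimal sketch: interpreting the five instructions from the listing
# (load accumulator, copy to B, add B, store accumulator, stop).
memory = bytearray(256)
# The programme from the listing, entered from address 0:
memory[0:12] = bytes([
    0b00111010, 128, 0,   # load accumulator from memory location 128
    0b01000111,           # copy accumulator into register B
    0b00111010, 129, 0,   # load accumulator from memory location 129
    0b10000000,           # add register B to the accumulator
    0b00110010, 130, 0,   # store accumulator in memory location 130
    0b01110110,           # stop
])
memory[128], memory[129] = 5, 7   # the two numbers to be added
a = b = 0    # accumulator and register B
pc = 0       # programme counter, set to 0 before pressing RUN
while True:
    op = memory[pc]
    if op == 0b00111010:                                     # load accumulator
        a = memory[memory[pc + 1] | (memory[pc + 2] << 8)]; pc += 3
    elif op == 0b01000111:                                   # copy accumulator to B
        b = a; pc += 1
    elif op == 0b10000000:                                   # add B to accumulator
        a = (a + b) & 0xFF; pc += 1
    elif op == 0b00110010:                                   # store accumulator
        memory[memory[pc + 1] | (memory[pc + 2] << 8)] = a; pc += 3
    elif op == 0b01110110:                                   # stop
        break
print(memory[130])   # 12 - the result that would be read back via EXAMINE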

The front panel interface of the Altair 8800 was a purely machine-level interface. What one saw and entered here was very directly the state of the machine, above all the state of the memory. The output corresponded to the bits in memory cells, the inputs corresponded exactly to the bits that were to be written to the memory cells and the individual commands corresponded one-to-one to the operations of the processor. By entering programmes and data from the front panel, it may have been possible in principle to enter arbitrarily complex programmes into the computer, but of course this was not really practical. The mode was more suitable for learning and understanding how the computer worked than for serious programming. In addition, the programme was only available in the main memory, which, in contrast to the PDP-8 with its magnetic core memory, was volatile. When the device was switched off, both the programme and the data disappeared.

Punched tape - programmes from the roll

Of course, front-panel programming was not the Altair’s only mode of operation. A teleprinter, for example, could be connected to the computer via retrofittable interfaces. As years before with the minicomputers and the time-sharing systems, the Teletype ASR Model 33 was again a popular device. This teleprinter had a keyboard that could be used to enter letters, numbers and control characters. It also had a print head that printed characters on continuous paper. The teleprinter was equipped with a paper tape reader and a paper tape punch. Characters could thus be punched onto paper tape in an 8-bit code and also read back from it. In its original use, the punch served not only to print out the characters coming from the remote station but also to record them on punched tape, and the tape reader could conversely be used in place of direct keyboard input. This system had proven itself for forwarding messages and could now also be used for loading programmes into the Altair. The fact that the teleprinter used an 8-bit code was of course practical, because the computer had an architecture in which each data word was 8 bits long, i.e. each addressable memory location stored 8 bits of data.

With the teleprinter it was possible to read in programmes from the punched tape. However, a fundamental problem arose: an Altair simply “knew” nothing about teleprinters or punched tape. It could only execute the programme stored in the working memory, but immediately after switching on, there was no programme there at all. Therefore, if you wanted to read a programme from the punched tape, you first had to enter a small programme into the computer via the front panel and start it. The task of this programme, called “Loader”, was to read data from the paper tape reader, copy it into the computer’s working memory and then execute it. This programme could then in turn use the capabilities of the teletypewriter, such as printing out results or processing user input. One important piece of software that was delivered in this way was itself a programming environment.

BASIC - Interactive programming

The connection of a teleprinter made it possible to use an interactive programming environment on the Altair, i.e. an environment in which one can write a programme and immediately execute and test it. Here, too, one can see parallels to the PDP-8. There, such an environment was available in the form of the FOCAL interpreter. FOCAL programmes were slow to run, but programming them was quite comfortable. Initially, there was no comparable programming environment for Intel’s 8080 processor. There were some compilers for high-level programming languages, but no interactive programming environment. Such an environment, and effectively the first operating system for the Altair 8800, was developed by Bill Gates and Paul Allen. Shortly afterwards, the two of them founded Microsoft with a few comrades-in-arms. The BASIC programming language implemented here by Allen and Gates was, of course, not their invention. The origins of BASIC were already mentioned in the chapter on time-sharing. When the programming language was introduced at Dartmouth College in the 1960s, it was used by students and staff at the college. These users sat in front of a teleprinter at the time, just as with Microsoft’s BASIC on the Altair. However, the Dartmouth users did not have a small computer next to them, of course, but were connected to the central time-sharing system DTSS. The Altair was of course not a time-sharing system4 and, unlike the DTSS, it used an interpreter5 rather than a BASIC compiler. From a usage perspective, however, BASIC programming on the DTSS and on the Altair were quite comparable, and in both cases the computer’s programmability benefited from the interactive capabilities of the programming language and from the interactive editor optimised for teleprinter use.

With the Altair’s basic 256 bytes, of course, BASIC could not be used. The memory had to be expanded accordingly. Gates and Allen managed to program very memory-efficiently, so that 4 KB of working memory was already sufficient to run BASIC.

Since BASIC on the Altair was an interpreter, BASIC commands could be executed without having to write a whole programme. The input PRINT 5+5 was executed immediately and the result was output directly. Creating a programme could also begin immediately by entering lines preceded by a line number. In many cases, however, you would probably not want to create a new programme but reuse an existing one. You may be surprised to learn that the first BASIC interpreter from Microsoft did not have any facility to read in programmes. There was neither a LOAD nor a SAVE command. But Microsoft had not simply forgotten these two commands. In fact, a loading routine was not even necessary, because Microsoft was able to take advantage of a peculiarity of the teleprinters. To load a new programme, a user simply had to type NEW to erase the previous programme from memory, then had to thread the punched tape with the programme into the punched tape reader and set the teleprinter to read the punched tape. The teleprinter then “typed” the BASIC programme instead of the user. For the interpreter, it behaved exactly as if the user had just entered it manually. Saving worked in a similar way. The user typed LIST, i.e. the command that printed out the complete programme, started the paper tape punch and then pressed the line feed key. The BASIC interpreter then executed the LIST command, i.e. output the complete programme, and the teletypewriter wrote it onto the punched tape in the process.

If you equipped your computer with BASIC from Microsoft, you had an interface that made it possible to use the device without having to know how the computer’s working memory was structured. It was no longer necessary to enter a programme in the processor’s machine language, apart from the loader. At the level of the programming environment, the source code of the programme was available line by line in the form of virtual objects that could be manipulated interactively. A BASIC programme could, of course, create virtual objects itself, i.e. it could be written to retrieve and display data in a way that presented it as objects of the world of use. In many places, however, a user of BASIC on the Altair still had to deal very directly with the technology of the computer and the teletypewriter. Although the loaded programme could be manipulated on the computer, the programmes themselves were not objects in the computer, but real-world objects, namely the punched tape rolls on which they were stored. If the user wanted to load or save a programme, he could not reference an object in the computer, but had to deal with the data stream stored on the paper tape and its processing in the paper tape reader and punch. Input data was also not available in the form of separate objects, because Microsoft BASIC of this generation had no concept of files that could have been loaded or saved. Input data, if not entered from the keyboard, was instead placed in DATA lines directly in the programme or came, like the programmes themselves, from punched tape rolls as a substitute for keyboard input.

Upgrades: Terminal and Cassette

Cassette in a cassette recorder - Image: mib18 at German Wikipedia (CC BY-SA 3.0) (exempted)

The early BASIC interpreter for the Altair was built entirely around the teleprinter. It was not only used for input via the keyboard and output on paper, but also as a reading and storage device for mass storage (punched tape) and as an externalised loading and saving routine. However, teleprinters were no longer the state of the art in terms of user interface in the mid-1970s and were therefore also quickly replaced on the Altair. Of course, Microsoft’s BASIC could also be used with a terminal, but this BASIC was not designed for terminals in the least and could not exploit their advantages:

  • BASIC had no way of positioning the cursor on the screen or clearing the screen content. As the BASIC command PRINT suggests, BASIC could only ever add something new.
  • There was no backspace functionality in today’s sense, because teleprinters naturally had no way of removing a character from the paper again. If you wanted to delete something, you had to do so by typing an underscore. If you wanted to type PRINT but accidentally typed PRR, you could “delete” the second R by typing an underscore, but the R and the underscore remained on the screen. An input of PRR_INT 5_6+_*6 was displayed exactly as typed, but was internally processed to PRINT 6*6 and accordingly output 36 (see the sketch after this list).
  • A terminal, of course, did not have a punch tape reader and punch. However, no programmes could be loaded and saved without these devices, which of course severely limited the usability of the BASIC interpreter at the terminal.
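How such underscore “deletions” could be reduced to the line the interpreter actually processes can be sketched in a few lines. The following minimal Python sketch is an assumption based on the behaviour described above, not original Microsoft code.

# Minimal sketch: treating the underscore as a "delete previous character" key.
def process_rubouts(raw_line):
    result = []
    for ch in raw_line:
        if ch == "_":
            if result:
                result.pop()      # the underscore removes the previous character
        else:
            result.append(ch)
    return "".join(result)

print(process_rubouts("PRR_INT 5_6+_*6"))   # PRINT 6*6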

Adapted BASIC versions addressed these problems. They now allowed programmes to be loaded and saved by means of the commands CLOAD and CSAVE. The medium for the programmes was then a music cassette in a cassette recorder connected to the Altair. The programmes were stored on the cassettes as shrill sound sequences. Apart from the change of medium, however, not much changed, because there was still no file system at all and thus no programmes as objects. In contrast to the tape drives of mainframes and minicomputers, such as the DECTape, there was no computer-managed index and no possibility of computer-controlled rewinding of the tape to specific points. This tedious task was entirely up to the user. Later BASIC versions added support for the backspace key, so that characters entered by mistake could be deleted again. However, the BASIC of the Altair never supported the programmed positioning of the cursor on the screen. So there were no programmes that could present objects on the screen in a spatially stable and manipulable way.

Floppy disks - more than just fast and random access

While the bare Altair was relatively inexpensive in its basic configuration, the costs quickly added up to a considerable sum when the computer was equipped with more memory, a serial interface and a terminal or a teleprinter. Computers equipped in this way were no longer to be found in private homes, but in companies, schools or universities. This was all the more true when the computer was equipped with a floppy disk drive, as this alone cost $2,000 in 1975, which translates to about $10,000 in 2021 dollars.

Altair 8800 with floppy disk drive - Image: Dr. Bernd Gross (CC BY-SA 4.0) (revised)

Above, under the actual Altair 8800, you can see an almost equally large housing in which an 8-inch floppy disk drive is installed. You may still be familiar with floppy disks. They were widespread in various versions as inexpensive, writable removable data carriers until the 2000s, before they were finally displaced by USB sticks. Floppy disks were basically cheap, small removable counterparts to hard disks. Normal hard disks consist of several magnetised metal discs that can be read and written by means of read-write heads, which scan or change the magnetisation of the surface. A floppy disk contained only one such disc and thus stored relatively little data (a few hundred kilobytes). Moreover, the disc was not a metal disc, but merely a magnetisable foil. Similar to the hard disk, a read-write head moved over the rotating disc and read and wrote data. However, the rotation speed and thus the writing and reading speed was of course considerably lower. Compared to music cassettes, however, the floppy disk was still far superior. The higher data rate and thus the faster loading of programmes has often been described as the great advantage of floppy disks. This increase in speed is of course not to be sneezed at, but it is not the most exciting thing for us here. What is much more interesting is that with the diskette, the user interface was now enriched by further objects.

Diskettes in comparison

The Altair’s floppy disk drive read and wrote 8-inch floppy disks, shown here on the left. These disks, as well as the 5.25-inch floppy disks (centre) later common in many office and home computers, were called “floppy disks” or “floppies” for short. The name is probably simply derived from the fact that the data carriers were rather flabby, or “floppy”, due to their flexible data-carrying foil and the likewise flexible plastic casing. The later, smaller 3.5-inch floppy disks (right), which were used until the 2000s, had a thicker plastic casing and a sliding metal shutter that protected the actual data carrier from contact. 3.5-inch floppy disks are rigid structures; the term “floppy”, although still sometimes used, no longer really fits here.

Microsoft adapted BASIC so that it could handle floppy disks. Instead of CLOAD and CSAVE, it was now possible to load and save with LOAD and SAVE. The difference seems small, but it is huge, because with the disk it was no longer necessary, or indeed possible, to rewind by hand. A programme was now loaded by specifying a previously assigned name. So with the support of floppy disks, the programmes themselves became objects. By means of CAT, a list of the stored programmes could be output. The programmes themselves could be referenced by name and thus loaded, saved or even deleted. Programmes became objects of the user interface. The user no longer had to worry about how the loading and saving was done technically. No more rewinding, no more threading and no more manual switching of readers and storage devices on and off. And not only the programmes became objects with the floppy disk BASIC. Inputs and outputs could also become objects, because BASIC now allowed data to be read from files as well as output to files. With this functionality, Microsoft BASIC surpassed the capabilities of the original BASIC from Dartmouth College, where it was possible to load and save programmes, but there was no concept of data files.

CP/M - The operating system

Microsoft did deliver a BASIC with disk functionality and rudimentary terminal support, but did not exploit the full potential of either technology on the Altair. The situation was different with the CP/M operating system. CP/M stands for Control Program/Monitor (later changed to Control Program for Microcomputers). The operating system was developed for the Intel 8080 processor by Gary Kildall in 1974, before the Altair was presented. According to the stories, Kildall offered Intel the operating system. However, the processor manufacturer was apparently not interested. CP/M was the first widely used operating system for home and personal computers. The Wikipedia entry on CP/M lists no fewer than 200 different types of computers on which the system ran. It has unfortunately been forgotten today even by many retro computing devotees, which may be because the system was purely text-based and was not found on, or at least not used on, the popular games-capable devices of the 1980s. Although no longer familiar to most, traces of CP/M’s user interface can be found even in current versions of Microsoft Windows. CP/M, however, never belonged to Microsoft. Kildall founded his own software company in 1976 with the euphonious name Digital Research. More about how the CP/M user interface ended up in Windows later.

Kildall made many interesting technical choices in CP/M. In principle, the system was executable on all computers with Intel 8080 and compatible processors. Apart from the compatible processor, however, the computers on which CP/M ran were very different. In order to still be able to use the same system and the same software, Kildall devised the Basic Input/Output System, or BIOS for short. This BIOS had to be adapted for each system architecture. The rest of the operating system could be reused and, perhaps more importantly, the software could be programmed independently of the computer. Instead of addressing the hardware directly, it was now a matter of using the functions of the BIOS. In practice, however, this computer independence did not go as far as would have been desirable. The BIOS did not abstract from the terminal used. Since different terminals had very different capabilities and used different control signals to, for example, clear the screen and position the cursor, one was therefore always dependent on using a suitable combination of software and terminal.

The operating system CP/M with running Microsoft BASIC

The peculiarities of different terminals were certainly not taken into account in the BIOS because CP/M itself hardly used the advantages of a terminal. The main interaction with the operating system took place via a command processor called CCP (Console Command Processor). Since Kildall himself used DEC computers and also developed CP/M on them, the mode of operation, right down to the commands, was based on TOPS-10 and was thus quite similar to the OS/8 of the PDP-8. However, CP/M was even simpler than OS/8. Like OS/8, CP/M had no user management and no facilities for time-sharing or multitasking. However, while OS/8 was very flexible in terms of storage media and supported pretty much everything from DECTape to hard disks, or could be made to support it by programming a driver, CP/M only supported floppy disks on which files could be stored. These files had a name of a maximum of eight characters in length, followed by a dot and up to three more characters, the so-called “suffix”, also called the “file extension”. The suffix was used to make it clear what kind of file it was. For example, COM stood for directly executable programmes, BAS for BASIC programmes and TXT for text files. CP/M supported several disk drives, which were addressed with letters: A stood for the first drive, B for the second and so on. With the command dir, the files on the diskette could be listed. CP/M did not yet support a folder structure, but this was not a problem due to the limited disk space.

In the picture above you can see what it looked like when CP/M was used on an Altair with a terminal connected. CP/M reported its readiness for input to the user in the form of a command line. A> meant that the commands entered referred to the diskette in the first diskette drive. The command dir *.bas on the command line caused all files with the file extension .bas, i.e. all BASIC programmes, to be displayed. The command type served to output a file to the screen, and the call mbasic hallo.bas started the BASIC interpreter from Microsoft, which was supplied with every CP/M, and executed the programme “hallo.bas” in it. This did nothing more here than output Hello world on the screen. The origin of CP/M’s ideas in the time-sharing systems of the 1970s was very noticeable in the system. It ultimately operated like a time-sharing system to which a teleprinter was connected. The user could type commands, the computer system would then write the output of the command’s execution on the screen, below that another command could be typed and so on.

CP/M’s command line interpreter was the template for Seattle Computer Products’ 86-DOS operating system a few years later. In contrast to CP/M, 86-DOS, in keeping with its name, did not run on the 8-bit processor 8080, but on Intel’s new 16-bit processor 8086 and its cheaper variant 8088, which was installed in the first IBM PC. 86-DOS was bought together with its developer by Microsoft and then marketed under the names PC-DOS for IBM and MS-DOS for compatible computers. Both CP/M and DOS prompted for input with A>. The concept of mapping drives to letters was adopted, switching between drives worked the same way and even the basic commands like dir and type were the same. Since these basic concepts were also adopted in the Windows command prompt, you can still experience these basic working methods of CP/M from 1974 today, 46 years later. But we will come to Microsoft’s DOS and the development of Windows later, in the chapter Windows and MacOS. Let’s stay in the 1970s for now and take a look at CP/M’s killer app. A killer app has nothing to do with shoot-’em-up games, nor with anything else dangerous. A killer app is an application programme that is so useful that it alone justifies the purchase of a particular system.

WordStar - The Killer App

The interaction method of CP/M’s command line interpreter CCP was in no way outstanding. It resembled the method of time-sharing systems of the 1960s and 1970s, but even lagged behind them in terms of convenience. Much more interesting in terms of the user interface were many application programmes that ran under CP/M, because CP/M not only provided the computer with a comfortable environment for programming in a wide variety of programming languages, but above all also created the basis for standard software. A good and very important example here is the programme “WordStar”, the first significant word processing programme for microcomputers. It already had many functions that are still familiar from word processing systems today, from text formatting to spell checking. It was thus one of the early killer apps for microcomputer systems. Besides its functional aspects, WordStar also had a very advanced user interface that is still outstanding in some aspects today.

The word processor WordStar under CP/M

This illustration shows the user interface of WordStar with a text already loaded. The screen is divided into two sections. The area below the line with the many minus signs enables text editing. An input cursor can be freely positioned here. It is possible to delete text and to insert new characters between the already existing parts of the text. Instead of a command line, which was still widespread at the time, WordStar used direct manipulation. WordStar is therefore necessarily dependent on a terminal with a screen, because the software exploited the advantage of terminals over teleprinters. Whereas a teleprinter could only ever print something new at the bottom of the paper, with everything previously printed moving upwards, on a terminal the cursor could be placed anywhere and writing could take place there. This gave the characters a spatial position on the screen at which they remained stable or from which they could be moved to any other position. Manipulating a character no longer meant printing everything anew; it became a spatial change at the character’s position.

The area above the dividing line showed status information and allowed access to the functionality of the software. This area in particular was exceptional at the time, because with the information displayed there it was possible to explore the programme and find out which inputs and manipulations were possible. This was not the case in the usual programmes of the time with a command line, or even in an editor like “vi”. There you had to know which inputs were possible or look them up in the manual. The programmes themselves gave no indication of this. Of course, not all of the programme’s functions could be listed in full in this control area, because even at this stage the word processing software had far more functions and formatting options than could be displayed. A system of menus and submenus was therefore used. CTRL+k, for example, opened the submenu for block and save commands.

The explorability of WordStar was outstanding at the time. By no means every piece of software that exploited the possibility of spatial layout on the screen explained its operation in this way. Despite the graphic possibilities, which are very limited by today’s standards, WordStar made it easy to understand the options on offer. In some aspects, this explorability was even better than with today’s word processors. You can understand this quite well with the example of the block operations I described to you earlier for “vim”. The action to be performed was to select an excerpt of text and move it to another place in the text. The basic prerequisite for this is, of course, the ability to mark text areas. At the time of WordStar, the computer mouse had already been invented, but was still unknown on a large scale, even among experts. However, by positioning the writing cursor and entering keyboard shortcuts, a block of text could be spatially selected and moved quite comfortably. The keyboard shortcuts were displayed on the screen in each case. Pressing CTRL and k changed the menu displayed in the upper area and indicated the possible “block commands”. A user learned there, among other things, that B stands for the start of a block, K for the end of a block and C for copy. WordStar thus made it possible to deduce from the displayed menus how such a selection could be made. Modern word processors, on the other hand, assume that you already know how to make a selection by mouse or keyboard. There are no discernible clues on the screen as to how this is to be done.

The role of the Altair

Whether the Altair 8800 should really be called the first personal computer is, as explained at the beginning, questionable. However, it makes perfect sense to see the Altair as the computer that enabled privately owned computers to make the step from experimental hobbyist machines to fully developed products that could be bought in shops. With the Altair and its many very similar imitators, such as the IMSAI 8080 or the Cromemco Z-1, the PC acquired its own characteristics. While at the beginning the system, with its programming via the front panel and a connected teleprinter, was still more of an affordable, slightly limited minicomputer for hobbyists, it evolved with the improvement of its input and output possibilities and, with the CP/M operating system and software such as WordStar, stepped out of the shadow of its big brothers. Users now had a system with a powerful user interface. Instead of dealing with the technical details of the machine, they now dealt exclusively with virtual and spatial objects created for them by the computer. The character of software also changed, from self-written, very specific programmes to application programmes as (commercial) products. It was the beginning of standard software.

The Altair was the starting signal for many developments. Its sheer existence inspired computer hobbyists and entrepreneurs. Microsoft has already been mentioned here, but Steve Jobs and Steve Wozniak, the founders of Apple, also cite the Altair as their inspiration. You can learn more about the resulting products in the following chapters.

The 1977 Trinity

Although the Altair 8800 of 1975 can be called the first personal computer, 1977 is much more likely to be the year when the era of personal computers really began. For in that year, three computer models came onto the market that, even on the face of it, were much more like what we would call a PC today, and which shaped development for many years. The Commodore Personal Electronic Transactor 2001 (PET for short), the Apple II and the TRS-80 from the American electronics chain Tandy/Radio-Shack were described a few years later by the computer magazine “Byte” as the so-called “1977 Trinity”.

Commodore PET, Apple II (with third party screen) and TRS-80 - Image: Springsgrace (CC BY-SA 4.0), detail, brightness corrected

Pictured above are the three computers. Unlike the Altair 8800, they no longer look like a shrunken minicomputer with toggle switches and light-emitting diodes, but already have today’s look and feel of a PC with a screen, a central processing unit, storage devices and a keyboard. The components were more or less integrated in the three computers. The TRS-80 was the most modular computer, with a separate base unit, screen and keyboard; the Apple II had a combined base unit and keyboard - a popular design until the 1990s - and the Commodore PET, shown on the left, had everything in one piece. Not only the keyboard, central unit and screen, but even a cassette recorder as data storage were integrated here. Of the three devices introduced in 1977, development of the Apple II began relatively early, in 1976. The Commodore PET and the TRS-80, on the other hand, were fairly rapid developments. The PET was the first of the three computers to be presented to the public, in January 1977, but did not come onto the market until October of the same year and was not actually available until the end of the year. By that time, the TRS-80 had already been available since August and the Apple II since June. So once again, it’s difficult to pick the first one.

The three computers are quite similar in terms of operation. In principle, they operate like a fully equipped Altair 8800 with an integrated terminal and a directly built-in BASIC interpreter. The Apple II was an exception in that it was the only one of the three computers with graphics capability, i.e. it could not only output text characters on the screen, but also lines, circles and other graphic objects. The TRS-80 and the PET had text-only output. However, because the PET had many special graphic characters, including all kinds of line and pattern characters - the so-called PETSCII characters - it was possible to create applications and games that almost had the feel of real graphic output. The TRS-80 and Apple II had a slightly modified typewriter keyboard. The first PET, on the other hand, had a completely impossible keyboard with tiny keys in a strange arrangement; it looked more like a bar of chocolate than a serious keyboard. Of course, this shortcoming was also known at Commodore. All subsequent models of the computer therefore had a very solid typewriter keyboard, the only one of the three that even had a numeric keypad. However, the built-in cassette recorder had to make way for this and henceforth had to be connected externally if required.

The three computers were basically aimed at private individuals, schools and small companies, but were received differently due to their different market positioning and features. The TRS-80 was initially the most successful of the three. Its advantage was that it was comparatively cheap and available everywhere through the US-wide network of Radio Shack stores. Because of its distribution in these electronics shops, it naturally found favour with electronics hobbyists. The Commodore PET was available in various versions, which differed mainly in memory and in the screen output. The variant with eighty characters per line was aimed primarily at office use; the cheaper forty-character version with larger letters was especially popular in schools. The Apple II was considerably more expensive than the TRS-80 and the PET, but, as mentioned, it was also the only one able to display graphics. In early versions of the Apple II, however, because of the way the colour signal was generated, one had to choose between a sharp display of text on a monochrome monitor and a colourful but not easily readable display on a colour monitor. Apple's advertising was mainly aimed at the home market. Parents were told that their children would only be fit for the world of tomorrow if they owned an Apple II. Its prevalence in schools was accordingly quite high - and not only in the USA, but also in Europe. The Apple II remained a successful product for a long time. It was built, as the fully compatible Apple IIe, until 1993.

|  | Commodore PET | Apple II | TRS-80 |
| --- | --- | --- | --- |
| Presented | January 1977 | April 1977 | August 1977 |
| Available | December 1977 | June 1977 | August 1977 |
| Price | $795 (2021: $3,520) | $1,298 (2021: $5,750) | $600 (2021: $2,660) |
| Characters per line | 40/80 | 40 (80 with additional card) | 64 |
| CPU | MOS 6502, 1 MHz | MOS 6502, 1 MHz | Zilog Z80, 1.774 MHz |
| Plug-in modules | Yes | No | No |
| Cassette | Yes | Yes | Yes |
| Diskette | Yes | Yes | Yes |
| BASIC | Microsoft | Apple, from 1979 Microsoft | Tiny BASIC, later Microsoft |
| Operating system | None | DOS, ProDOS, CP/M (with additional card) | TRSDOS, NewDOS80 |
| RAM (typical) | 4 KB, 8 KB | 4 KB | 4 KB |
| RAM (maximum) | 32 KB | 64 KB | 16 KB |
| Graphics | None | 40 x 48 with 16 colours, 280 x 192 with 2 colours | None |

All three computers had a BASIC interpreter built into ROM (Read Only Memory). So you did not have to load BASIC first, as with the Altair, but simply switched the computer on, and the programming environment was immediately available. The BASIC of the Commodore PET came from Microsoft. Apple and Tandy initially had other, more limited BASIC dialects, but then switched as well, so that in the end all three computers used a BASIC produced by Microsoft. Unfortunately, this did not mean that BASIC programmes were interchangeable between the computers, because only the core language was identical. More specific functions, such as graphics output or access to the external interfaces, differed between the three computers. The BASIC programmes of one computer were therefore incompatible with the BASIC interpreters of the others. Even beyond the BASIC dialect, the computers were not software-compatible: programmes developed for one type of computer only ran on that computer. Not even the encoding on the disks and cassettes was identical, so files from one computer could not be read, let alone edited, on another.
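
Something as simple as clearing the screen already illustrates how the dialects diverged. The following lines are a hedged sketch based on the commonly documented dialects, meant as an illustration rather than a complete compatibility list:

Commodore PET (Commodore BASIC): PRINT CHR$(147)
Apple II (Applesoft BASIC): HOME
TRS-80 (Level II BASIC): CLS

A listing peppered with such machine-specific commands could not simply be typed into one of the other computers and be expected to work.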

When the three computers were launched, the music cassette was the medium on which programmes were delivered, from which one's own programmes could be loaded and on which they could be saved. That, of course, was not very practical. Floppy disk drives therefore quickly came onto the market for all three computers. Especially with the PET and the Apple II, disk-based use almost completely displaced cassette-based use. With the availability of floppy drives came, of course, the need for a corresponding operating system. Such an operating system optimised for disk use was called a Disk Operating System, or DOS for short.

Both Apple and Tandy called this software DOS (DOS and ProDOS at Apple, TRSDOS and NewDOS/80 at Tandy). Today, DOS usually refers to Microsoft's MS-DOS operating system, but at this point we are still four years too early for that.

However, one should not think of these operating systems as what we know today from our PCs and laptops, or as what was usual in time-sharing operation. The operating systems of the Apple II and the TRS-80 merely extended the built-in BASIC with routines for disk access. Commodore even dispensed entirely with a software operating system in the true sense and instead built the corresponding functionality directly into the floppy disk drives.

Conclusion

As important as these devices were for the early history of the personal computer, they were comparatively unimportant from a user interface perspective. While they made computing accessible to people beyond the do-it-yourselfers and computer enthusiasts, in terms of user interface they offered no advantage over a fully equipped Altair 8800 running Microsoft's Altair BASIC. In all cases, users operated a programming language that had been developed for teleprinter-based time-sharing systems and was merely extended to handle local data carriers.

Small Computers in the Office

At the end of the 1970s and the beginning of the 1980s, electronic data processing was already playing an increasingly important role in many companies, but the desks in the offices were still mostly free of computers. The results of computer evaluations were mostly processed in paper form. Where there was interactive computer use, it took place at terminals connected to an in-house mainframe or a powerful minicomputer such as a PDP-10. At most, a development department or a laboratory had its own minicomputer. When computers became affordable for hobbyists and private individuals at the end of the 1970s with the Altair and the 1977 trinity, many a small business also discovered the technology for itself and optimised its administration with a small computer. Previously, a computer had been far beyond anything this user group could afford. Large companies that had long operated large computing facilities, however, saw no point in the new, small computers. They were considered more of a toy than a serious working tool. This assessment was soon to change. Interestingly, the change of mind did not come from better or faster hardware, or from company bosses wanting to be part of a supposed revolution. Rather, a few software applications that were available for the small computers were responsible. These programmes alone justified the purchase of the small computers for office use.

Apple II and VisiCalc

Perhaps the most important of these new programmes, which were only available on the new small computers, was “VisiCalc”, developed by Dan Bricklin and Bob Frankston in 1979. Bricklin described the programme at the time as “a magic sheet of paper that can perform calculations and recalculations”. It was the first spreadsheet software in the modern sense. Looking at the software today, the similarity to Microsoft's Excel is striking. After starting the application, most of the screen is taken up by a worksheet. Texts, numerical values or formulas can be entered in the cells of the table. Each cell is identified by its column (a letter) and its row (a number) and can be referenced within calculations. The values in a cell can thus be used in formulas. If the referenced values are changed, all cell contents calculated from these values are updated automatically. Today, you know all this from programmes such as Excel, LibreOffice Calc or Google Spreadsheets.
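
A tiny, hedged example illustrates the principle; the cell names and the @SUM notation are given here merely to convey VisiCalc's style, not as a verbatim transcript:

B1: 120              (expenditure of person 1)
B2: 85               (expenditure of person 2)
B3: 40               (expenditure of person 3)
B4: @SUM(B1...B3)    (the total)

Changing the value in B2 immediately changes the total shown in B4, without any further command having to be issued.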

VisiCalc on the Apple II

The outstanding thing about VisiCalc was not that calculations could be made or that the results were presented in tabular form; other software products could do that too. What convinced users was the interactivity of the software. The best way to understand this is to consider how a simple calculation could be done with and without VisiCalc.

If a calculation like the one seen above was to be carried out with classical software on a time-sharing system, the input data had to be prepared first. For this purpose, a file was created in which the expenditures made by the individual persons were stored in a defined notation. In another file, it was then defined how the data was to be processed, in this case that a sum was to be formed. The evaluation software was then started on the command line and, according to this configuration, a tabular presentation of the calculation and the result was displayed. If a value was to be changed, the file with the input values had to be edited and the evaluation had to be run again. With VisiCalc it was completely different: both input values and calculation rules were visible as objects on the screen. They could be created and adapted by the user himself, without programming interventions or complex editor operations, simply by setting the input focus on the corresponding cell. If changes were to be made, only the value or formula had to be changed; all dependent values updated immediately. No programmes had to be changed, there was no complex coding and the “unfriendly” command line did not have to be used either. The ease of modification, the permanent visibility of input, output and processing rules, and the rapid updating can hardly be overestimated, as they enabled usage scenarios that were previously inconceivable. In credit calculations, for example, it was easy to play with interest rates and repayments. The changes were immediately visible, allowing an exploratory approach that would have been so cumbersome with the old solution that it was effectively impossible.

VisiCalc ran on the Apple II and initially only there. The fact that the software was developed precisely for this computer had little to do with the Apple II being particularly well suited for it. It would presumably have worked just as well on a Commodore PET or a TRS-80. Bob Frankston developed the programme for the Apple II mainly because the MIT time-sharing system he used for development offered a cross-assembler that could generate machine code for the Apple II. He could thus comfortably use the powerful computer for programming and then run the programme on the cheap personal computer.

The VisiCalc user interface

If you look at VisiCalc today on a real Apple II or in an emulator, you immediately get an idea of how it is operated. The way cells are addressed and calculations are carried out is almost identical to a current Excel. However, you will probably first have problems moving the cursor, because early Apple IIs only had two keys for cursor control. Consequently, these keys could steer either horizontally (right and left) or vertically (up and down), and in VisiCalc the space bar switched between horizontal and vertical movement. In the screenshot above, the minus sign in the upper right corner shows that the horizontal mode is currently active. The peculiarities of cursor movement in VisiCalc were strange, but do not pose too many problems for a user. Bigger problems arose when trying to load or save a file, because, quite in contrast to the WordStar discussed in the Altair chapter, VisiCalc showed the user only very inadequately which options were available. As exciting and innovative as the software was in terms of functionality, it was a child of its time in terms of the accessibility of its functions. One had to either know the necessary inputs and commands or consult the manual to operate the programme. The example of loading a file shows this well:

To load a file, one first had to put the programme into command mode. However, nothing on the screen said how this was done. You had to know, or look up, that you had to type / for it. Once this was done, VisiCalc showed on the first line of the screen which commands could be entered. One learned that one could choose B, C, D, F, G, I, M, P, R, S, T, V, W or -, and nothing more. With this information alone, one was of course not much wiser and had to experiment or read up again. If you did, you learned that you could now type S for “storage” and then L for “load” to load a file. Bricklin and Frankston had also assigned quitting the programme to the storage commands: the complete command for quitting was /SQ.
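
Summarised as a keystroke sequence, loading a worksheet therefore looked roughly like this (the file name PLAN.VC is a made-up example):

/          enter command mode
S          choose the storage commands
L          choose “load”
PLAN.VC    type the file name and confirm with Return

Nothing on the empty worksheet hinted at this sequence; one simply had to know it or look it up.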

The fact that Bricklin and Frankston were able to realise a programme as complex as a spreadsheet on a relatively weak computer like the Apple II should not be underestimated. Spreadsheets are so taken for granted today that the innovative aspect of the concept is no longer obvious: with VisiCalc, numbers, texts and formulas were written into cells. The data were thus not merely values in variables that could be output on demand, but were permanently present on the screen as spatially manipulable and referenceable objects. Changes to cells and values were not only displayed immediately, instead of having to be requested again by command; all dependent cells were also updated automatically. The entire visible sheet was completely recalculated and was therefore always up to date. One never had to trigger a recalculation manually. VisiCalc thus not only offered spatial objects, it above all fulfilled the potential of responsiveness. The highlight of the software lay in the combination of these two potentials. The fact that the information in the worksheets was stored in spatially unambiguous cells that could be addressed and referenced by coordinates also allowed non-experts to “programme” calculations without having to know a complex programming language or how the computer worked inside. To make such responsive updating possible, Bricklin and Frankston had to work out how to store the worksheet data cleverly in memory so that recalculation and updating were fast enough not to cause a noticeable delay in operation.

This little miracle of interactivity did not need a powerful experimental computer in a laboratory; it ran on a “cheap” Apple II with 32 KB of RAM, and only on this device. The expensive time-sharing systems in the data centre had nothing comparable. VisiCalc alone was therefore reason enough for some companies to purchase Apple IIs for office use.

CP/M on the Apple II - The computer within the computer

In the chapter on the Altair 8800 you got to know WordStar. This word processor was also an application that could justify the purchase of a computer for office use, because WordStar enabled an office worker with a personal computer to type and format a text himself, without a typewriter and without access to a secretary typing up a recorded dictation. However, WordStar had not been developed for the Apple II but for the CP/M operating system, and CP/M did not run on the Apple II because the Apple II did not have a processor compatible with the Intel 8080. Companies that wanted to purchase personal computers for office use were thus faced with a dilemma: for the Apple II there was VisiCalc, but the other attractive programmes - apart from WordStar also the dBase database system and many programming environments and compilers - ran under CP/M.

CP/M on an Apple II

The solution to this problem came from a different quarter than one might expect. Technicians at Microsoft developed a plug-in card for the Apple II called the “SoftCard”. The card had its own Zilog Z80 processor, compatible with the Intel 8080. The rest of the circuitry on the card served to “bend” the hardware of the Apple II so that it looked like a typical CP/M computer to the card's processor. In a sense, it was a computer within a computer. Plugged into an Apple II, the card thus made it possible to run CP/M on an Apple device. An appropriately adapted CP/M and Microsoft's BASIC interpreter were supplied (MBASIC.COM and GBASIC.COM in the screenshot above), so that BASIC software developed for the Altair or other CP/M computers could run immediately on the Apple II. If the Apple II was additionally fitted with a card that provided eighty characters per line on the screen and extra working memory, WordStar and dBase now ran on it as well. The extension did not remain a niche product, but was purchased en masse. The SoftCard was a great success: in 1980 it was Microsoft's biggest source of revenue, and through the card and later replicas the Apple II became the most widely distributed CP/M machine.

IBM’s path to the personal computer

With the Apple II and CP/M, the first personal computers moved into offices and companies. Apple, Commodore, MITS (the maker of the Altair) and the other personal computer manufacturers were as new to the corporate world as they were to private individuals. Of course, there had been computers in commercial enterprises before, but completely different companies were active in this market. In addition to Digital Equipment (DEC) with its popular mainframes and minicomputers, there were computers from Bull, Nixdorf and, above all, from the industry leader IBM. The company was virtually synonymous with large computing systems in the military, administration and universities. Countless developments in computer technology, such as hard disks, multitasking and parallel computing, can be traced back to IBM inventions or, if not invented by IBM, were at least brought to market maturity there. IBM's computers were mainly large computing systems, but minicomputers were also on offer. The System/3 Model 6 minicomputer, for example, was widespread in the 1970s.

At the end of the 1970s and the beginning of the 1980s, the new companies entered IBM's traditional field with small, flexible products. It took a few years for IBM to respond and bring out its own personal computer. Popular computer history interprets the fact that IBM did not offer a personal computer until four years after Apple, Commodore and Tandy, and six years after the Altair 8800, as meaning that IBM did not like small computers and only considered large computers worthy of attention. This portrayal was certainly supported by Apple's marketing. At the introduction of the Apple Macintosh in 1984, for example, Steve Jobs claimed:

In 1977, Apple, a young fledgling company on the west coast invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business.

Steve Jobs’ typical “reality distortion field” must have been at work here7. IBM's supposed disdain for small computers found its way into many a book and documentary. But it was not quite that simple. It is true that IBM did not have private individuals in mind with its computers; IBM built computers for companies, not home computers. But that did not mean that the company saw no point in small computers. It cannot have been that way, because IBM had been developing a mobile computer since 1973 and had been selling it since 1975. This computer, the IBM 5100, is almost forgotten today. It was a portable computer with a fold-out keyboard, a built-in screen and a tape drive for data and programmes. Why IBM did not become the market leader in personal computers with this machine is, of course, ultimately a matter of speculation, but some reasons suggest themselves.

Among other things, the computer was extremely expensive at $16,000 (equivalent to $79,800 in 2021 purchasing power). In my opinion, the software, or rather the lack of it, also played an important role. When the first personal computers entered the corporate world around 1979 and 1980, this new type of computer had emancipated itself from earlier computer generations not so much in terms of hardware, but primarily through software such as VisiCalc, WordStar and the like. Computers like a fully expanded Altair 8800 with floppy drive, terminal and CP/M, a Commodore PET, an Apple II or a TRS-80 were not mere continuations of the devices of the 1960s and 1970s. They were neither slimmed-down, scaled-down mainframes nor cheaper minicomputers, but a new concept of small, self-sufficient computers built on microprocessors, with their own software and their own modes of use.

IBM 5100 - Image: Sandstone (CC BY-SA 3.0), background removed

The 5100 was not such a computer. It was a compact, mobile computer that could run programs written for IBM minicomputers and IBM mainframes. To make this possible, IBM built in interpreters for the programming languages APL and BASIC. It was possible to switch between the two programming languages by means of a switch. The APL interpreter corresponded to that of the System/370 mainframes, and the BASIC dialect was the one also used on the System/3 minicomputers.

IBM System/23 Datamaster - Image: Deskthority WIKI, background removed

In 1975, then, IBM's 5100 was more of a single-user mini-mainframe than a personal computer. But it did not stop there, because the 5100 was followed by a whole series of other computers. Worth a look is the System/23 Datamaster. This computer, sold from July 1981, was a desktop machine with two integrated floppy disk drives. It could be equipped with many typical office applications such as accounting software and word processing. The Datamaster was, typically for IBM, expensive and aimed at the typical IBM clientele, i.e. corporate customers. The focus, however, was not on companies with a large computing centre, but mainly on small companies without experience in electronic data processing. The computer came with a very easy-to-understand manual, and its software was designed to work without complicated IT terminology. This computer could well be called a personal computer. However, with a price tag of $9,000, which equates to over $26,500 when adjusted for inflation to 2021, it was far more expensive than typical personal computers, and with its ability to serve two users on two consoles simultaneously, its capabilities went beyond what was usually attributed to a personal computer. The Datamaster remained in IBM's range for a few years, but was eclipsed only a month later by the IBM 5150, the “IBM Personal Computer”.

The IBM PC and an operating system from Microsoft

In August 1981, the time had come: IBM launched the “IBM 5150 Personal Computer”, which established the term “personal computer” and whose direct successors are still in use today. Almost every PC you can buy today is compatible with this original PC, so in principle you could still start such a computer with the original DOS and run software from 1981. From 2006 to 2020, all Apple Macintoshes also used this architecture.

IBM 5150, the IBM Personal Computer - Image: Museo Nazionale della Scienza e della Tecnologia “Leonardo da Vinci” (CC-BY-SA 4.0)

If you compare the technical data of the 5150 with that of typical computers of the time, it does not seem particularly outstanding. An Intel 8088 was used as the processor. It worked internally with 16-bit data but had an 8-bit data bus, so to read 16 bits from memory the processor had to perform two load operations. IBM could also have used the Intel 8086 with its 16-bit data bus; memory access would then have been faster, but the memory would also have been more expensive. The 8088 used an address length of 20 bits. Since each address denoted 8 bits of data, i.e. one byte, the IBM PC could theoretically have addressed 2²⁰ bytes (1,048,576 bytes), i.e. 1 MB of working memory. However, a PC bought in 1981 was nowhere near that well equipped. You could order the computer with 16 KB or with 64 KB of RAM. An expansion up to 640 KB was possible, but of course also very expensive.

At that time, it was possible to purchase and operate an IBM PC without a floppy disk drive and without an operating system, because a Microsoft BASIC was built directly into the hardware of every IBM PC and of the successor PC XT. If the computer was switched on without a floppy disk drive, or without an operating system diskette inserted, Microsoft's Cassette BASIC started automatically. So, as with the Apple II or a Commodore PET, programming could begin immediately. With the built-in BASIC, programmes could be read from cassette and stored on it; for this purpose, the PC had a socket on the back to which a commercially available cassette recorder could be connected. Hardly anyone used the computer like this, however, because it was far too expensive for such a limited mode of operation. Software for the computer was almost exclusively offered on diskette rather than on cassette. The vast majority of PCs were therefore delivered with at least one floppy disk drive; two drives were very common and simplified everyday use. Incidentally, the floppy drives of the IBM 5150 were not particularly special compared to the competition either. The original drive, which could only write to one side of a diskette, accommodated 160 KB per diskette. By comparison, the Apple II had already been storing 140 KB on one side of a floppy disk for several years, so the two were roughly in the same performance range.

Compared with other personal computers, an IBM PC was expensive. The cheapest version, with 16 KB of RAM, CGA graphics (see below) and no floppy drive, cost $1,565 (equivalent in 2021: $4,620). With this variant of the computer, however, one could do little more than program, save and load one's own BASIC programmes, and that could be had cheaper elsewhere. A fully equipped computer with an operating system, floppy disk drive and 64 KB of RAM easily cost twice as much. For private individuals this computer was unaffordable and ultimately probably not intended anyway, because IBM's target market had always been commercial enterprises, which had known the company for a long time and valued it above all for its maintenance contracts.

The other technical features of the IBM PC also matched its business orientation. The computer, for example, had no sound output to speak of. There was a loudspeaker, but its job was mainly to beep when an incorrect entry was made. For the display, there were two options. With the “Monochrome Display Adapter”, MDA for short, the computer could only display text in two brightness levels. On the matching monitor, this produced a sharp, flicker-free image with good character rendering in green or amber lettering on a black background. The MDA card also offered an interface for connecting a printer, which was of course more important in the corporate environment than at home. The alternative to the MDA adapter was the Color Graphics Adapter, or CGA for short. The CGA adapter also allowed the display of 80 characters per line of text, although the character rendering was a little coarser. On the other hand, its text mode had sixteen colours which, in contrast to the Apple II, could be displayed legibly on appropriate monitors. CGA, of course, also had real graphics modes: 640 x 200 pixels could be displayed monochrome and 320 x 200 pixels in four colours. Unfortunately, IBM had opted for very clumsy colour palettes here. Really beautiful graphics were therefore not possible in CGA, and for games an IBM PC with CGA output was certainly not ideal. Of course there were games for the IBM PC with CGA graphics, but the PC did not become a real platform for home games until years later.

An operating system is needed!
The first version of Microsoft’s DOS under the name PC-DOS

A well-equipped IBM PC with one or two floppy drives needed an operating system. With the Datamaster in the same year, a BASIC interpreter with integrated commands for disk handling had still been the main interaction component. With the PC, the decision was made to use an operating system that was independent of a programming environment. IBM did not have an operating system for the computer and did not develop one itself either, instead licensing a system from Microsoft - which Microsoft did not actually have at that point either. The programmer Tim Paterson had developed an operating system for the 8086 line of Intel processors, borrowing the operation and some of the central concepts from Digital Research's CP/M. His operating system, 86-DOS, looked confusingly similar to CP/M. Many of the commands known from CP/M also worked under the new operating system, and the similarity extended to the architecture level, which meant that CP/M programmes could be ported to 86-DOS relatively easily. Microsoft bought the operating system and licensed it to IBM under the name PC-DOS. Many stories, some of them contradictory, surround the question of why IBM ordered an operating system from Microsoft and did not use the established CP/M from Digital Research. Depending on the teller's attitude, Bill Gates of Microsoft appears as the cunning businessman who forced his competition out of the market with an inferior product, or it is claimed that Gary Kildall of Digital Research snubbed the IBM executives and was therefore himself to blame for missing his chance. Feel free to read these stories elsewhere, but don't believe everything you read, because five different narratives give seven different versions of the same story. The truth is hard to ascertain, but probably much more boring than the exciting stories anyway.
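
How close the two systems felt in everyday use can be sketched with a few typical commands; the file names are made-up examples and exact spellings varied between versions, so this is an illustration rather than a reference:

CP/M:    DIR    TYPE LETTER.TXT    ERA LETTER.TXT    REN NEW.TXT=OLD.TXT    PIP B:=A:LETTER.TXT
PC-DOS:  DIR    TYPE LETTER.TXT    DEL LETTER.TXT    REN OLD.TXT NEW.TXT    COPY A:LETTER.TXT B:

Anyone who had listed directories and copied files under CP/M could sit down at a PC running PC-DOS and feel almost immediately at home.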

Microsoft's DOS was henceforth distributed with every new IBM PC that had a floppy drive. It is generally considered a stroke of genius on Microsoft's part that IBM did not obtain exclusive rights to the operating system at the time; Microsoft was also allowed to license it to other companies, and did so. Since the IBM computer consisted largely of standard components, many clones (the so-called IBM compatibles) soon appeared on the market, on which DOS, now under the name MS-DOS, could run.

The software makes the difference!

It is worth noting that while the early personal computers had operating systems very much in the old teleprinter style of the time-sharing systems, the triumph of the personal computer in offices was based on a set of applications that went far beyond this command-line style. This was already the case with the Apple II and the CP/M machines. VisiCalc, WordStar and dBase all excelled in fulfilling the potential of interactive computer systems to a far greater degree than the operating systems on which they ran. All three applications offered their users a spatial user interface with objects that were permanently displayed on the screen and could be edited at their display location. Compared to the user interfaces of today they may seem old-fashioned and complicated, but much of what exists today was already laid out here. They far eclipsed the old time-sharing interfaces with teleprinters or glass terminals. The IBM PC and its operating system MS-DOS or PC-DOS brought nothing new in terms of user interface. The operating system was very much based on CP/M, which in turn had inherited its characteristics from the mainframe time-sharing system TOPS-10. Working with the command-line interpreter of DOS therefore felt like using a mainframe via teletype. The possibilities of arranging objects spatially on the screen were not used. Although the computer with a CGA card could be operated in graphics mode, and in monochrome even at quite a decent resolution for the time, the operating system and most applications made little use of this. Graphics output was used in games and in applications for displaying content, such as a chart in the spreadsheet “Lotus 1-2-3”.

IBM's computers and the many low-cost compatibles became a huge success in the business world. The popular business software from the Apple II and CP/M world was ported, so VisiCalc, WordStar and the like were also available under DOS. Over the years, they were displaced by competing programmes that benefited above all from the graphics capability and from the possibility of equipping the computer with more memory. WordStar remained on the market for quite a long time, but faced competition from WordPerfect as early as 1982 and from Microsoft Word in 1983. VisiCalc quickly lost its importance. Microsoft's first attempt at a spreadsheet, “Multiplan”, did not find many followers in 1982, but Lotus 1-2-3, which appeared in 1983, quickly outstripped VisiCalc. Within a very short time it became as synonymous with spreadsheets as VisiCalc before it and Excel today. The big new feature of Lotus 1-2-3 compared to VisiCalc was that it could display charts and business graphics.

The amount of software available for the IBM PC grew extremely quickly. In addition to many programmes from the office and corporate sector, all kinds of development environments quickly became available. To give an overview of all the software products worth mentioning from the early IBM PC era would go too far at this point. However, it can be quite rewarding to look at old software from that time as an example. Its user interface naturally seems old-fashioned today, but sometimes the old software had interesting features that one would like to see again.

Microsoft Word 1.0 with two windows looking into the same text

As an example of an interesting user interface, let us look at the early versions of Microsoft Word. The first version of Word appeared in 1983. The illustration shows this first version with its very typical user interface. From version 1 to version 5, Word used a design style that Microsoft employed as the standard interface for many of its programmes. At the top was the content area, showing the text currently open for editing. At the bottom of the screen there was always a command area; pressing ESC took the user to this area. Each word after “COMMAND:” corresponded to a function or a submenu, which could be called up by typing its initial letter. To copy a marked block of text, for example, one typed ESC and then C. Alternatively, the commands could be selected with a mouse without first pressing ESC. In 1983, however, hardly anyone had a mouse; it only began to spread as a standard input device a year later with the introduction of the Apple Macintosh, and it took until the 1990s for it to become standard on IBM PCs and compatibles.

The most remarkable thing about the early Word versions was not the command area. It worked and made the programme's functionality accessible, but it was nothing really special. Some of the software's functions, however, were exciting. A very interesting one was hidden behind “Window”. Did Microsoft already have user interfaces with windows at this early stage? As you can see in the illustration, Word did indeed allow the screen to be divided into several areas, which Microsoft called “windows”, and to display different files in them. That in itself is nothing special; every modern word processor offers this possibility. What is exciting, however, is that two such windows could also display the same document. The illustration above shows the same file “PB.TXT” in both windows. Two windows into the same document make it possible, for example, to display the beginning of the text in one area, where you might announce what you are going to write, while in the other area you write further down in the document and fulfil the promises made. When I found this function while exploring the old Word version in an emulator, I was thrilled. I would like to have such a function in current word processors. I was all the more surprised when I then found it in later Word versions too - in fact in all versions of Word up to the present day. However, the function is nowhere near as prominent as it used to be, which is why I had never noticed it. You can find it in current Word versions on the “View” ribbon under “New Window” or “Split”, and in older versions in the corresponding menu.

Also very interesting were the cut and paste functions. The corresponding commands were Copy, Delete and Insert, with Delete corresponding to today's “cutting”. What was remarkable was what happened when one marked a block and then called Copy or Delete: one was asked to give this snippet a name for the “Glossary” (in German roughly “Textbausteine”). So, unlike today, there was not a single clipboard that holds only one content at a time, with each new content overwriting the previous one, but a real collection of text snippets that remained available. This collection could be viewed in the software, and any snippet from it could be inserted into the text; for this, one had to know the name that had been given or select it from the list. Moreover, the glossary, i.e. the collection of snippets, was not volatile, unlike the modern clipboard. It could be saved independently of the document and reused the next time the word processor was used. Such a collection of snippets, with its richer logic of copying and pasting, corresponds much more closely to the typical handling of texts than today's handling of the clipboard adopted from Apple's “Lisa”, and it is what I would like to see in modern word processors - and, lo and behold, this functionality still exists in modern Word versions (under the name “Quick Parts”). However, it has since lost its connection with the clipboard and its operations.

The PC as a home computer?

Today, the vast majority of computers you have at home that are not smartphones or tablets are direct descendants of the IBM PC. At that time, at the beginning of the 1980s, the IBM PC was still rather uninteresting for the home user. A computer suitable for home use had to be cheap, perhaps enable programming, but it had to be suitable above all for games. With good equipment, the IBM PC was very expensive, but still not very suitable for gaming, and with poor equipment it was still expensive, but then completely unattractive in every respect. Those who wanted to play games were better off with one of the game consoles that were emerging at the time, and those who wanted to use a computer for occasional games, for their own programming experiments and for the occasional word processing could do more with an Apple II. Also interesting were a VIC 20 (sold in Germany under the name VC 20) or an Atari 800. These two belonged to the new computer class of home computers, whose best-known representative was to become the Commodore 64, or C64 for short, and which we will take a closer look at in the coming chapter.

The Home Computers of the 8-Bit Era

With home computers, we are entering a realm of computer history that today has many enthusiastic followers under the catchword “retro computing”. Retro fans tinker with the old machines, write new software and games for thirty-year-old computers and reminisce about their childhood and youth, when they took their first steps in the computer world with these machines, wrote their first own programmes and traded the latest games with their buddies in the schoolyard. Many books on computer history focus precisely on this period, describe all the devices in detail and tell stories and anecdotes about the manufacturers, the creation of the devices and the user scene of the time. In this book, however, this chapter will be one of the shortest. There is a personal and a professional reason for this: the personal reason is that I am not a gamer. The professional reason, and probably the more important one, is that although these devices were the entry point into the computer world for many, they brought little that was new in terms of user interfaces.

In 1981, when the IBM PC began to make inroads into the office world, there were still no computers in most homes. The computers on the market were either not exactly cheap (Apple II), not well suited for games (Commodore PET) or neither (IBM PC). However, the first game consoles, such as those from Atari, had been on the market for a few years. They offered good graphics and sound capabilities and were simply connected to the TV at home. Games were delivered as plug-in modules, so-called “cartridges”, which made the consoles very easy to operate. To run a game, all you had to do was connect the console to the TV and the controllers to the console, plug in a module and switch everything on. Extensive knowledge of computer technology was not necessary. Game consoles have, of course, evolved over the decades. They still exist today and are still characterised by the fact that they are comparatively easy to use and do not require any special computer skills. The early game consoles were a cornerstone of the home computers that now emerged and then developed independently of them.

In 1979, Atari launched the Atari 400 and Atari 800 computers. The two computers were basically Atari game consoles with built-in keyboards, although the keyboard of the Atari 400 in particular was not suitable for effective typing, because it was a membrane keyboard like the ones we know today from older and cheap remote controls. Programmes and games for the Atari computers appeared in the form of plug-in modules. If you had such a plug-in module, you opened the flap above the keyboard, inserted the module and started the computer. As with the game consoles, the programme was started immediately and could be used straight away. Most of the modules were games, of course, but there was also a word processing module and a BASIC interpreter. If you plugged this module in and started the machine, Atari BASIC started and you could create your own programmes, just like on a Commodore PET or Apple II.

Atari 800 from 1979 - Image: Evan-Amos (CC BY-SA 3.0)
Atari 800 from 1979 - Image: Evan-Amos (CC BY-SA 3.0)

If one programmed the computer in BASIC, one naturally had to be able to save and load programmes. For this, one had to purchase a programme recorder: a cassette recorder for standard music cassettes, but with additional electronics, so that the encoding of data into sounds and the decoding of sounds back into data took place in the device itself. A music cassette with a length of thirty minutes (i.e. one side of a 60-minute cassette) could hold about 50 KB of data. Since the cassette was played at its normal speed, loading 50 KB of programme would have taken a full thirty minutes. But you never had to wait that long, because the computers only had 8 KB of RAM, which of course limited the possible programme size. A floppy disk drive was also available, which offered faster access and could store a total of 180 KB per floppy disk, i.e. 90 KB per side.

The Atari machines did not remain the only computers of this kind. Atari itself built the 600XL and 800XL from 1983, which were equipped with 16 and 64 KB of RAM respectively and already had Atari BASIC built in. The VIC 20 by Commodore (called VC 20 in Germany) had appeared as early as 1980. Compared to the Atari computers, it was quite limited: it could only put 23 lines of 22 characters each on the screen and had no real graphics mode. Due to its favourable price, however, it found widespread use. In 1982, the VIC 20 was followed by the home computer par excellence of the 1980s8, the Commodore 64, or C64 for short. It was built until 1994.

The Commodore 64 (C64)

Commodore had launched the Commodore PET in 1977. This bulky computer with integrated keyboard and screen was especially interesting for schools and companies. With the VIC 20, Commodore then had a computer in 1980 that was rather less suitable for work and programming, but was cheap and, thanks to simple graphics and sound options, suitable for gaming. The Commodore 64, or C64 for short, which appeared in 1982, was also very much in this vein. The computer was developed to be cheaper than an Apple II, to allow comfortable programming and, above all, to be well suited for games due to the support of graphics and sound, and thus to be able to outdo Atari.

The Commodore 64 (C64) - Image: Bill Bertram (CC BY-SA 2.5)

Where could Commodore save, compared to an expensive Apple II, in order to be cheaper? One could save on the monitor, for example, because every user already had one at home: the domestic television. So the C64, like the VIC 20 before it, was connected to the TV set at home. This also took care of sound reproduction, so there was no need for a built-in loudspeaker. The fact that the resolution was limited by the television output was less of a problem for the intended use in the home. The C64 was not the computer for writing form letters or for extensive spreadsheets; that area was gladly left to others, or Commodore served it itself with the successors of the PET.

|  | RAM | Characters per line | Text colours | Graphics (2 colours) | Graphics (4 colours) | Graphics (16 colours) | Sound |
| --- | --- | --- | --- | --- | --- | --- | --- |
| C64 | 64 KB | 40 | 16 | 320 x 200 | 320 x 200 | 320 x 200 | 3 voices (with modulation) |
| IBM PC | 16/64 KB | 80 | 16 | 640 x 200 | 320 x 200 | 160 x 120 | 1-voice beeper |

The C64 was a very attractive device. The table above compares some technical features of the Commodore 64 and the IBM PC 5150. As you can see, the PC only had an advantage in the characteristics that matter most for office use: eighty characters per line in text mode and a resolution of 640 x 200 pixels in graphics mode with two colours. The Commodore 64, on the other hand, allowed the display of sixteen colours in text and graphics mode at a resolution of 320 x 200 pixels, whereas the PC could only display four colours at this resolution. The sound generators of the two systems were from completely different worlds: the PC had only a single-voice beeper, whereas the C64 had a three-voice synthesiser that could deliver a sound experience well worth hearing.

A C64 starts directly with BASIC

If you switched on a Commodore 64, a BASIC interpreter started immediately. It was the same BASIC that Commodore had already used for the PET, i.e. a dialect from Microsoft. An owner of a C64 could enter BASIC commands immediately after switching on and, for example, perform the simple calculation 5+3*7, as in the picture above. Programming a real BASIC programme could also begin immediately. The C64, like the PET, had a comfortable full-screen editor for BASIC: the cursor could be moved freely on the screen, and since a line printed in a BASIC listing is at the same time the definition of that very line, a programme could be edited conveniently by navigating to the corresponding line in the listing, changing it there and making the changed line the new input by pressing “Return”. Other computers, such as the Apple II, also supported this in principle, but moving the cursor was only possible in a very awkward way via key combinations.
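
A minimal sketch, not taken from the screenshots in this book, shows what such a first programming session might have looked like in Commodore BASIC:

10 PRINT "WHAT IS YOUR NAME?"
20 INPUT N$
30 PRINT "HELLO, "; N$
RUN

If line 30 was then to greet the user differently, one simply moved the cursor up into the listing, typed over line 30 and pressed “Return” - the changed line immediately replaced the old one in the programme.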

Programmes created in BASIC were also able to take advantage of the text screen: the screen could be cleared, colours could be used and the cursor could be positioned freely by the programme. Strangely, despite the C64's outstanding graphics and sound chips, there were no BASIC commands for graphics and sound output. Programmers had to make do by writing values directly into memory locations of the computer with the POKE command. This worked, but it brought the programmer unnecessarily close to the hardware level.
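
Two frequently quoted lines give an impression of this style; the addresses are the commonly documented colour registers of the C64's video chip:

POKE 53280,0 : REM BORDER COLOUR TO BLACK
POKE 53281,1 : REM BACKGROUND COLOUR TO WHITE

To a beginner, such magic numbers said nothing at all; one copied them from magazines and manuals, often without realising that 53280 and 53281 are simply memory addresses at which the video chip's registers are located.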

Like the Atari 800, the Apple II, the PET and the TRS-80, the C64 supported the two forms of mass storage that were common in the 1980s. The cheaper but slower option was standard music cassettes; the corresponding player was called a “Datassette” by Commodore. On a thirty-minute tape, 100 KB of data could be stored with on-board means. For many users, developers and hobbyists, this was too little and too slow. With the help of so-called “turbo loaders”, which existed in both hardware and software form, they managed to squeeze up to 1 MB onto one side of a cassette. Of course, there were also disk drives for the C64. Surprisingly, how widely they were used varied from country to country: in Great Britain, the cassette was the medium of choice, especially for games, in Germany both storage media were common, and in the USA a C64 was practically only used with a floppy disk drive. 163 KB of data fit on one side of a floppy disk. It was also possible to write on the other side, but to do so, you had to turn the disk over by hand. The diskette could be read fabulously slowly at around 400 bytes per second, so it took no less than seven minutes to read in an entire diskette. Here, too, there were fast loaders that could increase the data throughput to 10 KB per second.

Most computers of the time had to be equipped with their own disk operating system in order to work with floppy disks (Apple II, TRS-80, IBM PC) or had one built in (such as Atari 600XL/800XL). This was not the case with the Commodore computers. Their operating system, just like that of the PET and the VIC 20, did not actually support floppy drives at all. If you wanted to load data from the diskette, you had to enter the device number of the connected device. For floppy drives, this was usually the number 8. The fact that it was then a floppy drive was irrelevant for the computer. It was also possible to read a programme in exactly the same way - by specifying a different device number - from an interface to which, for example, another computer was connected by cable and sent the data. For the system, it was irrelevant what kind of device was hanging on the other side. So how the data was to be read from a diskette, how a diskette was structured or how it was formatted was not a matter for the C64’s operating system. Commodore placed all these functions directly in the floppy disk drive, which ultimately became a small computer in its own right. This architecture chosen by Commodore for disk access sounds very sensible, because the concrete implementation of disk access was thus decoupled from the computer and the operating system. This basically allowed a wide variety of floppy or even hard disk drives to be connected, all of which could have been accessed in the same way as long as they had the same interface. Unfortunately, this decoupling was not so great in practice, because the C64 did not have abstracted support for floppy drives as general data carriers, but simply no support for them at all.

Loading a programme on the Commodore 64

In the picture above you can see how a programme, here the programme “RATEN”, was loaded from a diskette on the Commodore 64. This was achieved with the command LOAD "RATEN",8. The important part is the addition ,8, which specifies device number 8. If one forgot these two characters, the C64 tried to load the programme from cassette. Saving worked in the same way as loading, i.e. with SAVE "NEW",8. These two operations were still quite simple and plausible. It became complicated, however, if one wanted, for example, to delete a programme on the diskette. The C64 had no command for this. Instead, one had to open a connection to the floppy drive, send a specially formatted message to it and close the connection again. To delete the programme “UNSINN”, for example, one could enter OPEN 1,8,15,"S:UNSINN.PRG":CLOSE 1. From a user interface point of view, that was of course pretty terrible. Instead of creating, deleting or otherwise manipulating virtual objects, users opened channels to connected devices and sent them cryptic messages. For such everyday operations as deleting, copying or renaming files, one thus descended deep into the technology of the computer or the floppy disk drive. Many owners of the computer probably never even noticed this shortcoming, though: they simply inserted a diskette, typed LOAD "*",8,1 to load the first programme on it, and could then start their favourite game.
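
Even listing the contents of a diskette followed this pattern. The directory was loaded like a programme, overwriting whatever programme was currently in memory, and then displayed with LIST:

LOAD "$",8
LIST

The directory was thus not a view onto a file system managed by the computer's operating system, but data that the drive itself prepared and sent to the computer on request.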

The role of 8-bit home computers

With the home computers of the 8-bit era, computers made their way into living rooms and children’s rooms, which was of course due in no small part to the fact that their graphics and sound capabilities made them well suited for games and that these games, when distributed on cassette or diskette, were easy to copy. Because the devices started with a BASIC interpreter when they were switched on and thus offered quick access to programming, many a person who had acquired the computer for playing also learned programming and built small programmes for the problems of everyday life or hobbies. If you look at the computers from the user interface perspective, they didn’t really bring anything new. On the contrary, they even lagged behind operating systems such as CP/M or even the DOS of the Apple II.

Home computers, of course, continued to evolve. The generation that succeeded the 8-bit machines from Commodore and Atari was not just a souped-up version of these devices, but was also strongly influenced by the Apple Macintosh. The Commodore Amiga and the Atari ST were thus both successors to the game-focused 8-bit computers and competitors for the Macintosh and the IBM PC office computers. At the same time, IBM PCs with better graphics and sound became steadily more suitable for gaming and, with falling prices, also found their place on many a desk at home. The era of computers connected to the TV was coming to an end, and the distinction between typical home and office computers was visibly fading.

Desks with Windows

Let us take stock of where we have arrived in the history of the personal computer. From humble beginnings in the mid-1970s, an impressive industry had developed. Take 1983, for example: home programmers and game lovers got a Commodore 64 or an Atari 800XL for a manageable budget. Those who had more money to spend got an Apple II, and some schools and smaller companies also equipped themselves with Apple computers. The IBM PC, now in its second version, the PC XT, was of course also attractive for companies, and many other manufacturers such as Compaq offered replicas of the IBM PC for little money. Almost all of the computers mentioned were graphics-capable. The graphics capabilities were used, for example, to output diagrams, display images and, of course, to make games possible. Graphics, however, had not yet become part of the user interface of the operating systems and the software. The operating systems remained text-terminal-based. Popular software did use the possibility of structuring the screen spatially with text characters, but how this was done was not standardised. In principle, each software manufacturer cooked its own soup, so each application was operated differently. In 1984, a computer was launched that changed this. Its user interface relied on graphics, and the company that made the computer provided resources and specifications that ensured all software programmes had a common user interface. This computer was the Apple Macintosh. Its descendants, today simply called “Mac”, are still familiar to us as the biggest competition to computers running Microsoft Windows. The Macintosh was the first economically (more or less) successful computer whose user interface used the WIMP paradigm (Windows, Icons, Menus and Pointer) and whose central concept was the so-called desktop. The history of this type of user interface is what this chapter is about, because the roots of the Macintosh and its user interface go far back into computer history. They lie in experimental computers of the 1960s and 1970s, and the computer mouse, too, had its beginnings in those decades.

The pointing device - The history of the mouse

A user interface with movable windows, selectable icons and pop-up menus cannot be operated effectively with the keyboard alone. It calls for an input device that enables spatial selection. Now, the advent of computers with user interfaces of this kind is usually associated, and rightly so, with the mouse as an input device. But you know from the chapter on experimental graphic systems that spatial input methods have been around for a very long time and that they did not start with the mouse. With SAGE and Sketchpad you became acquainted with light pens as input devices. Joysticks were also considered early on as a means of selecting an object on the screen, though they are not covered here. The computer mouse brought advantages over the light pen and the joystick in that it allowed very precise and accurate selections and manipulations while not being very tiring.

When trying to figure out the beginnings of the computer mouse, we again get into a situation where one can argue about who thought up, invented and launched the first mouse. And again, as with the question of the first computer, the answer usually depends on the national glasses the questioner is wearing.

We know computer mice as small, box-like devices that can be slid over a table surface and whose movements trigger a corresponding movement of a pointer on the screen. Today’s mice mostly work purely optically. In the 1980s and 1990s, on the other hand, mice were common in which a ball rotated mechanically. This ball was mounted in the mouse in such a way that its rotation correlated with the movement of the device over the table surface. The idea of mechanically sensing the rotation of a ball and using it as input, however, is older than interactive computers and thus older than the mouse. So-called “trackballs” were already being used in radar technology in the 1950s. A trackball was, as the name suggests, a ball whose movement was tracked. Trackballs were embedded in the surface of the radar console. Radar operators could then move the ball by hand, for example to control the movement of a selection cross on the radar screen.

Among the companies that produced such control consoles for airspace surveillance with trackballs was the German company Telefunken. Telefunken Germanised the term “trackball” and used the beautiful term “Rollkugelsteuerung” (rolling ball control). The company’s range of products in the late 1960s and early 1970s included a mainframe series called TR-440, which was used at many universities, partly because its purchase was supported by public funds. In the vast majority of institutions, the computer was probably used in typical batch mode at the time to enable mass use. However, interactive and even graphic use of the computer was also possible. Telefunken offered a so-called “satellite computer” called TR-86 for this purpose. The task of this computer was to handle input and output, which was thus separated from the actual mainframe, because after all, it could not be justified that an entire mainframe “wasted” its computing time on a user typing something in. Terminals could be connected to the satellite computer. In addition to text terminals, Telefunken’s product range also included a graphic terminal with the designation SIG 100 (SIG for Sichtgerät, “display unit”). The SIG 100 consisted of a black-and-white television and an attached keyboard. A separate version of the rolling ball control, called RKS 100-86, could in turn be connected to this display unit.

The RKS 100-86 rolling ball control - I will simply refer to it as the “rolling ball” in the following - worked in principle in exactly the same way as the rolling ball controls of the radar consoles, but with one important difference: on the radar consoles, the ball was permanently built into the control desk. This was not suitable for use with the SIG 100, which was a unit meant to be placed on existing desks. Building a trackball into those desks did not fit the concept. So without further ado, the rolling ball was turned upside down and built into a housing that could be slid over the table.

The German rolling ball control - picture courtesy of Jürgen Müller (http://www.e-basteln.de)
The inner workings of the German rolling ball control - picture courtesy of Jürgen Müller (http://www.e-basteln.de)

Above you can see an illustration of one of these rolling ball controls. The device is larger than today’s mice and larger than the photo would lead us to believe. The hemisphere that is pushed across the table has a diameter of twelve centimetres, so you operated it by placing your whole hand on it. One had to be careful not to accidentally press the button on the top. When the rolling ball control was pushed across the table, a ball moved inside the device. As you can see in the picture below, the rotation of the ball is transmitted to two components that are somewhat reminiscent of bicycle dynamos. These components are so-called “encoders” that translate the movement of the ball into an 8-bit code, which is then fed out via a cable.
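
How such relative movement values end up moving a pointer can be illustrated with a small sketch. The following Python snippet is purely illustrative and not Telefunken code; it assumes that the encoders report signed deltas for the two axes, and the screen size is arbitrary - both are assumptions made only for the sake of the example.

```python
# A minimal illustrative sketch (not Telefunken code) of the principle described
# above: relative movement values reported by the two encoders are accumulated
# into an on-screen pointer position. Signed deltas and the screen size are
# assumptions for this example.

SCREEN_W, SCREEN_H = 1024, 768

def update_pointer(x: int, y: int, dx: int, dy: int) -> tuple[int, int]:
    """Apply one pair of encoder deltas and keep the pointer on the screen."""
    x = min(max(x + dx, 0), SCREEN_W - 1)
    y = min(max(y + dy, 0), SCREEN_H - 1)
    return x, y

# A short, made-up stream of (dx, dy) readings as the device is pushed around.
pointer = (512, 384)
for delta in [(3, 0), (5, -2), (-1, 4)]:
    pointer = update_pointer(*pointer, *delta)
print(pointer)   # (519, 386)
```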

Telefunken engineer Rainer Mallebrein developed the rolling ball from 1965 onwards, and from 1968 it could be purchased as an accessory for the display unit. However, it did not sell well: fewer than fifty units were produced. The time for this kind of input device had not yet come, because only a few people used a twenty-million-mark mainframe interactively, and even fewer had any use for graphic input and output. The amount of software that could be operated with a rolling ball at all was correspondingly small. Later Telefunken computers therefore no longer supported the rolling ball, and the innovative input device fell into oblivion. This ended the German history of the computer mouse, which was not even called that.

At the Stanford Research Institute (SRI), starting around 1963, computer scientist Douglas Engelbart and his colleagues at the Augmentation Research Center, of which Engelbart was the director, developed an experimental computer system called the oN-Line System, or NLS for short. In a report[^english_etal], Engelbart and his colleagues describe the project as follows:

Viewed as a whole, the programme is an experiment in cooperation of man and machine. The comprehensible part of man’s intellectual work involves manipulation of concepts, often in a disorderly cut-and-try manner, to arrive at solutions to problems. Man has many intellectual aids (e.g., notes, files, volumes of reference material, etc.) in which concepts are represented by symbols that can be communicated and manipulated externally. We are seeking to assist man in the manipulation of concepts–i.e., in his thinking, by providing a computer to aid in manipulation of these symbols. A computer can store and display essentially any structure of symbols that a man can write on paper; further, it can manipulate these symbols in a variety of ways. We argue that this service can be made available to help the on-going intellectual process of a problem-solving man; the service can be instantly available to perform tasks ranging from the very smallest to the very largest.

English, William K., Douglas C. Engelbart, Bonnie Huddart: Computer-Aided Display Control. Final Report, 1965, Contract NASl-3988, SRI Project 5061.1.

This text was written in 1965. For its time, what was described here was absolutely extraordinary: a computer system that could be used interactively. In 1965, as you know by now, very few computers were operated in this way, and interactive use of powerful computers was practically out of the question; only military and some experimental systems worked like that. Here was talk of a system that would help people think by allowing dynamic handling of symbols on the screen. As the text continues, one learns that this should be possible not only for individual users, but also collaboratively. Engelbart and his colleagues described many fascinating techniques, such as linking texts together, dividing a screen into different areas and spatially selecting operations on the screen. For all this to be possible, of course, a powerful computer was needed, but such computers did not yet exist at a reasonable size and budget. Engelbart and his colleagues therefore ended up using a university mainframe simply as one would use a modern PC today.

The American Mouse (replica) - Image: izas/Shutterstock

One of the concepts of NLS was not to have users type in function names, but to offer the functions on the screen for selection. For this to be possible, something Engelbart and his colleagues called an “operand selecting device” was needed. The group tested a range of devices that could be used for this purpose, including joysticks and the light pens familiar from SAGE and Sketchpad. The most promising input device turned out to be a construct called the “mouse”[^english_etal2], the first prototypes of which they had built themselves in 1964. Compared to the light pen, the mouse had above all the advantage that you did not have to point directly at the vertical screen, which was very tiring and also meant that you kept covering the content with your own fingers. With the mouse, a “bug” - we would say “mouse pointer” today - is positioned on the screen instead.

English, William K., Douglas C. Engelbart, Bonnie Huddart: Computer-Aided Display Control. Final Report, 1965, Contract NASl-3988, SRI Project 5061.1.

The picture shows a replica of the early mouse prototype. As you can see, it was a considerably simpler device than the German rolling ball. There were two rollers on the underside of the mouse, mounted at right angles to each other. Each roller rotated when the mouse was moved in its rolling direction and simply slid over the surface when the movement ran across it. In NLS, the movement of the mouse was converted into a movement of the “bug” on the screen. The mouse was introduced to the public in 1968 in the “Mother of All Demos”, where the other innovative features of the system were also presented.

We now have a German and an American mouse, developed independently of each other. The German rolling ball was brought to market maturity and was considerably more solidly built - yet it was not a success and fell into oblivion. Among other things, this may have been because the rolling ball was developed for a system on which graphic-spatial operation was an exception at best. Engelbart’s mouse, by contrast, was more of a proof of concept at the time, but the system it was used with was strongly oriented towards graphic-spatial operation. Even though Engelbart’s mouse was not yet a finished product and even its direct successors saw no distribution outside SRI, the developments around NLS had more influence on further user interface development than Telefunken’s rolling ball, because many of those who helped develop the mouse at NLS later ended up at Xerox PARC, where the mouse became the central input device of innovative computers. This greater influence does not diminish the achievement of Mr Mallebrein and his colleagues in the slightest; rather, it shows how much it is sometimes a question of accompanying circumstances, or ultimately even a bit of chance, which technical developments prevail and which do not.

The Experiment - The Xerox Alto

Xerox Alto - image courtesy of the Computer History Museum

The first graphical user interfaces of the kind we still use today were developed at Xerox PARC. Unlike in Germany, Xerox (pronounced roughly “zee-rocks”) is a name that absolutely everyone knows in the USA. The company takes its name from dry copying, which is called “xerography” in English. Xerox held the patent on this copying technology, on which all modern photocopiers and laser printers are based. The company name “Xerox” is as firmly associated with photocopies in the USA as Google is with web searches or, here in Germany, Zewa with kitchen rolls. “A Xerox” is an expression there for a photocopy, and “to xerox” stands for photocopying. One does not necessarily expect innovations in computer and user interface technology from a company that manufactures copiers, but this very company founded one of the most important research institutes for innovative computers and usage concepts. Xerox wanted to be part of the future of office and information technology and therefore founded the Xerox Palo Alto Research Center (Xerox PARC) in 1970. What was remarkable about Xerox’s philosophy was that the researchers employed at PARC were, in principle, completely free in what they did and what they researched. The result was incredible creativity and innovation. If you look at the list of everything that was developed and conceived at PARC, you can sometimes get the impression that almost all modern computer technology that we still use today started there. Central concepts of object orientation, a programming technique, were developed at PARC; the Ethernet standard originated there, as did the laser printer and much, much more.

One of the early key projects at Xerox PARC was the development of the Alto. The first concepts for this computer were drawn up as early as 1972, and the first Alto was completed in March 1973. The Alto was probably the first computer, apart from NLS, that relied heavily on the mouse for its operation. It thus plays a very important role in the development of the graphical user interface, especially through the software developed for it. Surprisingly, in many computer histories it is either forgotten or lumped together with the Xerox Star of the early 1980s. The Alto was a cornerstone for the Star and for virtually all graphical user interfaces that followed in the 1980s, but it was still something of its own and worth looking at in its own right.

A 1974 PARC documentation describes the Xerox Alto as a “personal computer system”9. An interesting choice of words, since computer history usually has personal computers begin with the Altair in 1975. The developers at Xerox tell us in the documentation what they mean by a personal computer:

By ‘personal computer’ we mean a non-shared system containing sufficient power, storage and input-output capability to satisfy the computational need of a single user.

This definition would later have fitted IBM’s Personal Computer of 1981 quite well, because Xerox, too, aimed for a computer that was used by a single person and was available more or less exclusively to that person. That the computer should also belong to this person, in the sense of the personal computer idea associated with the Altair, was not mentioned here and was probably not what Xerox had in mind, because it was the office computers of the future that were to be developed at PARC. Cheap home computers were not in view10.

Technically, the Xerox Alto was a minicomputer. On the table there were usually a screen, a keyboard and two input devices: the mouse and the “chord keyset”, a five-finger keyboard adopted from the NLS project. This additional keyboard was very important for operating the Alto programmes, as important programme functions were assigned to it. Under the table usually stood the processor unit, the size of an average refrigerator. An Alto was equipped with a lot of memory for the time, with a minimum of 128 KB and a maximum of 512 KB. Two removable disk drives were installed in the computer, each removable disk holding 2.5 MB of data. The Alto’s screen was used in portrait orientation, which was due to its intended use as an office computer: a portrait screen can display the entire contents of a sheet of paper. The Alto displayed content on the screen in graphics mode. The display was pure black and white (without grey tones) with a resolution of 606 x 808 pixels, extremely high for the time.

Now that you have learned about the technical features of this machine, you probably expect it to have come with an innovative operating system and an exciting user interface. If you were to look at the Alto’s basic system now, you would probably be disappointed, because the computer had a command line with the typical characteristics of this form of interaction. What is interesting for us are not the features of this command line, but the application programmes developed at PARC for the Alto. Practically all of these programmes had a great influence on what was to come in the decades that followed. Here is a non-exhaustive list of interesting programmes:

Neptune: A file manager with a two-column display and file and operation selection by mouse operation. Neptune is a distant predecessor of today’s file managers such as Explorer or Finder.

Bravo (1974): A word processor that not only allowed the formatting of text in principle, but could also display it directly on the screen. Bravo allowed the use of different fonts in one document and even the inclusion of images that could be seen on screen at the same time as the text. Bravo is generally considered to be the first word processing system that worked according to the WYSIWYG principle (What you see is what you get).

The word processor Gypsy. At the bottom of the screen, the wastebasket with a selected text content, represented by underlining - screenshot from “Alto System Project: Larry Tesler demonstration of Gypsy”

Gypsy (1975): The successor to Bravo. The programme is characterised by a “mode-less” interface. What this means can be seen when moving a block of text. In a mode-based system, you move a text block by activating a move mode. The system then asks you first to select the block to be moved and then to move the pointer to the place where the block is to go. While the mode is active, the programme can do nothing but move text; all other functions are unavailable. Bravo still worked in this way. Gypsy solved it differently. An exciting invention in this context was something close to what we now call the “clipboard”. Gypsy used the term “wastebasket”, which points to a slightly different way of functioning. The wastebasket was permanently visible on the screen. The operating instructions11 explain:

The Wastebasket has two lines but can be expanded at the expense of the document window. Whenever material is cut (deleted) it is put in the Wastebasket so that the operator can see what has been cut and have access to it for reinsertion.

Only content that was deleted from the document ended up in the wastebasket. This intermediate step was not necessary for normal copying or moving within the text: the source text was marked with the right mouse button pressed and the target area was marked with the middle mouse button pressed. If you then pressed the “PASTE” key on the five-finger keyboard, the text was copied from the source to the target area - entirely without the wastebasket. The wastebasket could nevertheless come into play during insertion, because the text previously present at the destination was overwritten, ended up in the wastebasket and thus remained available to be used again elsewhere if necessary. (A small code sketch of this wastebasket behaviour follows the list of programmes below.)

Markup: A pixel-based graphics programme, very similar to the later MacPaint or Microsoft Paint.

Draw: A vector graphics programme that made it possible to draw texts, figures and lines. Everything drawn was available as an object and could therefore be processed and manipulated as such.

Laurel and Hardy: These two e-mail programmes allowed not only messages to be sent within the office’s internal network (the Alto made intensive use of Ethernet, which was also developed at PARC), but also the transmission of documents and drawings.
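
The wastebasket behaviour described above for Gypsy can be modelled in a few lines. The following Python sketch is a modern illustration, not original Gypsy code; the class and method names are invented for the example, and for brevity the wastebasket is assumed to hold only the most recently removed text.

```python
# A modern illustration (not original Gypsy code) of the wastebasket behaviour
# described above: cutting puts the removed text into the wastebasket, plain
# copying does not need it, and pasting over existing text moves the
# overwritten material into the wastebasket.

class GypsyLikeBuffer:
    def __init__(self, text: str):
        self.text = text
        self.wastebasket = ""            # permanently visible in the real system

    def cut(self, start: int, end: int) -> None:
        """Delete a span of text; the deleted material lands in the wastebasket."""
        self.wastebasket = self.text[start:end]
        self.text = self.text[:start] + self.text[end:]

    def copy_over(self, src: tuple[int, int], dst: tuple[int, int]) -> None:
        """Copy the source span over the destination span; the overwritten
        destination text is kept in the wastebasket for possible reinsertion."""
        source = self.text[src[0]:src[1]]
        self.wastebasket = self.text[dst[0]:dst[1]]
        self.text = self.text[:dst[0]] + source + self.text[dst[1]:]


buf = GypsyLikeBuffer("The quick brown fox")
buf.cut(4, 10)                           # cut "quick " out of the text
print(buf.text)                          # The brown fox
print(repr(buf.wastebasket))             # 'quick ' remains available for reuse
```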

The Alto’s software was extremely innovative. With the computer, early experience was gained in exploiting the graphics capability of the system in the user interface by representing elements of both the user interface and the content to be edited on the screen as spatial objects and also making them operable in situ. One can only properly appreciate the achievement of the Alto’s developers if one considers at what time the devices were manufactured and the software programmed - namely in the early 1970s. At that time, computer use generally meant programming with punched cards in batch mode or terminal mode using teleprinters and command lines. The Alto’s spatial-graphic software was the entry into a completely different world here.

The World in Windows - The Smalltalk Environment

It is not really possible to speak of the user interface of the Xerox Alto. Each of the application programmes mentioned above had its own modes of control and object interaction. If you use the programmes today, their operation seems quite complicated and unfamiliar. The quasi-universal repertoire of buttons, menus, icons, windows and scrollbars that exists today did not yet exist - but this kind of user interface, too, has its beginnings at Xerox PARC and on the Alto, namely in the Smalltalk programming environment.

Smalltalk Environment - Image: SUMIM.ST (CC BY-SA 4.0)

Smalltalk is first of all a programming language developed at Xerox PARC that has played an important role in the history of object-oriented programming. The Model-View-Controller concept12, much used in programming today, also has its origins in Smalltalk. But Smalltalk was much more than just a programming language. From Smalltalk-76 onwards, the language was closely linked to the environment in which it was developed. As you can see in the illustration, the user interface of the Smalltalk environment consisted of a grey background on which windows could be arranged. These windows were views of Smalltalk objects or running Smalltalk programmes. They could be moved on the background, resized and folded up so that only their label remained. If needed, a window had a scroll bar on its left side with which a section of the content could be selected. A click with the middle mouse button on a Smalltalk object - and in Smalltalk basically everything was a selectable object - opened a menu. Among other things, this menu always contained the entry “examine”. When you called this function, you got access to the definitions of the object. Put simply, you had access to the programming behind the object and could also change it - and do so in the running system.

Programming in Smalltalk was quite different from dealing with programming environments as you may know them today, for example XCode or Microsoft Visual Studio. Let me illustrate this with a small example that explains how you could use your computer if the user interface of your Windows or Macintosh computer worked like the Smalltalk environment. Programmers may forgive my choice of language; I am trying to make the features of Smalltalk clear without using too much programming vocabulary. If your Windows or your macOS worked like the Smalltalk environment, then for everything you see on the screen as an object, the programming behind it would also be available to you, and you could change this programming on the fly. Let’s say you keep having problems with accidentally triggering buttons. You could remedy this by using the programming environment to work your way from a programme that has a button down to the basic specification of a button. You could then change this specification so that a button, for example, only triggers when you also hold down the SHIFT key on the keyboard. From the moment you carried out this redefinition, it would apply to all buttons in the entire system. The special thing here would not be that you can change the behaviour of a standard component - you can do that in principle with any programming, although admittedly it is not always easy - but that you could do it at runtime and for the whole system, without having to recompile programmes into machine code and restart them.
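
To give a rough idea of what such a live redefinition feels like, here is a minimal sketch in Python - not in Smalltalk and not using any real GUI toolkit. The Button class and the shift_pressed parameter are invented for the example; the point is only that redefining the method once immediately changes the behaviour of every button in the running program.

```python
# A minimal Python sketch (not Smalltalk, not a real GUI toolkit) of the idea
# described above: redefining a basic component at runtime so that the change
# immediately applies to every instance in the running system.

class Button:
    def __init__(self, label: str):
        self.label = label

    def click(self, shift_pressed: bool) -> None:
        # Original behaviour: the button triggers regardless of SHIFT.
        print(f"{self.label} triggered")


ok = Button("OK")
cancel = Button("Cancel")
ok.click(shift_pressed=False)            # triggers as usual

# Live redefinition: from now on a button only triggers if SHIFT is held down.
def guarded_click(self, shift_pressed: bool) -> None:
    if shift_pressed:
        print(f"{self.label} triggered")
    else:
        print(f"{self.label} ignored (SHIFT not pressed)")

Button.click = guarded_click             # applies to ALL buttons, old and new
ok.click(shift_pressed=False)            # now ignored
cancel.click(shift_pressed=True)         # triggers
```

In the Smalltalk environment the same kind of change was made from within the running system itself, via the menu of the object on the screen, without restarting or recompiling anything.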

The combination of programme code and formattable text in the same document was also very remarkable. Let’s take a window that displays the contents of a document on the screen. The text of this document could be formatted directly in the window, i.e. set in different fonts and font sizes, bold, italic or underlined. That in itself is not trivial, but not overly interesting either. What is exciting, however, is that the documents could also contain Smalltalk code, and this code could be executed on the spot. To do this, the text only had to be marked, the menu called up and the code run directly with “Do it”.

The Desktop - Xerox Star and Apple Lisa

The Alto is a milestone in computer history that is often overlooked, and of course I have only been able to do it justice here in glimpses. Central concepts of current graphical user interfaces, including the predecessors of today’s standard software, were developed on and with this computer. Much of it was ahead of its time. The Bravo word processor (1974) and the Smalltalk environment (1976) deserve special mention here. But the computer was designed as an experimental machine. About 2,000 units were built during the 1970s, and most of them were used within Xerox and PARC. It was not until the late 1970s that the computer was offered for sale at all - for 32,000 dollars (equivalent to about 118,000 dollars in 2021).

It is part of popular computer history that Apple co-founder Steve Jobs was shown Smalltalk during a visit to Xerox PARC in 1979 and was so excited that he had the design of the Apple Lisa, then in development and under his responsibility, completely revamped. The “Lisa” thus became one of the first two commercially produced computer systems with the desktop metaphor. However, two years before Apple introduced its first computer with the desktop metaphor, Xerox itself launched the Xerox 8010 Information System. The software running on this computer was called “Star”, but it has become common practice to refer to the computer itself as the “Xerox Star” as well.

Input and output devices of the Xerox 8010 Star Information System (left) and Apple Lisa with additional external hard disk (right) - pictures: Xerox Corporation via digibarn.com, alker33 on Youtube (CC-BY-3.0)

The target group of the Xerox Star and the Apple Lisa were not technicians and certainly not computer hobbyists - both computers were far too expensive for them - but people who had to deal with documents in their work, who wrote texts, made illustrations or managed lists. These users were well versed in those areas; computer technology, in most cases, was not their interest. Star and Lisa were therefore the first two computer systems worth mentioning that were explicitly designed to be operated by users who understood nothing about computers. It took quite a long time before there were again systems that demanded as little technical knowledge from their users as these two computers. Only the PDAs of the 1990s could claim that again.

           | Price                     | RAM          | Hard disk  | Floppy disk | Screen                   | Mouse     | Network
Xerox Star | 16,500 $ (2017: 44,415 $) | 0.5 - 1.5 MB | 10 - 40 MB | -           | 1 bit13, 17”, 1024 x 800 | 2 buttons | Ethernet
Apple Lisa | 13,500 $ (2017: 37,515 $) | 1 MB         | 5 - 10 MB  | yes         | 1 bit, 12”, 720 x 360    | 1 button  | AppleNet

Apart from the fact that the Star and the Lisa both had a desktop concept and were largely operated by mouse, the two computers looked quite different. The Xerox Star was a very bulky device that resembled an overly wide PC tower or a narrow refrigerator. It found its place under the table; on the table were only the screen, keyboard and mouse pictured above. A Xerox Star had up to 1.5 MB of RAM, a local hard disk of 10 to 40 MB and a huge, for the time, high-resolution 17-inch monitor with 1024 x 800 monochrome pixels. Although the system could be used in isolation, its concept was geared towards use in a company network. A typical Xerox Star was connected to file servers and other computers via Ethernet, which, among other things, allowed e-mail to be sent within the company. Although using the Star meant intensive use of its two-button mouse, the keyboard also played a major role. In addition to the keys for entering numbers and letters, the keyboard had a number of keys for general functions such as deleting, copying, moving or displaying properties.

The Apple Lisa had a very different appearance: it was a compact all-in-one device that sat completely on the desk, with screen and central unit integrated. The first version of the Lisa had two 5.25-inch drives for 871 KB floppy disks. A later revision, also pictured above, had a single 3.5-inch drive for 400 KB floppy disks. The Apple Lisa was equipped with a generous 1 MB of RAM. It was usually used with a 5 MB hard disk that had to be connected externally (placed on top of the unit in the picture). At 12 inches and 720 x 360 pixels, the screen was quite small compared to the Star’s. The Lisa’s mouse had only one button, and the keyboard was also simpler than the Star’s: there were no special function keys. The keyboard therefore played a considerably lesser role in operating the Lisa. All operations and function calls were performed with the mouse; the keyboard was needed, and could be used, only for typing text.

The desktop of the Xerox Star

Both the Apple Lisa and the Xerox Star had a complex desktop concept that certainly differed in important ways from current user interfaces that have a concept of the same name, i.e. Windows, macOS and the usual Linux interfaces.

The Star Desktop - Source: Designing the Star User Interface, BYTE Magazine, April 1982

The basic concept of the Star’s user interface was a background surface called “desktop” on which the objects of the current work, i.e. text documents, graphics, database objects and tables were stored. In the following, I will refer to these types of objects collectively as “documents” for the sake of simplicity. The documents were usually displayed on the desktop in a reduced form, as a so-called “icon”. A document could be viewed more closely by opening it. To do this, the object was first selected with the mouse and then the Open key was pressed on the keyboard. The document was then enlarged into a window. The icon itself remained as a white shadow on the desktop. In the picture above you see the document “Star picture” opened as a window and next to it as a completely white icon on the desktop.

Do you find the phrase “The document was then enlarged into a window” strange? Today, you probably wouldn’t put it that way. One would rather say: “A double click opens the file in the corresponding application programme”. But you couldn’t say that about the Star, because in the Star’s user interface world there were no application programmes at all14. They did not appear in the world of use. It was not possible to address and start a word processor or a graphics programme by icon or by name. The world of interaction functioned differently. There were only documents and folders visible as icons or in window form. New documents were created by copying documents lying on the desktop. To do this, an icon was selected, “Copy” was pressed on the keyboard and a target position was selected with the mouse. Consequently, the basic operation of working with the Star was - very appropriately for a copier manufacturer - duplicating originals. An initial set of empty documents of all kinds (including folders) was ready for this purpose in a “directory” on the desk. However, a template did not have to be an empty document from the directory, but could very well be a previously written document. The Star operating philosophy was, for example, that a user created a letter template with a dummy text, stored it on the desktop and later duplicated it as needed when a letter was to be written.

One could work with several documents at the same time on the Star by opening several icons, each into a window of its own. Several programmes were thus running at the same time. You have probably heard the technical term for this mode of operation: multitasking. The original operating system version of the Star arranged the windows automatically; later versions allowed overlapping windows. Few computers offered multitasking in 1981, and none of them, apart from purely experimental systems, made the various programmes or documents visible simultaneously in a graphic-spatial display.

The Star’s desktop concept was more consistent than that of today’s computer systems. The desktop was the central place for the objects of the current work, and a clear distinction was made between this work area and the storage of documents. The filing system consisted of icons within the interface that looked like filing cabinets; behind them were file servers connected to the Star via the network. Documents could be moved or copied into these filing cabinets, called “drawers”. If a document from a drawer was to be used again, it first had to be moved or copied to the desktop, because documents could only ever be edited from the desktop. This may seem complicated, but it is ultimately logical within this user interface world if one takes the concept of the desktop as the place for current work objects seriously. The documents of the current work were placed on the desk; if you wanted to work with something, you had to put it there first. One did not start tinkering with documents directly in the filing cabinet. Incidentally, whether this need to put documents on the desk first really stems from Xerox’s wish to mimic the office way of working so closely can certainly be doubted, because there is a very plausible technical reason why documents could not be opened from the file repository: the file repositories were file servers on the network, whereas the desktop was stored on the local hard disk. Editing files directly in the file storage would have caused a high network load and would also have provoked problems if different users had accessed the same document from different devices. The restriction that only documents on the desktop could be edited avoided such problems.

The Apple Lisa desktop

At first glance, the user interface of the Apple Lisa resembled today’s user interfaces much more than that of the Xerox Star. It was a typical Apple interface with a menu bar at the top of the screen, as is still common today. On the desktop there was an icon for the hard disk and one for the recycle bin. This visual similarity to today’s systems can easily mislead you. If you take a closer look at how the Lisa desktop worked, there were nevertheless major differences, because with the Lisa the desktop played a much greater role as the place where the documents of the current work are located than it does today.

In contrast to the Xerox Star, the Lisa had application programmes available as objects. Programmes were delivered on diskette. You could either use them directly from there or, which of course made more sense, copy them to your own hard disk by dragging their icons from the diskette to the hard disk. Apple implemented a pretty hefty copy protection. Each Lisa had its own serial number, which was not just printed on the case but stored in a chip inside the computer. When the programmes were first accessed, i.e. when they were first started or copied, the programme disks were “serialised”. From then on, the programme only ran on exactly this one Lisa.
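
The principle behind this serialisation can be sketched in a few lines. The following Python snippet is a toy illustration and not Apple’s actual copy protection; the class, the programme name and the serial numbers are all invented for the example.

```python
# A toy illustration (not Apple's actual copy-protection code) of the
# "serialisation" described above: on first access, the programme disk records
# the serial number of the Lisa it is used on; from then on it refuses to run
# on any machine with a different serial number. All names are invented.

class ProgramDisk:
    def __init__(self, name: str):
        self.name = name
        self.bound_serial = None             # blank until first accessed

    def launch(self, machine_serial: str) -> None:
        if self.bound_serial is None:        # first access: serialise the disk
            self.bound_serial = machine_serial
        if self.bound_serial != machine_serial:
            raise RuntimeError(f"{self.name} has been serialised to another Lisa")
        print(f"{self.name} is running on Lisa {machine_serial}")


disk = ProgramDisk("WordTool")
disk.launch("SER-0001")                      # first use binds the disk to this Lisa
try:
    disk.launch("SER-0002")                  # a different machine is refused
except RuntimeError as error:
    print(error)
```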

The desktop of an Apple Lisa

Only some tool-like programmes, such as a calculator or the clock, were started directly by double-clicking on the Lisa. The other programmes were on the hard disk as objects, but if you started them directly, you only got a message informing you that you had done it wrong. The actual use of the programmes was similar to the Xerox Star in that documents were accessed and, using the same logic as the Star, enlarged into a window. New documents were created, similar to the Star, by duplicating existing documents. In the Star’s way of working, it was already obvious to declare some documents as templates, i.e. to create a letter template, for example. What was a template, however, was not made technically explicit in the Star. The relevant documents were documents like any other. Apple came up with the much more sophisticated stationery pad interface concept for the Lisa, which is described in the system’s user manual as follows:

Each stationery pad represents an infinite supply of either blank or customized paper, which you use for creating new documents. The design on the pad matches the design on the tool and documents associated with the pad. […] Each tool comes with a pad of blank stationery, and you can make your own custom stationery pads.

So every software application that worked with documents came with a corresponding stationery pad. If you tried to open such a pad or double-clicked it, a sheet was “torn off”: technically, the blank document was copied, the copy was placed next to the pad and given an automatically generated title. On the Lisa, even your own documents with content, or entire folders including their contents, could be turned into stationery pads. The mode of operation was always the same: double-clicking such a self-made pad created a copy in the same place, which could then be edited.
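
What “tearing off a sheet” means technically can be illustrated with a small sketch. The following Python snippet is purely illustrative, not Lisa system code; the file paths and the scheme for the automatic title are assumptions made for the example.

```python
# A minimal sketch (hypothetical, not Lisa system code) of the stationery pad
# behaviour described above: "tearing off a sheet" simply means copying the
# template file and giving the copy an automatically generated title.
# The title scheme and the example path are assumptions for illustration.

from pathlib import Path
from datetime import date
import shutil

def tear_off_sheet(pad: Path) -> Path:
    """Copy the stationery pad and place the copy next to it."""
    title = f"{pad.stem} {date.today().isoformat()}"
    copy_path = pad.with_name(f"{title}{pad.suffix}")
    # Avoid overwriting an earlier sheet that got the same automatic title.
    counter = 2
    while copy_path.exists():
        copy_path = pad.with_name(f"{title} ({counter}){pad.suffix}")
        counter += 1
    shutil.copy(pad, copy_path)
    return copy_path

# Usage: any document can serve as a pad, e.g. a pre-written letter template.
# new_doc = tear_off_sheet(Path("Letter Template.text"))
```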

With the Lisa, the desktop played a big role as a place for the current work documents. Apple came up with a more sophisticated system here than Xerox did with its Star, which may be partly because the Lisa was a system geared towards a local hard drive rather than file servers. In contrast to the Star, with the Lisa you didn’t have to bring the documents you wanted to work on to the desktop first - rather, the system did this step for the user virtually on the side. Basically, with the Apple Lisa, every document and every folder had a place in the file storage of the hard disk[^lisa_diskettes]. These documents and folders were on the desktop when they were being worked with, but their location within the file storage remained. You can see this clearly in the above illustration in the document “Important Information”, which is located on the desktop, but can also be seen in its actual storage location on the hard disk as a greyed-out icon. This second position in the file repository played an important role in the management of the work objects on the Lisa.

There were a number of ways a document could end up on the desktop. Perhaps the most obvious was to drag and drop a document onto it. If you did this, the document was in fact neither moved nor copied, but only made accessible on the desktop and “greyed out” at its original storage location. Another way of placing a document on the desktop led through an application window. With the Lisa, unlike the Star, you could open and edit a document directly at its location on the hard disk. When you had finished editing, however, you had two options that other systems did not offer: you could choose between “Save & Put Away” and “Set Aside”. You chose the former if you no longer needed the document. The latter, “Set Aside”, closed the application window and made the document available on the desktop for later use - quite independently of whether it had previously been opened from the desktop or not[^lisa_further]. This mechanism ensured that everything you wanted to continue using was automatically moved to the desktop. If you no longer needed documents or folders that were lying on the desktop, you could put them in the recycle bin - then they were permanently deleted. If you wanted to keep them for later editing, you put them away again: it was sufficient to simply mark them and select “Put Away” in the menu. The documents were then saved and returned to their proper location on the hard disk.

Just like the Star, the Lisa’s desktop was the place for the current work materials. Everything that was being edited was available on the desktop as an icon or was open as a window. The fact that such a work process could of course be longer than a working day is something that Apple certainly foresaw when designing the user interface. It was not necessary to save everything at the end of work, then sort it into the file repository and have to go back to it the next time. The system relieved the user of this task. If you pressed the off button on the Lisa, all changes were automatically saved. The next time the system was started, the documents that were previously on the desktop were automatically put back in place and, if they were previously open, they were also automatically reopened. The work processes thus became independent of the technical-organisational computer sessions.

Conclusion

The Apple Lisa and the Xerox Star created a user interface world that was largely independent of the technical aspects of the machine. All interactive computers need virtual objects, and most user interfaces of the time offered not only application objects but also required knowledge of the objects of the technical computer world and their operations - first and foremost programmes as objects, and operations such as loading and saving files. Lisa and Star worked differently: there was no concept of a programme that one used to read a file into the working memory. What was offered instead was the representation of a document in a window, i.e. in its detailed and editable form. The document became the central object of interaction in both systems, and the desktop the central location of the current working materials. Incidentally, there was no “Save as” function in either system. It would not have fitted into the object world on offer, because with this function an application performs a task that creates a new document - and such a function makes no sense if the window represents not a programme but a document. The load and save functions themselves, in the sense of an application that reads a file into the working memory or writes the contents of the working memory to a file, simply made no sense in the document worlds of Star and Lisa. The Lisa did have a “Save & Continue”, and “Put Away” was always accompanied by saving - but this was more for safety reasons, for example to avoid losing work if the power failed or the computer malfunctioned. If everything worked properly, you never actually had to save explicitly, because that happened automatically when document windows were closed or, in the case of the Lisa, when documents were put away.

Later user interfaces with a desktop concept, which we still largely use today, are no longer as sophisticated as those of Star and Lisa. One can regret this - but there are reasons why it came to this, which I will go into in more detail in the following chapter. To put it briefly: the way Star and Lisa worked required considerable hardware resources, which made the computers so expensive that both must be considered economic failures. Inexpensive computers such as the Apple Macintosh followed, but their considerably more modest hardware made usage concepts as sophisticated as the Lisa’s impossible.

Windows for the Home

The Xerox Alto, Xerox Star and Apple Lisa systems presented in the chapter “Desks with Windows” redefined the user interfaces of computers with concepts such as the desktop, windows and menus, and with their optimisation for mouse operation and thus for graphic-spatial input. However, these computers never became really widespread, not least because their high technical requirements made them extremely expensive. Although not successes in their own right, Alto, Star and Lisa became the basis for the 16-bit personal computers of the mid-1980s. Until the introduction of the Apple Macintosh in 1984, all popular home and office computers (e.g. Apple II, Commodore PET, C64, IBM PC) used the command line paradigm. Individual applications such as WordStar, VisiCalc and later Lotus 1-2-3 and Word made use of the spatiality of the screen, but were basically based on text representations and did not have a uniform user interface. The graphics capabilities of the computers, where available, were used for games and to render content such as business graphics, but not for the user interface.

Apple Macintosh - The Little One

Apple Macintosh - Image: iStock.com/audioundwerbung

One year after the Apple Lisa, in 1984, Apple presented a small computer called “Macintosh” with great media fanfare. It is sometimes claimed that this computer was developed because the Lisa was not successful. It cannot have been like that, because it was simply not possible to develop a completely new computer, including such a complex user interface, in just one year. The development of the two computers ran largely in parallel. The Macintosh was initially planned as an inexpensive, text-based computer, but from about 1981 its development was redirected towards a computer with a graphical user interface. At first glance, this interface was very similar to the Lisa’s. The computer was mainly operated by mouse, there were windows that could be moved around the screen, there was a constantly visible menu bar at the top of the screen, and the operating system presented itself to the user as a grey area on which documents could be stored and on which windows could be moved that displayed the contents of data carriers as file icons. This surface was called the “desktop”, just as on the Lisa.

          | Processor | RAM    | Diskette  | Hard disk | Screen        | Sound
Lisa      | 5 MHz     | 1 MB   | 2x 871 KB | 5 - 10 MB | 12” 720 x 360 | Beep
Macintosh | 7.8 MHz   | 128 KB | 400 KB    | -         | 9” 512 x 342  | 8-bit samples

In its early years the Macintosh was advertised with the slogan “Lisa Technology”. A typical Lisa cost 10,000 dollars (equivalent to 26,950 dollars in 2021). At 2,500 dollars, the Macintosh was much cheaper, but it was technically very limited in comparison. Two of these limitations meant that the Macintosh’s user interface world, on closer inspection, functioned significantly differently from that of the Lisa. The first limitation concerned the working memory. The Lisa had 1 MB of memory, which was very large for the time. The first Macintosh made extreme savings here: it had only 128 KB of working memory. This was twice as much as the usual 64 KB of the home computers of the time, but for a graphical system that was also meant to be used for more complex tasks such as word processing with a graphic display of the text, it was very tight. The second major limitation was the mass storage. A Lisa was usually operated with a hard disk. Although this held only 5 or 10 MB, tiny by today’s standards, it allowed all application programmes and many documents to be accessed conveniently and quickly at the same time. The Macintosh could later also address hard disks, but its operating system was designed for pure diskette operation. The floppy drive was not only much slower than a hard disk; with only 400 KB per diskette, the amount of data and programmes that could be stored was equally modest.

The limitations of the hardware had an impact on how the user interface worked. Unlike the Lisa, the Macintosh was a single-tasking machine, meaning it could only run one programme at a time. Several programmes simply did not fit into the limited memory. Swapping memory contents out to mass storage - the typical technique today when the working memory is not sufficient - was not possible without a fast hard disk, and the Macintosh did not have one at first. Single-tasking naturally changed the way one could work with the system. If you had opened a document in an application, you no longer had access to the file management and the desktop. If you wanted to carry out file management operations, you had to leave the application you had just opened. A desktop as a central place that was always available, virtually behind everything, was simply not feasible on the Macintosh.

Since the first Macintosh had no hard disk, the programmes were logically not permanently available but had to be loaded from their respective diskettes. The system did allow a document to be placed on the desktop or opened at its storage location in the file repository, but then there was usually an interruption: you had to take out the data diskette, insert the diskette with the application programme, start the programme, then insert the diskette with the document again, and only then could you begin to edit the document. Especially since the standard configuration included only a single floppy drive, one became a real disc jockey. Here is an example of the steps necessary to insert an image stored on one diskette into a text stored on another diskette:

  • Insert word processing programme diskette
  • Search for programme icon and start programme
  • Eject word processing programme diskette
  • Insert document diskette
  • Load document
  • Eject document diskette
  • Insert image diskette
  • Load image and insert into text
  • Eject image diskette
  • Insert the document diskette again
  • Save file and wait until the programme reacts again

Working with the floppy disks and constantly ejecting and inserting them was also reflected in the functions of the user interfaces of the application programmes themselves. Unlike in the Lisa and Star applications, for example, saving could not be done implicitly on the Macintosh. What does that mean? As a user of a Lisa, you did not really have to think about loading and saving files. Loading was done by opening the file from the file repository, and saving happened automatically when the opportunity arose, at the latest when the computer was switched off. The limitations of the Macintosh made this way of working impossible, because the small working memory did not allow the contents of several documents to be held at the same time. So one could not avoid saving and closing documents in between to free up the working memory again. Unfortunately, unlike on the Lisa, saving could not be done on the fly, on demand or automatically at the end of the programme without explicit user interaction, because writing to diskette was noisy and slow, and above all because the computer was blocked during the writing process and further work was not possible. Implicit saving, i.e. saving without user interaction, was not feasible even when quitting an application or closing a document window, because it was more than uncertain whether the diskette on which a document was to be saved was even in the drive at the time.

As you can see, the technical limitations of RAM and mass storage meant that the implicit saving of the earlier, very expensive systems was not feasible on the Macintosh. The application programmes therefore inevitably had to be given their own mechanisms for loading files, saving them, and storing them under a different name or on a different diskette.

Atari ST - The Versatile

Atari 520ST+ with monochrome screen

This is a good opportunity to introduce you to a rival of the Apple Macintosh whose user interface was in many places even more closely tailored to the limitations of a single-tasking operating system with diskette operation than that of the Macintosh. The picture above shows an Atari 520ST+, released in 1986. The only difference from the original ST was that the RAM was doubled to 1 MB; apart from that, the two computers were identical. The ST appeared in 1985 and, in the version with a monochrome monitor, cost just under 800 dollars (corresponding to 2,000 dollars in 2021) - only a third of the price of the Macintosh - yet it was technically much better equipped.

                | Price approx. | Processor15             | RAM 1985 | RAM 1986 | Diskette | Screen        | Sound
Apple Macintosh | 2,500 $       | Motorola 68000, 7.8 MHz | 512 KB   | 1 MB     | 400 KB   | 9” 512 x 342  | 8-bit samples
Atari ST        | 800 $         | Motorola 68000, 8 MHz   | 512 KB   | 1 MB     | 700 KB   | 12” 640 x 400 | 3-voice synthesizer

Atari’s computer played two roles, if you will. You could buy it with a colour monitor or connect it to your own TV via an add-on device. In this form of use, the computer could display four or sixteen colours depending on the resolution selected and, with its sound chip and fast Motorola CPU, was the modern successor to the 8-bit home computers of the early 1980s. However, if one opted for the monochrome monitor, the ST was above all a competitor to the Macintosh, and was quite convincing as a computer in offices and administration. The ST was not only much cheaper than the Macintosh, but also had a better keyboard with arrow keys and numeric keypad and a considerably larger monitor that provided a very high, flicker-free picture quality.

The operating system of the Atari ST was called TOS (The Operating System). Technically, behind it was a 16-bit implementation of Digital Research’s CP/M operating system, which you learned about in the chapter on the Altair 8800. Early books about the Atari ST, written before its release, sometimes still contained descriptions of the CP/M command line interpreter and explained the basic characteristics of that operating system. They also described a graphical user interface called GEM (Graphics Environment Manager) which, like CP/M, was developed by Digital Research. When the computer appeared in 1985, however, nothing was left of the CP/M command line. Atari had decided instead to design the operating system so that, like the Apple Macintosh, it booted directly into the GEM graphical user interface and could only be operated that way. The CP/M heritage could still be recognised in some peculiarities, such as addressing floppy drives via drive letters or the naming scheme for files: eight characters for the name, followed by a dot and three characters defining the file type. In this respect, the Atari ST was closer to the DOS of the IBM PCs than to the operating system of the Apple Macintosh, which gave the user much more flexibility with file names.
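
The naming scheme just described can be captured in a few lines. The following Python sketch merely illustrates the eight-plus-three pattern; the exact set of characters allowed by TOS, CP/M or MS-DOS differs in detail and is simplified here as an assumption.

```python
# A small sketch (not Atari or Digital Research code) illustrating the naming
# scheme described above: at most eight characters for the name, then a dot and
# at most three characters for the file type. The allowed character set is a
# simplification; the real TOS/CP/M/MS-DOS rules differ in detail.
import re

EIGHT_DOT_THREE = re.compile(r"^[A-Z0-9_]{1,8}(\.[A-Z0-9_]{1,3})?$")

def is_valid_8_3(filename: str) -> bool:
    return bool(EIGHT_DOT_THREE.match(filename.upper()))

print(is_valid_8_3("LETTER.DOC"))             # True
print(is_valid_8_3("REPORT1.TXT"))            # True
print(is_valid_8_3("QUARTERLY_REPORT.TEXT"))  # False: name and type too long
```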

Compared to the Macintosh, the Atari’s desktop was very slimmed down. When you started the computer, you saw a grey surface like on the Macintosh, on which the contents of disks and folders could be displayed in windows; disk icons and a recycle bin were also available. However, such basic options as storing a file directly on the desktop surface were missing. This restriction was quite logical given the computer’s equipment, because the usefulness of placing files on the desktop was severely limited in a single-tasking system geared towards floppy disks, as explained above.

1st Word Plus on the Atari ST with the possibility to delete files

Since the desktop of the Atari ST was not available for file management while an application was running, file management functions were built into many application programmes. The illustration above shows the word processor 1st Word Plus, which, in addition to loading, saving and printing files, also offered the direct deletion of files on the diskette. If you put yourself in the position of a diskette user, this function was very useful. Imagine the situation: you have written an important text and now want to save it, but then realise that there is not enough space left on the diskette. If you are lucky, you have another diskette at hand - but what if you don’t? You cannot switch to the desktop and delete a few files, because this is a single-tasking system: switching to the desktop would mean quitting the running programme, and that would mean discarding the text you have written, because you cannot save it. The ability to delete no longer needed files on the diskette directly from the word processor could then be your last resort.

Commodore Amiga - The Multimedia

Commodore Amiga 500 - Image: Bill Bertram (CC-BY-2.5)

Competing with both the Atari ST and, to a certain extent, the Macintosh was the Commodore Amiga, the first model of which also appeared in 1985. Pictured above is the most common model, the Amiga 500 from 1987. At first glance, this computer looks very similar to the Atari ST: the Amiga 500 also has the computing unit and keyboard integrated in one case, both machines have a mouse, and both use 3.5-inch floppy disks as storage media. As you can see on the screen, the Amiga also had a user interface with icons and windows and in this respect resembled the Macintosh as well. Instead of a desktop, however, the Amiga had a “Workbench”, and accordingly its floppy disks contained not folders but “drawers”. At first, workbench and drawers are just different names for the central concepts, but ultimately this naming revealed a completely different orientation: unlike the Macintosh and the Atari ST, which had adopted their mindset from the Xerox Star, the Amiga was not oriented towards office work.

The great strength of the computer was its graphics and sound capabilities. It had countless graphics modes, from high resolutions of 640 x 512 pixels with sixteen colours to 580 × 368 pixels16 with a whopping 4,096 colours that could be displayed simultaneously. No other home or personal computer platform was as graphically powerful at the time. The video signal and screen technology were well-known, solid television technology. The monitor supplied was basically a small colour TV without a receiver. The use of this technology had its advantages in terms of the computer’s multimedia capabilities. However, this design decision was also associated with considerable disadvantages that made the computer difficult to use, especially in the office, because although the monitor offered a high resolution, which in principle would also have allowed fine text output or graphics, the high resolution was only available in the so-called “interlaced mode”. The disadvantage of this technique, which I will not go into in detail here, was that there was a visible and unpleasant flicker wherever picture elements with high contrast met vertically. The flickering could be minimised by a clever choice of colours, but then the contrast suffered. For the classic office application of word processing, with its fine black text on a white background, the Amiga was therefore rather poorly suited. However, due to its outstanding graphics unit together with an equally outstanding sound chip, the Amiga was an ideal platform for games. In addition, the graphics and sound functionalities were also used for image editing and animation. Here, the technical decision to rely on television technology for the graphics output became a great advantage, because the video signals of the Amiga could be directly coupled with existing video and television technology. Many an animation that was shown on television at the time was created on an Amiga computer.

The Amiga and its operating system were superior to their counterparts from Atari, Apple and also IBM in another respect, because unlike them, the Amiga was capable of multitasking. If one opened a text file or a picture from a window in the Workbench, another window opened displaying the text or the picture. However, the Workbench with its windows also remained available. Neither the Atari ST, the Apple Macintosh nor the IBM PC could offer that at the time. Even if the windowing technique could not be used because a programme used a different graphics mode than the Workbench, it was possible to switch between the programmes, among other things - here again the outstanding graphics capabilities of the system became apparent - by dividing the screen into several “viewports”. Then, for example, an animation was running in low resolution with many colours in the upper half of the screen, while another programme was running in high resolution with only a few colours in the lower half.

Multitasking and windows on the IBM PC

By the mid-1980s, Macintosh, Atari ST and Amiga were three computer systems on the market that established the graphical user interface and, in the case of the Amiga, multitasking in the personal computer sector. Compared with these three systems, the DOS of the IBM PC seemed to belong more to the previous computer generation, while Macintosh, ST and Amiga seemed fresh and modern. Of course, the user interface of the IBM PC did not remain at the level of 1981, but caught up more and more with the three window-based systems. The basis for this, apart from faster processors and more working memory, was above all the increasing spread of hard disks.

While at the beginning of the 8-bit home computer era the cassette was still the storage medium of choice, the 16-bit generation was designed from the outset to use floppy disks. However, the time of the floppy disk as the main mass storage device was only an interim period. The Atari ST, the Commodore Amiga, the Apple Macintosh and the IBM PC could all be supplemented with a hard disk. In the first generation of devices, such a disk was often still an external unit that stood additionally on the table. Later, a hard disk was often already installed ex works. The IBM PC XT (for Extended Technology) from 1983, for example, could already be purchased with a built-in 10 MB hard disk. But be careful! The fact that the 16-bit computer generation in principle provided interfaces for connecting a hard disk right from the start, and that today’s specimens of these computers in the collections of retro computing fans also usually have a hard disk, should not lead you to believe that hard disks were already common in the home at that time. Retro computing devotees often have a lot of fun upgrading the old computers to their maximum expansion level. Sometimes they even manage to use modern tricks to upgrade the old computers further than was possible at the time. Back then, of course, most systems were not equipped in this way. Much more modest configurations were common, because hard disks and RAM were horrendously expensive. If you ordered the IBM PC XT with the 10 MB hard disk, a monitor and 640 KB of RAM, you had to put 8,000 dollars on the table. If inflation is taken into account, this would have been over $21,500 in 2021 - too expensive even for many small businesses, not to mention home users.

When Microsoft licensed the first version of its DOS to IBM in 1981, hard disks were not yet in focus. DOS, like the CP/M it was modelled on, was clearly an operating system for floppy disks. This was particularly noticeable in that a central element of today’s operating systems, the hierarchical file system, did not yet exist in DOS. DOS and CP/M allowed files to be written to floppy disks and made addressable by their file names. A diskette itself, however, had no further internal structure. There were no subdirectories or folders; all files were on one level. With a floppy disk on which 180 KB or 320 KB of data could be written, such structuring was in principle not necessary either. The disk itself was more or less the unit of order for the files. But if you connected a hard disk and had 5 or even 10 MB available, it became impractical to keep all files in one big list. Substructuring became necessary.

The second version of Microsoft’s DOS appeared in 1983. Although the operating system had the same name and seemed quite similar in terms of basic operation, it was basically an almost completely new development. Microsoft extended DOS with a whole range of functions and concepts from the Unix operating system. These included the technique of piping, in which the input and output of programmes are redirected, as well as a simple programming option in the form of so-called batch files. You can read more about these Unix concepts in the chapter on Linux and Unix. Probably the most important innovation for most users was the introduction of the hierarchical file system with folders and subfolders. With it came central new commands that became part of the standard repertoire of all DOS users: the already familiar dir for listing the contents of a directory was now joined by cd (for change directory) to change to another directory and md (for make directory) to create a subdirectory.
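A short, hypothetical command sequence may make this more concrete. The directory names are of course invented, and the exact appearance of the prompt depended on the configuration; the commands themselves, however, are exactly those introduced with DOS 2:

    C> md LETTERS        (create the subdirectory LETTERS)
    C> cd LETTERS        (change into the new directory)
    C> md PRIVATE        (create a further subdirectory within LETTERS)
    C> dir               (list the contents of the current directory)
    C> cd \              (return to the root directory)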

While the early personal computers, with their disk-centric design and little memory, inevitably had to be devices in which only one programme was in memory and being processed at any given time, well-equipped IBM-compatible PCs with a hard disk and plenty of RAM basically fulfilled the prerequisites for making several programmes available at the same time. In fact, you could buy the PC from IBM in a configuration that made this possible. The operating system supplied was then not Microsoft’s DOS, but Unix, initially in the PC/ix variant from ISC and from 1985 as Xenix from Microsoft. Unix on the IBM PC did not catch on at the time, however, because the hardware requirements were high and the software selection modest due to the low distribution. The vast amount of software that made the IBM PC so attractive, from spreadsheets to word processors to databases or even games, was developed for DOS. DOS did not allow real multitasking, but a very simple, limited possibility of starting one programme without having to quit another existed from version 2 onwards, because one programme could now start another programme. However, the software used had to explicitly provide for this possibility. In order to understand how the use of multiple programmes was possible, a short excursion into the mechanics of how a programme is loaded is needed. Don’t worry, I’ll try not to make it too complicated:

When you started a CP/M or MS-DOS computer, after a while the command line interpreter greeted you, allowing you to enter commands to copy files, delete them or start a programme. But the command line interpreter itself was also just a programme. Its only special feature was that it was started automatically by the operating system. If one used the command line interpreter to start another programme, that programme was copied into the working memory and then started. If the programme was terminated, control went back to the operating system, which then automatically reloaded the command line interpreter into memory and started it. At any given time, there was only one programme in memory - and consequently only one programme was running at any given time. The other operating systems worked in the same way. On a Macintosh, it was the Finder programme that started automatically. It provided the folder windows and the desktop. If another programme was started from the Finder, it was loaded into memory in place of the Finder and started. After the programme was closed, the operating system automatically restarted the Finder. With version 2 of Microsoft’s DOS, the way programmes were started changed a little. It was now no longer the case that loading a programme necessarily displaced the previous programme in memory. A running programme, I’ll call it “A”, could now load another programme “B” into memory and start it. However, all data of programme A remained in memory. If the user was finished with what he wanted to do in programme B and ended it, programme A was continued where it had been interrupted. This technical change in how programmes could be loaded did not yet make it possible to run several programmes at the same time and switch between them, because that would have required some kind of higher-level control that kept the various programmes in memory under its supervision and allowed the user to switch. There was nothing like that under MS-DOS. What was possible, however, was to run programmes in a kind of chain. Imagine the following situation: You want to write a letter, have opened the word processor for it and have already written a longer section of text. But now you notice that you need information for the letter that is stored in a spreadsheet. If your word processor allows it, you can start the spreadsheet programme directly from it and look up the information there. After closing the spreadsheet programme, you automatically return to your text in the word processor - as if nothing had happened - and can directly incorporate the new findings. This was not really comfortable, but it at least reduced the need to save everything in programme A and exit it just to be able to use a programme B.
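If you would like to see what this chaining looked like from a programmer’s point of view, the following minimal C sketch illustrates the principle. It does not use the DOS-specific system call, but the standard C function system(), which behaves in just the way described: the calling programme is suspended, the named programme runs, and afterwards the caller continues exactly where it left off. The programme name SHEET.EXE is, of course, made up.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Programme "A": the word processor, busy with a letter ...   */
        printf("Word processor: editing the letter.\n");

        /* Programme "A" starts programme "B" and is suspended until
           "B" terminates. SHEET.EXE is an invented name standing in
           for the spreadsheet programme.                              */
        system("SHEET.EXE");

        /* Only after "B" has ended does "A" continue right here.      */
        printf("Word processor: back again, the letter is still loaded.\n");
        return 0;
    }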

The prerequisite for this to work was, of course, the availability of sufficient working memory, and that was definitely a problem, because memory was not only expensive, its handling under DOS was also a very complex undertaking. If you have a computer freak in your circle of acquaintances who is old enough that he or she might still have worked with DOS, talk to him or her about it. You are sure to get an hour-long lecture (on the differences between upper and high memory or even between extended memory and expanded memory) or a sudden pallor combined with gasping for breath!
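To give at least a rough idea of what such a lecture would be about: in somewhat later DOS versions (roughly from DOS 5 onwards, so a few years after the period described here), a typical CONFIG.SYS contained lines like the following just to make the most of the first megabyte of memory. Take it as an illustrative sketch of the complexity, not as a recipe; the paths assume that DOS is installed in C:\DOS.

    REM Driver that makes extended memory (XMS) available
    DEVICE=C:\DOS\HIMEM.SYS
    REM Emulates expanded memory and opens up upper memory blocks
    DEVICE=C:\DOS\EMM386.EXE RAM
    REM Load parts of DOS into the high memory area and use upper memory
    DOS=HIGH,UMB
    REM Load a device driver into upper memory instead of conventional memory
    DEVICEHIGH=C:\DOS\ANSI.SYS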

VisiOn, TopView, DESQView

Above I told you that the first computer system after the Xerox Star and the Apple Lisa that relied on mouse operation and windows was the Apple Macintosh of 1984. In a rough view, that is also true. But if you look closely, you will find VisiOn, a graphical work environment for IBM PCs, which was presented as early as 1982 and went on sale at the end of 1983. The manufacturer of the VisiOn user interface was the company VisiCorp, which was also behind VisiCalc, the original version of the spreadsheet I introduced to you in the chapter on the Apple II. In some respects VisiOn was superior to the Macintosh’s user interface because, unlike the Apple computer, the system offered the possibility of having several documents open in different applications at the same time and switching back and forth between them. This capability was of course accompanied by corresponding hardware requirements. The PC needed 512 KB of RAM, a hard disk of at least 5 MB, a CGA graphics card and a mouse. IBM’s PC XTs from the same year had a hard disk if you chose the more expensive equipment variant, but usually only 128 KB of RAM. A mouse was not included in the scope of delivery of the computer. So on top of the $5,000 for this top-of-the-line device, there was the cost of a memory upgrade and a mouse.

VisiOn - Image: winworldpc.com (CC BY-SA 4.0)

VisiOn was not a complete operating system. To use it, one first started the computer with Microsoft’s DOS and then loaded VisiOn like any other DOS application. VisiOn then first presented itself with the “Application Manager”, with which applications could be called up. The applications were each displayed in their own window, as can be seen above. The applications in question were not the normal applications for IBM PCs, but programmes created especially for VisiOn. In addition to the Application Manager, a word processor (VisiOn Word), a spreadsheet (VisiOn Calc) and software for creating business graphics (VisiOn Graph) were available. In principle, the system could have been extended with further applications, but additional software never appeared.

If you are familiar with modern window-based user interfaces, the operation of VisiOn seems peculiar in some respects. Even at first glance at the illustration, the user interface seems almost a bit upside down. Instead of a menu at the top of the window, as with Windows, or at the top of the screen, as with the Macintosh, system-wide functions are displayed at the very bottom of the screen and application functions in the footer of the respective window. The windows also lack all the small controls for closing, moving and zooming. These can be found instead in the menu in the footer. But it is not only the position that is unusual from today’s point of view; the entire way of working seems downright twisted. All of today’s graphical user interfaces use the object-verb paradigm. You first select an object, such as a file or a window, and then select the action you want to perform, such as opening, closing or deleting that file or window. VisiOn, on the other hand, used a verb-object way of working. If you wanted to close a window, you first clicked on “CLOSE” at the bottom. In the line above, the system then reported “Close. Which Window?”. Now you had to click on the object that was to be closed, in this case one of the windows. This way of working seems totally unusual to us today. That the software engineers at VisiCorp nevertheless came up with it is not at all incomprehensible from the point of view of the time, because the command line interfaces, which were still dominant back then, also functioned according to the verb-object principle. If one wanted to delete the file “unsinn.txt” under DOS, one wrote the command first and the object to which it referred second: del unsinn.txt.

You can probably guess that VisiOn was not a success. However, this was not due to the peculiar way of operating. It would have been possible to get used to it, and it would certainly have been possible to switch to object-verb operation in later versions. The problem was rather that VisiOn, with its high hardware requirements, probably came a few years too early. Just to use the mouse and windowing technology, it was ultimately not justifiable to purchase such a highly sophisticated, expensive computer. The software situation was also problematic. Although VisiOn supplied typical office software, it did not run the established software products that made IBM computers and their compatibles so popular in the business world. The word processor was no WordPerfect and no WordStar, and the spreadsheet was no Lotus 1-2-3, not even a full-fledged VisiCalc.

If you have read my explanations of the Xerox Alto, you will have learned there that the system, though not a commercial success, had a great influence on the development of the user interfaces, which was also related to the fact that Steve Jobs made a visit to Xerox PARC, where the presentation of the Smalltalk environment persuaded him to make far-reaching changes to the interfaces under development for the Apple Lisa and ultimately the Macintosh. VisiOn also found its inspiration in the Smalltalk environment, but was in turn the inspiration for later developments. According to the story, Bill Gates saw the presentation of the environment at COMDEX 1982, which encouraged him to push ahead with his own project, which would become known as “Windows”. However, the time had not yet come and users of the IBM PC had to be content with MS-DOS and its limited user interface.

If you bought an IBM PC or one of its cheaper compatibles in the mid-1980s and equipped it with sufficient RAM and a hard disk, a wide variety of software was available to you. The PC was not exactly a gaming or graphics platform, but especially in the area of business software, there was hardly anything left to be desired on the PC. The most annoying limitation of the system in daily work was that it was still a single-tasking system. Users were forced to constantly save files, close programmes, start other programmes, close them again and so on. The programmes that were actually needed at the same time could only be used alternately. Users of the Amiga or even the few users of VisiOn had a clear advantage here. They could easily switch back and forth between the programmes. By the way, when I talk about multitasking, I don’t care at this point whether it is true multitasking, where the programmes continue to run in the background, or whether it is only switching between programmes that allows the active programme to run while the other programmes freeze. There are situations where true multitasking is important, such as when you have music playing in one programme and want to work in another programme at the same time. In very many cases, however, there is no difference in practice whether a background programme continues to run or not. If, for example, you switch from word processing to spreadsheet, it does not matter if the word processing is stopped.

DESQview with several windows - Image: winworldpc.com (CC BY-SA 4.0)

A number of software manufacturers released system extensions in 1985 that were supposed to make it possible to run several DOS programmes simultaneously and switch between them at will. IBM’s TopView and QuarterDeck’s DESQView software were widely used. Both products allowed MS-DOS programmes to run in their own windows. However, this window mode did not work with every programme. Programmes that used the entire screen, for example to display graphics, could often still be used with the system extensions, but could not be displayed in a window. The first version of Microsoft Windows appeared in November 1985. This new operating environment also allowed several programmes to run simultaneously in windows. In addition to applications written especially for Windows, this also worked with DOS programmes, as long as the programme played along and the working memory was sufficient.

In practice, multitasking on IBM-compatible systems remained limited for quite a long time. This is because limitations of IBM’s original PC and the need to remain compatible with it made memory management under MS-DOS extremely complex. In principle, all programmes running simultaneously had to fit into a memory area of only 640 KB. There was also the problem that DOS programmes often used hardware-related tricks to make the software faster. In multitasking environments, these tricks always caused problems, because each software developer assumed that their programme had the computer completely to itself. If two programmes started to access the hardware directly at the same time, for example to draw something on the screen, the result was at best a chaotic display, at worst a crashed computer. DOS programmes were simply not designed for multitasking. And Windows? Windows applications were, of course, window- and multitasking-capable from the outset, but it was a good five years before Windows really became widespread.
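What did such a hardware-related trick look like? A classic example was writing characters directly into the video memory of the graphics card instead of asking DOS or the BIOS to display them, which was far faster. The following fragment is a sketch in the style of DOS-era C compilers such as Turbo C; the far pointer will not compile on a modern system. It assumes the computer is in the standard colour text mode, whose screen memory starts at address B800:0000. If two programmes did this at the same time, whichever wrote last simply won - with the chaotic results described above.

    int main(void)
    {
        /* Pointer to the text-mode screen memory of a colour adapter.
           Each screen position consists of a character byte followed
           by an attribute byte with the colour information.           */
        char far *video = (char far *) 0xB8000000L;

        video[0] = 'A';     /* character in the top left corner        */
        video[1] = 0x1F;    /* attribute: white text on a blue ground  */

        return 0;
    }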

Windows and MacOS

A look at the year 1985 reveals a colourful diversity in the home and personal computer market. The IBM PC and its countless replicas were particularly popular in the American business sector. Of course, the 8-bit home computers such as the Commodore 64 or the Atari 800XL sold a lot. Then there was the more expensive, but also better equipped Apple II. The devices of the successor generation of these computers, the Commodore Amiga, the Atari ST and the Apple Macintosh were already available, found their buyers and gradually broke down the division between home and office computers. C64 and co. remained successful for a long time, however, because they were much cheaper than the new devices, and because there was a huge range of software and games for the devices, which could also be easily copied. In America, a wide variety of computers were available under the Tandy brand, and the British computer market enriched the selection with computers from Sinclair, Amstrad and Acorn. Of course, this list is far from complete. What almost all these types of computers had in common was not only that they were small and - compared to minicomputers of only ten years before - also cheap and based on microprocessor technology, but that they were not compatible with each other. They each had their own operating system with their own user interface, so they operated differently. The programmes of one computer were also not executable on the other. Most of the time, not even the data carriers were readable. The incompatibility even applied to the computers of the same manufacturer. An Apple Macintosh could not run the Apple II’s programs, and a game developed for the C64 would not run on an Amiga.

Computer nostalgics often mourn this time with the many different systems, because each system had its own quirks, its own strengths and its typical characteristics. In those days, however, many were annoyed by this, because the incompatibilities at all levels cost a lot of time and nerves. If, for example, a proud owner of a Macintosh wrote a text in the included MacWrite, saved it on a 3.5-inch floppy disk and gave it to an acquaintance with an IBM PC, then he had the problem that he probably did not have a drive into which he could insert these floppy disks, because 5.25-inch floppy disks and the corresponding drives were still common with IBM PCs at that time. If he gave it to his colleague with an Amiga, he could physically put the disk in the drive, but he could not read it, because the way the data was stored on the disk differed between the systems. Even if someone managed to make the files of one computer accessible on the other - the Atari ST, for example, could read 720 KB floppy disks formatted with a PC - this still did not achieve much, because the different systems each had their own software, which was usually not compatible with that of the other systems. You could then see the files of the foreign system on the desktop, but unfortunately you could neither view nor edit them. Of course, there were ways around the incompatibilities at the time. Files, for example, were often not transferred by floppy disk but by means of a cable that, for example, connected the standardised interfaces of the computers at the point where one normally connected a printer. Compatible transfer software on both systems, which one could even program oneself if necessary, made it possible to transfer the files. If one then agreed on standard file formats or was in possession of practical conversion programmes, it was quite possible to exchange files between the computers. But all that was really not practical.

Diversity in the personal computer market persisted until around the mid-1990s. Atari’s Falcon 030 was the last computer in the ST series in 1993. In the same year, Apple ended production of the Apple IIe. The Commodore 64 was built until 1994 and the last Amiga models were launched in 1996. The previous diversity was lost - the personal computer market became more and more a two-sided affair with Apple and its macOS17 on one side and Microsoft with Windows on the other. Looking at the two systems today, one finds far more similarities than differences. Both systems are mature operating systems with complex user interfaces. Their differences largely fall into the realm of personal preferences. Incompatibilities are also no longer a problem today. Large parts of the software are available on both systems, data carriers of one system are usually at least readable on the other and standardised file formats facilitate cooperation across system boundaries.

In the following, I will explain very briefly how the two operating systems have developed to where they are today, because the preconditions for their development were anything but the same. Apple, with a brief exception in the late 1990s, has only ever had to look at its own devices with its own hardware. Microsoft, on the other hand, faced the situation of having to develop system software that could be used on a wide range of devices. In this respect, Apple still has an advantage today, which may sometimes be a reason why the macOS runs a little smoother and is somewhat less complex than Microsoft’s Windows. My focus in the following historical outline is not on such product policy aspects, but on the user interface world of the systems. I am only marginally interested in what has happened “under the bonnet” of the systems. The fact that Apple, for example, changed its hardware architecture twice is something I am simply ignoring here, as are the multimedia and Internet capabilities of the systems, which did not differ except for minor details.

MacOS - The Desktop and the Finder

The first version of MacOS appeared with the original Macintosh in 1984. This Macintosh was, as described in the previous chapter, a very simple computer with only 128 KB RAM, a single floppy drive and no hard disk. The consequence of this restriction was that the computer had to run in single-tasking mode, i.e. only one programme could be used at a time. The main interface of the operating system was a piece of software called the Finder. The Finder provided access to the file system and offered a desktop with a recycle bin, on which inserted floppy disks were displayed as icons and files could be placed. Double-clicking on a diskette icon opened a window showing the files stored on the diskette as icons. If there was a folder on the diskette, this could also be opened with the Finder. Double-clicking on a file icon started the programme associated with the document. Often, however, another diskette had to be inserted for this purpose, on which the programme was stored. When the user had finished editing, he had to close the programme and insert the operating system diskette again. The Finder was then loaded again along with the desktop.

An early version of MacOS

Typical for MacOS was that files and folders could be moved by drag and drop, for example between open folder windows or from one disk to another. This even worked with only one floppy drive. The Finder made it possible to eject a diskette but keep it accessible on the desktop. As soon as this diskette was to be accessed, the system demanded that the diskette be changed. Copying a file from one floppy disk to another thus became possible, but was of course limited by the small working memory and involved switching disks back and forth many times for large files. It was more convenient to buy an additional external floppy drive and connect it to the computer. Documents and folders stored on a disk could not only be moved between windows, but could also be dragged out of the windows and placed directly on the desktop surface. That this worked with an operating system designed purely for floppy disk operation may be quite astonishing. Where was a file when it was placed on the desktop? You already know the answer to this question if you read in the previous chapter what the Apple Lisa desktop was all about. If files were moved to the desktop on the Macintosh, they were displayed on the desktop background, as with the Lisa, but their actual storage location remained the diskette. With the Lisa, the file continued to be seen as a greyed-out icon at the original storage location; with the Macintosh, this representation of the actual storage location was dispensed with. A file stored on the desktop only appeared there and no longer in the diskette or folder window, so that the impression could arise that the file had been moved from the diskette to the desktop. Apart from this visual difference, however, the mechanism worked in exactly the same way as with the Lisa. If one selected a file stored on the desktop and then clicked on “Put Away” in the menu (titled “Put Back” in early versions of the operating system), the file was removed from the desktop and henceforth reappeared in its original location - it was “put back”, in keeping with the metaphor.

The desktop of the Macintosh always showed the files that were listed as being on the desktop on the diskette currently inserted. There was a separate section for this on each disk. It recorded which folders of the diskette were currently open in which kind of window and which of its files were placed on the desktop at which position. Only by saving this information on the floppy disk was it possible for the Finder, after a programme had been quit, to display the same files on the desktop again and reopen the same windows. If one inserted a floppy disk while the Finder was running, the files from this floppy disk that had previously been placed on the desktop reappeared at their saved positions, and the folder windows that had previously been open also reappeared. So, in a way, each disk stored its own desktop. This kind of desktop management opened up quite interesting usage scenarios for working in different contexts, such as in different projects: Assuming that a floppy disk always stored the documents of a single project, you could set up a desktop for each of your projects and then load it as needed by inserting the appropriate floppy disk again. This technique was also exciting if you worked in an office with several Macintoshes, because you could take your project disk with you, insert it into the Macintosh at another workstation and still have all the working materials ready on the desktop and all the folder windows open again, just as you had left them on your own computer.
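To make the idea that each disk stored its own desktop a little more tangible, here is a purely conceptual sketch of the kind of information that had to be recorded per diskette. It is emphatically not Apple’s actual on-disk format, just an illustration of the principle in C.

    /* Purely conceptual - not Apple's actual data structures. */

    struct desktop_entry {
        char file_name[32];   /* which file of this diskette            */
        int  on_desktop;      /* is it currently shown on the desktop?  */
        int  x, y;            /* position of its icon on the desktop    */
    };

    struct open_window {
        char folder_name[32]; /* which folder of this diskette is open  */
        int  x, y;            /* position of the window on screen       */
        int  width, height;   /* size of the window                     */
    };

    /* Each diskette carries its own lists of such records, so that
       inserting it restores "its" desktop and its open windows.        */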

Microsoft Windows - A Failure?

In 1983, even before the release of the Macintosh, Microsoft announced a graphical user interface called Windows. Microsoft was involved in the software development for the Macintosh. The first versions of Excel and PowerPoint and the first Word version with a graphical user interface did not appear for the IBM PC, but for the Macintosh. These development insights naturally allowed Microsoft to be “inspired” for its own system. From then on, the accusation of stealing ideas was always in the air and became the subject of court rulings and, not least, material for the almost religious debates about which system was better. If you have an opinion in this regard, I would like to ask you to detach yourself from it, at least for the moment, and first look at MacOS and Windows as what they are or what they were, because the differences between the systems are actually much more interesting than the inspirations, of which there would be many in both directions over the coming decades anyway.

Microsoft Windows Version 1.01 with open windows for Write and Paint, below minimised the MS-DOS Executive

The first published version of Windows appeared in November 1985. This Windows 1.0 had a bad reputation. This was certainly partly justified - more on that later. On the other hand, it is important to recognise what Microsoft achieved here. The hardware requirements of Windows were quite modest on paper. A user needed only 256 KB of RAM, a computer with two floppy drives, a mouse and a display that could show not only text but also graphics. With the exception of the mouse, which was not yet standard at the time, this equipment was nothing special for computers of 1985. However, as was to be the case with many Windows versions in the following years, this minimum configuration specified by Microsoft was not really sufficient for practical use. A hard disk and above all more RAM were highly advisable. Hard disks were already becoming common in the corporate context, but working memory beyond the 1 MB limit was still extremely cost-intensive. However, if one had such an expensive memory expansion card, Windows could use the extra memory and thus provide its applications with more memory than the 640 KB directly usable under MS-DOS.

Windows 1.0 was not yet an independent operating system. A Windows user started his computer with MS-DOS and then loaded Windows just like any other DOS programme. Depending on the graphics card installed, the system appeared in different looks. If only a CGA graphics card was installed, Windows was displayed in monochrome. The resolution was somewhat limited at 640 x 200 dots, but quite usable. If, on the other hand, the proud Windows user had an extremely modern EGA graphics card, the resolution rose to 640 x 350 pixels with sixteen colours. The best resolution of 720 x 348 pixels could be achieved with a so-called Hercules graphics card. Since Hercules graphics cards did not support colour by default, the display was of course monochrome here as well. In the illustration above you can see Windows in the EGA representation. Microsoft’s choice of colours left a lot to be desired from an ergonomic point of view. I can only speculate about the reasons for this. For one thing, the EGA graphics standard’s ability to display muted colours was limited. The sixteen possible colours were all quite bright and saturated. On the other hand, I can well imagine that Microsoft wanted to emphasise that its own system, in contrast to Macintosh computers of the time, supported colours, and that this is why such striking colours were chosen.

Windows allowed the execution of DOS applications in a window, but above all, of course, the execution of dedicated Windows applications. Windows had no desktop and no tool really comparable to the Finder. The central interface was the so-called “MS-DOS Executive”, a very simple file manager and programme launcher. It consisted of only one window in which a file listing was displayed. There were no icons, no spatial positioning and no drag and drop. Since Windows was based on DOS, it also inherited its file name limitation of eight plus three characters in capital letters and without special characters. So you couldn’t call a file “The 5th attempt at a book on computer history.doc”, but had to make do with something like “CGESCH_5.DOC”. Windows 1.0 looked rather crude compared to Apple’s operating system. The colour scheme was garish, the system used terminal-era fonts throughout its user interface instead of fonts newly developed for the graphical interface, as Apple had done, and even the small icons in the windows, such as those on the scroll bars, looked very old-fashioned compared to Apple’s. In addition, the system ran incredibly slowly on most computers of the time. In many cases, one could literally watch the screen contents being redrawn.

But Windows was also ahead of MacOS in some respects. First and foremost, multitasking must be mentioned here. Several applications could run in windows at the same time in Windows. The windows could be arranged in such a way that the applications could be seen at the same time. Above you can see the Paint and Write applications side by side. A system-wide shared clipboard allowed data to be exchanged between programmes. For example, the graphic drawn in Paint could be copied and pasted into the text in Write. This was also possible in a similar way with the Macintosh, for example with the programmes MacPaint and MacWrite, but with Windows none of the programmes had to be closed first in order to carry out the operation in the other - assuming sufficient working memory, of course.

The window handling in Windows 1.0 differed from that of later Windows versions and also from that of MacOS. If you use the system today, it takes a bit of getting used to. Still quite familiar is the possibility of minimising a window, in Windows 1.0 quite fittingly called “Iconize”, because a minimised application then appeared as an icon at the bottom of the screen. A display in which a window filled the entire screen, called “Zoomed”, was also already available. What is striking in comparison to today’s systems, however, is that there were no overlapping windows in Windows 1.0, with the exception of message windows and superimposed windows for loading and saving files. From today’s perspective, this characteristic is often described as a disadvantage of the system. But if you try it out for yourself, it becomes clear that the way windows were arranged in Windows 1.0 was quite practical. Unlike today’s Windows or MacOS, where the user has full control over window positioning, in Windows 1.0 the system itself did much of the splitting work. If only one programme was running, such as the MS-DOS Executive after start-up, it was given the maximum space. If the user then started another programme, the screen was automatically split so that both applications found their place. However, one did not have to surrender completely to this automatism. The user certainly had an influence on the arrangement. If he picked up the icon of a minimised application and dragged it to the top of the screen, Windows assigned the upper half of the screen to the corresponding window. The same worked with the other edges of the screen and with non-minimised windows by selecting the menu item “Move” or by dragging the title bar to the corresponding edge. Once you had this mechanism figured out, it was very easy in Windows 1.0 to bring several applications or several documents into view at the same time in order to work with them in context. To create an arrangement like the one above of Write and Paint in later Windows versions or even in MacOS meant a lot of effort; under Windows 1.0 it took only a few seconds. It was not until Windows 7, 24 years later, that Microsoft reintroduced a function that made it possible to split windows automatically in this way, thus combining the advantages of automatic window splitting with the advantages of freely positionable windows.

The first Windows version was not a success, although Microsoft’s graphical user interface was quite superior to Apple’s in some respects. It allowed multitasking, the window handling was very practical especially when running several applications at the same time, and unlike the Macintosh, Windows could also be used with a colour display. But in many other and perhaps more important areas, Microsoft lagged behind. The visual design of MacOS was much more sophisticated. With the Finder, an effective tool for file management was available, the desktop could serve as a place for the current working documents within the limitations of the system, and the system was quite fast due to its restriction to single-tasking and its optimisation for the hardware used. Windows, on the other hand, could run on most PCs of the time, but on them it did not have enough power for really smooth working. The fact that hardly anyone bought Windows 1.0 was certainly also due to the fact that, apart from the simple programmes that came with it, there was virtually no software at all. Even Microsoft’s own applications Word and Excel only became available with the second version of Windows.

The possibility of having several programmes accessible at the same time was of course also in demand on the Macintosh. After a number of stopgap solutions, the MultiFinder was introduced in 1987. If you used it instead of the classic Finder, you could switch between the running applications in the menu bar of the Macintosh. For Macs with little memory or no hard drive, i.e. systems where multitasking made little sense, the old Finder with single-tasking operation remained available.

Also in 1987, Microsoft introduced the second version of Windows. The system now ran more smoothly and faster. Just as with the Macintosh, there were now overlapping windows. The practical automatic window arrangement unfortunately fell victim to this change. Software for Windows 2 was now also available for purchase. The most important programmes were Aldus PageMaker, MS Word and MS Excel. All three had been ported from their respective Macintosh versions.

Windows 3 - Windows comes of age

Microsoft Windows 3.1

The first really successful Windows versions appeared with the 3.x series from 1990. Windows 3.0 was completely redesigned visually. The clunky terminal fonts that were still used in Windows 2 were partly replaced by more modern screen fonts. The MS-DOS Executive was also abolished and replaced by two programmes that shaped the basic way of working with Windows from then on. The programme manager served as an overview of the application programmes installed on the computer. These were presented as icons, which were organised into so-called “programme groups”. The concept of the programme manager was similar to that of the programme launchers and home screens on iOS and Android. Here, too, the application programme is in the foreground, represented by an icon and started from a central location. A newly developed file manager was added alongside the programme manager. This file management programme enabled considerably more comfortable handling of the files in the file system than the previous MS-DOS Executive. Several windows could be opened within the file manager. File operations could thus be carried out conveniently by drag and drop.

The very popular Windows 3.1 from 1992 mainly brought changes under the bonnet. Worth mentioning from a user interface point of view was the OLE (Object Linking and Embedding) technique, which allowed objects from one programme to be embedded in another programme while still ensuring the object’s editability. For example, sections of Excel tables could be integrated into Word without losing the possibility of further editing.

The 3.x versions of Windows were strongly programme-centred due to the dominant presence of the programme manager as the main user interface. As a rule, one did not open a document, but started a programme and then loaded the document in it. Although the file manager enabled convenient file management within Windows, it had a completely different character from the Finder. This was due to the fact that the file manager was considerably less central. It was not, like the Finder, the main interface of the operating system, but only one of many application programmes. Its presentation was also much more technical than that of Apple’s Finder. One did not work with documents displayed as icons that could be freely positioned within a window or on the desktop, but organised files in list form.

MacOS 7 - The zenith of the classic operating system

At Apple, the era of operating systems that ran on computers with only a floppy drive ended in 1991 with MacOS 7. The system now required a hard disk and at least 2 MB RAM. Since many old Macintosh computers were still in widespread use, MacOS 6 remained in use for a long time and was still supported by newer software. Version 7 introduced some changes in the way the system worked. Perhaps more of a minor point was the introduction of labels. Files could now be marked with a label such as “ready” or “critical”. This feature was particularly practical on systems with colour monitors, because labels were linked to colours. If a file was labelled “hot”, for example, its icon was coloured red and thus stood out visually from the others. Since MacOS could now safely assume that a hard disk was available, some changes were made to the way the system worked. One of these changes concerned the deletion of files. Up to this point, the MacOS recycle bin functioned in such a way that a deleted file was initially only marked for deletion and henceforth no longer displayed at its location in the file system. The actual deletion only happened when the disk was ejected or when the desktop process itself was terminated, i.e. in single-tasking mode when another programme was loaded, in multitasking mode at the latest when the computer was shut down. As of MacOS 7, the recycle bin had a fixed location on the system hard disk. Its contents now remained there until the bin was explicitly emptied.

Version 7 of the Mac operating system - Screenshot: Marcin Wichary, GUIdebook

One quite profound change, which most users probably did not notice directly, concerned the functioning of the desktop itself. Up to and including MacOS 6, the desktop functioned as described at the beginning. Files stored on the desktop were displayed there, but their actual storage location was on an inserted floppy disk or on an attached or built-in hard disk. Files could be dragged to the desktop in the Finder and put back from there via “Put Away”. If you saved a file from within an application, you could save it to the hard disk or to a floppy disk, but you could not put it directly on the desktop. This step had to be taken later in the Finder. From MacOS 7, it was now possible to create files and folders directly on the desktop and save them there. For these files and folders, “Put Away” no longer worked, but otherwise the function remained intact.

Ten years after the Lisa, the concept of “Stationery Pads” was also reintroduced. Remember: With the Apple Lisa, Stationery Pads were the central concept for creating new files. Instead of saving a file under a new file name in an application, the Lisa created a copy of a template file when you double-clicked on a Stationery Pad, which figuratively means a tear-off pad. The copy could then be opened for editing. Now stationery pads were also available on the Macintosh. However, the implementation of the Stationery Pads under MacOS 7 was considerably simpler. With the Lisa, clicking on a stationery pad created a copy in the same location, right next to the original. With MacOS 7, only the original file was loaded in the linked programme. Here, the “Save” option was then not available, forcing the user to select “Save as”. By the way, the Stationery Pads function still exists today. It can be activated via the “Stationery pad” property in the file information window. Today, however, the implementation is more like that of the Lisa again: a copy is created in the same location.

If we look at the two most important operating systems in 1994, we find two different paradigms: On the Macintosh, a document-based way of working had become established. There was no equivalent to a programme manager. The central component of the operating system was the Finder with its graphic-spatial representation of the file system. With Windows, on the other hand, the applications were in the foreground, which was expressed above all in the dominance of the programme manager. File management took place in an add-on tool as one of the many possible functions of the computer system. In 1995, this difference in working methods was to be drastically reduced, because with Windows 95, Microsoft succeeded in combining the two worlds. Moreover, Windows 95 was technically superior to Apple’s operating system in many respects. Apple was increasingly falling behind in terms of operating systems. Microsoft’s technical lead was even greater with Windows NT than with Windows 95. NT stood for New Technology. The operating system emerged from the initially joint development with IBM of an operating system called OS/2. Windows NT, unlike other Windows versions including Windows 95, 98 and ME, was not an add-on to MS-DOS, but an independent system. The first version of Windows NT in 1993 had the version number 3.1. It was the first Windows to support user accounts and long file names, but otherwise had the same user interface as Windows 3.1. Windows NT 4, released in 1996, was identical to Windows 95 in terms of operation. The technical innovations of Windows NT were manifold. However, discussing them would lead much too far at this point, because they do not concern the system’s user interface and had no influence whatsoever on which user interface worlds were available on the PC.

Windows 95 - The merging of the two worlds

For Windows 95, Microsoft designed a completely new desktop interface. Much of what you know today as the user interface of Windows was introduced in this version. All Windows versions from 1995 onwards supported long file names with spaces and umlauts. Windows now had a desktop with a recycle bin and an icon called “My Computer”. The recycle bin functionality was completely new in Windows. Files were no longer deleted directly but, as on the Macintosh, first moved to the recycle bin, from which they could be retrieved. “My Computer” provided access to the computer’s hard disks and floppy drives. The previous file manager was replaced by Windows Explorer. Its default display was similar to that of MacOS. A folder opened into a window with an icon view that was spatially customisable. If a folder icon within this window was opened by double-clicking, a new window opened showing the contents of this folder. This behaviour could be customised to a greater extent than with Apple. For example, many set up Explorer so that opening a subfolder did not open a new window but simply displayed the subfolder in the same window. This variant became the standard in later Windows versions. If there were many files to organise, the Explorer offered a split view. The directory tree was then displayed in a bar on the left-hand side, while the files of the selected folder were displayed in list form in the main content area. This quite practical view variant of the Explorer has unfortunately been lost over the years and replaced by a display of important folders at this point.

Windows 95 with some windows and opened start menu

Windows 95’s Explorer was not only much more customisable than Apple’s Finder, it also allowed for concurrency, unlike the latter. The technical term for this is “multi-threading”. Basically, multi-threading is almost the same as the more familiar multitasking. However, while multitasking is usually understood to mean the simultaneous execution of several programmes, multi-threading means the simultaneous execution of several actions within one application. The multi-threading was particularly noticeable in long-running operations. For example, if one copied many files, both Windows 95 and MacOS displayed windows showing the progress of the copying process and allowing it to be cancelled. With MacOS, you then had to either wait for the copying to complete or cancel it before you could continue working with the Finder. Under Windows 95, on the other hand, Explorer and desktop remained ready for work even during the copy operation. Copying completed in the background while the user could attend to other tasks.
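The principle can be illustrated with a small sketch. It has, of course, nothing to do with the actual implementation in Windows 95; it merely shows the general idea using POSIX threads in C: a long-running “copy” job runs in a worker thread, while the main thread, standing in for the user interface, remains responsive the whole time.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Simulated long-running copy job, standing in for the
       background file copy of the Explorer.                          */
    static void *copy_job(void *arg)
    {
        for (int i = 1; i <= 5; i++) {
            sleep(1);                            /* pretend to copy a chunk */
            printf("copy: %d%% done\n", i * 20);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, copy_job, NULL);

        /* The "user interface" keeps responding while the copy runs. */
        for (int i = 0; i < 5; i++) {
            printf("ui: still accepting input\n");
            sleep(1);
        }

        pthread_join(worker, NULL);
        return 0;
    }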

With Explorer, Microsoft caught up with Apple’s operating system in terms of desktop and file handling and was even ahead of it in some areas. From now on, Windows could be operated in a document-centric way, just like MacOS. While under Windows 3.1 one still opened Microsoft Word and then selected “Open” in the file menu to select a file, under Windows 95 it became customary to navigate to the folder with the desired file in the Explorer and to open the file in the appropriate programme by double-clicking. The previous possibility of starting an application directly was retained in Windows 95. However, the selection of the application was no longer done from a screen-filling programme manager, but from the newly invented start menu, which, in addition to accessing the system settings and the newly introduced file search, allowed access to the installed applications in a hierarchically organised menu. The illustration above shows the opened start menu with the simple word processor WordPad selected. The start menu was part of the likewise newly introduced taskbar at the bottom of the screen. With this, Windows got a new user interface for switching between the running programmes and windows. In previous Windows versions, this was only possible by means of the key combination “ALT-Tab”, which still works today in Windows (and in the meantime also as “CMD-Tab” under MacOS). The disadvantage of this method is that the list of running applications is not permanently visible and only those who know the key combination have any chance of switching applications without having to constantly minimise or move windows. With the taskbar, there was now a piece of the user interface that permanently showed Windows users which programmes were running and allowed switching between applications with a single click.

With Windows 95, the world of use changed completely for Windows users. One might argue about some of the implementation details and complain about the configuration madness inherited from MS-DOS, but from the point of view of user interfaces, Microsoft had achieved a great success. The document-centred desktop world that existed on the Macintosh was interwoven with the programme-based way of working that was common on Windows in an astonishingly seamless way. Compared to MacOS, Windows now had a more potent file manager in the form of Explorer and a much better task switcher in the form of the taskbar. Apple came under pressure from Windows 95, because the system was not only at least on the same level in terms of the user interface, but its system structure was also more solid than that of MacOS, onto whose original single-tasking core Apple had always “tinkered” new functionalities and which therefore carried many legacy burdens. MacOS, for example, had no memory protection. If several programmes run at the same time, each of these programmes occupies its own area in the working memory. Under MacOS, these areas were not protected from one another. A running programme could read and also change the memory contents of the other programmes. This was not only very unsafe, but also crash-prone. A faulty programme could easily take down other programmes or even the whole operating system with it. A new development was needed! However, the development of a successor for the classic MacOS was delayed again and again. For Microsoft, on the other hand, the user interface design introduced with Windows 95 remained the default for the next decade. The following Windows versions NT 4, 98, ME, 2000 and XP were each visually adapted to the spirit of the times, but the basic elements of the user interface - desktop, Explorer, taskbar and start menu - remained almost unchanged.

In the following years, Apple tried to develop a new operating system called “Copland”, but ran into major internal difficulties. Forced to continue with the old operating system, Apple released versions 8, 8.5 and 9 of MacOS. Apple mainly made many visual changes here and integrated some components of the failed new development into the old operating system. From 1997, the Finder also supported concurrency, so that file operations no longer blocked further work in the Finder. With MacOS 8.5, Apple integrated the search software “Sherlock” into the operating system. This software not only allowed local searches, but also searched internet databases via interfaces. MacOS 9 brought support for multiple users on the same Macintosh. Microsoft had already provided for this in NT 3.1 from 1993. Windows 95 also supported multiple user accounts on the same computer, albeit in a limited way. Apart from these changes, the old MacOS stayed around much longer than Apple had planned. Together with a confusing product policy, it brought Apple to the brink of bankruptcy in 1997. The company saved itself by buying NeXT, a company founded by Apple co-founder Steve Jobs. Part of this purchase was the operating system NeXTSTEP, an innovative, graphical operating system based on Unix. NeXTSTEP was ported to the Macintosh architecture. Some elements of the NeXTSTEP user interface, such as a central dock, were adopted in the newly designed interface called Aqua, which was otherwise strongly oriented towards the way the previous MacOS worked. The new operating system, which from a purely technical point of view had hardly anything to do with the previous version, was given the version number 10, which from then on was always written as X. Apple did not go beyond 10 until 2020, because since version 10.0 it had only counted up after the decimal point. Only with the version codenamed “Big Sur” did Apple raise the version number to 11.

MacOS X (10) - MacOS reinvented

MacOS 10.0 with Aqua interface - Screenshot: Marcin Wichary, GUIdebook

MacOS 10.0 from 2001 looked very similar to the previous operating system. However, there were significant differences in the user interface. Since the underpinning of the operating system was now Unix, there was for the first time a command line that could be called up in a terminal programme. The most striking new element of the operating system was not the command line, however, but the dock at the bottom of the screen inherited from NextStep. To this day, frequently used applications are permanently stored in the Dock on MacOS. It also shows which programmes are currently active and contains an area where minimised windows can be stored. The Finder's menu bar could hold shortcuts to folders and files and was thus available to the user as a quick-access area. In the illustration above, the folders Computer, Home, Favorites and Applications are located here. As of MacOS 10.3, this function was moved to a bar on the left side of a Finder window, where it can still be found in current MacOS versions. The desktop was now, as with Windows, a dedicated folder in the user directory, which was simply displayed as the large background area behind all windows. Files could be moved or copied to the desktop. Unfortunately, the practical "Put Away" command was no longer available.

With MacOS 10, Apple was basically back on par with Microsoft. To be honest, however, it must be admitted that versions 10.0 and 10.1 still had to contend with many "teething troubles" and did not perform well on many of the Macs in use at the time. Even with new devices, Apple initially shipped both systems, which led many users to continue to rely on the old familiar MacOS 9 and its extensive software library. In the end, however, the new versions prevailed, because not only were the systems more stable after the initial problems had been eliminated, the Finder was also considerably improved and the Dock proved to be a great strength of the system. At the same time, the simplicity of operation of earlier systems was maintained. However, Apple not only caught up, but also went further than Microsoft in some areas, for example with the new central application called "Preview". Preview could display a wide variety of files without the associated application programme even having to be started. This function was very practical and useful, because in many cases files only need to be viewed and not edited. The preview function in MacOS has therefore been constantly improved over the years.

Microsoft Windows XP

Also in 2001, Microsoft’s popular Windows XP appeared. Due to problems with the development of a successor and the poor reputation of Windows Vista, which was released in 2007, Windows XP effectively remained the current version of Windows until 2009. With Windows XP, Microsoft brought together the professional NT line and the classic, MS-DOS-based Windows line. Windows XP was no longer a DOS extension, but a complete operating system. In terms of user interface, Windows XP continued in the tradition of Windows 95, but had a standard look that made the interface very colourful. Together with the default background image of a hilly meadow landscape, this earned the system the nickname “Teletubby Windows”.

In the following years, MacOS and Windows gradually became more and more similar. As of 2003, MacOS 10.3 offered a convenient function called "Exposé" for quickly getting an overview of all open windows. In MacOS 10.4, which was released in 2005, Apple made search and its results a central element of the operating system. An index system called "Spotlight" was introduced, and a search field was added to the Finder windows and the system menu bar. Search also became more central in Windows. As of Windows Vista (2007), a search box was prominently located in the Start menu; in Windows 10, it is permanently visible in the taskbar. Also in 2007, MacOS 10.5 again reduced the need to open applications. A Quickview function was introduced that directly displayed a marked file by pressing the space bar and made it disappear again by pressing the space bar again. After the failed Windows version Vista, another Windows version appeared in 2009 with Windows 7, which became very popular. Probably the most striking innovation of this system concerned the taskbar, which now became a combination of task switcher and quick launch bar, just like the Dock of MacOS. In response to Apple's Exposé, a quick view of the windows of an open application could be displayed in the taskbar with the Aero-Peek function. The improvements in window handling were also helpful. With the Aero-Snap function, 24 years after Windows 1.0 there was again a function to arrange windows automatically. If you dragged a window to the right or left edge of the screen, it automatically took up half the screen. Under Windows 10, this function was optimised again, so that three-window arrangements are now also possible.

MacOS 10.7 - Implicit saving causes confusion

In 2011, Apple introduced a change in MacOS 10.7 that did not win it only friends, but which, in terms of the view of user interfaces presented here, was very sensible and long overdue. The way files were saved was changed so that explicit saving was no longer necessary. Actually, it is surprising that loading and saving on MacOS still worked exactly the same in 2011 as it did in 1984, because the reasons for designing it that way back then had long since ceased to exist. At the beginning of the chapter I explained that the hardware limitations of the Macintosh meant that users had to explicitly save edited files. This explicit saving was still common, although with hard disk-based systems and ample memory there was and is actually no longer any need for it. It would have been perfectly feasible, let's say in 1993, to save automatically on a regular basis instead of imposing this burden on the user.

Implicit, permanent saving would have had great advantages for computer users, because having to save explicitly is always a major source of error. You too have certainly experienced working on something for a long time without saving, and then disaster struck, whether the programme crashed, the power went out, or you inadvertently gave the wrong answer to the question of whether you wanted to discard or save the changes. However, having to save explicitly is not only a problem for negligent or scatterbrained computer users. It also contradicts the basic idea conveyed by the user interfaces. The desktop metaphor, which has always existed with MacOS and since 1995 with Windows, conveys the illusion that a document that exists as an icon can be enlarged into a window and edited in it. The world generated by the system is actually meant to ensure that the computer user no longer needs to be aware of technical operations such as loading hard disk contents into the main memory. In the world of the user interface, one does not manipulate memory contents, but edits the elements of a document on the screen. However, if one uses operations such as "Load", "Save" and "Save as", as was the case with the Macintosh of 1984, the illusion breaks. Users now need to know the difference between hard disk and working memory and understand that document contents are copied from the hard disk into working memory and from there back to the hard disk. This knowledge and these actions have nothing at all to do with the object world presented, but with the technical machinery behind it, and should actually remain hidden from the user. Unfortunately, explicit saving was carried over from one operating system generation to the next, both on the Mac and in Microsoft's Windows.

After 27 years, Apple changed this basic way of working with its operating system. An open document was now saved continuously. Explicit saving was no longer necessary. One opened a file, edited it and closed it again. It was always saved automatically and without user interaction. The new paradigm was actually a much more consistent implementation of the object world of a document-based user interface. However, the change of this very basic function brought its own problems with it. For one thing, not all programmes worked according to the new paradigm. The Mac versions of Microsoft Office and LibreOffice, for example, still work in the old way to this day (2021). An even bigger problem, however, arose primarily from the procedures learned over decades, which could now lead to a loss of data. An example makes this clear: You intend to write a letter. To make your work a little easier, you do this by revising an existing letter and saving it under a new name. For almost thirty years, this was done by opening the old letter in the word processor, making the necessary adjustments and then saving the text as a new file via "Save as". If you now do this with a programme that uses Apple's new save paradigm, you will continually change the original letter, the contents of which will then be lost, and you won't find a "Save as" at all18. The correct sequence of actions is now once again the same as with the Xerox Star: the original letter is first duplicated, the duplicate renamed and then opened for customisation. Apple certainly noticed the problem of accidental revisions. Fortunately, at the same time Apple introduced file versioning, which can be used to restore the original letter as it was before the accidental change. Many accidental overwrites were also prevented by the system automatically applying a special write protection to files that had not been changed for a long time. If you opened such a file and tried to change its contents, you were presented with the choice of unlocking the file or duplicating it and editing the duplicate.

With the revision of the save function, Apple caused some problems for the users of its operating systems, although with the change to implicit saving the developers actually only adapted the Macintosh applications to the way applications on iPhones and iPads had always worked. In Apple's iOS, in Google's Android and even before that in the PDAs, explicit saving, and with it the need to deal with its technical background, never existed, and hardly anyone misses it here either, because in iOS and Android it was always clear that you were editing an object in an app. The mindset of using a programme to load a file into an invisible working memory was never part of this user interface world.

Windows 8 - The Failed Experiment

The Windows 8 tile interface

The change to file saving in MacOS certainly altered the way of thinking about the user interface and its objects. However, it was nothing compared to the changes that Microsoft dared to make in 2012. The conversion from the previous Windows 7 to Windows 8 seemed even more drastic than the one from Windows 3 to Windows 95, and unlike the latter, it was quite a failure.

Windows 8 largely did away with the central element of the user interface that gives the whole system its name, namely the windows. Even the desktop and taskbar were now no longer as central as they had been for the past seventeen years. The start menu was even abolished altogether. In place of all these concepts, Microsoft adopted the user interface developed for the phone operating system Windows Phone and for the game console Xbox. Instead of windows, desktop and start menu, the system now presented itself with large coloured tiles that could display all kinds of information. For example, the news app could display the latest headlines, the mail app the subjects of the latest messages and the calendar app the upcoming appointments directly. If you tapped on a tile or clicked on it, the corresponding application opened. One of the goals of Windows 8 was to equip all of the manufacturer's systems with the same user interface, from phones to tablets, laptops, game consoles and the classic PC on the desktop. The result was that, in effect, a user interface adapted to touch operation on small devices was declared the standard for all target systems. The break was radical.

The Windows 8 applications, at least those that were newly developed for the system, looked different from previous Windows applications and were also operated differently. Windows, contrary to what its name suggests, no longer had windows. From now on, programmes no longer had a title bar or a visible, central menu. Previously central elements of the user interface were no longer visible. The standard menus of an application only appeared when they were "slid in" by touch gesture or when the right mouse button was clicked within the application. They also no longer appeared in the form of a menu bar, but as a wide bar at the edge of the screen. Even more than the menus, the user interface for quitting programmes was downright hidden. There was no longer any visible indication of how to end a programme. The established X button no longer existed. Instead, you had to grab the programme at the top edge of the screen and drag it down. At this point I will spare myself the explanation of how it was possible from then on to switch between programmes, display the counterpart to the taskbar or shut down the system. You can imagine that all this became similarly complicated. The problem was not so much that things no longer worked in the usual ways. The transition from Windows 3.11 to Windows 95 showed that users can adapt quickly. What was really problematic was that there was no longer any indication visible on the screen of which operations were possible, or that they were offered at all.

The desktop did not disappear completely under Windows 8. Many users who used this Windows version in practice will probably even have seen it almost constantly, because whenever one used a classic Windows application - and this included Microsoft Office and the popular Firefox browser - Windows 8 switched to the desktop, which was equipped as always with a My Computer icon and a taskbar that also included the quick-launch function for programmes introduced under Windows 7. However, there was no start button and no start menu any more. If you were on the desktop, Windows worked almost as usual. Applications appeared in windows and had the typical minimise, maximise and close buttons. However, only the classic Windows applications worked like this. If you started a modern, new application, which was also possible from the desktop, you switched back to full-screen mode. Windows 8 therefore felt like using two operating systems at the same time: on the one hand, one with a desktop, windows and taskbar; on the other, one with tiles, full-screen apps and lots of gesture interaction. The system operated one way one moment and quite differently the next. The feeling of being "divided in two" was reinforced by the fact that the two parts were hardly connected. The new task switcher of Windows 8, for example, also showed the applications running on the desktop and allowed switching to them, but the taskbar of the desktop interface completely ignored the modern applications and only showed the classic Windows applications.

Windows 8 was a fiasco! The attempt to merge the user worlds of classic Windows with those of smartphones and tablets resulted more in an absurd balancing act than in real integration. Microsoft followed up with Windows 8.1 in 2013, which undid some of the most annoying changes of Windows 8. For example, it was now possible to set the system to display the desktop immediately after start-up, and the reintroduction of the Start button in the taskbar allowed users of desktops and laptops to easily access the application overview with the mouse or trackpad again. However, the changes were no longer of any use to Microsoft. Windows 8 and 8.1 were almost completely ignored by companies and little used by private users either. Windows 7 remained the de facto current Windows operating system in use until the introduction of Windows 10. There never was a Windows with version number 9. Probably also to mark the departure from the unloved Windows 8, Microsoft jumped straight from version 8 to Windows 10 in 2015.

Windows 10 was also supposed to work just as well on classic desktop computers and laptops as on tablets and phones. This time, however, Microsoft handled it more skilfully. Instead of providing a single user interface for both types of interaction as in Windows 8, there were now two modes: In desktop mode, the user interface is very similar to that of Windows 7. The tiles have been banished to the Start menu, which was reintroduced with this version of Windows. All applications, whether modern or classic, reappeared in windows and had normal window controls to minimise, maximise or close. If you use Windows in tablet mode, on the other hand, the situation is completely different: the central user interface element is the tile screen, which is, however, less colourful than under Windows 8. All applications, whether modern or classic, appear in full screen in tablet mode, but the window controls remain the same. Programmes can therefore be closed by clicking on the X in the upper right corner. Not only does the system as a whole adapt to the input mode, the apps optimised for Windows 10 can also recognise the current mode and use a touch-optimised user interface in tablet mode. In Explorer, for example, more space is left between the entries of a file list in tablet mode to prevent accidentally tapping on the wrong file.

Conclusion

The two remaining major graphical user interfaces from Apple and Microsoft have, apart from the outlier Windows 8, converged greatly over the years. There are differences in some details, but on the whole, users of the systems face very similar object worlds that can be manipulated and managed in very similar ways. If we look at the above outline again in retrospect, we see less revolution than many small evolutionary steps that over the years led to user interfaces that made it less and less necessary to concern oneself with the technical details of the computer systems. However, the two operating systems started out with completely different concepts, as the following overview shows.

              Start Applications      Switch Applications    Open Documents
MS-DOS        command line            -                      -
Lisa/Star     -                                              directly via document icon
Windows 3     Programme Manager       -                      -
MacOS <10     -                                              Finder
Windows 95    Start Menu              Taskbar                Explorer
MacOS 10      Dock                                           Finder
Windows 7     Start Menu + Taskbar    Taskbar                Explorer

Under DOS, as in every command-line oriented operating system I know, you had to explicitly start the programme with which you wanted to edit a file. In the user interface world of Lisa and Star, on the other hand, the document held the central position. Starting applications directly was not possible at all. These different focuses in the way of working can still be found if you look at the Windows and MacOS versions from the early 1990s. Under Windows 3, the Programme Manager was the central control point of the operating system. Consequently, one used the system by starting a programme rather than opening a document directly. The latter was possible in principle in the file manager, but quite unusual. Under MacOS, on the other hand, there was no central administration of programmes. Programmes could be started explicitly from the Finder, but that was rather unusual here. Rather, you navigated with the Finder to a file you wanted to edit and opened it directly. With Windows 95 and MacOS 10, both operating system manufacturers managed to unify the two ways of thinking and working. Start menu, taskbar and Dock provide direct access to applications, while Finder and Explorer are so central to the system that a document-centric way of working is still possible and common.

Incidentally, in the latest versions of the systems, one can identify a new trend that could be summed up as the "abolition of files" and that is certainly promoted by the popularity of the mobile operating systems iOS and Android. There are more and more applications that can display and manage objects that no longer appear directly as files. Apple's iTunes and Photos (formerly iPhoto) applications and the cross-platform note-taking software Evernote are good examples of this new kind of software. Music tracks, videos, photos and text notes can be managed or edited in the applications. Although these objects are located as files on the hard disk of the computer, they are usually no longer directly accessible, because the applications themselves take care of all administrative operations. Not everyone agrees with this development, as is probably the case with every development in the user interface. The user is deprived of some of the flexibility offered by free file management. In turn, using the computer is simpler and less technical, as people now view and edit photos, videos, text documents or notes instead of generic file objects. Some people speak of a "golden cage" here and complain about the restrictions, while others are only able to use computers at all thanks to such simplifications. I have my opinion on this, but I won't tell you, because there is certainly no such thing as true or false at this point.

Unix and Linux

In my previous accounts of the development of user interfaces, I have mentioned the Unix operating system now and then, but by and large ignored it. In fact, I had originally intended to leave it at that, because Unix, as a representative of the text-based time-sharing operating systems, came relatively late to the market and, as far as graphical user interfaces are concerned, is not widespread, at least in the area of PC operating systems. It is, however, very often found on computers that are not used directly by end users, but provide services in the background. Such computers are generally called "servers". Almost all the websites you can access today run on a unix-like operating system, but that may not ultimately matter to the user of the computer running the web browser. In the course of researching many of the other chapters of this book, I then decided that it needed its own Unix chapter after all, because on the one hand the operating system has had a strong influence on other operating systems, and on the other hand user interfaces are being developed in the Unix area that are not used very much, but which nevertheless sometimes have such interesting properties that they should not simply be passed over.

From Multics to Unics - The Early History

Let me start with the history of Unix about five years before the system first appeared, because the prehistory of the system explains some of the typical Unix peculiarities. Having arrived in the present in the previous chapter, let's take a leap back to 1964, about ten years before the advent of the first personal computers, to the time when time-sharing technology was being developed and researched on a large scale. In that year, the Massachusetts Institute of Technology (MIT), the General Electric company (GE), then a manufacturer of computers, and Bell Labs began developing a complex time-sharing system. Most important for the history of Unix here is the involvement of the research laboratory Bell Labs. This facility was part of the American telephone monopoly AT&T at the time, a fact that was to become important later in the story.

MIT, GE and Bell Labs began developing a time-sharing operating system called Multiplexed Information and Computing Service, or Multics for short, in 1964. The operating system was extremely modern for the time and equipped with features that were very useful, especially when used in a large computer system with many accessing users. This included, for example, the management of mass storage devices such as hard disks, magnetic tapes or removable disks. The system had interesting concepts in terms of volume management. Today's PC operating systems such as Windows or MacOS manage data media as independent units in most cases. If you connect an external hard disk or a USB stick to the computer, a new data medium appears in the system. You can now access it and copy files to or from it, for example. However, you must organise yourself what is stored on which data carrier. If the hard disk in your computer is too small and you install another one, you have to decide for yourself which files you save on which hard disk. With Multics it was different: there, a newly connected hard disk or an inserted tape simply increased the amount of available mass storage. The system automatically rearranged the stored files so that the frequently used files were on a faster storage medium, such as a fast hard disk, while the less used ones were moved to slower media such as removable disks or magnetic tapes. These disks and tapes did not have to be kept permanently attached. Operators in the data centre could add and remove disks and other media almost at will. If you were to do this on a current system, files and directories would normally disappear or appear as if by magic. Multics' file system, however, made this technical reality of storage media completely invisible to the user. Let's say a user had created a file months earlier but had not used it since. In the meantime, it was copied from the system to a magnetic tape and archived. If our user wanted to access this file again, it was no problem. It was still shown to him as being present. If it was now to be loaded or output, this did not happen immediately. The user was initially asked to wait by a system message, while the operator in the data centre was shown that he should insert a specific magnetic tape. As soon as he did so, the file was available again and, if necessary, was automatically moved back to a faster storage medium. No one had to manually copy the file from the magnetic tape to a work hard drive or anything like that. This task was completely taken over by the system itself.

The Multics system was large and complex, with many features for large time-sharing computing facilities, from the volume management just described to sophisticated user management. Further development dragged on, which did not suit everyone involved in the project. In 1969, Bell Labs therefore withdrew from the project. On the one hand, this freed up resources, but on the other hand it led to the problem that other operating systems now had to be found or developed for the company's own computers. Ken Thompson and Dennis Ritchie, two technicians at Bell Labs, therefore began, simply out of necessity, to develop their own smaller operating system for the PDP-7 and later PDP-11/20 minicomputers from DEC used at Bell Labs. In 197019 the system was initially christened Unics (Uniplexed Information and Computing Service) as a corruption of Multics. The name was later changed to "Unix", although today it is no longer possible to clarify exactly when the renaming took place and who was responsible for it. The early Unix versions were written in machine-oriented assembly language for the PDP-7 and the PDP-11, so they only ran on these computers. From 1973 onwards, this changed, as large parts of the system were rewritten in the C programming language developed by Dennis Ritchie, which is still widely used today. This step not only increased the maintainability of the operating system on the part of the developers, but later also allowed the system to be ported to other hardware architectures.

To each his Unix - The fragmentation

The new operating system Unix was initially only used and developed within AT&T. The system grew naturally in functionality and complexity, from a simple single-tasking operating system still on the PDP-7 to the multi-user, multi-tasking operating system it still is today. The developer Bell Labs was, as already indicated, part of the telephone monopoly AT&T at the time. One of the conditions for AT&T being allowed to keep its monopoly was that the company limited itself to telephone services and communications and did not expand into related areas of the economy. The sale of computers and operating systems was thus taboo for the company. Of course, no one could forbid AT&T to develop an operating system for internal purposes; AT&T was just not allowed to sell such a system. What was allowed, however, and also intensively pursued, was the licensing of the system to interested parties. Computer manufacturers looking for an operating system could purchase a licence from AT&T and thus gain access to the system's programme code, which they could use relatively freely and adapt to their needs. This licensing was particularly attractive for educational institutions and universities, which only had to pay $200 once for the licence. One of these licensees was the University of Berkeley. Computer scientists and students at the university adapted AT&T's system for their research purposes, expanded the network functionalities and improved the performance. The resulting system was called BSD (Berkeley Software Distribution). Successors to this system are still widespread today in the form of FreeBSD and NetBSD.

The fact that Unix licensees could adapt the system to their own needs meant that there was no longer a uniform Unix. There was AT&T’s UNIX, which was usually written in capital letters. There were also many other Unixes, such as the BSD from the University of Berkeley, SunOS from Sun Microsystems, Xenix from Microsoft, AIX from IBM and many more. All these systems were somehow Unix, but they were different. On the one hand, of course, this was the charm of the project, because each Unix was optimised for its intended use; on the other hand, this fragmentation brought the danger of absolute chaos. In order to avoid precisely this, a standardisation was sought from 1985 onwards, which is known today as the “POSIX standard”. The standard describes a kind of minimum requirement for Unix systems that should be the same for all derivatives.

AT&T’s telephone monopoly was broken up in the 1980s. From 1984, the system developed at Bell Labs was therefore also allowed to be sold as a product. From then on, AT&T tightened its licensing policy. New versions of AT&T’s UNIX could no longer be licensed and adapted as cheaply. This step naturally caused a backlash. Unix was widely used at universities, where it was highly valued for its flexibility. The University of Berkeley, with its important Unix derivative BSD, removed all remaining AT&T code from its system and replaced it with functionally identical code of its own that was compatible with the POSIX standard. From 1991, BSD was then available AT&T-free under an open source licence. “Open source” means, put very simply, that anyone can obtain the source code and adapt it to their own projects as they please, without having to pay licence fees. A lot of software works according to this principle today, such as the Firefox browser, the basis of the Chrome browser and the LibreOffice and OpenOffice office programmes.

The open source BSD versions of 1989 and 1991 were not the first attempt to create a Unix without AT&T dependency. As early as 1983, Richard Stallman started the GNU project specifically for this purpose. Stallman’s goal was free software20 - and here in particular a UNIX-compatible operating system without a single line of AT&T’s original UNIX code. The abbreviation GNU therefore also meaningfully stands for “GNU’s Not Unix!”. GNU as a complete operating system is still not finished today. However, a whole host of important tools and programme packages have emerged from the project, including the command line interpreter “bash”, which is widely used in many Unixes and beyond. The whole range of GNU software tools, together with an open-source operating system kernel developed by Linus Torvalds in the early 1990s, formed the basis for the Linux operating system21.

Linux is thus in the Unix tradition, but does that make Linux a Unix? One can only answer the question with a counter-question: Is BSD a Unix, although all program code originating from AT&T was explicitly removed from it? What do you want to use as a criterion? One possibility would be the POSIX standard mentioned above, because it describes unix-like (or “unixoid”) operating systems. If one calls every operating system that completely or at least largely fulfils this standard a Unix, then anyone who calls the current variants of BSD (FreeBSD, OpenBSD) a Unix would also have to consider Linux a Unix. In practice, however, this is often not done. Quite the opposite: if you use a Linux distribution and call it Unix, many computer scientists will reprimand you. Once again we have a dispute here that I cannot resolve, do not want to resolve and fortunately do not have to resolve, because I have chosen the user interface glasses to describe the history of computers, and from this perspective there is no significant difference between a FreeBSD, i.e. a Unix, and a Linux. You can hardly tell the difference between a command-line oriented BSD and a command-line oriented Linux, and the same applies to Unixes or Linuxes with graphical user interfaces such as KDE or GNOME. They look very similar, behave similarly and are therefore, at least in my view, also similar enough to group them both under “Unix”.

The Power of the Command Line - The Unix Shell

I have now told you a lot about the history of the system, but have not yet said a word about what using Unix is actually like. Let’s start with the basic concept, the most important types of objects a Unix user has to deal with. Unix is very simple at this point. The most important objects are files and directories or subdirectories. Today, a subdirectory is often referred to as a “folder”. However, this term is actually anachronistic for Unix, because it originates from the vocabulary of the graphical user interfaces of the 1980s. In 1970, most computer users did not yet imagine their file system as analogous to offices with documents and folders. They were much more likely to see files listed in an index. This index was called a “directory”. Unix is an operating system designed around the file system. The basic concept of Unix file storage, a hierarchical directory structure, was adopted from Multics. “Hierarchical” means that a directory can contain not only files but also further directories in turn. In such a file system, each file can be identified by specifying a path. This starts at a general root and then follows the subdirectories to the file. A / is used as a separator. A typical Unix path is something like /home/User/texts/history.txt.

Interaction with the Unix operating system is classically done with a command line interpreter. In the Unix area, this is usually called a “shell”. At any point in time, users of the shell are located at a certain position in the file system hierarchy and can change this position with the help of the command cd. cd dirname changes to the subdirectory dirname, for example. The string .. stands for the directory one level up, also called the parent directory. cd .. thus changes to the directory that contains the directory you are currently in. This concept, including the cd command and the .. notation for the parent directory, has also been adopted in other operating systems. Microsoft ported it to the second version of its DOS as early as 1983. Only the separator in the path was reversed by Microsoft, which used the backslash instead. In a current Windows command prompt, you can use both \ and / as separators.
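A short, made-up shell session may illustrate this navigation. The directory names are of course invented; the command pwd, which outputs the current position in the file system, exists on practically every Unix:

  $ pwd                    # where am I in the file system?
  /home/User
  $ cd texts               # change into the subdirectory "texts"
  $ pwd
  /home/User/texts
  $ cd ..                  # .. stands for the parent directory
  $ pwd
  /home/User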

In the world of Unix, everything appears as a file. Even devices connected to the computer are represented as files and accordingly have a path in the file system. This has peculiar consequences from a modern user’s point of view. For example, you can have a file printed on a connected printer by copying it to the printer. The command for this was, for example, cp ./test.txt /dev/lp0. With a similar command, however, you can also send the file to the sound card or output it on the screen of another logged-in user, because all these resources are available just like files, i.e. via a file path.
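To make this a little more concrete, here is a small sketch. The device paths are hypothetical examples - what they are actually called differs from system to system, and writing to them requires the corresponding access rights:

  ls -l /dev | head              # devices appear as ordinary entries in the file system
  cp ./test.txt /dev/lp0         # "copy" a text file to the first parallel printer
  echo "Hello!" > /dev/pts/3     # write a greeting onto another user's terminal (hypothetical terminal number)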

Unix is very modular. Almost every component can be replaced by another. For example, there is not one command line interpreter, but several alternatives that can be used. Very common today, for example, is bash from the GNU repertoire. The available commands can also be exchanged and extended because, with a few exceptions, they are not built into the command line interpreter as in MS-DOS, but are available as small executable programmes. Across all Unix derivatives, however different they may be, there is a typical set of small tools and shell commands. These include, for example:

  • ls to output a list of entries in a directory
  • cat to output a file content
  • cp to copy files
  • mv to move or rename files
  • rm to delete files
  • mkdir to create directories
  • cd to change into another directory

Most of these commands do not make sense on their own. With cat, for example, you must at least specify which file you want to have displayed, and if you enter cp, you must at least say which file you want to copy where. These additional details are called “parameters”. Very typical for Unix systems is the often very large number of possible parameters. The command ls, for example, standing alone, outputs all non-hidden files in the current directory as a simple list. If you add the parameter a, i.e. ls -a, hidden files are also displayed. If, on the other hand, ls -l is specified, a more comprehensive list is output, which also includes file size, date of creation, etc. If you want the hidden files to be included in this list, you can combine both with ls -a -l or ls -al. These two parameters are by no means the only possible ones. All parameters of ls can be found in the help system integrated in most Unix systems, the so-called “man pages”. On a BSD Unix, for example, it will say ls [-ABCFGHLOPRSTUW@abcdefghiklmnopqrstuwx1] [file ...]. Each letter stands for a possible parameter.
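The effect of these parameters is best seen in a short example session. The file names and the details of the output are invented here and will of course look different on your system:

  $ ls                      # simple list, hidden files are omitted
  history.txt  letter.txt
  $ ls -a                   # also show hidden files (names starting with a dot)
  .  ..  .profile  history.txt  letter.txt
  $ ls -al                  # combined: long format including hidden files
  drwxr-xr-x  2 user  user   512 Mar  1 10:00 .
  drwxr-xr-x  5 user  user  1024 Mar  1 08:00 ..
  -rw-r--r--  1 user  user   220 Mar  1 09:30 .profile
  -rw-r--r--  1 user  user  1024 Mar  1 10:00 history.txt
  -rw-r--r--  1 user  user   512 Mar  1 09:45 letter.txt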

Software for common operating systems such as Windows or even MS-DOS differs greatly from typical Unix software. A typical DOS or Windows application is interactive, has its own user interface and usually covers a wide range of functions. Under Unix, on the other hand, programmes are usually quite small tools that do one particular task particularly well without further user interaction, but do not go beyond that. These small programmes can be combined by making the output of one programme the input of another. Besides files, data streams are thus also a basic concept of the operating system. To understand this, it helps to go back to the prehistory of the computer. Think of each small Unix tool as a small computer equipped with a punched tape reader and a punched tape puncher. If you now let such a computer run, an input tape is read and an output tape is created in the process. You could now give this output tape to another small computer with another programme as an input tape. If you then run that computer, the previous output is taken as input and again a new output tape is created. This is exactly how Unix tools work in principle. Let us take the command cat ./events.log. This command outputs the file “events.log” from the current directory - in the thought experiment we just made, it creates a punched tape with the contents of the file. We now give this tape as input to another of our small computers. Of course, this also comes with a command, namely grep ERROR. If you enter this command for yourself - try it if you have a MacOS or a Linux at your disposal - little will happen at first. grep expects input, and since you have not provided any input, the tool expects you to type it yourself. For example, if you type “Hello world!” and press Enter, nothing happens except that this text appears on the screen. But if you write “This is an ERROR!” and press Enter, something happens. The line is now on the screen twice. What grep ERROR does is to take those lines of the input that contain the given word, i.e. ERROR, into the output. All other lines are discarded.

Now you already have two small computers. One creates a punched tape with the content of the file “events.log” and the other works as a filter by transferring all input lines containing the word “ERROR” to the output tape and ignoring everything else. Now you still have to tell the system that the output tape of the first computer should become the input tape of the second. For this purpose, Unix uses the vertical line, also called the “pipe character” after the name of the technique. With the | you create a connection, a pipe, between the output of one tool and the input of the other. The resulting command cat ./events.log|grep ERROR thus outputs all lines from the file “events.log” that contain the word “ERROR”. With a related mechanism, redirection, you can ensure that this output does not simply end up on the screen, but is written to a file. This is done with the “greater than” sign followed by the file name. The command cat ./events.log|grep ERROR>errors.log thus creates a file “errors.log” containing the filtered lines of the file “events.log”.
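Once the principle is understood, further tools can simply be chained onto such a pipeline. A small, made-up sketch - wc -l counts lines, sort sorts them and uniq -c counts how often identical lines occur:

  cat ./events.log | grep ERROR | wc -l              # how many error lines are there?
  cat ./events.log | grep ERROR | sort | uniq -c     # which error lines occur, and how often?
  cat ./events.log | grep ERROR > errors.log         # write the filtered lines to a new file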

With piping, once you understand it, you can combine the many little tools of Unix in a clever way. However, some tasks are more complex than could be handled by simply executing several programmes one after the other while passing on the output. As an example, let us consider the problem of renaming files. Surprisingly, Unix is not particularly well equipped at this point. Imagine you have a folder full of image files called “image001.jpg”, “image002.jpg” and so on. You now want to rename these files so that “photo001.jpg”, “photo002.jpg” and so on come out. If you are a Windows user and use the command prompt, this is easily done with a single command: ren image*.* photo*.*. Unfortunately, it is not so easy under Unix. Why this is so has to do with how the asterisks, which stand for arbitrary characters, are expanded by the shell. I’ll spare myself a detailed explanation at this point, because I’m only using renaming as an example and don’t assume that you want to use my book to delve deeper into the secrets of Unix shells.

In order to rename files under Unix, you can write a small programme, a shell script. It could look like the following, for example. The line numbers printed here are for the following description only. They are not part of the shell script.

 1: #!/bin/bash
 2: # renames.sh
 3: # basic file renamer
 4:
 5: criteria=$1
 6: re_match=$2
 7: replace=$3
 8:
 9: for i in $( ls *$criteria* );
10: do
11:     src=$i
12:     tgt=$(echo $i | sed -e "s/$re_match/$replace/")
13:     mv $src $tgt
14: done

Let’s go through the programme briefly: The first three lines are basically just comments that help you quickly see what this shell script does. The first line plays a special role, because it specifies which command line interpreter is to execute this shell script. This is the already mentioned bash shell from the GNU project. Lines 5 to 7 are mainly there for better readability of the programme. Here, the parameters that are given to the script are stored in variables with meaningful names. If a user starts the script with the call renames.sh jpg image photo, jpg is stored in criteria at this point, the variable re_match receives the value image and accordingly photo ends up in the variable replace. The actual programme does not begin until line 9. First look at what is written there inside the parentheses. There the command ls is called, which, as you know by now, is responsible for outputting a directory’s contents. Here in the script, however, the result is not shown on the screen but used directly in the script. In front of the parentheses is for i in. This is one of the most typical structures of any computer programme, namely a so-called “loop”. Everything between do in line 10 and done in line 14 is executed not just once, but several times, exactly as many times as there are entries in the directory listing from the parentheses. The listing is therefore gone through entry by entry. A variable called i contains the current file name in each pass. What happens inside the loop is basically quite simple. In line 11, the original file name, i.e. the one currently taken from the listing, is stored in the variable src. Line 12 determines what the new file name should be. The result is stored in the variable tgt. In line 13, the file is renamed with the help of the command mv.
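Detached from the renaming script, you can try out the structure of such a loop directly in the shell. A small, made-up example, assuming the current directory contains a few .jpg files:

  for i in $( ls *.jpg ); do
      echo "found file: $i"     # the variable i contains one file name per pass
  done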

12: tgt=$(echo $i | sed -e "s/$re_match/$replace/")

Line 12 is tricky, because here the Unix tools interlock once again. Let’s first look at what happens inside the parentheses. First of all, there is the command echo. This command can be used, for example, to output something on the screen in a script, because echo creates exactly that: an echo. If you type echo Hello in a Linux console, Hello is output on the line below. In the script, of course, it is not Hello that is output here, but the content of the variable i, i.e. the file name that is to be adapted. In this case, echo does not write to the screen either, because behind it is the already familiar pipe character. The output thus becomes the input for another Unix tool. This tool is the editor “sed”. “Sed” stands for “stream editor”. It works similarly to the “ed” we discussed in an earlier chapter, but unlike the latter, it knows no concept of lines. A text file in “sed” is simply a long sequence of characters. As a practical editor, “sed” is thus rather useless, but for the purpose of automated processing it is ideal. “Sed” gets the file name just output with echo as its input. Let us assume that this input is image001.jpg. “Sed” was started with the parameter -e. Behind this is a so-called “regular expression”. These strings are commands to find something in a text and, if necessary, to change something in it. The expression given here looks for what is in the variable re_match and replaces it with what is in replace. So in our example, we search for image in image001.jpg and change this occurrence to photo. Normally, the result photo001.jpg would end up back on the screen, but here in the shell script it ends up in the variable tgt instead.
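To round off the example, this is roughly what a session with the script might look like. It assumes that the script has been saved as renames.sh in the directory with the image files and is made executable first:

  $ ls
  image001.jpg  image002.jpg  image003.jpg  renames.sh
  $ chmod +x renames.sh                # make the script executable
  $ ./renames.sh jpg image photo       # criteria=jpg, re_match=image, replace=photo
  $ ls
  photo001.jpg  photo002.jpg  photo003.jpg  renames.sh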

Unix is not the only operating system that supports shell scripts, nor is the technique of piping limited to Unix. Microsoft adopted both techniques as early as 1983 for its second version of DOS. Scripts are known there under the name “batch file”. The term does not have much to do with batch processing from the punch card era. The name is also somewhat misleading in many cases, because such a script can of course be used to process many files or data sets in one go, i.e. as one batch - that is exactly the kind of example you have just seen. However, not every shell script and certainly not every batch file from DOS or Windows fits this description. Often it was simply a matter of a handful of commands that were to be executed one after the other without having to type them again and again, or, especially under DOS, of simple menu structures that many users built for themselves in order to quickly start the programmes of their choice without having to enter commands on the command line.

Piping and shell scripting are very powerful under Unix because of the many small tools. In systems such as DOS or the Windows command prompt, similar techniques are possible, but they are not nearly as widespread. Unix, with its shell, is a very powerful operating system. However, the user interface world of Unix is quite different from the one that users of current operating systems are used to. A teletype-like terminal style still prevails. The only really interactive programmes you need in a classic Unix are the shell itself and an editor. Apart from this editor, there is in principle no need for software in the Unix world that would exploit the spatiality of the screen. While files are the central objects of interaction in the Unix world, piping also requires a data-stream-oriented mindset. It is precisely this way of thinking, which I tried to explain to you above through the metaphor of the many small computers and the punched tape, that is difficult to grasp for users of today’s systems, at least if they are not also programmers, because it is very technical. The genius of the classic Unix interface thus lies not in its simplicity, but in its unrivalled flexibility, because with shell, file system, piping and scripting, you can always put together the many little Unix tools just as you need them.

X-Window System - Unix can be graphical too

The classic Unix interface I have just described can be found on all unixoid operating systems. No matter whether you use a Xenix from the 1980s or start a terminal under an Ubuntu Linux or under MacOS: you always find yourself in the user interface world described above, which is also always quite similar. Of course, the development of graphical user interfaces has not passed the Unix world by. The X-Window system from MIT, which has been available in a stable version since 1987, can be seen as the basis for the wider spread of graphical user interfaces under Unix. A look at computer history reveals earlier graphical interfaces, but these often remained experimental or were limited to individual Unix derivatives. The X-Window technique, on the other hand, was a general standard that has persisted to this day22. Probably because of the similarity in name with Microsoft’s Windows, the X-Window technology is itself sometimes regarded as a graphical user interface. However, this is a mischaracterisation. X-Window provides only a general mechanism that running programmes can use to draw on the screen and receive input from the mouse, keyboard, etc. What this looks like, what the user interface is like, what objects there are, and ultimately even whether there are windows at all, is not the subject of X-Window.

Windows and macOS are more or less monolithic systems. They provide the methods for displaying on the screen in general, offer screen elements such as buttons, input fields etc. and also take care of the management of windows. In addition, there are user interfaces for file management, for starting programmes and for switching between these programmes. With graphical user interfaces under Unix, all these areas are separated. This looks something like this:

  • The most basic level is the so-called X server. It provides applications with capabilities to draw on the screen and receive the user’s mouse and keyboard interactions. The term “server” is somewhat confusing here. A server is usually a computer in the network or a programme on such a computer that provides services that can then be used by means of user interface programmes. These are usually called “clients”. The concept of the X-server turns this on its head a bit. It is the X server that provides general functionalities for spatial-graphical input and output. The software programmes make use of these capabilities and are thus the clients. The user therefore uses the clients by means of an X server.
  • User interfaces are composed of very basic elements such as screen areas, buttons, menus and input fields. So that not every software application has to reinvent the wheel here, developers make use of libraries with such elements. These are called Widget Toolkits.
  • The basic functionality of windows and their management is the responsibility of the Window Manager. The scope of these window managers varies. In any case, they include the possibility of moving windows as well as closing them. Most window managers implement the familiar methods for this with a resizable window frame and a series of buttons for minimising, maximising and closing. Many window managers also provide a task switcher for switching between windows and an application launcher for starting programmes.
  • The combination of Window Manager, Widget Toolkits and such basic programmes as a File Manager and an Application Launcher forms a so-called Desktop Environment.
  • An operating system usually comes with a set of typical applications, some of which are generally available, others of which are specially adapted to the system in question. In the Linux field, these collections of installed software products and management tools are called distributions. Each distribution makes a pre-selection regarding the desktop environment, the included tools and management tools and adapts them according to the philosophy of the distribution, adds own plug-ins, exchanges modules and adapts the appearance and settings.
GNOME 2.2.0 in RedHat 9 from 2004 - Screenshot: Marcin Wichary, GUIdebook

The popular Linux distribution Ubuntu, for example, uses the desktop environment GNOME as of 2021. GNOME’s window manager is called “Mutter” and its file manager is “Nautilus”. GNOME also includes the GTK toolkit, which provides user interface elements. The openSUSE distribution, on the other hand, relies on the desktop environment KDE Plasma with the window manager KWin and the widget toolkit Qt. KDE’s file manager is called “Dolphin”. Linux Mint uses the desktop environment Cinnamon with the file manager Nemo, which is a fork of an earlier version of Nautilus. So you have Linux here three times, but three times it is different, with different tools and with a different user interface. But that is not the end of the confusion, because you can not only exchange the desktop environment, but also use the tools crosswise. So even if you are using a KDE Plasma environment, you can still use GNOME programs as long as you have installed the appropriate toolkits, which the operating system’s management tools usually take care of themselves. So you can also run a Nautilus under KDE or install the sometimes quite extensive KDE applications under Linux Mint. This great flexibility and compatibility is why it is simply not possible to talk about the graphical user interface of Unix and Linux in general. In order to be able to say anything at all, you need to know the distribution, the version and, in the end, actually always the details of all the adjustments that a user has made.

KDE Plasma

A look at the graphical Unix user interfaces can therefore only be exemplary, but these examples can have a lot going for them, because in some cases much more is possible with them than Windows and MacOS have to offer. To prove this, I will show you the Linux distribution Kubuntu 19.03 with the desktop environment KDE Plasma 5.

The distribution Kubuntu 19.03 with the desktop environment KDE Plasma 5

The user interface of KDE Plasma is basically quite similar to the user interface of current Windows versions. There is a start button and a start menu, a task bar and a desktop. The Dolphin file manager looks like a mixture of Apple’s Finder and Microsoft’s Explorer, but it has interesting features such as a split view, which is not available on Windows or the Mac. What is exciting about the user interface is the desktop concept. In Windows and also in current macOS, the desktop is a folder on the hard disk whose only special feature is that its contents, i.e. the files and folders it contains, are displayed as a large background behind all windows. In KDE Plasma this is also possible and in Kubuntu this setting is the default. However, the layout can be switched from “folder view” to “workspace”.

In the desktop view, small active components can be placed in their own areas on the desktop. These mini-programs, known as “widgets”, can perform all kinds of tasks, from displaying the current weather to directly integrating websites on the desktop. If you own an Android phone, you will be familiar with this concept, because Android also makes it possible not only to place icons on the overview screens to launch the desired applications, but also to have objects directly on these screens that display something and can be interacted with. In Windows Vista, too, you could put little programs on your desktop. Widgets alone would therefore not be something that would make me rave about the KDE desktop to you. However, things get exciting when you think a little about the possibilities of a very simple widget, namely the widget that displays the contents of a folder. Above you can see an example of this. Two folder display widgets each display one folder. The upper area shows the contents of the download folder, while the lower area shows the contents of a folder called “Project 1”.

These folder widgets, much more than Windows and macOS, allow you to make your desktop your own workspace. If you use KDE Plasma, you can set up the projects you are working on as separate areas on the desktop, work with the files and even move them back and forth between the folders. If you need another area directly accessible in the meantime, you simply set up another widget. If a project is finished and no longer relevant, you can simply remove the widget. The files are then no longer on the desktop, but are of course still available in the file storage.

The “Activities” function is also very practical when working with a large number of projects. It can be used to set up different desktops for different tasks. Each desktop has its own setup, with its own widgets, background image and also its own open applications. The functionality of offering more than one desktop and thus enabling multiple working contexts is very old, especially in the area of Unix GUIs. In the meantime, similar functions are also available in MacOS and Windows. However, their implementation is very limited in relation to KDE’s “Activities”, because neither under Windows nor under MacOS are you really provided with different desktops with different elements. You can only switch between sets of open windows. This is different with KDE Plasma: there, for example, you can set up a home desktop and a work desktop and adapt to both contexts accordingly without the objects of one context coming into conflict with those of the other. If you need the same elements on more than one of these desktops, this is no problem at all with the widget setup, because no one prevents you from pushing a display widget onto the desktop for each of the same folders on the hard disk.

The number of people using Unix or Linux systems with a graphical user interface at home or at work is vanishingly small, in the low single-digit percentage range. It is all the more astonishing that, as here for example with KDE Plasma, very interesting user interfaces are being developed for these systems, and it is of course all the more regrettable that hardly anyone gets to enjoy them. There are certainly many reasons why this is ultimately the case, about which I can only speculate here. Some of it certainly has to do with the market power of the big companies Microsoft and Apple and the resulting wide distribution of their software infrastructures. In my opinion, however, another important reason is that Unix and Linux, with their fundamental philosophy, are somewhat denying themselves the possibility of success. The openness of the configuration outlined above and the possibility of combining almost anything with anything certainly seem to inspire developers to incorporate innovative new functions. But they also ensure that, as a Linux or Unix user, you feel a little more alone than others when problems arise. If you have a problem with your Windows 10, you are sure to find someone in your circle of acquaintances who can understand the problem and help you. If you use Linux, this becomes more difficult simply because Linux is far less widespread, and even if you find someone who also uses Linux, they probably use a different distribution with a different window manager, different settings and different system tools.

Unix on smartphones?

The fact that Unix and Linux have low penetration in the end-user sector is only true if one considers only the area of desktop operating systems and if one counts macOS not as Unix but as a system in its own right. Things look quite different, however, if you include smartphones and tablets, because Apple's iOS, like macOS, is based on BSD Unix, and Google's Android uses Linux as its operating system under the bonnet. So should I now introduce macOS, Android and iOS to you here as Unix systems? As always, there are several points of view. Mine is that of the user interface worlds. For example, it makes sense, as pointed out in an earlier chapter, to describe Windows as something other than MS-DOS, because it has its very own user interface world. If you tell an experienced computer scientist something about the Windows 3.11 operating system, he will enlighten you that Windows 3.11 - just like Windows 95, 98 and ME - was not a stand-alone operating system, but an extension of MS-DOS. From the point of view of operating system technology, this may be true, but for me it simply doesn't matter. The worlds of use of pure DOS, with only one programme running at a time, a command line and applications in text mode, and of Windows, with several applications running simultaneously, a programme manager and graphic objects that can be spatially manipulated, are quite different. A Windows user does not actually still use DOS - at least not with the user interface glasses on. The same is even more true for Unix, Android and iOS. On the technical level of process and data carrier management, one may be an add-on to the other, but from the user's point of view, they are each separate user interface worlds. Android and iOS are in a completely different tradition than Unix and Linux. They are not the great-grandchildren of an old time-sharing system, but successors to the mobile personal digital assistants of the 1990s, and it is in this context that I will describe them to you in the following chapter.

From the PDA to the Smartphone

The computer story I have told you up to this point has led, among other things, to the desktop PC you have in your home or office, or the laptop you may carry around with you. But these computers make up a rather small part of today’s computer market. Statistics show that about 260 million such PCs were sold in 20191. This compares with a staggering 1.6 billion smartphones and tablets sold in the same year2. Among these, smartphones of course account for the largest share, but even the number of tablet computers sold alone is larger than that of desktop PCs and laptops combined.

The technical basis of today's operating systems for tablets and smartphones is Linux (Android) and Unix (iOS and iPadOS). However, the user interfaces of these systems have a completely different character from the Unix and Linux systems with their hierarchical file systems and complex shells and desktop interfaces. Under Unix and Linux, but also under Windows and macOS, file storage plays an extremely important role. The programmes for accessing the file repositories are a central, if not the most important element of the user interface in all these systems. The user worlds of Android, iOS and iPadOS know no such central storage, or offer it at best as an additional option3. In the foreground of these systems are the applications, nowadays called apps. The apps manage their content objects themselves. What does that mean in concrete terms? Let's take Apple's word processor Pages, which is available on the Mac as well as for iPad and iPhone. How do you open a file in Pages? On the Mac, you have at least two options. You can either open the Pages application, then choose "Open" from the "File" menu and select the file, or you can use the Finder file manager, locate the file and open it by double-clicking from there. In both cases, the central file storage plays a major role. The text document appears in this file store as a separate object, independent of the Pages application. If you use Pages on an iPhone or an iPad, however, the first step is the same. You open the app - usually by tapping the app icon on a home screen or in the app overview. Now, however, you do not open an externally available file as above. Rather, Pages itself shows you all the documents it has under its management. You never have to click on "Open" or "Save", but simply select the document you are interested in, edit it and, when you have finished, close the app or switch to another one. Loading and saving happen implicitly without your intervention.
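
The difference between the two usage models can perhaps be made clearer with a small sketch. The following Python snippet is purely illustrative and has nothing to do with Apple's actual code; all class and method names are invented. One toy "editor" follows the PC model with explicit loading and saving of named files, the other follows the app model, in which the application manages its documents itself and persists every change implicitly.

```python
# Illustrative sketch only: two toy document models, not Apple's actual code.
# All class and method names are invented for this example.

class FileCentricEditor:
    """PC model: the user explicitly loads and saves named files."""
    def __init__(self):
        self.text = ""

    def load(self, path):
        with open(path, encoding="utf-8") as f:
            self.text = f.read()

    def save(self, path):
        with open(path, "w", encoding="utf-8") as f:
            f.write(self.text)


class AppManagedNotes:
    """App model: the app keeps its own documents and saves implicitly."""
    def __init__(self):
        self.store = {}                   # title -> text, managed by the app

    def documents(self):
        return list(self.store)           # the app itself lists what it manages

    def edit(self, title, new_text):
        self.store[title] = new_text      # every change is kept immediately;
                                          # there is no separate "Save" step


# Usage: the PC workflow names a file, the app workflow names a document.
editor = FileCentricEditor()
editor.text = "Draft letter"
editor.save("letter.txt")                 # explicit save into the file system

notes = AppManagedNotes()
notes.edit("Shopping list", "Milk, bread")
print(notes.documents())                  # ['Shopping list']
```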

The fact that smartphone and tablet operating systems and applications operate so differently from classic PC systems is no coincidence. Smartphones and the modern generation of tablet computers with Android or iPadOS, which have developed as a larger, more versatile variant of the smartphone, stand in a very different tradition from that of the PC with its roots in time-sharing systems and the pioneering work of Xerox PARC in the 1970s.

Computers en Route

The history of the computer’s user interfaces, as I have presented to you in this book, has ultimately always been a history of miniaturisation, from mainframes to minicomputers to personal computers. Miniaturisation often meant much more than that the devices became smaller. A modern PC is not a small version of an ENIAC or a Zuse Z3. These are machines with very different characteristics. They are operated quite differently and also used for quite different purposes. Of course, this is not the case with every miniaturisation. If you use a small compact PC the size of a book instead of a desktop PC, there is no really significant difference. The same is broadly true when you look at laptops. Of course, they are much smaller and make it possible to use PCs in situations and places where this would hardly have been possible with a desktop computer, but it is still a miniaturised personal computer that requires a personal computer mindset to operate. In addition to this miniaturisation of the personal computer, however, there has been another line of miniaturisation development that has produced devices that represent their own class of device with their own operation and mindset. You probably also have the temporary end product of this development in your pocket or on the table. I would like to present the line of development of miniaturisation that led to the smartphone in this last section of my book.

A central aspect of today's smartphones is certainly the "apps", i.e. the applications that take centre stage. Each app of a typical smartphone operating system has its own clearly defined function. The files that are so ubiquitous on a PC, stored in a hierarchical file repository, do not in principle appear in the user interface of a smartphone. A note app manages the notes created with it, a music app manages the music, a word processing app manages the documents. Note files, text files or music files do not exist in the user interface world. This may be less flexible, but it also makes the devices much easier to use, which benefits the acceptance of the technology. Many people who say they can't handle a computer use a smartphone as a matter of course and get on quite well with it.

Of course, even before the smartphone, user interfaces were devised to enable non-computer-savvy users in particular to benefit from computer technology. For example, in the chapter "Desks and windows" you learned about the Xerox Star, whose target group was employees with office jobs. However, the way of using the Star that the Xerox developers introduced at that time, the desktop and the document-centred work, was the absolute opposite of the app idea of the smartphones. The applications were not in the foreground for the Star; they did not even appear in its user interface world. The typical way of using the Apple II came much closer to today's app-based user interfaces. Although you could start the Apple II with a DOS and then use the command line to manage the files on floppy disks, you could also avoid the command line altogether if you did not want to use it. If one wanted to use the spreadsheet VisiCalc, for example, one did not have to enter any commands, but only insert the appropriate diskette and start the computer. VisiCalc then started automatically. Every software manufacturer worth mentioning did the same. Software was always delivered on a floppy disk that was set up so that the computer could be started directly with it. So, by selecting the appropriate programme disk, one virtually chose one's "app", to use the modern expression. The object selection, i.e. the loading, saving or even deleting of files, was taken over by each of these programmes. I can't prove it, of course, but it seems very plausible to me that, apart from the existence of programmes like VisiCalc, it was precisely this design decision of automatically booting into an application that was one of the reasons why the Apple II became attractive to the business world, because it made it very easy to use an Apple II for spreadsheets or word processing without having to have major computer skills. One did not have to bother with a somehow cryptic operating system, but only insert the diskette with the right programme. Everything else was taken care of by the user interface of the application programme itself, which, unlike DOS, knew how to exploit the potential of the spatial interface.

Mobile Personal Computer

Users from commerce and business were among the first to see sense in using mobile computers. For a travelling salesman, for example, it was of great value to be able to do typical office work on the spot, in the hotel or even in the car if necessary. Mobile personal computers were therefore produced relatively early on. These computers basically ran the same operating systems as the “big” personal computers. For example, the Osborne 1, which I will introduce to you in a moment, ran CP/M, which was widespread at the time and which you got to know in the chapter on the Altair 8800. Users could insert a CP/M diskette into the Osborne’s floppy drive and then use the keyboard and tiny screen to operate the command line, but just as with the Apple II, you could also largely avoid this rather technical interface by launching a program directly.

Osborne 1 (1981)

The Osborne 1 was, as far as I know, the first significant mobile computer for the office worker target group. Although there were already some mobile devices, such as the IBM 5100 from 1975 and the HP85 from 1980, these were not yet computers that could be equipped with standard software like a PC. The IBM 5100 could execute programs in the programming languages BASIC and APL, which were also widely used on IBM's large computers, among others. The HP85 also offered the user a BASIC environment. Its focus was primarily on the scientific field, with a built-in column printer4 and the possibility to plot functions on the screen. Neither the IBM 5100 nor the HP85 were designed to write letters on the go, create data sheets in a spreadsheet or update an address database. The Osborne, on the other hand, was designed precisely for this.

Osborne 1 from 1981 - Image: Bilby (CC BY 3.0), colour corrected
Osborne 1 from 1981 - Image: Bilby (CC BY 3.0), colour corrected

The Osborne 1 appeared in 1981 shortly before the introduction of the IBM PC. It was a fully equipped computer that could use the established CP/M operating system. The computer was equipped with an Intel 8080-compatible Zilog Z80 CPU, which was also used in many home computers of the 1980s. Osborne equipped the computer with 64 KB of memory, which was the maximum possible with CP/M and the computer architecture. The computer had a fully equipped keyboard which, unlike many CP/M computers, even had arrow keys. Since it was a computer for office work, there was also a number pad. The keyboard was also the lid of the computer. Two floppy disk drives were also part of the equipment. The weak point of the computer was certainly its screen. In the middle of the large, bulky device was a ridiculously small 5-inch monochrome tube monitor that could display 24 lines of text of 52 characters each. Today's smartphone displays are usually much larger than this screen. In stationary operation, one did not have to use the built-in screen, because the unit also allowed the connection of a larger monitor. The Osborne 1 weighed about eleven kilograms and was thus anything but a lightweight. It was a mobile computer, but at that time this was understood to mean something quite different from what we think of as a mobile computer today. No one wanted to have this computer with them all the time like a tablet or notebook, but a travelling salesman could easily transport the Osborne in the car, take it with him to a hotel or perhaps place it on a desk at a customer's site and work directly on the spot. The computer did not have a battery, so one was dependent on a 220 volt or 110 volt power supply.

Central to the computer was the software that came with it. Apart from Microsoft's MBASIC, which was mostly delivered with CP/M, these were the spreadsheet SuperCalc, the word processor WordStar and the database dBase. Later, three accounting programmes were also included in the package. The fact that software was supplied with the computer was not unimportant with CP/M systems. The operating system was a set standard, so in principle software that ran on one CP/M system could also be made to run on another. In practice, however, there was no uniform disk format, so the programmes stored on Altair disks, for example, could not be used on the Osborne without a great deal of extra effort, and the systems also often differed greatly in their terminal capabilities. In the chapter "Terminals instead of teleprinters" you read about the advantages of screen terminals over the old teleprinters. They made it possible to edit objects directly on the screen and update them on the spot. For this to be feasible, it had to be possible to control, delete and overwrite the individual characters directly on the screen. This was done by sending special characters called "control codes" to the terminal. However, different terminals unfortunately used completely different control codes. With the Osborne, there was also the peculiar line length of 52 characters. If one used software here that assumed 80 characters, the representation would have been incomplete at best, and completely chaotic at worst.
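
To give you a feeling for how differently terminals could behave, here is a small Python sketch. It contrasts two well-documented terminal dialects, ANSI/VT100 and the Lear Siegler ADM-3A; the Osborne's own control codes are not reproduced here, the two dialects merely stand in as examples of the kind of incompatibility described above.

```python
# Illustrative sketch: the same two screen operations expressed in two
# different terminal dialects. A program written for one dialect produced
# garbage on a terminal that expected the other.

def clear_screen(dialect):
    if dialect == "vt100":          # ANSI/VT100: ESC [ 2 J
        return "\x1b[2J"
    if dialect == "adm3a":          # Lear Siegler ADM-3A: a single Ctrl-Z
        return "\x1a"
    raise ValueError("unknown terminal dialect")

def move_cursor(dialect, row, col):
    """Position the cursor; rows and columns are counted from 1."""
    if dialect == "vt100":          # ESC [ row ; col H
        return f"\x1b[{row};{col}H"
    if dialect == "adm3a":          # ESC = followed by row and column, offset by 32
        return "\x1b=" + chr(row - 1 + 32) + chr(col - 1 + 32)
    raise ValueError("unknown terminal dialect")

# The very same intention - clear the screen, put the cursor at row 5,
# column 10 - results in completely different byte sequences:
for dialect in ("vt100", "adm3a"):
    print(dialect, repr(clear_screen(dialect) + move_cursor(dialect, 5, 10)))
```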

The reason why I see the Osborne as one of the first steps towards the emergence of a new kind of mobile computer is, on the one hand, the fairly typical selection of software, which we will encounter again on almost all the devices that follow. On the other hand, it is because of the way the Osborne company delivered the programmes, because, as with the Apple II, no one who didn't want to had to bother with a command line. If you wanted to edit a data sheet, all you had to do was insert the diskette with the corresponding "app" and start the computer. Each diskette was set up to contain a stripped-down operating system, i.e. it could be used to start the computer and then launch the respective application programme directly. This made the Osborne very easy to use, much easier than, for example, the 1983 Compaq Portable, which had a much larger screen, more memory and was equipped with MS-DOS. Although this device had a much larger software base open to it, which could also be copied from one computer to another thanks to established standards, the user of the Compaq had to deal with the MS-DOS operating system more than he might have liked. The Compaq Portable was not a step towards the PDA and the smartphone, but rather towards the notebook or laptop that nowadays runs Windows 10 or macOS.

EPSON PX-8 (1984)

The Osborne 1 was a mobile computer in the sense that it did not necessarily have to be stationary in the office or on the desk at home. But it was not a device to "always have with you". Such devices, however, also already existed in the 1980s. In the picture you see the 1984 PX-8 from EPSON. When closed, the unit is about the thickness of a reference book, so the computer easily finds room in a briefcase. If you put the computer on the table in front of you, you could fold out a small display. A drive for microcassettes then also came to light. Such cassettes were used in answering machines and dictation machines in the 1980s, 1990s and even in the early 2000s. The equipment of the PX-8 was quite similar to that of the Osborne, despite the big difference in appearance. Here, too, a Z80 processor was installed and the memory configuration of 64 KB was identical. Of course, it was not possible to install a picture tube in a compact, mobile device, as was the case with the Osborne or a Compaq Portable. Instead, a liquid crystal display was used. It produced a sharp, high-contrast image and could display 8 lines of text of 80 characters each. Eight lines is not much, but the display was already quite suitable for word processing on the go, because eight lines are quite enough to see a little of the text context, and the line length of 80 characters made it possible to write text without having to move the text section horizontally.

Epson PX-8
Epson PX-8

EPSON also relied on the established CP/M operating system for the PX-8. Here, however, it did not have to be loaded from a diskette or even from the microcassette, but was permanently built into the computer as a ROM module (Read-Only Memory). In addition, two further slots could be equipped with fixed memory modules. Customers could choose, among others, the word processor Portable WordStar, the spreadsheet Portable Calc, the appointment management Personal Scheduler or the database dBase II. However, only two of these software modules could be used at any one time.

Users of the PX-8 usually did not need to deal with the operating system. They could comfortably start the application programmes via a menu system. For the most part, the application programmes took care of file management themselves. However, users could not entirely avoid the CP/M file system with its drive letters. Files could be stored on the microcassettes or, if an external drive was connected, also on floppy disks. In many cases, however, this was not necessary, because even without external storage media, the computer could store files internally on a battery-buffered RAM disk. RAM disk means that a part of the working memory is set aside and used as if it were a data medium. The main advantage of a RAM disk is that it is very fast. Its major disadvantages, however, are that the memory space is very limited, depending on the computer's equipment, and that the contents of the RAM disk are lost if the power supply is interrupted. Users of the PX-8 were therefore well advised to regularly back up their data to more persistent storage media.
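
The principle of the RAM disk can be sketched in a few lines of Python. This is only an illustration of the idea, not of the PX-8's actual implementation; the class name and the capacity figure are invented.

```python
# Minimal sketch of the RAM-disk idea: a slice of working memory is treated
# as if it were a drive. All names and numbers are invented for illustration.

class RamDisk:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.files = {}                        # filename -> bytes, held in RAM

    def used(self):
        return sum(len(data) for data in self.files.values())

    def write(self, name, data):
        free = self.capacity - self.used() + len(self.files.get(name, b""))
        if len(data) > free:
            raise IOError("RAM disk full")     # space is scarce, as on the PX-8
        self.files[name] = data                # very fast: no mechanical drive involved

    def read(self, name):
        return self.files[name]


disk = RamDisk(capacity_bytes=24 * 1024)       # a couple of dozen kilobytes
disk.write("LETTER.TXT", b"Dear Sir or Madam, ...")
print(disk.read("LETTER.TXT"))
# When the buffer battery runs out, everything held only in RAM is gone -
# hence the advice to back up regularly to more persistent media.
```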

Pocket computer

In the tradition of simple, mobile computers like the PX-8 are the pocket computers of the late 1980s. These computers, sometimes known as "palmtop PCs", were quite similar to the PX-8 in terms of functionality, but were scaled down to fit into a slightly larger jacket pocket.

Atari Portfolio (1989)

The first device of this kind was the Atari Portfolio shown here, which Atari itself described as a “16-bit personal computer”. It was launched in 1989 and cost $399.95 at the time (about $866 in 2021 purchasing power). It was about the size of a VHS video cassette and weighed about 500 grams. Although the computer was branded Atari, it did not run the TOS of the Atari ST, but the much simpler DIP DOS, an operating system that was largely compatible with Microsoft’s MS-DOS. So technically, this was indeed a small IBM PC-compatible computer. With its hardware equipment, an Intel 80C88 with 4.92 MHz and 128 KB RAM, it was about as powerful as the original IBM PC from 1981. In such a small case, that was quite respectable. Compared to the IBM PC, the screen was of course limited once again. There was only room for 8 lines of text of 40 characters each on the liquid crystal display.

Atari Portfolio
Atari Portfolio

Just like the PX-8, you did not have to load the Portfolio's software from a data carrier, because it was permanently built into the unit on a ROM chip. Accessible via menu were an address management (including telephone dialling assistance5), an appointment management, an alarm clock, a simple text editor as a word processor, a spreadsheet and a calculator. All this software and the DIP-DOS system were stored on a 256 KB ROM chip. As with the PX-8, it was not necessary to use the command line interpreter. The programmes could be started by menu and even by keystroke directly from the keyboard. Like the PX-8, the Portfolio could store data on an integrated RAM disk. However, the space there was severely limited. Therefore, memory cards according to the Bee Card standard were available as external storage media. These cards were about the size of a card from a standard pack of cards or a credit card, but about as thick as a beer mat. The picture above shows such a card, which could hold a whopping 32 KB of data. The card looks a bit like an oversized SD memory card, like those used in digital cameras today. However, the technology is quite different, because the flash memory technology used today was not yet available on the mass market at that time. The memory cards therefore contained a button cell battery and electronics that permanently supplied the built-in memory cells with power. Additional software such as a chess game and financial software could also be purchased on comparable cards, albeit in a read-only version that needed no power. These software modules contained only a read-only memory chip, so they did not need a buffer battery. Additional modules connected the small computer to a PC, a modem or a printer via a serial interface.

The Portfolio did not remain the only device in its class. The HP 95LX from Hewlett Packard, for example, dates from 1991. It was quite similar to the Portfolio in terms of features and functionality, but had more memory and a screen that could display 16 lines, which certainly made the device more suitable for tasks like spreadsheets, where a little more overview is helpful, after all. Jumping ahead a few more years to 1994, we find the HP 200LX, now featuring CGA-compatible graphics output, albeit on a liquid crystal display in greyscale. With this device, it became possible to display a full 25 lines of 80 characters of text.

Early PDAs

Many of the small, mobile computers I have just introduced to you are now retrospectively classified as PDAs (Personal Digital Assistants), the class of devices popular in the 1990s. However, this term did not yet exist at that time. It only came up with Apple's Newton, which I will not introduce to you until the following chapter.

With the PDAs, at least according to my understanding of computer history, the new characteristic described at the beginning becomes clear. They are not miniaturised PCs, but something of their own with their own world of user interfaces. Does that already apply here? The devices presented sit on a borderline in answering this question. The clunky Osborne 1 is probably still the closest thing to a transportable version of a typical CP/M-based computer system of its time. The devices I showed you afterwards are all not only more compact, their operation is also far less PC-like. You no longer have to load software from floppy disks, you don't have to use the command line interpreter and overall you have very little to do with the operating system, as the focus is on the software supplied and its user interfaces. Behind this, however, are still the standard operating systems CP/M or DOS. The software of the systems is standard PC software, although sometimes adapted a little to the small devices. This is most obvious in the way files are loaded and saved within the applications, because at this point at the latest, the file system of the operating system with its drive letters and paths affects the user interface. The computers mentioned are important steps towards the Personal Digital Assistant, but I would not attach the label itself to them yet, because in my opinion they still lack the important step of a user interface of their own, one that is no longer based on CP/M or DOS and thus no longer shares their characteristics. The first steps in such a direction were already taken in the mid-1980s.

PSION Organiser

PSION Organiser II from 1986
PSION Organiser II from 1986

One of the first mobile computers that was not based on a PC operating system was the PSION Organiser from 1984. The computer had the appearance of a pocket calculator that was too big and, above all, too thick. The unit had only one line with 16 characters for output. With this and the very small memory of 2 KB, the possible applications were naturally limited. In its basic configuration, the device only offered a very minimalist database, a clock and calculator functionality. Further functions could be made available by plugging in modules that could be purchased additionally. Among these, there was also a module with a simple programming language bearing the name POPL, which sounds rather awkward, at least to German ears.

The second edition of the Organiser from 1986 was already more practical in use, first with a two-line display, later with three and four lines. These organisers were particularly popular in companies where a lot of data had to be collected. The devices were almost indestructible and could be well connected to interfaces and input devices such as barcode scanners with expansion modules.

Compared to the pocket computers mentioned above, PSION's organisers were of course limited. But this had the advantage that their user interface was inevitably simpler. Standard software from the PC sector was no longer used, but software with user interfaces that were specially developed for this class of device.

PSION Series 3 (1993)

The successors to the PSION Organiser II lost the Organiser part of the name, which is actually a little strange, because from the Series 3 of 1993 onwards, the devices actually supported the tasks for which paper organisers were classically used much better than before. The external design of the devices was more similar to the pocket computers described above than to the early PSION Organiser. In contrast to the pocket computers, however, CP/M or MS-DOS was no longer used, but a newly developed system called EPOC. The picture shows EPOC in use on a Series 3a unit. The device offered the already familiar applications for calendar, database and calculator, but also spreadsheet and word processing. With the programming language OPL, users could write their own applications for the devices, which could also access all the graphical capabilities of the user interface.

PSION Series 3a - Image: Public Domain
PSION Series 3a - Image: Public Domain

The figure illustrates well one of the interesting features of the Series 3 user interface. Unlike PC operating systems, EPOC is not based on a file system visible to the user. There is therefore no file management programme equivalent to Explorer or Finder. Accordingly, there is no equivalent of a general open and save dialogue in the application programmes. Rather, the applications themselves manage the documents created with them and present them in an appropriate way. The user interface of the Series 3 devices has a feature that was not available in later devices and, as far as I know, was never available in any other PDA or smartphone. On the main overview, you not only have the applications such as "Address" or "Accounts" on offer, but also, for each application, the objects last used with it. These objects could be selected directly with the arrow keys and opened by pressing "Enter". This mechanism not only clearly illustrates the assignment of these objects to their applications, but was also a very practical shortcut and overview.

The devices from PSION and other manufacturers completed the step from one device class to another, as indicated at the beginning. They are the product of miniaturisation, but they are not simply scaled-down desktop PCs, they are something else. The basic structure of the device with keyboard and screen is still typical of a PC. This too was to change in the 1990s with the Apple Newton and Palm PDAs.

Personal Digital Assistants

I have presented the mobile devices from PSION - the Series 3 and the subsequent devices - as a separate device class. They are no longer small, compact PCs, but devices with their own unique user interface and object world that are used very differently from a PC or laptop. The devices are now classified as Personal Digital Assistants, or PDAs for short. As trend-setting as these devices may have been, they were still quite old-fashioned in terms of input technology, being based entirely on keyboard input, which also accounted for a considerable proportion of the device in spatial terms. The PC was already further along here: the mouse enabled the direct, spatial selection of objects on the screen. In the devices that followed, which were not only subsequently designated as PDAs, but were also sold under this designation - first and foremost the Apple Newton - the focus was on the screen as an input device. In many cases, there was no longer a keyboard. Instead, inputs were made directly on the screen with a stylus or, in some cases, with the fingers.

Newton MessagePad - The Pioneer

As is so often the case, Apple made grand promises when launching the Newton. The company wanted to claim nothing less than the reinvention of personal computing. The new device, the Personal Digital Assistant called Newton MessagePad from 1993, was indeed an innovative device that enabled new forms of input and use that were not previously possible with a PC, a Macintosh or with earlier compact computers. However, the Newton was not a success, mainly due to the poor performance of a key feature. More on that in a moment. From a purely functional point of view, the Newton was very much in the tradition of the devices considered in the previous chapter. The software housed on the 4 MB ROM chip offered a word processor, the possibility to create notes and checklists, a calendar, address management, a calculator, clock and alarm clock as well as a programme for reading electronic books. There was no file manager. The concept of the file itself did not appear in the Newton's user interface.

From the point of view of the input methodology, however, and thus also of the basic mode of operation, the Newton differed greatly from these earlier devices. The Newton's screen was 6 inches and could display 320 x 240 pixels in 16 shades of grey. All inputs were made via the screen interface. The Newton OS operating system was therefore completely designed for operation with the stylus supplied, which could conveniently be slid into the device on the right side for transport. Functions could be selected and buttons clicked with the pen. Above all, however, it could be used for writing and drawing. The Newton had several input modes for this purpose. The simplest was the "Sketches mode". Here, all line drawings made with the pen on the screen surface appear directly as line drawings in the document. The geometric figures you see in the illustration were created in "Shapes mode". In this mode it was possible to draw figures such as rectangles, triangles, circles and lines quite roughly. The figures were (in most cases) recognised and created as clean figure objects. If, on the other hand, the "ink text mode" was used, a text recognition system worked to recognise the text written in handwriting on the display and convert it into digital text.

Apple Newton MessagePad 100
Apple Newton MessagePad 100

All geometric figures, texts and also the line drawings of the freehand drawings were available as objects. They could therefore be edited, moved, deleted and, in the case of graphic elements, also changed in size and orientation on the screen. All these operations were again performed with the pen. Some very interesting forms of interaction were devised for this. For example, a word could be deleted by crossing it out in a zigzag gesture. The Newton then made a “poof” sound and the word disappeared from the screen. The same worked with graphic objects. Special input gestures existed to insert single words or new lines. If several objects, whether text or graphics, were to be marked, this was done by holding the pen on a spot on the screen for a second and then underlining the text or framing the graphic objects. As soon as you took the pen off the screen, the objects were marked and could be moved, for example.
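
How might a device recognise such a scrub gesture? The following toy heuristic in Python is not Apple's recogniser, just a sketch of the general idea: a stroke counts as a scrub if its horizontal direction reverses several times while it stays within a narrow vertical band. All thresholds and coordinates are made up for the example.

```python
# Toy heuristic, not the Newton's actual recogniser: a pen stroke counts as a
# "scrub" (delete) gesture if its left-right direction reverses often enough
# while the stroke stays within a narrow vertical band.

def is_scrub_gesture(points, min_reversals=3, max_height=30):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    if max(ys) - min(ys) > max_height:        # too tall to be a scrub
        return False
    reversals = 0
    direction = 0
    for x0, x1 in zip(xs, xs[1:]):
        step = (x1 > x0) - (x1 < x0)          # +1 right, -1 left, 0 no movement
        if step and direction and step != direction:
            reversals += 1
        if step:
            direction = step
    return reversals >= min_reversals

# A back-and-forth zigzag over a word counts as a scrub ...
zigzag = [(0, 10), (40, 12), (5, 14), (45, 16), (10, 18), (50, 20)]
print(is_scrub_gesture(zigzag))      # True
# ... while an ordinary underline does not.
underline = [(0, 10), (20, 10), (40, 11), (60, 11)]
print(is_scrub_gesture(underline))   # False
```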

Inserting a sentence into the calendar of the Newton MessagePad. Source: Newton 2.0 User Interface Guidelines, Apple Computer, 1996
Inserting a sentence into the calendar of the Newton MessagePad. Source: Newton 2.0 User Interface Guidelines, Apple Computer, 1996

From a user interface point of view, Apple's implementation of the clipboard was fascinating, because it also functioned completely spatially. Let's say you wanted to move a sentence from a note document to another place. To do this, one marked the sentence as described above and then moved it to the left or right edge of the screen. There it was then displayed in short form. In the picture you can see what that looked like. Here the sentence "Joe is in Suite 302" was pushed from the notebook to the margin. In the meantime, the calendar has been called up. The copied sentence is still at the edge of the screen. It can now be inserted into the calendar by simply dragging it to the place where it should be afterwards. Copying was similarly easy. The only difference to moving was that the object was marked with a kind of half double-click (down-up-down) before moving. The Newton interpretation of the clipboard is certainly unusual, but from the point of view of someone interested in user interfaces, it is exciting because it is ahead of many other clipboards. Only the clipboard of the Bravo editor of the Xerox Alto, which I described to you in the chapter "Desks and windows", is similar in terms of functionality. In contrast to macOS, Windows, iOS and Android, with the Newton you always have an overview of the objects that can be moved or copied. The contents of the clipboard are available as visible objects on the screen and are pasted or copied by spatial operations. In all other systems, however, they are located in an invisible clipboard and are inserted by a non-object-related command selection. If you click on "Paste" in Word, you only see what has been inserted after the action, i.e. what the command has done and which object it referred to. It is not only here that the Newton clipboard has an advantage. It is also not limited to a single object, like the clipboards of modern systems. With the Newton, no one prevents you from first collecting various snippets at the edge of the screen and then inserting them elsewhere one by one. This is not possible with current systems - there you have to constantly switch back and forth between source and destination.
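
The difference between the two clipboard models can be made concrete in a few lines. The following Python sketch uses invented class names and does not reflect how the Newton or Windows actually store clipboard data; it merely contrasts a single, invisible slot with a visible "shelf" that can hold several snippets at once.

```python
# Sketch of the two clipboard models discussed above; the class names are
# invented and not taken from any real system.

class SingleSlotClipboard:
    """Windows/macOS style: one invisible slot, overwritten by every copy."""
    def __init__(self):
        self.content = None

    def copy(self, obj):
        self.content = obj                 # the previous content is silently lost

    def paste(self):
        return self.content                # you only see what it was after pasting


class EdgeShelf:
    """Newton style: several snippets parked visibly at the screen edge."""
    def __init__(self):
        self.items = []                    # all parked objects remain in view

    def park(self, obj):
        self.items.append(obj)             # collect as many snippets as you like

    def take(self, index):
        return self.items.pop(index)       # drag one snippet off the shelf


shelf = EdgeShelf()
shelf.park("Joe is in Suite 302")
shelf.park("Call the hotel")
print(shelf.items)                         # both snippets stay visible and available
print(shelf.take(0))                       # insert the first one somewhere else
```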

Once again I have described to you in the highest terms a device that went down in computer history as a major flop. This time, however, it was not because of the price. At 900 dollars, the device was not exactly a bargain, but it was not unaffordable either, and there were certainly many customers who were interested in the device. But the early customers quickly noticed a big problem. The Newton’s character recognition did not work properly. In practice, it was simply impossible for many users to enter text on the Newton, even after lengthy training and correction of the incorrect entries. With the release of the second version of the Newton operating system in 1996, the original character recognition was replaced by a completely new development, which now worked much better, especially on newer, more powerful Newton devices. By this time, however, the Newton’s reputation was already pretty much ruined, which did not help Apple’s already difficult situation at the time. With Steve Jobs’ return to Apple in 1997, the entire Newton product line was discontinued - and Jobs promised that Apple would never make PDAs again. A statement that was to be held against him when the iPad was introduced in 2010.

Palm Pilot - The Reliable

At the time the Newton was discontinued, another manufacturer was preparing to conquer the market. In 1996, the Palm Pilot appeared, the first of a series of compact devices usually grouped under the name "Palm". They were built by various manufacturers until well into the 2000s. If you compare the first Palm device, the Palm Pilot from 1996, with a Newton from 1993, the Newton seems to have an advantage in every respect. Compared to the Newton, the Pilot had a downright tiny screen with only 160 x 160 pixels, which, unlike the Newton, could not display greyscales. The Palm was also operated with a stylus. However, the interaction was much less multi-faceted than that of the Newton.

This was perhaps most obvious when entering text. On the Newton, the text was handwritten in the place where it would later appear. With the Palm, on the other hand, a special input alphabet called "Graffiti" was used, which was written letter by letter in a field below the screen. Characteristic of Graffiti writing was that the letter shapes were unambiguous and could each be written in a single stroke. From the software side, Palm devices were comparable to the range of earlier compact mobile devices. Here, too, one found an address book, a calendar, an application for keeping notes and to-do lists. Later software versions also included a programme for composing e-mails and for personal financial management. The device was of course not limited to these applications. A 2008 list contains no fewer than 50,000 software titles for Palm PDAs. On the hardware side, the devices also evolved over the years. You could buy Palms with WLAN, mobile phone or even GPS modules.

Left: Palm Pilot; right: Graffiti gestures - image right: IMeowbot~commonswiki assumed (CC BY-SA 3.0)
Left: Palm Pilot; right: Graffiti gestures - image right: IMeowbot~commonswiki assumed (CC BY-SA 3.0)

The Palm devices were considerably less ambitious than Apple’s Newton, but unlike the latter they were very successful on the market. The simplicity of the devices and the software may have been one of the main reasons for the success, because unlike Apple’s Newton, the Palm’s technology kept the promises it made.

Windows CE - The Conventional

For the sake of completeness, PDAs that were operated with the Windows CE operating system should be mentioned here. Windows CE was an operating system from Microsoft whose user interface was very similar to that of Windows 95. Thus, the system had a task bar and a start menu. The applications were also similar to those of the PC. The systems had an Explorer, a Pocket Internet Explorer, Pocket Word, Pocket Excel, etc. This software equipment naturally made the units compatible with the classic file exchange formats of the PC world.

NEC MobilePro 400 with Windows CE 1.0. Source: Dmitry Brant (CC BY-SA 4.0), background removed
NEC MobilePro 400 with Windows CE 1.0. Source: Dmitry Brant (CC BY-SA 4.0), background removed

The user interface of PDAs with Windows CE was surprisingly poorly adapted to mobile computers. The devices were usually operated with a stylus, but even with this it was not always easy to operate them quickly and without errors, because the objects of the user interface were sometimes quite small. The descent of the operating system's and applications' user interfaces from the software world of the PC was also reflected elsewhere. In the meantime, I have explained to you several times that implicit storage is a very typical feature of PDAs. With the Palm devices, the Apple Newton, the PDAs from PSION and the comparable devices, the user was not required to explicitly save the changes made. If, for example, one used the notebook function on the devices, made a note, then switched to the calendar and made an entry there, and finally called up the calculator function, at no point did a save function have to be invoked explicitly. Every change was always saved automatically without the user's intervention. With Windows CE, on the other hand, it was the same as with Windows. If you made a change to a text file in Pocket Word, you had to explicitly save the file if you wanted to keep it. PDAs with Windows CE were very powerful on the one hand, because they were usually well equipped and offered a wide variety of software, but from the point of view of their user interface they offered nothing new and were even in some ways a step backwards for the device class.

Conclusion

The personal digital assistants presented here were very interesting from the point of view of the user interface. Of course, the devices and their software were not as feature-rich as their PC counterparts. The compact design took its toll in both respects. What made them particularly interesting, however, is that, starting with the PSION Organiser, a completely new user interface was developed for this new generation of devices. The usage worlds of these small computers were neither in the tradition of the WIMP interfaces of Xerox and Apple from the late 1970s and early 1980s respectively, nor in the tradition of the terminal-oriented command lines of the 1960s. The special conditions of their use brought about a new way of use and thus also a new world of user interfaces.

As described above, many capabilities have been added to PDAs over the years. From the early 2000s, for example, Palm produced devices in the Treo series, which extended the classic PDA with a GSM module. The devices thus enabled both telephoning and the use of the Internet via the GPRS standard. The PDA thus became more and more what we now call a “smartphone”.

Smartphones

PDAs of the kind I described earlier no longer exist today. They have merged with devices like the mobile phone, mobile pagers and compact cameras to form the smartphone. The main developers of user interfaces for smartphones today are Apple and Google. While Apple had experience in the field of PDAs with the Newton - even though it had discontinued its own PDA line in 1997, ten years before the introduction of the iPhone - Google had not been active in the field of operating systems and user interfaces for PDAs or similar devices until then. Neither company had anything to do with user interfaces for mobile phones before then.

You can think of a smartphone in computer history in a number of ways, such as a PDA with a cellular option added, or a mobile phone with PDA functionality added. On closer inspection, various early smartphones are sometimes more of one interpretation, sometimes more of another, and sometimes still another. I briefly touched on an example of a PDA with added telephone functions in the previous chapter. With the Treo series of the manufacturer Handspring - a spin-off of Palm, which later merged again with its original company - a telephone module was added to the PDA. At that time, the mobile phone functionality was limited to the pure telephone function. It was not until 2004 that a model followed that could also receive and send data via mobile radio, i.e. that had access to the Internet. In 2005, a Palm PDA with WLAN functionality appeared. However, this was a pure PDA without telephone function. It was not until 2008 that a Palm smartphone was available that had both WLAN and mobile phone capabilities.

Early smartphones

Palm expanded its PDAs with mobile phone modules to smartphones, but was comparatively late with this. The first Blackberry smartphone from Research In Motion appeared in 2002, and Nokia launched its first smartphone in 1996.

Nokia Communicator

Nokia was the market leader for mobile phones for many years. In 2008, one year after Apple introduced its iPhone, Nokia’s market share reached almost 40 percent. The company had been in business for a long time by then. The company’s first car phone was sold from 1982 and the first mobile phone in today’s sense, as a one-piece device without a separate handset, was introduced in 1985. Nokia also had a smartphone in its range early on. Almost ten years earlier than Palm, Nokia enriched the mobile phone market with its first communicator - the Nokia 9000.

Nokia 9000 in closed and opened state - Image: textlad (CC-BY 2.0)
Nokia 9000 in closed and opened state - Image: textlad (CC-BY 2.0)

The first Communicator was an amazing device for its time. At first glance, it looked like a normal mobile phone of the time, albeit significantly thicker and larger. There was a small LCD display in the upper part of the device, and a number keypad underneath. The device had all the functions that a simple Nokia phone had. It became really interesting when it was opened, because then a PDA with telephone and internet connection was revealed to the user. The PDA functions were operated via a small keyboard. The screen was very decent for the time. It had a resolution of 640 x 200 pixels and could display four grey scales. The Communicator offered the familiar functionalities of PDAs, such as an application for taking notes, a calendar, contact management, etc. The outstanding feature of the device, however, was not the PDA or mobile function alone, but above all their combination with the possibility of data exchange via the mobile phone network. The Communicator allowed sending and receiving e-mails as well as faxes. Terminal connections to remote computers could also be established with the device and even a web browser was available. From today's point of view, this may not sound spectacular, but you have to realise that we are talking about a device from 1996. At that time, the World Wide Web was not yet very widespread among the general public. Hardly any users of home computers or office computers were connected to the Internet, and thus web browsers were still hardly widespread. Now here was one - and on a compact, mobile device!

Blackberry

Nokia offered its Communicator very early on and continued to develop it. However, the brand did not primarily stand for these powerful devices. Nokia mainly served the mass market and offered mobile phones with various features and earned its money with them. The situation was different with the Blackberry company. The word "Blackberry" became a synonym for "smartphone" in the early 2000s. At the time, most of these devices, similar to Nokia's Communicator, were still not used by private individuals, but mainly by commercial travellers and executives. Blackberry offered strong integration into the central infrastructure of companies with functionalities for company-wide calendar, mail and contact synchronisation.

A Blackberry Quark
A Blackberry Quark

The origin of the smartphones from the company Research In Motion (RIM) lay in 1996 in a device that did not yet bear the name "Blackberry". The "Inter@ctive Pager 900" was a so-called two-way pager. It used early mobile phone data networks in the USA. Pagers were very common at that time among people who always had to be reachable. They came in different designs. Very simple devices only allowed you to display a phone number. If you wanted to send something to someone on a pager, you called the phone number of the pager service and gave them the number of the pager and the phone number you wanted the recipient to call back. You have probably seen pagers of this kind in films and on television. They were used, for example, to "page" doctors. More advanced devices also allowed the display of a message text or even the playback of a sound message. In all these cases, the communication was one-way. They were purely receiving devices. The Inter@ctive Pager 900 was much more sophisticated compared to these devices. Communication took place via the mobile phone data networks over the Internet. The devices allowed both receiving and sending messages via the Internet. In principle, the devices were mobile e-mail machines.

The company RIM further developed their pagers into smartphones, which they named "Blackberry". Shown on the right is the first version of these devices, a Blackberry Quark from 2003. The e-mail component was still very central to these devices. Blackberrys allowed mail to be sent via normal internet email accounts as well as via Microsoft Exchange servers and later via their own server infrastructures. Around the email functionality, the Quark offered both the typical functions of a phone, i.e. the phone function with phone book and the SMS function, and those of a PDA, from notes to calendar. A web browser was also part of the device's range of functions. RIM continued to develop the devices in various directions. The characteristic feature of many Blackberrys remained the fully equipped keyboard. These devices were especially popular with users who wanted to write a lot of email correspondence on the go. Other Blackberrys were more like typical mobile phones of the time, but with a numeric keypad extended by a few keys that could also be used for text input. Individual keys were assigned several letters, similar to simple mobile phones. The fact that it was still possible to type each letter with a single keystroke was due to a technology called SureType, which was ultimately based on a dictionary of 35,000 words that the device recognised from the key sequence. Other phone manufacturers used T9 technology, which ultimately worked in a similar way. Devices like the Blackberry Pearl from 2006 also had a media player and an integrated camera with 1.3 to 2 megapixels of resolution. However, even more limited than the photo resolution on these devices was that of the screen with only 260 x 320 pixels.
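
The principle behind such dictionary-based input can be sketched quickly. The following Python snippet is not RIM's SureType or the real T9 - the key layout and the tiny word list are invented - but it shows the core idea: several letters share one key, and a dictionary turns the ambiguous key sequence back into candidate words.

```python
# Sketch of dictionary-based disambiguation on a reduced keyboard.
# The key layout and the word list are invented for this illustration.

KEYS = {
    "1": "qw", "2": "er", "3": "ty", "4": "ui", "5": "op",
    "6": "as", "7": "df", "8": "gh", "9": "jk", "0": "l",
    "a": "zx", "b": "cv", "c": "bn", "d": "m",
}
LETTER_TO_KEY = {ch: key for key, letters in KEYS.items() for ch in letters}

DICTIONARY = ["we", "fat", "day", "quit"]         # a stand-in word list

def key_sequence(word):
    """The key presses a user would make for a given word."""
    return "".join(LETTER_TO_KEY[ch] for ch in word)

def candidates(sequence):
    """All dictionary words whose key sequence matches what was typed."""
    return [w for w in DICTIONARY if key_sequence(w) == sequence]

# Some sequences are unambiguous, others need the dictionary (or the user)
# to pick the intended word - exactly the job SureType and T9 performed.
print(candidates(key_sequence("we")))     # ['we']
print(candidates(key_sequence("fat")))    # ['fat', 'day'] - same keys, two words
```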

Smartphone rethought

I have roughly traced some lines of development of the smartphone for you above. Palm added mobile phone functionalities to its PDAs, Nokia equipped a mobile phone with additional PDA functionalities, and Blackberry developed its mobile e-mail machines into smartphones with telephone and organiser functionalities. If we look at the state of development in 2008, the devices had definitely converged and all had a fairly similar offering, although the original characteristics were still visible. A Blackberry still had its strengths in mobile text messaging, while a Palm was still primarily a PDA. All these smartphones were not devices for the mass market. Blackberrys in particular were expensive devices for managers. They were integrated into corporate structures and had a decidedly upmarket appearance compared to typical mobile phones. The situation was similar with the devices of other manufacturers. They, too, were primarily aimed at executives and travelling salesmen.

What people thought of as a smartphone changed fundamentally at that time, however, because in 2007 Apple's first iPhone appeared. Unlike the other smartphones, this smartphone did not have a keyboard, but a touch-sensitive screen. Apple had changed a lot since the return of Steve Jobs ten years earlier. It was no longer the Macintosh computers that were the focus of interest, but above all the mobile media players called iPod and the accompanying iTunes software, by means of which content could be purchased and transferred to the player. This lineage as a media playback device was clearly evident in the iPhone and was also explicitly advertised. At the official launch, the new device was presented as three devices in one. The very first of these devices was a "Widescreen iPod with Touch Controls". The second device, the "Revolutionary Mobile Phone", seems to me to be the least interesting of the three, because apart from an interesting new approach to voicemail, there was nothing really revolutionary to be seen. However, the "Breakthrough Internet Communication Device" was quite interesting. Apart from the e-mail functionality, this was mainly aimed at the web browser. Its outstanding feature was the ability to display normal websites. The web browsers on most smartphones could either only display extremely limited mobile websites in the so-called WAP standard, or they rendered normal websites in a very idiosyncratic way. Since there was no usable standard for mobile display until then, and web use on phones was still an absolute exception, the designs of almost all websites in 2007 were geared towards display in PC browsers. The iPhone was able to display these pages, which were not intended for phone use, and with a cleverly integrated zoom function, even make them quite easy to use despite the small screen.

Today it is almost forgotten that the first iPhone did not have something that later became typical for the new smartphones: There were no installable apps yet. Unless you hacked the device with a so-called "jailbreak", there was no way to add to the apps that came with it. Google's Android operating system, which was already in development, allowed the installation of additional apps from the beginning. By the time the first Android smartphone, the HTC Dream, appeared in September 2008, however, Apple had caught up and introduced its App Store.

Most smartphones today use either Apple’s iOS or Google’s Android. In a way, these systems are very similar to the PDAs of the 1990s and early 2000s. In addition to the applications for writing SMS and MMS messages and the phone function, the first iPhone, for example, had a calendar, a photo viewer, a voice messaging application, a classic notepad, a clock application including an alarm clock, a calculator and the ability to play music. All these applications were more or less also the standard repertoire of PDAs. On top of that came Apple’s internet-based applications: a web browser, a YouTube player, Google Maps, an app for the weather forecast and, of course, the iTunes Store for buying music. The screen size and the focus on direct pointing as input were also much more reminiscent of the PDAs than of the smartphones of the time, which largely relied on small hardware keyboards. In terms of operation, an iPhone was much more like an early Palm PDA or even a Newton than a Blackberry. One fundamental characteristic of the new generation of smartphones, however, is quite different from that of the PDAs. Typical PDAs were connected to a PC to synchronise data. Well-equipped PDAs and smartphones did have options for accessing the internet via the mobile network, but as a rule this was rarely done, and when it was, the amount of data transferred was minimal. One of the reasons for this was the exorbitant fees charged for internet use in the mobile network at the time. The new smartphones, by contrast, were designed from the outset to be permanently online. Although the devices can also be used offline, much of their functionality assumes that an internet connection is available at all times. The presence of these devices on the market thus also had a strong influence on the availability and affordability of the mobile network infrastructure - a fine example of how different technical developments and their ways of use influence each other.

The new generation of smartphones certainly follows in the footsteps of its predecessors and inherits features from the PDA, the classic mobile phone and the pager. But in many respects, the devices have actually achieved a kind of revolution. People who were never interested in computers, never owned one or always needed help setting them up and maintaining them now own these small, powerful smartphones and can usually operate them quite competently. There are many reasons for the success of the devices. A technical revolution is not among them: a smartphone does not enable anything that was not technically possible before, nor does its user interface have any special features that were not known previously. Rather, as with the PDA, the interface is extremely simplified, especially in comparison to the PC. Apps on the smartphone usually have fewer functions and settings than PC applications. An important factor in the success of smartphones is certainly also that their hardware and software features, coupled with permanently available internet access, replace a large number of devices. Today, a smartphone is much more than a mobile phone and a PDA. It is also a dictaphone, a compact camera, a navigation device, an alarm clock, a torch and much more - and all of this is always with you in just one small device.

The user interface of a smartphone has a completely different feel from that of a PC. Equating a smartphone with a PC is about as nonsensical as regarding a modern Windows PC as a minicomputer of the kind we came to know from the 1970s. The devices may function in a similar way internally, but the usage interface offered to us by the hardware, and especially by the software, creates a completely different world of use.

The comparison of smartphone versus PC with personal computer versus minicomputer is of course somewhat misleading, because while minicomputers have meanwhile been more or less completely superseded by personal computers, this is not to be expected with the smartphone. Although the range of functions of a smartphone is sufficient for many users’ purposes, there remain many scenarios in which not only the large screen of a PC is necessary but in which the flexibility and wealth of functions of PC interfaces are also needed. In addition, smartphones and tablets, especially those from Apple, restrict the user’s freedom quite considerably. Installing alternative operating systems is only possible with a great deal of effort, if at all, and the apps usually - in the case of Apple, even exclusively - come from app stores controlled by the manufacturer. For this reason alone, the devices are rather unsuitable for programming, i.e. for creating your own application programmes according to your own preferences. And even on a more open architecture, you certainly would not want to write a complex programme on a smartphone keyboard. For this, you would probably always prefer a device like a PC or a laptop with a well-equipped development environment, even if the programme is ultimately to be used on a smartphone or a tablet.

Notes

Preface

1Mahoney, Michael Sean; Thomas Haigh (Ed.): Histories of Computing. 1st Edition. Harvard University Press, 2011.

2For example, Katie Hafner, Matthew Lyon: Where Wizards Stay Up Late: The Origins Of The Internet. Simon & Schuster Paperbacks, 1996.

3Tognazzini, Bruce: Tog on Interface. Boston, MA. Addison-Wesley Longman Publishing Co. Inc, 1992.

From the ENIAC to the Minicomputer

1The difference between electrical and electronic is not completely clear-cut. One usually speaks of electronics when semiconductor components such as diodes and transistors, or electron tubes, are used. Relays are called electromechanical because in them a magnetic field generated by an electric current (electrics) is used to close a switching contact (mechanics).

2Operating instructions for Zuse Z4, written at the Institute for Applied Mathematics at ETH Zurich, summer semester 1952, ETH Library Zurich, http://dx.doi.org/10.7891/e-manuscripta-98601

3Rechenpläne für das Rechengerät V4, Dipl.-Ing. K. Zuse, ca. 1945. Available in Deutsches Museum Digital.

4Whether or not the ENIAC was a stored-program computer is still a matter of debate, because even after the conversion the ENIAC did not have writable memory from which to run a programme. Rather, the programme consisted of control elements set to numbers representing the programme commands. A jump to a programme position was possible, and exchanging a programme now meant connecting other control elements.

5The programme that translates the language consisting of mnemonics into binary code executable by the computer is called an assembler. In German usage, it is customary to also call the language itself, and programmes written in it, “Assembler”. In English, on the other hand, one speaks of “assembly language” and “assembly code”.

6Von Neumann, John: First Draft of a Report on the EDVAC. IEEE Annals of the History of Computing, 1993, Vol. 15, No. 4. pp 27-75.

7“inforum” of the University Computing Centre at Münster University, issues April and July 1977, available at www.uni-muenster.de/ZIV/inforum.

8The first version of Fortran appeared in 1957, but only the 1966 version had the “logical IF statement” used here. Earlier versions could only check whether a number was less than, equal to or greater than zero and, depending on the result, jump to one of three labels in the programme. This construct was still very close to the internal workings of the computers on which the language was used.

9“Flurbereinigung” means that agricultural land fragmented by inheritance is reallocated so that individual farmers receive fewer, larger and contiguous areas of the same type and quality.

10Reference Manual IBM 1401 Data Processing System. IBM. 1962.

11Programming for the UNIVAC Fac-tronic System, Remington RAND, January 1953.

12Wilkes, Maurice Vincent: Time-Sharing Computer Systems. Third Edition. London/New York. MacDonald & Co./Elsevier Science Inc. 1975. page 8.

13Shneiderman, Ben. Direct Manipulation: A Step Beyond Programming Languages. In: ACM SIGSOC Bulletin. ACM, 1981. P. 143.

14Since PDP stands for Programmable Data Processor, the masculine article would actually be called for in German (“der PDP”). However, it has become customary in German to say “die PDP”.

15Example taken from “DEC’s FOCAL 1969 Promotional Booklet”.

Personal Computers

1In practice, compatibility sometimes doesn’t look so good. One problem, for example, is the mismatch between the speed of today’s processors and the way programmes were written in 1981. Some programmes of the time were designed for the processing speed of the Intel 8088 at 4.77 MHz and are not equipped to run on faster machines. On today’s computers, such a programme runs several thousand times faster, which in some cases makes it simply unusable.

2The fact that today many servers are also based on PC architecture and that PCs have thus also become time-sharing systems need not interest us at this point, because this development happened much later and was not a development factor in the early days of PC history.

3I use the kilobyte (KB) for 1024 bytes, as was common at the time and is still often done today. Strictly speaking, this is no longer correct: in 1998, the unit for 1024 bytes was given the name kibibyte (KiB), and the kilobyte now corresponds to exactly 1000 bytes, in line with the international system of units.

4There was even a time-sharing BASIC for the Altair, with which up to four teleprinters or terminals connected to the computer could be operated simultaneously.

5The use of an interpreter meant that a BASIC programme was not first translated into machine code before it was executed, but that the BASIC system itself evaluated the high-level language commands and reacted accordingly. The consequence of this design decision was that BASIC programmes ran quite slowly compared to programmes developed in other programming languages.

6Models like the Apple IIc and the Apple IIGS are known to me, but they do not provide any further insights for my considerations, so I leave them out here.

7I’m not making this personal comment as an Apple hater. I have been using Apple devices for many years, but I often wonder about the company’s exaggeration - both in terms of its glorious past and its current developments.

8The Commodore 64 was the most widely used home computer in Germany, the USA and many other countries. In Britain, where there was a strong home computer industry of its own with Sinclair, Amstrad and Acorn, the C64 was just one of many.

9Thacker, Charles P., Edward M. McCreight: ALTO: A Personal Computer System. Xerox Palo Alto Research Center. 1974

10That’s not quite true, because at PARC Alan Kay developed the concept of the Dynabook, a tablet computer explicitly aimed at children. It is one of the myriad concepts of PARC that I cannot explain here. His exciting work is available on the internet: Kay, Alan C.: A personal computer for children of all ages. Proceedings of the ACM annual conference-Volume 1. ACM, 1972.

11Tesler, Larry; Timothy Mott: GYPSY: The GINN Typescript System. Xerox PARC. 1975

12What exactly Model-View-Controller specifies has changed over time. In general terms, it is a separation of responsibilities in the programme structure: the model, i.e. the data and the associated logic, is separated from the programme logic, the controller, and from the user interface, the view. This separation makes programmes modular. In accounting software, for example, one can update the calculation module without changing the presentation, or offer several different user interfaces that use the same calculation logic in the background.
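To make this separation more concrete, here is a minimal, purely illustrative Python sketch along the lines of the accounting example. All class and method names are invented for the illustration and are not tied to any particular framework.

```python
# Minimal illustrative Model-View-Controller split (all names invented for this example).

class AccountModel:
    """Model: holds the data and the associated business logic."""
    def __init__(self):
        self.entries = []          # list of booked amounts

    def book(self, amount: float) -> None:
        self.entries.append(amount)

    def balance(self) -> float:
        return sum(self.entries)

class TextView:
    """View: responsible only for presentation, knows nothing about booking rules."""
    def show_balance(self, balance: float) -> None:
        print(f"Current balance: {balance:.2f} EUR")

class AccountController:
    """Controller: connects user actions to the model and refreshes the view."""
    def __init__(self, model: AccountModel, view: TextView):
        self.model = model
        self.view = view

    def user_books_amount(self, amount: float) -> None:
        self.model.book(amount)                        # change the data
        self.view.show_balance(self.model.balance())   # update the presentation

# The same model could be reused with a graphical view, or updated independently
# of the presentation - which is exactly the point of the separation.
controller = AccountController(AccountModel(), TextView())
controller.user_books_amount(100.0)
controller.user_books_amount(-25.5)   # prints the updated balance after each booking
```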

13So it is black and white graphics in the strictest sense. A pixel was either black or white. Not only was there no colour representation, there were no greyscales either.

14The fully integrated concept later became a disadvantage for the Star because, unlike on the Alto, it was not possible to easily add or update software on the Star. As a result, there was no software market in which programmes with fundamentally similar features competed with each other. Such competition could hardly be reconciled with the one-size-fits-all document world that the Star offered.

15The same 68000 processor from Motorola was installed in both computers.

16The possible resolution depended on the television standard used. The values given refer to PAL units used in Europe. American Amigas, which used the NTSC television standard, had slightly lower resolutions but a higher frame rate, so animations were potentially a little smoother here.

17For simplicity, I use the name MacOS in this notation as a general term for the operating system of the Apple Macintosh. The name was not used in this way from the outset, and particularly at the beginning the naming was somewhat chaotic. Earlier versions of the operating system were called “system software” or just “system”. This was later followed by “Mac OS” written with a space, then “Mac OS X”, “OS X” and finally today’s “macOS”, written without a space.

18The “Save as” function can still be called up: it appears in applications’ File menu when the ALT key is pressed. However, this does not change the problem that, by this point, the opened file has already been edited and its original content has thus been overwritten.

19The year 1970 is still reflected today in the way points in time are stored on Unix systems. The date and time are usually stored as the number of seconds elapsed since 1 January 1970.
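As a small illustration using only Python’s standard library, the following sketch shows how such a second count relates to an ordinary date:

```python
# Unix time: seconds elapsed since 1 January 1970 (UTC).
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
now = datetime.now(timezone.utc)

seconds_since_1970 = int((now - epoch).total_seconds())
print(seconds_since_1970)            # a large integer, e.g. around 1.7 billion in the 2020s

# Converting the stored seconds back into a readable date and time:
print(datetime.fromtimestamp(seconds_since_1970, tz=timezone.utc))
```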

20There is a dispute about what exactly “open source” is, what free software is, whether there is a difference and if so, what it is. The discussion does not lead me any further here. A summary can be found, for example, in the Wikipedia article on Free Software.

21Actually, Linux only refers to the operating system core, the so-called “kernel”. The correct name of the system with Linux kernel and GNU tools is “GNU/Linux”. In the meantime, however, it has become customary to use the name Linux for the operating systems as a whole.

22At the moment, this X-Window standard is being replaced by a modern system called “Wayland”. However, this does not fundamentally change the described mode of operation.

From the PDA to the Smartphone

1See https://www.statista.com/statistics/273495/global-shipments-of-personal-computers-since-2006/

2See https://de.statista.com/themen/581/smartphones/

3For some time on Android, and more recently also on Apple devices, it has been possible to access the files stored on the device via file managers. The existence of such programmes makes work easier, especially when connecting external data carriers to the device. Nevertheless, the applications do not have “dialogues” for opening and saving, as is usual in PC software. What may exist in the background as a file always appears in the applications in the form of the objects for which the application is responsible, for example as pieces of music, photos or text documents.

4The computer had a printer that could print graphics and text on a narrow continuous paper strip similar to a receipt.

5The unit could output DTMF tones via its loudspeaker - the tones that were used at the time to signal the desired telephone number to the telephone company’s exchange.
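To illustrate the principle, the following Python sketch generates such tones: each key is signalled as the sum of two sine tones, one from a low and one from a high frequency group. The frequency table is the standard DTMF assignment; everything else is an invented illustration rather than the device’s actual implementation.

```python
# DTMF: every key is signalled as the sum of one low-group and one high-group sine tone.
import math

# Standard DTMF frequency pairs in hertz (low group, high group).
DTMF_FREQUENCIES = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(key: str, duration_s: float = 0.1, sample_rate: int = 8000) -> list[float]:
    """Generate the audio samples for one key press (values between -1 and 1)."""
    low, high = DTMF_FREQUENCIES[key]
    return [
        0.5 * math.sin(2 * math.pi * low * n / sample_rate)
        + 0.5 * math.sin(2 * math.pi * high * n / sample_rate)
        for n in range(int(duration_s * sample_rate))
    ]

# Dialling a number means playing one such tone burst per digit through the
# loudspeaker into the telephone mouthpiece, with short pauses in between.
for digit in "0123":
    samples = dtmf_samples(digit)
    print(digit, DTMF_FREQUENCIES[digit], len(samples), "samples")
```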