The Mind's Machines:
The Turing Machine, the Memex, and the Personal Computer.

By Peter Skagestad<1>
Department of Philosophy
University of Massachusetts—Lowell
E-mail: Peter_Skagestad@uml.edu

Published in Semiotica, Vol. 111, No. 3/4, 1996, pp. 217-243.

Introduction

Among the most urgent present-day tasks for cultural interpretation and criticism—i.e. for semiotic—is that of reaching an understanding of how the personal computer is affecting, or even transforming, our culture and civilization by fostering new habits of intellectual work (i.e. of thought) and by enabling new types of communication, commercial transactions, and social interaction through networking. It has long been considered an anomalous and interesting feature of Hegel's Phenomenology of Mind that, while the Introduction was written at the outset, the Preface was added only after the entire work had been completed; today word-processing has decisively severed any connection between the sequence in which thoughts appear in a document and the temporal order in which they were put there—in a hypertext document, moreover, there is no preordained sequence in which the thoughts appear. The distinction between the once very different intellectual operations of composing and editing has thereby become attenuated, to say the least. Hence, whatever role logic may play in the psychology of reasoning, there is no longer any temptation to equate the logical sequence of a written argument with the temporal sequence of the reasoning process through which that argument was constructed. And the practice in many intellectual circles of posting manuscripts on conference bulletin boards, where they can be annotated by anyone who feels s/he has something to add, is placing the very concept of authorship under a cloud.

It is not an entirely novel observation that today's philosophy needs to absorb these changes much the way Plato absorbed the emergence of literacy and the Enlightenment philosophers absorbed the emergence of the modern printing press. Yet, while there is a massive philosophical literature devoted to issues raised by the first computer revolution — the one that took place in the nineteen-fifties — surprisingly little philosophical attention has so far been given to the second computer revolution, the revolution of the eighties.<2> One obstacle to the present-day interpretive task may be the speed at which the technology is evolving, rendering journalistic currency incompatible with both the time distance and the leisure required for serious reflection. God may live in the details, as has been said, but these details are shifting at a dizzying pace. For instance, at the time of this writing, it is taken more or less for granted that a personal computer has a graphics-quality screen, WYSIWYG (what-you-see-is-what-you-get) word-processing capability, a mouse, windows, icons, and either a modem or a LAN connection, or both. My personal view—hardly a unique one—is that the introduction of graphical user interfaces (GUIs) by Apple Computer in 1984 was at least as consequential as the introduction of personal computers in the first place. Yet the online world, a.k.a. Cyberspace, is still mostly text-based, and online communication will doubtless change beyond recognition when the graphics revolution comes online. By the time these words reach the press, the state of the art may include full-color interactive video and speech recognition and synthesis, and within a decade the personal computer may have been superseded by the personal reality-simulator, as discussed in Rheingold (1991), as well as in Skagestad (1993a).
But, if so, the personal computer will have been transformed as much as it will have been replaced, and the pivotal role played by today's technology—crude as that technology will seem in a decade—will still stand in need of understanding, interpretation, and assimilation.

We situate objects in the cultural landscape as much by our conception of where they came from as by our conception of what they are for. The purpose of this essay is to explore what understanding can be gleaned about the personal computer and about ourselves as personal-computer users, by examining the origins of the personal [218] computer in two very different notional machines: Alan Turing's Universal Machine (1936) and Vannevar Bush's Memex (1945). It will be shown that while the construct known as the Turing machine spells out in idealized, simplified terms the essential features of what we today call 'the computer'—i.e. the stored-program, digital computer—Bush's Memex, which is not actually a digital computer, was a major source of inspiration for the use to which the computer was put by the pioneers of the personal-computer revolution, right down to providing many of the details incorporated in present-day systems. Since both Turing and Bush articulated explicit conceptions of what it is to be a cognitive agent, and of how human intelligence relates to intelligence in machines, a critical comparison of their constructs can be expected to shed some light on the nature of the human mind.

[219] Both the Turing machine and the Memex attempt to mechanize specific functions of the human mind. What Turing tried to mechanize was computation and, more generally, any reasoning process that can be represented by an algorithm; what Bush tried to mechanize were the associative processes through which the human memory works. But the two machines also represent very different approaches to mechanization. Specifically, the Turing machine is a digital machine while the Memex is an analog machine. Digital machines work through discrete states and can represent continuous processes only in functional terms, i.e. in input-output terms: if, given the same input, the machine produces the same output as some natural process, the machine will be said to simulate that process, irrespective of whether the internal processing inside the machine in any way resembles that of the natural process.

Analog machines, by contrast, utilize internal processes that resemble the natural processes they are simulating. In the words of the philosopher James Fetzer (1990: 17), a model simulates 'by effecting the right function from input to output,' or it replicates 'by effecting the right functions by means of the very same—or similar—processes.' In terms of this distinction the Turing machine is a simulator while the Memex is a replicator. This distinction lends support to a conclusion long argued on independent grounds by John Searle, namely that there is little temptation to equate the simulator with the object simulated, as is done by adherents of 'strong artificial intelligence' (AI) (Searle 1990: 32). The Memex, which attempts to replicate human memory, and hence may be said to embody 'artificial memory', was not intended to rival the human mind but to extend the reach of the mind by making records more quickly available and by making the most helpful records available when needed. This idea directly inspired the research program known as 'intelligence augmentation' (IA), which was formulated in 1962 by Douglas Engelbart with explicit indebtedness to Bush, and which eventually resulted in the two complementary inventions of networks and personal computers.

The relationship between AI and IA is a complex one. It is an empirical question what specific machine features will in practice prove most effective in augmenting the human intellect. Such technologies as intelligent character recognition and voice processing and synthesizing, which have their roots in the AI research program, in which they were conceived as steps towards making the machine fully intelligent, are today finding commercial applications as means of enhancing the user interface of personal computers, i.e. as augmentation means. So the two research programs are to that extent complementary rather than conflicting. However, IA, like AI, comprises not just a research program, but also a comprehensive philosophical outlook on the nature of human-machine [220] interaction. It will be shown that the outlook of IA, unlike that of AI, has important philosophical antecedents in the semiotic of Charles Peirce.<3> This philosophy, finally, is a nonmechanistic philosophy, as opposed to the mechanism that has inspired, and continues to inspire, AI. Peirce conceived of the mind as essentially a sign-interpreter, and the act of sign-interpretation, while explicable in pragmatic terms, cannot be—or at least has not yet been—explicated in mechanistic or behaviorist terms. All mechanical action, as Peirce observed, is either dyadic or reducible to binary sets of dyadic relations, whereas the sign relation—the relation of something standing for something else to someone—is irreducibly triadic (Peirce 1935: Vol. 5, §484; Fetzer 1990: 31).

The purpose of this article, however, is not to refute mechanism, but to place before the reader an alternative cluster of ideas with which and through which to conceptualize mind-machine interaction in general and our interaction with contemporary cognitive technologies in particular. I believe, and have argued elsewhere (Skagestad 1993a, 1993b), that the ideas about human cognition that have directly inspired the personal-computer revolution require engagement by students of philosophy, whether or not these ideas themselves can be called 'philosophical' in a conventional sense of the word. It is in the spirit of fostering such engagement that I shall here discuss the ideas embodied in the Memex and the Turing machine from the vantage point of an historian of philosophical ideas, a role I take to be inseparable from the critical evaluation of those ideas.

Turing Machines

As described in Alan Turing's classic paper 'On Computable Numbers' from 1936, as well as in numerous secondary accounts, a Turing machine is a hypothetical machine defined in terms of its logical or effective features, rather than its physical properties, i.e. in terms of what it does, not in terms of how it is constructed or what it is made of. And what it does is scan a tape divided into squares; it scans one square at a time, and it can do four things: print a symbol on the square, erase a symbol printed on the square, move one square either left or right, and come to a halt. The tape is supposed to be infinite in length or, which practically comes to the same thing, capable of being indefinitely extended in either direction. At any time, what the machine will do is completely determined by what appears on the currently scanned square, together with the current state of the machine. A particular Turing machine is thus defined by a finite set of instructions which, for [221] each combination of machine state and scanned symbol, specifies the action to be taken, as well as the resulting machine state. Such an instruction could be, for instance, 'When in state S1, and scanning an empty square, print '1', move one square to the right, and go into state S2', abbreviated 'S1, 0, 1, R, S2'.

One very simple Turing machine for adding the numbers 2 and 3 would print and erase zeroes and ones in such an order as to transform the sequence 1101110 into the sequence 0111110, replacing two distinct sequences of two and three ones, respectively, with a single, unbroken sequence of five ones.<4> More complex machines, i.e. machines with longer instruction tables, can be constructed for adding any two (or more) arbitrary numbers, for multiplying, dividing, extracting square roots, and so forth—all of them involving no elements or operations other than those described above. The unproven and unprovable, but overwhelmingly probable proposition known as Turing's Thesis (a.k.a. Church's Thesis) says that, with enough time and an indefinitely long paper tape, any computation for which there exists an algorithm can be carried out by a machine of the above description. The general description of the Turing machine thus provides an operational definition of 'computability' or, more generally, of the concept of an 'algorithm' or 'effective procedure'.
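For readers who want to see the mechanics spelled out, the adding machine just described can be simulated in a few lines of present-day code. The table format, the state names, and the convention of a distinguished 'halt' state below are illustrative choices of mine, not Turing's own notation:

```python
# A minimal Turing-machine simulator (an illustrative sketch).

def run(tape, table, state="s0", head=0, max_steps=10_000):
    """Execute a Turing machine given as a transition table.

    `table` maps (state, scanned symbol) to (symbol to print,
    move: -1 left / +1 right / 0 stay, next state).  The machine
    stops on entering the state 'halt' or when no rule applies.
    """
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        key = (state, tape[head])
        if key not in table:
            break
        write, move, state = table[key]
        tape[head] = write
        head += move
        # Extend the 'infinite' tape with blank squares as needed.
        if head < 0:
            tape.insert(0, "0")
            head = 0
        elif head >= len(tape):
            tape.append("0")
    return "".join(tape)

# Unary addition of 2 and 3: erase the leading 1, then replace the
# separating 0 with a 1, yielding one unbroken run of five 1s.
ADD = {
    ("s0", "1"): ("0", +1, "s1"),   # erase the first 1, move right
    ("s1", "1"): ("1", +1, "s1"),   # skip over the remaining 1s
    ("s1", "0"): ("1", 0, "halt"),  # fill the gap and halt
}

print(run("1101110", ADD))  # -> 0111110
```

Longer instruction tables, fed to the same `run` function, yield machines for multiplication, division, and so forth; only the table changes, never the machinery.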

Having introduced this definition, Turing proceeded to produce his famous proof of the existence of a universal machine (Turing's Theorem), the technicalities of which will here be omitted:

The above definition of 'computability' is what we call recursive; that is to say, it is self-applicable without being circular—without utilizing the concept of computability, either explicitly or implicitly, in the definition. Specifically, each symbol in the table of instructions of a Turing machine can be denoted by a unique fixed-width sequence of zeroes and ones (much as ASCII encodes characters today); thus the instruction 'S2, 1, 1, R, S2' could be coded as:

0100100 0001011 0001011 1100100 0100100

Now a machine could be constructed which reads a tape containing both an 'input' sequence of zeroes and ones and the string of zeroes and ones into which our adding machine instruction table has been encoded, and which, after scanning each string of five septuples constituting an instruction, executes that instruction on the input sequence, shuttling back and forth between the input sequence and the instruction-table sequence. This Turing machine, which is far more complex than our original one, would also add the numbers two and three—hardly a remarkable result. What is remarkable is that the instruction table for this machine was not written for the specific task of adding two and three; it was written to carry out [222] any computation defined by the zeroes and ones appearing on the relevant segment of its paper tape.
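The encoding step can itself be sketched in code. The seven-bit code assignments below are arbitrary illustrations of mine (and differ from the sample bits printed above); all that matters is that each token receives a unique fixed-width code, so that an entire instruction table becomes one string of zeroes and ones that can be placed on a tape like any other data:

```python
# Assign each instruction token a unique 7-bit code, so that a
# five-token instruction flattens to 35 bits: the program as data.
TOKENS = ["s0", "s1", "s2", "0", "1", "R", "L", "halt"]
CODE = {t: format(i, "07b") for i, t in enumerate(TOKENS)}
DECODE = {bits: t for t, bits in CODE.items()}

def encode(instruction):
    """Flatten one (state, read, print, move, next-state) quintuple to bits."""
    return "".join(CODE[t] for t in instruction)

def decode(bits):
    """Recover the quintuple from its 35-bit encoding."""
    return tuple(DECODE[bits[i:i + 7]] for i in range(0, len(bits), 7))

quintuple = ("s1", "0", "1", "R", "s2")
bits = encode(quintuple)
print(bits)                       # 35 zeroes and ones
assert decode(bits) == quintuple  # the instruction survives the round trip
```

A universal machine is then, in essence, a fixed decode-and-execute loop whose 'program' is just another stretch of tape.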

What this means is that there exists a universal machine which, given the coded description of any Turing machine as its input, will compute the number computed by that Turing machine. In other words, there is a universal Turing machine which will simulate any particular Turing machine we might construct (Turing 1965: 127-132; McArthur 1991: 406; Hopcroft 1984: 91-92). This machine, of course, is logically equivalent to the programmable digital computer—what we today call simply 'the computer'. Turing did not invent the digital computer or the programmable computer, but he gave the clearest general articulation to date of the all-important concept of software, otherwise known as virtual machinery, i.e. the concept of instructions that are themselves coded as data to be processed by the machine.<5> Turing's universal machine is defined as an all-purpose simulator, and one implication of his existence proof is the conclusion that what one computer can do, any stored-program computer can 'in principle' do; thus, given enough time—which is admittedly more time than anyone ever actually has—the simplest, cheapest home computer could carry out any computational task which would ordinarily be entrusted to a multi-million dollar supercomputer.<6>

We pause here to note Turing's pivotal contribution to the currently ubiquitous cultural phenomenon known as 'virtuality'. The adjective 'virtual', practically unheard of a few years ago, may have become the number one buzzword of the nineteen-nineties. Yet it has a long and venerable history. In 1902 Charles Peirce noted that the term derived from the thirteenth-century philosopher John Duns Scotus and was later exemplified in Edmund Burke's doctrine of virtual representation, which is not representation but is supposedly as good as. Peirce also proposed this definition (Peirce 1935: Vol. 5, §372): 'A virtual X (where X is a common noun) is something not an X, which has the efficiency (virtus) of an X.'<7> Outside of political science and physics, the term 'virtual' has lain fairly dormant until recently. Evidently, however, Turing's concept of an instruction table is a virtual machine, irrespective of what Turing actually called it, and it is the direct ancestor of today's virtual memory, virtual typewriters, virtual reality, and virtual communities. Turing's Theorem may be a highly technical result in mathematical logic, and its exact philosophical significance may be elusive, but its significance as a catalyst for technological and cultural change can hardly be overstated, as noted e.g. by Bolter (1984).

On the philosophical side of the ledger we note, first, what was noted by Kenneth Ketner a few years ago, namely that Turing's machine provides a definition of 'effective procedure' that is pragmatic in the broad, Peircean sense that it defines its object in terms of possible actions [223] and their consequences (Peirce 1935: Vol. 5, §402; Ketner 1988: 52-53). More specifically, and in a spirit somewhat different from Peirce's, it is a mechanistic definition, which defines its object in terms of the behavior of a deterministic, finitistic machine. Such definitions have their ancestry in the physicist James Clerk Maxwell's dictum that he could understand a physical theory only once he was able to construct a mechanical model of it; what Turing did was construct a mechanical model of the reasoning process of a (human) computer. Whether or not this was Turing's chief objective, there is no doubt that this is part of what he thought he had accomplished. Thus, in the 'appeals to intuition' by which he defends Turing's Thesis, he seeks to show in detail how the essential elements of what a human does when computing are captured by the machine.

For instance, paper (or some functional equivalent thereof) divided into squares (or other discrete units) is essential to computation, but it is not essential to have a certain number of lines per page, as in a notebook, rather than one long line, as in the paper tape of a Turing machine. It is further essential that there be only a finite number of symbols, and that the person go through only a finite number of states of mind. States of mind, finally, are essential only because, at each step of the computation, the recollection of the last step partially determines the next step. If the human computer were to take a break, she would have to write a note to herself reminding herself what the last step was, and this note would then take the place of the conscious recollection. But that means that it is the record contained in the note that is essential, rather than the conscious recollection of anything. As Turing notes, a human computer could take constant breaks, e.g. after every step of the computation, and she would then have to write a new note at each step; this is mimicked by the Turing machine completing each step by going into a predetermined machine state which, together with the scanned square, determines the next step, thus 'summarizing' all the preceding steps of the computation (Turing 1965: 135-140).

The bearing of Turing's results on mechanism or determinism in the philosophy of mind remains a hotly debated topic, which we shall not enter into here, except to note that Turing's own view, articulated in his (1950), was clearly a mechanistic one, which drew no essential distinction between people and machines in terms of their mental capacities.<8>

Bush's Memex

In turning from the lofty abstractness of the Turing Machine to the concrete details of the Memex we need to shift gears mentally. [224] Philosophers, logicians, and computer scientists have long depended on the universal Turing machine as an idealized archetype highlighting the theoretically interesting features of the computer while abstracting from the supposedly inessential features of the hardware. Among historians of computer technology, by contrast, the cognitive role played by particular types of hardware has increasingly come into focus, and there is a long-standing and still growing awareness that the personal computer owes some of its defining features to the unlikely source represented by Vannevar Bush's Memex, which was described in the landmark article 'As We May Think', originally published in the Atlantic Monthly in 1945 and reprinted at least ten times since in various publications.<9> Unlikely, because the Memex was not a digital computer, and because its author is perhaps best known for having staked his career on the rapidly obsolescent analog technology to the point of actively opposing the development of the first digital computer, the ENIAC (cf. Kurzweil 1990: 198).

It may, however, be well to remind ourselves at this point what an unlikely invention the personal computer itself was. Today the evolution of the personal computer, like that of the personal automobile, appears inevitable because it happened; if they had not happened, both might now be considered science-fiction fantasies. The personal computer was made possible by the miniaturization of electronic relays that was a spillover effect of the space research of the nineteen-sixties. But there was nothing in the miniaturization trend per se that especially favored the development of small and initially practically useless general-purpose computers, rather than miniature special-purpose computers like the one controlling the fan belt in your car, or more powerful mainframes capable of faster and cheaper execution of conventional computer tasks, such as scientific calculations and accounts payable or receivable. In the sixties computers were huge, expensive machines usable only by an initiated elite; the idea of turning these machines into personal information-management tools that would be generally affordable and usable without special training was advocated only by a fringe of visionaries and was regarded as bizarre not only by the general public, but also by the mainstream of the electronics industry.<10> The second computer revolution obviously could not have taken place without the first one preceding it, but the first computer revolution could very easily have taken place without being followed by the second one.

Especially telling in this respect is an anecdote told by Douglas Engelbart, who pioneered networking, word-processing, and graphical user interfaces, who invented the mouse, and wrote the influential manifesto-like 1962 report, 'Augmenting Human Intellect', which spelled out the basic goals and assumptions of the IA (intelligence augmentation) [225] movement. Having arrived at his vision of developing electronic tools for augmenting the human intellect, Engelbart started seeking a place to put his vision into practice, and at one point had an interview at Hewlett-Packard, where he expounded his vision and was offered and accepted a position. In the car driving home it suddenly occurred to him that digital computers had not been explicitly mentioned during the interview; to be on the safe side he pulled over by a phone booth and called his interviewer to verify that they understood that his plan involved the development of new digital computers. No, they had not understood that, and moreover Hewlett-Packard had no plans to enter the computer business (Rheingold 1985: 179). This was to change dramatically, but the connection between intelligence augmentation and computers was anything but obvious in 1957. Engelbart later received government funding for his research, largely on the strength of his 1962 report, which was inspired by, and included a lengthy portion of, Bush's 1945 article.

Bush had spent the pre-war years developing several successive analog computers—including the Differential Analyzer for solving differential equations—and the war years as science advisor to the President. Time and again in his various writings he returned to the problem that is at the center of 'As We May Think', the problem of how information overflow may serve as an obstacle to scientific progress. To Bush the problem was epitomized by the fate of Gregor Mendel's classic paper on plant hybridization, which had been published in a scientific journal and hence was 'available' but which, for an entire generation, did not reach those in a position to benefit from it and build further on Mendel's discoveries (Bush 1991a: 89; 1991c: 197).<11> And that was in the mid-nineteenth century. How many more Mendels might not have drowned in the information explosion of the mid-twentieth century? The problem Bush set himself was that of creating records in such a way as to maximize each individual's access to the information which would be maximally useful to that individual. And his solution, concretized by the Memex, was to construct personalized analog machines which would extend the reach of human memory by orders of magnitude by storing and retrieving information in a manner analogous to the workings of human memory.

The Memex was to consist of a desk, on the top of which were two slanted, translucent screens. The significance of having two screens was to allow the machine to display simultaneously two items of text or a text item and an image—what personal computers today achieve either with split screens or with multiple open windows. The desktop would further contain a keyboard, a set of control buttons and levers, and a scanning surface for scanning in images by means of dry photography. Inside the desk would be a camera, a massive microfilm storage unit with [226] a capacity well exceeding that of today's CD-ROMs—Bush's informal specifications have since been estimated to require anywhere from 100 gigabytes to 3.65 terabytes per year—and a selection device that would search the microfilm for records.

[Note: A line drawing of the Memex should be inserted here, if feasible.]

Records would be created by entering text with the keyboard, by scanning in drawings, photographs or other images, and, most importantly, by linking text items with each other and with images to form associative trails like those through which the human memory works. In Bush's words:

When the user is building a trail, he names it, inserts its name in a code book, and taps it out on his keyboard. Before him are two items to be joined, projected onto adjacent viewing positions. ... The user taps a single key, and the items are permanently joined. ... Thereafter, at any time, when one of these items is in view, the other can be instantly recalled by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been joined together from widely separate sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails. (Bush 1991a: 103-104)

To say that we have here the first detailed description of a personal workstation, as well as of hypertext and hypermedia technology, would be to state the obvious. The Memex itself was never built, the chief reason being that the analog selection system for searching microfilm could never be made to work with sufficient speed and reliability for the purpose Bush had in mind, raising doubt whether this objective could be met by any analog device, as has been noted by Colin Burke (1991).
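Bush's trail mechanism maps naturally onto the linked data structures of today's software. The sketch below is a minimal model of my own devising—the class and method names are hypothetical, and the bow-related items merely echo an example from Bush's article—not a specification of the Memex:

```python
# A minimal model of Memex trails: named sequences of items, where
# joining two items under a trail name makes each instantly recallable
# from the other.
from collections import defaultdict

class Memex:
    def __init__(self):
        self.trails = defaultdict(list)  # trail name -> items, in order
        self.links = defaultdict(set)    # item -> items joined to it

    def join(self, trail, item_a, item_b):
        """Permanently join two items under a named trail."""
        for item in (item_a, item_b):
            if item not in self.trails[trail]:
                self.trails[trail].append(item)
        self.links[item_a].add(item_b)
        self.links[item_b].add(item_a)

    def recall(self, item):
        """Everything instantly recallable from a given item."""
        return sorted(self.links[item])

    def review(self, trail):
        """Run through a trail in turn, as with Bush's page-turning lever."""
        return list(self.trails[trail])

m = Memex()
m.join("bows", "Turkish bow article", "elasticity notes")
m.join("bows", "elasticity notes", "longbow history")
print(m.review("bows"))
print(m.recall("elasticity notes"))
```

The `join` method mirrors Bush's tap of a single key, `review` mirrors the page-turning lever, and because the links run both ways, either joined item recalls the other.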

It is unclear whether it ever occurred to Bush, even in his later years, that digital machinery could provide the appropriate storage-and-retrieval mechanism for the Memex. In 1960 Bush described, with remarkable prescience, both online journal publishing and online library searches, but in both cases the text was to be stored on microfilm and transmitted via facsimile (Bush 1991b: 172-174). That same year the idea of 'mechanically extending man' through the use of interactive computing with interfaces combining graphics and natural language, was outlined in print by J.C.R. Licklider (1960). The idea of the personal digital computer was formulated and explicitly traced to 'As We May Think' in a letter Engelbart wrote to Bush in 1962, requesting permission to quote from Bush's article; such permission was granted by Bush's secretary, and we do not know whether Bush actually read Engelbart's letter (Engelbart 1991; Nyce and Kahn 1991: 129). In 1965 Licklider published his Libraries of the Future, which described in detail a decentralized, packet-switched [227] computer network not unlike the Internet—no coincidence, as the Internet grew out of the ARPANET, in whose development Licklider was one of the pioneers, along with Engelbart. Libraries of the Future does not cite Bush's article, which Licklider had not yet read, but at the last moment Licklider's indirect indebtedness to Bush's ideas was pointed out to him, and he dedicated the book to Bush (Licklider 1965; Nyce and Kahn 1991: 136-137).<12> Finally, in 'Memex Revisited' in 1967, after reading Licklider's book, Bush granted that digital machines can do anything that can be done by analog machines, 'although often not so neatly or flexibly or inexpensively', and in the same article he cited the ability of digital computers to learn from experience, demonstrated by chess-playing programs, as evidence that this particular feature of the Memex was technologically feasible (Bush 1991c: 203; 211-212). 
So, by 1967 Bush saw parallels between the Memex and the digital computer. But I have found no explicit statement that the Memex itself ought to be digitized.

Today Bush's description of the Memex reads like a description of a personal computer.<13> We have already noted the strangeness and novelty of the idea of the personal computer once it was conceived by Engelbart; but even had this idea occurred to Bush at the time, it would have been unlikely to recommend itself to him, given the direction from which he was coming: a dissatisfaction with the abstractness and artificiality of existing information technology. From that perspective, the extremely artificial and abstract organization of data into binary tree structures which is characteristic of digital machinery might not have appeared as a step in the right direction.

Conventional systems of data storage, Bush argued in 1945, were burdened with an artificial system of indexing by alphabetical or numerical order, forcing the user to wade through lengthy and cumbersome searches to find that unique path through the hierarchy of subclasses that would eventually lead to the desired datum:

The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain.

There is more than a hint of neural networks here, but that is an aside—albeit a fascinating one. Bush continues a few lines later:

Selection by association, rather than by indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively with [228] regard to the permanence and clarity of the items resurrected from storage. (Bush 1991a: 101-102)

It has been stressed by James Nyce that what, in Bush's view, made conventional systems burdensome and cumbersome to use was specifically their unnaturalness, and that the solution lay in attempting as far as possible to mimic nature, more specifically human nature.<14> This idea was nothing new to Bush, who had originally earned his reputation by inventing the Differential Analyzer, which solved differential equations through a process which, in Bush's view, mirrored those natural processes we describe in differential equations.<15> Nyce annotates the above passage as follows:

Conventional indexing systems then do not work because they are artificial and by definition inadequate. These systems represent human conventions of work, labor, and technology. They are cultural artifacts. In the end, they fail for just these reasons. These systems break down because they neither duplicate the mind nor its processes. (Nyce 1994: 419)

In further support of this statement Nyce quotes a later comment on the Memex made by Bush in 1959:

In bringing machines to man's aid, we have thus far built them on patterns which fitted the technical elements at hand and our habitual ways of doing things, rather than to cause them to imitate and extend the actual processes by which the brain functions. Many years ago I described a machine, called the Memex, which I conceived of as a device that would supplement thought directly rather than at a distance. (Nyce 1994: 419; Bush 1991b: 166)

The idea of extending the reach of the mind by the direct use of machinery that imitates the natural processes of the mind was Bush's core idea. How does this idea relate to the core ideas motivating Turing's research program?
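Bush's 'selection by association' lends itself to a schematic rendering in modern terms: records joined into named trails, with retrieval consisting in following a trail rather than consulting an index. The following Python sketch is purely illustrative; the class and record names are my own, not Bush's:

```python
# Hypothetical sketch of Bush's "selection by association": records are
# joined into named trails, and retrieval means following a trail rather
# than consulting an index. All names here are illustrative, not Bush's.

class Memex:
    def __init__(self):
        self.records = {}   # record id -> content
        self.trails = {}    # trail name -> ordered list of record ids

    def add_record(self, rid, content):
        self.records[rid] = content

    def link(self, trail, rid):
        # Append a record to an associative trail, creating it if needed.
        self.trails.setdefault(trail, []).append(rid)

    def follow(self, trail):
        # Retrieve the contents along a trail, in the order linked.
        return [self.records[rid] for rid in self.trails.get(trail, [])]

m = Memex()
m.add_record("turing1936", "On Computable Numbers")
m.add_record("bush1945", "As We May Think")
m.link("origins-of-computing", "turing1936")
m.link("origins-of-computing", "bush1945")
print(m.follow("origins-of-computing"))
```

A record may belong to many trails at once, which is precisely what a hierarchical index cannot easily accommodate.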

Comparisons and Contrasts

We have told two different stories about two very different notional machines, and the time has come to remind ourselves what these two stories have got to do with each other. The first observation to make is that, in today's technology, Turing's and Bush's research programs have to a large extent converged: the Turing machine is the ancestor of the inference engine under the hood of the personal computer (as of any other computer), while Bush's Memex is the ancestor of many of those [229] features we refer to, collectively, as the user interface. (It is, however, interesting to note the imperfection of the obvious analogy indicated by the simile 'under the hood': in fact neither the Memex nor the Turing machine contains any reference to the computer's physical engine, i.e. the electronic relays.) Bush and Turing had very different purposes in constructing their respective machines. In 1936 Turing was not apparently seeking to describe a machine that would actually be built. He realized, of course, that a universal machine could be built, and he soon became personally involved in the construction of Britain's first digital computer. But his purpose when writing 'On Computable Numbers' was a purely theoretical one: to solve the Entscheidungsproblem. Even without being physically constructed, Turing's hypothetical machine served as an intellectual aid to the solution of an abstract, metamathematical problem. Bush, by contrast, intended the Memex as a first approximation toward a blueprint of a machine, the development of which he advocated—in vain—as a practical and useful undertaking. So Bush's paper is rich in concrete descriptions of hardware—levers, screens, cameras, microfilm, etc.—none of which we find in Turing's paper. Turing does describe a paper tape, but it is the tape's logical features—discreteness and directionality—that matter, not what the tape is made of.
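The point that only the tape's logical features matter can be made concrete with a minimal simulator. This is a hypothetical sketch; the example machine shown (a unary increment) and its transition table are my own illustration, not Turing's:

```python
# A minimal Turing-machine sketch: only the tape's logical features
# (discreteness and directionality) matter, not its material.
from collections import defaultdict

def run(transitions, tape, state="start", accept="halt", steps=1000):
    # The tape is logically infinite; unwritten cells read as "0".
    tape = defaultdict(lambda: "0", enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == accept:
            break
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

# Illustrative machine: scan right over 1s; on the first blank,
# write a 1 and halt (unary increment).
increment = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "0"): ("halt", "1", "R"),
}
print(run(increment, "111"))  # -> "1111"
```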

There is one common thread in both constructs: what we have already referred to in Turing's construct as a pragmatic doctrine of meaning. Turing's objective in constructing his machine was to define the concept of an effective procedure, and his description of the action of the machine constitutes his definition of 'effective procedure': anything that can be computed by a Turing machine has an effective procedure, and nothing else does. More specifically, Turing's pragmatism is mechanistic: instead of defining 'effective procedure' in terms of what a human does when carrying out a computation, he defined it in terms of the behavior of a discrete, finite-state machine, and then argued that the behavior of this machine constituted an analogue to human computing behavior, capturing all the 'essential' features of the latter. A superficially similar pragmatism was evinced by Bush when he claimed that the student of his Differential Analyzer would gain 'a grasp of the innate meaning of the differential equation.' In Bush's view the dynamic re-enactment by the machine of the natural processes which we use differential equations to describe, provided a truer or deeper understanding of the equations than the study of their formal structure. Thus, at one point he claimed that a machinist who had helped build and manage the Differential Analyzer had thereby learned calculus without any formal study of the subject:

[230] I never consciously taught this man any part of the subject of differential equations; but in building that machine, managing it, he learned what differential equations were himself. ... he had learned the calculus in mechanical terms—a strange approach, and yet he understood it. That is, he did not understand it in any formal sense, but he understood the fundamentals; he had it under his skin. (Owens 1991: 23-24)

As Turing thought that the description of a machine capable of carrying out all effective procedures conveyed a complete understanding of the concept of an effective procedure, so Bush thought that the workings of a machine capable of solving differential equations provided an exceptionally deep understanding of the calculus. Conversely, in Bush's view, a machine whose function is to aid the human mind in its work, such as the Memex, would have to embody—i.e. reflect in its operations—a deep understanding of how the mind works.

But that is as far as the similarity goes and, as I have already hinted, this similarity is somewhat superficial. To Bush, as has been stressed by Larry Owens (1991), the pedagogical or hermeneutic power of the Differential Analyzer did not stem from the mere fact that it was a machine that solved differential equations, but from the fact that it was an analog device, which solved differential equations by mimicking the dynamical processes described by those equations. The Turing machine, by contrast, is a digital device, and, while Turing advocated its analogy to the human mind, this analogy was always presented as a formal and abstract one, as in the case we have seen of the steps that are essential to computation, whether done by a human or a machine. Human computers, we recall, may take breaks occasionally, and will then need to write themselves notes to remind themselves of exactly where in the computation they were; to accommodate this need for a record, the Turing machine 'writes itself a note' after every step, thus mimicking the extremely unlikely case of a human computer who takes a break after every step of the computation. Turing clearly had no interest in the question of whether the Turing machine and the mind solved problems by the same internal processes, so long as they both exemplified the same functional—i.e. input-output—relationships. This view is epitomized in the famous 'Turing Test' for intelligence, according to which a machine must be said to think if it fools human interlocutors in teletype conversations often enough that they correctly identify it as a machine no more than 70 per cent of the time. In his argument for this test Turing actually mentions the capability of digital computers to simulate differential analyzers to a sufficiently close degree of approximation; hence the human brain, in Turing's view, may well be an analog device, which can nonetheless be sufficiently closely simulated by a digital computer (Turing 1950: 451).

[231] The difference between Turing and Bush in this respect can be summed up by utilizing Fetzer's distinction, already referred to, between simulation and replication. Using this distinction, the Turing machine was intended to simulate the human mind, while the Differential Analyzer was intended to replicate dynamical processes in nature. The Memex, finally, was intended to replicate, not indeed the human mind, but one very important function of the human mind, namely memory, as is signified by its name (Nyce and Kahn 1991: 61-64). By linking text items and images in networks of multiple parallel associative trails, it would automate the kind of apparently spontaneous and intuitive retrieval processes of which the human memory is capable, and thus extend the reach of human memory in a way not accomplished by conventional indexing and cataloguing systems. Bush repeatedly made it clear that he did not believe machines could be made to think, but in his view they could effect tremendous savings of intellectual labor by replicating specific functions of the human mind, and thus supplement people's intellectual activities. What Bush in 1945 failed to see—what it took the singular genius of Douglas Engelbart to see—is that, once the Memex had been described with a sufficient degree of specificity, it could itself be simulated by a digital computer.

Finally, the contrast between the two machines—the Turing machine and the Memex—epitomizes the contrast between the two computer revolutions. Whether Turing's mechanism be true or false, it has often been observed that the acceptance of it can make us more like machines in the sense of basing our choices and actions only on reasoning that can be simulated by a machine, leaving out such elements as judgement, intuition, spontaneity, empathy, or compassion. And it has been argued by Joseph Weizenbaum (1976) and by David Bolter (1984) that our growing dependence on the digital computer has in fact to a large extent had this effect on us. In Bolter's phrase, we have become 'Turing's Man': utilitarian, calculating, and superficial. This may sound harsh, but the point can perhaps be illustrated by the prevalence in recent decades of cost-benefit analyses as tools for public policy making. In a cost-benefit analysis you identify all the groups affected by a proposed policy, you identify their various preferences, as well as ideally the respective strengths of those preferences, and you identify the expected costs and benefits, to each group, of implementing the policy. Adding all the benefits and subtracting all the costs finally tells you whether the policy yields a net social benefit, or, in the case of several proposed alternatives, which one yields the greatest net social benefit. This is admittedly crude; there are all sorts of wrinkles and refinements that can be introduced, and there is a technically highly sophisticated literature on the subject.<16> The bottom [232] line, however, is that (a) values can enter the equation only in the form of preferences held by groups affected by the policy, and (b) all preferences count equally.
So, for instance, in a discussion of whether to halt the deforestation in the Pacific Northwest, the concern for the environment cannot directly enter the cost-benefit analysis, but must be represented either as a psychological cost to the environmentalists or as a (discounted) loss in earning power to future generations of loggers, or perhaps as a combination of both. And in a discussion of policies to discourage smoking, the potential loss in earning power to tobacco growers and cigarette manufacturers is as much of a social cost as is the health loss to smokers. The size of each cost must obviously be estimated, but the value judgement that cigarette manufacturing is itself socially undesirable cannot be admitted into the analysis.
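The arithmetic involved is, at bottom, a single sum. A deliberately crude Python sketch of the procedure just described, with groups and dollar figures invented purely for illustration:

```python
# A deliberately crude cost-benefit calculation: values enter only as
# preferences of affected groups, and all preferences count equally.
# Groups and figures are invented for illustration.

def net_social_benefit(effects):
    # effects: list of (group, benefit, cost); the analysis simply sums.
    return sum(benefit - cost for _, benefit, cost in effects)

smoking_policy = [
    ("smokers",         50.0,  0.0),  # health gains from reduced smoking
    ("manufacturers",    0.0, 30.0),  # lost earnings count as a social cost
    ("tobacco growers",  0.0, 15.0),
]
print(net_social_benefit(smoking_policy))  # -> 5.0
# Note what cannot appear here: the judgement that cigarette
# manufacturing is itself undesirable has no cell in this table.
```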

The methodology of cost-benefit analysis (like other similar decision-support tools) divides policy evaluation into two parts: the data collection which can be carried out by pollsters, and the analysis, which can be cranked out by a computer. Now, it is easy to see how, honorable intentions notwithstanding, the methodology almost irresistibly seduces its practitioners into thinking about values only as someone's preferences—the way a computer would 'think' about values. Moreover, the greater the installed base of cost-benefit analysis software and other decision-support software of the same general type, the stronger the economic pressure within institutions to substitute cheap machine time for expensive person time in the evaluation and analysis which lay the groundwork for decisions and policy recommendations. And, of course, if one believes that what the computer does is 'think', the net effect will be to revise one's conception of what counts as thinking.

Well into the nineteen-eighties Weizenbaum deplored the personal computer as a further invasion of the machine into human affairs (Weizenbaum 1984). Since today's personal computers can do anything that could be done by a UNIVAC thirty years ago, there is of course no reason why one cannot use personal computers for the purpose of running cost-benefit analyses, or for any of the other purposes for which computers were habitually used in the nineteen-sixties. Undoubtedly thousands of personal computers are so employed. But the personal-computer revolution has not been driven—either technologically or in terms of market forces—by the desire for cheaper ways of doing what was already being done by mainframes. Instead, personal computers have been sold as vehicles for new types of software, such as spreadsheet, word-processing, desktop-publishing, and telecommunications—i.e. applications where the computer does not itself carry out any task, but instead assists the user in carrying out his/her tasks. The ancestry of the personal computer [233] in the Memex highlights what has been observed by Rheingold and Levy, among others, to wit, that the personal-computer revolution has been an exercise in humanizing technology by a deliberate effort to make the machine complement human thought and human work habits, in ways that the earlier generations of computers did not (Levy 1994; Rheingold 1991: 81; Norman 1990: 181-183). It is an empirical question what degree of success this effort has had to date. No doubt computer systems are still routinely designed and implemented whose operating principles derive from the way computers work, rather than the way people work—online library search systems, as currently implemented, are arguably an example.<17> However, to the extent that the effort to promote human-centered computing has been successful, this must be considered a reversal of whatever dehumanizing effects the first computer revolution may have had. 
We may actually be becoming 'Bush's People', rather than 'Turing's Man', but if we are, Turing will still have to be given the credit for his contribution to what was to become the enabling technology for the latter-day descendants of the Memex.

To sharpen this point a bit: we routinely use digital computers to run command-and-control systems, such as the cooling system in your car or the thermostat controlling the central heating in your home. But we do not think of a computerized cooling system as a type of computer; we think of it as a type of cooling system. Why, then, think of the personal computer as a type of computer, rather than as a computerized memex, putting the word in lower case to use it generically? Put differently, which is the more illuminating image: that of the universal simulator which has not been designed for any particular purpose of its own but which can simulate any sufficiently precisely described special-purpose machine, or that of a special-purpose simulator, specifically designed to mimic particular processes of the human mind? For the practical purposes of designing and programming computers, it is obviously valuable at certain stages to be guided by the concept of the Turing machine as a universal simulator, but the memex may well be a more valuable overall conception guiding the design.<18> The question remains, is there any ontological or epistemic compulsion to regard the personal computer specifically as a type of computer rather than as a computerized memex? Only this, as far as I can tell: reliable command-and-control systems existed prior to the digital computer, whereas memexes did not. So, as a matter of fact there are no memexes that are not computers, but we can nonetheless conceive of a memex that is not a (digital) computer, which is precisely what Bush did.

The conclusions of this section can be summarized in three points:

[234] First, the Turing machine and the Memex each provided an indispensable piece of the technology that has become known as the personal computer, which we may today opt to conceptualize either as a personal Turing machine or as a computerized memex;

Second, the two constructs are not rivals in the sense of offering conflicting solutions to the same problem; Bush and Turing were attacking entirely different problems, and so their respective solutions do not directly conflict with each other; but:

Third, the two constructs embody different conceptions of the human mind in general and of human-machine interaction in particular. These conceptions do come into conflict, as we have seen, and we will turn, finally, to a brief look at some of the philosophical issues involved in this conflict.

Two Paradigms of the Mind

Turing regarded the human being as essentially indistinguishable from a machine; Bush regarded the human being as essentially a machine user, and sought to construct symbol-manipulation machines that would be 'thinking machines' in the sense of machines to think with, not machines that think. While Bush's vision has served as the inspiration for a vast industry that is rapidly transforming our culture and society, Turing's vision has become the governing paradigm of the research program known as artificial intelligence (AI), and indeed for the entire interdisciplinary field known as cognitive science.<19> So pervasive is the influence of this paradigm that one frequently hears it said that the computational model is the only comprehensive and fully articulated model of the mind available.

There is, however, a different model of the mind available—one which, while not articulated by Bush, is fully supportive of the research program Bush initiated, the program today known as 'intelligence augmentation' (IA). The model I have in mind is one which was articulated in the nineteenth century by Charles Peirce, and which has recently been advocated by Fetzer as the semiotic model of the mind. As Fetzer puts it, the human mind is a semiotic system; i.e. a system capable of utilizing signs, in the Peircean sense of 'something that stands for something (else) in some respect or other for somebody.' (Fetzer 1990: 31; Peirce 1935: Vol. 1, §228; Skagestad 1992) There is thus an irreducibly triadic relation among the sign, that for which the sign stands, and the one for whom the sign stands for something. This relation of 'standing for' is not replicated in a Turing machine, which can manipulate tokens according to predetermined rules, [235] but which cannot take these tokens to stand for something else. This failure would not in itself have struck Turing as a shortcoming, since he was interested only in functional, i.e. input-output, relations. If a machine exhibited the same input-output relations as a human, the machine must be said to think, in Turing's view. Fetzer argues, however, that the sign-relation cannot be completely specified in terms of input-output relations; a semiotic system, therefore, is an indeterministic system, in the sense of a system 'for which, given the same input, one or another output within the same class of outputs invariably occurs (without exception).' A Turing machine or a digital computer, by contrast, is a deterministic system, i.e. a system 'for which, given the same input, the same output invariably occurs (without exception).' (Fetzer 1990: 37; cf. also Fetzer 1992) Fetzer's neo-Peircean articulation of the concept of signification in dispositional and probabilistic terms appears to this writer to provide an exceptionally strong basis for indeterminism.
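Fetzer's two definitions can be rendered schematically: a deterministic system returns the same output for the same input without exception, while an indeterministic system returns one or another output within a fixed class. The following Python contrast is my own illustration of the definitions, not Fetzer's:

```python
# Fetzer's distinction, rendered schematically. The functions and the
# output class {2x, 2x+1} are illustrative only.
import random

def deterministic(x):
    # Same input, same output, without exception.
    return x * 2

def indeterministic(x, rng=random):
    # Same input, one or another output within a fixed class:
    # here, {x*2, x*2 + 1}.
    return x * 2 + rng.choice([0, 1])

assert all(deterministic(3) == 6 for _ in range(100))
outputs = {indeterministic(3) for _ in range(100)}
assert outputs <= {6, 7}  # always within the same class of outputs
```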

Peirce's own reflections on the relationship between the mind and the logic machines of his day have been documented by Ketner, as well as by the present writer (Ketner 1984; 1988; Skagestad 1993b). Suffice it here to note two salient features of Peirce's thinking on the subject. First, Peirce thought that reasoning essentially consisted in the manipulation of signs, that consciousness was inessential to this activity, and that some types of reasoning—specifically the drawing of simple inferences—could be observed not only in logic machines, but in any kind of machine, including blocks of wood dragged through water by yacht designers to determine hydrodynamic properties. Second, though, he recognized that the logic of relations, i.e. the polyadic predicate logic, required a type of deduction he called 'theorematic', where 'it is necessary to experiment in imagination upon the image of the premiss in order for the result of such experiment to make corollarial deductions to the truth of the conclusion.' (Quoted by Hintikka 1983: 107; cf. also Peirce 1935: Vol. 5, §641) Jaakko Hintikka has noted, with particular reference to the predicate logic, that the realm in which theorematic deductions are made is the realm of mixed quantifiers, and he has defined a theorematic deduction as one which increases the number of layers of quantifiers (Hintikka 1983: 110). While Peirce, as far as I know, did not actually prove the claim that predicate logic includes theorematic deductions in his sense, it has, of course, been proven and is nowadays known as Church's Theorem. To say that the polyadic predicate logic requires theorematic deductions is to say that the construction of proofs in this system requires creative experimentation, i.e. that there is no algorithm for proof generation or, a fortiori, for determining provability.

[236] So, in Peirce's view, reasoning in the fullest sense of the word could not be represented by an algorithm, but involved observation and experimentation as essential ingredients. Thus, in a letter from 1887, he writes:

Formal logic centers its whole attention on the least important part of reasoning, a part so mechanical that it may be performed by a machine, and fancies that is all there is to reasoning. For my part, I hold that reasoning is the observation of relations, mainly by means of diagrams and the like. It is a living process. ... reasoning is not done by the unaided brain, but needs the cooperation of the eyes and hands. (Quoted by Ketner 1984: 208-209)

While this passage may sound like a rejection of formal logic, it was written by one of the pioneers of modern formal logic, and in fact Peirce went on to reform logic by inventing a graphical notation that would enable observation and experiments on iconic representations of one's premisses. Not only, by the way, did Peirce regard a graphical user interface as essential to logic; in 1906 he even opted to add color, in his 'tinctured existential graphs'.<20>

Peirce, who died in 1914, does not appear to have made up his mind whether a machine could ever be constructed which would be capable of reasoning in the full sense of making theorematic deductions—what Ketner calls a 'Peirce Machine'—although Peirce did note that the logic machines of his day, including Babbage's Analytical Engine, were incapable of this feat. The last passage quoted, which granted that the least important part of reasoning may be performed by a machine, implies that more important parts cannot—but, again, Peirce may have been referring only to machines then existing (cf. also Peirce 1935: Vol. 3, §618). The question whether future machines might be able to reason is explicitly left open in Peirce (1887). Today's digital computers, to make this point clear, can perform a wide range of theorematic deductions, utilizing what is known as heuristic problem solving, but one way of restating Church's Theorem would be that they cannot perform all possible theorematic deductions.

On the other hand, Peirce made it abundantly clear that human reasoning depends on a variety of machinery, including blocks of wood, cucurbits, alembics, pendulums, telescopes, and pens and ink. A sidelight on just how seriously Peirce took the external vehicles of reasoning as essential elements of the reasoning process is provided by his obituary of Charles Babbage, the inventor of history's first programmable computer, the Analytical Engine. After praising Babbage's engine as 'the most stupendous work of human invention,' Peirce goes on to instance Babbage's publication of a volume of logarithms, where Babbage tried [237] fifty different colors of paper, and ten of ink, before settling on the combination that was easiest to read (Peirce 1871: 459). From Peirce's perspective, Babbage's choice of the right paper-ink combination for his table of logarithms was itself as much a contribution to scientific knowledge as the computation of the logarithms themselves. Knowledge, in Peirce's semiotic doctrine, consists less in states of mind—'ultimate, inexplicable facts,' he once called them (Peirce 1935: Vol. 5, §289)—than in the potentiality of external objects to induce certain states of mind, and this potentiality depends on the specific physical characteristics of said external objects. Peirce never denied the existence of consciousness, but he did deny that consciousness is an essential attribute of mind (Peirce 1958: Vol. 7, §366). In an especially topical passage Peirce goes on to emphasize the dependence of our language faculty on external tools for linguistic expression:

A psychologist cuts out a lobe of my brain (nihil animale a me alienum puto) and then, when I find I cannot express myself, he says, 'You see, your faculty of language was localized in that lobe.' No doubt it was; and so, if he had filched my inkstand, I should not have been able to continue my discussion until I had got another. Yea, the very thoughts would not come to me. So my faculty of discussion is equally localized in my inkstand. (Peirce 1958: Vol. 7, §366)

Peirce is not here making the trivial point that without ink he would not be able to communicate his thoughts; nor, as far as I understand him, is he going to the other extreme of placing inkstands and brains exactly on a par as conditions for thought and discussion. The inkstand, he notes, can be replaced if stolen, and Peirce would undoubtedly grant that, in the absence of all inkstands, a pencil, a typewriter, or a word-processor would serve as well. His point is that some of his thoughts come to him in and through the act of writing, so that having writing implements is a condition for having certain thoughts—specifically those issuing from trains of thought that are too long to be entertained in a human consciousness. The above passage notes the mind's dependence on external hardware; elsewhere Peirce noted the dependence of abstract thought on such 'soft' technologies as notation, exemplified especially by the parenthesis as used in algebra (Peirce 1887; cf. Havelock 1963; Levinson 1988: 128-135). While this Peircean idea may not be explicit in Bush's writings, it was in fact later made explicit by Bush's intellectual heir Engelbart, in his Neo-Whorfian Hypothesis: 'Both the language used by a culture, and the capability for effective intellectual activity, are directly affected during their evolution by the means by which individuals control the external manipulation of symbols.' (Engelbart 1962: 24)<21> In seeking to devise new symbol-manipulation technologies—word-processors, mice, networks, etc.—so as to improve [238] human thought-processes, Engelbart was directly influenced by a theory which, unbeknownst to him, had been articulated by Peirce sixty years earlier and placed within the context of a comprehensive philosophical doctrine of signs. 
In Peirce's philosophy, then, we find not only, as Fetzer has noted, a model of the mind that constitutes a credible alternative to the computational model, but also a fully articulated view of reasoning that anticipates, yet is also more comprehensive than, the one later formulated by Engelbart, with inspiration from Bush. Peirce has long been recognized as the father of pragmatism, of modern formal logic, and of modern semiotic, among other things. The time may have come to recognize yet another Peirce: the philosopher of intelligence augmentation.

Conclusions

The Turing machine and the Memex, as we have seen, are not rival conceptions in the sense of offering competing answers to the same question. Turing's question—one of Turing's questions—was: how can a machine be constructed to mimic a certain class of human intellectual operations, of which computation is representative? Bush's question was: how can a machine be constructed to extend the reach of human memory both by improving our access to existing records and by constructing new records in a way that will make them more accessible? Turing wanted to model the most abstract function of the human mind, i.e. computing; Bush wanted to model the most concrete function of the mind, i.e. the recollection of particulars. And their respective answers, we have also seen, are to a significant degree complementary in terms of their respective contributions to the evolution of computer technology; specifically, Turing's articulation of the concept of a stored-program, digital, universal machine has proved the enabling technology for the personal workstation which is partially specified in Bush's description of the Memex which, for its part, lacked the essential digital component.

The two conceptions are, however, rivals in the sense of providing competing paradigms with which (or through which) to think about the computer and its counterpart, the human mind. If we adopt the Turing machine as our paradigm, we opt to regard the digital computer as essentially a mimicker of the human mind; we will then naturally regard each new triumph of computer technology as another step towards the complete replication of the mind by the machine, and thus also as one more confirmation of a mechanistic theory of the mind. This perspective, we have seen, is optional. If instead we adopt the Memex as our paradigm, [239] the function of a computer is no longer to duplicate the mind but to complement it. From this perspective, there is no temptation to regard advances in computer technology as providing evidence for mechanism. Nor, of course, has any refutation of mechanism been attempted here. Finally, the augmentationist perspective poses an urgent need for philosophical investigation of how the increasing complementation of the human mind by the machine elucidates the nature of the mind, and how it perhaps alters the mind. If this article provides some impetus towards such investigation, it will have served its intended purpose.

NOTES

The footnotes appear in print as endnotes on pp. 239-241; they are reproduced below, following the references.

[241] REFERENCES

Baker, Nicholson. (1994). Annals of Scholarship: Discards. The New Yorker, April 4, 64-86.

[242] Ketner, Kenneth L. (1981). The Best Example of Semiosis and its Use in Teaching Semiotics. American Journal of Semiotics, 1(1-2), 47-83.

Ketner, Kenneth L. (1984). The Early History of Computer Design: Charles Sanders Peirce and Marquand's Logical Machines (with the assistance of Arthur F. Stewart). The Princeton University Library Chronicle, 45(3), 187-211.

Turing, Alan M. (1965). On Computable Numbers, With an Application to the Entscheidungsproblem. In The Undecidable, Martin Davis (ed.), 116-154. Hewlett, NY: Raven Press. Originally published in Proceedings of the London Mathematical Society, 2nd Series, 42 (1936), 230-265.

van Brakel, J. (1993). The Plasticity of Categories: The Case of Colours. British Journal for the Philosophy of Science, 44, 103-135.

van Brakel, J. (1994). The Ignis Fatuus of Semantic Universalia: The Case of Colour. British Journal for the Philosophy of Science, 45, 770-783.

Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgement to Calculation. San Francisco, CA: Freeman.

Weizenbaum, Joseph (1984). The Computer Fallacy. Harper's, 268(1606), 22-23.

NOTES

<1> For helpful critical comments on earlier drafts, the author is indebted to Howard DeLong of Trinity College, Robert Innis of the University of Massachusetts - Lowell, and James H. Nyce of Ball State University.

<2> On literacy and the printing press, cf., respectively, Havelock (1963) and McLuhan (1962). The parallel between the personal-computer revolution and those earlier transformations has been noted, e.g., by Levinson (1988), and by Heim (1987).

<3> The philosophical outlook of IA is discussed in detail in Skagestad (1993b), where I also show in detail how a strikingly similar philosophical perspective has more recently been formulated by Karl Popper (1972) in his 'World 3' epistemology.

<4> Such a machine is described in detail in Hopcroft (1984). An excellent more recent account of Turing Machines is found in Penrose (1989), Ch. 2.

<5> Charles Babbage's Analytical Engine, though never completed, gave a complete blueprint for a programmable computer, and some outline programs were written in the eighteen-forties by Babbage's associate Ada Augusta, Lady Lovelace, reputedly history's first programmer; cf. Bernstein (1981: 50-51).

<6> My Macintosh is in fact literally equipped to simulate a wide variety of Turing machines, thanks to the tutorial program packaged with Barwise and Etchemendy (1993). Of greater practical importance, the computer can simulate a DEC terminal, a feature which combined with a modem enables me to connect to my university's computer and, through it, to the Internet. Indeed, without 'terminal emulation', as it is called, the Internet could not have been realized in its current cross-platform incarnation.
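The kind of simulation described in note 6 is easy to convey in code. The following is a minimal sketch of a single-tape Turing machine simulator in Python; the state names, the sparse-tape representation, and the example program (a unary incrementer) are the present illustration's own assumptions, not taken from the Barwise and Etchemendy tutorial.

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=10_000):
    """Run `program` until it reaches the 'halt' state.

    `program` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (step left) or +1 (step right).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: append a '1' to a unary numeral.
increment = {
    ("start", "1"): ("1", +1, "start"),  # scan right over the 1s
    ("start", "_"): ("1", +1, "halt"),   # write a 1 at the end, then halt
}

print(run_turing_machine(increment, "111"))  # prints "1111"
```

The transition table is the entire "machine"; everything else is bookkeeping, which is precisely why a general-purpose computer can simulate any such machine by swapping in a different table.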

<7> This definition first appeared in Baldwin (1902), Vol. 2, p. 763. On the Scotistic origins of the concept of virtuality, cf. Heim (1993), pp. 132, 160.

<8> For a sampling of opposing views on this issue, cf. Lucas (1961), Putnam (1961), Webb (1983), and Ketner (1988).

<9> We shall here rely on the definitive composite version reprinted in Nyce and Kahn (1991: 85-110). For an in-depth citation context analysis with an extensive bibliography, cf. Smith (1991). Nyce and Kahn (1991) also includes a wealth of secondary material, which has proven extremely valuable to the present author.

<10> The story of these visionaries has been brilliantly told in Rheingold (1985); for a philosophical discussion stressing the intentional character of the personal-computer revolution, cf. Heim (1987). Without going into the issue in detail, I must here forestall a possible misconception that might arise. One of these visionaries, Douglas Engelbart, has never shared the view that computers ought to be usable without special training; on the contrary, he has always stressed the enormous payoff in intellectual leverage that will result from a comparatively small investment in training. Cf. especially Engelbart (1988), which contains the analogy with the tricycle, which you can use without training, versus the bicycle, which will actually take you places after you have learned how to use it.

<11> For the sake of the historical record it should be noted that Mendel's personal failure to advocate his ideas vigorously and so bring them to the attention of the scientific community has been documented by Olby (1966), esp. Ch. 5. Of related interest, cf. Kim (1994), with a Foreword by Donald Campbell and Commentaries by Robert Olby and Nils Roll-Hansen.

<12> On the history of the ARPANET, see Roberts (1988).

<13> One recent commentator goes so far as to credit Bush with originating the concept of the personal computer: 'Obviously, [Bush] was thinking about the electronic computer, a monstrous, number-crunching machine just being developed at the time he wrote [1945]. Bush understood, as did very few in those days, that the underlying technologies of this tool were not limited to a more efficient duplication of roomsful of human calculators. He envisioned these machines as processing symbols as well.' (Levy 1994: 32-33) As we have just seen, Bush did not understand or envision anything of the sort.

<14> This perspective on artifacts in general has been developed in great detail by Donald Norman (1990) and (1993).

<15> This 'mimetic' character of Bush's earlier Differential Analyzer has been stressed by Larry Owens (1991). However, according to Nancy Stern the Differential Analyzer did not actually differentiate, but solved differential equations by integrating; cf. Stern (1981: 17). This is also noted by Owens, but without any explicit discussion of what this might mean for the role of the analyzer specifically as a model.

<16> An excellent starting point for the study of cost-benefit analysis is Stokey and Zeckhauser (1978).

<17> The unnaturalness of online library searches is argued by Baker (1994) and by Heim (1993: 13-27). The present author's opinion is that there is nothing in the binary logic of computers per se that necessitates subjecting the user to the rigidity of binary search trees; browsing, hypertextual search modes are an option, though presumably a more expensive one. It is not clear to me whether either Baker or Heim would disagree; Heim discusses hypertext (1993: 28-39), but not in this context.

<18> Of relevance here is Borenstein (1991), which does not, however, directly reference Turing or Bush.

<19> In a manifesto-like article John Haugeland (1981) has claimed that adherence to an information-processing model constitutes a prerequisite for membership in the cognitive-science community. For a cogently argued dissenting view, see Fetzer (1990).

<20> An exceptionally clear exposition of the elementary part of Peirce's logical graphs is found in Ketner (1981). More recently, Ketner has brought Peirce's graphs to the Apple II computer in the tutorial program bundled with his (1990). Peirce's tinctured existential graphs are introduced in his 'Prolegomena to an Apology for Pragmaticism' (Peirce 1935: Vol. 4, §§530-572).

<21> The cognitive role of the purely serendipitous weight-shape combination of the pencil is observed by Engelbart in this context and has later been documented in detail by Henry Petroski (1989), esp. p. 27. It has been pointed out to me by Donald Campbell (personal communication) that the Neo-Whorfian Hypothesis, as stated by Engelbart, is logically independent of the Sapir-Whorf Hypothesis, which inspired it and which may or may not be defensible in the strong form in which it is customarily stated. Of considerable relevance to the Sapir-Whorf Hypothesis is the recent discussion between J. van Brakel and C.L. Hardin over the 1969 Berlin & Kay study that is sometimes cited in refutation of the hypothesis; cf. van Brakel (1993), Hardin (1993), and van Brakel (1994), which contains an extensive bibliography. I am also indebted to Campbell for drawing this discussion to my attention.

END OF: Peter Skagestad, "The Mind's Machines: The Turing Machine, the Memex, and the Personal Computer"
