In the mid-1950s, the French theorist Jacques Lacan had the prescience to proclaim that the calculating machine is “far more dangerous for man than the atom bomb.” (Jacques Lacan, The Seminar of Jacques Lacan, Book II, W. W. Norton: London and New York, 1988.) This enigmatic remark about the computer seems to imply more than it can spell out about the conditions of postwar American technocratic domination of the world, where the algorithmic machine, the A-bomb, the H-bomb and other technologies were being built by the same academic-military-industrial complex and often designed by the same scientists. It is by no means clear, however, where the dangers of the algorithmic machine lay in its early years of development until we begin to reassess the fundamentals of digital technology. Over the past few decades, we have witnessed a proliferation of digital media and the formation of a planetary technosphere that grows inextricably entangled with the biosphere and other networked forces.

If my hypothesis is correct, namely, that humans are evolving to resemble the intelligent machines they invent even as engineers build robots to behave increasingly like humans, then one outcome of this ceaseless feedback loop is the emergence not just of cyborgs but of a new species of what I call the “Freudian robot,” which is evolving from Homo sapiens before our eyes. In this essay, I suggest that the key to our understanding of human–machine simulacra – not to be understood in terms of human–machine rivalry, which merely reasserts the human will to power and fails to explain much – lies in our peculiar and shared aptitude for processing discrete symbols in a finite system of writing. This system of writing has come to structure both the algorithmic machine and the human unconscious as networked systems. To understand how this happened, we must excavate a political history of information theory that ranges beyond the discipline of communication studies while benefiting from the insights of humanistic research.

The 27th Letter in the English Alphabet

Until Claude Shannon, then a mathematician at Bell Laboratories, added a letter to the English alphabet in 1948, no one had suspected that the phonetic alphabet was less than perfect. Like Newton’s apple, the 27th letter, which codes “space” as an equivalent but non-phonetically produced positive sign, laid the first stone in the mathematical foundation of information theory. […]

While developing information theory, Shannon made a careful study of Morse code and the cryptographic inventions of the past. His attention was drawn to the curious fact that Morse code consists of more than just dots and dashes, for letter spaces and word spaces must be factored into the sequences of signals transmitted through a discrete channel. Each sequence is constrained by a finite number of possible states, and only certain symbols from the set can be transmitted in each state. He demonstrates this process by showing that, in the transmission of a telegraphic message, “There are two states depending on whether or not a space was the last symbol transmitted. If so, then only a dot or a dash can be sent next and the state always changes. If not, any symbol can be transmitted and the state changes if a space is sent, otherwise it remains the same.” (See Claude E. Shannon and Warren Weaver, The Mathematical Theory of Communication, University of Illinois Press: Urbana, Illinois, 1949, p. 43.) […] The “space” letter, therefore, must be taken as a positive letter in Printed English rather than a mere word divider of the kind commonly observed in modern and in some ancient writing systems, such as Akkadian cuneiform. […]
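Shannon’s two-state description can be made concrete in a few lines of code. The sketch below is my own illustration rather than anything in Shannon’s text: it names the four telegraph symbols explicitly and checks a sequence against the constraint that a space may never follow a space.

```python
# A minimal sketch of the two-state constraint Shannon describes for
# telegraphy. The symbol names and function names are illustrative.

SYMBOLS = ["dot", "dash", "letter_space", "word_space"]

def allowed(after_space):
    """Return the symbols permitted in the current state.

    after_space is True when the last symbol transmitted was a space;
    in that state only a dot or a dash may follow.
    """
    if after_space:
        return ["dot", "dash"]
    return SYMBOLS  # otherwise any symbol may follow

def next_state(symbol):
    """The state records whether the symbol just sent was a space."""
    return symbol in ("letter_space", "word_space")

def valid(sequence):
    """Check a sequence of symbols against the two-state constraint."""
    after_space = False
    for symbol in sequence:
        if symbol not in allowed(after_space):
            return False
        after_space = next_state(symbol)
    return True

print(valid(["dot", "dot", "letter_space", "dash"]))         # True
print(valid(["dot", "letter_space", "word_space", "dash"]))  # False: two spaces in a row
```

The point of the toy model is simply that the spaces are full members of the symbol set, subject to the same finite-state bookkeeping as the dots and dashes.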

What is Printed English?

The term “Printed English” first appeared in Shannon’s seminal essay “Prediction and Entropy of Printed English,” where it is used interchangeably with “statistical English” and refers to the 27-letter English alphabet with a definable statistical structure. (See Shannon and Weaver, pp. 237–256, among others.) Mirroring cryptography, Printed English has a corresponding, translated text in numerical symbols, where the symbolic correspondence is between the 27 letters and their numeral counterparts rather than between the letters and their phonemic units as understood in modern linguistics. From a mathematical viewpoint, our implicit knowledge of the statistical structure of language is convertible into a set of numerical data with the help of simple experiments. […]
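As a toy illustration of this letter–numeral correspondence, the following sketch pairs the 27 letters with numerals and estimates a first-order entropy from single-letter frequencies. The sample sentence is mine, not Shannon’s, and a serious estimate would require a large corpus rather than one line of text.

```python
# Printed English as a 27-symbol code: the alphabet A-Z plus the space,
# each paired with a numeral, and the first-order entropy
# H = -sum p(i) * log2 p(i) estimated from single-letter frequencies.
from collections import Counter
from math import log2

ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # 27 letters; space is letter 0
CODE = {letter: number for number, letter in enumerate(ALPHABET)}

def encode(text):
    """Translate Printed English into its numeral counterpart."""
    cleaned = "".join(ch for ch in text.upper() if ch in CODE)
    return [CODE[ch] for ch in cleaned]

def first_order_entropy(text):
    """Estimate H in bits per letter from single-letter frequencies."""
    numerals = encode(text)
    counts = Counter(numerals)
    total = len(numerals)
    return -sum((n / total) * log2(n / total) for n in counts.values())

sample = "THE SPACE IS A LETTER IN PRINTED ENGLISH"
print(encode(sample)[:10])   # the numeral translation of the first letters
print(round(first_order_entropy(sample), 2), "bits per letter")
```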

In Shannon’s experiment with the stochastic structure of randomly chosen texts, the human subject was allowed to use statistical tables, a dictionary, a list of frequencies of common words, a table of the frequencies of initial letters in words and other aids, and he or she would be asked to guess the text, letter by letter, for each sample. This and other guessing tests suggest that the predictability of English depends far more on the “space” letter than on any other letter in the alphabet. […]

Shannon’s guessing game presumes a directionality that is linear and irreversible; namely, one predicts the next letter on the basis of what comes before. At each position, the human subject is asked to guess what he or she considers the most probable next letter in view of the preceding text. Whenever an error is made, he or she guesses again until arriving at the correct letter. Here, something important happens in the evolution of the postwar technosphere. For the first time, the psychic is introduced into the digital machine even as the digital is speculated to structure psychic processes. Though Shannon’s intention is not to speculate about the unconscious, his guessing games rely on the unconscious to reveal the symbolic processes that mathematically govern the structure of Printed English and are, therefore, methodologically related to the word-association experiments Carl Jung and other psychoanalysts conducted at the turn of the 20th century. […]
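The mechanics of the game can be simulated, with a crude bigram table standing in for the human subject’s implicit statistical knowledge. Everything in the sketch below (the training sentence, the names, the tiny scale) is illustrative rather than a reconstruction of Shannon’s actual experimental setup; it records only the quantity his experiment turns on, namely how many guesses each letter requires.

```python
# The letter-guessing game with a bigram model as the "subject": for
# each position it proposes letters in order of conditional frequency
# given the preceding letter, and we record the guess count.
from collections import Counter, defaultdict

ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def bigram_table(corpus):
    """Count, for every letter, the frequencies of the letter that follows."""
    table = defaultdict(Counter)
    text = "".join(ch for ch in corpus.upper() if ch in ALPHABET)
    for prev, cur in zip(text, text[1:]):
        table[prev][cur] += 1
    return table

def play(table, text):
    """Return the number of guesses needed at each position of text."""
    guesses = []
    for prev, cur in zip(text, text[1:]):
        # propose letters from most to least frequent after `prev`
        ranked = [ch for ch, _ in table[prev].most_common()]
        ranked += [ch for ch in ALPHABET if ch not in ranked]
        guesses.append(ranked.index(cur) + 1)
    return guesses

corpus = "THE SPACE IS A LETTER AND THE LETTER IS A SIGN IN THE SYSTEM"
table = bigram_table(corpus)
print(play(table, "THE SIGN"))  # low guess counts mean high predictability
```

The sequence of guess counts is itself a text: a highly predictable passage reduces to a run of ones, which is one way of seeing why redundancy and predictability are two names for the same statistical fact.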

Does this mean that a new Tower of Babel, or a shared basement of universal communication, is once again on the horizon? One of the first attempts to answer that question came not from the obscure corner of machine-translation projects, which got off to a slow and painful start, but from the fast-growing research programs on the genetic code in molecular biology and from the philosophical responses to them.

The Genetic Code and Grammatology

Jacques Derrida, for example, lays out the fundamentals of his new grammatology by introducing the concept of the “grammè.” What is the grammè? Derrida explains it by linking it to the processes of information within the living cell and to the cybernetic program. Taking these elementary processes of genetic inscription as instances of generalized writing, he speculates: “If the theory of cybernetics is by itself to oust all metaphysical concepts – including the concepts of soul, of life, of value, of choice, of memory – which until recently served to separate the machine from man, it must conserve the notion of writing, trace, grammè [written mark], or grapheme, until its own historico-metaphysical character is also exposed.” (See Jacques Derrida, Of Grammatology, Johns Hopkins University Press: Baltimore, 1976, p. 9.) Derrida made these remarks about the grammè at a time when the young discipline of molecular biology was busy importing the cybernetic tropes of coding, decoding, message, messenger and so on from information theory. It was also a time when the four-letter digital system of DNA and the mathematical correlations between nucleic acids and amino acids were catapulting molecular biology into the respectable ranks of the hard sciences. […]
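The “code” in genetic code is meant quite literally: the four-letter DNA alphabet is read in non-overlapping triplets, each triplet standing for an amino acid, much as a cipher maps one discrete symbol set onto another. The fragment below shows a handful of the 64 standard codon assignments; the sequence being translated is an arbitrary example of my own.

```python
# A minimal sketch of genetic inscription as discrete writing: triplets
# of the four-letter DNA alphabet looked up in a (partial) codon table.
CODON_TABLE = {
    "ATG": "Met",   # methionine, the usual start signal
    "TTT": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TGG": "Trp",   # tryptophan
    "TAA": "STOP",  # a termination signal, not an amino acid
}

def translate(dna):
    """Read a DNA string as non-overlapping triplets and decode each one."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(codon, "?") for codon in codons]

print(translate("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'STOP']
```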

The Psychic Machine

The code switch in the life sciences, linguistics and other disciplines has done more than substitute one set of scientific idioms for another in the manner of a “sustained metaphor.” The truth is that alphabetical writing must shed its age-old image of phonetic symbolism to become a “speechless” code for the purpose of universal inscription. This new system of ideographic inscription includes letters, numbers, spaces and so on, and excludes phonetic and verbal expressions.

However, the mathematical reasoning that supplies the symbolic logic and code to the cybernetic machine is not in itself the raison d’être of digital media. Something else is going on besides the zeros and ones within the architecture of the computer, because the central problem of the digital revolution, as I have argued in The Freudian Robot, is the recasting of the mind as a psychic machine. This recasting requires a certain techne of the unconscious, including biological processes, that embodies the concept of stochastic processes – as amply demonstrated by Shannon’s guessing games – as well as definitive procedures to capture those processes, such as coding in Printed English. The danger suggested by Lacan at the outset of my essay lies, therefore, in the broad cybernetic spaces opened up between the neural nets of the brain and the computing machine, or in what is digital and computable in the psychic machine of the future.