Creatures, Creators, and their Research Horizons

When Petar Jandrić interviewed me for the first volume of Postdigital Science and Education, he asked me how I understood the term ‘postdigital’. I immediately referred to Erwin Schrödinger’s (1944) set of Dublin lectures, entitled ‘What Is Life?’ (Fuller and Jandrić 2019: 215). The lectures are rightfully known for having popularised the idea of a ‘genetic code’, which inspired a generation of physicists, chemists and mathematicians to migrate to biology, where they instigated the ‘molecular revolution’, starting in the 1950s and continuing to this day. The lectures remain a paradigm case of interdisciplinary persuasion with radical transformative intent (Ceccarelli 2001: Chaps. 4–5). The feat seems especially impressive, given that an even more celebrated physicist, Niels Bohr, had urged his colleagues to turn to genetics ten years earlier in the world’s top scientific journal—but to little effect (Bohr 1933). But perhaps that is not so surprising. Bohr had used the opportunity to extend his famed ‘complementarity’ thesis into biology, but in a way that understandably discouraged researchers. The basic point about complementarity in physics is that a quantum of light can be understood as either a wave or a particle but not both at once. To take a measurement is to select one perspective and exclude the other. By analogy, Bohr argued that knowledge of an organism’s overall organization and its precise chemical composition may also be mutually exclusive, which could be read as implying that half of the available knowledge about an organism is lost while it is still alive (shades of the famed ‘Schrödinger’s cat’ thought experiment). In contrast, Schrödinger assumed that an organism’s unique level of physical complexity implies that its design is, so to speak, ‘quantum-proof’, which allows us—at least in principle—to understand its workings without requiring Bohr-style paradoxes and mysteries.

Nowadays, we would say that Schrödinger treated the difficulties thrown up by the complexity of organisms as an ‘optimization’ problem of matching function to form. This style of thinking first came into fashion in late seventeenth century Europe, as a strategy for rationalizing the apparently bad in nature as a good in disguise, a physical limitation as an opportunity for the demonstration of spiritual strength. The trick was to change one’s frame of reference from creature to creator—that is, from the naïve human observer to ‘God’s point of view’. The strong Protestant interpretation of humans having been created ‘in the image and likeness of God’ had become the default position in Christendom, even among many who remained Catholic. Part of this strong interpretation was that we start in a ‘fallen’ state, due to Original Sin, which in turn inspires the need to turn so sharply away from the point of view of the naïve human observer. Moreover, this shift invited believers to speculate about why God has allowed things to be as they are, as a means to deepen their relationship to the deity (Harrison 2007). In the history of philosophy, this style of thought is normally presented as early modern ‘rationalism’ (Nadler 2008). It was associated with what Leibniz at the time dubbed ‘theodicy’ and Voltaire later satirised as ‘optimism’—and still later Hegel and Marx promoted with a deeper sense of irony as the ‘cunning of reason’ in history.

The optimizing mentality—along with its theological undertow—persisted well into the nineteenth century, influencing fields ranging from physics to economics. More to the point, Charles Darwin read William Paley’s similarly inspired arguments for the ‘irreducible complexity’ of organisms as a demonstration of God’s intelligent design of nature—and on that basis rejected them. A century later, Erwin Schrödinger proposed his own secular and scientifically updated version of Paley-like premises—but now to provide job opportunities for those trained in the physical sciences to make proper human sense of that same peculiar optimality of form to function in organisms. Unlike the ‘Neo-Darwinian’ Richard Dawkins, Schrödinger did not take the presence of design in nature to be an illusory residue of religion. On the contrary, he took it to be a challenging frontier for scientific research. It turns out that my own initial study of Schrödinger’s lectures dates from a period in my career (around 2005–2006) when I was heavily engaged in both the defence of intelligent design and the study of the ‘converging technologies’ science policy agenda being advanced on both sides of the Atlantic, which heralded ‘nanoscience’ as the foundational discipline to radically improve the human condition by promising biomedical interventions at the physically smallest levels that still manage to avoid quantum instability (Fuller 2011: Chap. 3). It is difficult to read ‘What Is Life?’ today without imagining the prospect of such an agenda.

Nature and Scripture: a Unified Theory?

The ‘Neo-Darwinian synthesis’, which was being forged at the time of Schrödinger’s lectures, was required because genetics as a mathematically based experimental science was developed independently of Darwin’s original work. Indeed, there was little in Darwin to suggest that heredity could ever be understood in such an exact fashion, and most of Darwin’s intuitions about the workings of heredity turned out to be wrong. It is perhaps telling that the founder of genetics, Gregor Mendel, was a monk in the Catholic order dedicated to St Augustine, the early Church Father who placed imago dei (i.e. humanity’s creation in the ‘image and likeness of God’) at the heart of Christian doctrine. That doctrine began a long and often turbulent history whereby Christianity—as well as the other Abrahamic religions, Judaism and Islam—tried to come to terms with nature as both the medium and the message through which we communicate with God (Fuller 2008).

Thus, when Galileo said that nature is a book written in the language of mathematics, he was being only mildly provocative. The idea that nature is a book written by God had already been present in early Christian doctrine for at least a thousand years, though its exact meaning depended on the Church Father consulted. However, the idea acquired a specific meaning once Aristotle became the authoritative source on nature in thirteenth century Christendom, following the Latin translation of his works from Arabic. The Muslims had preserved Aristotle in this usable form once Greek had lost its standing as an international language. It would only be in Galileo’s own day that Europeans began to understand what Aristotle had said in the original Greek. In the meanwhile, Aristotle as Latinized by Thomas Aquinas was Christendom’s designated reader of the book of nature—and its language was most definitely not mathematics, which Aristotle regarded as simply the art of counting and measuring. Indeed, Aristotle was largely responsible for branding the followers of Pythagoras as ‘mystics’ due to the metaphysical significance that they attached to mathematics.

A good way to understand the period in which Galileo lived—what Jacob Burckhardt dubbed the ‘Renaissance’—is in terms of the repurposing of Plato to repair Aristotle’s reputational damage to Pythagoras. However, this requires translating ‘Renaissance’ not as rebirth but as remembering. ‘Rebirth’ suggests a reiteration, as in the cyclical visions of history prevalent in the pagan world—to which Aristotle gave his own ‘scientific’ justification by referring to an organism’s life cycle. In contrast, ‘remembering’ suggests a return to the beginning and a piecing together of what had been lost along the way. This suits the Christian Platonism that had been dominant before the injection of Aristotle into Christendom and which returned to the fore—with a technological vengeance—during the Renaissance. The missing link is the connection between Plato’s distinctive view of knowledge as a form of remembering, anamnesis, and the equally distinctive Christian doctrine of humanity’s fallen state, as exemplified in the babble of mutually incomprehensible tongues in search of some common underlying meaning.

We normally associate the Renaissance with the revival of the study of the ancient languages, including both Greek and Hebrew, which led to a gradual downgrading of Latin’s epistemic standing in the modern period. But the most ambitious scholars of the time sought something deeper: a system of disciplining the mind—the ‘art of memory’—that might permit recovery of the original language of thought, the logos, to which we somehow still had subconscious access, perhaps in dream symbolism. This original language was generally identified with mathematics, understood not in Aristotle’s limited practical sense, but in expansive Pythagorean terms as the art of the possible—the likely source of Plato’s attraction to Pythagoras. Galileo was very much on this intellectual wavelength. But equally, he took note of the ill fortune of his failed rival for the chair in mathematics at the University of Padua, Giordano Bruno. Because Bruno focussed on the hidden potential of the human mind, he was widely seen as concerned less with understanding nature as it is than with creating nature anew in a manner that would make us rivals to God. For this radical metaphysical repositioning of the human, he and his books were burned at the stake (Yates 1966).

Galileo escaped Bruno’s fate by making a good faith effort to fight Aristotle on his own turf—namely, by providing a better account of nature as it normally appears to us. Galileo’s own appeal to the untapped powers of the human mind was muted by Bruno’s standards. Galileo simply sharpened the method of analysis and synthesis that the late medieval philosophers had already deployed in their thought experiments, supplemented by an upgraded version of an already existing technology, the telescope. To be sure, that was not quite enough to spare Galileo the condemnation of the Papal Inquisition. Nevertheless, he managed to escape with his life, albeit under house arrest in Florence. There he met with Thomas Hobbes, who was then best known for his service as Francis Bacon’s personal secretary. Bacon, himself a Protestant admirer of Galileo, had ventured the hypothesis that God wrote two books that offer complementary perspectives on the same reality. Their titles are ‘Nature’ and ‘Scripture’ (Harrison 1998).

To be sure, this suited Bacon’s own role in the court of England’s King James I, which involved overseeing the authorized English version of the Bible, while planning a strategy for organized inquiry in a disorganized Christendom. And it is easy today to map Bacon’s dual mission onto a Stephen Jay Gould-style doctrine of ‘non-overlapping magisteria’, which treats science and religion as ‘separate but equal’ domains, to recall the old US euphemism for racial segregation performed in the name of equal treatment under law (Gould 1999). Yet, there remains the inconvenient matter of Bacon’s strong belief that all of humanity remains tainted by the Fall of Adam. This suggests that Nature and Scripture cannot be easily harmonized but require some metalanguage that puts both in a different and clearer light. Bacon believed that his own experimental method offered the prospect of just such a metalanguage, one that controlled for what he memorably called ‘idols of the mind’, the various biases and limitations of human inquirers as they approached either Nature or Scripture. Moreover, in keeping with Galileo’s rather than Aristotle’s view of mathematics, the results of this new form of inquiry could be systematically presented and grasped so that all could know the state of knowledge in any field at any moment.

The Mystery of Creation and the Mystery of Integrity

This elaborate scene-setting is relevant to Schrödinger’s lectures, which triggered within biology a ‘Renaissance’ in the deep sense just outlined. Schrödinger himself was certainly no Bruno but more Galileo than Bacon. Like Galileo, he overstepped not only his own personal knowledge but also what was scientifically known at the time. Moreover, he did not propose an entirely new methodology but only suggestions, delivered in arresting metaphors and thought experiments. But, as with Galileo, those who heard and read Schrödinger knew what to do to turn his vision into a reality. Just as the vast majority of Galileo’s followers were Protestant, not Catholic, Schrödinger’s followers were not professional biologists but chemists, physicists—and mathematicians. They launched the molecular revolution in biology. Indeed, the term ‘molecular biology’ had been coined in the 1930s by the Rockefeller Foundation’s research director, the mathematician Warren Weaver, who is perhaps best known for the Shannon-Weaver model of information but who also supported the funding of the Cavendish Physics Lab at Cambridge University where DNA’s double helix structure was discovered in the 1950s—thereby converting Schrödinger’s dream into a reality.

Just as Protestantism ushered in the seventeenth century Scientific Revolution that Galileo intellectually spearheaded, the Rockefeller Foundation put wind in the sails of the molecular revolution, in which Schrödinger functioned as the Galileo figure. The exemplar of this change was Francis Crick, who after reading Schrödinger’s lectures redeployed his physics training to study biology, recognizing that it was a field ripe for the sort of breadth and rigour that physics had to offer. Not only did Crick go on to co-discover the double helix model of DNA but he also tried to consolidate the study of the human brain as a proper ‘neuroscience’ on broadly physicalist principles that nevertheless respected the brain’s biological exceptionalism. Just as Schrödinger had thought that the unique physical features of organisms required new physical laws, Crick thought that something similar might apply to the human brain. This explains his sceptical response to suggestions that the brain might be an inefficient computer (Crick 1994). A younger fellow traveller of Crick’s was the Oxford mathematical physicist Roger Penrose, perhaps best known as Stephen Hawking’s rival, but whose career has increasingly focused on trying to show that the physics of neuronal conductivity is optimized to enable the collapse of the quantum wave function into a form of information that can be readily utilized by the human organism, aka ‘consciousness’ (Penrose 1989). Not surprisingly, Penrose penned the preface to the most recent (1991) English edition of Schrödinger’s lectures.

Schrödinger’s intervention in the history of biology highlighted a fact that had been obscured by Darwin’s revolution. The field’s big visions had been dominated by natural historians, who generally took the problem of life to be soluble in terms of understanding a species’ reproductive process. This was because the primary fact about organisms—observable to the naked eye—is that they undergo a clear life cycle, which involves conception, maturation, decline and death. Thus, for a species to survive, its member organisms must reproduce effectively. This much united Aristotle and Darwin—and most of those who came between them. None of these figures asked Schrödinger’s question: How do organisms maintain their integrity throughout this life cycle, such that they can undergo significant changes in form and function, while retaining the capacity to reproduce themselves? For Schrödinger, the question ‘What Is Life?’ is answered by understanding the physical specificity of the organism as a kind of entity—as if one were trying to construct such an entity from scratch, given our current knowledge of physics.

Aristotle and Darwin simply presumed the ‘naturalness’ of organisms in a sense that implies no special need to explain their origins as a kind of thing. This helps to explain why neither took the idea of ‘creation’ as a fact or process very seriously. For Aristotle’s followers, both pagan and pious, an empty conceptual appeal to the primum movens (‘first mover’) has sufficed, while Darwin’s followers have imaginatively deployed the probability calculus and the ever-extending timeframe of evolution to suggest—but never prove—that with enough time, organisms could have arisen from a ‘primordial soup’ in such a backhanded way. With his characteristic candour, Dawkins (1996) has described this Darwinian miracle as ‘climbing Mount Improbable’. In contrast to both Aristotle and Darwin, Schrödinger found organisms sufficiently strange and challenging to suggest to him that something fundamental was missing in our understanding of the physical universe. He was struck by several features of the physical character of the organism that call for explanation. The fundamental one, already mentioned, is its integrity in the face of the enormous changes that it undergoes both internally and externally over the course of a normal lifespan. For Schrödinger, the challenge posed to the physicist had to be understood against both a macro- and a micro-context.
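
To see what ‘climbing Mount Improbable’ amounts to computationally, here is a minimal sketch in the spirit of the cumulative-selection demonstration that Dawkins popularised as the ‘weasel’ program in The Blind Watchmaker; the target phrase, alphabet, litter size and mutation rate below are illustrative choices rather than Dawkins’s own parameters.

```python
import random

# Toy model of cumulative selection in the spirit of Dawkins's
# 'weasel' program: a string this long is hopeless to hit in a
# single random draw, but becomes reachable quickly when each
# generation keeps the best mutant of the previous one.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space

def score(candidate: str) -> int:
    """Count positions at which the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, re-randomizing each character with small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in parent
    )

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    # Selection is cumulative: the next generation starts from the
    # fittest of 100 mutant offspring, not from scratch.
    parent = max((mutate(parent) for _ in range(100)), key=score)
    generation += 1

print(f"Reached the target after {generation} generations")
```

The contrast the toy dramatizes is stark: hitting the target in one unaided random draw would take on the order of 27^28 attempts, whereas retaining the best mutant of each generation typically reaches it in tens of generations.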

The macro-context concerns the energy transfers involved as an organism converts one state of matter into another on many levels at once. Instead of going into meltdown, as might be expected of a machine charged with so many tasks, an organism’s metabolism allows it, in Schrödinger’s words, ‘to free itself from all the entropy it cannot help producing while alive’. The chemist Linus Pauling (1987) was among many critics who thought that Schrödinger had left the misleading impression that organisms are perpetual motion machines that defy entropy altogether. But of course, organisms do die in the end—and in that sense, Aristotle and Darwin would seem to have the last laugh. Nevertheless, for Schrödinger’s sympathetic readers, such as Crick and Penrose, the point about the organism’s ‘negentropic’ tendencies refers mainly to the unprecedented efficiency of its metabolism—which to this day has yet to be matched in machines, not least computers, that have been designed to perform comparable functions. Indeed, even the proverbially super-efficient ‘quantum computer’ is likely to approach the total efficiency of the human brain only once its processing capacity exceeds our own to such an extent that the excess energy requirements of the relevant machines appear justified (Hsu 2015).
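
Schrödinger’s remark is usually unpacked today via the entropy balance for an open system; the notation below comes from standard non-equilibrium thermodynamics rather than from Schrödinger’s own text.

```latex
% Entropy balance for an open system (standard non-equilibrium
% thermodynamics notation; illustrative, not Schrödinger's own):
\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt},
\qquad \frac{d_i S}{dt} \ge 0
```

Here the first term is the entropy the organism cannot help producing internally, and the second is what it exchanges with its environment. Remaining in a roughly steady state requires the exchange term to be negative and at least as large in magnitude as the production term: on this reading, the organism survives not by defying the second law but by continually exporting the entropy it generates.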

The micro-context refers to what Schrödinger called the ‘hereditary code-script’, which was subsequently popularised as the ‘genetic code’ and is nowadays normally called the ‘genome’. Here, he was impressed by the fact that each organism’s blueprint is contained in a very small yet optimally sized physical space in which the combinatorial possibilities of the molecules of heredity are played out in a syntax-like sequence. It is likely that Schrödinger was influenced by geneticist Hermann Muller’s (1936) call for physicists to try to make sense of what he had recognized as the ‘positional effects’ of genes in determining their ‘expression’ in the constitution of an organism. In linguistic terms, syntax, semantics, and pragmatics would seem to correspond, respectively, to the biological terms genotype, phenotype, and ecology. At least, that seemed to be Muller’s impression, with which Schrödinger clearly resonated. Indeed, Schrödinger added vividness to the code-like character of genes by comparing their activation to Morse code, the binary sign language on which telegraphy is based. Moreover, Schrödinger was sensitive to the role of mutation in the transcription of the genetic code, which he characterised as a ‘quantum theory of biology’, a phrase that probably means that genetic information literally exists only in potentia until it is actually expressed (Fuller 2008: 207–209).
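
The arithmetic Schrödinger himself offers in the lectures conveys why such a small code-script can carry so much: even a tiny alphabet of signs, arranged in short ordered groups, spans an enormous space of possibilities.

```latex
% Schrödinger's Morse-code arithmetic from 'What Is Life?':
% two signs (dot, dash) in ordered groups of at most four give
\sum_{k=1}^{4} 2^{k} = 2 + 4 + 8 + 16 = 30 \ \text{distinct `letters'},
% while three signs in ordered groups of at most ten give
\sum_{k=1}^{10} 3^{k} = \frac{3^{11} - 3}{2} = 88{,}572 .
```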

Reading and Writing Self-Programming Beings

To conclude, let me take up a suggestive contrast between the human organism and the advanced computer drawn by the Harvard philosopher Peter Godfrey-Smith, which places the ‘postdigital’ horizons opened up by Schrödinger’s lectures in an intriguing light (Godfrey-Smith 2014: Chaps. 6, 9). When computers are designed in the spirit of ‘artificial intelligence’, the aim is, at least in part, to make them self-programming. This means that the computers can not only read their own programme for purposes of executing its prescribed functions but can also, at least to some extent, rewrite that programme to their own specification. In that respect, such a computer can be straightforwardly understood as an internalised communication system, as in cybernetics. However, while humans can certainly be embedded in cybernetic systems (hence ‘cyborg’), as in the case of an airplane pilot, the human organism itself is a cybernetic system only in a limited sense, namely, in terms of the ‘self-maintenance’ required to regulate the body’s functions in response to changes in the immediate environment: the search for ‘equilibrium’ or ‘homoeostasis’ around a given set point. What we cannot do very reliably is to make sustained use of our conscious minds, in their natural or technologically enhanced state, to shift that set point substantially without incurring significant ‘negative externalities’. In the language of economists: no pain, no gain. Indeed, a classical Aristotelian argument for the ‘wisdom of nature’—used today by natural law theorists keen on arresting the pace of biomedical progress—is that our interventions end up creating more problems than they solve, which is nature’s way of telling us that we should not be trying to reprogram it.
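
For concreteness, here is a minimal sketch of the negative-feedback loop that ‘homoeostasis’ names, written as a simple proportional controller; the set point, gain and disturbances are illustrative stand-ins, not a model of any actual physiological mechanism.

```python
# Minimal negative-feedback (homeostatic) loop: the controller nudges
# a state variable back toward a fixed set point after disturbances.
# All names and parameters are illustrative, not physiological data.

SET_POINT = 37.0   # e.g. core temperature in degrees Celsius
GAIN = 0.3         # fraction of the error corrected per time step

def regulate(state: float, disturbance: float) -> float:
    """One update: apply the disturbance, then correct toward the set point."""
    state += disturbance
    error = SET_POINT - state
    return state + GAIN * error

state = SET_POINT
for shock in [2.0, -1.5, 0.0, 0.0, 0.0]:
    state = regulate(state, shock)
    print(f"state = {state:.2f}")
```

The loop can only pull the state back toward a set point fixed in advance; on the argument above, what the human organism cannot reliably do is consciously rewrite that set point without incurring costs elsewhere.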

This is not to deny that nature itself has long been understood, not inaccurately, as somehow ‘writing’ on our brains through our senses. However, we have had limited success in reading what has been written, at least in terms of predicting one’s beliefs and actions based on one’s experience, including the prolonged disciplined forms of experience associated with ‘education’. Recent advances in neuroscience, while illuminating, have yet to alter this basic conclusion. Yet, at the same time, we have become increasingly fluent in reading the human genome. And some, such as the genome sequencing pioneer Craig Venter, are confident that we will soon be able to rewrite the genome to our own specification, and thereby become literally self-programming beings. Indeed, he argued this point in Dublin when delivering the seventieth anniversary lecture in honour of ‘What Is Life?’ (Venter 2012). This confidence leads Venter to downplay—if not ignore—the difference that a molecule’s position in an organism’s genome makes to the array of traits that it expresses, not to mention the environment in which such gene expression occurs. This is like someone claiming that they have mastered a foreign language by virtue of passing exams on its lexicon and syntax but without ever having read much of the language’s literature or dealt with its native speakers on a sustained basis. In what sense does such a person really know what they are doing? A similar scepticism confronts anyone who proposes a ‘one size fits all’ pedagogical panacea, given that even the most effective instructor is still at the mercy of students who bring their own associations to the material delivered.

In this respect, the asymmetry between the largely writable (but largely unreadable) brain and the largely readable (but largely unwritable) genome that together constitute the human organism is reminiscent of the early social history of literacy, in which ‘writing’ and ‘reading’ were rather separate skills that were unevenly distributed in a population. At this point, one is likely to veer in one of two radically opposed directions. One is simply to forsake the opaque relations between brains and genes in the human organism and focus instead on increasing the self-programming capacities of advanced artificial intelligence—perhaps even as the platform into which humans will migrate in a ‘transhuman’ future. Think Ray Kurzweil. The other is to dwell, as Schrödinger himself did, on the thermodynamic efficiency of the human organism, notwithstanding its opaque inner workings. In that case, by discovering the physical organization behind the requisite energy transfers, we might come to forge a ‘psychosomatic’ capacity for self-programming so much more efficient than machines that it might approximate Lamarck’s dream of the inheritance of acquired traits, while at the same time remaining ecologically sustainable (Fuller 2019).