Introduction

It has often been told how, in the spring of 1900, an all but forgotten paper published by the Austrian monk Gregor Mendel in 1866 was suddenly unearthed by three (or even four) biologistsFootnote 1 who, independently of one another, “chanced upon the same article at almost exactly the same time” (Henig 2000, p. 178). Although Mendel’s paper had been sporadically cited over the years in botanical books and journals (Orel 1996), its true significance was now suddenly discerned and its author was posthumously hailed as the founding father of genetics, a field that quickly evolved into one of the key research areas of what came to be regarded as the “century of the gene” (Fox-Keller 2000). Indeed, the astonishing history of genetic research, from the rediscovery of Mendel (1900) via the discovery of the molecular structure of DNA by Watson and Crick (1953) and the sequencing of the human genome (1990–2003) up to the emergence of next-generation sequencing (the current era), reads like a success story almost without precedent. Nowadays, armies of thousands of researchers are involved in this research arena of truly global proportions, and Mendel’s offspring have become as numerous as the stars in the desert sky at night, or even more so. In retrospect, his paper came to be seen as the starting point of a prolific research practice that now produces millions of research papers every year. And the end is not in sight, far from it.

Also in 1900, the Austrian neurologist Sigmund Freud published Die Traumdeutung (The Interpretation of Dreams), the book that more or less unleashed the psychoanalytic movement.Footnote 2 And indeed, the twentieth century has been called ‘the century of psychoanalysis’ as well (for instance: Porter 2002). Although presently, in 2012, the prospects for psychoanalysis may seem less splendid than those for genetics (notably since, in the wake of the discovery of the structure of DNA in 1953,Footnote 3 genetics has profited significantly from its molecular turn), the birth and history of both movements coincide in time, and the question may therefore be raised as to what extent their synchronicity can be employed to further our understanding of the intellectual movements they represent.

In this paper, I will reread the history of genetics, more specifically of molecular genetics, from a psychoanalytical angle, analysing it as a Fallgeschichte, a collective biography or case history, with a focus on the second half of the century: the post-war (molecular) years. Building on the developmental theory of Freud and his followers (such as Erik Erikson and Jacques Lacan), I will distinguish four successive stages, namely: (1) early (oedipal) childhood; (2) the latency period; (3) adolescence; and (4) adulthood. In the final section, I will indicate how a psychoanalytical assessment conducted in this manner may help us to interpret and address some of the key normative issues that have been raised with regard to genetics and the molecular life sciences over the years, such as ‘relevance’, ‘responsible innovation’ and ‘promise management’. But first of all, I will briefly outline the psychoanalytic theory of development that will serve as my conceptual frame of reference, my magnifying glass.

Stages of development

The extrapolation of the Freudian theory of individual development to the history of a whole field builds on the assumption that phylogeny to a certain extent recapitulates ontogeny, and that the typical transitions, challenges and conflicts associated with key stages in an individual’s biography can be used to characterise and assess the vicissitudes of whole research areas as well. We already implicitly subscribe to this idea when we speak about the ‘birth’ of a discipline, or about a ‘young’, a ‘maturing’ or an ‘aging’ field. In this paper, the tendency to compare individual with collective development is fleshed out in a more systematic way. Still, in terms of empirical data, the focus will remain on the individual (that is: the biographical) level: on the life stories of a limited number of persons who, as visible scientists and Nobel laureates, may be regarded as representatives of their field. Their biographies can be seen as symptomatic exemplifications of the vicissitudes of the research practices they came to represent.

An important first question is: where to begin? My point of departure will be the moment when physicists (following the example of pioneers such as Schrödinger and Delbrück) began to migrate from physics to biology, a development that played a crucial role in the ‘molecularisation’ of genetics. Thus, bypassing the earlier (‘oral’ and ‘anal’) stages of development, I will start with what from my perspective can be regarded as the ‘oedipal years’ of the field in question (1943–1953). From there, I will follow the history of molecular genetics up to its present state: the period of adulthood.

The ‘oedipal’ stage of early childhood is characterised by the small scale and informal nature of the research, but also by its atmosphere of playfulness. Another characteristic feature is the desire to have a research object or research project of one’s own, often of a clandestine or even illicit nature, beyond the control of (and in defiance of the commandments of) authoritative others, such as department chairs or prominent representatives of previous generations of researchers.

Subsequently, I will describe the ‘latency’ stage, characterised by the development of basic tools and skills (1953–1989). At this stage in life, playfulness is to a considerable extent superseded by a willingness or even eagerness to acquire complex skills (such as reading and writing), to adopt and follow normative rules, to cooperate with peers and to achieve recognition from authoritative figures (parents, teachers) for skilfulness and diligence in writing, drawing, problem solving, playing instruments and sports—in short: by a willingness to put exercise before pleasure (Erikson 1950, 1968). If we extrapolate these characteristics to latency as a period in the history of molecular genetics, we notice that individual researchers and their fields indeed tend to be self-centred, in the sense of being focussed on the elaboration of basic technologies and on dexterity in handling them. Laboratory life entails a fair amount of exercise, and individuals are relatively submissive towards the department chairs or company managers directing them.

The third stage, then, is adolescence,Footnote 4 dominated by nonconformity and opposition, by grandiose ambitions, dramatic conflicts and breath-taking expectations (1989–2010). In his classic account of the changes that occur at puberty, Freud (1905/1942) stresses the need for post-latency individuals to distance themselves from the authority and control of their parents (and other authoritative figures), since cultural progress requires a “painful” moment of antithesis and opposition, setting the new generation up against the old.Footnote 5 Yet, the key assignment during this period consists in the transition from inwardness and inner-worldliness (in the sexual domain: the period of ‘auto-eroticism’; in the research domain: the focus on basic knowledge and skills) towards developing a true relationship with an object, in Freudian terms: “Objektfindung” (p. 123) or “Objektwahl” (p. 127). Researchers will have to embark on research endeavours of their own design; they will have to find and secure a research object of their own, somewhere in the outside world: not merely an object in the sense of a laboratory artefact, but a genuine challenge, preferably a big societal theme, a research project that will make an impact. During early adolescence, such a personal theme is bound to be more imaginative than real, but eventually the research goal will become more credible and realistic, so that mere phantasy gives way to focussed and outward-reaching, albeit often still fairly stellar, ambitions.Footnote 6 To demonstrate their stamina and independence, the researchers involved are fuelled by the urge to eclipse the apparently more modest achievements of previous generations.

One of the crucial mechanisms of this period is identity formation through identification: a mechanism that offers youngsters a way out of their oedipal impasse, as resentment and rivalry towards fatherly figures (including teachers) gives way to imitation and internalisation (Freud 1921/1940). Grand ambitions are replaced by restrained objectives and the authority of the father is “introjected”Footnote 7 as the nucleus of a new (conscientious and self-restrained) Self, although (repressed) resentment continues to play a role in the form of ‘ambivalence’, so that negative feelings may easily resurge in situations of conflict or disillusionment.Footnote 8 As Jacques Lacan (1998) points out, through identification the Self moves away from captivating, fascinating and alluring but still imaginary objects towards legitimate and socially recognised ones. Through introjection, authoritative figures become models or ideals, and the subject adopts and mimics certain characteristics of these exemplary individuals (which Lacan refers to as insignia or stigmata), for instance in terms of the choice of model organism and research tools, but one may also think of typical gestures or lecturing style. Moreover, identification allows individuals to associate themselves with (idealised) heroic model figures from the present or past. Through identification with such models, whose achievements were initially quite controversial but are now acknowledged by society, adolescents can probe and ponder attractive goals and pathways for themselves in life (Erikson 1968). Eventually, identification will usually entail an element of resignation as well: by accepting the professional role of the conscientious scientist behind the bench, the moment of bliss (the dream of a big, ground-breaking discovery) is gradually postponed to an indefinitely distant future, and the highly ambitious researcher eventually becomes a normal scientist—like his or her teacher.

A special role during this period is played by the so-called maverick scientist, the non-conformist who refuses to opt for normalisation (i.e. the domestication of desire) through identification and who is relentlessly looking for untrodden ground, bent on challenging the authority of paternal figures. Originally, the term maverick referred to an ‘unbranded calf’ (it derives from a Texan rancher named Maverick who refused to brand his cattle), but in contemporary English it denotes an independent, dissident mind: someone who ignores authorities and is keen on distancing him- or herself from ‘the herd’ (the anonymous mass of mass psychology).

The final stage is adulthood (2003-present), marked not only by the large scale of the research and its level of institutionalisation and professionalism, but also by realistic and generally accepted objectives, by a sense of responsibility, and by the level of anonymity and ‘normalcy’ that has been achieved. The goals are seen as realistic, achievable and hardly controversial, compared to the sweeping and unsettling designs of pre-adulthood, but also as fairly modest and tailored to the demands and expectations of recipients. In the following sections, I will describe these stages in more detail and connect them with the achievements of prominent protagonists.

Early childhood: the oedipal years

The Double Helix is… a joyous celebration of the importance of being playful (Sylvia Nasar, 2001, p. xvii).

The history of the molecular life sciences has more than one beginning, no doubt, but one of the most important markers is the lecture series entitled What is life?, presented in Dublin in 1943 by the Austrian quantum physicist and Nobel Prize laureate Erwin Schrödinger (1887–1961) (Schrödinger 1944/1967). Many post-war researchers involved in the molecular life sciences revolution, but notably the three scientists who were awarded the 1962 Nobel Prize for their contribution to the discovery of the structure of DNA (namely James Watson, Francis Crick and Maurice Wilkins), have cited Schrödinger’s lectures (published as a booklet in 1944) as their pivotal source of inspiration.Footnote 9 His reflections on genes as the elementary particles of biology and as carriers of the ‘Morse code of life’ incited a large number of physicists to refocus their attention on molecular genetics as the new frontier of our will to know. And this notably applied to research into the sublime but mysterious ‘aperiodic crystal’ (the ‘Holy Grail’ of biology) that came to be known as DNA. Schrödinger’s lectures signalled a migration of a large number of researchers from physics into biology (as well as from the German-speaking to the English-speaking world), taking with them powerful tools, such as X-ray crystallography (and the mathematical know-how required to interpret its results).

Yet, in 1937, several years before Schrödinger presented his influential lectures at the newly established Dublin Institute for Advanced Studies, Max Delbrück (1906–1981), a young quantum physicist from Germany, had already made this move. He had migrated from the Kaiser Wilhelm Institute (later renamed: Max Planck Institute) for Chemistry in Berlin to the California Institute of Technology (Caltech) in Pasadena. Delbrück’s basic objective was to take the key ideas and concepts of quantum physics, as they had been developed by the previous generation of physicists during their ‘heroic years’ (the 1920s), and introduce them into biology. And as the heroes of quantum physics (Bohr, Heisenberg, Schrödinger, Pauli, Born, Jordan, etc.) had made their discoveries by focussing their research on the smallest, most elementary atom of the periodic system, namely hydrogen, Delbrück decided that he likewise wanted to work with the most minimal, most elementary living entity at hand. This led him to bacterial viruses. He regarded the bacteriophage (‘bacterium-eater’) as the “hydrogen atom of biology” (Fischer 1985). In 1940, he settled upon the phage as his model organism because he wanted to study a living entity that came as close as possible to being “the gene in itself” (“Das Gen an sich”, Fischer 1985, p. 98). As a model organism, bacterial viruses would allow him to study gene mutations in much the same way as the older generation of physicists had studied quantum jumps with the help of the hydrogen atom.

Delbrück’s decision to exchange physics for biology, and German-speaking Central Europe for the New World, may be interpreted as an oedipal move: as an effort to get away, to step out of the shadow of his all too famous predecessors (‘fathers and elder brothers’), in order to explore the mysteries of life more or less of his own accord, without any further interventions from the side of these senior colleagues. If we see Delbrück’s vicissitudes from this perspective, it is interesting to note the exact terms in which Ernst Peter Fischer (his biographer) describes this period in Delbrück’s career, namely as a ‘childish era’ (“eine kindliche Zeit”, p. 110). According to Fischer, Delbrück was a playful person, very much given to playing games, and science was more or less his Kindergarten, his ‘playground’.Footnote 10 For him, as Fischer phrases it, the bacteriophage was first and foremost a gadget, a ‘toy’.Footnote 11

Moreover, in contrast with the dramatic turbulence of the time, Delbrück was living in an extraordinarily small, quiet and introverted world. Fischer depicts him as an apolitical figure who, although being a German (an ‘enemy alien’) living in the US, managed to more or less ignore the War. He lived and worked as an ivory tower figure of the classical type,Footnote 12 as if this devastating mass conflict did not really exist (using his uncompromising involvement in science as a shield or defence mechanism, no doubt, and as an immunisation strategy to keep the raging global pandemic of violence and collective madness at a safe distance). It was only after the War, when he paid a visit to a completely destroyed Berlin, that the full scale and impact of the events finally dawned upon him.

During the post-war period, however, all this would inevitably change. The playgrounds were to give way to the industrial plants of Big Science,Footnote 13 and this was also reflected in the biography of “Max”, as Fischer calls him. He had to stop playing with his viral toys in order to become a manager: the chair of a department, raising funds, supervising early-stage researchers, and so on.Footnote 14 And with this, the romantic childhood period of molecular genetics came to an end.Footnote 15 His small-scale, protective “phage family” would be replaced by the collaborative networks of large-scale professional research. Yet, this was still something for the future. The oedipal childhood stage of molecular biology was to linger for a while. In fact, even the work of Watson and Crick, which culminated in the discovery of the structure of DNA, still belongs to this playful ‘childhood’ period.

As a biology student, young James Watson had regarded Max Delbrück as a father figure. It was a clear case of identification. As his biographer McElheny (2004) puts it, while following the phage course at Cold Spring Harbor, Watson tried to be near Delbrück “as much as possible” (p. 25). He was young enough to be his son, and others noticed their similar bodily shapes; they also noticed Watson’s “never subtle attempts to mimic Max’s behaviour” and jokingly began to call him “son of Max” (p. 25). And indeed, Delbrück’s “fun and games” style was not lost on young Watson (McElheny 2004, p. 23).

In the fall of 1951, the 23-year-old Chicago-born biologist James D. Watson came to the Cavendish Laboratory of Cambridge University, where he and the physicist Francis Crick, office mates by sheer accident, quickly became the best of friends. Head of the department was Sir William Lawrence Bragg, the Cavendish “patron” (The Double Helix, 1968/2001, p. 17), an impressive Nobel Prize winner, depicted by Watson in his famous autobiographical account (Watson 1968) as a stern father figure who, after Watson and Crick’s first effort to solve the DNA riddle had resulted in an embarrassing failure, formally banned the work that had triggered their imagination and explicitly forbade these youngsters to continue their tinkering with DNA.Footnote 16

An important reason for Bragg’s lack of trust in their exploratory endeavours was the method Crick and Watson used, which they had borrowed from the notorious nonconformist Linus Pauling, namely model-building. In comparison to crystallography and other high-tech and sophisticated approaches, their tools were, as Watson phrases it, literally “resembling the toys of preschool children” (1968/2001, p. 50) and therefore looked quite unimpressive and informal. The general disregard for the “tinker-toy-like models” on which their work relied (idem, p. 69) also played a role in their tense relationship with Rosalind Franklin (1920–1958), their rival from King’s College London, who preferred the high-tech crystallographic approach at which she herself excelled to this playing with toys like mere boys (p. 69). Or, as Walter Gratzer would later phrase it, “[many] were shocked by the seeming amateurishness of Watson and Crick’s model-building efforts” (2000, p. xv), while Erwin Chargaff even dismissed their work as “biochemistry practiced without a license” (p. xvi). For Watson and Crick, science “had many elements of playfulness, even farce” (McElheny 2004, p. 31).

Yet, after a while, secretly evading Bragg’s formal ban, Crick and Watson decided to continue their now illicit probing. Their persistence may be interpreted as something of an oedipal revolt against a powerful and formalistic father, who had given his verdict (“No!”) but was too occupied with running his department to effectively enforce it. And as their efforts eventually proved successful after all, Bragg good-heartedly changed his attitude, as a good father should. Eventually, by writing the Foreword to The Double Helix, he formally gave his fatherly approval (nihil obstat), post factum as it were. Thus the early childhood stage of molecular biology ended in an atmosphere of triumph and exuberance, rather than guilt and submission (as is usually the case in the standard Freudian account). Watson and Crick had embarked on an informal project of their own design. Soon, the era of innocence and playfulness would be something of the past. Both in university settings and in industrial milieus, researchers were about to accept stricter levels of discipline and organisation. The period of latency would now set in.

Developing a conscience: the period of latency

In comparison to the dramatic ‘oedipal’ early childhood stage, the subsequent chapter in the molecular genetics case history can best be described as a period of latency (“Latenzzeit” in German). Pioneer playfulness and insubordination gave way to a period of diligent and systematic laboratory work on the molecular biology and biochemistry of DNA. The focus shifted to the development of new techniques and skills as a substitute for model building. In the career of Watson, playful modelling gave way first of all to teaching, as he became assistant professor of molecular biology at Harvard University in 1956, an activity that culminated in the standard textbook Molecular biology of the gene (Watson 1965). In 1968, moreover, he became director of the laboratory at Cold Spring Harbor, an institute specialised in organising training courses (allowing early-stage researchers to acquire new laboratory skills) and in publishing laboratory manuals.Footnote 17 But this era is also represented by Nobel laureates such as Frederick Sanger (1918-) and Walter Gilbert (1932-), both active in the field of tool development for DNA sequencing. Skills and techniques were developed in preparation for a grand but as yet still fairly abstract future. Technologies and methods were tried, and a growing number of scientific teams laboured diligently, but with relatively low public visibility, on ways of understanding and manipulating DNA and genomes. Their work was characterised by tedious labour invested to achieve small steps forward. Slowly but steadily, the basic skills and tools that would later play a pivotal role in the genomics revolution, such as Sanger sequencing, came into being.

Sanger and Gilbert shared their 1980 Nobel Prize in Chemistry with a third typical representative of the latency period, namely Paul Berg (1926-) who, building on the work of Herb Boyer, Stanley Cohen and others, acquired lasting fame with his research on recombinant DNA, one of the basic technologies of the biotech revolution. Yet, rather than bluntly unleashing the potential of this powerful tool, he opted for a much more restrained and conformist approach. An outstanding symptom of his retentive attitude is the fact that, after voluntarily halting the more disconcerting part of his DNA research in 1974 (Berg et al. 1974), he played a prominent role in advocating and preparing the famous Asilomar conference in 1975 where, in view of possible risks and societal biohazards, a voluntary and self-imposed moratorium on some of the more ground-breaking forms of DNA recombination was being discussed. During this event, some 140 researchers and professionals participated in drawing up voluntary guidelines ensuring the safety of recombinant DNA technology on the basis of the precautionary principle. “Until then there had essentially been no public discussion about possible biohazards”, James Watson would later (and indeed: regretfully) write in retrospect, “but motivated primarily by a desire to be maximally socially conscious and without any evidence that recombinant DNA was dangerous, we called for a partial moratorium until we had a big meeting the following February” (2000, pp. 55–56). From that time onwards, as Watson puts it, the molecular life sciences would be “haunted” by guidelines, “saddled with precautions” (p. 65) and “held back by a morass of bureaucratic regulations” (p. 63). Thus, the molecular revolution in genetics was to remain contained, for the time being at least, in a state of latency.

Another representative of this period is Kary Mullis who, as an employee of Cetus Corporation, one of the first biotech companies in the Bay Area (established in 1971), invented the key technology of the biotech era in 1983: the polymerase chain reaction (PCR). This technique made DNA manipulable, by decontextualising and denaturalising it, taking it out of its natural environment (the cell) and bringing it under laboratory control (Rabinow 1996), in other words making it reproducible by turning a biological process (polymerisation) into a biomechanical “machine” (idem, p. 9) for producing research material. Thus, DNA was definitively becoming molecular and chemical. Bioengineers were designing procedures to increase versatility and speed up time-consuming research processes, and in this context Mullis had his Eureka experience that would bring him lasting fame.

At the same time, however, in terms of personal style and personality traits, Mullis was already the exception: a colourful, rebellious figure heralding the epoch of grandiose endeavours that lay ahead, a case of early-onset puberty as it were. His aura was that of a non-conformist adolescent from the very start of his career, and not only because of his confession (made on various occasions) that, without the use of LSD, the discovery of PCR would never have been made. For the managers at Cetus Corporation, he proved an all but impossible individual to deal with, notably because “emotional strain and upheaval in his love life spilled over into his professional life” (Rabinow 1996, p. 97), while at parties he would get into physical fights with colleagues. When he gave lectures, audiences tended to find his ideas so bizarre that many would leave the room well before the lecture had ended. In 1985, the management seemed ready to fire him. As one of the managers put it, he was:

…creating havoc—affairs with people in his lab, threatening people who were going out with his friends, threatening to kill them, fistfights, threatening the guards in the evening because he didn’t have a badge when he came in the building. Problem after problem. And what to show for it except wild ideas that were so out of his field that people felt it wouldn’t work (idem, pp. 107–108).

And although they decided to keep him on, the effort to turn his work on PCR into a publication led to serious additional difficulties between Mullis and his immediate colleagues: “Kary wanted to do what he wanted to do, and didn’t want anyone to tell him anything about anything … (He felt that) he should be left alone to do his work as he wanted to do it. Period. (He believed) we stole his work” (idem, p. 122). Eventually, his paper was rejected by Science as well as by Nature, and ended up in Methods in Enzymology. And in 1986, Mullis decided to leave Cetus for good.

And yet, if we look back at Cetus and all the work that was done by hundreds of devoted researchers, PCR clearly stands out as its most impressive ‘product’—a product for which Hoffmann-La Roche would eventually pay the company 300 million US dollars. Thus, in terms of personality type, Mullis seems the prototypical, flamboyant ‘maverick’ scientist, much more so, perhaps, than the life scientist and genomics protagonist who would later become most frequently associated with this epithet, namely Craig Venter. But let us look into Mullis’ case, a boundary case representing the onset of the upcoming period of adolescence, in more detail.

Inventing PCR: grand ideas and the reality principle

I am a big kid (Mullis 1998, p. 3).

The most obvious source material for studying Nobel laureate Kary Mullis (1944-) from a Freudian angle is his autobiography (“Dancing naked in the mind field”), a fascinating ego-document, published in 1998, which in many ways can be seen as the (‘pubescent’) counterpart of Watson’s (‘oedipal’) Double helix, representing a new era in science (namely the 1970s, 1980s and 1990s, rather than the 1950s and 1960s). On the cover, the author is presented in the following way:

Here are the outrageous ideas and extraordinary adventures of the world’s most eccentric and outspoken Nobel Prize-winning scientist. Awarded the Nobel Prize in chemistry in 1993, Kary Mullis has frequently been at odds with the scientific establishment. Legendary for his invention of the polymerase chain reaction (PCR), which redefined the world of DNA and genetics, Mullis is also an accomplished surfer, a veteran of Berkeley in the sixties, and perhaps the only Nobel laureate to describe a possible encounter with aliens…

Although the book begins in medias res, namely with the invention of PCR, it also contains detailed information about the author’s earlier and later stages of life. In fact, building on the developmental format introduced above, these stages can easily be distinguished (childhood, latency, adolescence, adulthood).

Already as a child, Mullis exemplified the developmental pattern outlined by Freudian theory. In his “psychoanalysis of knowledge”, Gaston Bachelard (1938/1949; 1947) points to the fact that, according to their childhood stories, prominent chemists often tend to be greatly intrigued by one particular chemical phenomenon when they are still quite young, namely the phenomenon of explosion. Indeed, explosives are what youngsters tend to find fascinating about chemistry (if they are fascinated by it at all). The best way to draw their attention to chemistry as a field is by tinkering with dangerous chemicals. Subsequently, however, Bachelard argues that, in order to really become a mature and professional scientist, and in order to put chemistry as a field on a scientific footing, this fascination with explosives has to be overcome (“repressed”). This is what Bachelard basically has in mind when he speaks about the “formation of the scientific mind” (1947): childhood games, fantasies and desires must be dropped and relinquished. Professional chemistry is no longer a playground. Rather, it entails tedious labour.

Mullis’ autobiography provides interesting case material for such a view. To begin with, he describes how, as a child, he was indeed intrigued by tinkering with explosive chemical substances. For him, chemistry was basically big fun.Footnote 18 His mother allowed him to build his own chemistry lab at home and he gave performances (‘science shows’) at school which fairly often ended in dramatic explosions. But as soon as he was enrolled as a chemistry student at the Georgia Institute of Technology in Atlanta (better known as Georgia Tech), science became a more serious practice: “At Georgia Tech, I had short hair and I studied all the time. In my senior year I made perfect grades. I studied physics and math and chemistry…” (p. 162). In terms of our Freudian scheme: he had now entered the latency period of modern biochemistry. While the 1960s and 1970s era of psychedelic neurochemistry flourished, Mullis as a chemistry student continued to work hard on improving his knowledge base and basic laboratory skills.Footnote 19

He started working at Cetus in 1979, where he was hired to produce oligonucleotides and learned how to synthesise DNA. In other words, he basically focussed on developing new biochemical techniques and tools. At that time, Cetus had begun to evolve from a small company into a real business. Mullis describes how safety officers used signs and stickers to raise awareness of potential hazards among employees and to encourage compliance with safety protocols. They had become researchers under strict (self-imposed) surveillance.

But Mullis’ nonconformist personality was increasingly at odds with this disciplined environment. In the spring of 1983, driving with his girlfriend from Berkeley up to Mendocino County, where he spent his psychedelic weekends in a cabin, it happened: Mullis suddenly had his Eureka experience. He all of a sudden knew how to produce thousands of copies of pieces of DNA in no time: “Something incredible had just occurred to me. We were at mile marker 46.58 on Highway 128, and we were at the very dawn of the age of PCR” (p. 7). A grand idea had ‘exploded’ in his mind (“explosions were rocking my brain”, p. 9):

Everybody on Earth who cared about DNA would want to use [this technique]. It would spread into every biology lab in the world. I would be famous. I would get the Nobel Prize (p. 7).

It was an intoxicating moment indeed, the dawn of the era of adolescence in biochemistry and molecular genetics.

Yet, things did not work out the way Mullis envisioned in his grandiose scheme. Quite the contrary: during the months and years to come, his dreams and ambitions were bound to collide rather forcefully, and in various ways, with the ‘reality principle’ that dominated everyday laboratory life. First of all, it proved frustratingly difficult to make the great concept work in actual practice and to find empirical support for it by way of controlled experiments, the outcomes of which would be publishable in a proper journal. Progress in the lab was slow, and the first experiment that appeared to yield any positive results took place months later, on December 16, 1983. Moreover, his colleagues at Cetus reacted with scepticism rather than enthusiasm. They dismissed it as another of those wild and useless ideas of his. But the biggest collision occurred when Mullis was finally ready to submit the first article on his ground-breaking discovery to Nature, the most prestigious journal of the day. In fact, in 1968, as a graduate student, Mullis had already managed to have an article published in Nature entitled “The cosmological significance of time reversal”, in which he argued, on a purely speculative basis, but inspired by psychoactive drugs, notably LSD, that half the matter in the universe is moving backwards in time. By the time he was writing his paper on PCR, he regarded his former publication as a childhood naiveté. Now, he had “grown up” (p. 104). He was a “professional scientist” (idem) who could reasonably assess the importance of his own invention. PCR would allow researchers worldwide to detect infectious diseases, solve murders with DNA samples, rewrite early human history on the basis of paleobiology, and so on: “I knew that PCR would spread across the world like wildfire. This time there was no doubt in my mind: Nature would publish it” (p. 105). But Nature rejected it, and so did Science.

The paper would eventually be published, albeit in a much less prestigious journal, and it would bring him the Nobel Prize, but it also allowed Mullis to become a visible scientist as well as a highly controversial public figure, notorious for his nonconformist, rebellious views. As explained in his autobiography, his Nobel Prize, and the prestige associated with it, gave him the liberty to do and say what he liked and to indulge in various forms of “denialism”, such as the denial that fossil fuel emissions contribute to climate change, that research into healthy food makes sense, that the release of chlorofluorocarbons into the atmosphere has created a hole in the ozone layer, and that HIV is the cause of AIDS, at the same time confessing his belief in things like astrology, horoscopes, parapsychology and encounters with extra-terrestrial aliens. Indeed, the image emerging from his autobiography is one of a scientist who, while being sincerely committed to science and truth,Footnote 20 also greatly enjoys his bouts of provocation, using the freedom granted to a Nobel laureate to abound in uninhibited, unsettling claims and in unruly, at times downright disrespectful behaviour, quite at odds with the sensitivities of the public, the police, the authorities and the scientific establishment. In other words, after the discovery of PCR, he enthusiastically assumed the role of enfant terrible of modern biochemistry, and from then on the story of his life reads like the account of an extended puberty.Footnote 21 With the invention of PCR and its aftermath, the adolescent period of great expectations, grandiose endeavours and, last but not least, fairly narcissistic personalities had set in.

Great expectations and grandiose endeavours: the era of adolescence

Rabinow (1996) concludes his analysis of the invention of PCR by stressing that “it opened the door for an extraordinary proliferation of knowledge” (p. 134). Mapping and sequencing DNA had been a most tedious affair and PCR greatly increased the speed and accuracy of the work, thereby technically enhancing activities such as human genome mapping. Indeed, “it is no exaggeration to claim that PCR is a fundamental tool that made such megaprojects as the Human Genome Initiative [feasible]” (p. 136; Cf. Fischer 2003, p. 243).

The Human Genome Project (HGP, 1989-2003) represents the period of adolescence, for which Mullis’ PCR adventures (from 1983 onwards) served as a prelude. The 1990s were the years of grand fantasies and grandiose ambitions, of sweeping promises and stellar expectations. Genomics would change the world, revolutionise biology and transform human life. “Our children’s children will know cancer only as a constellation of stars”, was one of the grand statements made during the press conference on June 26, 2000, when the (as yet unfinished) genome sequence was proudly presented to the world.Footnote 22 This gala televised press conference abounded in overconfidence and narcissistic self-aggrandisement. The HGP was presented as one of the few true milestones in the history of science (or even of humanity), able to solve most if not all major societal challenges of the day, although some of the concerns that might hinder or delay the advent of the glorious genomics revolution were also mentioned.

The preparations and start of the HGP coincided with the collapse of the Berlin Wall and the subsequent reunification of Germany and, to some extent, of Europe as a whole. Euphoria was in the air. On November 9, 1989, as citizens from East Berlin flooded into the western parts of the divided city, Nature published an article on HGP scientists developing a joint database where they could deposit their sequencing data, flanked by an article about societal issues to be addressed. These turbulent years were first of all personified by James Watson, who (once again) became a highly visible scientist in his glamorous new role as Director of the HGP.Footnote 23 In this role, he developed a predilection for strong metaphors, stressing how the HGP would dwarf the achievements of previous generations.Footnote 24 He was forced to step down in 1992, however, after a conflict with NIH director Bernadine Healy. After that, he developed a reputation for politically unsettling (and scientifically dubious) statements on sensitive issues such as intelligence and race. Indeed, there is an element of extended puberty, an inclination towards provocativeness, in the way he phrases his contributions to such issues.Footnote 25 But the adolescent stage is notably represented by the two researchers who would eventually flank President Bill Clinton during the Press Conference of June 2000, namely Francis Collins and Craig Venter.

Watson was succeeded by Francis Collins who, during the early 1980s (the latency stage), had made significant contributions to the genomics toolbox, developing the techniques (notably ‘chromosome jumping’ and ‘positional cloning’) needed to identify the causal genes of monogenic diseases such as cystic fibrosis, neurofibromatosis and Huntington’s disease, thus making a name for himself as a gene hunter. As the director of the HGP juggernaut, however, he became much more outspoken in his rhetoric. He announced, for instance, that the HGP was “bigger than splitting the atom” and “dwarfed going to the moon” (Shreeve 2004, p. 20).Footnote 26 Indeed, he regarded the HGP as “one of the boldest scientific efforts that humankind has ever mounted” (2010, p. 299).Footnote 27

Like so many other protagonists of this on-going molecular genetics revolution (Watson, Crick, Wilkins, Mullis, Venter, etc.), Collins published an autobiographical retrospect (2006) concerning the history and meaning of his key achievement, the HGP. This ego-document contains an interesting example of what Erikson (1968) would have called an identity crisis. The invitation to become Watson’s successor apparently took him by surprise. The idea of taking the lead in a research endeavour of such breath-taking proportions, with profound implications for human self-understanding, was almost like a divine calling (2006, p. 118). Would he be able to live up to it? Several years later, at the Press Conference of June 2000, it became clear that Collins had met the challenge with the help of the mechanism of identification. In the speeches of both President Clinton (for whom Collins acted as a ghost-writer) and of Collins himself, the mapping of the human genome was compared to a previous mapping expedition, namely the one led by Lewis and Clark, commissioned by President Jefferson at the beginning of the nineteenth century to map the northwestern territories (1804–1806):

Nearly two centuries ago, in this room, on this floor, Thomas Jefferson and a trusted aide spread out a magnificent map… The aide was Meriwether Lewis and the map was the product of a courageous expedition across the American frontier, all the way to the Pacific. It was a map that defined the contours and forever expanded the frontiers of our continent and our imagination. Today, the world is joining us here in the East Room to behold a map of even greater significance. We are here to celebrate the completion of the first survey of the entire human genome. Without a doubt, this is the most important, most wondrous map ever produced by humankind.Footnote 28

Through this identification (of Clinton with Jefferson and of Collins with Lewis) the importance of the project (and of Collins’ role in it) was formally acknowledged. Meriwether Lewis was like a childhood hero, representing an almost mythical past (the adolescence stage of the US as a nation, a period during which its range of action was dramatically increased), and identification with him allowed Collins to position himself, to define his identity and to assume, underscore and legitimise his role.

For Craig Venter, this same Press Conference, as a ceremony of formal public acknowledgement, was no less important. In contrast to Collins, he had left the publicly funded program, out of frustration with the bureaucratic intricacies of its funding mechanisms, and audaciously launched a human genome sequencing project of his own design. As the head of his own small army of computer scientists and molecular biologists, he combined stellar ambitions and an exuberant rhetoric with an attitude of non-conformism. Quite frequently, he was referred to in the public media as a ‘maverick’ scientist (Venter 2007; Shreeve 2004), challenging the publicly funded program in a fairly provocative manner, turning the endeavour into a ‘race’. The Press Conference formally placed his adventure on an equal footing with the three-billion-dollar project led by Collins, although the exact importance of his contribution has been challenged by representatives of the public effort ever since.Footnote 29 Subsequently, having established a research institute of his own (the J. Craig Venter Institute), Venter launched the Sorcerer II Global Ocean Sampling Expedition (Rusch et al. 2007). Identification as a mechanism was at work here as well, as the great heroic scientific sea voyages of the nineteenth century, notably those of Charles Darwin on the H.M.S. Beagle and George Nares on the H.M.S. Challenger,Footnote 30 served as sources of inspiration for this effort to map microbial life worldwide, circumnavigating the globe for more than two years. And indeed, in this gigantic effort to “explore the incredible diversity of the sea” (Venter 2007, p. 332), Venter is his own master, in charge of his own project, both scientifically and financially, consciously defying the more institutionalised procedures for supporting and funding large-scale molecular genetics research.

The waning of narcissism: adulthood

Currently, in the era of adulthood, stellar ambitions and grand expectations have given way to an attitude of accepting boundaries and limits under the sway of the reality principle. Life is much too complex to understand, let alone manipulate and transform, by means of automated sequencing machines alone. Indeed, genomics, once seen as a revolutionary field, has become normalised: it has become a ‘normal’ science. It has been tamed and disciplined, in other words domesticated, also in the sense that genomics has entered everyday life, for instance through direct-to-consumer genetic testing provided by websites such as 23andMe. But, contrary to earlier expectations, the life-world has not been revolutionised. Rather, genomics itself, although it did change the way life sciences research is being done, has finally become a settled and accepted area.

The onset of adulthood began with a series of sobering disappointments during the first decade of the twenty-first century, starting in 2000 with the remarkably small number of protein-coding genes that were actually found in the human genome (~22,500, a small number compared to previous estimates ranging from 100,000 up to 300,000 and more), as well as the similarity of our genome to the genomes of the chimpanzee, the laboratory mouse and other species. Apparently, there is nothing special about our genome, and self-aggrandisement has given way to the awareness that (like so many other scientific breakthroughs) human genomics entails a ‘narcissistic offence’, challenging our view of ourselves as a highly exceptional key player in the history of the world (Freud 1917/1947; Cf. Zwart 2007). We are less unique than we thought, and the living world is much more complex than was appreciated when the HGP set off.

The progress made in terms of actual benefits for society is also quite modest. Most of the applications under discussion are quite unspectacular or even trivial compared to the staggering scenarios depicted in the 1990s. Promises have now been reduced to a much lower key, but at the same time they have become more realistic (i.e. more in tune with the reality principle). This has the collateral benefit, of course, that the fears and concerns raised by the human sequencing effort have also by and large subsided. In his recent book The language of life, for instance, Francis Collins (2010) explains that genomics knowledge now notably plays a role in fine-tuning individual decisions about cancer treatment—important no doubt, but not a disruptive ‘revolution’ (Zwart 2011). Adulthood entails an acceptance of limited capacities and an acknowledgement of societal embedding. Genomics has changed into an anonymous, well-regulated, highly reliable research practice. There is no longer anything ‘exceptional’ about it after all. The grand expectations have not completely evaporated, however, but have rather been transferred to alluring substitutes, such as synthetic biology or human enhancement—a process which is itself a telling example of what Freud in The Interpretation of Dreams called “displacement” (“Verschiebung”, 1900/1942; Cf. Zwart 2012).

Methodological considerations and the issue of gender

Before wrapping up the results of my psychoanalytical rereading of molecular genetics as a Fallgeschichte, a number of possible methodological biases must be addressed. First of all, as we have seen, a Freudian analysis tends to focus on individual (i.e. biographical and autobiographical) material, rather than relying on a more sociological macro-analysis of institutionalisation and group dynamics. A limited number of symptomatic ‘cases’ have been selected, who are regarded as representative of the field as such. The strength of such a micro-level approach is, no doubt, that biographical data allow us to make the issues addressed quite tangible, lively and concrete. And the selection procedure (focussing on highly visible Nobel laureates) is of course not completely random. Still, the broader socio-economic context is to a certain extent eclipsed. The purpose of my exercise, however, is not to prove that the psychoanalytic theory of development is ‘right’. Rather, my goal is a more modest and pragmatic one, namely to demonstrate that a Freudian perspective can be helpful in discerning some coherence in the bewildering richness of the data, although other readers, starting from different (perhaps more sociological) perspectives, may highlight other elements than I have done in my approach. Whenever a beam of light is cast, and individuals are allowed to enter the stage, the socio-economic backdrop becomes more or less obscured. Yet, by trying to see everything, we would end up with a fairly diffuse picture. The analytical approach (with its focus on individual biographies) has its value, provided it is supplemented and contextualised by other (more macro-oriented) approaches.

Another much-discussed weakness of the Freudian approach to development is its outspoken gender bias. In the Three Essays on the Theory of Sexuality and other publications cited above (1905/1942; 1924/1940), Freud typically starts with the masculine route, subsequently discussing the feminine version as more or less derivative, adding all kinds of disclaimers indicating that, apparently, feminine development is much more complex, less well-studied and less well understood (Cf. Lacan 1998, p. 251 ff.). So far, my rereading of the history of molecular genetics has likewise focussed almost exclusively on male protagonists, thus apparently reflecting or even reinforcing this bias. Although substantial numbers of female scientists are playing an increasingly important role in present-day science, the most prominent and visible positions are often still occupied by men, a fact that is also reflected in the list of Nobel Prize winners up to the present. Yet, this may well change in the century to come. Moreover, even for the first century of Nobel Prizes, important counter-examples can be given, although they usually come from outside the HGP.

First of all, Nobel laureate Barbara McClintock (1902–1992) provides a complementary flanking tale to the male-dominated history of molecular genetics. Much like Mendel, she worked quietly for many years, devoting her solitary life to studying one particular plant model (maize), while her achievements (notably on transposition, also known as ‘jumping genes’) remained unrecognised by her contemporaries (remained ‘latent’, as it were) until, several decades later, in the 1980s, the molecular genetics establishment finally managed to catch up with her work as a genomics pioneer (Fox-Keller 1983).

One of the most outstanding scientists of the past decades has no doubt been Lynn Margulis (1938–2011). Her ground-breaking paper on the origin of eukaryotic cells, now generally regarded as a scientific classic and a landmark publication in endosymbiotic theory, was rejected by a whole series of journals before it was finally published, but her views, fleshed out in a plethora of books and articles, have dramatically changed our view of the origin and history of life. Her bold claims initially met with scepticism and rejection, but are now widely accepted by mainstream science. She is applauded for her tenacity and perseverance in the face of substantial opposition. On the Wikipedia page devoted to her work, Richard Dawkins is quoted as saying: “I greatly admire Lynn Margulis’s sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. [This theory] is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it” (Brockman 1995, p. 144). Thus, her work exemplifies Freud’s statement cited above that cultural progress entails a phase of antithesis and opposition, which he identifies with “adolescence”. Moreover, whereas male ‘adolescents’ such as Watson, following in the footsteps of Darwin, emphasised the role of competition and selection, Margulis (1987, 1991) has opened our eyes to symbiosis and cooperation as the dominant motif of life.

Thus, both during the latency period (McClintock) and during adolescence (Margulis), prominent researchers represent the ‘feminine version’ of the developmental story. And whereas, in terms of gender, the molecular life sciences of the twentieth century were still largely dominated by males, it is to be expected that the twenty-first century will predominantly belong to the female researcher as the normal case or standard story.

Conclusion: reframing expectations

From a psychoanalytic perspective, there is a telling similarity between a particular type of nostalgia about the ‘purity of childhood’ and the idea that, in the 1950s (i.e. the ‘oedipal’ era of molecular genetics), there still existed something like ‘pure science’. The research of Watson and Crick was ‘pure’ in the sense that the latter had moved away from scientific research in service of the war effort (Crick had been involved in developing naval mines, Cf. Crick 1988) to the analysis of a ‘pure’ object, the object of objects in fact: to DNA as an almost Lacanian ‘thing’,Footnote 31 without any regard (for the time being at least) for applications or commercial potential. DNA was a fascinating grail-like entity, a ‘substitute’ that allowed two unruly youngsters to sublimate all their drive and energy into research, eventually solving the ‘riddle of life’.Footnote 32 The only constraint temporarily blocking their access to this forbidden (and therefore all the more alluring) object was the prohibition of Sir Lawrence Bragg, mentioned above: a frowning father, in Watson’s depiction, who was nonetheless too occupied with managerial obligations to effectively enforce his ban.

But, as countless documents, ranging from the autobiographies of Watson and Crick themselves to biographies of Rosalind Franklin (their most daring competitor), have amply revealed, this research was not pure at all in the sense of ‘innocent’ or ‘unspoiled’. Notably, it entailed serious competition and rivalry, not only with fatherly figures such as Linus Pauling (who had proposed his own version, namely a triple helix model of DNA), but also between peers (in this case, between the Cambridge and the King’s College teams). And the course of events continues to trigger debates about moral rules and misdemeanours up to this very day, notably concerning the issue of whether Watson actually plagiarised data from Rosalind Franklin, in particular photograph 51, which was secretly shown to him by Maurice Wilkins (Cf. Maddox 2002). And of course, all this was done in defiance of the models and rules set out by ‘fatherly’ figures, such as Lawrence Bragg and the US funding agency that had awarded young Watson a grant to work not in Cambridge but in Copenhagen, and on something other than the molecular structure of DNA.

During the subsequent latency period, the researchers involved were much more willing to accept regulation and surveillance. Their readiness to behave in a responsible manner was so outspoken that no special top-down efforts seemed called for when it came to containing and managing the societal hazards of their research as these began to become visible: compared to their colleagues of the oedipal and pubescent phases, these conscientious scientists displayed a stunning capacity for self-containment.

This clearly changed during the adolescence stage. In view of the grandiose ambitions and great expectations, concerns were raised as to the consequences the HGP (and similar life sciences research endeavours) would have in store for us. Special programs had to be established (ELSI in the US, ELSA in Europe, GE3LS in Canada) to analyse and address the ethical, legal and social implications (ELSI) or aspects (ELSA) of such a turbulent field. These concerns and initiatives are reminiscent of the special programs developed by policy makers to govern unruly youngsters, as well as of the feelings of uneasiness and concern that new generations are bound to evoke within the establishment during their adolescence.

But now, during the period of adulthood, such special programmes no longer seem called for. Molecular genetics has evolved into a large-scale, but also highly institutionalised endeavour, so that the research can be governed on the basis of normal procedures of quality control and oversight. Concerns over genetic discrimination that were so often voiced in the 1990s actually reflected the overestimation of the societal import of genomics, now seen as typical of the boisterous adolescent phase. By now it has become sufficiently clear that life is much too complex for us to find clear predictive indices for societal failure or success among specific genes on our genomes. For ethical surveillance, funding agencies seem to rely increasingly on the normal techniques and institutions of ‘normal’ science. At this adult stage, moreover, researchers are expected to assume social responsibility themselves and to foster awareness of the societal dimensions of their research programs from the very start. Using psychoanalytical terminology, we may call this ‘internalisation’. Responsibility is increasingly regarded as an intrinsic ingredient of normal science (von Schomberg 2012). The social issues arising from molecular genetics research are framed in a realistic, at times even trivial vein. Through capacity building and bioethics courses, early-stage researchers are empowered to address such issues themselves in the context of pro-active dialogues with societal stakeholders. Moreover, as molecular genetics has evolved into a large-scale Big Science enterprise, its research ethics has likewise become a macro-ethics: the focus has shifted from an ego-centred concern for individual responsibility and conscience development towards the institutionalisation, operationalisation and implementation of social norms and values (such as responsibility, transparency and fairness) in the context of large-scale, fairly anonymous research endeavours (Zwart 2008). And the fact that big promises, such as the claim that human genomics will allow us to win the war on cancer, are no longer credible should not be seen as a loss of prestige for science (after years of overpromising), but rather as an indication that molecular genetics as a field has now definitely come of age.