Introduction

Can there be a dedicated nano-ethics, just as there is, by now, a bio-ethics? And should there be? The title of the journal Nanoethics is careful in that it creates some distance to nanotechnology in its sub-title: Ethics of technologies that converge at the nano-scale. While there is an increasing number of articles and comments that call for nano-ethics, these are mostly calls for more ethical reflection in general, or focus on the utopian and doomsday scenarios that have been put forward (e.g. [17, 27, 31]). There is little specific to nanotechnology that would warrant the prefix ‘nano’ [19]. This is different from the case of bio-ethics, where aspects and issues derived from living creatures are a shared starting point. Nanotechnology has no such common referent other than that phenomena and manipulations occur at the nano-scale. It is an umbrella term covering a host of heterogeneous technologies, from electronics to materials to medical uses of nanoparticles. At the same time, there are calls for nano-ethics, and working on nano-ethics is a business proposition for organisations like the Centre for Responsible Nanotechnology (http://CRNano.org) and The Nanoethics Group (http://www.nanoethics.org).

More important for our query about the status of nano-ethics is the fact that nanotechnologies are enabling technologies. By making existing technologies smaller and faster, well-known ethical issues, say privacy and new ICT, or point-of-care diagnostics and professional-medical responsibilities, can become more pressing, but not necessarily different in kind. Quite a number of reports reflect this by taking sector issues (medical, environment, military) or moral principles (equity, privacy, safety, sustainability, security) as their starting point, rather than specific features of nanotechnology. (For a good example, see [50].)

Still, there are calls for nano-ethics. And actors present issues about nanotechnology as ethical issues. A striking example (which we will use again later on) is this quote from Philip J. Bond, US Under-Secretary of Commerce, in his address ‘Responsible nanotechnology development’ at the SwissRe workshop of December 2004 ([49], p. 7):

Given nanotechnology’s extraordinary economic and societal potential, it would be unethical, in my view, to attempt to halt scientific and technological progress in nanotechnology. Nanotechnology offers the potential for improving people’s standard of living, healthcare, and nutrition; reducing or even eliminating pollution through clean production technologies; repairing existing environmental damage; feeding the world’s hungry; enabling the blind to see and the deaf to hear; eradicating diseases and offering protection against harmful bacteria and viruses; and even extending the length and the quality of life through the repair or replacement of failing organs. Given this fantastic potential, how can our attempt to harness nanotechnology’s power at the earliest opportunity – to alleviate so many earthly ills – be anything other than ethical? Conversely, how can a choice to halt be anything other than unethical?

In this quote one sees how actors tend to use the qualifiers ‘ethical’ and ‘unethical’ to indicate what is good (must be done) and bad (must not be done). The key feature for our discussion, however, is that the quote makes a general point about progress thanks to new technology, rather than saying anything specific about nanotechnology other than that it is wonderful and will enable the blind to see and the deaf to hear (phrases with a biblical ring to them). Clearly, there are ethics involved, but they are not nano-ethics; they are ethics of progress and/or of negative impacts of new technology.

Similarly, the recent exercises in public engagement with nanotechnology and its promises and possible concerns, like focus groups and a citizens’ jury in the UK, or the nano-dialogue projects funded by the European Union, tend to come up with reports which are quite general and could apply to any new or emerging technology. This has led to critical comments, from nanotechnology actors as well as analysts: why carry out these exercises at all if nothing specific to nanotechnology comes out of the discussions and reflections?

What we will do in this article is turn this criticism around, and see it as a finding and a starting point for further analysis. There appear to be certain patterns of moral argumentation about new and emerging technology, which are then applied to nanotechnology. In other words, while there may not be a nano-ethics, there definitely is a NEST-ethics. The prefix NEST stands for New and Emerging Science and Technology (similar to the title of the EU funding programme NEST in the 6th Framework Programme). Our contention is that most ethical questions presently raised about nanotechnology belong to NEST-ethics. In the next two sections we will show that there are indeed typical argumentative patterns that together constitute a NEST-ethics. Such a NEST-ethics is not given once and for all. It evolves with further experiences of new science and technology, like stem cells and now also the promises of and concerns about nanotechnology. But there are also strong continuities.

NEST-ethics typically exists as a set of recurring tropes and argumentative patterns. By ‘trope’ we understand a recurring motif or argument that is supposed to have particular force. By argumentative ‘pattern’ we understand two or more ethical arguments that hang together in the sense that they provoke each other into existence. The tropes and the ‘storylines’ in the argumentative patterns have become a repertoire that is available in late-modern societies, both as a framing of how actors view issues and expect others to view them, and as a kind of toolkit that can be drawn upon in concrete debates.

The to-and-fro of moral argumentation about NEST is actually played out at two levels. There are what one might call meta-ethical issues, addressing the relation between technology and morality, with a particular focus on the ‘new and emerging’ feature. We will discuss these issues first. The remaining tropes and patterns in the repertoire can subsequently be clustered according to the dominant moral standard referred to implicitly or explicitly: collective utility (utilitarianism, consequentialism more broadly), duties and rights (deontology), the just distribution of costs and benefits (theories of justice), or conceptions of the good life (virtue ethics, or as we prefer to say: good life ethics – which merges into ‘good society’ ethics).

This turns out to be more than making an inventory. Most often, there is a pattern: the debate starts with seemingly obvious consequentialist arguments. These are then criticized, and this provokes references to equity, basic values and aspects of the good life. In response, there are attempts to blackbox these references and return to consequentialist arguments, which are easier to handle in the discussion and management (in a broad sense) of new technology in society. And this allows a simple division of moral labour, in which scientists and other introducers of new and emerging science and technology are justified in pushing on as long as they are willing to consider side-effects.

After we articulate the outline of a NEST-ethics, we can go back to nanotechnology and ask whether there might be processes and patterns specific to nanotechnology – which would then lead to the articulation of a nano-ethics. For example, we will look at an ambivalence inherent to nanotechnology: many new and interesting phenomena and effects occur at the nano-scale, so “size matters.” But this claim also applies to risky properties and effects, like increased reactivity and mobility across biological partitions, up to the blood–brain barrier. Regulators are struggling with this problem, because existing regulatory approaches cannot accommodate the aspect of size (of particles).

We conclude with a short reflection on the implications of such ambivalences for nano-ethical debates, as well as for NEST-ethical debates generally. In particular, we identify the inadequacy of agora-type debates, and argue that an arena model is more suitable. Thus, nano-ethics becomes an occasion to address such issues, and to develop NEST-ethics further.

Technology, Morality and Ethics: Preliminary Reflections

New and emerging science and technology constitute novelties, already within the world of science and technology. Of course, some novelties are more novel than others. A distinction is often made between incremental and radical (or disruptive) innovation. But they are still innovations, and some existing alignments will be threatened, or at least opened up. Then, a process of re-alignment starts which runs more or less smoothly. In science and technology, the creation of novelty is actively pursued, and the new findings and options and proofs of principle are expected to be taken up. There is also resistance to change, however, as the history of science and technology amply shows.

Responses of society to new and emerging science and technology and its societal embedding also vary. One can hypothesize that when a new technology can be fitted to existing artifacts, routines and strategies, and/or when it appears to address existing or newly articulated needs and desires (as when new experiences are offered, as with Sony’s Walkman, cf. Du Gay et al. [12]), its embedding will go smoothly. There are also general cultural patterns in the response to novelty. The novelty can be seen as the hero who will conquer, overcoming the barriers of an existing order that will soon become obsolete; or as the deviant, the wayward, which, if it persists, becomes the sinner that must be punished for going against the existing order. The moral flavour of these terms is not accidental.

The link with moral argumentation and ethics requires some reconsideration of ethics as well. We will build on philosophical pragmatism, especially in Dewey’s version [10, 22, 23]. In the pragmatics of everyday life, morals exist mainly as routines which are considered so self-evident that people are hardly aware of their existence. These moral routines once started their existence as conscious solutions to conflicts between stakeholder interests/rights, or as answers to the question: what would be a good life to lead, as an individual and/or as a community? But afterwards, we unthinkingly obey these tacit norms and unthinkingly pursue these tacit values. For example, as Williams [55] pointed out, ‘normal’ people do not consciously decide that it is immoral to kill an obnoxious colleague. The thought should not even cross their minds. And if it does, this indicates abnormality.

We become aware of moral routines when people disobey them, when conflicts between routines emerge and a moral dilemma arises, or when the routines are no longer able to provide satisfactory responses to new problems. To put it strongly: whereas morality is characterized by unproblematic acceptance, ethics is marked by explicitness and controversy. Ethics is ‘hot’ morality; morality is ‘cold’ ethics. We perform ethics when we put moral routines up for discussion. For example, in discussions about emerging technologies, values like health, safety, sustainability and economic growth are usually ‘cold’; the use of embryonic stem cells and the possibility of human enhancement are ‘hot.’

Emerging technologies, and the accompanying promises and concerns, can rob moral routines of their self-evident invisibility and turn them into topics for discussion, deliberation, modification and reassertion. This is also an effect of the promotors of a new technology stressing its novelty in order to attract the attention of parties whose financial, political or moral support they need. Nanotechnology is not just about new phenomena at the nano-scale and their manipulation; it is about new possibilities for diagnosis and drug delivery, about a third industrial revolution, about human enhancement, up to a heaven on earth where the blind will see and the deaf will hear (as in Under-Secretary Bond’s quote).

Working with a novelty necessarily means venturing into the unknown. The extent of the unknown may be large, as in the case of genetic manipulation, or small, as when an improved ingredient in toothpaste is advertised as “New! Better!” A principal problem is that it is never known how large the extent of the unknown is. We may think that the new toothpaste ingredient is harmless, but it might turn out to create a new allergy (as has happened occasionally; tests can only test for known allergies). Thus, NEST-ethics has to address two issues at the same time: there is ignorance about what the new technology might become and do, and moral routines cannot be relied upon unquestioningly. The newly emerging technology robs morals of their self-evident invisibility, and transforms them into ethics.

The friction between established moral routines and new technology is a well-known issue, with the advent of the contraceptive pill as a canonical example (and one where behaviours changed and morality adapted). Instead of friction, the new possibilities may also open up spaces for reflection. A small but interesting example from micro-technology, to be further enabled by nano-miniaturization, is the Verichip™: a passive RFID chip carrying a person’s identification that is implanted under the skin, and is used (on a voluntary basis) by regular visitors to nightclubs in Barcelona and Rotterdam. They no longer need to carry an identity card, and if the chip includes a money deposit, they can also pay with it, so they need not carry a wallet either. They can just come as themselves – with their identity enhanced by the implanted chip. Many more options are possible for nightclub visitors, and the company also pushes other uses. The ethical question here is what sort of identity we want to construct (with the active support of the company) for ourselves. A new good life is being articulated, stimulated by the technical ability to construct a variety of such lives.

Our use of morality as ‘cold’ and ethics as ‘hot’ helps us to highlight an important phenomenon. And it is this phenomenon of the opening up of existing moral routines and moral orders that is important, not our specific use of the terms – even if this usage helps us to make our point, and to write this article. Terminology is difficult anyway, because actors use the label ‘ethics,’ and particularly ‘ethical,’ to refer to what is good to do and should be done (or refrained from), rather than to the reflexive discussion, which we would highlight, about what might be good to do.

We acknowledge that our conception of ‘ethics’ takes us some distance from how actors use this label. But this is necessary for understanding what happens.Footnote 1 In NEST-debates ‘ethics’ is often positioned as a brake on technology (just as technology assessment used to be labelled technology ‘harassment’). But positions promoting technology are every inch as ethical as positions harassing or limiting technology in the name of some higher value. This implies that Under-Secretary Bond’s simple contrast of ethical and unethical cannot be kept up. Instead, the ‘ethical’ is the articulation (including contestation) of what used to be morally self-evident.

Our approach to ethics is also broader than the usual contrast, in NEST-discussions, between ethical arguments and economic, environmental, social, political, medical, or metaphysical arguments. The use of the acronym ELSA for Ethical, Legal and Social Aspects, introducing three different kinds of aspects, is a case in point. We argue that there is no difference in principle. Presumably ‘non-ethical’ arguments in the end refer to stakeholders’ interests/rights and/or to conceptions of the good life – thus, to ethics. For example, economic arguments in favour of, or opposed to, an emerging technology usually follow a clearly utilitarian logic, with a focus on maximising collective happiness. And metaphysical considerations on human–machine interactions refer to ethical conceptions of what constitutes a good life for humans.

For health and environmental risk issues, this point is particularly important, because these issues are often treated as technical questions. As has been shown in detail for recombinant DNA, genetic modification, and telecommunication standards [37, 42, 53], the focus on technical questions is only possible after some closure of the open-ended ethical (or normative, or political, or foundational) debate has occurred, so that further discussion can be delegated to technical–analytical work. Conversely, the technical discussion can be opened up to ethical discussion again when the assumptions protecting the technical approach are questioned.

The evolution of the debate on the health and environmental risks of nano-particles can be understood in these terms. In the late 1990s, there were some early warnings, based on very limited evidence and on the analogy with the risks of asbestos. When precautionary approaches were advocated, and particularly when (in 2003) the ETC group proposed a moratorium on nano-particle production [13], the debate became acrimonious, and the right of ETC and other critics to raise their voice was contested. Explicit and implicit normative questions about the sort of life we should lead (avoiding risks, experimenting and learning, or even embracing risks) were at issue. The closure of this debate can be located in time, linked to the wide acceptance of reinsurance company SwissRe’s report Nanotechnology: Small Matter, Many Unknowns [48]. Risks of nano-particles became a legitimate question: further research was pushed (and funded), and government agencies started to investigate ways to regulate such risks. It was now uncertainty about the extent of such risks that was at issue, rather than ignorance about the nature of potential hazards. By now, some actors, concerned nano-scientists as well as NGOs like Greenpeace UK, realize that this focus on handling the risks technically, and in terms of new regulation, backgrounds wider considerations about the desirability of developing and using nano-particles.

We see a pattern here. In addition, the recourse to the technical is itself a normative position, and thus a meta-ethical issue.

NEST-ethics: Meta-ethical Issues

NEST-ethics starts with the opening up of an existing order by a scientific or technological novelty that undermines the self-evidence of existing moral routines, combined with the additional challenge of our ignorance about the nature and effects of this novelty. We can then identify and characterize patterns of moral argumentation as they occur (with examples from nanotechnology). Interestingly, part of the argumentation is at a meta-level: about our background understanding of the issues and how to approach them, rather than about substantial questions of good action and the good life. This is linked to the “new and emerging” aspect of NEST: it is too early to reach conclusions about concrete ethical issues, but the prospect of having to do so induces discussion of how to go about it – which raises meta-ethical questions.

The first meta-ethical issue derives from the prima facie presupposition of NEST-ethics that one can influence the development of new technology, and so has to discuss desirability as well as feasibility. This leads into a long-standing discussion about technological determinism, and its more recent counterpoint, the social determination, or at least social construction, of technological development. In the technological-determinist view, emerging technologies will materialize anyhow, independent of what people think, deliberate or decide. The problem of how to act under conditions of ignorance is thus ‘solved’ by denying human agency. Technological determinism might be justified by appealing to a transcendent technological reason, unfolding/materializing itself like a Hegelian idea. In practice, transcendent reason tends to be replaced by immanent strategic games, an equally unyielding, superhuman international competition: if we don’t do it, our competitors will, so the new technology will happen anyway. Human agency is not completely denied, but delegated to the strategic games that actors continue to play, and depend on. The self-fulfilling prophecy of Moore’s Law for semiconductors is a clear example. Conversely, those who are not part of the strategic games experience another lack of agency, that of outsiders. In recent focus groups and other public engagement exercises on nanotechnology, particularly in Britain, members of the public voiced their experience of not having any agency, and were then joined, to the surprise of both parties, by nano-scientists, who are involved but unable to make a difference either.Footnote 2

More central actors, like firms developing new technological options, and government agencies enabling new technology development and constraining it through regulation, might be seen as carriers of agency. Even then, there is the fact of the non-malleability of technological developments, not because of inherent technological determinism but because directions and path dependencies emerge at the collective level, in a sense behind the backs of the actors. The emergence of paradigms and dominant designs is an example. As one of us (AR) has argued, such non-malleability is itself a societal construction, but once in place, it cannot easily be undermined. Human agency, so dear to classical ethics, has to be replaced by distributed and collective agency, and a time dimension has to be introduced. Human agency can make some difference at an early stage (even if the issues and directions are still unclear), but much less so at a later stage, when alignments have sedimented.Footnote 3

Actual patterns of moral argumentation are divided, depending on the situation and audience. When addressing external audiences, promotors of new technology use the deterministic metaphor of a train that cannot be stopped, so as to enrol funders and publics.Footnote 4 These audiences are then painted as fatalistic, and experience themselves that way. Internally, however, the determinism is leavened by the possibility of doing better. Illustrative is Vicki Colvin’s testimony before the US Congress in April 2003, on the “wow-yuck pattern” of public appreciation of new and emerging science and technology. She presents this pattern as a recurrent phenomenon, but then adds that we can counteract it if “we” understand what “we” (the promotors of NEST) did wrong. So nanotechnology actors need not repeat the mistakes made by biotechnology actors. Thus, determinism is repositioned as a contingent result of actors’ behaviours and interactions leading to unintended outcomes at the collective level. Understanding such processes then enables agency, in the sense of making a bit of difference rather than forcing one’s way.

An interesting further aspect is the subterranean link between how promotors (or enactors, or insiders) position new technology, and how outsiders, publics and critics do so. Enactors position the technology as promising in itself, independently of the efforts that actors must make. They let the promising technology speak for them, and so give it agency. Critics and publics see technology as exogenous, entering society from somewhere outside. For the critics, Franklin ([16], p. 87) notes: “This view [of Fukuyama and Habermas] of genetic manipulation as a force unto itself, hostile to social order and integration (…). Here ‘biotechnology’ is attributed a sinister agency (…).” For the public, a similar response was visible in the reports of a focus group discussing new technologies, including nanotechnology, and their perceived ability to transform society and nature. One participant says: “It’ll get out of the cage I’m sure” – so it is a wild beast that has to be contained (Kearnes et al. [21], p. 53). Thus, ‘outsiders’ also picture new technology as an independent force. In other words, there is an unholy alliance between outsiders and insiders, and this perpetuates the myth of exogenous technology.

There are other patterns of second-order moral argumentation. One such pattern derives from the dual way in which past experiences can be drawn upon. The past is mobilized to give credibility to arguments favouring the promotion of NEST, but also to arguments pleading for prudence and precaution.

There are general arguments in favour: the new technology will bring us all kinds of good, because technologies have done so in the past; mankind has progressed because our forebears did not shrink from their duties. Prometheus is invoked here (and sometimes there are second thoughts about such progress being a Faustian bargain – by now, the two tropes are often linked). It does not matter that at first only the technological ‘haves’ profit, because eventually the benefits will trickle down to the lower strata. Even the poorest person today is materially better off than kings were in the Middle Ages.

And then there are arguments cautioning against the emerging technology: technologies always have unintended, and quite often unwanted, side-effects; there are always bad people misusing technology; new technology makes the rich richer and the poor often more powerless; scientists and technologists are always promising more than they can make come true; and so on.

The trope that humans (some humans) end up misusing new technologies for destructive purposes can be called the inverse King Midas trope: whereas the mythical Greek king turned everything he touched into gold, modern (Western) civilisation turns everything into a means of destruction (and both Midas and civilisation got into trouble). This bleak view of mankind can lead to the conclusion that we should not go for more and more ‘technological toys.’ The ethics are more complex, however, as is clear in the debate about guns (in the USA): do guns kill people (so no more guns), or do people kill people (so people must do better)?

We have used a simple dichotomy between promotion and caution here, but there is more at play than contentions between proponents and opponents. There is a sequence of actions and interactions, which creates a specific pattern of moral argumentation [46]. First, there is the recognition and announcement of a novel technological option and its promise. In response, those pleading for prudence and precaution stress the novelty of the new technology as well, but now to communicate the message that there is not just uncertainty, but ignorance about the effects of the new technology. The promotors now face a quandary. They had started the (NEST-ethical) discussion by stressing the novelty of the emerging technology, so as to attract attention and enrol allies. This move then creates opposition that cannot simply be negated. One strategy that is often used is to play down the novelty of the NEST, presenting it as nothing unusual. What was first introduced as a ‘revolution’ is now toned down to ‘business as usual.’ In the case of nanotechnology, the slogan then is: we’re just making things smaller and faster. Or, in the discussion about health and environmental risks of nano-particles: we’ve had nano-sized particles around all the time, in the soot from fires and in the exhausts of diesel engines.

The message thus shifts: the new technology is in fact not new at all; the past contains all kinds of precedents for the emerging technology. Haven’t we been genetically modifying animals since the first breeding experiments? Are twins not living proof that clones are willed by God and/or in accordance with the natural order? Is education not already a basic form of human enhancement? Is writing not itself a technology that produces texts, an external medium to which we delegate part of our cognitive power and autonomy, no different from what will happen when we interface our brains with computers? If we see these earlier technologies as being in accordance with our present moral intuitions, we should now be consistent and see the new technologies as similarly acceptable. This is not only an argument from precedent (the new technology is nothing new), but also an injunction to stay with the moral intuitions that have evolved in our interaction with earlier technological developments.

This last point creates an opening for technology critics to turn the argument from precedent around, creating an argument from consequent. Instead of legitimizing future developments in terms of criteria of the past and present, they de-legitimize the past and present by applying criteria derived from a desirable future. For example, if the possible cloning of farm animals is seen as an unacceptable form of commodification of living organisms, the concern about commodification should also be used to reconsider our current acceptance of the bio-industry. And if we are really worried about the toxicity of nanotubes, should we not be consistent and worry about all the fine dust currently produced in exhaust fumes as well?

There can be further rebuttals; the pattern continues. The steps in this pattern have become expected in our late-modern risk society, as moves in a game, as it were. In that way, the pattern creates the positions of proponent and opponent: an inquiry into possible side-effects will be treated as an indication of opposition to the new technology, and will thus call up further arguments legitimating the original inquiry – turning the innocent inquirer into an actual opponent.

The third main pattern of meta-ethical argumentation is linked to a basic characteristic of NEST-ethics, viz. the possibility that emerging technologies may change our morals and ethical considerations. This gives rise to two mirroring arguments: technologically induced moral change can be projected as almost inevitable (the habituation argument) or depicted as a threat (moral corruption). The pattern of argumentation is now about the general relation between morals and technology. Such arguments aim to win the match not by gaining the most points, but by revaluing the whole match. As a strategy, such a revaluation occurs late in the game, when first-round arguments seem unable to win the day.

The first argument is the habituation argument. Its basic tenet is that although at present the new technology is in conflict with established morals, the morals will be reconsidered once people become used to the new technology and its possibilities and limitations. In time, morality will adapt. Precedents are quoted, ranging from overcoming fright (as with the first trains, which might even frighten the cows in the meadow so that their milk would turn sour) to more explicit changes in morals. People called contraceptives immoral because these would sever sex from procreation (and lead to wanton sex); many are now quite happy to accept that they did so indeed. Louise Brown, the first child created by IVF, was greeted as a miracle or a monster, depending on one’s view. Nowadays the excitement seems distant, and IVF is accepted, even while it is recognized as invasive and a psychological burden. Similarly, so the argument runs, people might now feel uncomfortable about interfacing the body and the brain with silicon-based implants to enhance the human; in another ten years they will no longer understand what the fuss was all about.

The habituation argument can become part of an action plan, e.g. of promotors of a new technology who sit out the flak until people have become used to the new technology. This is sometimes part of explicit policies. A balanced example is the Dutch Embryo Act, which in section 24 prohibits the creation of embryos for scientific research, but in section 33.2 explicitly states that this prohibition has to be reassessed after five years, to see whether prevailing moral insights have evolved by that time.

The second argument is the argument of moral corruption. It comes in two forms: the slippery-slope argument stresses the temporal dimension of this corruption; the colonisation argument stresses the spatial dimension. The argument can be deployed in its own right, taking as its starting point that humans have to be protected against their own bent towards the immoral. As a strategy, it comes into play when (parts of) public opinion seems to favour the emerging technology and no convincing moral arguments against the emerging technology itself have turned up. In such a situation opponents can argue that the new technology, although seemingly innocuous or even beneficial now, will inevitably invoke further technological steps that will later result in applications that are blatantly immoral. The only way to stop this from happening is to prohibit the emerging technology from the start. For example: if implanting a chip in the brains of paralyzed patients will enable them to communicate with the world, who in her right mind would want to deny that this is a good thing? But that same technology, once developed, will be marketed to other, less deserving consumers and for less legitimate, manipulative or hedonistic purposes. The implanted chips, for example, can and will then also be used for manipulative mind-control, or as a new kind of drug.

In its spatial form, the moral corruption argument leads to the same conclusion: better to stop now, before the new technology can spread and be taken up for the wrong goals. The new technology might indeed address legitimate needs of a minority, but it is impossible to stop others from making less legitimate use of the technology once it has been developed. The technology will spread. Nuclear proliferation, and the attempts to contain it, is a case in point. Nanotechnology will enable ultra-small bio-sensors that permanently monitor our body processes. These can be important in hospitals. No longer confined to the laboratory and the hospital, however, such devices will result in the complete medicalisation of our everyday lives.

The two types of moral corruption argument lead to proposals for moratoriums and other ways of self- and other-containment. The call for a voluntary moratorium on recombinant DNA research in 1974 and 1975, by molecular biologists themselves, is a well-known example [24]. Bans on cloning, because of the risk that ‘boys from Brazil’ will be cloned, are a current example (there can also be deontological arguments for such a ban, see the next section). Such proposals quickly turn into debates about practicalities, and about the feasibility of containment. Promotors of the new technology will use the infeasibility of global containment as an argument to be allowed to continue – because somebody elsewhere will certainly do so, and we (perhaps with higher moral standards) had better be in as well. Such debates overlap with what we call ‘consequentialist contestation’, which we will discuss in the next section.

NEST-ethics: Patterns of Ethical Argumentation

In practice, NEST-ethics starts with a consequentialist pattern of ethical argumentation: the new and emerging technology is deemed desirable, or not, because its consequences are desirable, or not. Since such consequences are still speculative, they take the form, when put forward in an action-oriented context, of promises, or of warnings and concerns. NEST-ethical discussion typically starts with the promises made by scientists and technologists, and by those who identify with their message about the new options. (See the Philip Bond quote in the introduction.) These promises reflect the passion and confidence of those who make them, but they are also a way to attract attention, and thus financial, political and moral support, for the new ventures.

While promises can enrol allies, they can also raise doubts and critical questions, already from actors pushing other promises that compete for the same scarce resources. Such discussions occur, for example, around the promise of fuel cells and the hydrogen economy [3]. The feasibility and desirability of a hydrogen economy are questioned by those who push other energy futures and other technologies to carry them. For nanotechnology, such contestation remains subdued because nanotechnology is relevant for all sorts of applications, and thus unspecific in terms of what it is competing with.

Critical reactions can also focus on the new technology itself, independent of alternative technological options. In the consequentialist pattern of moral argumentation, critics then have to identify undesirable consequences to get a hearing. A struggle ensues about the nature and plausibility of the various consequences. Such consequentialist contestation is further fuelled by a cultural expectation, in late-modern societies, that there will be proponents as well as opponents of a new technology [36], somewhat independently of its specific features. In fact, by now there are NGOs, like Greenpeace, with a professional opponent role. When new technologies emerge, they will try to identify the negative consequences. That is their business model. Sometimes, as in the case of Greenpeace UK and nanotechnology, they instead come up with a balanced appraisal – which is then not believed by proponents, because they project the stereotypical opponent role onto Greenpeace.Footnote 5

Other patterns of argumentation can be characterized as deontological, as focusing on justice, or as drawing on ‘good life’ ethics, as we will show below. In practice, they are most often additional to the consequentialist pattern.

Consequentialist Arguments

Consequentialist contestation follows a distinctive pattern, fuelled by two general perspectives on technology that are linked to the meta-ethical discussion of agency. There is the optimistic view that technological progress is basically beneficial, and the pessimistic view of technology as inherently risky and dangerous. The optimistic belief in technological progress short-circuits the problem of uncertainty and ignorance by arguing that there may be small mishaps, but that all in all, and in the long run, the new technology will benefit us. As we discussed already, this optimism gets extra ‘muscle’ when combined with determinism: you should not want to stop this technological advance, and you cannot anyway. Resistance is bad as well as futile.

A priori pessimism about the effects of new technology gets rid of the uncertainty and ignorance just as well: you may not know exactly what will go wrong, but go wrong it will. The critical stance that goes with pessimism might lead to attempts at changing the course of events, i.e. some voluntarism. But just as often we see pessimism and determinism combining into fatalism. Resistance against fate is then undertaken in a spirit of duty, not of hope. There is more to say about the issues of uncertainty and agency, as these are linked to basic views of nature and society. For example, as Mary Douglas and others have argued, a view of nature as resilient goes together with a conviction that we can, and thus should, go for technological progress; there is no need to bother about side-effects until they appear. The alternative view of nature as vulnerable is linked to a view of technology as a “monster” that might have to be banned, or at least contained from the beginning [11].

Consequentialist contestation is inevitable in late-modern societies. The pattern of moral argumentation starts with promises of the form: if we invest in this new and emerging science and technology, this will increase our knowledge as well as our scope for manipulating the natural world, which will eventually result in increased general happiness when the application of such knowledge and manipulation leads to positive effects x, y and z. Such a claim can and will be challenged along three axes.

The first axis concerns the basis of the promises made, that is, their plausibility. Because promises are based on assumptions about, or projections onto, the future, one can demand that we get our “facts” straight before taking these promises seriously. Some optimists predict that nanotechnology will help to interface human brains and computers, so that we can ‘learn’ French by simply implanting a chip. Highly improbable, the objection goes, because learning a language is extremely complex – as is shown by the sometimes hilarious results of translation software. Clearly, there are attempts to check how speculative such predictions are, even if these attempts will remain inconclusive because they are themselves part of the consequentialist contestation. This is visible in the prolonged debate between (the late) Richard Smalley and Eric Drexler about the in-principle possibility of molecular assembly. Smalley’s objections (“fat fingers” and “sticky fingers”) were not directly countered by Drexler, who tended to refer to the occurrence of molecular assembly in living cells to make his position plausible. The pressure to assess the so-called realism of the Drexlerian scenario is also visible in the stipulation, in the US 21st Century Nanotechnology Research and Development Act of 2003, to do just that.

The second axis along which promises can be contested is not the plausibility of the benefits, but the ratio of benefits and costs: do the latter not outweigh the former? Skeptics will stress the danger of not acknowledging our cognitive limits. Transgressing these limits, as the topos of the sorcerer’s apprentice teaches us, means sowing the seeds of future disaster. This line of argument can lead to the demand to first “get the facts straight,” e.g. by assessing the health, environmental and safety risks of nano-particles before going into wholesale production and use. Proponents of nanotechnology first contested this demand (“there is no risk”), then grudgingly took it up while production continued, and now recognize it as a real concern. They fear a backlash if the public finds out at a later stage that the risks were “underestimated.” Simply acknowledging risks, however, might not be enough to prevent such a backlash. Social studies have shown that experts generally perceive risks in quantitative terms, whereas the general public perceives risk in more qualitative or narrative terms (cf. also [54]). As a result, there is a real chance of miscommunication between these two parties.

The third axis of consequentialist contestation consists of questioning whether the benefits promised are really benefits. This shifts the discussion to another level, because it is no longer a factual question but an explicitly normative one. Promises of benefits imply views and criteria about what is beneficial, even if these remain implicit. Such views and criteria can be unproblematic, when all participants agree that health, absence of hunger, economic growth and cheaper products are desirable, and that hunger, sickness and poverty are not. In the case of cochlear implants, however, the promise of allowing the deaf to hear again was contested by the deaf community, with its own culture and now officially recognized language. The utilitarian criterion of ‘maximizing happiness’ has been shown by philosophers to be inadequate. In the case of cochlear implants for the deaf, the whole notion of happiness, in the sense of what is considered to be beneficial, can shift according to the culture from within which one views happiness.

Underlying most consequentialist arguments is a utilitarian ethics, with its moral drive to reduce pain and to maximize happiness. In modern times, avoiding or reducing pain (non-maleficence, primum non nocere) is taken to have priority over maximizing happiness (beneficence). The underlying idea is that suffering is not only more pressing than sub-optimal happiness, but also somehow a more objective, or at least less contested, criterion than happiness [32]. Ideas about what makes a person happy vary, whereas people tend to agree about what counts as suffering. Few would deny that hunger and sickness are harms that need mending. Thus, one can understand why those consequences of an emerging technology that reduce hunger and disease are foregrounded. The facile way in which agricultural biotechnology was (and continues to be) linked with reducing hunger in developing countries is visible again for nanotechnology.

Minimizing suffering and reducing harm are phrased as positive goals. In practice they often also function, as it were, negatively: as long as a new technology does not harm anyone, it does not need ethical discussion. This laissez-faire attitude can itself be formulated as a political and ethical principle (cf. [30]). It does raise questions about the burden of proof, which are taken up in patterns of moral argumentation. The burden often falls on critics to argue that the new technology might cause harm to some stakeholders and thus cannot be pursued freely. Those favouring an emerging technology do not have (or do not see) a duty to check for possible harms (except when regulation requires them to do so, as with the registration of new medical drugs). Furthermore, a new technological option tends to be developed with certain concrete stakeholders in mind, so at least some of the benefits will be clearly defined. In contrast, possible harms are often speculative, lie farther away in time and/or space, and concern as yet anonymous, collective stakeholders. This asymmetry of benefits and harms is almost unavoidable; it structures not just argumentation but also action, and has given rise to increasing recognition of the need for early warning [20, 46].

There are three recurring rhetorical tropes in this consequentialist cluster. The first is about upstream solutions for downstream problems. This trope is very visible in promises about genetic therapy, and in human enhancement debates generally. No longer, so the argument goes, will we have to muddle through by fighting symptoms; genetic therapy and enhancement technologies will finally enable us to go to the (biological, molecular) root of the (medical and socio-economic) problems and solve them there. For nanotechnology, upstream solutions are pushed where it enables enhancement technologies, but also in relation to drug delivery and to the problems of developing countries.

Secondly, a sceptic might first allow that the emerging technology will indeed plausibly deliver some of its promises, but then proceed to deny that this makes the emerging technology necessary. Here the first trope about ‘upstream solutions’ gives way to a second trope about the (un)desirability of ‘technological fixes’ and ‘social fixes.’Footnote 6 There may well exist alternatives that address the problems, say of environment or poverty, as they appear here and now. These alternatives are argued for by labelling the proposed upstream solution a technological fix, with its pejorative connotation of pushing through a technological approach with all sorts of harmful side-effects. The assumption here is that social problems deserve social solutions, not technical ones that only address the symptoms anyway. Proponents can open up this trope by arguing that the technological solution is much more feasible and realistic than a cumbersome social one. In that case it is narrow-minded and irresponsible – in the light of the pressing problems – to cling to a dogma of social problems deserving social solutions.

A third trope is precaution, i.e. precautionary approaches in general and the specific precautionary principle that is now part of EU regulation [14]. In terms of Mary Douglas’s cultural theory, this trope of precaution belongs with the hierarchists and bureaucrats, not with the sectarian collectivists whose precautionary concern is to ban the monster of new technology. Thus, in the formulation of the European Union, there must be “reasonable grounds for concern for the possibility of adverse effects” before there can be measures “to ensure the chosen high level of protection in the [European] Community,” “based on a broad cost-benefit analysis whereby priority will be given to human health and the environment” [39].

Presently, for nanotechnology, the focus is on the risks of nano-particles. There, precautionary approaches have been narrowed to health and environmental risks, and wider concerns about the need for nano-particle based products are backgrounded. We discussed this example already at the end of “Technology, Morality and Ethics: Preliminary Reflections.” What is interesting here is how some actors, e.g. Greenpeace UK, are concerned about this narrowing of the agenda, question the benefits of nano-particles (cf. the fourth-hurdle argument as discussed for biotechnology), and start offering good-life ethical arguments. In other words, consequentialist contestation has led to partial resolutions of the issue that was foregrounded in the debate (here, the health and environmental risks of nano-particles), but there are residual concerns which cannot be addressed within the consequentialist pattern of argumentation. This creates openings for deontological and good-life ethical arguments, and thus for the next step in the evolution of the debate. Sometimes, as with stem cells, deontological arguments are already present at an early stage.

Deontological Arguments

Deontological (i.e. rights- and duty-based) arguments are expected to be up front when the new technology touches upon deeply felt convictions and existential interests. They can also function as a check on consequentialism, because deontological principles, in our societies, appear to have right of way over consequences. Still, the principles can be contested by referring to benefits that we would thereby forgo, or to risks we would have to suffer. An example of the latter is the argument that individual choice and autonomy, as a principle in medical ethics, has to be modified, for example because of the possibilities of community genetics.

Technologies may appear to produce desirable overall consequences, but they can still conflict with deep-seated moral convictions about duties and rights, often (but not necessarily) protecting the interests of individuals or minorities that are threatened by the majority interests favoured in consequentialism (because of its embedded utilitarianism). A good example is medical experimentation on humans. Here deontological principles protect individual patients – or embryos – from being subjected to cruel experiments that could in fact benefit public health.

In NEST-debates deontological arguments are often introduced to counter optimistic promises. But deontological principles are not only called upon to frustrate emerging technologies. Common moral principles supporting new technologies are: a duty to further human progress, a duty to help diminish suffering, a duty to acquire knowledge, and, last but certainly not least, the right to choose freely whether or not to use a particular technology (as long as this does not harm others, of course).

There are three main ways in which deontological principles can be contested. The first is to invoke another principle with a higher priority, e.g. by stressing that the principle of non-maleficence (primum non nocere) outweighs the principle of beneficence. An example is the claim: “Although miniaturized surveillance techniques might increase security, this does not make the accompanying infringement of privacy rights acceptable.”

A second way is to argue that the principle does not apply in the case of this specific technology: “Of course it would be wrong to kill human beings, but you cannot seriously consider a human embryo of less than two weeks old a human being.”

The third way to counter a concrete deontological argument is to interpret and apply the principles differently. The same principle can be mobilized to prohibit a new technology and to endorse it. For example: “We all endorse the principle that people should have a right to choose freely whether or not to use a technology. But thinking through what will happen in a competitive world full of inequalities, the same principle entails that human enhancement should be forbidden. When some individuals exercise their right and start to technically enhance their offspring, this in practice forces other parents to follow suit. Allowing enhancement techniques to be available therefore effectively infringes upon the right of the other parents to choose freely not to use these techniques.” This is a recurrent argument, and there are further moves, like emphasizing that the other parents are still free to choose; it is only the effects of their choice that may hinder their offspring in competing with the kids who were enhanced. We note that the structure of this pattern of argumentation is the same as that of the argument about new technology as an unstoppable train, because “if we don’t do it, our competitors will”, which we offered as part of a meta-ethical issue at the beginning of “NEST-ethics: Meta-ethical Issues.” The meta-ethical issue is visible in the reference to our present competitive world, and to forces felt in practice, both of which are treated as given, and to be accepted.

Justice Arguments

Distributive justice, in the immediate sense of how the benefits and the risks will be distributed, is an important issue, even if it gets only passing reference in NEST-discussions. The low prominence in these discussions has to do with the mostly speculative nature of the impacts. One can still project, and hope that inequities will be mitigated, somehow. For technologies closer to implementation than nanotechnology, for example biotechnology, distributive justice is higher on the agenda. Still, there are common patterns in the moral argumentation.

There are contrasting views of what constitutes distributive justice, depending on the distributive criterion that is used: equality, need, merit, effort, or a combination of these (as with Rawls [33] on principles of justice). For NEST, the paradigmatic issue in the discussions is a techno-divide: the gap between rich and poor countries, and between poor and rich strata of the population within a country. And the basic tenet, accepted by most of the discussants (even if their reasons are not made clear), seems to be Rawls’ ‘maximin’ rule: the new technology will only advance justice when it benefits those who are now worst off, the poor (countries).

Arguments supporting the development of the new technology in rich countries, with affluent consumers as the first target group, must then include a trickle-down effect. The new technology will create more goods/value, and therefore everyone can have a larger piece – in absolute terms – of the expanded cake. Although the new technology might at first benefit the rich countries that had the resources to develop it, in the end the poor (countries) might be the ones profiting most: “what at first appears to be very ‘high-tech’ and costly and therefore perhaps irrelevant for developing countries, in the end might come to be of most value for those same developing countries. Thus NT, were it to develop in the way it ought, might ultimately be of most value for the poor and sick in the developing world” [31].

There is a further move in this pattern of argumentation. Even if the new technology does make the majority better off in absolute terms, it might still widen the (nano)divide between those reaping most of the benefits and those left to pick up the crumbs. The relative position of the latter group will worsen as a result of the emerging technology: “The transition from a pre-nano to a post-nano world could be very traumatic and could exacerbate the problem of haves vs. have-nots. Have-nots do not easily obtain access to new technologies: the difference between the lives of the nano-rich and the nano-poor will likely be striking” ([43], p. 204). This latter argument need not, however, lead to denouncing the new technology. The conclusion, most often, is a plea for developing the technology in directions that specifically address the needs of poor developing countries [29]. Thus, for some proponents, the issue of distributive justice involves more than putting trust in the trickle-down effect of new technology.

Arguments from ‘Good Life’ Ethics

What sort of good life can be achieved thanks to new and emerging science and technology? The promises of enactors about new options tend to short-circuit this question by projecting wonderful new possibilities without reflecting on how ‘good’ this kind of ‘life’ actually might be. In contrast, commentators and critical groups will sometimes outline a ‘good life’ and use this as a reference when discussing and assessing new science and technology. This is particularly clear in the environmental movement (up to ‘deep ecology’). The ETC group, which now focuses on critical evaluation of ongoing developments in nanotechnology, is a good example: it started from a view of the good life, emphasizing concern about erosion (E) of ecological and cultural diversity, the responsible use of technology (T) and resistance to corporate concentration (C), and this is still visible in its arguments about nanotechnology in society.

One framing of a ‘good life’ occurs through culturally shaped identities and aspirations: who are we and who do we want to be? Indicative are references to archetypical figures and myths. Those promoting new technology typically draw upon a Promethean identity, mixed with some frontier rhetoric: “To boldly go where no man has gone before.” Conversely, sceptics and adversaries warn against Faustian bargains, and against ‘hubris’: proud Icarus soaring high in the skies like the gods – and then plummeting to his death.

Another framing of what constitutes a ‘good life’ is visible in discourses of limits. In the biotechnology debate, a recurrent motif was that humans should not play God. The concrete reference was to the possibility of recreating nature. God’s Creation then serves as a shorthand, somewhat independent of theistic religious connotations, for respect towards what has evolved, rather than its being objectified, instrumentalized, commodified, subjugated and manipulated.

Further limits are derived from what is deemed to be natural. On this view there exists a (hidden, but to be explicated) moral order in nature that should be followed; if not, we will create monsters, as Victor Frankenstein did. Mary Shelley’s novel is more complex, as it includes the experience and feelings of the monster, and suggests that monsters result from a lack of care and love rather than from the technology that went into their creation. In the debate about genetically modified food, especially in the UK, the term ‘Frankenstein food’ or even ‘Frankenfood’ has become a shorthand for what is inadmissible: “This is not what we want to be on the shelves of our supermarkets.”

The focus on limits is often conservative: do not transgress what is already there. Another way of viewing what is out there, and what might put limits on the aims of control that are associated with technology, is the idea that human beings need ‘otherness’ and cannot flourish in a completely controlled and manipulated, human-built ‘brave new world’ that yields obediently to our every desire. This motif is visible when people extol unspoilt nature, human imperfection, suffering, death and fate. We want the world to put up resistance to our touch, to show robustness, to surprise and provoke us. We do not want the world to become a mirror in which we only see our own image reflected.

The debate about the good life follows the lines laid out by these short-cuts, rather than discussing the good life as such. Proponents of a new technology will offer more technology-friendly interpretations of what God wants us to do, or even argue that God means us to play Him (and others, of course, will question His existence). The moral order implied in nature is queried by arguing that it is our ability to create technology that truly constitutes our human nature. Others flatly deny that there is any moral order hidden in nature (invoking the well-known naturalistic fallacy). And while mankind has a bad track record in wielding its technological powers wisely, we are learning and making progress. Finally, even if we one day live in a complete ‘technotope,’ this will do nothing to diminish our experience of ‘otherness,’ because technology is every inch as capricious, surprising and different as nature is [41].

Two final comments. First, good life arguments can lead to clashes between incommensurable worldviews. To avoid such clashes, their persuasive force is often drained by treating the arguments as private beliefs. For proponents, it suffices that the new technology will help realize the wants and preferences of at least some interested parties. They are not much interested in shared conceptions of the good life, and position conceptions of the good life as private. They argue that although the belief in limits might be a perfectly respectable private opinion, it does not constitute a legitimate argument in a public discussion. This (liberal) argument gives the ‘good life’ part of NEST-ethics a peculiarly asymmetric and somewhat slippery character: arguments are not met by counterarguments, but relocated from the public to the private domain [45, 47].

Secondly, it is not always possible to draw a sharp line between good life ethics and deontology. The former envisages substantive, ‘thick’ conceptions of the good, whereas the latter concentrates on ‘thin’ conceptions of the good, or on what is ‘right.’ What belongs to the ‘good’ and what belongs to the ‘right’ is always a matter of contention. To illustrate: in a society where most people believe in God and in a moral order hidden in nature, such limits are accepted as belonging to the domain of the right, of what is neutral. In a modern, secular, pluralist society the same limits would have to be qualified as belonging to good life ethics, because they rest on non-neutral, and therefore substantive, conceptions of the good.

Ethically Relevant Ambivalences of Nanotechnology, and an Arena rather than an Agora Model

Clearly, a large part of the discussion of nano-ethics is about NEST-ethics. But there might be ethical issues somewhat specific to nanotechnology as well. We think there are, and that these can be brought out by focusing on ambivalences in nanotechnology. Ambivalences imply that there is no simple resolution: an attempt to go for one side of the ambivalence brings out the problems linked with the other side. Nanotechnology introduces new ambivalences, and enhances existing ones. Nano-ethics then cannot be an application of existing moral routines to a specific sector or technology, here nanotechnology. It must venture into unknown territory: how to address ambivalences.

The slogan for nanotechnology, ‘size matters,’ is ambivalent, as we noted already in the introduction. At the nano-scale, the small size (of particles) creates unexpected new properties, for example the reactivity of nanoparticles of gold, a metal that is noble (inert) in bulk. Unexpected new properties could just as well constitute hazards, however. If “size matters,” nanotechnology will structurally introduce risks. As Maynard et al. [28] put it: “Concerns have been raised that the very properties of nanostructured materials that make them so attractive could potentially lead to unforeseen health or environmental hazards.” They develop their argument in terms of novel risks, reduced public confidence and fears of litigation that may make nanotechnologies less attractive to investors and insurers, with specific reference to the earlier experience of asbestos health hazards and the financial risks they posed to the insurance industry. Indeed, Swiss Re’s earlier intervention in the debate illustrates such concerns [48, 49].

Such arguments take an enactor’s position as their starting point. The recipients of nanotechnology, comparative selectors as it were, might argue that they do not need this new technology, with its inherent hazards, and will thus be reluctant to buy into the promises.

The key point for nano-ethics is that no stable consequentialist assessment is possible for nanotechnology as a whole. Every further venture on the nano-scale can create new and unexpected benefits as well as potential problems. For health, environmental and safety regulation this implies that there cannot be a generic regulatory approach. There will always be novel types of effects that have to be taken into account. So for regulation (with its inherent consequentialist-ethics bias) a learning approach, with emphasis on monitoring and self-reporting, becomes important (cf. [8]). For nano-ethics more generally, a learning approach as advocated in Dewey’s pragmatism will be important [9].

Another such ambivalence arises from the delegation of agency to nano-enabled technology like smart dust, and to active systems in general. For example, in theranostics (the combination of diagnostics and therapeutics), a nano-enabled micro-system in the body can make a diagnosis and, when finding, say, the wrong concentration of substance X, do something about it of its own accord – or start attacking cells diagnosed as cancerous. Such active systems are not yet in place (except perhaps in some military technology), but it is clear that they will introduce uncertainties about what will happen. Renn and Roco [35] have discussed this in terms of risk governance, arguing that traditional risk analysis cannot handle it, and that scenario approaches should be developed. Further deliberation is necessary, in which expert scenarios are combined with what John Dewey called ‘dramatic rehearsals’ (see [6], 113–14).

Delegation of agency to technology is not new. Bruno Latour has highlighted the de facto morality of seat belts and of the signs that warn you when you have not put them on; speed ramps in roads are even called “sleeping policemen” (Latour [25, 26]: 186–189). This has been turned into proposals to create artifacts that will do moral work (Verbeek [51, 52], who elaborates an idea originally proposed by Achterhuis [1]). All these examples, however, are of passive systems. Active systems are now being considered and put into practice, for example advanced driver assistance systems, which could make it impossible for a driver to change lanes if there were a car in the other lane. The morality of active systems is of a different kind. In the example of advanced driver assistance systems, it is programmed into a device located in the car, and is thus traceable. The nano-enabled systems that are envisaged, however, wander about in the world, helping us but perhaps also creating havoc; one cannot have the one without the other. This may well be a cause for concern, and then lead to proposals for a moratorium on such active systems. An example is Altmann and Gubrud’s [2] argument for a moratorium on independently active micro- and nanosystems for battlefields. On the other hand, a moratorium can be seen as prohibiting learning about the new technology. We will not enter into this particular discussion, but do note that the whole notion of learning about new technology becomes complicated when the technological device or system has, in a sense, its own agency. While this is not specific to nanotechnology, and thus does not qualify as nano-ethics per se, it is definitely a cluster of ethical issues that is important for technologies that converge at the nano-scale.

Another type of ambivalence is linked to the enabling character of nanotechnology: eventual effects are co-produced by the way it is embedded in devices and systems, and by how these are taken up in society. A cluster of ethical issues is involved here, from foundational issues about distributed agency undermining traditional individual-based ethics, to actors’ tactics in attributing praise and blame – up to gerrymandering, or “elasticity of moral thinking,” as brought out in Ravetz’s aphorism: “Science takes credit for penicillin, while Society takes the blame for the Bomb” (Ravetz [34]: 46).

The ambivalence resides in claiming positive impacts for one’s action – here, developing a nanotechnology – while perhaps also being held responsible for possible negative impacts. To avoid being blamed, one can tone down one’s agency (we couldn’t help it, we’re only a small part of a bigger whole) – but then one can no longer be praised for what one has claimed to have brought about.

One further implication is that it will often not be clear who is responsible for what (and thus who might be praised or blamed). Beck [4, 5] made such “organized irresponsibility” a key part of his diagnosis of the risk society. The organisation of irresponsibility is visible in how enactors of nano-enabled technologies position themselves.

Working towards biosensors for diagnostics, and towards passive and active drug delivery systems as well as their combination (theranostics), is an important activity in the nanoworld, and definitely so in its repertoire of promises. When R&D people at the Philips Company (for which new diagnostic tools are one of the strategic directions) were asked about societal impacts, such as changing responsibilities when point-of-care diagnostics become widespread, they said that was not their business. “Others should look into it” – at some later time, but by then the parameters of the situation may already have been fixed by the shape the technology has taken. For the Philips people, this is a matter of pragmatic negligence, and of hope that others will be diligent. At the societal level, it amounts to organised irresponsibility. When this is made explicit and debated, perhaps contested, there will be a struggle about new roles and responsibilities, and about distributing praise and blame (before the fact).

Organised irresponsibility is a general feature of our late-modern risk society, and nanotechnology is then one further site where it is played out. Because of the promises made for nanotechnology – up to under-secretaries (the Philip Bond quote) claiming that the blind will see and the deaf will hear, whether they want to or not, we add – enactors do not see any responsibility other than achieving the promise.

This leads on to a general NEST-ethics issue, the ethics of promising technology. In a nutshell: Thou shalt not exaggerate without reason. But there is a reason: one has to mobilize resources to be able to realize (materialize) the promises, and one has to do so in competition with many other claims on those resources. One has to claim more than is reasonable in order to be able to realize what is actually a reasonable claim.

This issue is widely recognized in the nano-world. It leads to an ambivalence in the sense that two different actor strategies are possible: be willing to inflate expectations, and hope that disappointment will not run high; or be modest, to avoid a backlash. The former strategy is dominant in the USA, up to recurrent references to human enhancement. In Europe, the sales rhetoric for nanotechnology is about just making things smaller and faster.

What we have shown in this last part is that nano-ethics does introduce some specific questions for NEST-ethics, and makes existing questions and issues more urgent. In particular, there are ambivalences which create challenges for actors as well as for ethical analysts. As we noted already, the general ethical point about ambivalences is that there is no simple resolution: an attempt to go for one side of the ambivalence brings out the problems linked with the other side. In other words, deliberative approaches oriented towards consensus (the agora model) will not work – not because of communicative problems between people with different positions and views (which might be overcome by better interaction), but because of the structure of the situation.

One might view NEST discussions as rational, consensus-seeking deliberations. The idealized paradigm for this type of deliberation is the classical Athenian agora: the market place where the free citizens gathered to decide, solely on the strength of arguments, the good of their polis. Instead, one should start with actor strategies serving particular interests: not an agora where Rousseau’s volonté générale takes form, but an arena where some win and others lose. In an arena, consensus is never reached, although a workable compromise is sometimes achieved. Consensus-seeking models can at best provide temporary stabilisations.

It is not a fight of all against all, however. In the public sphere of our societies, characterised as functioning democracies, all players are forced to seek legitimacy for their standpoints. Such legitimacy can only be acquired by participating in deliberation. To win in the arena, participants have to act as if they were in an agora. In other words: even if the agora is an illusion, it is a necessary one, and it is productive [44].

Since Machiavelli, political theorists have pointed out that struggle among an irreducible plurality of perspectives can be productive. Diversity, heterogeneity, incommensurability, and antagonism – they can tear the fabric apart but they can also help to keep it vital and vigorous. Probing each other’s worlds goes together with competition for primacy in a universe of discourse with others who cannot beforehand be branded as unreasonable. Such reflexive awareness rejects the naivety of dogmatic beliefs, recognizes its own fallibility, and leaves room for reasonable dissensus. Pragmatist ethicists can contribute by helping develop different tools for ‘conflict’ and ‘dilemma’ management to enhance mutual respect [23].

In sum, there may not be a need for a dedicated nano-ethics. Our discussion of essential ambivalences in nanotechnology has, however, thrown up specific challenges to ethics which require us to think through a number of basic issues, including the role of struggle. Nano-ethics then becomes a site for general ethical analysis.

As an instance of NEST-ethics, nano-ethics will reproduce the general patterns to some extent, but also modify them. An important point, which remained implicit in our discussion of NEST-ethics, is the co-evolution of ethics and new technologies: while there are recurrent patterns of moral argumentation, there is also learning, there are shifts in repertoires, and new issues come up. The presently widespread acceptance of precautionary approaches (definitely in Europe) is an example of such a shift. What one now sees happening with nanotechnology is a further kind of precaution: promoters do not want impasses to occur, as happened with green biotechnology, and so go out of their way to communicate with publics and politicians [40], and to discuss ethics and societal aspects at an early stage. While the debate still often follows the lines of the patterns of moral argumentation we outlined, there are now openings for further articulation.