Dr Chris Groves
But “right” here has a very specific meaning, which moral philosophers typically
distinguish from two other senses in which one could use the word. For example, you
might be concerned with the right way to act, meaning something like "what is the
best way of achieving my chosen goal here?" "Right" here means efficacious,
efficient, or just "most likely to be successful". Thinking about how to act in this way
has nothing to do with ethics – it is about what moral philosophers like to call prudence.
Ultimately, trying to figure out how best to realise a goal aims to make a judgement
on the means one should use to reach a given end, not on the value of the ends
themselves.
Alternatively, one might think about how to act in relation to what “feels” right in a
given situation. Here, "rightness" refers to the force of an emotional motivation to do
one thing rather than another – like donate to a charity that works in developing
countries rather than go for a meal in a restaurant. Here, there is no reflection on the
value of ends either – the goal of action is simply to act in accord with one’s
inclinations.
Ethics, on the other hand, typically understands "right" actions as ones which would
be judged to be such by anyone who is rational.[1] In other words, it refers to a standard of
action that is independent of criteria like efficiency and effectiveness, and is
independent of how we feel about some goal. There can be actions which are effective
– like ethnic cleansing as a means of securing resources for a community – and
purposes that are for some people emotionally appealing – such as executing burglars
– which we may nonetheless argue are not morally right.
If ethics deals with a very specific meaning of the word “right”, then we might also
expect that when moral philosophers disagree over an ethical issue, they disagree for
very specific reasons. Ethical disagreements are not about factual concerns, in the
sense of facts about how the external world works which can be established with
reasonable certainty by conducting experiments, making observations and formulating
theories. On the contrary, they are about basic principles and values which people
rely on in assessing whether they should act in a particular way.
Many apparently ethical disagreements actually concern facts about the external
world, and are thus not genuinely ethical in nature at all. Where disagreements are
genuinely ethical in nature, they will typically concern issues like the relative priority
of different values or principles.

[1] What "rational" should be taken to mean here is, of course, debatable too…
Among these principles and values might be the following: human rights,
responsibilities or duties, the common good, utility, harm and justice. Ethical debates
will often concern which of these principles might be the most important in a
particular case, or in any given case. They might also concern to whom a given
principle applies. For example, we might ask who has rights: all humans, all non-
foetal humans, humans and primates, all animals, all living creatures, and so on? Or:
who can be harmed by our actions? Only living people – or also dead people? Future
people? Only creatures which can feel pain?
Looking at the journalistic interest in nanotechnology over the last few years, the
ever-growing ethics literature[2] and the debates among regulators, there are two
notable cases of new ethical issues which have been associated with nanotechnology,
along with a number of examples of how specific potential nanotech innovations may
change the way we think about existing ethical problems.

[2] See for example http://www.nanoethics.org/.
1. Grey Goo

Some have argued that, should advanced nanotechnology be developed, there is a
remote possibility of nano-engineered robots replicating out of control and consuming
the biosphere as their numbers grow exponentially – so-called global ecophagy, or
"grey goo". This scenario originated with K. Eric Drexler, one of the originators of the
concept of molecular nanotechnology (MNT), the (still hypothetical) use of nanoscale
scientific techniques to achieve precise enough control over chemical reactions to build
complex macroscale structures "from the bottom up", from the atomic level.
Given that this outcome is a possible outgrowth of nanoscale science, some have
suggested, it is morally impermissible to continue with nanoscale science. The
argument has the form of a slippery slope: doing A (nanoscale science) leads, through
intermediate steps B, C and so on, eventually to Z (grey goo). Does this make sense
as an ethical argument?
An argument of this kind isn't really about principles or values. If it were, then it
would be all about whether Z is really bad and, if so, on what grounds. If you had very
extreme views about suffering being bad, for example, you might decide it would be
better that all living things perished rather than continue to live and suffer, and
might quite welcome the idea of global ecophagy...
The slippery slope is a question of facts, not values: it only exists if each of its
premises and the links between them (A → B → C → … → Z) can be factually
established. Taken independently, these steps may be unlikely; taken together, their
joint probability may be vanishingly small. If this is the case, then we have little
reason to conclude that
there is a slippery slope. In the case of grey goo, there are very many intermediate
steps that must be realised before it could even be a possibility: before you can build
runaway, self-replicating nanomachines, you first have to build any nanomachine at
all, and then build nanomachines that can self-replicate under carefully controlled
conditions, and so on. Most nanoscience now is arguably a continuation and
refinement of forms of chemistry which have been around for a long time – it is a
long way from the kind of precise atomic control and manipulation that people such
as Drexler had in mind.
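The point about compounding improbabilities can be made concrete with a small sketch. The step probabilities below are purely illustrative assumptions (nothing in the debate assigns actual numbers to these steps); the sketch only shows how a chain of individually unlikely steps becomes jointly remote:

```python
# Hypothetical probabilities for each step in a slippery-slope chain
# A -> B -> C -> ... -> Z. These numbers are illustrative assumptions only.
step_probabilities = [0.5, 0.2, 0.1, 0.05, 0.01]

# Treating the steps as independent, the probability of the whole chain
# reaching Z is the product of the individual step probabilities.
joint = 1.0
for p in step_probabilities:
    joint *= p

print(joint)  # roughly 5e-06: each step merely unlikely, the chain vanishingly so
```

Even if every step were generously granted a 50% chance, a twenty-step chain would come out below one in a million.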
So, the grey goo argument is really about what nanotechnology can factually
accomplish, not about whether pursuing one path of innovation is morally right or not.
But does this mean there are no ethical issues implicated within the grey goo
scenario? As I said, if a possible (but highly unlikely) future scenario for the use of
nanotechnology were ethically troubling, this would have to be because it has
implications for ethical principles. We saw that grey goo is really about a set of facts,
which have to do with the present and future of nanotechnology. But we could
reformulate the argument that we shouldn't pursue advanced nanotechnology research
in case it leads to grey goo in more general terms, which do relate to principles. If a
given path of technological innovation carries great risks – including risks so great
that, if they come to pass, they entail the destruction of the world – should we pursue
this path at all?
Here the argument becomes: if doing A risks Z, and Z is serious enough to infringe
fundamental moral principles (such as your right not to be consumed by my
nanobots), then the intervening steps do not matter – taking the risk is itself
bad enough. This is the root of what has been called the precautionary principle, a
regulatory standard for judging right and wrong actions which has been incorporated
in European Union law. Basically, this states that scientific uncertainty about the
outcomes of an action does not give us an excuse for going ahead with it, should
there be "reasonable suspicion" that it may have very serious consequences. This
means that the absence of proof of present risk should not be taken as proof of no
risk, and that there is therefore a burden of proof on those who want to use a
particular technology to show that it can be developed without leading to serious
negative outcomes.
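The logical shape of this burden-of-proof shift can be sketched as a toy decision rule. The function and predicate names are my own illustrative assumptions, not terms drawn from EU law, and the boolean inputs stand in for what are, in practice, contested judgements:

```python
# A toy encoding of the precautionary principle as characterised above.
def may_proceed(reasonable_suspicion_of_serious_harm: bool,
                safety_shown_by_proponents: bool) -> bool:
    # Where there is reasonable suspicion of very serious consequences,
    # the burden of proof shifts to those who want to use the technology.
    if reasonable_suspicion_of_serious_harm:
        return safety_shown_by_proponents
    # With no such suspicion, the precautionary bar does not apply.
    return True

# Absence of proof of risk is not proof of no risk:
print(may_proceed(True, False))   # False
print(may_proceed(True, True))    # True
```

The point of the sketch is that uncertainty alone never licenses going ahead: only a positive safety case made by the technology's proponents does.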
2. Nanopollution
This concept of precautionary development gets us to the second of the new ethical
problems which some have associated with nanotech – that of nanopollution. This
differs from grey goo in that it doesn’t concern some postulated future technology
which may or may not evolve. Rather, it concerns scientific uncertainties which
surround nanomaterials that are being developed and commercialized now. Many
nanomaterials (for example, nanoscale silica used in the electronics industry for
polishing circuit boards) are generally considered to be essentially no different in
their properties from their larger-scale equivalents, and to be safe or at least easily
controlled. However, some nanoscale materials (particularly metal oxides, carbon
nanotubes, fullerenes and so on) have unusual size-related properties which raise the
suspicion that they may have serious effects on human health or on the natural
environment, should there be any chance of uncontrolled exposure at some point in
their lifecycle.
If such effects are a possibility, they may take a long time to appear, maybe after a
long cumulative series of exposures or pollution incidents. The need to consider
chronic exposure is hard to fulfil in practice: how do we test for its effects? Given
these possibilities, would it be morally wrong not to take special precautionary
measures against exposing people (especially workers in labs and factories) to
carbon nanotubes? Would it be morally wrong to keep adding nanosilver to socks just
to stop them getting smelly?
Here, the relevant ethical principles to consider might be individual rights – the right
to consent to having data collected about me, the right to access this data (by the
person whose data it is and by others), and the right to limit how it is used. But it is
not just surveillance technology that has to be considered – improved medical
diagnostics might mean that it is easier to detect dispositions to certain genetic
disorders by checking DNA, or to monitor chronic health conditions. Once such data
is available, however, who owns it and who can access it is once again of supreme
importance. Certainly, we might want our own doctor to have access to this
information, but what if health insurance companies want access too? These issues are
established ones which are already being discussed with respect to biotechnology.
3. Military Applications
When it comes to uses of nanotechnology in the military, there are some relatively
innocuous possibilities. It may well lead (as in civilian applications) to new advanced
materials which can be incorporated in body armour, vehicle armour and so on, as
well as new electronic devices for use on the battlefield.
But nanotech may also make it easier to design biological weaponry, as a result of
enhanced control of chemical and biological processes at the nanoscale. Micro-organisms
with specialised characteristics could be easily produced – characteristics such as the
capacity to overcome immune systems, to react selectively with specific genetic
patterns, or to enter the body and cross tissues such as the blood-brain barrier more
easily, and so on.
The question of the ethics of risk arises here too – what if such micro-organisms were
released from a research environment? How could this be guarded against? How
could they be tested in an ethical manner? Moreover, there is a long tradition of
philosophical thinking about what constitutes the morally right way to fight a war.
“Just war” theory, for example, has sought to establish distinctions between moral and
immoral ways of fighting based on the distinction between combatants and non-
combatants. It is held to be inherently immoral to target non-combatants. Just as it is
often argued that nuclear, chemical and existing biological weapons are inherently
less moral than conventional weaponry because of the way they make it harder to
honour this distinction, so it could be argued that nanotech weaponry may lead us
further down this route of indiscriminate killing.
This kind of argument has to do with the ethical principles of justice, or of the
common good. These state that one should always act so as to maximise the justice or
fairness of the outcomes of one's actions (in this context, avoiding reinforcing existing
economic inequalities or creating new ones); or that one should act so as to promote a certain
positive idea of the good life, in which essential human needs and all aspects of
human life that are most conducive to well-being are enhanced. Promoting health,
access to adequate shelter, a non-polluted and biodiverse environment, access to
education, political freedom, creativity – all these might be elements of such a vision.
It’s evident that, without explicit efforts to reflect on the ethical significance of our
technological priorities and to do something with the results of our reflections,
development will continue to – so to speak – "follow the money". The Woodrow
Wilson Institute in the US maintains an online directory of consumer products which
are based on (or are claimed to be based on) nanotechnology.[3] The majority of these
fall into the category of luxury goods, aimed at consumers in well-off countries, with
sports goods, cosmetics and consumer electronics being particularly well represented.
Similarly, when it comes to health issues and medical applications, a lot of resources
are being poured into developing new treatments for cancer which rely on advances in
nanoscience, including new forms of targeted drug delivery. But cancers tend to
afflict well-off populations more than poor ones. Greater profits come with fulfilling
consumer preferences and needs which are located in the developed world, and
among elites in the developing world. The potential of nanotechnology as a means of,
for example, developing new forms of solar power generation or cheap means of
filtering drinking water to prevent the water-borne diseases which kill so many people
in developing countries may ultimately develop into commercial applications much
more slowly. This may, however, change over time – countries like India and China
are investing heavily in nanotechnology research.
4. Human Enhancement
Human enhancement refers to the use of technologies to change, adapt or improve the
basic capacities of human beings. Biotechnology and nanotechnology have been seen
by some as a means of overcoming what some see as the “limitations” of human
minds and bodies – from limits on speed and accuracy of human cognition, right
through to the ageing process, and dying.
Like grey goo, much speculation about the human enhancement possibilities of
nanotechnology stems from a particular vision of what nanotech could be in the future
(rather than facts about what it is in the present). Advanced manufacturing capabilities
at the atomic and molecular level may (it is postulated) lead to the construction of (for
example) artificially intelligent microscopic robots which are capable of repairing cell
and tissue damage, thus extending the human lifespan. Nano-enabled electronics may
make possible implants which could be introduced to the brain to improve memory,
reasoning capability and access to information. Nano-enhanced bionics may improve
physical performance to superhero levels.
Some argue that it would be wrong to use technology to make fundamental alterations
to human beings because there is something inherently worthwhile about being
human, with all its limitations, and that humans make their lives meaningful by
striving to overcome or deal with these limitations through their own efforts. Using
technology to simply remove these limitations is therefore cheating – it lessens the
significance of human efforts, and treats humans as just one more technical device
whose worth has to be measured in terms of how far it achieves optimal performance.
This view therefore rests on the principle that humans should not be treated as mere
means to an end, but rather as ends-in-themselves. Human dignity is proposed as an
ultimate value against which standards of right and wrong should be judged (de S.
Cameron 2006).

[3] http://www.nanotechproject.org/inventories/consumer/
But one problem which defenders of this view have to deal with is that it is difficult to
specify what dignity means independently of some concept of what normality is. For
example, we might say that normal humans are not very good at digging holes with
their natural endowments – unlike your average mole. So is a spade a
form of human enhancement? Going further, is it possible to see all forms of
technology as human enhancement?
Further, what about forms of technology which have a therapeutic function? Someone
who develops cancer has, in a sense, lost their "normality". But we would not say
that, if a technology existed which cured their cancer, they should not receive it
out of respect for their dignity. What if someone was born blind, and a technology
could restore something like 20:20 vision? We can see where this is going: if one
wants to defend the "human dignity" view, one has to defend a distinction between
therapy and enhancement – but to do this, one has to define what one means by
normality, or normal performance. Given that technology (from spades to computers)
is already ubiquitous in human societies, this is perhaps not easy to do.
The way science is done in industrialized societies has changed. It used to be thought
that scientists worked on basic research, and somewhere down the line, technological
applications were found which exploited some of the insights they came up with, and
these were developed perhaps with the assistance of government or within industry.
However, since the Second World War, it has increasingly become the case that
science – whether done in industry, in government labs or in academic institutions – is
surrounded by external priorities which shape what research gets done, even at the
level of basic science. Public funding and private capital combine within consortia of
universities and private companies, with the aim of developing further technologies
that private and public bodies (like the Department for Business, Innovation and
Skills in the UK and associated bodies like the Technology Strategy Board) expect
will be important in the future for tackling major social problems.
In other words, the contract between society and science has shifted: rather than
simply letting scientists do basic research and then seeing what comes of the results,
society now supports scientific research financially with certain priorities in mind.
Even basic research is shaped by these expectations in ways that were not common
even fifty years ago. Expectations about how far science should be accountable to
society, and in what ways, therefore differ considerably from those of the past.
As public controversies over new technologies in the 1990s showed, when the
non-scientific public have negative feelings about a
technology, it is often because they are not convinced that its benefits, either for them
or for society in general, will be great enough to offset any troubling uncertainties or
risks which surround it. One of the main reasons for this is that people tend to distrust
the capacity of the public and private institutions which make promises on behalf
of scientific research to deliver on these assurances about future benefits, and to
safeguard against risks which may emerge further down the line.
This is reflected in the positions of key NGOs like Greenpeace on nanotech: rather
than taking a stand against nanotech as such (as recommended by those who are
worried about grey goo), they demand that its use be guided by public discussion of
social priorities rather than by what applications may generate the most profit.
References
de S. Cameron, N. M. (2006). "Nanotechnology and the Human Future: Policy,
Ethics, and Risk." Annals of the New York Academy of Sciences 1093: 280-300.
Donaldson, K., C. A. Poland, et al. (2008). "Carbon nanotubes introduced into the
abdominal cavity of mice show asbestos-like pathogenicity in a pilot study." Nature
Nanotechnology 3: 423–428.