THE PROBLEM OF ROBOT CONSCIOUSNESS
PHILOSOPHY AND PHENOMENOLOGICAL RESEARCH
cies, the only ones it has. Persons and person status are no more "natural" or "artificial" than is society itself. If society is sometimes a precarious affair, this is a misfortune which reflection alone will not affect.
In this article, I want to bring to bear on the problem of robot consciousness the general principles I have just sketched out. If only society can make a person, then only society can make a machine a person - if indeed society can. The computing machine which the logician in his study conceives and the engineer in his laboratory builds cannot become a person ("just like us") until we collectively declare it one, perhaps by an act of Congress or something of the sort. The prospect is a challenge to the disciplined imagination.
ture and control. ("Body" and "mind" are the conventional terms.) Whatever something is made of, whatever structure it has, if it can be controlled as we control machines, that seems to make it a "mere" machine, a robot. And if something has the structure of a mechanical artifact, if it is made, say, of gears, wires, and transistors, then however it may be controlled, that seems to make it a robot too. Things that are robots by both criteria are familiar today and present no problem. Intuition no longer guides us, however, when we ask whether something can be a robot by one criterion but not by the other. We are, in effect, changing the relation between mind (control) and body (structure) by varying the terms independently of one another.
If something has a human mind, but a mechanical body, then it is a robot by the structure criterion but not by the control criterion. I shall call the problem of the possibility of such a being "the structure problem." If something has a human body, but a mechanically controllable mind, then it is a robot by the control criterion but not by the structure criterion. I shall call the problem of the possibility of such a being "the control problem."
In the Foundations of the Metaphysics of Morals, Kant writes:

Now we cannot conceive of a reason which consciously responds to a bidding from the outside with respect to its judgments, for then the subject would attribute the determination of its power of judgment not to reason but to an impulse.
While it makes sense to say of someone else that even though he looks human, he just might possibly be a mere machine like a can opener, it does not make sense to say this of oneself. When persons do make such assertions about themselves ("I am a mere instrument in the hands of the Party"), we take them to be resolving, not reporting. The control problem is a third-person problem: one can raise it about someone else, but not about oneself.
The structure problem, on the other hand, has both third-person and first-person versions. The third-person version is familiar. Indeed, it has monopolized the attention of students of the problem. Could a mechanical artifact do things which would require me to call it conscious, to credit it with a mind like my own? How could it do this, what would it have to do? This is the third-person version. The first-person version is: could I have my own mind in a nonhuman body, for example, a gear, wire, and transistor body? I am not now a robot, but could I become one, or be transformed into one?
Now this may seem a silly question. To see the point of it, we have to recall the functionalism which is the controlling assumption of our argument. Like "person," "mind" and "consciousness" are social concepts. Whatever may be their substantive or material content, they are formally reciprocal. As we observed, reciprocity is enforced: others will judge one's own claim to have a mind or to be conscious in the light of the accuracy with which one detects mind and consciousness in them. Each of us necessarily thinks that he himself has a mind and is conscious. Whenever someone attributes a mind or consciousness to something else, then, he must mean at least that it has what he has. If, then, one decides in advance that one could not possess one's own mind except in the familiar flesh-and-bone body, one would appear to have decided the third-person structure problem in advance. If I could not possibly have what I have in a mechanical body, then it does not make sense to suggest that something else could have what I have in a mechanical body. We ought, therefore, to discuss the first-person version of the structure problem before going on to the third-person version.
"I have what I have" (viz., mind, consciousness) is a tautology, a necessary truth. Therefore it does not depend, either for being true or for being known, on one's knowledge of one's body. If the transition could be managed, there seems no reason why one could not have the same self-consciousness in a different kind of body, in particular, a mechanical body. Mechanical substitutes for individual organs are as old as the wooden leg. Given such contemporary sophistications as mechanical heart valves and artificial kidneys, it seems plausible to suppose that it may someday be possible to replace every organ of the body, even the brain, by a mechanical substitute - an up-to-date version of the ancient concept of transmigration.
This transmigratory fable has to make sense as a first-person story: one has to be able to put oneself in the place of the hero. Imagine, then, that one has volunteered (perhaps out of scientific curiosity) to become the world's first robot-convert, the first person to be transformed into a machine. One cannot imagine the transition accomplished at a single step - what would one imagine? - so we may suppose it accomplished gradually, through a series of surgical operations in which one by one the various organs of the body are replaced, the brain coming last of all. If the operations are successful, one will emerge converted into a robot. (Note that nothing is implied here about one's freedom of the will. The question is, "Could I have what I now have in a mechanical body?" Whether what I now have includes freedom of the will is another question.)
I take it that no operation in the robot-conversion series except the
That is to say: "God sees - but we don't know." But what does that mean? We use a picture; the picture of a visible series which one person sees the whole of and another not. The law of excluded middle says here: It must either look like this, or like that. So it really - and this is a truism - says nothing at all, but gives us a picture....
How? You are trying to have the results of communicating without the communicating.
Could I have what I have in a mechanical body? Does it make sense to decide the question in advance? We have seen, at least, that by a "body" one means something with which one interacts with others. Moreover, by envisaging the robot-conversion process as a step-by-step process (and again: how else could one imagine it?), we have surreptitiously introduced a whole host of "interaction conditions." The mechanical body is to be of a convenient size, neither too large nor too small. It is to behave at a certain pace, neither too fast nor too slow. Perhaps the further we pursue the question, the more concretely we imagine the situation, the more the robot body will come to seem just a slight variation on the conventional flesh-and-bone body. Indeed, since personification is enforced, and imagination disciplined, this should come as no surprise.
eyes - what? Can one keep the picture in focus? Surely one has to imagine it having a dull, lifeless stare and making jerky, mechanical movements. By hypothesis, it "has" a human body. But what does that mean? It cannot put on a human performance, it cannot "have" a human body the way persons have human bodies, or it will trigger off one's personificatory proclivities, and one will turn it into a degraded human rather than a machine.
Murdering a man is not like shooting at a target, nor is torturing a man like twisting a rope. Acts of the first kind have a different moral quality from acts of the second. If by "person" we mean "sane person," then a person is not logically required to act virtuously, but he is logically required to be able to act virtuously. I assume the reader sane (however vicious). Take, then, your attitude towards a machine, say, a can opener. Now apply that attitude to a living human body. Can you really do it? When one demolishes the can opener, one does not feel one is insulting it or degrading it. Imagine yourself demolishing a living human body - I leave the means unspecified. To make the picture at all plausible, does one not have to imagine oneself intending the act as a degradation of the victim?
It is tempting to object that whether one can or cannot take up an attitude proves nothing. But just as we know the difference between real and make-believe personifications, we also know the difference between real and make-believe depersonifications. The attitudes we are required to take toward that most potent of symbols, the human body, are reinforced by the most obvious social norms, norms whose force is so great that to ignore them is literally madness. So we may press the case: the cadaver-part machine puts on a human performance. And yet we control it, we think it a mere machine which we can destroy with impunity. But how is it a machine, and how do we control it? The two parts of the picture - human performance, nevertheless machine - will not both come into focus at once. What puts on a human performance with a human body simply is a person. So much for the control problem.
But what constitutes a human performance? Could a machine put one on? One has to see that a certain sort of example is insufficient here, temptation to the contrary. Our ingenious computer is bolted to the floor and has "Property of US Govt" stamped on its console. We turn it on and test its intelligence, let us say by playing Turing's "imitation game" with it.2 Now the game is over, and the computer insists, "You see? I'm human just like you! I won the game, I fooled you every time,
3 For an excellent brief discussion of this, see Erving Goffman, "The Nature of Deference and Demeanor," American Anthropologist, Vol. 58 (June, 1956), pp. 473-502; reprinted as No. 97 in the Bobbs-Merrill Reprint Series in the Social Sciences.
4 The error involved is subtle. For example, Hilary Putnam writes, "I will imagine that we are confronted with a community of robots...." ("Robots: Machines or Artificially Created Life?" J. Phil., Vol. LXI, No. 21 (Nov. 12, 1964), p. 677). But how are we "confronted with a community"? If I understand how several robots form a community, then I must have a criterion of robot identity, a criterion which enables me to understand how, e.g., robot A identifies (projects himself into, puts himself in the place of) robot B. But this must mean I put myself in the place of robot A, etc. Then the question of the consciousness or person status of robots is already settled.