
You have reached part of the Mulhauser Consulting legacy site. Please note that the legacy pages of the Mulhauser Consulting site have not been actively maintained since 2003. Please visit the current home page of Mulhauser Consulting, Ltd.
What is Self Awareness?
This article offers one look at the notion of self awareness as it bears on the project of building cognitively sophisticated robots. A companion article addresses some simple objections.
• Introduction
• Picking Definitions of Self Awareness
• Self Models: What They Are
  ○ The Denser Approach
  ○ The Easier Approach
• Self Models: What Good Are They?
• Objections
The single most significant point for dispelling the confusions which sometimes infest talk of self awareness is that it need have nothing whatsoever to do with consciousness. No single definition of self awareness commands a dominant mind share in the specialist literature, and it is true that it is sometimes conflated with consciousness. But within the context of analysing cognition and of building robots, we are safe in treating self awareness as a purely cognitive characteristic, while 'consciousness' refers to the phenomenal properties of mental processes. (For details, see the short notes on consciousness or on qualia in the section containing tutorials & introductions.)
The relationships between (cognitive) self awareness and (phenomenal) consciousness make interesting discussion topics, and they live at the heart of a great deal of contemporary philosophy of mind. It may be, for instance, that achieving the cognitive trick of self awareness guarantees phenomenal consciousness, or it may be that consciousness could even exist without any cognition at all.
Fascinating though these questions are, it's beyond the scope of this
short paper to flesh out positions on such relationships (see Mind
Out of Matter for some of that!). The important point for now is
simply to recognise that the two are altogether different things.
Picking Definitions of Self Awareness
Precise definitions of self awareness vary wildly, but usually they
are picked to capture one or several aspects of the term's intuitive meaning. The intuitive understanding of self awareness underlying most of the literature on the topic is bound up with the notion of an organism using some concept of its own self -- as an item situated
within an environment -- to affect its behaviour. For instance,
experiments show that most primate species are unable to recognise
themselves in mirrors. (Specifically, if a spot of paint is applied to
their forehead and the animals are then offered a mirror, they fail to
respond to the image in the mirror by touching their forehead to
investigate the spot; chimpanzees and humans are notable
exceptions.) This, it has been suggested, indicates that such animals
lack self awareness -- even though they may perfectly well be
phenomenally conscious. (I.e., they feel pain, see red, etc.) On this
view, because the animals apparently lack the ability to translate a
picture in a mirror into information about themselves, they are not
'self aware'.
In my own work, I actually tend to avoid this slant on self
awareness. This psychologically inspired notion of self awareness
seems to me to place too much weight on an organism's capacity to
conceptualise its self in a highly abstract fashion.
Instead, my principal interest centres on the question of whether an
organism (or robotic system) can be said to have a self at all,
whether or not it actively engages in abstract conceptualisation of
that self. In Mind Out of Matter, I develop an information theoretic
description of the self model, understood as a type of data structure
which can reasonably be called the self of that which instantiates it.
Self models capture some notion of 'self awareness' in the sense that
organisms possessing self models display cognitive processes which
incline us to credit them with something like the intuitive notion of
self awareness. As such, they offer one way of making precise that
intuitive understanding of self awareness. But the notion is clearly
much broader than that to which psychologists appeal in denying
self awareness to most primates. I would like to make two things clear:
• I take self models to be one cognitive route to self
awareness, but not necessarily the only route; other
architectures might also embody cognitive processes which
justify our calling them self aware.
• The notion of a self model is a definition, but its role in real
organisms is an empirical hypothesis.
In other words, I am just defining a class of architectures which
yield some important features of the intuitive notion of self
awareness, and I am hypothesising that such an architecture
subserves the type of cognitive processes which, in real organisms,
would lead us to characterise those organisms as self aware.
Self Models: What They Are
The Denser Approach
Each 'weasel word' in the following paragraph will be spelled out below.
The self model is understood as a conditionally coupled,
functionally active representation of the environmentally situated
sensory and motor system of which it is itself a part. (Perhaps the
label 'self-in-a-world-as-it-looks-from-here model' is more
descriptive, since the dynamic data structure essentially reflects a
centred perspective on the world and the system within it.) The self
model functionally represents itself, the environment, and the
interactions between and within each.
The fundamental notion of information underlying notions like 'data
structure' is that of algorithmic information content. I take
representation to be a symmetric relationship obtaining between
two physical bodies when those bodies contain substantial mutual
information content; alternatively, two items represent each other
when they are not algorithmically independent. (For those with an
interest in the literature, I would draw this to your attention: Hey, an
actual formal definition of representation!) Extending the
definition, an item x functionally represents another y when it
contains substantial mutual information content both with y and
with a set of transformation axioms describing the range of changes
in y for a given domain of conditions. Calling a representation or
functional representation functionally active means that the relevant
body of information plays a role as a representation in a functional
system. ('Functional system' can be another weasel word, but it is
beyond the present scope. In Chapter 5 of Mind Out of Matter, I
define a novel measure of process complexity called functional
logical depth and use it to provide what is arguably the first precise
and mathematically nontrivial account of functional systems.)
Finally, a functionally active representation which is conditionally
coupled to that which is represented may temporarily become
information theoretically 'disconnected' from that which it is
representing, in the sense that the actual state of that represented
may deviate from the information represented. The degree of such
uncoupling is quantified by the conditional information relation
between the representation and the represented.
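The algorithmic definitions above can be written compactly. The following is my gloss rather than a formula from the text: K(.) denotes (prefix-free) Kolmogorov complexity, equalities hold up to logarithmic additive terms, and the threshold c for 'substantial' mutual information is left as a free parameter.

```latex
% Algorithmic mutual information between two bodies of data x and y:
\[ I(x\!:\!y) \;=\; K(x) + K(y) - K(x,y) \]
% Representation as a symmetric relation: x and y represent each other
% when they are not algorithmically independent, i.e. for some
% substantial threshold c,
\[ x \ \text{represents}\ y \iff I(x\!:\!y) \;\geq\; c \]
% The degree of uncoupling between a representation r and the
% represented s, per the conditional coupling paragraph, can then be
% tracked by the conditional complexity
\[ K(s \mid r) \]
```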
The Easier Approach
Here's the easy way to make sense of the above information
theoretic look at self models.
The self model is a body of information about an organism and its
environment and about the flow of changes between and within the
two. That body of information plays a central role in directing the
behaviour of the organism. Unlike the case of a modern digital
computer, in which a very sophisticated central processing unit
operates on what are usually comparatively simpler data structures,
the self model is a sophisticated data structure which affects the
operation of comparatively simpler (at the relevant level of
description) biological components. The conditional coupling of the
self model to the organism and the environment allows the organism
to derive information about counterfactual conditions in itself or its
environment. In other words, because it contains information about
the interactions of changes between and within the organism and the
environment, the self model can yield 'answers' to endogenously
generated questions about what would happen if current conditions
were other than they are. It is this capacity to test counterfactual
hypotheses which underlies what Popper famously described as a
creature's ability to allow its hypotheses to die in its stead.
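The counterfactual querying described above can be sketched in code. This is a toy illustration under my own assumptions, not the author's formal construction: the class name, state fields, and transition rules are all invented for the example. The point it demonstrates is only the Popperian one: the model rehearses an action internally, and the hypothesis, not the creature, bears the cost of a bad choice.

```python
import copy

class SelfModel:
    """Toy 'self model': a body of information about an agent and its
    environment, plus transition rules describing how changes flow
    between and within the two."""

    def __init__(self, agent_state, env_state):
        self.agent = dict(agent_state)   # model of the organism itself
        self.env = dict(env_state)       # model of its surroundings

    def step(self, state, action):
        """Transition rules: how a candidate action would change
        agent and environment together."""
        agent, env = copy.deepcopy(state)
        agent["energy"] -= 1                      # acting always costs energy
        if action == "eat" and env["food"] > 0:
            env["food"] -= 1
            agent["energy"] += 5
        elif action == "flee":
            env["predator_near"] = False
        return agent, env

    def imagine(self, action):
        """Counterfactual query: rehearse an action WITHOUT performing it.
        The copy made in step() temporarily 'uncouples' the model from
        the real state, so imagining leaves the agent unchanged."""
        return self.step((self.agent, self.env), action)

    def choose(self, actions):
        """Pick the action whose imagined outcome leaves the most energy,
        heavily penalising outcomes where a predator is still nearby."""
        def score(action):
            agent, env = self.imagine(action)
            return agent["energy"] - (100 if env["predator_near"] else 0)
        return max(actions, key=score)

model = SelfModel({"energy": 10}, {"food": 3, "predator_near": True})
print(model.choose(["eat", "flee"]))  # fleeing wins: the imagined predator outweighs the food
```

The deep copy in step() is the whole trick: the rehearsed future diverges from the modelled present without ever feeding back into it, which is exactly the temporary information-theoretic 'disconnection' that conditional coupling allows.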
Self models are notably absent in all current artificial systems of
which I am aware.
The capacities which self models subserve, however, are notably
abundant in the natural world. (Note that this does not necessarily
imply that self models themselves are similarly abundant; recall that
the existence of self models is itself an empirical hypothesis.)
Self Models: What Good Are They?
So suppose one has a self model. What good is it?
The principal advantages of the self model derive from the
Popperian notion described above: an organism may rehearse
alternative courses of action without actually engaging in them, it
may model the viewpoint of its conspecifics by modelling itself in
the physical position of another individual, and so on. Within a
selectionist context, the advantages of having a self model over not
having one are, ceteris paribus, straightforward. How about
robotics, and machine cognition more generally?
Creating machines with a sense of self is unexplored territory. The
machines we use today are under a sort of selective pressure, albeit
of a fairly bland sort. Sometimes, favourable price/performance ratios win
out, although installed base, software availability, and many other
factors combine to press one or another variety of computer to the
fore and send others to extinction. Will the cognitive sophistication
so important for organisms competing in the biological realm prove
useful in the artificial realm as well? Just how would a machine
with a sense of self stack up? How might it exploit its 'self
awareness' to get itself selected over the latest shiny PowerPC or
plain vanilla Pentium II?
One thing is almost certain: just as the appearance of biological
organisms with progressively richer notions of self radically
transformed the selective biological environments within which
they emerged, so too will the emergence of machines with a sense
of self radically transform the selective dynamics of the
marketplace. While they will initially occupy the commercial
equivalent of a novel and tiny environmental niche, I believe that
eventually they will largely displace today's 'dumb' boxes, rendering
them a quaint bit of nostalgia.
By analogy to the biological world, today's computers have yet to
climb out of the primordial silicon soup. It took Nature 3.8 billion
years to create us, but I believe it will soon become clear that we
can move just a little faster than that when it comes to pushing
machine cognition up the 'evolutionary' scale.
Since it can be easy to misinterpret some of the statements above
summarising the more careful and extended treatment of self models
offered in Mind Out of Matter, a separate short paper covers some
of the simpler objections.

Copyright © 2002-2009, Mulhauser Consulting, Ltd.