By Andrés Monroy-Hernández
Society of Mind, Spring 2006
Professor Marvin Minsky
Introduction
In this paper I try to unpack Understanding using the framework and style with
which Consciousness is unpacked in Chapter IV of The Emotion Machine [1]. The
relevance of this analysis for Artificial Intelligence is exemplified by the
fierce debate over John Searle’s Chinese Room Argument [2]. The topic is also a
focus of study for educators, particularly those interested in a pedagogy
centered on teaching for understanding, as eloquently articulated by David
Perkins. My goal is to bring together different perspectives on the subject to
better dissect it or, as Minsky would put it, to unpack the suitcase of
Understanding.
Definitions are not of much help either. Short ones, such as “mental grasp”
[3], are too broad, while more specific ones fail to cover all of the word’s
uses: “a psychological process related to an abstract or physical object […]
whereby one is able to think about it and use concepts to deal adequately with
that object” [4].
If we are to design a machine that does what our minds do, we must identify
the elements that constitute human understanding. David Perkins and other
educators, in their effort to develop better pedagogies, have identified
different constituents of understanding. Perkins starts by distinguishing
knowledge from understanding: “A comparison between knowing and
understanding underscores the mysterious character of understanding. […] The
mystery boils down to this: Knowing is a state of possession […] But
understanding somehow goes beyond possession. […] we have to be clearer
about that ‘beyond possession.’” [8] To go beyond possession, one must show
enablement: the capacity to do something with that knowledge, whether or not
that capacity is observable by others.
Multirepresentation
"Understanding means seeing that the same thing said different ways is
the same thing." --- Ludwig Wittgenstein
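As a small illustration of the quote, the sketch below maps several surface
forms of “one half” to a common canonical form and checks that they coincide.
The representations chosen are my own examples, not drawn from the sources
cited in this paper.

    from fractions import Fraction

    # The "same thing" (one half) said four different ways.
    representations = [
        Fraction(1, 2),   # a ratio
        0.5,              # a decimal
        "50%",            # a percentage, as text
        Fraction(2, 4),   # an unreduced fraction
    ]

    def canonicalize(r):
        """Map each surface form to a single canonical representation."""
        if isinstance(r, str) and r.endswith("%"):
            return Fraction(r[:-1]) / 100
        return Fraction(r)

    # Seeing that they are all the same thing:
    print(all(canonicalize(r) == Fraction(1, 2) for r in representations))  # True

On this toy reading, understanding includes the ability to recognize identity
beneath differing surface forms.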
False positives
As mentioned before, it seems that oftentimes our “understanding recognizer”
makes erroneous detections, since it sometimes reverses its previous verdicts.
For example, after reading a few pages from a book we may think we have
understood them, only to change our mind later and realize we had not really
understood. A few possible reasons for this:
1. Our “understanding recognizer” resource takes time to develop. Perhaps the
   “understanding recognizer” develops only as much as it is needed and no
   more. Therefore in childhood, when we are just starting to experience the
   world, we might experience false positives more often.
2. The “understanding recognizer” incorrectly decides which levels of the
   understanding model it needs to detect in order to accomplish a goal. This
   is the idea that the “understanding recognizer” looks for the activation of
   different levels depending on a predicted goal.
For instance, for a student used to merely memorizing data and procedures to
accomplish goals, the “understanding recognizer” might be triggered when only
level one is active. For a different student, whose previous experiences
required all four levels to accomplish goals, the “understanding recognizer”
will be triggered only when all four levels are active. This might depend on
familiarity with the topic, previous experiences, and age, which is why
children and novices in a field might have similar experiences with false
positives. Perhaps one of the goals of education should be to teach which
levels of understanding to turn on at any given point. The sketch below
illustrates this goal-dependent triggering.
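Here is a minimal sketch of how such a recognizer might be wired, assuming the
four-level model described earlier; the goal names and their level profiles
are placeholders of my own, not part of the model.

    # Which levels a predicted goal is assumed to require (hypothetical).
    GOAL_PROFILES = {
        "recall_facts": {1},                 # rote memorization suffices
        "solve_new_problem": {1, 2, 3, 4},   # demands all four levels
    }

    def understanding_recognized(active_levels, predicted_goal):
        """Fire when every level the predicted goal requires is active."""
        return GOAL_PROFILES[predicted_goal] <= set(active_levels)

    # A student whose goals never demanded more than level one reports
    # "understood!" even when the deeper levels are inactive -- a false
    # positive relative to the harder goal.
    active = {1}
    print(understanding_recognized(active, "recall_facts"))       # True
    print(understanding_recognized(active, "solve_new_problem"))  # False

On this toy account, a false positive is simply a recognizer whose goal
profile demands fewer levels than the situation actually requires.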
Some people might contend that an analysis of understanding would not be
complete without mentioning one of the arguments that has inspired the most
intense debate in the Artificial Intelligence community: the Chinese Room
Argument. The argument is a Gedankenexperiment, or thought experiment,
proposed by John Searle and based on five well-known ideas:
1. Turing Machines, the mathematical representation of a symbol-
manipulating device.
2. Algorithm, the mathematical description of a program.
3. Church’s Thesis, the idea that any algorithm can be implemented in a
Turing machine.
4. Turing’s Theorem, the idea that a Universal Turing Machine is capable of
simulating any Turing Machine (a toy simulation sketch follows this list).
5. Turing Test, originally proposed by Alan Turing “in order to replace the
emotionally charged (and for him) meaningless question ‘Can machines
think?’ with a more well-defined one.” [10]
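Since ideas 1 and 4 do the heavy lifting in what follows, a toy sketch may
help. The routine below runs any Turing machine given only its transition
table, which is the sense in which a single “universal” routine can simulate
them all; the example machine is my own invention.

    def simulate(table, tape, state="start", head=0, blank="_", halt="halt"):
        """Run the Turing machine described by `table` on input `tape`.

        `table` maps (state, symbol) -> (new_state, new_symbol, move),
        where move is -1 (left), 0 (stay), or +1 (right).
        """
        cells = dict(enumerate(tape))  # sparse tape; blank everywhere else
        while state != halt:
            symbol = cells.get(head, blank)
            state, cells[head], move = table[(state, symbol)]
            head += move
        span = range(min(cells), max(cells) + 1)
        return "".join(cells.get(i, blank) for i in span)

    # Example machine: flip every bit, halting at the first blank.
    flipper = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", 0),
    }
    print(simulate(flipper, "1011"))  # -> 0100_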
With those ideas in mind, Searle asks us to imagine that “a monolingual
English speaker who is locked in a room with a set of computer rules for
answering questions in Chinese would in principle be able to pass the Turing
Test, but would not thereby understand a word of Chinese. If the man doesn’t
understand Chinese, neither does any digital computer.” [9]
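The force of the scenario is easier to see in code. Below is a deliberately
crude sketch in which the “rule book” is nothing but string matching; the
questions, replies, and their pairings are invented stand-ins.

    # The rule book pairs symbol strings with symbol strings; nothing in
    # the program refers to what any symbol means.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",       # "How are you?" -> "Fine, thanks."
        "你会说中文吗？": "会，一点点。",   # "Do you speak Chinese?" -> "A little."
    }

    def room(question: str) -> str:
        # Pure symbol manipulation: match shapes, emit shapes.
        return RULE_BOOK.get(question, "请再说一遍。")  # "Please repeat that."

    print(room("你好吗？"))  # 我很好，谢谢。

However large the table, or however sophisticated the rules that replace it,
the program manipulates only the shapes of symbols; whether that could ever
amount to understanding is exactly what the argument disputes.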
A first reaction to this experiment might be to note the weight Searle places
on the Turing Test as a way to determine whether a machine can understand
Chinese. In his defense, however, he later states the formal structure of his
argument as follows:
A. Programs are syntactical.
B. Minds have semantic contents.
C. Syntax is not sufficient for semantics.
D. Therefore, programs are not minds.
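Granting the premises, the deduction itself is formally valid, at least under
a strong reading on which “syntactical” means purely syntactic and C says that
purely syntactic things lack semantic content. A schematic rendering in Lean,
with uninterpreted placeholder predicates of my own, shows the shape:

    -- Schematic form of Searle's A-D with placeholder predicates.
    -- Reading: Syntactic = "purely syntactic"; hC is the strong reading
    -- of C, on which purely syntactic things lack semantic content.
    variable {Thing : Type}
    variable (Program Mind Syntactic Semantic : Thing → Prop)

    theorem searle_schema
        (hA : ∀ x, Program x → Syntactic x)      -- A
        (hB : ∀ x, Mind x → Semantic x)          -- B
        (hC : ∀ x, Syntactic x → ¬ Semantic x)   -- strong reading of C
        : ∀ x, Program x → ¬ Mind x :=           -- D
      fun x hp hm => hC x (hA x hp) (hB x hm)

On a weaker reading of C, under which syntax merely fails to guarantee
semantics, D would not follow; much of the debate turns on which reading the
thought experiment actually supports.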
There are a number of replies to the Chinese Room Argument, some of which, I
believe, are unconvincing. For example, the “robot reply” proposes that if we
replace the room with a robot that walks around and interacts with the
environment, as a human does, it would show understanding of the things it
does. Other responses, like the “systems reply,” are more convincing: it
states that it is not the man inside the room who understands Chinese, but the
system as a whole.
It is important to note that Searle does not deny the possibility that
machines can do what human minds do. What he does not believe is that a
computer program, or for that matter a Turing Machine, can achieve it. He
says: “only a machine could think, and only very special kinds of machines,
namely brains and machines with internal causal powers equivalent to those of
brains. And that is why strong AI has little to tell us about thinking, since
it is not about machines but about programs, and no program by itself is
sufficient for thinking.” [2]
As stated before, one of the biggest issues with this type of discussion is
the use of the word understanding itself: it leads to a lengthy debate over a
word that encapsulates multiple other processes, without examining each of
them separately. Unfortunately, Searle also puts emphasis on yet another
suitcase word: intentionality. He says: “Any attempt literally to create
intentionality artificially (strong AI) could not succeed just by designing
programs but would have to duplicate the causal powers of the human brain.”
[2]
Conclusion
The model of understanding presented in this paper gives importance not only
to the programs, but also to the knowledge and the relationships between
elements at the four different levels. It is my hope that it also hints at the
quantity of those elements needed to achieve each level.
The Chinese Room Argument debate helps us conclude that the question to ask is
not whether a program can “understand or not,” but rather: what type of
processes would a “very special kind of machine” need in order to achieve what
the human mind achieves? I think The Emotion Machine addresses that question,
and I hope the model of understanding presented here contributes by tackling
one of those processes.
References
[1] Minsky, M. (in preparation). The Emotion Machine.
[2] Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain
Sciences 3 (3): 417-457.
[3] Merriam-Webster Online. http://www.m-w.com/
[4] Wikipedia. http://en.wikipedia.org
[5] Leake, J. (2006). Dolphins ‘know each other’s names’. The Sunday Times,
May 6, 2006. http://www.timesonline.co.uk/article/0,,2087-2168604,00.html
[6] Janik, V. M., Sayigh, L. S., & Wells, R. S. (in press). Signature whistle
shape conveys identity information to bottlenose dolphins. Proceedings of the
National Academy of Sciences USA.
[7] Truth Journal. http://www.leaderu.com/truth/2truth03.html
[8] Perkins, D. (1992). Smart Schools: Better Thinking and Learning for Every
Child. Chapter 4: Towards a Pedagogy of Understanding, pp. 73-95.
[9] Searle, J. R. (1998). The Philosophy of Mind (Course Guide and Audio),
Lectures Three and Four. The Teaching Company.
[10] Turing Test. Wikipedia. http://en.wikipedia.org/Turing_test