
Original Intent

Part 1
Auckland University of Technology
5/10/2011

Monish Mohan
0788847

Introduction

In this part I discuss the human mind before reviewing the term original intent, which has a deeper meaning than thought alone. The human mind is the most complex tool we know of, given that it is the best example we have of intelligence with intentionality. This is achieved through large structures of neural networks that are able to interpret sensory data effectively in order to understand and predict the perceived environment, which is mostly the external world. Intelligence in nature evolved through natural selection: it developed together with a mobile mechanical body programmed with survival instincts, so that subconscious computational processes could learn to extract meaning from the flow of sensory modalities and bind the internal simulation architecture to the physics and objects of the real world (Siekmann, 2007). Intelligence is something that develops through interaction with the environment, much as a human infant learns to interpret 2D visual images as 3D objects when exposed to the outside world. Though this process takes place in the mind, it is significantly aided by muscular feedback, mobility and other sensory modalities, together with features inherited from the parents. The human organism is only one side of the coin, the other being the environment we live in; moreover, intelligence is but one aspect of a complex set of processes involved in biological existence (Siekmann, 2007).

Original Intent

Original intent, intentionality and intentional are all related to thoughts or mental states, whereas intentions are a course of action a person intends to follow. The term intentionality has both an ordinary sense and a philosophical sense. The ordinary sense concerns actions that can be intentional, while the philosophical sense is concerned with the idea that certain mental states are representational, or about other states of affairs (Allen, 2001). Many mental states are about something, such as objects or events in the world: a belief about a myth, a desire for food, or anger with a friend. In all these cases the mind is directed towards an object. This idea of directedness is known as intentionality: a thought or mental state aims at its object, what it is about, and no action or doing is required (Lacewing). For me, intentionality is about having a thought of something in mind, being aware of something, or feeling or perceiving something with the mind. All of this takes place in the mind, where we can target, visualize, feel, analyze or intend an object, whether it is present or absent. There are primitive and conceptual intentionality, and objective and subjective intentionality, as Galen Strawson states. He distinguishes between them with the example of a baby that has intentionality when it comes to consciousness in the womb, for it is undoubtedly aware of things; it has objective intentionality, that is, it really does have experience of things (Strawson, 2004). Intentional states therefore represent the world in particular and partial ways; it is like seeing something from a particular aspect: you can see it, but not all of it. Mental states can differ while having the same intentional content, because we take different attitudes to that content. For example, I can believe I am arriving late; I can want to be arriving late; I can fear I am arriving late; I can be pleased I am arriving late.
An intentional state, then, comprises a particular attitude or mode towards a particular intentional content (Lacewing).
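To make this attitude/content distinction concrete, the small sketch below (my own illustration, not taken from Lacewing) represents an intentional state as a pair of an attitude and a content, so the same content can appear under several different attitudes, as in the arriving-late example above.

```python
# Illustrative only: an intentional state modelled as an (attitude, content) pair.
from dataclasses import dataclass

@dataclass
class IntentionalState:
    attitude: str   # the mode: believe, want, fear, be pleased...
    content: str    # what the state is about

content = "I am arriving late"
states = [IntentionalState(a, content)
          for a in ("believes", "wants", "fears", "is pleased")]
for s in states:
    # Same intentional content, different attitude taken towards it.
    print(f"the subject {s.attitude} that: {s.content}")
```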
Many present-day philosophers introduce experienceless entities such as robots, pictures, computers and books when they talk about the problem of intentionality, claiming that such things can be in intentional states or have intentionality even if they are not mental beings. This is a very new topic for those unfamiliar with the current debate, including me, but the link is made as follows. First, we naturally say that such experienceless entities are about or of things, or are in states that are about or of things. Second, it can also seem natural to say that the problem of intentionality is nothing other than the problem of how natural phenomena can be about things or of things. Intentionality is thus equated with aboutness, and the conclusion that experienceless entities can have intentionality follows immediately (Strawson, 2004).

Existing Works

Some existing work on original intent, or let us say intentional systems, has been done by Daniel Dennett and John Searle. Over the last few decades they have debated the existence of intrinsic intentionality: Dennett denies that phenomena with intrinsic intentionality exist, whereas Searle is convinced that some mental phenomena exhibit it. In my view, this discussion has been obscured by serious misunderstandings about the concept of intrinsic intentionality. For instance, most philosophers fail to realize that the intentionality of a phenomenon may be partly intrinsic and partly observer-relative. Moreover, many philosophers mix up the concepts of original intentionality and intrinsic intentionality; in fact there is, in the philosophical literature, no strict and unambiguous definition of intrinsic intentionality (Moer, 2006). Dennett answers the question of whether original intentionality and intrinsic intentionality are the same thing using an indirect approach, by pursuing various attempts to draw a sharp distinction between the way our minds have meaning and the way other things do (Dennett, 1988). In his example he talks about an encyclopedia, which has derived intentionality: it contains information about thousands of things in the world, but it is intended for our use. He then discusses the idea of automating the encyclopedia by putting all its data into a computer and turning its index into the basis for an elaborate question-answering system. This way we no longer have to look up material in the volumes; we just type in questions and receive answers. Though it may seem as if users are communicating with another person, another entity endowed with original intentionality, at the end of the day we still know better. So I agree with Dennett that a question-answering system is still just a tool, and whatever meaning or aboutness resides in it is just a by-product of our practices in using the device to serve our own goals. It has no goals of its own, except for the artificial and derived goal of "understanding" and "answering" our questions correctly (Dennett, 1988).

Related work on intentionality is the intentional agent architecture, where the agent is characterized with mental attributes commonly used to characterize people. Though this is still derived intentionality, it is a start towards original intentionality. The agent uses beliefs, desires and intentions to direct its actions and to select plans for achieving its goals, and it works as an interpreter that operates on representations of those mental attitudes. The agent's beliefs are predicate logic statements evaluated in binary logic.
I think the agent is thus limited, since its beliefs cannot be anything other than true or false. Desires are a set of goal conditions the agent wants to achieve, and because the agent has limited computational resources it has to choose intentions, the states of affairs it has decided to pursue. In this agent, intentions are decisions comprising the goal and the actions to take (Subagdja & Tan, 2009). The agent's intentions are selected from progressively lower levels of abstraction until it arrives at one that can be executed, which is not a bad idea given that it has to work through every abstraction level before it executes. Though this takes time, it still works to some point.
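The minimal Python sketch below is my own illustration of such a belief-desire-intention interpreter cycle, not the architecture Subagdja and Tan actually built; the predicate names, goals and plans are hypothetical. It only shows how binary-valued beliefs can gate which desire is adopted as the single current intention and which plan is then executed.

```python
# Minimal, illustrative BDI-style interpreter loop.
# The predicates, goals, and plans are hypothetical examples,
# not the architecture described in (Subagdja & Tan, 2009).

class BDIAgent:
    def __init__(self):
        self.beliefs = set()   # binary-valued predicates: present means "true"
        self.desires = []      # (goal, precondition) pairs the agent would like to hold
        self.plans = {}        # goal -> ordered list of primitive actions

    def perceive(self, facts):
        """Update beliefs from (simulated) sensors; beliefs are just true or false."""
        self.beliefs.update(facts)

    def deliberate(self):
        """Commit to one desire as the current intention (limited resources:
        the agent pursues a single goal at a time)."""
        for goal, precondition in self.desires:
            if precondition <= self.beliefs and goal not in self.beliefs:
                return goal
        return None

    def step(self):
        """One interpreter cycle: deliberate, then execute the chosen plan."""
        intention = self.deliberate()
        if intention is None:
            return
        for action in self.plans.get(intention, []):
            print(f"executing: {action}")
        self.beliefs.add(intention)   # assume the plan achieved the goal


# Hypothetical usage: a courier agent that delivers a parcel.
agent = BDIAgent()
agent.perceive({"at_depot", "has_parcel"})
agent.desires = [("parcel_delivered", {"has_parcel"})]
agent.plans = {"parcel_delivered": ["drive_to_customer", "hand_over_parcel"]}
agent.step()
```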
How do you create a machine with original intent? This is a hard question, given that original intentionality must be something genuinely original, not borrowed from somewhere else; it cannot all be derivative. The problem for mind design is that artificial intelligence systems have derivative intentionality, borrowed from their designers or users, which means it is not original, and so we come back to the meaning of original intent. If designing and building a system with a mind of its own ever succeeds, this will open up the possibility of an artificial system having genuine original intentionality, like us. This raises the question of whether it is possible at all. Intentionality poses this challenge: how is it possible for anything physical to have the property of intentionality? Physical things are never about anything. To say what it is for a physical thing or state to be the thing or state that it is does not require reference to something else; a particular molecular structure or physical process, described in these terms, is not about anything. However, the states and processes of your brain are just chemical states and processes, so how could they ever be about anything? Dennett's approach implies that rational engagement is more pertinent to whether intentionality is original than any question of natural or artificial origin. An intentional system, in his view, is one that exhibits an appropriate pattern of consistently rational behaviour by actively engaging with the world. So if an artificial system behaves on its own in a rational manner, consistently enough and in a suitable variety of circumstances, then it has original intentionality, a mind of its own. Designing and building a mind with intentionality, though, is far more challenging. Current artificial intelligence is more like simulated intelligence, and the difference between intentionality and simulated intentionality is consciousness. Without consciousness, a series of functional interactions remains meaningless to the robot even if it looks meaningful to us. Intentional states are meaningful from the inside; they are meaningful to the things that have them. So without consciousness meaning is lacking and, therefore, so is intentionality (Lacewing).

About Consciousness

There are only a few studies connecting artificial intelligence to consciousness studies, because most researchers in this field work on better defined, less controversial problems. Consciousness stems from the structure of the self-models that intelligent systems use to reason about themselves, which is a core function towards original intent. A creature's models of itself are like its models of other systems, except for some characteristic indeterminacy about what counts as accuracy. In order to explain how an information-processing system can have a model of something, there must be a prior notion of intentionality that explains why and how symbols inside the system can refer to things (McDermott, 2007). McDermott states that the next step is to show that the model will almost certainly contain ways of thinking about how the system's senses work. The difference between appearance and reality arises at this point, and allows the system to reason about its errors in order to reduce the chance of making them.
However, the self-model also serves to set boundaries to the questions that it can answer. McDermott thinks that sensory qualia are a useful way of cutting off useless introspection about how things are ultimately perceived and categorized.
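As a purely illustrative sketch (my own assumption, not McDermott's actual proposal), the fragment below gives an agent a self-model of its own sensor, so that "how things appear" (the raw reading) and "how things probably are" (the corrected estimate) are represented separately, and implausible readings can be attributed to perceptual error rather than to the world.

```python
# Illustrative sketch: an agent whose self-model includes a description of its
# own sensor, so appearance and estimated reality can come apart.

class SelfModellingAgent:
    def __init__(self, believed_bias, expected_range):
        self.believed_bias = believed_bias    # self-model: "my sensor reads high by this much"
        self.expected_range = expected_range  # self-model: "plausible values in the world"

    def estimate(self, raw_reading):
        """Correct the appearance into an estimate of reality using the self-model."""
        return raw_reading - self.believed_bias

    def suspicious(self, raw_reading):
        """Flag readings the self-model says are probably perceptual errors."""
        low, high = self.expected_range
        return not (low <= self.estimate(raw_reading) <= high)


# Hypothetical usage: a temperature-sensing agent.
agent = SelfModellingAgent(believed_bias=2.0, expected_range=(-10.0, 45.0))
appearance = 24.5                              # what the sensor reports
print(agent.estimate(appearance))              # 22.5: what the agent takes to be the case
print(agent.suspicious(appearance))            # False: a plausible perception
print(agent.suspicious(99.0))                  # True: attributed to sensor error, not the world
```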
He then raises the question of where consciousness stands between those who believe that the just-so story the self-model tells its owner is all you need to explain phenomenal consciousness, and those who think that something more is needed. I agree with his point that we will not be able to create such systems and test hypotheses against them in the foreseeable future, because real progress on creating conscious programs awaits further progress in enhancing the intelligence of robots. There is no guarantee that AI will ever achieve the requisite level of intelligence, which has been one of the hardest things to achieve in the field, so until there is further progress, research on consciousness risks being wasted effort. Consciousness is only marginally relevant to artificial intelligence, because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing this or any other computational theory a priori, but no such theory can be verified until and unless AI succeeds in finding computational solutions to difficult problems such as vision, language and locomotion (McDermott, 2007).

So for a system to have intentionality it needs some basic understanding of its surroundings and the ability to learn; a human infant, for instance, is born with a system that enables it to learn. As the other authors mentioned, this remains an impossible task as long as we are the ones creating the system, which would have so-called derived intent in technical terms, similar to the inherited features that have been passed on to us from our ancestors. The system could, however, have the ability to build on from this point and form new original thoughts. This research has given me an idea of how we can move forward by building on specific ideas that could make such a system possible.


References
Allen, C. (2001, February 26). Intentionality: Natural and Artificial. Texas, USA.

Dennett, D. C. (1988). Evolution, Error and Intentionality. New Mexico: Sourcebook on the Foundations of Artificial Intelligence.

Haugeland, D. C. (2004, April 20). Intentionality. Retrieved April 23, 2011, from www.comlab.ox.ac.uk/activities/ieg/elibrary/sources/intentionality.pdf

Haugeland, J. (1997). Mind Design II: Philosophy, Psychology, Artificial Intelligence. London, England: A Bradford Book.

Lacewing, M. (n.d.). Intentionality and artificial intelligence. Retrieved April 24, 2011, from http://cw.routledge.com/textbooks/philosophy/downloads/a2/unit3/philosophy-mind/IntentionalityArtificialIntelligence.pdf

McDermott, D. (2007). Artificial Intelligence and Consciousness. In P. D. Zelazo, M. Moscovitch, & E. Thompson (Eds.), The Cambridge Handbook of Consciousness (pp. 117-150). Cambridge University Press.

Moer, A. V. (2006). The Intentionality of Formal Systems. Foundations of Science, 30.

Siekmann, D. M. (2007). Cognitive Technologies. Rockville: Springer.

Strawson, G. (2004). Real intentionality. Phenomenology and the Cognitive Sciences (pp. 287-313). Netherlands: Kluwer Academic Publishers.

Subagdja, B., & Tan, A.-H. (2009). A Self-Organizing Neural Network Architecture. Budapest, Hungary: International Conference on Autonomous Agents and Multiagent Systems.
