
ESCOM IPN

ARTIFICIAL INTELLIGENCE

The following is a summary of Artificial Intelligence: A Modern Approach, 3rd edition,
chapters 1-2.

INTELLIGENT AGENTS
An agent is something that acts (the word agent comes from the Latin agere, to do). But
computational agents have attributes that distinguish them from conventional programs:
they operate autonomously, perceive their environment, persist over long periods of time,
adapt to change, and are able to pursue different goals. A rational agent acts so as to
achieve the best outcome or, under uncertainty, the best expected outcome.
From the point of view of artificial intelligence, under the "laws of thought" approach,
all the emphasis is on making correct inferences. Making correct inferences can be part
of what it means to be a rational agent: one rational way to act is to reason that if
some action leads to a given objective, then that action should be taken. However,
correct inference is not all of rationality, because there are situations in which there
is no provably correct thing to do and a decision must still be made. There are also
ways of acting rationally that do not involve inference at all; for example, pulling
one's hand away from a hot surface is a reflex action, far faster than a careful,
deliberate response.
Achieving perfect rationality (always doing the right thing) is not feasible in complex
environments: the computational demands are too high. Instead, a notion of limited
rationality is adopted (acting appropriately when there is not enough time to do all
the computation one might like).
The idea is to establish a set of design principles for building successful agents,
systems that can reasonably be called intelligent.
We talk about agents, their environments, and the interaction between them. Some agents
achieve better results than others, even in the same environment; that is, they do
everything they can. How an agent should act depends strongly on the environment.

AGENTS AND THEIR ENVIRONMENTS


An agent is anything that can perceive its environment through sensors and act upon
that environment through actuators.
[Figure: an agent perceiving its environment through sensors and acting on it through actuators]

A human agent has eyes, ears, and other organs as sensors, and legs, hands, and other
body parts as actuators. A robotic agent might receive keystrokes, file contents, and
the like as sensory input. We assume that each agent can perceive its own actions, but
not always their effects.
The term percept is used in this context to refer to the agent's perceptual inputs at
any given instant. An agent's percept sequence is the complete history of everything
the agent has perceived up to that moment. If we can specify what action an agent
would take for every possible percept sequence, then we have said almost everything
there is to say about that agent.
In mathematical terms, an agent's behavior is described by the agent function, which
maps any given percept sequence to an action.
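As a minimal illustration, the agent function can be written out explicitly as a table
for small environments; AIMA sketches this as a table-driven agent. The sketch below is
a hypothetical Python rendering, using the book's two-square vacuum world; the percept
and action names are illustrative assumptions.

```python
# Hypothetical sketch of a table-driven agent: the agent function is an
# explicit lookup table from percept sequences to actions.
def make_table_driven_agent(table):
    percepts = []  # the percept sequence observed so far

    def agent_program(percept):
        percepts.append(percept)
        # Look up the action prescribed for the entire percept history.
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# A few entries for a hypothetical two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # -> Suck
```

The table grows without bound as the percept sequence lengthens, which is why practical
agent programs must produce the same behavior far more compactly.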

THE CONCEPT OF RATIONALITY


A rational agent is one that does the right thing, conceptually speaking. Obviously,
doing the right thing is better than doing the wrong thing, but what does it mean to
do the right thing? As a first approximation, the right thing is whatever allows the
agent to obtain the best outcome. Therefore a performance measure is needed.

PERFORMANCE MEASURES
A performance measure embodies the criterion for success. When an agent is placed in
an environment, it generates actions according to the percepts it receives from that
environment. These actions cause the environment to pass through a sequence of states.
If the sequence is desirable, the agent has performed well. Obviously, there is no
single performance measure suitable for all agents. We could ask an agent for its
opinion of its own performance, but many agents would be unable to answer and others
would delude themselves.
As a general rule, it is better to design performance measures according to what one
actually wants in the environment, rather than according to how one thinks the agent
should behave.
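A concrete example in the spirit of AIMA's vacuum world: award one point for each clean
square at each time step, summed over the whole history. The sketch below is a
hypothetical rendering of that measure, assuming each state is simply a list of
square statuses.

```python
# Hypothetical performance measure for a vacuum world: one point per clean
# square per time step, summed over the observed state sequence.
def performance(state_sequence):
    return sum(sum(1 for square in state if square == "Clean")
               for state in state_sequence)

# Usage: two time steps, two squares each.
print(performance([["Dirty", "Clean"], ["Clean", "Clean"]]))  # -> 3
```

Note that this rewards clean floors rather than, say, the amount of suction performed,
which matches the general rule above: measure what you want, not how you think the
agent should behave.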

RATIONALITY
Rationality at any given time depends on four factors:
1. The performance measure that defines the criterion of success
2. The agent's prior knowledge of the environment
3. The actions that the agent can perform
4. The agent's percept sequence to date

Based on these factors, the definition of a rational agent is: for each possible
percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence
and whatever built-in knowledge the agent has.

Omniscience is impossible: an agent may know the immediate result of its action on the
environment, but it cannot know the total impact, the influence of other agents, or
how the environment will respond.
Taking actions in order to modify future percepts is a process called information
gathering. A rational agent should not merely gather information; it should also learn
from what it perceives.
An agent should be autonomous, that is, able to compensate for incomplete or partial
initial knowledge. It is reasonable to give agents some initial knowledge together
with the ability to learn. After sufficient experience interacting with its
environment, the agent's behavior becomes effectively independent of its initial
knowledge.

TASK ENVIRONMENT
In specifying an agent, it is necessary to specify the performance measure,
environment, actuators, and sensors. Together these constitute the task environment;
specifying it should be the first step in agent design, and it should be done as
fully as possible.
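For instance, AIMA's automated-taxi example can be written out as such a specification
(often abbreviated PEAS: Performance, Environment, Actuators, Sensors). The sketch
below is a hypothetical rendering of it as a plain data structure.

```python
# Hypothetical PEAS specification, following AIMA's automated-taxi example.
taxi_task_environment = {
    "performance": ["safe", "fast", "legal", "comfortable trip",
                    "maximize profits"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators":   ["steering", "accelerator", "brake", "signal", "horn",
                    "display"],
    "sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "accelerometer", "engine sensors", "keyboard"],
}
```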

PROPERTIES OF TASK ENVIRONMENTS


The range of task environments that arise in AI is vast. However, it is possible to
identify a fairly small number of dimensions along which they can be categorized.

FULLY OBSERVABLE VS PARTIALLY OBSERVABLE


If an agent's sensors give it access to the complete state of the environment at each
point in time, then the task environment is fully observable. A task environment is
effectively fully observable if the sensors detect all aspects that are relevant to
the choice of action; relevance, in turn, depends on the performance measure. Fully
observable environments are convenient because the agent need not maintain any
internal state to keep track of the world. An environment might be partially
observable because of noisy or poor-quality sensors, or because parts of the state
are simply missing from the sensor data. If the agent has no sensors at all, the
environment is unobservable.

SINGLE AGENT VS MULTIAGENT


The distinction between single-agent and multiagent environments may seem simple
enough, but there are subtleties: sometimes an entity may be viewed as an agent and
sometimes it is better viewed as a mere object. Does an agent A have to treat an
entity B as an agent, or can B be treated merely as an object behaving according to
the laws of physics, analogous to waves at the beach? The key distinction is whether
B's behavior is best described as maximizing a performance measure whose value
depends on agent A's behavior. Thus, chess is a competitive multiagent environment,
while the exploration of a map by two or more agents is a cooperative multiagent
environment. Agent-design problems in multiagent environments are often quite
different from those in single-agent environments; for example, communication often
emerges as rational behavior in multiagent environments.

DETERMINISTIC VS STOCHASTIC

If the next state of the environment is completely determined by the current state and
the action executed by the agent, then the environment is deterministic; otherwise it
is stochastic. If the environment is deterministic except for the actions of other
agents, then it is strategic.

EPISODIC VERSUS SEQUENTIAL


In an episodic task environment, the agent's experience is divided into atomic
episodes. Each episode consists of the agent perceiving and then performing a single
action. Crucially, the next episode does not depend on the actions taken in previous
episodes: in episodic environments, the choice of action in each episode depends only
on the episode itself. In sequential environments, by contrast, the current decision
can affect all future decisions.

STATIC VS DYNAMIC
If the environment can change while the agent is deliberating, then the environment is
dynamic for that agent; otherwise it is static. Static environments are easy to deal
with because the agent need not keep watching the world while deciding on an action,
nor worry about the passage of time. A dynamic environment, by contrast, is
continuously asking the agent what it wants to do; if it has not decided yet, that
counts as deciding to do nothing.
If the environment itself does not change with the passage of time but the agent's
performance score does, then the environment is semidynamic.

DISCRETE VS CONTINUOUS
The distinction between discrete and continuous applies to the state of the
environment, to the way time is handled, and to the agent's percepts and actions.

AGENT STRUCTURE
So far we have talked about agents by describing behavior: the action that is
performed after any given sequence of percepts. Now we must bite the bullet and talk
about how the insides work. The job of AI is to design an agent program that
implements the agent function, the mapping from percepts to actions. We assume this
program will run on some sort of computing device with physical sensors and
actuators; we call this the architecture:

agent = architecture + program
The architecture might be just an ordinary PC, or it might be a robotic car with several
onboard computers, cameras, and other sensors. In general, the architecture makes
the percepts from the sensors available to the program, runs the program, and feeds
the program's action choices to the actuators as they are generated.
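A hypothetical sketch of that division of labor, assuming the (imagined) hardware
layer exposes `get_percept` and `execute` methods:

```python
# Hypothetical sketch: the architecture feeds percepts from the sensors to
# the agent program and passes the chosen actions to the actuators.
def run(architecture, agent_program, steps=1000):
    for _ in range(steps):
        percept = architecture.get_percept()  # read the sensors
        action = agent_program(percept)       # the agent program chooses
        architecture.execute(action)          # drive the actuators
```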
The agent program implements the agent function. There is a great variety of agent
program designs, varying in efficiency, robustness, and flexibility, and reflecting
the kind of information that is made explicit and used in the decision process.
Because of this, the appropriate design of the agent program depends in large part on
the nature of the environment. The basic classification by structure is:
1. Simple reflex agents, which respond directly to percepts.
2. Model-based reflex agents, which maintain internal state that allows them to keep
track of aspects of the world that are not evident in the current percept.
3. Goal-based agents, which act so as to achieve their goals.
4. Utility-based agents, which try to maximize their own expected utility.
Each kind of agent program combines particular components in particular ways to
generate actions.

SIMPLE REFLEX AGENTS


The simplest kind of agent is the simple reflex agent. These agents select actions on
the basis of the current percept, ignoring the rest of the percept history. Simple reflex
behaviors occur even in more complex environments.

[Figure: a simple reflex agent. Sensors report "what the world is like now";
condition-action rules determine "what action I should do now"; actuators carry it out.]

Note that the description in terms of "rules" and "matching" is purely conceptual;
actual implementations can be as simple as a collection of logic gates implementing a
Boolean circuit.
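As a minimal sketch (the rule set and percept format are hypothetical), a simple
reflex agent reduces to a rule lookup on the current percept alone:

```python
# Hypothetical sketch of a simple reflex agent: the chosen action depends
# only on the current percept, never on the percept history.
def make_simple_reflex_agent(rules, interpret_input):
    def agent_program(percept):
        state = interpret_input(percept)  # abstract the raw percept
        return rules.get(state, "NoOp")   # match a condition-action rule

    return agent_program

# Hypothetical vacuum-world rules: suck if dirty, otherwise move.
rules = {("A", "Dirty"): "Suck", ("B", "Dirty"): "Suck",
         ("A", "Clean"): "Right", ("B", "Clean"): "Left"}
agent = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
print(agent(("A", "Dirty")))  # -> Suck
```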
Simple reflex agents have the admirable property of being simple, but they turn out to
be of limited intelligence. The agent will work only if the correct decision can be
made on the basis of the current percept alone, that is, only if the environment is
fully observable. Even a little bit of unobservability can cause serious trouble.
Infinite loops are often unavoidable for simple reflex agents operating in partially
observable environments. Escape from infinite loops is possible if the agent can
randomize its actions; a randomized simple reflex agent might outperform a
deterministic simple reflex agent.

Randomized behavior of the right kind can be rational in some
multiagent environments. In single-agent environments, randomization is usually not
rational. It is a useful trick that helps a simple reflex agent in some situations, but in
most cases we can do much better with more sophisticated deterministic agents.
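One way to picture the randomization trick (purely illustrative; the parameter
`epsilon` and the fallback behavior are assumptions):

```python
import random

# Hypothetical sketch: a randomized simple reflex agent occasionally picks a
# random action, which can break the infinite loops that a deterministic
# reflex agent falls into under partial observability.
def make_randomized_reflex_agent(rules, actions, epsilon=0.1):
    def agent_program(percept):
        if random.random() < epsilon:
            return random.choice(actions)  # random escape move
        return rules.get(percept, random.choice(actions))

    return agent_program
```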

MODEL-BASED REFLEX AGENTS

The most effective way to handle partial observability is for the agent to keep track
of the part of the world it can't see now. That is, the agent should maintain some
sort of internal state that depends on the percept history and thereby reflects at
least some of the unobserved aspects of the current state.
Updating this internal state information as time goes by requires two kinds of
knowledge to be encoded in the agent program:
1. First, we need some information about how the world evolves independently of the
agent.
2. Second, we need some information about how the agent's own actions affect the
world.
This knowledge about "how the world works", whether implemented in simple Boolean
circuits or in complete scientific theories, is called a model of the world. An agent
that uses such a model is called a model-based agent.
The following figure gives the structure of the model-based reflex agent with internal
state, showing how the current percept is combined with the old internal state to
generate the updated description of the current state, based on the agent's model of
how the world works.
[Figure: a model-based reflex agent. Internal state, knowledge of how the world
evolves, and knowledge of what the agent's actions do are combined with the current
percept to estimate "what the world is like now"; condition-action rules then select
"what action I should do now".]

The interesting part is how the state is updated, that is, how the new internal-state
description is created. The details of how models and states are represented vary
widely depending on the type of environment and the particular technology used in the
agent design.
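AIMA gives pseudocode for this loop; a hypothetical Python rendering (the `model`
object and the rule structure are assumptions) might look like:

```python
# Hypothetical sketch of a model-based reflex agent. `model` encodes the two
# kinds of knowledge above: how the world evolves, and what the actions do.
def make_model_based_reflex_agent(rules, model, initial_state):
    state = initial_state
    last_action = None

    def agent_program(percept):
        nonlocal state, last_action
        # Combine the old state, the last action, and the new percept into
        # an updated best guess about the current state of the world.
        state = model.update_state(state, last_action, percept)
        last_action = rules.get(state, "NoOp")  # condition-action rule match
        return last_action

    return agent_program
```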

Regardless of the kind of representation used, it is seldom possible for the agent to
determine the current state of a partially observable environment exactly. Instead,
the box labeled "what the world is like now" represents the agent's "best guess" (or
sometimes best guesses).
A perhaps less obvious point about the internal "state" maintained by a model-based
agent is that it does not have to describe "what the world is like now" in a literal
sense. It can, for instance, reflect what would happen if the agent performed a
particular action.

GOAL BASED AGENTS


Knowing something about the current state of the environment is not always enough to
decide what to do. Sometimes a decision has to be made, and the correct decision
depends on what the agent is trying to accomplish. In other words, as well as a
current state description, the agent needs some sort of goal information that
describes situations that are desirable. The agent program can combine this with the
model (the same information as was used in the model-based reflex agent) to choose
actions that achieve the goal.
[Figure: a goal-based agent. The model is used to predict "what it will be like if I
do action A", and the goals determine which predicted situations are desirable before
"what action I should do now" is chosen.]
Sometimes goal-based action selection is straightforward, for example when goal
satisfaction results immediately from a single action. Sometimes it will be more
tricky, for example when the agent has to consider long sequences of twists and turns
in order to find a way to achieve the goal.

Notice that decision making of this kind is fundamentally different from the
condition-action rules described earlier, in that it involves consideration of the
future: both "What will happen if I do such-and-such?" and "Will that make me happy?"
In the reflex agent designs, this information is not explicitly represented, because
the built-in rules map directly from percepts to actions.
Although the goal-based agent appears less efficient, it is more flexible because the
knowledge that supports its decisions is represented explicitly and can be modified.
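A hypothetical sketch of the straightforward case, where the agent uses its model to
predict the result of each single action and picks one whose predicted state
satisfies the goal (the trickier multi-step case requires search, which AIMA covers
in later chapters):

```python
# Hypothetical sketch of one-step goal-based action selection: simulate each
# action with the model and choose one whose predicted outcome meets the goal.
def goal_based_action(state, actions, predict, is_goal):
    for action in actions:
        if is_goal(predict(state, action)):  # "what it will be like if I do A"
            return action
    return "NoOp"  # no single action reaches the goal; search would be needed
```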

UTILITY BASED AGENTS


Goals alone are not enough to generate high-quality behavior in most environments.
Goals just provide a crude binary distinction between "happy" and "unhappy" states. A
more general performance measure should allow a comparison of different world states
according to exactly how happy they would make the agent. Because "happy" does not
sound very scientific, economists and computer scientists use the term utility
instead.
A performance measure assigns a score to any given sequence of environment states, so
it can easily distinguish between more and less desirable ways of achieving a goal. An
agent's utility function is essentially an internalization of the performance measure.
If the internal utility function and the external performance measure are in
agreement, then an agent that chooses actions to maximize its utility will be rational
according to the external performance measure.
A utility-based agent has many advantages in terms of flexibility and learning.
Furthermore, in two kinds of cases, goals are inadequate but a utility-based agent can
still make rational decisions. First, when there are conflicting goals, only some of
which can be achieved (for example, speed and safety), the utility function specifies
the appropriate tradeoff. Second, when there are several goals that the agent can aim
for, none of which can be achieved with certainty, utility provides a way in which
the likelihood of success can be weighed against the importance of the goals.
Partial observability and stochasticity are ubiquitous in the real world, and so,
therefore, is decision making under uncertainty. Technically speaking, a rational
utility-based agent chooses the action that maximizes the expected utility of the
action outcomes, that is, the utility the agent expects to derive, on average, given
the probabilities and utilities of each outcome. An agent that possesses an explicit
utility function can make rational decisions with a general-purpose algorithm that
does not depend on the specific utility function being maximized. In this way, the
"global" definition of rationality (designating as rational those agent functions
that have the highest performance) is turned into a "local" constraint on
rational-agent designs that can be expressed in a simple program.
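In symbols, the agent picks the action a maximizing the sum over outcomes s' of
P(s' | a) * U(s'). A hypothetical sketch of that general-purpose decision rule, with
the outcome model and utility function assumed to be given:

```python
# Hypothetical sketch of expected-utility maximization: a general-purpose
# decision rule that works for any specific utility function.
def best_action(actions, outcomes, utility):
    def expected_utility(action):
        # outcomes(action) is assumed to yield (probability, state) pairs.
        return sum(p * utility(state) for p, state in outcomes(action))

    return max(actions, key=expected_utility)
```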

The utility-based agent structure appears in the following figure.
[Figure: a utility-based agent. The model predicts "what it will be like if I do
action A", and the utility function evaluates "how happy I will be in such a state"
before "what action I should do now" is chosen.]

It is true that such agents would be intelligent, but building them is not simple. A
utility-based agent has to model and keep track of its environment, tasks that have
involved a great deal of research on perception, representation, reasoning, and
learning. Choosing the utility-maximizing course of action is also a difficult task,
requiring ingenious algorithms. Even with these algorithms, perfect rationality is
usually unachievable in practice because of computational complexity.

REFERENCES:
1. Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach, 3rd
edition, chapters 1-2.
