
TME 2073 Intelligent Systems
Intelligent Agents

1
Learning Objectives
Able to understand what agents and intelligent agents are and how they
operate within their environment
Given a problem or situation, the student should be able to:
 Identify the percepts available to the agent and
 the actions that the agent can execute
Able to understand the different types of agents in AI systems
Understand the performance measures used to evaluate an agent
Understand the definition of a rational agent
Be familiar with different types of agent architectures and recommend
the architecture of the desired agent
Identify characteristics of the environment
2
Keys

 Agent Definition
 Intelligent Agent
 Software Agent
 Physical Agent
 Rational Agent
 Types of Intelligent Agents
 Different Types of Environments
 Task Environments – PEAS
 Different Types of Agent Architectures
3
4
• AI has come a long way, but it still needs human help.

• Neither humans nor AI is perfect. At least not yet.
5
What is an Agent?
• AI in Games

[Diagram: a game agent playing against you — your move goes to the agent; it responds with its own move]
6
What is an Agent?

[Diagram: the AGENT perceives the ENVIRONMENT through sensors and acts on it through actuators]
7
What is an Agent?
• AI in Economy

[Diagram: a trading agent reads rates/news and trades on markets for stocks, commodities, and bonds]
8
What is an Agent?
• AI in Medicine

[Diagram: a diagnostic agent takes symptoms, blood pressure, and heart signals from you and returns diagnostics to doctors]
9
What is an Agent?
• AI in Web

[Diagram: a web crawler agent takes keywords, searches the World Wide Web for matching web pages, and returns results]
10
Agent & Intelligent Agent
 What is an Agent?
An entity that can be viewed as perceiving its environment
through sensors and acting upon its environment through
effectors [Russell & Norvig, 1995]

 What are Intelligent Agents?
Software entities that carry out some set of operations on
behalf of a user or another program with some degree of
independence or autonomy, and in doing so, employ some
knowledge or representation of the user's goals or desires
[IBM White Paper, 1995]
11
Agent
• Operates in an environment
• Perceives its environment through sensors
• Acts upon its environment through actuators/effectors
• Has goals
• Implements a mapping from percept sequences to actions (sketched below)
• Autonomous agent: decides autonomously which action to take
in the current situation to maximize progress towards its goals.
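The mapping from percept sequences to actions can be made concrete with a small sketch. This is a minimal illustration, assuming hypothetical names (Agent, perceive, act) rather than any library API:

```python
# Minimal sketch of an agent as a mapping from percept sequences to actions.
# All names here are illustrative, not from a specific library.

class Agent:
    def __init__(self):
        self.percept_history = []   # percept sequence observed so far

    def perceive(self, percept):
        """Record the latest percept from the environment's sensors."""
        self.percept_history.append(percept)

    def act(self):
        """Map the percept sequence to an action (to be overridden)."""
        raise NotImplementedError

class EchoAgent(Agent):
    def act(self):
        # Trivial policy: react to the most recent percept only.
        return f"respond-to-{self.percept_history[-1]}"

agent = EchoAgent()
agent.perceive("obstacle-ahead")
print(agent.act())   # respond-to-obstacle-ahead
```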

12
Sensors and Effectors
• An agent perceives its environment through sensors
• The complete set of inputs at a given time is called a percept
• The current percept, or a sequence of percepts, can influence the
actions of an agent
• It can change the environment through effectors
• An operation involving an actuator is called an action
• Actions can be grouped into action sequences

13
Performance
• Behaviour and performance of intelligent agents are described in terms
of the agent function.
 Perception history (sequence) to action mapping
 Ideal mapping: specifies which actions an agent ought to take
at any point in time.
• Performance measure: a subjective measure to
characterize how successful an agent is (e.g., speed,
power usage, accuracy, money)

14
Example of Agents
Example:
i. Travel Agency
Task: Booking a flight ticket & facilities in order to go for a vacation overseas.
Initial goal: Go for Umrah in Mecca, Saudi Arabia
Acts independently, booking everything (flights to and fro, hotel, transportation,
etc.) – needs to communicate with other agents & recommend anything (related
to Umrah tasks) where appropriate

ii. Shopping Agent
Goal: Buy some carrots
Acts independently, communicates with other agents, and modifies what it
is doing in response to what it sees and hears

15
Intelligent Agent
 Software Agents
 Operate within the confines of a computer / computer network.

 Physical Agents
 e.g. robots
 Operate in the physical world & can perceive & manipulate objects.

16
Designing Intelligent Agents
 Techniques:
 Planning: what to do

 Natural Language Processing: communicate with the user

 Expert Systems: solve specialized problems

 Knowledge Representation: represent required knowledge

 Vision: to make sense of the physical environment

 Machine Learning: the agent can adapt & improve its behavior

17
Software Agents

 An independent software component which (typically) provides
support for a user of a computer system

Example:
Personal assistants to filter mail, find useful docs., schedule
meetings & do shopping.
 The user delegates responsibility for some of their routine tasks to the agent,
ensuring the tasks are carried out.
 Properties:
 Autonomous
 Interacts with other agents & the environment
 Reactive to the environment
 Proactive (goal-directed)

18
Autonomy
 A system is autonomous to the extent that its own behavior is
determined by its own experience & knowledge.

 To survive, agents must have:
 enough built-in knowledge to survive
 the ability to learn

19
Software Agents
 Several kinds of software agents:
 Mail Handling Agents
 Information Agents
 Agent-based Interfaces

20
Mail Handling Agents
 Automatically filter & classify mail
 Need some knowledge of the roles & interests of the people involved
 A simple system allows each person to enter rules which specify how
mail should be treated;

Example:
IF subject-line includes "TME2073" THEN put into "TME2073Folder"

 Might use a forward-chaining inference engine to process the rules &
deal with the messages (a minimal sketch follows below)
 Techniques:
 NLP
 ML
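A minimal sketch of how such a rule could be processed, assuming a hypothetical (keyword, folder) rule representation; the rule itself is the TME2073 example above:

```python
# Toy rule-based mail filter for the IF subject-line THEN folder rule above.
# The rule representation and function names are assumptions for illustration.

rules = [
    # (keyword in subject line, destination folder)
    ("TME2073", "TME2073Folder"),
]

def file_message(subject, rules, default="Inbox"):
    """Return the folder for a message by firing the first matching rule."""
    for keyword, folder in rules:
        if keyword in subject:
            return folder
    return default

print(file_message("TME2073 assignment 2 due Friday", rules))  # TME2073Folder
print(file_message("Lunch on Tuesday?", rules))                # Inbox
```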

21
Spam filter
• A spam filter is an agent that puts incoming emails into
wanted or unwanted (spam) categories, and deletes any
unwanted emails. Its goal as a goal-based agent is to put
all emails in the right category.
• In the course of this not-so-simple task, the agent can
occasionally make mistakes. Because its goal is to classify
all emails correctly, it will attempt to make as few errors
as possible.

Question: Is Agent 2 worse than Agent 1?
22


Cost-based agent
If the goal is framed as a cost-based agent, it will try to calculate the
profit and loss that the agent can generate based on its
actions. Because the goal is to calculate the profit and loss,
we can assume the following calculations (reproduced in code below):

1. Say both agents generate errors and we want to
calculate the costs created by the errors and compare the
results. Assume here that having to manually delete a spam
email costs one cent and retrieving a deleted email, or the
loss of an email, costs RM 1.00 (100 cents).
Costs for agent 1 = (11 x 100 cents) + (1 x 1 cent) = 1,101 cents
Costs for agent 2 = (0 x 100 cents) + (38 x 1 cent) = 38 cents
Therefore agent 2 saves 1,101 cents - 38 cents = 1,063 cents.
23
2. Say both agents generate profit by correct classifications of
email. Assume that for every desired email recognized, a
profit of RM 1.00 accrues, and for every correctly deleted
spam email, a profit of one cent.

Profit for agent 1 = (189 x 100 cents) + (799 x 1 cent) = 19,699 cents
Profit for agent 2 = (200 x 100 cents) + (762 x 1 cent) = 20,762 cents
Agent 2 therefore has 20,762 cents - 19,699 cents = 1,063
cents higher profit.
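The two calculations can be reproduced directly; the counts (11 lost emails, 38 manual deletions, 189/799 and 200/762 correct classifications) are the figures quoted above:

```python
# Reproducing the cost and profit arithmetic from the two scenarios above.
# Error and classification counts are the figures quoted in the slides.

LOST_EMAIL_COST = 100   # cents: losing/retrieving a wanted email (RM 1.00)
MANUAL_DELETE_COST = 1  # cents: manually deleting a spam email

cost_agent1 = 11 * LOST_EMAIL_COST + 1 * MANUAL_DELETE_COST   # 1101 cents
cost_agent2 = 0 * LOST_EMAIL_COST + 38 * MANUAL_DELETE_COST   # 38 cents
print(cost_agent1 - cost_agent2)  # 1063 cents saved by agent 2

WANTED_PROFIT = 100  # cents per correctly kept email (RM 1.00)
SPAM_PROFIT = 1      # cents per correctly deleted spam email

profit_agent1 = 189 * WANTED_PROFIT + 799 * SPAM_PROFIT   # 19699 cents
profit_agent2 = 200 * WANTED_PROFIT + 762 * SPAM_PROFIT   # 20762 cents
print(profit_agent2 - profit_agent1)  # 1063 cents higher profit for agent 2
```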
24
Information Agents

 Based on a personal profile giving the user's interests, searches for
relevant information, collates & prioritizes the information, & presents it to
the user on request
 Techniques:
 Expert Systems - to identify relevant information
 Machine Learning - to build & update the personal profile
 Natural Language Processing - to analyze potentially relevant texts

25
Agent-based Interface
 Based on the idea that the user communicates with one of a number of
conversational agents
 Conversational agent: an animated "talking head" that the user can
communicate with using Natural Language
 Uses info about the user's particular needs and preferences, and factual info
about the user & organization
 Needs to communicate with other agents in the system (mail handling &
information agents)
 Techniques:
 Natural Language Processing
 Machine Learning
 Knowledge Representation
 Agent-oriented programming: a programming paradigm that encourages
thinking in terms of communicating agents
26
Physical Agents

 Robots:
 from relatively simple programmable manipulators used for tasks (e.g.: car
assembly) up to humanoid robots

 Have their own goals and are able to adapt their behavior based on
information received.
 Types of robots (based on different purposes):
 Manufacturing robots
 Autonomous mobile robots
 Humanoid robots
27
Manufacturing Robots
 Currently used in a variety of manufacturing tasks (e.g.: welding,
assembly, spray painting)

 Consist of a jointed arm with a device or end effector at the end of it
(e.g.: a gripper – to pick things up)
 The number & arrangement of the joints determines the range of positions
and orientations that the end effector can be positioned in.

 For full flexibility - 6 joints (allows the end effector to be placed in any
position within reach, at any angle); 6 degrees of freedom: three for x-y-z
position & three for orientation.

28
Autonomous Mobile Robots
 Used as "delivery boys" in large organizations:
to take some object to some location somewhere in the
building, navigate their way about the building (avoiding
obstacles on the way) and make the delivery.
 Do routine tasks in hazardous environments (e.g. the surface of
Mars, a nuclear power station, near a fire).
 May sometimes be remotely operated by a human (who can see what the robot
sees through its camera, and directly control the movement and actions of the robot).
 Normally operate in dynamic and unpredictable environments.

29
Humanoid Robots
 Human-like robots: complete with head, eyes, arms, hands, fingers, and
possibly even legs.

 Examples
 Cog (developed at MIT by Rodney Brooks and team)
 ASIMO (developed at Honda, Japan)
 TOPIO (developed at TOSY Robotics, Vietnam)

30
Rescue robots showdown

The robots competing in the DARPA (US Defense
Advanced Research Projects Agency) Robotics
Challenge face eight complex tasks based on a
disaster-response scenario. The tasks are
designed to test perception, dexterity,
endurance and other capabilities. The team
whose robot scores the most points wins
USD $2 million.

[Photos: the Chimp and Atlas robots]
31
Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
• Agent's function -> look-up table
• For many agents this is a very large table (a minimal sketch follows below)
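A minimal sketch of the look-up table idea for this world; for brevity the table here is indexed by the current percept only, whereas a full table-driven agent would be indexed by the entire percept sequence:

```python
# Table-driven vacuum agent for the two-location world above.
# Keys are (location, status) percepts; the table itself is the "program".
# Indexing by the current percept only is a simplification for illustration.

table = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def table_driven_agent(percept):
    """Look the current percept up; NoOp if the percept is not in the table."""
    return table.get(percept, "NoOp")

print(table_driven_agent(("A", "Dirty")))  # Suck
print(table_driven_agent(("A", "Clean")))  # Right
```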

32
• A table is a simple way to specify a mapping from percepts to actions
• Tables may become very large
• All work is done by the designer
• No autonomy; all actions are predetermined
• Learning might take a very long time
• Mapping is implicitly defined by a program
• Rule-based
• Algorithm
• Neural Networks

33
Rational agents
• Rationality depends on:
– Performance measure defining success
– Agent's prior knowledge of the environment
– Actions that the agent can perform
– Agent's percept sequence to date

• Rational Agent: An agent which always does the right thing.
For each possible percept sequence, a rational agent should select an action that
is expected to maximize its performance measure, given the evidence provided
by the percept sequence and whatever built-in knowledge the agent has.

• E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt
cleaned up, amount of time taken, amount of electricity consumed, amount of
noise generated, etc.

34
Rationality
• Rationality is different from omniscience
• Percepts may not supply all relevant information
• E.g., in a card game, you don't know the cards of others.
• Rationality is different from being perfect
• Rationality maximizes expected outcome while perfection maximizes
actual outcome.
• Perfect rationality assumes that the rational agent knows all and will
take the action that maximizes its utility.
• Human beings do not satisfy this definition of rationality.

35
Autonomy in Agents
The autonomy of an agent is the extent to which its
behaviour is determined by its own experience,
rather than the knowledge of its designer.

• Extremes
• No autonomy – ignores environment/data
• Complete autonomy – must act randomly/no program

• Example: baby learning to crawl
• Ideal: design agents to have some autonomy
• Possibly become more autonomous with experience

36
PEAS
• PEAS: Performance measure, Environment, Actuators,
Sensors

• Must first specify the setting for intelligent agent design

• Consider, e.g., the task of designing an automated taxi driver
(captured as plain data in the sketch after this list):

– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits

– Environment: Roads, other traffic, pedestrians, customers

– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard
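The PEAS description is just structured data, as in this sketch (the dataclass is illustrative; the field contents are the taxi example above):

```python
from dataclasses import dataclass

# A PEAS description captured as plain data; the dataclass itself is an
# assumption for illustration, the contents are the taxi example above.

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
print(taxi.performance)
```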

37
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors

38
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's score on test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions, corrections)
• Sensors: Keyboard

39
Environment types
• Fully observable (vs. partially observable)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent)

40
Fully observable (vs. partially observable)
• Is everything the agent requires to choose its actions
available to it via its sensors? Perfect or full information.
• If so, the environment is fully observable (accessible)
• If not, parts of the environment are inaccessible
• The agent must make informed guesses about the world.
• In decision theory: perfect information vs. imperfect
information.

Cross Word: Fully; Poker: Partially; Backgammon: Partially;
Taxi driver: Partially; Part-picking robot: Fully; Image analysis: Fully

41
Deterministic (vs. stochastic)
• Does the change in world state depend only on the current state
and the agent's action?
• Non-deterministic environments
• have aspects beyond the control of the agent
• Utility functions have to guess at changes in the world

Cross Word: Deterministic; Poker: Stochastic; Backgammon: Stochastic;
Taxi driver: Stochastic; Part-picking robot: Stochastic; Image analysis: Deterministic

42
Episodic (vs. sequential)
• Is the choice of current action dependent on previous actions?
• If not, then the environment is episodic
• In non-episodic environments:
• the agent has to plan ahead:
• the current choice will affect future actions

Cross Word: Sequential; Poker: Sequential; Backgammon: Sequential;
Taxi driver: Sequential; Part-picking robot: Episodic; Image analysis: Episodic

43
Static (vs. dynamic)
• Static environments don't change
• while the agent is deliberating over what to do
• Dynamic environments do change
• so the agent should/could consult the world when choosing actions
• alternatively: anticipate the change during deliberation OR make the
decision very fast
• Semidynamic: the environment itself does not change with the passage of
time but the agent's performance score does.

Cross Word: Static; Poker: Static; Backgammon: Static;
Taxi driver: Dynamic; Part-picking robot: Dynamic; Image analysis: Semidynamic

44
Discrete (vs. continuous)
• A limited number of distinct, clearly defined percepts and
actions vs. a range of values (continuous)

Cross Word: Discrete; Poker: Discrete; Backgammon: Discrete;
Taxi driver: Continuous; Part-picking robot: Continuous; Image analysis: Continuous

45
Single agent (vs. multiagent)
• An agent operating by itself in an environment, or many
agents working together

Cross Word: Single; Poker: Multi; Backgammon: Multi;
Taxi driver: Multi; Part-picking robot: Single; Image analysis: Single

46
Summary

                    Observable  Deterministic  Episodic    Static   Discrete  Agents
Cross Word          Fully       Deterministic  Sequential  Static   Discrete  Single
Poker               Partially   Stochastic     Sequential  Static   Discrete  Multi
Backgammon          Partially   Stochastic     Sequential  Static   Discrete  Multi
Taxi driver         Partially   Stochastic     Sequential  Dynamic  Conti     Multi
Part-picking robot  Partially   Stochastic     Episodic    Dynamic  Conti     Single
Image analysis      Fully       Deterministic  Episodic    Semi     Conti     Single

47
Agent types
• Four basic types in order of increasing generality:
• Simple reflex agents
• Reflex agents with state/model
• Goal-based agents
• Utility-based agents
• All these can be turned into learning agents

48
Simple reflex agents

49
Simple reflex agents
• Simple but very limited intelligence.
• Action does not depend on percept history, only on the current percept.
• Therefore no memory requirements.
• Infinite loops
• Suppose the vacuum cleaner does not observe its location. What do you do given
percept = Clean? Left at A or Right at B -> infinite loop.
• A fly buzzing around a window or light.
• Possible solution: randomize the action (see the sketch below).
• Thermostat.
• Chess – openings, endings
• Lookup table (not a good idea in general)
• 35^100 entries required for the entire game
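A minimal sketch of the randomization fix, assuming the agent perceives only Clean/Dirty and not its location:

```python
import random

# Reflex vacuum agent that cannot observe its location, only Clean/Dirty.
# Randomizing the move breaks the deterministic infinite loop noted above.

def randomized_reflex_agent(status):
    if status == "Dirty":
        return "Suck"
    # Without knowing whether it is at A or B, a fixed move can loop forever;
    # choosing randomly eventually reaches the other square.
    return random.choice(["Left", "Right"])

print(randomized_reflex_agent("Dirty"))  # Suck
print(randomized_reflex_agent("Clean"))  # Left or Right at random
```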

50
Goal-based agents
• Is knowing the state and environment enough?
– A taxi can go left, right, or straight
• Have a goal
• A destination to get to

• Use knowledge about a goal to guide actions
• E.g., search, planning

51
Goal-based agents

• A reflex agent brakes when it sees brake lights. A goal-based agent
reasons (sketched below):
– Brake light -> car in front is stopping -> I should stop -> I should use the brake
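A toy sketch of that reasoning chain; the one-step world model and goal test are assumptions for illustration:

```python
# Sketch of the brake-light reasoning chain above as a goal-based decision.
# The world model and goal test are assumptions for illustration.

def predict(state, action):
    """Very small world model: what happens if we brake or not."""
    car_ahead_stopping = state["brake_light_ahead"]
    if car_ahead_stopping and action != "brake":
        return {"crashed": True}
    return {"crashed": False}

def goal_based_action(state, actions, goal_test):
    """Pick any action whose predicted outcome satisfies the goal."""
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return None

state = {"brake_light_ahead": True}
print(goal_based_action(state, ["keep_going", "brake"],
                        lambda s: not s["crashed"]))  # brake
```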
52
Model-based reflex agents
 Know how the world evolves
 An overtaking car gets closer from behind
 Know how the agent's actions affect the world
 A wheel turned clockwise takes you right

 Model-based agents update their state (see the sketch below)
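A minimal sketch of the state update, with the two taxi examples above folded in as toy rules; all names are illustrative:

```python
# Sketch of a model-based reflex agent: internal state is updated from the
# previous state, the last action, and the new percept.

def update_state(state, last_action, percept):
    """Fold the new percept and the effect of the last action into the state."""
    state = dict(state)
    # "How the world evolves": an overtaking car seen behind gets closer.
    if percept.get("car_behind"):
        state["gap_behind"] = max(0, state.get("gap_behind", 10) - 1)
    # "How my actions affect the world": turning the wheel changes heading.
    if last_action == "turn_clockwise":
        state["heading"] = "right"
    return state

state = {"gap_behind": 3, "heading": "straight"}
state = update_state(state, "turn_clockwise", {"car_behind": True})
print(state)  # {'gap_behind': 2, 'heading': 'right'}
```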

53
Utility-based agents
• Goals are not always enough
• Many action sequences get the taxi to its destination
• Consider other things: how fast, how safe…
• A utility function maps a state onto a real number which
describes the associated degree of "happiness", "goodness",
"success" (a toy example follows below).
• Where does the utility measure come from?
• Economics: money.
• Biology: number of offspring.
• Your life?
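A minimal sketch of such a utility function for the taxi; the weights and trip attributes are assumptions for illustration:

```python
# Sketch of a utility function for the taxi example: maps a state describing a
# completed trip onto a single real number. The weights are assumptions.

def utility(trip):
    """Trade off profit, speed, and safety; higher is better."""
    return (2.0 * trip["profit"]          # money earned
            - 0.5 * trip["minutes"]       # slower trips are worth less
            - 100.0 * trip["accidents"])  # safety dominates everything

fast_risky = {"profit": 20, "minutes": 10, "accidents": 1}
slow_safe  = {"profit": 15, "minutes": 25, "accidents": 0}
print(utility(fast_risky), utility(slow_safe))  # -65.0 17.5 -> prefer safe
```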
54
Utility-based agents

55
Learning agents
 The performance element is
what was previously the
whole agent
 Input: sensors
 Output: actions
 The learning element
 modifies the performance
element.

56
Learning agents
 Critic: tells how the agent is
doing
 Input: checkmate?
 Fixed

 Problem generator
 Tries to solve the problem
differently instead of
optimizing.
 Suggests exploring new
actions -> new problems.
57
Learning agents (Taxi driver)
• Performance element
• How it currently drives
• Taxi driver makes a quick left turn across 3 lanes
• Critic observes shocking language by passengers and other drivers and
flags the action as bad
• Learning element tries to modify the performance element for the future
(a toy loop is sketched below)
• Problem generator suggests experimenting, e.g. trying the brakes on
different road conditions
• Exploration vs. Exploitation
• Learning experience can be costly in the short run
• Shocking language from other drivers
• Less tip
• Fewer passengers
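A toy version of this loop, with the critic's feedback driving the learning element; all numbers and names are assumptions:

```python
import random

# Minimal sketch of the learning-agent loop: the critic scores each action and
# the learning element nudges the performance element. All numbers are toy.

quick_turn_prob = 0.9   # performance element's tendency to take quick turns

def critic(action):
    """Fixed performance standard: quick turns draw complaints."""
    return -1.0 if action == "quick_turn" else +0.5

for trip in range(1000):
    action = "quick_turn" if random.random() < quick_turn_prob else "slow_turn"
    reward = critic(action)
    # Learning element: make penalized actions less likely in future.
    if action == "quick_turn" and reward < 0:
        quick_turn_prob = max(0.0, quick_turn_prob - 0.01)

print(round(quick_turn_prob, 2))  # tendency driven down by critic feedback
```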

58
Summary
• An agent perceives and acts in an
environment, has an architecture, and is implemented by an
agent program.
• An ideal agent always chooses the action which maximizes its
expected performance, given its percept sequence so far.
• An autonomous agent uses its own experience rather than
built-in knowledge of the environment provided by the designer

59
Summary
• An agent program maps from percepts to
actions and updates its internal state.
 Simple reflex agents respond immediately to percepts
 Goal-based agents act in order to achieve their goal(s)
 Utility-based agents maximize their own utility function
 Representing knowledge is important for successful agent design
 The most challenging environments are partially observable,
stochastic, sequential, dynamic, and continuous, and contain
multiple intelligent agents

60
