
STRUCTURE OF AGENTS

AGENT PROGRAM
The job of AI is to design the agent program that
implements the agent function mapping percepts to actions.
Agent = Architecture + Program
Example: Program for walking; Legs as architecture.
All agent programs have the same skeleton.
Difference between Agent program and Agent function
Agent program: Takes the current percept as input
Agent function: Takes the entire percept history

TABLE DRIVEN AGENT PROGRAM

Invoked for each new percept and returns an action each time.
Let P be the set of possible percepts and let T be the lifetime of the
agent (the total number of percepts it will receive).
The lookup table will contain Σ_{t=1}^{T} |P|^t entries.
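The table-driven scheme can be sketched as follows; the two-location vacuum-world table entries and the helper names are illustrative assumptions, not prescribed code. The table is keyed on the entire percept sequence, which is why it needs Σ_{t=1}^{T} |P|^t entries.

```python
# Sketch of a table-driven agent program (entries are illustrative).
percepts = []  # the agent's full percept history

# Hypothetical lookup table for a two-location vacuum world,
# keyed on the whole percept sequence seen so far.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    """Append the percept to the history and look up an action."""
    percepts.append(percept)
    return table.get(tuple(percepts))

def table_size(num_percepts, lifetime):
    """Number of entries needed: the sum of |P|^t for t = 1..T."""
    return sum(num_percepts ** t for t in range(1, lifetime + 1))
```

Even a tiny percept set blows up: with |P| = 2 and T = 3 the table already needs 2 + 4 + 8 = 14 entries.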

CHALLENGE IN AI
To find out how to write programs that, to the extent possible,
produce rational behavior from a small amount of code rather than
from a large number of table entries.
Example: Calculators in the 1970s and now?

KINDS OF AGENT PROGRAM

Simple reflex agents.
Model-based reflex agents.
Goal-based agents.
Utility-based agents.

SIMPLE REFLEX AGENT


Agents select actions on the basis of the current percept,
ignoring the rest of the percept history.

Condition-action rule
-> if car-in-front-is-braking then initiate-braking.

CONTD.,
Such a program is specific to one particular vacuum environment.
A more general approach is to build a general-purpose interpreter for
condition-action rules and then to create rule sets for specific task
environments.
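A general-purpose interpreter for condition-action rules can be sketched like this; the rule representation (predicate, action) and all rule contents are assumptions for illustration.

```python
# Sketch of a condition-action rule interpreter. Each rule pairs a
# condition predicate with an action; the first matching rule fires.
rules = [
    (lambda s: s == "car-in-front-is-braking", "initiate-braking"),
    (lambda s: True, "continue"),  # default rule: no condition matched
]

def rule_match(state, rule_set):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rule_set:
        if condition(state):
            return action

def simple_reflex_agent(percept):
    # interpret-input: here the percept is taken directly as the state
    state = percept
    return rule_match(state, rules)
```

Swapping in a different rule list retargets the same interpreter to another task environment.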

CONTD.,

It acts according to a rule whose condition matches the
current state, as defined by the percept.

PROBLEM?
Will work only if the correct decision can be made on the basis of the
current percept alone, that is, only if the environment is fully
observable.
Example: Car braking and a vacuum cleaner without a location sensor.
To avoid infinite loops, the agent can randomize its actions.

MODEL-BASED REFLEX AGENTS


To keep track of the part of the world it can't see now.
Maintain some sort of internal state - depends on the percept history.
Example: Car turning and braking
Two things needed for this:
1. Information about how the world evolves independently of the agent.
2. Information about how the agent's own actions affect the world.
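The two kinds of knowledge above can be folded into an update-state function, as in this sketch; the vacuum-world model (tracking the location the percept does not report) is an illustrative assumption.

```python
# Sketch of a model-based reflex agent: internal state is updated from
# the old state, the last action, and the new percept.
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None          # internal state: what the world is like now
        self.action = None         # most recent action
        self.update_state = update_state  # encodes points 1. and 2. above
        self.rules = rules

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.action, percept)
        self.action = self.rules(self.state)
        return self.action

def update_state(state, action, percept):
    """Hypothetical model: the percept reports only dirt, so the agent
    tracks its own location from the effects of its actions."""
    loc = "A" if state is None else state["loc"]
    if action == "Right":
        loc = "B"
    elif action == "Left":
        loc = "A"
    return {"loc": loc, "dirty": percept == "Dirty"}

def rules(state):
    if state["dirty"]:
        return "Suck"
    return "Right" if state["loc"] == "A" else "Left"
```

Note that the agent knows it is in B after moving Right, even though no percept ever tells it so.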

GOAL-BASED AGENTS
Knowing about the current state of the environment is not always enough to
decide what to do.
Example: Road Junction ( ? )
Agent needs some sort of goal information that describes situations that are
desirable.
Agent program can combine this with information about the results of possible
actions(reflex agent internal state).
Search and planning are the subfields of AI that find action sequences
that achieve the agent's goals.

CONTD.,
Decision making of this kind is different.
It involves consideration of the future: "What will happen if I do such-and-such?"
The previous schemes have built-in rules that map directly from percepts to actions.
Example:
The reflex agent brakes when it sees brake lights.
A goal-based agent reasons: if the car in front has its brake lights on, then,
given the way the world usually evolves, it will slow down.
Although the goal-based agent appears less efficient, it is more flexible because
the knowledge that supports its decisions is represented explicitly and can be
modified.

UTILITY-BASED AGENTS
Goals alone are not really enough to generate high-quality behavior in most
environments.
Utility function maps a state (or a sequence of states) onto a real number,
which describes the associated degree of happiness.
Example:
Many action sequences that will get the taxi to its destination (thereby
achieving the goal) but some are quicker, safer, more reliable, or cheaper than
others.

PROBLEM SOLVING AGENTS


An agent with several immediate options of unknown value can decide
what to do by just examining different possible sequences of actions
that lead to states of known value, and then choosing the best sequence.
This process of looking for such a sequence is called search.
A search algorithm takes a problem as input and returns a solution in the
form of an action sequence.
Once a solution is found, the actions it recommends can be carried out.
This is called the execution phase.

WELL-DEFINED PROBLEMS AND SOLUTIONS
Problem can be defined formally by four components
1. Initial State
2. Successor Function
Initial State + Successor function = State space
3. Goal test
4. Path cost function (Assigns cost to each path)
Solution: A path from the initial state to a goal state.
Optimal Solution: Lowest cost among all solutions.
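The four components can be sketched as a small Python class; the class layout and the tiny route-finding instance below are illustrative assumptions.

```python
# Sketch of the four-component problem definition.
class Problem:
    def __init__(self, initial, successors, goal_test):
        self.initial = initial            # 1. initial state
        self.successors = successors      # 2. state -> [(action, next_state)]
        self.goal_test = goal_test        # 3. goal test
        # 4. path cost: assumed here to be one per step, for simplicity

# Hypothetical route-finding instance: initial state + successor
# function together define the state space {S, A, B, G}.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
route = Problem(
    "S",
    lambda s: [("go-" + n, n) for n in graph[s]],
    lambda s: s == "G",
)
```

A solution here is any path from S to G; an optimal solution is the one of lowest path cost.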

SEARCHING FOR SOLUTIONS


Done by a search through the state space.
The root of the search tree is a search node corresponding to the initial state.
The first step is to test whether this is a goal state.
Since the first state may not be a goal state, we consider the next states.
Expanding a state means applying the successor function to the current
state to generate new states.
Search strategy: which state to expand next?

CONTD.,
A node is a data structure with five components:
1. STATE
2. PARENT-NODE
3. ACTION
4. PATH-COST
5. DEPTH
Fringe: Collection of nodes generated but not yet expanded.
Search strategy selects the one that needs to be expanded from this set.
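The five-component node can be sketched as a dataclass; the helper for building child nodes (with a unit step cost) is an illustrative assumption.

```python
# Sketch of the five-component search node.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object                       # 1. STATE
    parent: Optional["Node"] = None     # 2. PARENT-NODE
    action: Optional[str] = None        # 3. ACTION
    path_cost: float = 0.0              # 4. PATH-COST
    depth: int = 0                      # 5. DEPTH

def child_node(parent, action, state, step_cost=1):
    """Build a successor node, extending path cost and depth."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)
```

Following parent pointers back from a goal node recovers the solution path.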

CONTD.,
Collection of nodes is implemented as a queue.
Queue operations
1. Make-Queue(element,)
2. Empty?(queue)
3. First(queue)
4. Remove-First(queue)
5. Insert(element, queue)
6. Insert-All(elements, queue)
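The queue operations above map naturally onto Python's collections.deque (an implementation assumption; used FIFO here, popping from the other end gives LIFO):

```python
from collections import deque

fringe = deque(["n1"])          # Make-Queue(element, ...)
fringe.extend(["n2", "n3"])     # Insert-All(elements, queue)
first = fringe[0]               # First(queue)
popped = fringe.popleft()       # Remove-First(queue)
fringe.append("n4")             # Insert(element, queue)
empty = len(fringe) == 0        # Empty?(queue)
```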

MEASURING PROBLEM-SOLVING
PERFORMANCE
Output may be either Failure or a Solution.
4 ways to evaluate performance
1.Completeness: Is the algorithm guaranteed to find a solution
when there is one?
2.Optimality: Does the strategy find the optimal solution?
3.Time complexity: How long does it take to find a solution?
4.Space complexity: How much memory is needed to perform
the search?

UNINFORMED SEARCH
STRATEGIES

INTRODUCTION
Also called Blind Search.
No additional information about states other than defined.
Distinguishes goal state from a non-goal state.
Strategies that know whether one non-goal state is "more
promising" than another are called informed search or
heuristic search.

BREADTH-FIRST SEARCH
Breadth-first search is a simple strategy in which the root
node is expanded first, then all the successors of the root
node are expanded next, then their successors, and so on.
The fringe is a first-in-first-out (FIFO) queue, assuring that the nodes
that are visited first will be expanded first.
Every node generated must remain in memory.
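A breadth-first search sketch over a hypothetical graph; the FIFO queue guarantees shallower nodes are expanded before deeper ones, so the shallowest goal is found first.

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS keeping whole paths on a FIFO queue of paths."""
    fringe = deque([[start]])
    visited = {start}
    while fringe:
        path = fringe.popleft()        # Remove-First: oldest path first
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:     # generate each state once
                visited.add(nxt)
                fringe.append(path + [nxt])
    return None

# Illustrative graph: G sits two steps from S.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
```

The visited set and the queue of paths are exactly the "every node generated must remain in memory" cost.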

BFS TO BE CONSIDERED
Two things to be considered:
1. The memory requirements are a bigger problem for breadth-first
search than is the execution time.
2. Time requirements are still a major factor, particularly if the solution is at
a great depth.

UNIFORM-COST SEARCH
Expands the node n with the lowest path cost.
Doesn't care about the number of steps, only the path cost.
May lead to an infinite loop (a zero-cost self loop).
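A uniform-cost search sketch: the fringe is a priority queue ordered by path cost g(n). The graph and its step costs are illustrative.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Expand the frontier node with the lowest path cost g(n)."""
    fringe = [(0, start, [start])]     # (g, state, path) min-heap
    best = {start: 0}                  # cheapest known cost per state
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if state == goal:
            return cost, path
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(fringe, (new_cost, nxt, path + [nxt]))
    return None

# Illustrative weighted graph: the cheap route to G takes more steps.
graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)],
         "B": [("G", 1)], "G": []}
```

Note it prefers the three-step path of cost 3 over the two-step path of cost 6, unlike BFS.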

DEPTH-FIRST SEARCH
Depth-first search always expands the deepest node in the current fringe of
the search tree.
LIFO (Stack order)
Depth-first search has very modest memory requirements.
It needs to store only a single path from the root to a leaf node.
Once a node has been expanded, it can be removed from memory.
A variant of depth-first search called backtracking search uses still less
memory.
Drawback:
- It can make a wrong choice and will go deep when a different choice will
provide solution near to root node. (Solved using depth limited search)
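A depth-first search sketch using an explicit LIFO stack; memory holds only the paths branching off the current path, which is the modest-memory property described above. The graph used in the test is illustrative.

```python
def depth_first_search(start, goal, successors):
    """DFS: always expand the most recently generated (deepest) path."""
    stack = [[start]]                  # LIFO stack of paths
    while stack:
        path = stack.pop()             # deepest path first
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in path:        # avoid cycles along this path
                stack.append(path + [nxt])
    return None
```

The found path is not guaranteed shallowest; a wrong early choice can send the search deep even when a goal sits near the root.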

ITERATIVE DEEPENING DFS

BIDIRECTIONAL SEARCH

AVOIDING REPEATED STATES


Storing all the nodes visited
For each node visited check
1. If the node already visited through another path
2. If so, discard it
3. Else, add the node to database
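The three steps above are the closed list of graph search; a sketch (expansion order and graph are illustrative):

```python
def graph_search_states(start, successors):
    """Expand states FIFO, discarding any state already visited
    through another path; returns the expansion order."""
    closed = set()                     # the database of visited states
    fringe = [start]
    order = []
    while fringe:
        state = fringe.pop(0)
        if state in closed:
            continue                   # 1.+2. already visited: discard
        closed.add(state)              # 3. else add to the database
        order.append(state)
        fringe.extend(successors(state))
    return order
```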

INFORMED SEARCH STRATEGIES


Solutions are found more efficiently than with uninformed search.
Uses specific knowledge beyond the definition of the problem itself.
Best-first search:
- A node is selected for expansion based on an evaluation function, f(n).
- Basically, the node with the lowest evaluation is selected for expansion.
- A data structure maintains the fringe in ascending order of f-values.
If the evaluation function were exactly accurate, this would expand the
best node first.
If not, it's not optimal!

GREEDY BEST FIRST SEARCH


Tries to expand the node that is closest to the goal, on the
grounds that this is likely to lead to a solution quickly.
It evaluates nodes by using just the heuristic function:
f(n) = h(n).
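A greedy best-first sketch: the fringe is ordered purely by h(n). The graph and the heuristic values in the test are made up for illustration.

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Always expand the fringe node with the smallest h(n)."""
    fringe = [(h(start), start, [start])]   # ordered by h only
    visited = {start}
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(fringe, (h(nxt), nxt, path + [nxt]))
    return None
```

Because g(n) is ignored, the path returned may not be the cheapest one.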

A* search: Minimizing the total estimated solution cost
The most widely known form of best-first search is called A* search.
f(n) = g(n) + h(n), where g(n) is the cost to reach the node and
h(n) is the estimated cost to get from the node to the goal, so
f(n) = estimated cost of the cheapest solution through n.
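An A* sketch ordering the fringe by f(n) = g(n) + h(n); the graph and the (admissible) heuristic values in the test are illustrative assumptions.

```python
import heapq

def astar(start, goal, successors, h):
    """Expand the fringe node with the smallest f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        for nxt, step in successors(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2               # keep the cheaper path
                heapq.heappush(
                    fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

Keeping only the cheaper g per state is one way to handle repeated states (see the graph-search discussion that follows).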

CONTD.,
Graph search instead of Tree search
Problems:
Suboptimal solutions may be returned, since the graph search
algorithm can discard the optimal path to a repeated
state if it is not the first one found.
Solutions:
1. The first solution is to extend GRAPH-SEARCH so that it
discards the more expensive of any two paths found
to the same node.
2. The second solution is to ensure that the optimal path to
any repeated state is always the first one followed.

DRAWBACK OF A*
Keeps all generated nodes in memory.
A* usually runs out of space long before it runs out of
time.
A* is not practical for many large-scale problems.

Memory-bounded heuristic
search
Iterative-deepening A* (IDA*) algorithm.
Overcomes the memory problem of A*.
The main difference between IDA* and standard iterative deepening is
that the cutoff used is the f-cost (g + h) rather than the depth.
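The f-cost cutoff idea can be sketched as follows; each pass is a depth-first search that prunes nodes whose f exceeds the cutoff, and the next cutoff is the smallest f-value that was pruned. Graph and heuristic in the test are illustrative.

```python
def ida_star(start, goal, successors, h):
    """IDA*: iterative deepening with an f-cost (g + h) cutoff."""
    def dfs(path, g, cutoff):
        state = path[-1]
        f = g + h(state)
        if f > cutoff:
            return f, None             # pruned: report our f upward
        if state == goal:
            return f, path
        next_cutoff = float("inf")
        for s, step in successors(state):
            if s not in path:          # avoid cycles along this path
                t, found = dfs(path + [s], g + step, cutoff)
                if found:
                    return t, found
                next_cutoff = min(next_cutoff, t)
        return next_cutoff, None

    cutoff = h(start)
    while True:
        cutoff, found = dfs([start], 0, cutoff)
        if found or cutoff == float("inf"):
            return found               # solution path, or None
```

Memory use is linear in the depth of the path, at the price of re-expanding nodes on every pass.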
Two memory-bounded algorithms:
1. Recursive best first search
2. Memory bounded A*

Recursive best first search


Similar to recursive depth-first search.
But instead of simply moving deeper, it keeps track of the f-value of the best
alternative path available from any ancestor of the current node.
If the current node exceeds this limit, the recursion unwinds back
to the alternative path.
As it unwinds, RBFS replaces the f-value of each node along the path with the
best f-value of its children.
Thus RBFS always remembers the f-value of the best leaf in the forgotten subtree.
Still suffers from excessive node regeneration.

RBFS suffers from using too little memory.

To utilize all available memory


1. Memory bounded A* (MA*)
2. Simplified MA* (SMA*)
SMA*:
Expands leaf nodes until memory is full.
To add a new node when memory is full, SMA* drops the worst
leaf node (the one with the highest f-value).
SMA* is forced to switch back and forth continually between a set
of candidate solution paths, only a small subset of which can fit in
memory (Thrashing).

LOCAL SEARCH ALGORITHMS AND


OPTIMIZATION PROBLEMS
Local search algorithms operate only using current state rather than
multiple paths.
Generally move only to neighbors of that state.
Paths followed by search are not retained.
Advantages:
Very little memory
Can often find reasonable solutions in large or infinite state spaces.
Useful for pure optimization problems, in which the aim is to find the
best state according to an objective function.
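Hill climbing is a typical local search and illustrates the points above: only the current state is kept, only neighbors are considered, and no path is retained. The integer objective in the test is an illustrative example, not from the slides.

```python
def hill_climbing(start, neighbors, value):
    """Move to the best neighbor until none improves the objective."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current             # local maximum reached
        current = best
```

On a smooth objective this climbs straight to the peak; in general it can stop at a local maximum rather than the global one.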
