AGENT PROGRAM
The job of AI is to design the agent program that
implements the agent function mapping percepts to actions.
Agent = Architecture + Program
Example: Program for walking; Legs as architecture.
All agent programs have the same skeleton.
Difference between Agent program and Agent function
Agent program: takes the current percept as input; it is invoked for each new percept and returns an action each time.
Agent function: takes the entire percept history.
Let P be the set of possible percepts and let T be the life time of the
agent (the total number of percepts it will receive).
The lookup table will contain ∑_{t=1}^{T} |P|^t entries.
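The table-size formula above can be made concrete with a short sketch. This is a hypothetical illustration, not from the text: the percept names and table entries are made up, and the point is that the table maps entire percept sequences (not single percepts) to actions, which is why it needs ∑_{t=1}^{T} |P|^t entries.

```python
def table_size(num_percepts, lifetime):
    """Entries needed by a table-driven agent: sum_{t=1}^{T} |P|^t."""
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

def make_table_driven_agent(table):
    """Agent that looks up its entire percept history in a table."""
    percept_history = []
    def agent(percept):
        percept_history.append(percept)
        return table.get(tuple(percept_history))
    return agent

# Even a tiny world blows up: with |P| = 2 percepts and T = 10 steps,
# the table already needs 2 + 4 + ... + 2^10 = 2046 entries.
```

Running `table_size(2, 10)` gives 2046, which shows why table lookup is hopeless for anything but trivial agents.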
CHALLENGE IN AI
To find out how to write programs that, to the extent possible, produce rational behavior from a small amount of code rather than from a large number of table entries.
Example: Calculators in the 1970s and now?
Condition-action rule
-> if car-in-front-is-braking then initiate-braking.
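A condition-action rule set like the one above can be packaged into a simple reflex agent. A minimal sketch, assuming a dictionary percept and a made-up rule list (neither is prescribed by the text):

```python
def simple_reflex_agent(rules):
    """Return an agent that fires the first rule whose condition matches
    the current percept; condition-action rules only, no history."""
    def agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return 'no-op'  # hypothetical default when no rule matches
    return agent

# Illustrative rule set for the driving example.
driver = simple_reflex_agent([
    (lambda p: p.get('car_in_front_is_braking'), 'initiate-braking'),
])
```

Note the agent consults only the current percept, which is exactly the limitation the next slides discuss.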
Such a rule set is specific to one particular vacuum environment.
A better approach: build a general-purpose interpreter for condition-action rules and then create rule sets for specific task environments.
PROBLEM?
Will work only if the correct decision can be made on the basis of the current percept alone; that is, only if the environment is fully observable.
Example: car braking, and a vacuum cleaner without a location sensor.
To avoid infinite loops: randomize the actions.
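The randomization fix can be sketched for the location-blind vacuum. This is an illustrative toy, assuming a percept that reports only dirt status (no location, as in the example above): a deterministic agent could oscillate forever, while a random move breaks the loop eventually.

```python
import random

def randomized_vacuum_agent(percept, rng=random):
    """Reflex vacuum agent with no location sensor (hypothetical percepts).
    Sucks when dirt is seen; otherwise moves in a random direction so it
    cannot get stuck repeating one deterministic move forever."""
    if percept == 'Dirty':
        return 'Suck'
    return rng.choice(['Left', 'Right'])
```

Randomized behavior is rarely rational in a single-agent setting, but here it escapes the infinite loop that full observability would have prevented.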
GOAL-BASED AGENTS
Knowing about the current state of the environment is not always enough to
decide what to do.
Example: at a road junction, the taxi can turn left, turn right, or go straight; the correct choice depends on where the taxi is trying to get to.
Agent needs some sort of goal information that describes situations that are
desirable.
The agent program can combine this with information about the results of possible
actions (the same information used to update the reflex agent's internal state).
Search and planning are the subfields of AI devoted to finding action sequences that
achieve the agent's goals.
Decision making of this kind is fundamentally different from the condition-action rules described earlier.
It involves consideration of the future: "What will happen if I do such-and-such?"
Previous schemes have built-in rules that map directly from percepts to actions.
Example:
The reflex agent brakes when it sees brake lights.
A goal-based agent reasons from knowledge of the way the world usually evolves: if the car in front has its brake lights on, it will slow down, so braking achieves the goal of not hitting it.
Although the goal-based agent appears less efficient, it is more flexible because
the knowledge that supports its decisions is represented explicitly and can be
modified.
UTILITY-BASED AGENTS
Goals alone are not really enough to generate high-quality behavior in most
environments.
Utility function maps a state (or a sequence of states) onto a real number,
which describes the associated degree of happiness.
Example:
There are many action sequences that will get the taxi to its destination (thereby
achieving the goal), but some are quicker, safer, more reliable, or cheaper than
others.
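Utility-based action selection can be sketched in a few lines. The route data and the utility weights below are made up for illustration: both routes achieve the goal, but the utility function (which penalizes time and, more heavily, risk) prefers one over the other.

```python
def best_action(actions, result, utility):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(a)))

# Hypothetical route data: both reach the destination (goal achieved),
# but they differ in travel time and accident risk.
routes = {
    'highway':  {'time': 20, 'risk': 0.30},
    'backroad': {'time': 35, 'risk': 0.05},
}

def utility(state):
    # Higher is better: penalize time, and penalize risk heavily.
    return -state['time'] - 100 * state['risk']
```

With these (illustrative) weights the agent trades 15 minutes for a large drop in risk, something a pure goal test cannot express.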
A node is a data structure with five components:
1. STATE
2. PARENT-NODE
3. ACTION
4. PATH-COST
5. DEPTH
Fringe: Collection of nodes generated but not yet expanded.
Search strategy selects the one that needs to be expanded from this set.
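The five-component node structure can be written as a small dataclass. A minimal sketch; the `child_node` helper is an assumption (a conventional way to build a successor node), not named in the text.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                          # 1. STATE
    parent: Optional['Node'] = None     # 2. PARENT-NODE
    action: Any = None                  # 3. ACTION applied to reach this node
    path_cost: float = 0.0              # 4. PATH-COST g(n) from the root
    depth: int = 0                      # 5. DEPTH in the search tree

def child_node(parent, action, state, step_cost):
    """Build a successor node, accumulating path cost and depth."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)
```

The fringe is then simply a collection of such `Node` objects awaiting expansion.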
The collection of fringe nodes is implemented as a queue.
Queue operations
1. Make-Queue(element, ...)
2. Empty?(queue)
3. First(queue)
4. Remove-First(queue)
5. Insert(element, queue)
6. Insert-All(elements, queue)
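The six operations above can be sketched over Python's `collections.deque`. The function names follow the list; the FIFO behavior shown here is what breadth-first search relies on later (a LIFO variant would give depth-first search).

```python
from collections import deque

def make_queue(*elements):
    """Make-Queue: create a queue containing the given elements."""
    return deque(elements)

def is_empty(queue):
    """Empty?: true when the queue holds no elements."""
    return len(queue) == 0

def first(queue):
    """First: peek at the front element without removing it."""
    return queue[0]

def remove_first(queue):
    """Remove-First: pop and return the front element."""
    return queue.popleft()

def insert(element, queue):
    """Insert: add one element at the back (FIFO discipline)."""
    queue.append(element)
    return queue

def insert_all(elements, queue):
    """Insert-All: add several elements at the back."""
    queue.extend(elements)
    return queue
```

Swapping where `insert` puts elements (front vs. back) is the only change needed to turn one search strategy's queue into another's.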
MEASURING PROBLEM-SOLVING
PERFORMANCE
Output may be either Failure or a Solution.
4 ways to evaluate performance:
1. Completeness: Is the algorithm guaranteed to find a solution when there is one?
2. Optimality: Does the strategy find the optimal solution?
3. Time complexity: How long does it take to find a solution?
4. Space complexity: How much memory is needed to perform the search?
UNINFORMED SEARCH
STRATEGIES
INTRODUCTION
Also called Blind Search.
No additional information about states beyond that given in the problem definition.
Can only generate successors and distinguish a goal state from a non-goal state.
Strategies that know whether one non-goal state is "more
promising" than another are called informed search or
heuristic search.
BREADTH-FIRST SEARCH
Breadth-first search is a simple strategy in which the root
node is expanded first, then all the successors of the root
node are expanded next, then their successors, and so on.
Uses a first-in-first-out (FIFO) queue, ensuring that the nodes generated first
(the shallowest unexpanded nodes) are expanded first.
Every node generated must remain in memory.
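The FIFO discipline described above gives the following breadth-first search. A minimal sketch, assuming a graph represented as a dictionary of successor lists (a made-up toy example, not from the text):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand shallowest nodes first via a FIFO
    queue of paths. Returns a shallowest path to the goal, or None."""
    frontier = deque([[start]])   # FIFO queue: oldest (shallowest) path first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None
```

Note that every generated path sits in `frontier` or `visited` until the search ends, which is exactly the memory problem the next slide raises.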
BFS TO BE CONSIDERED
2 things to be considered
1. The memory requirements are a bigger problem for breadth-first search than the execution time.
2. Time requirements are still a major factor, particularly if the solution lies at a great depth.
UNIFORM-COST SEARCH
Expands the node n with the lowest path cost g(n).
Doesn't care about the number of steps, only the path cost.
May lead to an infinite loop (e.g., a zero-cost self loop).
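Uniform-cost search replaces the FIFO queue with a priority queue ordered by path cost. A sketch assuming a weighted graph given as a dictionary of `(successor, step_cost)` lists (the edge weights below are illustrative):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n).
    Returns (cost, path) for a cheapest path to the goal, or None."""
    frontier = [(0, start, [start])]   # min-heap ordered by path cost
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path          # goal test on expansion => optimal
        if state in explored:
            continue                   # stale entry: cheaper path already expanded
        explored.add(state)
        for succ, step in graph.get(state, []):
            if succ not in explored:
                heapq.heappush(frontier, (cost + step, succ, path + [succ]))
    return None
```

Because the goal test happens when a node is expanded (not when it is generated), the first goal popped from the heap is guaranteed cheapest, provided every step cost is positive.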
DEPTH-FIRST SEARCH
Depth-first search always expands the deepest node in the current fringe of
the search tree.
LIFO (Stack order)
Depth-first search has very modest memory requirements.
It needs to store only a single path from the root to a leaf node.
Once a node has been expanded, it can be removed from memory.
A variant of depth-first search called backtracking search uses still less
memory.
Drawback:
- It can make a wrong choice and go deep down a long path when a different choice
would lead to a solution near the root. (Addressed by depth-limited search.)
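Switching the queue to LIFO (stack) order gives depth-first search. A sketch over the same dictionary-graph representation; the optional `limit` parameter is an assumption added to hint at the depth-limited variant mentioned above:

```python
def dfs(graph, start, goal, limit=50):
    """Depth-first search via an explicit LIFO stack of paths.
    `limit` caps path length, as in depth-limited search."""
    stack = [[start]]                  # LIFO: most recently pushed path first
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if len(path) <= limit:
            for succ in graph.get(node, []):
                if succ not in path:   # avoid cycles along the current path
                    stack.append(path + [succ])
    return None
```

Only the current stack of paths is kept, which reflects the modest memory use noted above; but without `limit`, a wrong early choice can send the search arbitrarily deep.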
BIDIRECTIONAL SEARCH
Graph search instead of Tree search
Problem:
Suboptimal solutions may be returned, since the graph search
algorithm can discard the optimal path to a repeated
state if it is not the first one found.
Solutions:
1. The first solution is to extend GRAPH-SEARCH so that it
discards the more expensive of any two paths found
to the same node.
2. The second solution is to ensure that the optimal path to
any repeated state is always the first one followed.
DRAWBACK OF A*
Keeps all generated nodes in memory.
A* usually runs out of space long before it runs out of
time.
A* is not practical for many large-scale problems.
Memory-bounded heuristic
search
Iterative-deepening A* (IDA*) algorithm.
Overcomes the memory problem of A*.
The main difference between IDA* and standard iterative deepening is that the
cutoff used is the f-cost (g + h) rather than the depth.
RBFS