Always express search-algorithm complexities in terms of the branching factor b and depth m, e.g. O(b^m).
In many cases, the goal is found before the entire tree is explored, say at depth m; if no goal is found, then m = d, the full depth of the tree.
Heuristic function: at a given node, estimates the cost to reach the goal from that node.
Initialization(D);
QUEUE <-- path only containing the root;
WHILE QUEUE is not empty AND NOT Termination(Q,D)
  remove the first path from QUEUE for which PathSelectionCondition(Q,Path) holds;
  create new paths (to all children);
  reject the new paths with loops;
  add the new paths to QUEUE as directed by PathAddCondition(Queue,Path);
  QueueTransformation(Queue);
  Modification(D);
IF Termination(Q,D) THEN return the terminating path ELSE ReturnonFail(D);
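The skeleton above can be sketched in Python. This is a minimal, hedged instantiation (the names `generic_search`, `children`, `is_goal` are my own, not the course's); the only difference between DFS and BFS is whether new paths go to the front or the back of the queue:

```python
from collections import deque

def generic_search(root, is_goal, children, add_to_front):
    """Uninformed search over paths; add_to_front=True gives DFS, False gives BFS."""
    queue = deque([[root]])                        # QUEUE <- path containing only the root
    while queue:                                   # WHILE QUEUE is not empty
        path = queue.popleft()                     # remove the first path
        if is_goal(path[-1]):                      # Termination: a path reaches G
            return path
        # create new paths (to all children), rejecting paths with loops
        new_paths = [path + [c] for c in children(path[-1]) if c not in path]
        if add_to_front:
            queue.extendleft(reversed(new_paths))  # DFS: add to the front of the queue
        else:
            queue.extend(new_paths)                # BFS: add to the back of the queue
    return None                                    # queue exhausted: failure
```

On a tiny graph with both a short and a long route to G, BFS returns the shallower path while DFS commits to the first branch.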
Algorithm overview. Each entry lists Initialization(D), Termination(D,Q), PathSelectionCondition(Q,Path), PathAddCondition(Queue,Path), QueueTransformation(Queue), Modification(D), ReturnonFail(D), Time Complexity and Space Complexity; fields not listed are None.

DFS
  Termination(D,Q): a path in Q reaches G
  PathAddCondition: add to the front of the queue
  Time: O(b^m)  Space: O(m*b)

BFS
  Termination(D,Q): a path in Q reaches G
  PathAddCondition: add to the back of the queue
  Time: O(b^m)  Space: O(b^m)

Bidirectional Search
  *Requires an explicit description of the goal state
  *Requires rules for backward reasoning
  Initialization(D): D = a second queue (Queue2) containing only the goal
  Termination(D,Q): both queues share a common state
  PathAddCondition: add to the back of the queue (BFS) or the front of the queue (DFS)
  Modification(D): 1. remove the first path from Queue2; 2. create new paths (to all children); 3. reject the new paths with loops; 4. add the new paths to Queue2 (back for BFS, front for DFS)
  Time: O(b^(m/2))  Space: O(b^(m/2)) with BFS, O(m*b) with DFS

Depth-limited DFS
  Initialization(D): dmax = MaxDepth
  Termination(D,Q): a path in Q reaches G
  PathSelectionCondition: extend a path only if its length < dmax
  PathAddCondition: add to the front of the queue
  Time: O(b^dmax)  Space: O(dmax*b)

Uniform Cost Search
  Termination(D,Q): a path in Q reaches G
  (otherwise as the Branch-and-Bound variant below)

Uniform Cost Search with Branch and Bound
  Termination(D,Q): the first path in Q reaches G
  PathAddCondition: add to the back of the queue
  QueueTransformation: sort the queue by accumulated cost after adding the paths
  Time: O(b^m log(b^m))  Space: O(b^m)

Hill Climbing 1 (heuristic-guided search with backtracking)
  Termination(D,Q): a path in Q reaches G
  PathAddCondition: add to the front of the queue
  QueueTransformation: sort the new paths by heuristic before adding them to the front of the queue
  Time: O(b^m)  Space: O(m*b)

Beam Search(w) (incomplete; all paths are removed from the queue and extended)
  Termination(D,Q): a path in Q reaches G
  PathAddCondition: add to the back of the queue
  QueueTransformation: sort the queue by heuristic, keep only the top w paths
  Time: O(w*m*b)  Space: O(w)

Hill Climbing 2 = BeamSearch(1)
  Termination(D,Q): a path in Q reaches G
  PathAddCondition: add to the back of the queue
  QueueTransformation: sort the queue by heuristic, keep only the top path
  Time: O(m*b log b)  Space: O(1)

A* search
  Termination(D,Q): the first path in Q reaches the goal
  PathAddCondition: add to the back of the queue
  QueueTransformation: sort the queue by f = cost + heuristic after adding the paths;
    IF QUEUE contains a path P terminating in node I with cost cost_P, and a path Q terminating in I with cost cost_Q, AND cost_P > cost_Q, THEN delete P
  Time: O(b^m log(b^m))  Space: O(b^m)

F-limited search
  Initialization(D): f-bound (input); f_new = infinity
  Termination(D,Q): a path in Q reaches G
  PathAddCondition: add paths with f(path) < f-bound to the front of the queue
  Modification(D): f_new = min(f_new, new f-values > f-bound)
  ReturnonFail(D): f_new
  Time: depends on the number of f-contours (distinct f-values)  Space: O(b * cost(B) / delta)
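The sorted-queue description of A* can be sketched in Python. This is a hedged, minimal sketch (the names `a_star` and `best_g` are my own); the `best_g` dictionary plays the role of the redundant-path deletion rule, keeping only the cheapest known path to each state:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* over paths. neighbors(n) -> iterable of (child, step_cost);
    h(n) is the heuristic estimate; it must be admissible for optimality."""
    frontier = [(h(start), 0, [start])]            # entries sorted by f = cost + heuristic
    best_g = {start: 0}                            # cheapest known cost per state
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:                           # the first path that reaches the goal
            return path, g
        if g > best_g.get(node, float('inf')):     # redundant path: a cheaper one exists
            continue
        for child, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g              # delete-P rule: keep only the cheapest
                heapq.heappush(frontier, (new_g + h(child), new_g, path + [child]))
    return None, float('inf')
```

With an admissible heuristic, the first path popped that ends in the goal is an optimal one.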
A monotonic heuristic implies that every node that was expanded was reached via the shortest path from the root.
If a heuristic is not monotonic, some nodes may be visited and expanded several times, since a node can be reached via progressively shorter paths from the
source before the shortest path to it is found.
A monotonic (or consistent) heuristic always underestimates the cost to the goal (also called admissible), but the reverse is not true.
A heuristic function obeying the triangle inequality is always monotonic, and hence admissible.
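To make these claims concrete, here is a hedged Python check (the graph, heuristics and function names are invented for illustration): `is_monotonic` tests h(A) <= cost(A,B) + h(B) on every arc, and `is_admissible` compares h against the exact costs-to-goal computed by Dijkstra from the goal:

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Exact cost-to-go h*(n): Dijkstra from the goal over reversed edges."""
    rev = {}
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist, pq = {goal: 0}, [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, c in rev.get(u, []):
            if d + c < dist.get(v, float('inf')):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def is_monotonic(graph, h):
    """h(A) <= cost(A,B) + h(B) for every arc (A,B)."""
    return all(h[u] <= c + h[v] for u, edges in graph.items() for v, c in edges)

def is_admissible(graph, h, goal):
    """h never overestimates the true cost to the goal."""
    hstar = true_costs_to_goal(graph, goal)
    return all(h[n] <= hstar.get(n, float('inf')) for n in graph)
```

In the test below, `h_only_adm` never overestimates but violates monotonicity on the arc S->A, illustrating that admissible does not imply consistent.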
Example Questions on Search and Games
1. Basic search:
In the course, we studied depth-first, breadth-first, iterative
deepening and bi-directional search.
Discuss the properties of these methods: speed (worst-case
time complexity), memory (worst-case space complexity) and completeness.
Which technique would you choose, possibly depending on
characteristics of the problem?
2. Heuristic search:
In the course, we studied hill climbing, beam search, hill climbing 2
and greedy search. Briefly explain these methods. What are their
properties in terms of speed (worst time complexity), memory (worst
space complexity) and completeness.
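For the beam-search part, a hedged Python sketch (the names and example graph are invented): all paths are removed from the queue, extended, and only the w heuristically best survivors are kept, which is exactly why the method is incomplete:

```python
def beam_search(root, is_goal, children, h, w):
    """Keep only the w best paths (by heuristic on the last node) at each step."""
    beam = [[root]]
    while beam:
        new_paths = []
        for path in beam:                        # all paths are removed and extended
            if is_goal(path[-1]):
                return path
            new_paths += [path + [c] for c in children(path[-1]) if c not in path]
        beam = sorted(new_paths, key=lambda p: h(p[-1]))[:w]   # keep the top w paths
    return None                                  # beam emptied: search is incomplete
```

With a misleading heuristic and w = 1 the search commits to a dead end and fails, while w = 2 still finds the goal.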
4. Properties of A*:
a. When is an A*-algorithm "more informed" than another A*-algorithm? Illustrate the
concept with an example.
An A* algorithm with heuristic h1 is more informed than another A* algorithm with heuristic h2 when h1(n) > h2(n) for every node n of the graph except G.
b. Which result holds when an algorithm is more informed than another algorithm? What is
the practical relevance of that?
If an A* algorithm A with heuristic h1 is more informed than another A* algorithm B with heuristic h2, then B expands at least all the nodes that A does; in other words, A explores no more paths to G than B does.
The monotonicity restriction says that h(A) <= cost(A,B) + h(B) for an arc from A to B. This
implies that the estimated total cost f along a path can never decrease as the path grows. NOTE: in the example it is assumed that the path from the root to B has to go
through A.
d. Which result holds under monotonicity? What is the practical relevance of this result?
Under monotonicity, each node expanded by the algorithm is reached via the shortest path to it from
the root. The practical relevance is that the redundant path deletion is not needed any more.
b. Explain how IDA* works. How does the f-bound change through the different
iterations? Illustrate with some example (of your choice).
IDA* performs a series of f-limited depth-first searches. The f-bound is initialized to
the value of the heuristic at the root. In f-limited search, depth-first search is performed and a path is
added to the queue only if its f-value is below the input f-bound. If the goal node is found, the
algorithm terminates; if not, the f-limited search returns the smallest f-value that exceeded the bound,
and the search is repeated with this new f-bound.
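The iteration described above can be sketched as follows (a hedged sketch with invented names; note it keeps paths with f <= bound, a common variant of the strict-< rule):

```python
import math

def ida_star(start, goal, neighbors, h):
    """IDA*: repeated f-limited depth-first searches with a growing f-bound.
    neighbors(n) -> iterable of (child, step_cost); h(n) -> heuristic estimate."""
    bound = h(start)                             # initial f-bound = heuristic at the root

    def f_limited(node, g, path):
        f = g + h(node)
        if f > bound:
            return f                             # fail: report the f-value that broke the bound
        if node == goal:
            return path
        smallest = math.inf                      # smallest f-value above the current bound
        for child, cost in neighbors(node):
            if child in path:                    # reject paths with loops
                continue
            t = f_limited(child, g + cost, path + [child])
            if isinstance(t, list):
                return t                         # a path reached the goal
            smallest = min(smallest, t)
        return smallest

    while True:
        t = f_limited(start, 0, [start])
        if isinstance(t, list):
            return t, bound
        if t == math.inf:
            return None, bound                   # no path exists at any bound
        bound = t                                # retry with the next f-bound
```

With an admissible heuristic the returned path is optimal, and the final bound equals its cost.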
6. Advanced search techniques: SMA* Explain the different new steps that SMA*
adds to A*:
a. what happens when the memory is full,
The node in the queue with the largest f-value is dropped, and its parent node is made to
remember this value.
b. sequential generation of children
The children are added one at a time, checking after each addition whether memory has
become full and a path has to be forgotten.
c. giving up too long paths,
SMA* does not hold paths longer than the available memory; when a path that would be
longer than memory is encountered, the f-value of the last node of the path still in memory is set to
infinity.
d. propagation of f-values.
When all the children of a node have been explored, the f-value of the node can be updated with the
minimum of the f-values of the children; this informs the algorithm about the cost of the best path through
this node.
e. When does it stop?
When G is found or when memory is full and no more paths can be expanded.
f. What are the properties of SMA* (time, memory and optimality)?
Complete when memory can store the shortest solution path.
Memory: bounded by the given input.
Time: same as A* when memory can hold the whole tree; slower otherwise.
7. Mini-Max
a. Discuss the Mini-Max approach to game playing:
In the minimax approach to game playing, all possible configurations of the game after k
moves are enumerated, and the next move is chosen as the one leading to the best possible configuration
after k moves, assuming the other player always works to minimize your value of a given board while
you try to maximize it.
b. Explain how the basic Mini-max search works.
In minimax search, the tree is generated up to depth k; the levels alternate between MAX
and MIN, starting with MAX. The evaluation function is then computed bottom-up: at a MAX level,
the score is the maximum of the children's scores; at a MIN level, it is the minimum of the children's
scores. The score at the root is the best score MAX can guarantee over k moves.
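This bottom-up computation can be sketched in a few lines (game trees encoded as nested lists with leaf scores; this encoding is my own, not the course's):

```python
def minimax(node, maximizing):
    """Bottom-up minimax value of a game tree given as nested lists of leaf scores."""
    if not isinstance(node, list):
        return node                              # leaf: static evaluation score
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)
```

On the perfectly ordered example tree further below, the root MAX value works out to 11.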
c. Explain the extension with Alpha-Beta pruning:
I. how do we get alpha-beta values?
In alpha-beta-pruning-based minimax, the tree is generated incrementally while searching.
The alpha value is the best value for MAX among the descendants seen so far; the beta value is
the best value for MIN among the descendants seen so far. The invariant is that we only need to
keep searching at a node while alpha < beta.
II. how are these values used?
If the alpha value of an ancestor MAX node is greater than or equal to the beta value of a descendant
MIN node, then that descendant's remaining children need not be examined. Similarly, if the beta value
of an ancestor MIN node is less than or equal to the alpha value of a descendant MAX node, then that
descendant's remaining children need not be examined.
III. Illustrate: check the tree below, which is perfectly ordered.
In a perfectly ordered tree, the best value for each node is in its leftmost child. This leads to the
smallest number of board evaluations by alpha-beta pruning.
MAX                          11
MIN                  11                     3
MAX             11        15           3         7
MIN           11   9    15   13      3   1     7   5
Leaves      11 12 9 10 15 16 13 14  3 4 1 2  7 8 5 6
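The pruning on this perfectly ordered tree can be checked with a hedged Python sketch (fail-soft alpha-beta; the tree encoding and names are my own). A counter tracks how many leaves are statically evaluated:

```python
import math

def alphabeta(node, alpha, beta, maximizing, counter):
    """Fail-soft alpha-beta on a nested-list game tree; counter[0] counts leaf evaluations."""
    if not isinstance(node, list):
        counter[0] += 1                          # static evaluation of a leaf
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False, counter))
            if v >= beta:                        # a MIN ancestor will never allow this branch
                break
            alpha = max(alpha, v)
    else:
        v = math.inf
        for child in node:
            v = min(v, alphabeta(child, alpha, beta, True, counter))
            if v <= alpha:                       # a MAX ancestor will never allow this branch
                break
            beta = min(beta, v)
    return v
```

On a perfectly ordered binary tree of depth d, alpha-beta evaluates about 2·b^(d/2) − 1 leaves; here only 7 of the 16.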
g. What is the horizon effect and how can you solve it?
Since we search only to a certain depth, we cannot foresee bad outcomes lying just beyond that depth.
This can be solved by searching below the depth bound in special board configurations, such as a queen
in danger in chess.
Games like chess normally have timed moves; iterative deepening returns the best move at each depth k.
Once time is up, we stop proceeding to the next depth and simply return the best move from the last
depth fully explored.
Version Spaces Notes
b. S invariant
1. None of the members exclude any of the positive examples
2. None of the members are specializations of, or equal to, other members (i.e., no redundant members)
3. Each of the members is a specialization of every member of G
4. None of the members cover any of the negative example
III VS Algorithm
Procedure [Success,Convergence,S,G]=VersionSpaceAlgorithm(E)
Success = False, Convergence=False
VS = [G,S],G=[U],S=[]
For each example e in E
If example is positive
[G,S] = ProcessPositiveExample(G,S,e)
Else
[G,S] = ProcessNegativeExample(G,S,e)
End If
End For
If S != [] and G != []
Success = True
If S=G
Convergence = True
End If
End If
End Procedure
Procedure [S,G]=ProcessPositiveExample(G,S,e)
%G Invariant 4
G = RemoveAllNonCoveringHypotheses(G,e)
%S Invariant 1
S = GetAllMinimalGeneralizations(S,e)
%S Invariant 2
S = RemoveRedundantGeneralHypothesis(S)
%S Invariant 3
S = RemoveAllNonSpecializingGHypotheses(S,G)
End Procedure
Procedure [S,G]=ProcessNegativeExample(G,S,e)
%S Invariant 4
S = RemoveAllCoveringHypotheses(S,e)**
%G Invariant 1
G = GetAllMinimalSpecializations(G,e)
%G Invariant 2
G = RemoveRedundantSpecificHypothesis(G)
%G Invariant 3
G = RemoveAllNonGeneralizingSHypotheses(G,S)
End Procedure
** If this is called S may become empty and cause the algorithm to terminate with failure,
especially in case of conjunctive concepts.
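The procedures above can be sketched in Python for flat attribute vocabularies with a '?' wildcard (no value hierarchies, unlike the spotted/multiple_colors hierarchy in the dog example below; all names are my own). `None` encodes the maximally specific [empty] hypothesis; for conjunctive '?'-hypotheses the minimal generalization is unique, so S stays a singleton and S invariant 2 is trivial:

```python
def covers(h, x):
    """Hypothesis h covers example x ('?' matches any value)."""
    return all(a == '?' or a == b for a, b in zip(h, x))

def more_general_or_equal(h1, h2):
    """h1 is at least as general as h2."""
    return all(a == '?' or a == b for a, b in zip(h1, h2))

def min_generalize(s, x):
    """Minimal generalization of s covering x; None is the empty (bottom) hypothesis."""
    return tuple(x) if s is None else tuple(a if a == b else '?' for a, b in zip(s, x))

def min_specializations(g, x, values):
    """Minimal specializations of g that exclude the negative example x."""
    return [g[:i] + (v,) + g[i + 1:]
            for i, a in enumerate(g) if a == '?'
            for v in values[i] if v != x[i]]

def version_space(examples, values):
    G = [('?',) * len(values)]                   # most general hypothesis
    S = [None]                                   # most specific ("empty") hypothesis
    for x, positive in examples:
        if positive:
            G = [g for g in G if covers(g, x)]                    # GI4: cover positives
            S = [min_generalize(s, x) for s in S]                 # SI1
            S = [s for s in S                                      # SI3
                 if any(more_general_or_equal(g, s) for g in G)]
        else:
            S = [s for s in S if s is None or not covers(s, x)]   # SI4
            G = [h for g in G                                      # GI1: exclude negative
                 for h in (min_specializations(g, x, values) if covers(g, x) else [g])]
            G = [g for g in G                                      # GI3
                 if any(s is None or more_general_or_equal(g, s) for s in S)]
            G = [g for g in G                                      # GI2: no redundancy
                 if not any(h != g and more_general_or_equal(h, g) for h in G)]
    return S, G
```

On a tiny two-attribute example the version space converges to a single hypothesis, S = G.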
Initialization
G = {[?,?,?,?]}
S = {[empty]}
Example 1. [large,friendly,a_lot,spotted_brown_white] +
S = {[empty], [large,friendly,a_lot,spotted_brown_white]}
S = {[large,friendly,a_lot,spotted_brown_white]}
G = {[?,?,?,?]}
Example 2. [small,aggressive,little,black] -
G = {[?,?,?,?], [large,?,?,?], [?,friendly,?,?], [?,?,a_lot,?], [?,?,?,brown]*, [?,?,?,multiple_colors]}
G = {[large,?,?,?], [?,friendly,?,?], [?,?,a_lot,?], [?,?,?,multiple_colors]}
GI3 is not satisfied unless [?,?,?,brown] is removed, as it does not generalize S
GI2 is satisfied, No redundant hypotheses, None of the members are generalizations of other
members
SI4 is satisfied None of the members of S cover the negative example
S = {[large,friendly,a_lot,spotted_brown_white]}
G = { [large,?,?,?],
      [?,?,a_lot,?], [small,?,a_lot,?], [large,?,a_lot,?], [?,aggressive,a_lot,?], [?,?,a_lot,spotted], [?,?,a_lot,single_color],
      [?,?,?,multiple_colors], [?,?,?,spotted], [large,?,?,mixed_red_black], [small,?,?,mixed_red_black], [?,aggressive,?,mixed_red_black], [?,?,little,mixed_red_black]
}
Example 5. [small,friendly,a_lot,spotted_brown_black] +
S = {[large,?,a_lot,spotted], [?,?,a_lot,spotted]}
G={ [large,?,?,?],[?,?,?,spotted]}
GI4 is not satisfied unless [large,?,?,?] is removed, as it does not include all positive examples
G={[?,?,?,spotted]}
S = {[?,?,a_lot,spotted] }
Termination VS
G={[?,?,?,spotted]}
S = {[?,?,a_lot,spotted] }