
LAB REPORT

(Artificial Intelligence)

Department of Mechatronics & Control Engineering

Submitted To:-
Mr. Hamad Hassan
Submitted By:-
M. Zamir ul Hasan (2007-MCT-34)
Yawar Abbas Khan (2007-MCT-34)
Imran Afzal (2007-MCT-41)
Habiba Islam (2007-MCT-52)
University of Engineering & Technology
Lahore

Tree
In computer science, a tree is a widely-used data structure that emulates a hierarchical tree
structure with a set of linked nodes.

Mathematically, it is a tree, more specifically an arborescence: an acyclic connected graph
where each node has zero or more child nodes and at most one parent node. Furthermore,
the children of each node have a specific order.

A node is a structure which may contain a value, a condition, or represent a separate data
structure (which could be a tree of its own). Each node in a tree has zero or more child nodes,
which are below it in the tree (by convention, trees are drawn growing downwards). A node
that has a child is called the child's parent node (or ancestor node, or superior). A node has at
most one parent.

Nodes that do not have any children are called leaf nodes. They are also referred to as terminal
nodes.

A free tree is a tree that is not rooted.

The height of a node is the length of the longest downward path from that node to a leaf. The
height of the root is the height of the tree. The depth of a node is the length of the path to its
root (i.e., its root path). This is commonly needed in the manipulation of the various self-balancing
trees, AVL trees in particular. Conventionally, the height -1 corresponds to a subtree
with no nodes, whereas zero corresponds to a subtree with one node.
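
As a small illustration of this convention, the height of a subtree can be computed recursively. The sketch below assumes a node type with left and right child pointers, like the tree_node struct used in the program later in this report:

int height(const tree_node* p)
{
    if (p == NULL) return -1;              // empty subtree: height -1 by convention
    int hl = height(p->left);              // height of the left subtree
    int hr = height(p->right);             // height of the right subtree
    return 1 + (hl > hr ? hl : hr);        // longest downward path to a leaf
}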

The topmost node in a tree is called the root node. Being the topmost node, the root node will
not have parents. It is the node at which operations on the tree commonly begin (although
some algorithms begin with the leaf nodes and work up ending at the root). All other nodes can
be reached from it by following edges or links. (In the formal definition, each such path is also
unique). In diagrams, it is typically drawn at the top. In some trees, such as heaps, the root
node has special properties. Every node in a tree can be seen as the root node of the subtree
rooted at that node.

Binary Search Tree


A "binary search tree" (BST) or "ordered binary tree" is a type of binary tree where the nodes
are arranged in order: for each node, all elements in its left subtree are less-or-equal to the
node (<=), and all the elements in its right subtree are greater than the node (>).

Binary Search Tree Program


#include <iostream>
#include <cstdlib>

using namespace std;

class BinarySearchTree
{
private:
    struct tree_node
    {
        tree_node* left;
        tree_node* right;
        int data;
    };
    tree_node* root;

public:
    BinarySearchTree()
    {
        root = NULL;
    }

    bool isEmpty() const { return root==NULL; }

    void print_inorder();
    void inorder(tree_node*);
    void print_preorder();
    void preorder(tree_node*);
    void print_postorder();
    void postorder(tree_node*);
    void insert(int);
    void remove(int);
};

// Smaller elements go left,
// larger elements go right
void BinarySearchTree::insert(int d)
{
    tree_node* t = new tree_node;
    tree_node* parent;
    t->data = d;
    t->left = NULL;
    t->right = NULL;
    parent = NULL;

    // is this a new tree?
    if(isEmpty()) root = t;
    else
    {
        // Note: ALL insertions are as leaf nodes
        tree_node* curr;
        curr = root;

        // Find the node's parent
        while(curr)
        {
            parent = curr;
            if(t->data > curr->data) curr = curr->right;
            else curr = curr->left;
        }

        // Attach the new leaf on the correct side of its parent
        if(t->data < parent->data)
            parent->left = t;
        else
            parent->right = t;
    }
}

void BinarySearchTree::remove(int d)
{
    // Locate the element
    bool found = false;
    if(isEmpty())
    {
        cout<<" This Tree is empty! "<<endl;
        return;
    }

    tree_node* curr;
    tree_node* parent;
    curr = root;
    parent = NULL;

    while(curr != NULL)
    {
        if(curr->data == d)
        {
            found = true;
            break;
        }
        else
        {
            parent = curr;
            if(d > curr->data) curr = curr->right;
            else curr = curr->left;
        }
    }

    if(!found)
    {
        cout<<" Data not found! "<<endl;
        return;
    }

    // 3 cases :
    // 1. We're removing a leaf node
    // 2. We're removing a node with a single child
    // 3. We're removing a node with 2 children

    // Node with single child
    if((curr->left == NULL && curr->right != NULL) || (curr->left != NULL && curr->right == NULL))
    {
        if(curr->left == NULL && curr->right != NULL)        // right child only
        {
            if(parent == NULL) root = curr->right;           // removing the root itself
            else if(parent->left == curr) parent->left = curr->right;
            else parent->right = curr->right;
            delete curr;
        }
        else // left child present, no right child
        {
            if(parent == NULL) root = curr->left;            // removing the root itself
            else if(parent->left == curr) parent->left = curr->left;
            else parent->right = curr->left;
            delete curr;
        }
        return;
    }

    // We're looking at a leaf node
    if(curr->left == NULL && curr->right == NULL)
    {
        if(parent == NULL) root = NULL;                      // removing the only node
        else if(parent->left == curr) parent->left = NULL;
        else parent->right = NULL;
        delete curr;
        return;
    }

    // Node with 2 children:
    // replace the node's value with the smallest value in its right subtree
    if(curr->left != NULL && curr->right != NULL)
    {
        tree_node* chkr;
        chkr = curr->right;

        if((chkr->left == NULL) && (chkr->right == NULL))
        {
            // The right child is itself the smallest value in the right subtree
            curr->data = chkr->data;
            delete chkr;
            curr->right = NULL;
        }
        else // right child has children
        {
            // If the node's right child has a left child,
            // move all the way down left to locate the smallest element
            if((curr->right)->left != NULL)
            {
                tree_node* lcurr;
                tree_node* lcurrp;
                lcurrp = curr->right;
                lcurr = (curr->right)->left;
                while(lcurr->left != NULL)
                {
                    lcurrp = lcurr;
                    lcurr = lcurr->left;
                }
                curr->data = lcurr->data;
                lcurrp->left = lcurr->right;   // reattach any right subtree of the successor
                delete lcurr;
            }
            else
            {
                tree_node* tmp;
                tmp = curr->right;
                curr->data = tmp->data;
                curr->right = tmp->right;
                delete tmp;
            }
        }
        return;
    }
}

void BinarySearchTree::print_inorder()
{
    inorder(root);
}

void BinarySearchTree::inorder(tree_node* p)
{
    if(p != NULL)
    {
        if(p->left) inorder(p->left);
        cout<<" "<<p->data<<" ";
        if(p->right) inorder(p->right);
    }
    else return;
}

void BinarySearchTree::print_preorder()
{
    preorder(root);
}

void BinarySearchTree::preorder(tree_node* p)
{
    if(p != NULL)
    {
        cout<<" "<<p->data<<" ";
        if(p->left) preorder(p->left);
        if(p->right) preorder(p->right);
    }
    else return;
}

void BinarySearchTree::print_postorder()
{
    postorder(root);
}

void BinarySearchTree::postorder(tree_node* p)
{
    if(p != NULL)
    {
        if(p->left) postorder(p->left);
        if(p->right) postorder(p->right);
        cout<<" "<<p->data<<" ";
    }
    else return;
}

int main()
{
    BinarySearchTree b;
    int ch, tmp, tmp1;

    while(1)
    {
        cout<<endl<<endl;
        cout<<" Binary Search Tree Operations "<<endl;
        cout<<" ----------------------------- "<<endl;
        cout<<" 1. Insertion/Creation "<<endl;
        cout<<" 2. In-Order Traversal "<<endl;
        cout<<" 3. Pre-Order Traversal "<<endl;
        cout<<" 4. Post-Order Traversal "<<endl;
        cout<<" 5. Removal "<<endl;
        cout<<" 6. Exit "<<endl;
        cout<<" Enter your choice : ";
        cin>>ch;

        switch(ch)
        {
            case 1 : cout<<" Enter Number to be inserted : ";
                     cin>>tmp;
                     b.insert(tmp);
                     break;
            case 2 : cout<<endl;
                     cout<<" In-Order Traversal "<<endl;
                     cout<<" -------------------"<<endl;
                     b.print_inorder();
                     break;
            case 3 : cout<<endl;
                     cout<<" Pre-Order Traversal "<<endl;
                     cout<<" -------------------"<<endl;
                     b.print_preorder();
                     break;
            case 4 : cout<<endl;
                     cout<<" Post-Order Traversal "<<endl;
                     cout<<" --------------------"<<endl;
                     b.print_postorder();
                     break;
            case 5 : cout<<" Enter data to be deleted : ";
                     cin>>tmp1;
                     b.remove(tmp1);
                     break;
            case 6 : return 0;
        }
    }
}

Searching
After formulating our problem we are ready to solve it. This is done by searching through
the state space for a solution; the search is applied to a search tree (or, more generally, a
graph) that is generated from the initial state and the successor function.

The search tree is generated through state expansion, that is, by applying the successor
function to the current state; here "state" refers to a node in the search tree.

Generally, search is about selecting an option and putting the others aside for later
in case the first option does not lead to a solution. The choice of which option to
expand first is determined by the search strategy used.

The structure of a node in the search tree can be as follows:

1. State: the state in the state space to which this node corresponds.
2. Parent-Node: the node in the search graph that generated this node.

3. Action: the action that was applied to the parent to generate this node.

4. Path-Cost: the cost of the path from the initial state to this node.

5. Depth: the number of steps along the path from the initial state.

It is important to make a distinction between nodes and states. A node in the search
tree is a data structure that holds a certain state and some bookkeeping information used to
represent the search tree, whereas a state corresponds to a world configuration. More than one
node can hold the same state; this can happen if two different paths lead to the
same state.
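
A minimal C++ sketch of such a node structure, with the five fields listed above. The State and Action types are placeholders chosen here for illustration; a real problem would supply its own:

typedef int State;     // placeholder: here a state is just an int
typedef int Action;    // placeholder: here an action is just an int

// A node in the search tree: a state plus the bookkeeping listed above.
struct SearchNode
{
    State       state;      // 1. the state this node corresponds to
    SearchNode* parent;     // 2. the node that generated this node (NULL for the root)
    Action      action;     // 3. the action applied to the parent to generate this node
    double      pathCost;   // 4. cost of the path from the initial state to this node
    int         depth;      // 5. number of steps along the path from the initial state
};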

Measuring Problem-Solving Performance


Viewed as a black box, search produces an output that is either failure or a solution.
We will evaluate a search algorithm's performance in four ways:

1. Completeness: is the algorithm guaranteed to find a solution when there is one?

2. Optimality: does the algorithm always find the optimal solution?

3. Time complexity: how long does the search algorithm take to find a solution?

4. Space complexity: how much memory is required to run the search algorithm?

Time and space in complexity analysis are measured in terms of the number of nodes in the
problem graph, using asymptotic notation.

In AI, complexity is expressed by three factors b, d and m:

1. b, the branching factor, is the maximum number of successors of any node.

2. d is the depth of the shallowest goal node.

3. m is the maximum length of any path in the state space.

Uninformed Searches
Breadth First Search
Breadth First Search (also known as BFS) is a search method that expands the nodes of a
graph level by level: every node at the current depth is examined and expanded before any
node at the next depth. As such, BFS does not use a heuristic (an estimate that steers the
search towards more promising options). As nodes are generated they are added to a queue
known as the First In, First Out (FIFO) queue. Nodes that have not yet been explored are
'stored' in a container marked 'open'; once explored they are moved to a container marked
'closed'.
The features of BFS are space and time complexity, completeness, proof of
completeness, and optimality. Space complexity is proportional to the number of nodes at the
deepest level of the search, and time complexity to the total number of nodes and paths
considered during the search. Completeness means the search finds a solution in the graph,
regardless of what kind of graph it is, whenever one exists; the proof of completeness rests on
the fact that a goal at some definite, shallowest depth will eventually be reached. Finally, BFS
is optimal when the graph is not weighted, that is, when every step has unit cost.
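
A compact sketch of this procedure on an explicit graph is given below, with the graph represented as adjacency lists over int vertices. The FIFO queue plays the role of the 'open' container and the set of already-seen nodes the role of 'closed'; the function name bfs and this representation are assumptions of the sketch:

#include <queue>
#include <set>
#include <vector>
using namespace std;

// Breadth First Search over an adjacency-list graph.
// Returns true if 'goal' is reachable from 'start'.
bool bfs(const vector< vector<int> >& adj, int start, int goal)
{
    queue<int> open;            // FIFO queue of nodes waiting to be expanded
    set<int>   closed;          // nodes already seen, so they are never enqueued twice

    open.push(start);
    closed.insert(start);

    while (!open.empty())
    {
        int node = open.front();
        open.pop();

        if (node == goal) return true;

        // Expand: enqueue every successor that has not been seen yet
        for (size_t i = 0; i < adj[node].size(); ++i)
        {
            int next = adj[node][i];
            if (closed.find(next) == closed.end())
            {
                closed.insert(next);
                open.push(next);
            }
        }
    }
    return false;               // goal not reachable
}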

Depth First Search


Depth First Search (also known as DFS) is a search method that burrows deeper and deeper
into the children of a node until a goal is reached (or until it reaches a node with no further
children to try). The search then backtracks to the most recent node that still has unexplored
children, and the process repeats until all the nodes have been searched. Nodes whose
exploration is postponed are put aside for later; keeping them explicitly gives the
non-recursive implementation.
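
A sketch of this non-recursive formulation, using the same adjacency-list representation as the BFS sketch above; the only change is that the FIFO queue is replaced by a LIFO stack, so the most recently generated child is expanded first:

#include <stack>
#include <set>
#include <vector>
using namespace std;

// Depth First Search with an explicit stack instead of recursion.
bool dfs(const vector< vector<int> >& adj, int start, int goal)
{
    stack<int> open;            // LIFO: the most recently generated node is expanded first
    set<int>   visited;

    open.push(start);

    while (!open.empty())
    {
        int node = open.top();
        open.pop();

        if (node == goal) return true;
        if (visited.find(node) != visited.end()) continue;   // already explored
        visited.insert(node);

        // Children are put aside on the stack for later exploration
        for (size_t i = 0; i < adj[node].size(); ++i)
            open.push(adj[node][i]);
    }
    return false;
}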

The most natural output of a DFS is a spanning tree – a tree made up of all the vertices and
some of the edges of an undirected graph. With respect to this tree, the remaining edges fall
into three classes: forward edges, pointing from a node to one of its descendants; back edges,
pointing from a node to one of its ancestors; and cross edges, which do neither.

Iterative Deepening Depth First Search


Iterative deepening depth-first search (IDDFS) is a state space search strategy
in which a depth-limited search is run repeatedly, increasing the depth limit with
each iteration until it reaches d, the depth of the shallowest goal state. On each
iteration, IDDFS visits the nodes in the search tree in the same order as depth-first
search, but the cumulative order in which nodes are first visited, assuming
no pruning, is effectively breadth-first.
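
A sketch of this strategy on the same adjacency-list representation: the inner routine is an ordinary recursive depth-first search cut off at a given depth limit, and the outer routine re-runs it with an increasing limit. The maxDepth bound and the function names dls and iddfs are assumptions of this sketch:

#include <vector>
using namespace std;

// Depth-limited search: ordinary recursive DFS cut off at 'limit'.
bool dls(const vector< vector<int> >& adj, int node, int goal, int limit)
{
    if (node == goal) return true;
    if (limit == 0)   return false;                 // cutoff reached
    for (size_t i = 0; i < adj[node].size(); ++i)
        if (dls(adj, adj[node][i], goal, limit - 1))
            return true;
    return false;
}

// Iterative deepening: repeat the depth-limited search with an
// increasing limit until the goal is found (or maxDepth is reached).
bool iddfs(const vector< vector<int> >& adj, int start, int goal, int maxDepth)
{
    for (int limit = 0; limit <= maxDepth; ++limit)
        if (dls(adj, start, goal, limit))
            return true;
    return false;
}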

IDDFS combines depth-first search's space-efficiency and breadth-first search's
completeness (when the branching factor is finite). It is optimal when the path cost
is a non-decreasing function of the depth of the node.

The space complexity of IDDFS is O(bd), where b is the branching factor and d is the
depth of the shallowest goal. Since iterative deepening visits states multiple times, it
may seem wasteful, but it turns out not to be very costly: in a tree most of the
nodes are in the bottom level, so it does not matter much if the upper levels are
visited multiple times.[1]

The main advantage of IDDFS in game tree searching is that the earlier searches
tend to improve the commonly used heuristics, such as the killer heuristic and alpha-beta
pruning, so that a more accurate estimate of the score of various nodes at the
final depth search can occur, and the search completes more quickly since it is done
in a better order. For example, alpha-beta pruning is most efficient if it searches the
best moves first.

A second advantage is the responsiveness of the algorithm. Because early iterations
use small values for d, they execute extremely quickly. This allows the algorithm to
supply early indications of the result almost immediately, followed by refinements
as d increases. When used in an interactive setting, such as in a chess-playing
program, this facility allows the program to play at any time with the current best
move found in the search it has completed so far. This is not possible with a
traditional depth-first search.

The time complexity of IDDFS in well-balanced trees works out to be the same as
depth-first search: O(b^d).

In an iterative deepening search, the nodes on the bottom level are expanded once,
those on the next-to-bottom level are expanded twice, and so on, up to the root of
the search tree, which is expanded d + 1 times.[1] So the total number of expansions
in an iterative deepening search is

(d + 1)·1 + d·b + (d − 1)·b^2 + ... + 2·b^(d−1) + 1·b^d

For b = 10 and d = 5 the number is

6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

Altogether, an iterative deepening search from depth 1 to
depth d expands only about 11% more nodes than a single breadth-first
or depth-limited search to depth d, when b = 10. The higher the
branching factor, the lower the overhead of repeatedly expanded
states, but even when the branching factor is 2, iterative deepening
search only takes about twice as long as a complete breadth-first
search. This means that the time complexity of iterative deepening is
still O(b^d), and the space complexity is O(bd). In general, iterative
deepening is the preferred search method when there is a large search
space and the depth of the solution is not known.

Bidirectional Search
Bidirectional search is a graph search algorithm that finds a shortest path from an
initial vertex to a goal vertex in a directed graph. It runs two simultaneous searches:
one forward from the initial state, and one backward from the goal, stopping when
the two meet in the middle. The reason for this approach is that in many cases it is
faster: for instance, in a simplified model of search problem complexity in which
both searches expand a tree with branching factor b, and the distance from start to
goal is d, each of the two searches has complexity O(b^(d/2)) (in Big O notation), and
the sum of these two search times is much less than the O(b^d) complexity that
would result from a single search from the beginning to the goal.
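
The "meet in the middle" idea can be sketched as two breadth-first searches whose frontiers are grown alternately until they intersect. The sketch below assumes an unweighted, undirected graph in adjacency-list form, so stepping backwards from the goal can use the same edges as stepping forwards; a directed graph would need a reversed adjacency list, as the next paragraph discusses:

#include <queue>
#include <set>
#include <vector>
using namespace std;

// One breadth-first expansion step for one of the two searches.
// Returns true if a newly reached node is already in the other search's visited set.
static bool expandFrontier(const vector< vector<int> >& adj,
                           queue<int>& frontier, set<int>& visited,
                           const set<int>& otherVisited)
{
    int levelSize = frontier.size();
    while (levelSize-- > 0)
    {
        int node = frontier.front();
        frontier.pop();
        for (size_t i = 0; i < adj[node].size(); ++i)
        {
            int next = adj[node][i];
            if (visited.insert(next).second)                 // not seen by this search yet
            {
                if (otherVisited.count(next)) return true;   // the two frontiers meet
                frontier.push(next);
            }
        }
    }
    return false;
}

// Bidirectional BFS: returns true if start and goal are connected.
bool bidirectionalSearch(const vector< vector<int> >& adj, int start, int goal)
{
    if (start == goal) return true;

    queue<int> fq, bq;
    set<int> fvis, bvis;
    fq.push(start);  fvis.insert(start);
    bq.push(goal);   bvis.insert(goal);

    while (!fq.empty() && !bq.empty())
    {
        // Extend the smaller frontier first (cheaper to expand)
        if (fq.size() <= bq.size())
        {
            if (expandFrontier(adj, fq, fvis, bvis)) return true;
        }
        else
        {
            if (expandFrontier(adj, bq, bvis, fvis)) return true;
        }
    }
    return false;
}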

This speedup does not come without a price: a bidirectional search algorithm must
include additional logic to decide which search tree to extend at each step,
increasing the difficulty of implementation. The goal state must be known (rather
than having a general goal criterion that may be met by many different states), the
algorithm must be able to step backwards from goal to initial state (which may not
be possible without extra work), and the algorithm needs an efficient way to find
the intersection of the two search trees. Additionally, the branching factor of
backwards steps may differ from that for forward steps. The additional complexity
of performing a bidirectional search means that the A* search algorithm is often a
better choice if we have a reasonable heuristic.

As in A* search, bi-directional search can be guided by a heuristic estimate of the
remaining distance to the goal (in the forward tree) or from the start (in the
backward tree). An admissible heuristic will also produce a shortest solution, as was
proven originally for A*.

A node to be expanded is selected from the frontier that has the smaller number of
open nodes, choosing its most promising node. Termination happens when such a node
is also present in the other frontier. A descendant node's f-value must take into
account the g-values of all open nodes at the other frontier, hence node expansion
is more costly than for A*. The collection of nodes to be visited can be smaller, as
outlined above; thus one trades less space for more computation. The 1977
reference showed that the bi-directional algorithm found solutions where A* had run
out of space. Shorter paths were also found when inadmissible heuristics were
used. These tests were done on the 15-puzzle used by Ira Pohl.

Comparison Of Uninformed Searches

The properties discussed in the preceding sections can be summarised as follows
(b: branching factor, d: depth of the shallowest goal, m: maximum depth of the state space):

Strategy              Complete?             Optimal?                 Time         Space
Breadth First         Yes (b finite)        Yes (unit step costs)    O(b^d)       O(b^d)
Depth First           No (infinite paths)   No                       O(b^m)       O(bm)
Iterative Deepening   Yes (b finite)        Yes (unit step costs)    O(b^d)       O(bd)
Bidirectional         Yes (b finite)        Yes (unit step costs)    O(b^(d/2))   O(b^(d/2))

(Reference: Artificial Intelligence: A Modern Approach (2nd Edition) by
Stuart Russell and Peter Norvig)

Heuristic Searches

A* Search
A* uses a best-first search and finds the least-cost path from a given initial node to
one goal node (out of one or more possible goals).

It uses a distance-plus-cost heuristic function (usually denoted f(x)) to determine
the order in which the search visits nodes in the tree. The distance-plus-cost
heuristic is a sum of two functions:

- the path-cost function, which is the cost from the starting node to the current
node (usually denoted g(x)), and
- an admissible "heuristic estimate" of the distance to the goal (usually
denoted h(x)).

The h(x) part of the f(x) function must be an admissible heuristic; that is, it must not
overestimate the distance to the goal. Thus, for an application
like routing, h(x) might represent the straight-line distance to the goal, since that is
physically the smallest possible distance between any two points or nodes.
If the heuristic h satisfies the additional condition h(x) <= d(x, y) + h(y) for every
edge (x, y) of the graph (where d denotes the length of that edge), then h is
called monotone, or consistent. In such a case, A* can be implemented more
efficiently – roughly speaking, no node needs to be processed more than once –
and A* is equivalent to running Dijkstra's algorithm with
the reduced cost d'(x, y) := d(x, y) − h(x) + h(y).

As A* traverses the graph, it follows a path of the lowest known cost, keeping a
sorted priority queue of alternate path segments along the way. If, at any point, a
segment of the path being traversed has a higher cost than another encountered
path segment, it abandons the higher-cost path segment and traverses the lower-cost
path segment instead. This process continues until the goal is reached.
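
A sketch of this process on a weighted graph: nodes wait in a priority queue ordered by f(x) = g(x) + h(x), and whenever a cheaper path to a node is found the older, higher-cost entry is simply skipped when it is popped. The adjacency lists of (neighbour, edge cost) pairs and the per-node vector h of heuristic values are assumptions of this sketch:

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

// A* search: returns the least-cost path length from start to goal,
// or -1.0 if the goal is unreachable. h[v] is the heuristic estimate for vertex v.
double astar(const vector< vector< pair<int,double> > >& adj,
             const vector<double>& h, int start, int goal)
{
    const double INF = numeric_limits<double>::infinity();
    vector<double> g(adj.size(), INF);              // best known path cost g(x)

    // Min-priority queue ordered by f(x) = g(x) + h(x)
    typedef pair<double,int> Entry;                 // (f-value, node)
    priority_queue<Entry, vector<Entry>, greater<Entry> > open;

    g[start] = 0.0;
    open.push(Entry(h[start], start));

    while (!open.empty())
    {
        double f = open.top().first;
        int node = open.top().second;
        open.pop();

        if (node == goal) return g[goal];           // least-cost path found
        if (f > g[node] + h[node]) continue;        // stale, higher-cost entry: skip it

        for (size_t i = 0; i < adj[node].size(); ++i)
        {
            int next    = adj[node][i].first;
            double cost = adj[node][i].second;
            if (g[node] + cost < g[next])           // found a cheaper path to 'next'
            {
                g[next] = g[node] + cost;
                open.push(Entry(g[next] + h[next], next));
            }
        }
    }
    return -1.0;                                    // goal not reachable
}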
