Binary search is one of the fundamental algorithms in computer science. In order to explore it,
we'll first build up a theoretical backbone, then use that to implement the algorithm properly and
avoid those nasty off-by-one errors everyone's been talking about.
For example, consider the following sequence of integers sorted in ascending order and say we
are looking for the number 55:
0 5 13 19 22 41 55 68 72 81 98
We are interested in the location of the target value in the sequence so we will represent the
search space as indices into the sequence. Initially, the search space contains indices 1 through
11. Since the search space is really an interval, it suffices to store just two numbers, the low and
high indices. As described above, we now choose the median value, which is the value at index 6
(the midpoint between 1 and 11): this value is 41 and it is smaller than the target value. From this
we conclude not only that the element at index 6 is not the target value, but also that no element
at indices between 1 and 5 can be the target value, because all elements at these indices are
smaller than 41, which is smaller than the target value. This brings the search space down to
indices 7 through 11:
55 68 72 81 98
Proceeding in a similar fashion, we chop off the second half of the search space and are left with:
55 68
Depending on how we choose the median of an even number of elements we will either find 55
in the next step or chop off 68 to get a search space of only one element. Either way, we
conclude that the index where the target value is located is 7.
If the target value was not present in the sequence, binary search would empty the search space
entirely. This condition is easy to check and handle. Here is some code to go with the
description:
binary_search(A, target):
   lo = 1, hi = size(A)
   while lo <= hi:
      mid = lo + (hi-lo)/2
      if A[mid] == target:
         return mid
      else if A[mid] < target:
         lo = mid+1
      else:
         hi = mid-1
   return not_found      // the search space became empty: target is not in A
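The pseudocode above translates almost line for line into C++. Here is a minimal sketch; the function name and the 0-based indexing are ours (the pseudocode is 1-based):

```cpp
#include <cassert>
#include <vector>

// Classic binary search on a sorted array; returns the index of target,
// or -1 if it is absent. Uses 0-based indices.
int binary_search_idx(const std::vector<int>& A, int target) {
    int lo = 0, hi = (int)A.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of lo + hi
        if (A[mid] == target)
            return mid;
        else if (A[mid] < target)
            lo = mid + 1;               // target can only be to the right
        else
            hi = mid - 1;               // target can only be to the left
    }
    return -1;  // search space became empty: target not present
}
```

On the example sequence, 55 sits at 0-based index 6, which corresponds to the 1-based index 7 used in the text.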
The logarithm is an awfully slowly growing function. In case you're not aware of just how
efficient binary search is, consider looking up a name in a phone book containing a million
names. Binary search lets you systematically find any given name using at most 21 comparisons.
If you could manage a list containing all the people in the world sorted by name, you could find
any person in less than 35 steps. This may not seem feasible or useful at the moment, but we'll
soon fix that.
Note that this assumes that we have random access to the sequence. Trying to use binary search
on a container such as a linked list makes little sense and it is better to use a plain linear search
instead.
You're best off using library functions whenever possible, since, as you'll see, implementing
binary search on your own can be tricky.
Consider a predicate p defined over some ordered set S (the search space). The search space
consists of candidate solutions to the problem. In this article, a predicate is a function which
returns a boolean value, true or false (we'll also use yes and no as boolean values). We use the
predicate to verify if a candidate solution is legal (does not violate some constraint) according to
the definition of the problem.
What we can call the main theorem states that binary search can be used if and only if for all x
in S, p(x) implies p(y) for all y > x. This property is what we use when we discard the second
half of the search space. It is equivalent to saying that ¬p(x) implies ¬p(y) for all y < x (the ¬
symbol denotes the logical not operator), which is what we use when we discard the first half
of the search space. The theorem can easily be proven, although I'll omit the proof here to reduce
clutter.
Behind the cryptic mathematics I am really stating that if you had a yes or no question (the
predicate), getting a yes answer for some potential solution x means that you'd also get a yes
answer for any element after x. Similarly, if you got a no answer, you'd get a no answer for any
element before x. As a consequence, if you were to ask the question for each element in the
search space (in order), you would get a series of no answers followed by a series of yes answers.
Careful readers may note that binary search can also be used when a predicate yields a series of
yes answers followed by a series of no answers. This is true and complementing that predicate
will satisfy the original condition. For simplicity we'll deal only with predicates described in the
theorem.
If the condition in the main theorem is satisfied, we can use binary search to find the smallest
legal solution, i.e. the smallest x for which p(x) is true. The first part of devising a solution based
on binary search is designing a predicate which can be evaluated and for which it makes sense to
use binary search: we need to choose what the algorithm should find. We can have it find either
the first x for which p(x) is true or the last x for which p(x) is false. The difference between the
two is only slight, as you will see, but it is necessary to settle on one. For starters, let us seek the
first yes answer (first option).
The second part is proving that binary search can be applied to the predicate. This is where we
use the main theorem, verifying that the conditions laid out in the theorem are satisfied. The
proof doesn't need to be overly mathematical, you just need to convince yourself that p(x)
implies p(y) for all y > x or that ¬p(x) implies ¬p(y) for all y < x. This can often be done by
applying common sense in a sentence or two.
When the domain of the predicate is the integers, it suffices to prove that p(x) implies p(x+1) or
that ¬p(x) implies ¬p(x-1); the rest then follows by induction.
These two parts are most often interleaved: when we think a problem can be solved by binary
search, we aim to design the predicate so that it satisfies the condition in the main theorem.
One might wonder why we choose to use this abstraction rather than the simpler-looking
algorithm weve used so far. This is because many problems cant be modeled as searching for a
particular value, but its possible to define and evaluate a predicate such as Is there an
assignment which costs x or less?, when were looking for some sort of assignment with the
lowest cost. For example, the usual traveling salesman problem (TSP) looks for the cheapest
round-trip which visits every city exactly once. Here, the target value is not defined as such, but
we can define a predicate Is there a round-trip which costs x or less? and then apply binary
search to find the smallest x which satisfies the predicate. This is called reducing the original
problem to a decision (yes/no) problem. Unfortunately, we know of no way of efficiently
evaluating this particular predicate and so the TSP problem isnt easily solved by binary search,
but many optimization problems are.
Let us now convert the simple binary search on sorted arrays described in the introduction to this
abstract definition. First, let's rephrase the problem as: "Given an array A and a target value,
return the index of the first element in A equal to or greater than the target value." Incidentally,
this is more or less how lower_bound behaves in C++.
We want to find the index of the target value, thus any index into the array is a candidate
solution. The search space S is the set of all candidate solutions, thus an interval containing all
indices. Consider the predicate "Is A[x] greater than or equal to the target value?". If we were
to find the first x for which the predicate says yes, we'd get exactly what we decided we were
looking for in the previous paragraph.
The condition in the main theorem is satisfied because the array is sorted in ascending order: if
A[x] is greater than or equal to the target value, all elements after it are surely also greater than
or equal to the target value.
0  5  13 19 22 41 55 68 72 81 98
1  2  3  4  5  6  7  8  9  10 11
If we apply our predicate (with a target value of 55) to this sequence, we get:
no no no no no no yes yes yes yes yes
This is a series of no answers followed by a series of yes answers, as we were expecting. Notice
how index 7 (where the target value is located) is the first for which the predicate yields yes, so
this is what our binary search will find.
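As a sketch of this predicate-based view, the search for the first yes can be written in C++ and checked against std::lower_bound. The function name is ours, and it returns a 0-based index (6 rather than the 1-based 7 used in the text):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// First 0-based index at which the predicate "A[x] >= target" is true,
// i.e. the behavior of std::lower_bound, written as a predicate search.
int first_yes(const std::vector<int>& A, int target) {
    int lo = 0, hi = (int)A.size();   // hi == size(A) encodes "no yes at all"
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (A[mid] >= target)
            hi = mid;       // mid answered yes: keep it in the search space
        else
            lo = mid + 1;   // mid answered no: discard it and all before it
    }
    return lo;
}
```

If the target is larger than every element, the function returns size(A), just as lower_bound returns the end iterator.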
Another thing you need to be careful with is how high to set the bounds. By "high" I really mean
"wide", since there are two bounds to worry about. Every so often it happens that a coder
concludes during coding that the bounds he or she set are wide enough, only to find a
counterexample during intermission (when it's too late). Unfortunately, little helpful advice can
be given here other than to always double- and triple-check your bounds! Also, since execution
time increases logarithmically with the bounds, you can always set them higher, as long as it
doesn't break the evaluation of the predicate. Keep your eye out for overflow errors all around,
especially in calculating the median.
Now we finally get to the code which implements binary search as described in this and the
previous section:
binary_search(lo, hi, p):
   while lo < hi:
      mid = lo + (hi-lo)/2
      if p(mid) == true:
         hi = mid
      else:
         lo = mid+1
   if p(lo) == false:
      complain              // p(x) is false for all x in S!
   return lo                // lo is the least x for which p(x) is true
If we wanted to find the last x for which p(x) is false, we would devise (using a similar rationale
as above) something like:
binary_search(lo, hi, p):
   while lo < hi:
      mid = lo + (hi-lo)/2
      if p(mid) == true:
         hi = mid-1
      else:
         lo = mid
   if p(lo) == true:
      complain              // p(x) is true for all x in S!
   return lo                // lo is the greatest x for which p(x) is false
The code will get stuck in a loop. It will always select the first element as mid, but then will not
move the lower bound because it wants to keep the no in its search space. The solution is to
change mid = lo + (hi-lo)/2 to mid = lo + (hi-lo+1)/2, i.e. so that it rounds up instead of down.
There are other ways of getting around the problem, but this one is possibly the cleanest. Just
remember to always test your code on a two-element set where the predicate is false for the first
element and true for the second.
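A minimal C++ sketch of that last-no variant, with the rounding-up midpoint, tested exactly as suggested on a two-element set (the function name and the boolean-vector predicate are ours):

```cpp
#include <cassert>
#include <vector>

// Last index in lo..hi for which the predicate is false. Note that mid
// rounds *up* ((hi-lo+1)/2), as the text prescribes; rounding down would
// get stuck on a two-element range whose first answer is no.
int last_no(const std::vector<bool>& p, int lo, int hi) {
    while (lo < hi) {
        int mid = lo + (hi - lo + 1) / 2;  // round up
        if (p[mid])
            hi = mid - 1;   // mid is yes: the last no is strictly before mid
        else
            lo = mid;       // mid is no: keep it in the search space
    }
    return lo;              // caller should check p[lo] is actually false
}
```

With p = {false, true}, rounding down would pick mid = 0, see a no, set lo = 0 and loop forever; rounding up picks mid = 1 and terminates immediately.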
You may also wonder why mid is calculated using mid = lo + (hi-lo)/2 instead of the usual
mid = (lo+hi)/2. This is to avoid another potential rounding bug: in the first case, we want the
division to always round down, towards the lower bound. But division truncates, so when lo+hi
would be negative, it would start rounding towards the higher bound. Coding the calculation this
way ensures that the number divided is always positive and hence always rounds as we want it
to. Although the bug doesn't surface when the search space consists only of positive integers or
real numbers, I've decided to code it this way throughout the article for consistency.
Real numbers
Binary search can also be used on monotonic functions whose domain is the set of real numbers.
Implementing binary search on reals is usually easier than on integers, because you don't need to
watch out for how to move the bounds:
binary_search(lo, hi, p):
   while we choose not to terminate:
      mid = lo + (hi-lo)/2
      if p(mid) == true:
         hi = mid
      else:
         lo = mid
   return lo                // lo is close to the border between no and yes
If you need to do as few iterations as possible, you can terminate when the interval gets small,
but try to do a relative comparison of the bounds, not just an absolute one. The reason for this is
that doubles can never give you more than about 15 decimal digits of precision, so if the search
space contains large numbers (say on the order of billions), you can never get an absolute
difference of less than 10^-7.
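As an illustration, here is a hypothetical bisection on the reals that approximates sqrt(c) by searching for the smallest x with x*x >= c. A fixed iteration count sidesteps the termination question entirely, which is often the simplest choice:

```cpp
#include <cassert>
#include <cmath>

// Binary search on the reals: find x with x*x >= c, i.e. x close to sqrt(c).
// 100 iterations halve the interval far beyond double precision, so there
// are no bound-moving subtleties and no epsilon comparisons in the loop.
double sqrt_bisect(double c) {
    double lo = 0, hi = (c > 1 ? c : 1);   // ensures sqrt(c) lies in [lo, hi]
    for (int i = 0; i < 100; ++i) {
        double mid = lo + (hi - lo) / 2;
        if (mid * mid >= c)
            hi = mid;   // predicate true: the answer is at or below mid
        else
            lo = mid;   // predicate false: the answer is above mid
    }
    return lo;
}
```

Note the relative (not absolute) error check in the large-input test below, for exactly the precision reason described above.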
Example
At this point I will show how all this talk can be used to solve a TopCoder problem. For this I
have chosen a moderately difficult problem, FairWorkload, which was the division 1 level 2
problem in SRM 169.
In the problem, a number of workers need to examine a number of filing cabinets. The cabinets
are not all of the same size and we are told for each cabinet how many folders it contains. We are
asked to find an assignment such that each worker gets a sequential series of cabinets to go
through and that it minimizes the maximum amount of folders that a worker would have to look
through.
After getting familiar with the problem, a touch of creativity is required. Imagine that we have an
unlimited number of workers at our disposal. The crucial observation is that, for some number
MAX, we can calculate the minimum number of workers needed so that each worker has to
examine no more than MAX folders (if this is possible). Let's see how we'd do that. Some
worker needs to examine the first cabinet so we assign any worker to it. But, since the cabinets
must be assigned in sequential order (a worker cannot examine cabinets 1 and 3 without
examining 2 as well), it's always optimal to assign him to the second cabinet as well, if this does
not take him over the limit we introduced (MAX). If it would take him over the limit, we
conclude that his work is done and assign a new worker to the second cabinet. We proceed in a
similar manner until all the cabinets have been assigned and assert that we've used the minimum
number of workers possible, with the artificial limit we introduced. Note here that the number of
workers is inversely proportional to MAX: the higher we set our limit, the fewer workers we will
need.
Now, if you go back and carefully examine what we're asked for in the problem statement, you
can see that we are really asked for the smallest MAX such that the number of workers required is
less than or equal to the number of workers available. With that in mind, we're almost done; we
just need to connect the dots and see how all of this fits in the frame we've laid out for solving
problems using binary search.
With the problem rephrased to fit our needs better, we can now examine the predicate "Can the
workload be spread so that each worker has to examine no more than x folders, with the limited
number of workers available?" We can use the described greedy algorithm to efficiently evaluate
this predicate for any x. This concludes the first part of building a binary search solution; we now
just have to prove that the condition in the main theorem is satisfied. But observe that increasing
x actually relaxes the limit on the maximum workload, so we cannot need more workers, only the
same number or fewer. Thus, if the predicate says yes for some x, it will also say yes for all
larger x.
int lo = largestCabinet, hi = sumOfAllFolders;
while ( lo < hi ) {
   int x = lo + (hi-lo)/2;
   // requiredWorkers(x): the greedy count of workers needed with limit x,
   // as described above
   if ( requiredWorkers(x) <= workers )
      hi = x;
   else
      lo = x+1;
}
return lo;
Note the carefully chosen lower and upper bounds: you could replace the upper bound with any
sufficiently large integer, but the lower bound must not be less than the largest cabinet, to
avoid the situation where a single cabinet would be too large for any worker, a case which would
not be correctly handled by the predicate. An alternative would be to set the lower bound to zero,
then handle too small x's as a special case in the predicate.
To verify that the solution doesn't lock up, I used a small no/yes example with folders={1,1} and
workers=1.
The overall complexity of the solution is O(n log SIZE), where SIZE is the size of the search
space. This is very fast.
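Putting the greedy predicate and the binary search together, a self-contained C++ sketch of the solution might look as follows (the function names and the use of vectors are ours, not the exact TopCoder signature):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Greedy evaluation of the predicate: how many workers are needed so that
// no worker examines more than limit folders? Assumes limit is at least
// as large as the biggest cabinet.
int workersNeeded(const std::vector<int>& folders, int limit) {
    int required = 1, load = 0;
    for (int f : folders) {
        if (load + f <= limit) {
            load += f;          // the current worker takes this cabinet too
        } else {
            ++required;         // start a new worker on this cabinet
            load = f;
        }
    }
    return required;
}

// Binary search for the smallest limit for which the predicate says yes.
int getMostWork(const std::vector<int>& folders, int workers) {
    int lo = *std::max_element(folders.begin(), folders.end());
    int hi = std::accumulate(folders.begin(), folders.end(), 0);
    while (lo < hi) {
        int x = lo + (hi - lo) / 2;
        if (workersNeeded(folders, x) <= workers)
            hi = x;             // x is feasible: look for a smaller limit
        else
            lo = x + 1;         // infeasible: the answer is above x
    }
    return lo;
}
```

The test below uses a FairWorkload-style instance ({10,...,90} with 3 workers, answer 170) and the two-element lock-up check from the text ({1,1} with 1 worker, answer 2).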
As you see, we used a greedy algorithm to evaluate the predicate. In other problems, evaluating
the predicate can come down to anything from a simple math expression to finding a maximum
cardinality matching in a bipartite graph.
Conclusion
If you've gotten this far without giving up, you should be ready to solve anything that can be
solved with binary search. Try to keep a few things in mind:
- Design a predicate which can be efficiently evaluated and so that binary search can be applied
- Decide on what you're looking for and code so that the search space always contains that (if it exists)
- If the search space consists only of integers, test your algorithm on a two-element set to be sure it doesn't lock up
- Verify that the lower and upper bounds are not overly constrained: it's usually better to relax them as long as it doesn't break the predicate
Generally, to find a value in an unsorted array, we have to look through the elements of the array
one by one, until the searched value is found. If the searched value is absent from the array, we
go through all the elements. On average, the complexity of such an algorithm is proportional to
the length of the array.
The situation changes significantly when the array is sorted. If we know it, random access
capability can be utilized very efficiently to find the searched value quickly. The cost of the
search reduces to the binary logarithm of the array length. For reference, log2(1,000,000) ≈ 20. It
means that, in the worst case, the algorithm makes about 20 steps to find a value in a sorted array
of a million elements, or to determine that it is not present in the array.
Algorithm
The algorithm is quite simple, and can be implemented either recursively or iteratively:
1. Get the middle element of the current subarray.
2. If the middle element equals the searched value, the algorithm stops.
3. Otherwise, two cases are possible: if the middle element is greater than the searched value, continue the search in the first half of the subarray; if it is smaller, continue in the second half.
Now we should define when the iterations stop. The first case is when the searched element is
found. The second one is when the subarray has no elements; in this case, we can conclude that
the searched value is not present in the array.
Examples
Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
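To trace the two examples, here is a small C++ sketch that records every probed middle element (the tracing helper is our own addition):

```cpp
#include <cassert>
#include <vector>

// Runs the iterative binary search and records every probed (middle)
// element, so the examples above can be followed step by step.
// Returns the found index or -1; 'probes' receives the inspected values.
int binarySearchTrace(const std::vector<int>& arr, int value,
                      std::vector<int>& probes) {
    int left = 0, right = (int)arr.size() - 1;
    while (left <= right) {
        int middle = left + (right - left) / 2;
        probes.push_back(arr[middle]);
        if (arr[middle] == value)
            return middle;
        else if (arr[middle] > value)
            right = middle - 1;   // the value can only be in the left part
        else
            left = middle + 1;    // the value can only be in the right part
    }
    return -1;                    // subarray became empty: value is absent
}
```

For Example 1 the search inspects 19, 5 and then 6; for Example 2 it inspects 19, 78, 102 and 114 before the subarray empties.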
Complexity analysis
A huge advantage of this algorithm is that its complexity depends on the array size
logarithmically in the worst case. In practice it means that the algorithm will do at most log2(n)
iterations, which is a very small number even for big arrays. It can be proved very easily: on
every step the size of the searched part is reduced by half, and the algorithm stops when there are
no elements to search in. Therefore, solving the following inequality in whole numbers:
n / 2^iterations > 0
results in
iterations <= log2(n).
Java
/**
 * searches for a value in a sorted array
 *
 * @param array
 *            array to search in
 * @param value
 *            searched value
 * @param left
 *            index of left boundary
 * @param right
 *            index of right boundary
 * @return position of searched value, if it is present in the array, or -1, if
 *         it is absent
 */
int binarySearch(int[] array, int value, int left, int right) {
      if (left > right)
            return -1;
      int middle = left + (right - left) / 2;
      if (array[middle] == value)
            return middle;
      else if (array[middle] > value)
            return binarySearch(array, value, left, middle - 1);
      else
            return binarySearch(array, value, middle + 1, right);
}
C++
/*
 * searches for a value in a sorted array
 *   arr - array to search in
 *   value - searched value
 *   left, right - boundaries of the searched subarray
 * returns the position of the value, if it is present in the array,
 * or -1, if it is absent
 */
int binarySearch(int arr[], int value, int left, int right) {
      while (left <= right) {
            int middle = left + (right - left) / 2;
            if (arr[middle] == value)
                  return middle;
            else if (arr[middle] > value)
                  right = middle - 1;
            else
                  left = middle + 1;
      }
      return -1;
}
Euclid's Algorithm
Remainder function:
void remainder(int a, int b)    // prints the GCD of a and b
{
    int x, remainder;
    if (a > b)                  // make sure b holds the larger value
    {
        x = a;
        a = b;
        b = x;
    }
    remainder = a;              // start with the smaller value
    while (remainder != 0)
    {
        x = remainder;
        remainder = b % remainder;
        b = x;
    }
    cout << "GCD is " << b << endl;   // b holds the last non-zero remainder
}
Recursive version:
int gcd(int a, int b) {
    if (b == 0)
        return a;
    return gcd(b, a % b);
}
Iterative version:
int gcd(int a, int b) {
    int t;
    while (b != 0) {
        t = b;
        b = a % b;
        a = t;
    }
    return a;
}
Given integers a, b and c, can we write c in the form
c = ax + by
for integers x
and y
? If so, is there more than one solution, and what are they? Before answering this, let us answer a
seemingly unrelated question:
How do you find the greatest common divisor (gcd) of two given integers a and b?
One way is to list all the divisors of a
and b, and look for the greatest one they have in common. However, this requires a and b
to be factorized, which is difficult for large numbers. A better idea: if d
divides a and d divides b, then d divides their sum. Similarly, d must also divide their difference, a - b,
where a is the larger of the two. But this means we've shrunk the original problem to a smaller size: we
just need to find gcd(b, a - b)
. We can speed this up by doing something that most people learn in primary school: division and
remainder. We give an example and leave the proof of the general case to the reader.
33 = 1·27 + 6
Thus gcd(33, 27) = gcd(27, 6).
27 = 4·6 + 3
Thus gcd(27, 6) = gcd(6, 3). Lastly,
6 = 2·3 + 0
so gcd(6, 3) = gcd(3, 0) = 3, and therefore gcd(33, 27) = 3.
This algorithm does not require factorizing numbers, and is fast. We obtain a crude bound for the
number of steps required by observing that if we divide a
by b to get a = bq + r, and r > b/2, then in the next step we get a remainder r' < b/2
. Thus every two steps, the numbers shrink by at least one bit.
Now suppose we want to find integers m and n such that
3 = 33m + 27n.
First rearrange all the equations so that the remainders are the subjects:
6 = 33 - 1·27
3 = 27 - 4·6
Then we start from the last equation, and substitute the next equation into it:
3 = 27 - 4·(33 - 1·27) = (-4)·33 + 5·27
, so m = -4 and n = 5. (If there were more equations, we would repeat this procedure, until we have used
all the equations and found m and n
.)
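This back-substitution can be mechanized. The following recursive sketch (the function name is ours) returns gcd(a,b) together with coefficients m and n; the recursion performs exactly the substitution above, one equation at a time:

```cpp
#include <cassert>

// Extended Euclid: returns g = gcd(a, b) and fills m, n with g = m*a + n*b.
// If the recursive call gives g = m1*b + n1*(a % b), substituting
// a % b = a - (a/b)*b yields g = n1*a + (m1 - (a/b)*n1)*b.
int extgcd(int a, int b, int& m, int& n) {
    if (b == 0) {
        m = 1; n = 0;       // gcd(a, 0) = a = 1*a + 0*0
        return a;
    }
    int m1, n1;
    int g = extgcd(b, a % b, m1, n1);
    m = n1;
    n = m1 - (a / b) * n1;  // the back-substitution step
    return g;
}
```

Running it on the worked example reproduces the coefficients found by hand: 3 = (-4)·33 + 5·27.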
The extended Euclidean algorithm thus gives us integers m and n
such that
d = ma + nb.
Back to the original question: given a, b and c, when can we find integers x and y
such that
c = xa + yb?
Let d = gcd(a,b)
. Now xa + yb is a multiple of d for any integers x, y, thus if c is not a multiple of d
, there is no solution. So say c = kd
. Using the extended Euclidean algorithm we can find m, n such that d = ma + nb; thus we have a
solution x = km, y = kn
Is this the only solution? Suppose x', y'
is another solution, so that
c = xa + yb = x'a + y'b.
Rearranging,
(x - x')a = (y' - y)b.
Dividing both sides by d gives (x - x')(a/d) = (y' - y)(b/d). Since d
is the greatest common divisor, a/d and b/d are coprime. But b/d must divide the left-hand side (since b
appears on the right) and it is coprime to a/d, so (x - x') is some multiple of b/d
, that is
x - x' = tb/d
for some integer t
. Substituting this back into the equation
gives
y' - y = ta/d.
Thus x' = x - tb/d and y' = y + ta/d
. But if we replace t
by -t we obtain solutions on the other side as well. Thus there are infinitely many solutions, and they are given by
x = km + tb/d, y = kn - ta/d.
In particular, when c = d (so k = 1), writing q = b/d and p = a/d, they read
x = m + tq, y = n - tp.
The Euclidean Algorithm
Suppose a and b are integers, not both zero. The greatest common divisor (gcd, for short) of a and b,
written (a,b) or gcd(a,b), is the largest positive integer that divides both a and b. We will be concerned
almost exclusively with the case where a and b are non-negative, but the theory goes through with
essentially no change in case a or b is negative. The notation (a,b)
might be somewhat confusing, since it is also used to denote ordered pairs and open intervals.
The meaning is usually clear from the context. We begin with some simple observations:
Theorem 3.3.1 Suppose a and b are integers, not both zero. Then:
a) (a,b) = (b,a)
b) if a > 0 and a|b, then (a,b) = a
c) if a ≡ c (mod b)
, then (a,b) = (c,b)
Proof.
Part (a) is clear, since a common divisor of a
and b is a common divisor of b and a. For part (b), note that if a|b, then a is a common divisor of a and
b. Clearly a is the largest divisor of a, so we are done. Finally, if a ≡ c (mod b), then b | a - c, so there is a y
such that a - c = by, i.e., c = a - by. If d divides both a and b, then it also divides a - by. Therefore any
common divisor of a and b is also a common divisor of c and b. Similarly, if d divides both c and b, then
it also divides c + by = a, so any common divisor of c and b is a common divisor of a and b. This shows
that the common divisors of a and b are exactly the common divisors of c and b, so (a,b) = (c,b).
It perhaps is surprising to find out that this lemma is all that is necessary to compute a gcd, and
moreover, to compute it very efficiently. This remarkable fact is known as the Euclidean
Algorithm. As the name implies, the Euclidean Algorithm was known to Euclid, and appears in
The Elements; see section 2.6. As we will see, the Euclidean Algorithm is an important
theoretical tool as well as a practical algorithm. Here is how it works:
To compute (a,b)
, divide the larger number (say a) by the smaller number, so a = bq1 + r1 and r1 < b. By 3.3.1(c),
(a,b) = (b,r1). Now b = r1q2 + r2, r2 < r1, and (b,r1) = (r1,r2); then r1 = r2q3 + r3, r3 < r2, and (r1,r2) = (r2,r3),
and so on. Since r1 > r2 > r3 > ..., eventually some rk = 0 and (a,b) = (rk-1,rk) = (rk-1,0) = rk-1; in other words,
(a,b) is the last non-zero remainder we compute. Note that (a,0) = a
, by 3.3.1(b).
Example 3.3.2
(198,168)=(168,30)=(30,18)=(18,12)=(12,6)=(6,0)=6.
If you have done some computer programming, you should see just how easy it is to implement
this algorithm in any reasonable programming language. Since it is a very fast algorithm it plays
an important role in many applications.
With a little extra bookkeeping, we can use the Euclidean Algorithm to show that gcd(a,b)
can actually be written as a linear combination of a and b. Here is the computation for a = 168
and b = 198:
30 = 198 - 168 = b - a,
18 = 168 - 5·30 = a - 5(b - a) = 6a - 5b,
12 = 30 - 18 = (b - a) - (6a - 5b) = -7a + 6b,
6 = 18 - 12 = (6a - 5b) - (-7a + 6b) = 13a - 11b.
Notice that the numbers in the left column are precisely the remainders computed by the
Euclidean Algorithm. With a little care, we can turn this into a nice theorem, the Extended
Euclidean Algorithm.
Theorem Suppose a and b are integers, not both zero. Then there are integers x and y such that (a,b) = ax + by.
Proof.
The Euclidean Algorithm proceeds by finding a sequence of remainders, r1
, r2, r3, and so on, until one of them is the gcd. We prove by induction that each ri is a linear combination
of a and b. It is most convenient to assume a > b and let r0 = a and r1 = b. Then r0 and r1 are linear
combinations of a and b, which is the base of the induction. The repeated step in the Euclidean Algorithm
defines rn+2 so that rn = q·rn+1 + rn+2, or rn+2 = rn - q·rn+1. If rn and rn+1 are linear combinations of a and b
(this is the induction hypothesis) then so is rn+2.
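The induction in this proof translates directly into code: keep the two most recent remainders together with their coefficients, and apply the same combination step to all three sequences. A C++ sketch (the variable names are ours):

```cpp
#include <cassert>

// Forward-bookkeeping Extended Euclidean Algorithm, following the proof:
// maintain the invariants r0 == x0*a + y0*b and r1 == x1*a + y1*b.
int extended_gcd_iter(int a, int b, int& x, int& y) {
    int r0 = a, x0 = 1, y0 = 0;   // r0 = a = 1*a + 0*b (base case)
    int r1 = b, x1 = 0, y1 = 1;   // r1 = b = 0*a + 1*b (base case)
    while (r1 != 0) {
        int q  = r0 / r1;
        int r2 = r0 - q * r1;     // r(n+2) = r(n) - q*r(n+1)
        int x2 = x0 - q * x1;     // same combination applied to the x's
        int y2 = y0 - q * y1;     // ... and to the y's
        r0 = r1; x0 = x1; y0 = y1;
        r1 = r2; x1 = x2; y1 = y2;
    }
    x = x0; y = y0;
    return r0;                    // the last non-zero remainder = gcd(a, b)
}
```

On the example above (a = 168, b = 198) it recovers exactly the coefficients of the hand computation, 6 = 13a - 11b.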
Exercises 3.3
Ex 3.3.1 For the pairs of integers a
, b given below, find the gcd g and integers x and y satisfying g = ax + by.
a) a = 13, b = 32
b) a = 40, b = 148
c) a = 55, b = 300
Ex 3.3.2 If p
Ex 3.3.3 Suppose g
is the gcd of a and b. If i and j are integers and c=ai+bj, prove g|c
Ex 3.3.4 Suppose g
is the gcd of a and b. If g|c, prove that there are integers i and j such that c=ai+bj
Ex 3.3.5 If g = (a,b)
and x = ab, prove g²|x
Ex 3.3.6 Suppose g > 0
and x is a multiple of g². Show that there are integers a and b such that (a,b) = g and ab = x. (Hint: there is
an n such that x = g²n; aim for a trivial case remembering that you get to define a and b
.)
Ex 3.3.8 Show that there are, in fact, an infinite number of ways of expressing (a,b)
as a combination of a and b
Ex 3.3.9 In the proof of the theorem, suppose rn = xn·a + yn·b
and rn+1 = xn+1·a + yn+1·b, by the induction hypothesis. Write rn+2 as an explicit linear combination of a
and b, and identify xn+2 and yn+2
Ex 3.3.10 The Euclidean algorithm works so well that it is difficult to find pairs of numbers that
make it take a long time. Find two numbers whose gcd is 1, for which the Euclidean Algorithm
takes 10 steps.
where Fn is the nth Fibonacci number.
Ex 3.3.12 Write a computer program to implement the Extended Euclidean Algorithm. That is,
given a and b, the program should compute and display gcd(a,b), x and y.
When x = y, GCD(x,y) = GCD(x,x) = x
GCD(x,y) = GCD(x-y,y)
Actually, this is easy to prove. Suppose that d is a divisor of both x and y. Then there exist
integers q1 and q2 such that x = q1d and y = q2d. But then
x - y = q1d - q2d = (q1 - q2)d,
so d is also a divisor of x-y. Using similar reasoning, one can show the converse, i.e., that any
divisor of x-y and y is also a divisor of x. Hence, the set of common divisors of x and y is the
same as the set of common divisors of x-y and y. In particular, the largest values in these two
sets are the same, which is to say that GCD(x,y) = GCD(x-y,y).
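The identity GCD(x,y) = GCD(x-y,y) gives the subtraction-based method directly; here is a minimal sketch, assuming positive inputs (the function name is ours):

```cpp
#include <cassert>

// Subtraction-based GCD: repeatedly replace the larger argument by the
// difference, using GCD(x,y) = GCD(x-y,y), until the two are equal.
// Assumes both inputs are positive.
int gcd_subtract(int k, int m) {
    while (k != m) {
        if (k > m)
            k = k - m;    // replace the larger value by the difference
        else
            m = m - k;
    }
    return k;             // k == m == GCD of the original inputs
}
```

Tracing gcd_subtract(420, 96) reproduces the snapshot table above and ends with both variables equal to 12.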
As an example, suppose that we use this method to compute the GCD of 420 and 96. If we were
to take a snapshot of the method's local variables immediately before the first loop iteration and
immediately after each iteration, we'd get
When k m
-------------------- ----- -----
Before 1st iteration 420 96
After 1st iteration 324 96
After 2nd iteration 228 96
After 3rd iteration 132 96
After 4th iteration 36 96
After 5th iteration 36 60
After 6th iteration 36 24
After 7th iteration 12 24
After 8th iteration 12 12
A significant improvement in performance (in some cases) is possible, however, by using the
remainder operator (%) rather than subtraction. Notice in the above that we subtracted m's value
from k's four times before k became less than m. The same effect results from replacing k's value
by k % m. (In this case, 420 % 96 is 36.) Using this approach forces us to change the loop
condition, however, as here what will eventually happen is that one of k or m will become zero.
(Indeed, k == m is impossible to arrive at unless K == M.) Note that GCD(x,0) = GCD(0,x) = x. (After
all, x's greatest divisor is itself, which is also a divisor of zero.)
When k m
-------------------- ----- -----
Before 1st iteration 420 96
After 1st iteration 36 96
After 2nd iteration 36 24
After 3rd iteration 12 24
After 4th iteration 12 0
Notice that the number of loop iterations has been cut in half. Further code-simplification
improvements are possible, however, by ensuring that k >= m. We can achieve this by replacing,
on each iteration, k's value by m's and m's by k % m. This way, no if statement is needed:
int gcd(int K, int M) {
int k = Math.max(K,M);
int m = Math.min(K,M);
// loop invariant: k >= m, GCD(K,M) = GCD(k,m)
while (m != 0) {
int r = k % m;
k = m;
m = r;
}
// At this point, GCD(K,M) = GCD(k,m) = GCD(k,0) = k
return k;
}
When k m
-------------------- ----- -----
Before 1st iteration 420 96
After 1st iteration 96 36
After 2nd iteration 36 24
After 3rd iteration 24 12
After 4th iteration 12 0
All must be well aware of the problem of Tower of Hanoi; for those who don't know, let's
discuss it once again.
The Tower of Hanoi (also called the Tower of Brahma or Lucas Tower, and sometimes
pluralized) is a mathematical game or puzzle.
It consists of three rods, and a number of disks of different sizes which can slide onto any rod.
The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest
at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following
simple rules:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack.
3. No disk may be placed on top of a smaller disk.
The problem can be solved in two ways:
1. Using Recursion
2. Using Stacks
Using Recursion:
#include <iostream>
using namespace std;
void towerOfHanoi(int n, char from_rod, char to_rod, char aux_rod)
{
    if (n == 1)
    {
        cout << "Move disk 1 from rod " << from_rod << " to rod " << to_rod << endl;
        return;
    }
    towerOfHanoi(n - 1, from_rod, aux_rod, to_rod);
    cout << "Move disk " << n << " from rod " << from_rod << " to rod " << to_rod << endl;
    towerOfHanoi(n - 1, aux_rod, to_rod, from_rod);
}
int main()
{
    int num;
    cout << "Enter the number of disks : ";
    cin >> num;
    cout << "The sequence of moves involved in the Tower of Hanoi are :" << endl;
    towerOfHanoi(num, 'A', 'C', 'B');
    return 0;
}
Sample Output:
Enter the number of disks : 3
The sequence of moves involved in the Tower of Hanoi are :
Move disk 1 from rod A to rod C
Move disk 2 from rod A to rod B
Move disk 1 from rod C to rod B
Move disk 3 from rod A to rod C
Move disk 1 from rod B to rod A
Move disk 2 from rod B to rod C
Move disk 1 from rod A to rod C
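The "Using Stacks" approach mentioned above can be sketched by simulating the recursion with an explicit stack of frames. This particular layout (the Frame struct and function name) is our own illustration, not a standard API:

```cpp
#include <cassert>
#include <stack>
#include <string>
#include <vector>

// Each frame is either a pending subproblem (expand == true) or a single
// disk move to emit (expand == false).
struct Frame { int n; char from, to, aux; bool expand; };

std::vector<std::string> hanoiWithStack(int n) {
    std::vector<std::string> moves;
    std::stack<Frame> st;
    st.push({n, 'A', 'C', 'B', true});
    while (!st.empty()) {
        Frame f = st.top(); st.pop();
        if (!f.expand || f.n == 1) {
            moves.push_back("Move disk " + std::to_string(f.n) +
                            " from rod " + f.from + " to rod " + f.to);
            continue;
        }
        // Push in reverse order so the top of the stack is processed first,
        // mirroring: hanoi(n-1, from->aux); move disk n; hanoi(n-1, aux->to)
        st.push({f.n - 1, f.aux, f.to, f.from, true});
        st.push({f.n, f.from, f.to, f.aux, false});
        st.push({f.n - 1, f.from, f.aux, f.to, true});
    }
    return moves;
}
```

By construction this emits the moves in exactly the order the recursive version prints them: 2^n - 1 moves in total.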