Computer Science
Dr J Frost
(jfrost@tiffin.kingston.sch.uk)
Slide Guidance
Any box with a ? can be clicked to reveal the
answer (this works particularly well with
interactive whiteboards!).
Make sure you're viewing the slides in
slideshow mode.
For multiple choice questions (e.g. SMC), click your
choice to reveal the answer.
Contents
Suppose we want to check if the number 8 is in the list.
If the size of the problem is n (i.e. there are n cards in the list), then in
the worst case, how much time will it take to check that some
number is in there?
And given that the list is stored on a disc (rather than in memory),
how much memory (i.e. space) do we need for our algorithm?
(Worst Case) Time Complexity
Space Complexity
If there are n items to check, and each
takes some constant amount of
time to check, then we know the time
will be at most some constant times
n.
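The check described above can be sketched as a simple linear scan (a minimal illustration; the function name is mine, not from the slides):

```python
def linear_search(items, target):
    """Check each item in turn: at most n comparisons for n items."""
    for item in items:          # up to n iterations...
        if item == target:      # ...each a constant-time comparison
            return True
    return False

print(linear_search([3, 1, 4, 1, 5, 9, 2, 6], 8))  # False: 8 is not in the list
```

Each loop iteration does a constant amount of work, so the worst case (the target is absent) takes time proportional to n.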
Big O notation
So the time and space complexity of an algorithm gives us a measure of
how complex the algorithm is, in terms of the time it'll take and the
space required to do its handiwork.
f(x) = 10x³ + 2x² + 3
We say that f(x) = O(x³), i.e. f grows cubically.
Big O notation
Formally, if f(x) = O(g(x)), then there is some constant k such that
f(x) ≤ k·g(x) for all sufficiently large x. So technically we could say
that f(x) = O(x⁴), because the big-O just provides an upper bound to the
growth. But we would want to keep this upper bound as low as possible,
so it would be more useful to say that f(x) = O(x³).
While big-O notation has been around since the late 19th century
(particularly in number theory), in the 1950s it started to be used to
describe the complexity of algorithms.
Returning to our problem of finding a number in an ordered list, we
can now express our time and space complexity using big-O notation
(in terms of the list size n). Remember that the constant scaling is
ignored by big-O:
Time Complexity: O(n)
Space Complexity: O(1)
Big O notation
We'll see some examples of more algorithms and their complexity in
a second, but let's see how we might describe algorithms based on
their complexity.
Time Complexity: We say the time complexity of the algorithm is:
O(1): constant time
O(n): linear time
O(n²): quadratic time
O(nᵏ): polynomial time
O(2ⁿ): exponential time
O(log n): logarithmic time
Sets
<4, -2, 3, 6, 3>
Does ordering of items matter? No.
Duplicates allowed? No.
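Python's built-in set type behaves exactly this way, so it makes a handy illustration (the snippet below is mine, not from the slides):

```python
# A set is unordered and discards duplicates.
s = set([4, -2, 3, 6, 3])          # the duplicate 3 is kept only once
print(len(s))                      # 4

# Ordering doesn't matter: these are the same set.
print(set([3, 4]) == set([4, 3]))  # True
```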
Binary Search
Looking to see if 14 is in our list/set (an ordered list including 12, 15 and 20).
A sensible thing to do is to look at the number just after the
centre. That way, we can narrow down our search by half in one
step. In this case 14 is less than the centre value, so we know that if the
number is in the list at all, it must be in the left half.
Binary Search
Now we repeat the process on the remaining half: look at its middle
value, discard whichever half can't contain 14, and keep halving until
either we find 14 or there are no items left (in this example, we
eventually find that 14 isn't there).
We can see that on each step, we halve the number of items that need to be
searched. The number of steps (i.e. the time complexity) in terms of the
number of items must therefore be:
Time Complexity: O(log n)
This makes sense when you think about it. If n = 16, then log₂ 16 = 4, i.e. we can halve 16 four
times before we're down to a single item.
Space Complexity: O(1)
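The halving process above can be sketched as follows (a minimal version; the function name and the example list are mine):

```python
def binary_search(sorted_items, target):
    """Halve the search range each step: O(log n) time, O(1) space."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True
        elif sorted_items[mid] < target:
            lo = mid + 1        # target can only be in the right half
        else:
            hi = mid - 1        # target can only be in the left half
    return False

print(binary_search([3, 7, 12, 15, 20, 31, 40], 14))  # False
```

Note that only the two indices `lo` and `hi` are stored, which is why the space used stays constant regardless of the list size.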
Sorted list:
Adding an item to the list.
Merging two lists (of size n and m respectively, where m ≤ n).
Unsorted list:
Adding an item to the list.
Merging two lists (of size n and m respectively, where m ≤ n).
Hash Table
Hash Tables are a structure which allows us to do certain operations on
collections much more quickly: e.g. inserting a value into the
collection, and retrieving one!
Imagine we had 10 buckets to put new values into. Suppose we
had a rule which decided what bucket to put a value v into:
Find the remainder when v is divided by 10 (i.e. v mod 10).
Hash Table
We can use our mod 10 hash function to insert new values into our
hash table.
Values to insert: 31, 67, 42, 19, 112, 55, 57, 29, 33, 69.
Hash Table
The great thing about a hash table is that if we want to check if
some value is contained within it, we only need to check within the
bucket it corresponds to.
e.g. Is 65 in our hash table?
Using the same hash function, we'd just check Bucket 5. At this
point, we might just do a linear search of the items in the bucket to
see if the 65 matches. In this case, we'd conclude that 65 isn't part
of our collection of numbers.
Bucket 1: 31
Bucket 2: 42, 112
Bucket 3: 33
Bucket 5: 55
Bucket 7: 67, 57
Bucket 9: 19, 29, 69
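The mod-10 hash table above can be sketched in a few lines (a minimal version, assuming the inserted values are 31, 67, 42, 19, 112, 55, 57, 29, 33, 69 as reconstructed from the slide; the function names are mine):

```python
# 10 buckets; the hash function is simply v mod 10.
buckets = [[] for _ in range(10)]

def insert(v):
    buckets[v % 10].append(v)      # compute the bucket, then append: O(1)

def contains(v):
    return v in buckets[v % 10]    # linear search of one bucket only

for v in [31, 67, 42, 19, 112, 55, 57, 29, 33, 69]:
    insert(v)

print(contains(65))   # False: bucket 5 holds only [55]
print(contains(112))  # True: found in bucket 2
```

Checking 65 only touches bucket 5, which is exactly the shortcut the slide describes.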
Hash Table
Suppose we've put n items in a hash table with k buckets:
Operation: checking if a value is in the table. Time Complexity: O(n/k)
Recursive Algorithms
Next, move the 1 remaining disc (or whatever disc is at the top
of the peg) from the start peg to the goal peg,
i.e. MOVE(START, GOAL)
We can see this algorithm in action. If the 3 pegs are A, B and C, and we
have 3 discs, then we want to execute HANOI(A, B, C, 3) to get our moves:
HANOI(A, B, C, 3)
= HANOI(A, C, B, 2), MOVE(A, C), HANOI(B, A, C, 2)
= HANOI(A, B, C, 1), MOVE(A, B), HANOI(C, A, B, 1), MOVE(A, C),
HANOI(B, C, A, 1), MOVE(B, C), HANOI(A, B, C, 1)
= MOVE(A, C), MOVE(A, B), MOVE(C, B), MOVE(A, C), MOVE(B, A),
MOVE(B, C), MOVE(A, C)
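The recursion above translates directly into code. Below is a minimal sketch (the Python function name mirrors the slide's HANOI, with the same argument order: start peg, auxiliary peg, goal peg, number of discs):

```python
def hanoi(start, aux, goal, n):
    """Return the list of moves to shift n discs from start to goal."""
    if n == 0:
        return []
    return (hanoi(start, goal, aux, n - 1)    # move n-1 discs onto the spare peg
            + [(start, goal)]                 # move the largest disc to the goal
            + hanoi(aux, start, goal, n - 1)) # move the n-1 discs on top of it

for move in hanoi('A', 'B', 'C', 3):
    print('MOVE(%s, %s)' % move)
```

Running it for 3 discs prints the seven moves listed above; in general n discs take 2ⁿ − 1 moves.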
Sorting Algorithms
One very fundamental algorithm in Computer Science is sorting a
collection of items so that they are in order (whether in numerical order,
or some order we've defined).
We'll look at the main well-known algorithms, and look at their time
complexity.
19, 31, 42, 55, 67, 112
Bubble Sort
This looks at each pair of numbers in turn, starting with the 1st
and 2nd, then the 2nd and 3rd, and swaps them if they're in
the wrong order:
31, 19, 55, 42, 112, 67
Bubble Sort
Time Complexity? O(n²)
The first pass requires n−1 comparisons, the next pass
requires n−2 comparisons, and so on, giving us the sum of an
arithmetic sequence.
So the exact number of comparisons is n(n−1)/2.
This growth is quadratic in n, i.e. O(n²).
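The pass structure described above can be sketched as follows (a minimal version; the function name and example list are mine):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs: O(n^2) comparisons."""
    items = list(items)             # work on a copy
    n = len(items)
    for i in range(n - 1):          # pass i makes n-1-i comparisons
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([31, 19, 55, 42, 112, 67]))  # [19, 31, 42, 55, 67, 112]
```

Note that after pass i, the i largest values have "bubbled" to the end, which is why each pass needs one fewer comparison.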
Merge Sort
First treat each individual value as an individual list (with
1 item in it!)
[31]  [19]  [42]  [55]  [67]  [112]
Then repeatedly merge pairs of lists:
[19, 31]  [42, 55]  [67, 112]
[19, 31, 42, 55]  [67, 112]
[19, 31, 42, 55, 67, 112]
Merge Sort
At each point in the algorithm, we know each smaller list will be in
order.
Merging two sorted lists can be done quite quickly: repeatedly
compare the front items of the two lists, and move the smaller of
the two into the new merged list.
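The merging step can be sketched like this (a minimal version; the function name is mine):

```python
def merge(left, right):
    """Merge two already-sorted lists; each comparison places one element."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # move the smaller front item across
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # one list may have leftovers

print(merge([19, 31, 42, 55], [67, 112]))  # [19, 31, 42, 55, 67, 112]
```

Each comparison moves exactly one element into the merged list, which is the fact the next slide's complexity argument relies on.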
Merge Sort
Time Complexity? O(n log n)
Each merging phase requires exactly n steps, because when merging
each pair of lists, each comparison puts an element in our new list. So
there's exactly n comparisons per phase.
There's log₂ n phases, because, similarly to the binary search, each
phase halves the number of mini-lists.
Bogosort
The Bogosort, also known as Stupid Sort, is intentionally a joke
sorting algorithm, but provides some educational value. It simply goes
like this:
1. Put all the elements of the list in a completely random order.
2. Check if the elements are in order. If so, you're done. If not,
then go back to Step 1.
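The two steps above translate almost word for word into code (a minimal sketch; the function names are mine):

```python
import random

def is_sorted(items):
    """Step 2: check whether every adjacent pair is in order."""
    return all(items[i] <= items[i + 1] for i in range(len(items) - 1))

def bogosort(items):
    """Shuffle until sorted: expect this only to finish for tiny lists!"""
    items = list(items)
    while not is_sorted(items):
        random.shuffle(items)       # step 1: put the list in a random order
    return items

print(bogosort([3, 1, 2]))  # [1, 2, 3]
```

For even a modest list this may shuffle an enormous number of times, which is exactly why its complexity is so bad.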
We can describe time complexity in different ways: the worst-case
behaviour (i.e. the longest amount of time the algorithm can possibly
take) and the average-case behaviour (i.e. how long we expect the
algorithm to take on average).
Worst Case Time Complexity?
Average Case Time Complexity?
Worst case: the algorithm theoretically may never terminate, because
the order may be wrong every time.
Average case: O(n · n!). There are n! possible ways the items
can be ordered. Presuming no duplicates in the list, there's a 1 in n!