
Testing the Programs


In this part we look at
classification of faults
the purpose of testing
unit testing
integration testing strategies
when to stop testing
Testing is part of the software engineering process, irrespective of the process model used.

Concept change!!

Classification of faults

Many programmers view testing as a demonstration that their program
performs properly.
The idea of demonstrating correctness is really the reverse of what
testing is all about.
We test a program to demonstrate the existence of a fault! Because our
goal is to discover faults, we consider a test successful only when a
fault is discovered or a failure occurs as a result of our testing
procedures.

In an ideal world, we produce programs where everything works
flawlessly every time.
Unfortunately, this is not the case!
We usually say that our software has failed when its behaviour deviates
from that described in the requirements.
First we identify the fault, i.e. determine what fault or faults caused
the failure. Next we correct the fault by making changes to the system
so that the fault is removed.

Classification of faults

Why do we classify faults?
In order to improve our development process!
We would like to match a fault to a specific area of our development
process.
In other words, we would like our classification scheme to be
orthogonal.

IBM Orthogonal Defect Classification

Fault type             Meaning
Function               Fault that affects capability, end-user
                       interfaces, product interfaces, interface with
                       hardware architecture, or global data structure
Interface              Fault in interacting with other components or
                       drivers via calls, macros, control blocks or
                       parameter lists
Checking               Fault in program logic that fails to validate
                       data and values properly before they are used
Assignment             Fault in data structure or code block
                       initialization
Timing/serialization   Fault that involves timing of shared and
                       real-time resources
Build/package/merge    Fault that occurs because of problems in
                       repositories, management changes, or version
                       control
Documentation          Fault that affects publications and maintenance
                       notes
Algorithm              Fault involving efficiency or correctness of an
                       algorithm or data structure, but not design

Hewlett-Packard fault classification

Fault Classification for HP (1970), from a chart of fault categories:
Logic 32%, Documentation 19%, Computation 18%, Other Code 11%,
Data Handling 6%, Requirements 5%, Process/Interprocess 5%, Hardware 4%

Testing steps

Views of Test Objects


Black (opaque) box
In this type of testing, the test object is
viewed from the outside and its contents
are unknown.
Testing consists of feeding input to the
object and noting what output is produced.
The test's goal is to be sure that every kind
of input is submitted and that the observed
output matches the expected output.

Black Box Example

Suppose we have a component that accepts as input the three numbers
a, b, c and outputs the two roots of the equation ax² + bx + c = 0,
or the message "no real roots".
It is impossible to test the component by submitting every possible
triple of numbers (a, b, c).
Representative cases may be chosen so that we have all combinations of
positive, negative and zero for each of a, b, and c.
Additionally we may select values that ensure that the discriminant,
b² − 4ac, is positive, zero, or negative.
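A minimal sketch of such a component in C++ (the function name and I/O
details are assumptions for illustration, not given in the slides):

#include <cmath>
#include <iostream>

// Hypothetical component: prints the roots of ax^2 + bx + c = 0,
// or the message "no real roots".
void solve_quadratic(double a, double b, double c)
{
    double d = b * b - 4 * a * c;   // the discriminant
    if (d < 0) { std::cout << "no real roots\n"; return; }
    std::cout << (-b + std::sqrt(d)) / (2 * a) << " "
              << (-b - std::sqrt(d)) / (2 * a) << "\n";
}

int main()
{
    solve_quadratic(1, -3, 2);   // d > 0: expect roots 2 and 1
    solve_quadratic(1, 2, 1);    // d = 0: expect the double root -1
    solve_quadratic(1, 0, 1);    // d < 0: expect "no real roots"
    solve_quadratic(0, 2, 1);    // a = 0: not a quadratic; division by
                                 // 2a = 0 produces nan/inf output, a
                                 // fault the zero-coefficient class exposes
}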

Black Box Example


If the tests reveal no faults, we have no guarantee that the component
is fault-free!
There are other reasons why failure may occur.
For some components, it is impossible to generate a set of test cases
that demonstrates correct functionality for all cases.

White Box Testing


White (transparent) box
In this type of testing, we use the structure
of the test object to test in different ways.
For example, we can devise test cases that
execute all the statements or all the control
paths within the component(s).
Sometimes with many branches and loops
it may be impractical to use this kind of
approach.
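A tiny illustration (an assumed example, not from the slides): with one
decision point, a single test can execute every statement, while branch
testing needs a second case.

#include <iostream>

// Hypothetical component with a single decision point.
int clamp_negative_to_zero(int x)
{
    int y = x;
    if (y < 0)      // the decision point
        y = 0;      // executed only when the true branch is taken
    return y;
}

int main()
{
    // Statement testing: this single case executes every statement.
    std::cout << clamp_negative_to_zero(-5) << "\n";   // prints 0
    // Branch testing also needs the false branch, which skips y = 0:
    std::cout << clamp_negative_to_zero(5) << "\n";    // prints 5
}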

Unit Testing
Our goal is to find faults in components.
There are several ways to do this:
Examining the code
  Code walkthroughs
  Code inspections
Proving code correct
Testing components

Examining the Code

Code walkthroughs are an informal type of review.
Your code and documentation are presented to a review team, and the
team comments on their correctness.
You lead and control the discussion.
The focus is on the code, not the programmer.

A code inspection is similar to a walkthrough but is more formal.
Here the review team checks the code and documentation against a
prepared list of concerns.
For example, the team may examine the definition and use of data types
and structures to see if their use is consistent with the design and
with system standards and procedures.
The team may review algorithms and computations for their correctness
and efficiency.

Proving Code Correct

Proof techniques are not widely used: it is difficult to create proofs,
and these can sometimes be longer than the program itself!
Additionally, customers require a demonstration that the program is
working correctly.
Whereas a proof tells us how a program will work in a hypothetical
environment described by the design and requirements, testing gives us
information on how the program works in its actual operating
environment.

Testing Components

Choosing test cases: To test a component, we select input data and
conditions and observe the output.
A test point, or test case, is a particular choice of test data.
A test is a finite collection of test cases.
We create tests that can convince ourselves and our customers that the
program works correctly, not only for the test cases but for all input.



We start by defining test objectives, then design tests to meet each
specific objective.
One objective can be that all statements should execute correctly;
another can be that every function performed by the code is done
correctly.

Testing Components

As seen before, we view the component as either a white box or a black
box.
If we use black box testing, we supply all possible input and compare
the output with what was expected.
For example, with the quadratic equation seen earlier we can choose
values for the coefficients that range over combinations of positive,
zero and negative numbers.
Or we can select combinations based on the relative sizes of the
coefficients, e.g. a > b > c, b > c > a, c > b > a, etc.

Equivalence Classes
Every possible input belongs to one of the
classes. That is, the classes cover the entire
set of input data.
No input datum belongs to more than one
class. That is, the classes are disjoint.
If the executing code demonstrates a fault when a particular class
member is used as input, then the same fault can be detected using any
other member of the class as input.
That is, any element of the class represents all elements of that
class.

We can go further and select values based upon the discriminant.
We can even supply non-numeric input to determine the program's
response.
In total we have four mutually exclusive types of test input.
We thus use the test objective to help us separate the input into
equivalence classes.

Equivalence Classes
It is not always easy or feasible to tell whether the third restriction
can be met, so it is usually rewritten to say:
if a class member is used to detect a fault, then the probability is
high that the other elements of the class would reveal the same fault.

Common Practice
Usually 'white' box and 'black' box testing are combined.
Suppose we have a component that expects a positive input value. Then,
using 'black' box testing, we can have a test case for each of the
following (a code sketch follows the list):

a very large positive integer
a positive integer
a positive, fixed-point decimal
a number greater than 0 but less than 1
a negative number
a non-numeric character
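As a sketch (the component and its behaviour are assumed for
illustration), one test point per class:

#include <iostream>

// Hypothetical component under test: expects a positive input value.
double component(double x) { return 1.0 / x; }

int main()
{
    double cases[] = {
        1.0e12,   // a very large positive integer
        7.0,      // a positive integer
        3.25,     // a positive, fixed-point decimal
        0.5,      // a number greater than 0 but less than 1
        -4.0      // a negative number: should be rejected or flagged
    };
    for (double x : cases)
        std::cout << x << " -> " << component(x) << "\n";
    // The non-numeric class is exercised at the input boundary,
    // e.g. by feeding a character string to the program's reader.
}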

Common Practice
Using 'white' box testing, we can choose one or more of the following:
Statement testing: Every statement in the component is executed at
least once in some test.
Branch testing: For every decision point in the code, each branch is
chosen at least once in some test.
Path testing: Every distinct path through the code is executed at least
once in some test.

White box testing

Statement testing
Choose X > K so that a positive result is produced; the single test
case 1-2-3-4-5-6-7 executes every statement.

Branch testing
(Check: how many paths are possible?)
Choose two test cases to traverse each branch of the decision points:
1-2-3-4-5-6-7
1-2-4-5-6-1

Path testing
Four test cases are needed:
1-2-3-4-5-6-7
1-2-3-4-5-6-1
1-2-4-5-6-7
1-2-4-5-6-1

Example (tracing the loop for its first three iterations):

a = 5; z = 2;
for (b = 1; b <= 10; b++)
{
    k = (a + b) / (a - b);
    z++;
    y = k / z;
}
cout << k << y;

Trace: b = 1, z = 3; b = 2, z = 4; b = 3, z = 5; and so on, with k and
y computed on each pass.
Note that when b reaches 5, (a - b) becomes zero and the computation of
k fails with a division by zero, a fault that a test covering only the
first few iterations would not reveal.

A second example component:

float add_two(float *a, float *b, float *c)
{
    *c = *a + *b;       // store the sum through the pointer
    return *c;
}

a = 5; b = 10; c = 10;
k = add_two(&a, &b, &c);
cout << k;              // prints 15

Integration testing

When each component has been completed and tested, we can combine the
components into a working system.
This integration must be planned and coordinated so that, when a
failure occurs, we can determine what may have caused it.
Suppose we view the system as a hierarchy of components (shown on the
following slide).

Bottom-up integration

Top-down integration

Big-bang integration

Sandwich integration

Comparison of integration
strategies
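Although the strategies are only named here, they all rest on stubs and
drivers; a minimal sketch with hypothetical names (an illustration, not
the deck's own example):

#include <iostream>

// Top-down integration: a lower-level component that is not yet
// written is replaced by a stub returning a canned answer.
double compute_tax_stub(double amount)
{
    return 0.10 * amount;   // just enough behaviour to integrate the caller
}

// The higher-level component under integration calls the stub.
double invoice_total(double amount)
{
    return amount + compute_tax_stub(amount);
}

// Bottom-up integration: a driver exercises completed low-level
// components before their callers exist.
int main()
{
    std::cout << "invoice_total(100) = " << invoice_total(100.0) << "\n";
    // expected: 110, given the stub's canned 10% rate
}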

When to Stop Testing?


Fault Seeding
We intentionally insert, or seed, a known number of faults in a
program.
Then another member of the team locates as many faults as possible.
The number of undiscovered seeded faults acts as an indicator of the
total number of faults (seeded and nonseeded) remaining in the program.
We say:

detected seeded faults / total seeded faults
    = detected nonseeded faults / total nonseeded faults
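With illustrative (made-up) numbers: if 15 of 20 seeded faults have
been found, along with 30 nonseeded faults, then 15/20 = 30/(total
nonseeded), suggesting about 40 nonseeded faults in total, of which
roughly 10 remain undiscovered.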

Fault Seeding
Problems:
It is assumed that the seeded faults are of the same kind and
complexity as the actual faults in the program.
This is difficult to arrange, since we do not know what the typical
faults are until we have found them.
We can attempt to overcome this by basing the seeded faults on
historical data about previous faults.
This, however, requires that we have built similar systems before.

Fault Seeding

Solution
Use two independent groups, Test Group 1 and Test Group 2.
Let x be the number of faults detected by Group 1 and y the number
detected by Group 2.
Some faults, say q, will be detected by both groups, so that q <= x and
q <= y.
Finally, let n be the total number of faults in the program, which is
what we want to estimate.
The effectiveness of each group is given by E1 = x/n and E2 = y/n.

Group effectiveness measures a group's ability to detect faults from
among a set of existing faults.
If we assume that Group 1 is just as effective at finding faults in any
part of the program as in any other part, we can look at the fraction
of the faults found by Group 2 that were also found by Group 1:
E1 = x/n = q/y
E2 = y/n = q/x
which gives n = (x * y)/q = q/(E1 * E2)
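With illustrative numbers: if Group 1 finds x = 25 faults, Group 2
finds y = 30, and q = 15 of them are common to both groups, the
estimate is n = (25 * 30)/15 = 50 faults in total.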


Confidence in the Software

Suppose we seed a program with S faults and claim that the code has
only N actual (nonseeded) faults.
If we test until all S seeded faults have been found, as well as n
nonseeded faults, then a confidence level can be calculated as

C = 1,               if n > N
C = S/(S + N + 1),   if n <= N
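For example (illustrative numbers): if we seed S = 10 faults, claim
N = 5 actual faults, and testing finds all 10 seeded faults plus
n <= 5 others, then C = 10/(10 + 5 + 1), or about 0.63.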

Confidence in the Software

With that approach we cannot compute the level of confidence until all
the seeded faults are detected.
Richards (1974) suggests a modification where the confidence level can
be estimated whether or not all the seeded faults have been located.
If s of the S seeded faults have been found:

C = 1,                            if n > N
C = C(S, s-1) / C(S+N+1, N+s),    if n <= N

where C(a, b) denotes the binomial coefficient "a choose b".
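For example (illustrative numbers): with S = 4 seeded faults of which
s = 3 have been found, a claim of N = 1 actual fault, and n <= 1
nonseeded faults found, C = C(4, 2)/C(6, 4) = 6/15 = 0.4; once the
fourth seeded fault is found (s = 4), this rises to C(4, 3)/C(6, 5) =
4/6, matching S/(S + N + 1).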

Other stopping criteria

We can use the test strategy itself to determine when to stop.
If we are doing statement, branch or path testing, we can track how
many statements, branches or paths still need to be executed, and
gauge our progress in terms of those left to test.
There are many tools that can calculate these coverage values for us.
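For example, with GCC's coverage tooling (one common choice; other
compilers and IDEs offer equivalents):

g++ --coverage tests.cpp -o tests    # instrument the build for coverage
./tests                              # run the test cases
gcov -b tests.cpp                    # report statement and branch coverage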

