
WWW.UNIVERSALEXAMS.COM


Copyright 2005-2010 WWW.UNIVERSALEXAMS.COM


ISEB/ISTQB
Foundation Certificate in Software Testing



Version 1.2








Study Guide Overview
The study guide is designed in modules, each one containing sections covering the
topics of the examination syllabus. Questions covering the topics learned in the
preceding module are provided to ensure the student has retained the information.
The student is also provided with revision questions to confirm that the knowledge
has been taken on board.

Any unauthorized copying or re-selling of the contents of this document without
permission will constitute an infringement of copyright.

Commercial copying, re-selling, hiring, or lending is strictly prohibited.


Index


Module 1 Fundamentals of Testing

Why is Testing necessary?
What is Testing?
General Testing Principles
The Fundamental Test Process
The Psychology of Testing
Module 1 Review Questions

Module 2 Testing Throughout the Software Lifecycle

Software Development Models
Test Levels
Test Types
Maintenance Testing
Module 2 Review Questions

Module 3 Static Techniques

Static Techniques and the Test Process
Review Process
Static Analysis by Tools
Module 3 Review Questions

Module 4 Test Design Techniques

The Test Development Process
Categories of Test Design Techniques
Specification-based or Black-box Techniques
Structure-based or White-box Techniques
Experience-based Techniques
Choosing Test Techniques
Module 4 Review Questions

Module 5 Test Management

Test Organization
Test Planning & Estimation
Test Process Monitoring and Control
Configuration Management
Risk and Testing
Incident Management
Module 5 Review Questions

Module 6 Tool Support for Testing

Types of Test Tool
Effective Use of Tools: potential benefits and risks
Introducing a Tool into an Organization
Module 6 Review Questions

1 Fundamentals of testing (K2)


Why is testing necessary? (K2)

What is testing? (K2)

General testing principles (K2)

Fundamental test process (K1)

The psychology of testing (K2)



K1: remember, recognize, recall;

K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;

K3: apply, use.

1.1 Why is testing necessary?


Terms used in this section:

Bug, defect, error, failure, fault, mistake, quality, risk.


Software systems context (K1)

Most people have had some exposure to a software system of some kind, whether it's a
consumer product (e.g. an iPod, mobile phone or PC application) or a business
application (e.g. a banking or production application). Many people who have used such
software systems will also most likely have experienced a situation where the
software system did not behave as they expected. The impact of this unexpected
behaviour can vary widely, from delay and incorrectness to complete failure.



Causes of software defects (K2)

Consider the following excerpt...

According to news reports in April of 2004, a software bug was determined to be a
major contributor to the 2003 Northeast blackout, the worst power system failure in
North American history. The failure involved loss of electrical power to 50 million
customers, forced shutdown of 100 power plants, and economic losses estimated at $6
billion. The bug was reportedly in one utility company's vendor-supplied power
monitoring and management system, which was unable to correctly handle and report
on an unusual confluence of initially localized events. The error was found and corrected
after examining millions of lines of code.


Unfortunately, examples such as the one above are still commonplace today. Even the
smallest mistake in a program can cause devastating results.


In April of 2003 it was announced that the largest student loan company in the U.S.
made a software error in calculating the monthly payments on 800,000 loans. Although
borrowers were to be notified of an increase in their required payments, the company
would still reportedly lose $8 million in interest. The error was uncovered when borrowers
began reporting inconsistencies in their bills.
When a developer writes hundreds, maybe thousands, of lines of code, we couldn't
possibly expect it to be 100% correct. This could be due to several factors, including:

Poor communication
Programming errors
Changing requirements
Software complexity

All of the above factors involve humans, and humans make mistakes. The
chances of mistakes also increase when pressures such as tight deadlines are
present. Through the concept of testing we can try to detect a mistake, and ensure it is
rectified before the software product is released. Note the use of the word 'try' in the last
sentence. Software testing is a process used to identify the correctness and
quality of developed software. Testing can never establish the correctness of software,
as this can only be done by formal verification (and then only when the verification
process is faultless).
In July 2004 newspapers reported that a new government welfare management system
in Canada costing several hundred million dollars was unable to handle a simple
benefits rate increase after being put into live operation. Reportedly the original contract
allowed for only 6 weeks of acceptance testing and the system was never tested for its
ability to handle a rate increase.






Let's start looking at the terminology used in software testing in relation to mistakes in a
program, as there are subtle differences which are worth knowing:

Error:
A human action that produces an incorrect result (may also be termed a mistake).

Defect:
A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect, if
encountered during execution, may cause a failure of the component or system. (may
also be termed a fault or bug).

Failure:
Deviation of the software from its expected delivery or service.
Remember: testing can only find defects, not prove that there are none!

Imagine a scenario where a software program prints out a warning message when the
core temperature reaches its critical value. An explosion in the reactor could occur if no
action is taken when the temperature reaches its critical value.

Program Extract:

$temperature = $temperature + $input
if $tempereture > 100 then
    Print "Reactor Core Temperature Critical!!!"
else
    Print "Reactor Core Temperature Normal"
end

Did you notice the spelling mistake?


If this program were actually in use, the output would always be "Reactor Core
Temperature Normal". This is because the misspelled variable $tempereture never
holds the updated temperature, so the critical branch is never taken. This could
possibly result in an explosion of the reactor, as no one would see the warning
message. We can apply the definitions of error, defect and failure described previously
to this example.

The error is the misspelling of the variable $temperature.

The defect is that the correct variable $temperature is never used in the comparison.

The failure is that the warning message is never displayed when required.
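
To make this concrete, here is a minimal runnable sketch of the same scenario in Python (illustrative only; the function name and dictionary keys are invented for this example). A misspelled bare variable in Python would raise an exception, so a dictionary lookup with a default of 0 is used to reproduce the silent behaviour of the pseudocode:

Example (Python):

def check_core(readings, new_input):
    # The error: the programmer updates the correct key here...
    readings["temperature"] = readings.get("temperature", 0) + new_input
    # The defect: ...but misspells the key here, so the comparison
    # always uses the default value 0 instead of the real temperature.
    if readings.get("tempereture", 0) > 100:
        return "Reactor Core Temperature Critical!!!"
    return "Reactor Core Temperature Normal"

# The failure: even at a critical temperature, the warning never appears.
print(check_core({}, 150))  # prints "Reactor Core Temperature Normal"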


In our simple example above, a human made a mistake, which produced the defect in the
code. Bear in mind that a defect can occur not just in the lines of code within a software
program, but also in a system or even in a document. If an action takes place that
effectively executes the defect, then the system may fail to do something, or do
something that it shouldn't, more often than not causing a failure. Be aware, though, that
not every defect will cause a failure.

A defect can occur for a number of very different reasons. Its precursor, an error,
may occur due to time pressures on the developer, the complexity of the code itself,
technological changes, etc.

A failure can also be attributed to a variety of causes, including environmental conditions,
for example:

Pollution
Radiation
Electronic fields
Magnetism
Role of testing in software development, maintenance and operations (K2)

Testing is performed in an attempt to reduce the risk of a problem occurring during the
operational use of a software system. This is achieved by actively testing the system or
documentation, with the intention of finding any defects before the product's release. In
addition, testing may be required by any or all of the following:

Legal requirements

Contractual requirements

Industry standards



Testing and quality (K2)

If the tests are well designed and they pass, then we can say that the overall level of
risk in the product has been reduced. If any defects are found, rectified and
subsequently successfully retested, then we can say that the quality of the software
product has increased. The testing term 'Quality' can be thought of as an overall term,
as the quality of the software product is dependent upon many factors.





In general, quality software should be reasonably bug-free, and delivered on time and within
the original budget. But often there may be additional quality requirements from a
number of different origins, such as the customer's acceptance testing, future
maintenance engineers, sales people, etc., and all of those mentioned may have a
different view of quality. Many software-related products have numerous
versions/releases, which are normally individual projects in their development stage. By
analyzing why certain defects were found on previous projects, it is possible to improve
processes with the aim of preventing the same kind of defects from occurring again. This is
effectively an aspect of quality assurance, of which testing should be a part. As you can
now see, with good testing practices we can rely on testing to provide a useful
measurement of the quality of a software product in terms of defects found.

(Diagram: tests that are well designed and pass reduce Risk; defects that are found,
rectified and subsequently successfully retested increase Quality.)
How much testing is enough? (K2)

From what we have discussed so far, it is obvious that software testing has an important part
to play in the lifecycle of the development of a software product. So why don't we test
everything in the program?

The answer is simple... we don't have enough time!

There is in fact a software testing approach that does attempt to test everything. This is
called Exhaustive Testing. The purpose of Exhaustive Testing is to execute a program
with all possible combinations of inputs or values for program variables.


For example, consider a simple program that displays a page containing ten input
boxes:

(Illustration: ten input fields, each holding a single-digit value.)

If each input box accepts any one of ten values, you would require a test case for
each and every permutation of the entries that could be entered, e.g.:

10 (values) to the power of 10 (input boxes)

= 10^10

= 10,000,000,000 test cases

Let's now look at that example a little further in practical terms. If it took only one second
to perform each test case, it would take approximately 317 years to complete!
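
The arithmetic behind those figures is easy to verify; the following short Python sketch reproduces it:

Example (Python):

boxes, values = 10, 10
test_cases = values ** boxes                 # 10 to the power of 10
seconds_per_year = 60 * 60 * 24 * 365

print(f"{test_cases:,} test cases")          # 10,000,000,000 test cases
print(f"about {test_cases / seconds_per_year:.0f} years at one second each")
# prints "about 317 years at one second each"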

So if we can't test everything, then how do we choose what to test?

The answer is simply... risk!

The amount of risk involved in testing, or not testing, software components may
sometimes dictate how much time is allocated for testing. Using the above
examples, if a failure could cause loss of life, then the amount of testing
required would obviously intensify based on that potential risk. Normally, risk is expressed as
likelihood and impact.

There may also be project-specific constraints such as a limited budget or a time
limit/deadline. Other aspects of risk may include technical, business and project-related
risks. It is the function of testing to give the project stakeholders enough information
for them to make an informed decision on the release of the software or system being
tested. So from a stakeholder's perspective, it is extremely important to get the balance
right between the level of risk and the level of testing to be performed.



Testing is typically focused on specific areas of the software product that have the
greatest potential risk of failure. Additionally, priority can be assigned to the test cases
that have been chosen to be performed. This priority is often helpful when the time
allocated to testing is limited. You can simply perform the high priority test cases first,
and then any remaining time can be used to perform any low priority test cases.

(Diagram: with plenty of testing time available, test all relevant areas and perform all
test cases; when time is limited, test only high-risk areas and perform high-priority
tests first.)
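
As a simple illustration of that approach, the following Python sketch (the test-case records are invented) sorts the test cases so that the high-priority ones are executed first:

Example (Python):

test_cases = [
    {"id": "TC-01", "area": "login",   "priority": "high"},
    {"id": "TC-02", "area": "reports", "priority": "low"},
    {"id": "TC-03", "area": "payment", "priority": "high"},
    {"id": "TC-04", "area": "help",    "priority": "medium"},
]

order = {"high": 0, "medium": 1, "low": 2}
for tc in sorted(test_cases, key=lambda tc: order[tc["priority"]]):
    # If time runs out part-way through, the riskiest areas are already covered.
    print(f"executing {tc['id']} ({tc['priority']} priority, area: {tc['area']})")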


Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Bug




Defect




Error




Failure




Fault




Mistake




Quality




Risk






You should now be familiar with the following terms:
Bug:
A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect,
if encountered during execution, may cause a failure of the component or system.
(same as defect)
Defect:
A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect,
if encountered during execution, may cause a failure of the component or system.
Error:
A human action that produces an incorrect result.
Failure:
Deviation of the component or system from its expected delivery, service or result.
Fault:
A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect,
if encountered during execution, may cause a failure of the component or system.
(same as defect)
Mistake:
A human action that produces an incorrect result. (same as error)
Quality:
The degree to which a component, system or process meets specified requirements
and/or user/customer needs and expectations.
Risk:
A factor that could result in future negative consequences; usually expressed as
impact and likelihood.

Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

1.2 What is testing? (K2)


Terms used in this section:

Debugging, requirement, review, test case, testing, test objective.


To many people, when you mention the word 'testing', they immediately think of
executing test cases. But executing the actual tests is only one of the testing activities.
Testing activities exist both before and after the activity of performing tests. For
example:

Planning and control
Choosing test conditions
Designing test cases
Checking results
Evaluating exit criteria
Reporting on the testing process and system under test
Finalizing or closure

Also, during the development lifecycle, additional tasks may include reviews. These can
be document reviews or source code reviews. The intention of a review is to evaluate an
item to see if there are any discrepancies, or to recommend improvements.

Static analysis may also be used during the development lifecycle. It's worth noting that
static analysis and dynamic testing can both be used to achieve similar testing
objectives. They may also contribute to improving the testing and development
processes, as well as potentially improving the system under test.

Traditionally, the purposes of testing range from finding defects and gaining confidence,
to preventing defects. When planning and designing the testing, we must take into
consideration why we are testing in order to produce meaningful results.
A typical software-related project will have specific requirements to satisfy. A tester will
use the requirements to help with test case design, and will aim to ensure that the
requirements are satisfied by successful execution of the test cases.
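
As a simple illustration, consider a hypothetical requirement 'REQ-7: passwords must be at least 8 characters'. The Python sketch below (all names invented) records the input and expected result for each test case derived from it, so that successful execution demonstrates the requirement is satisfied:

Example (Python):

def is_valid_password(password):
    # Hypothetical component implementing requirement REQ-7.
    return len(password) >= 8

test_cases = [
    ("REQ-7/TC-1", "secret12", True),   # boundary: exactly 8 characters
    ("REQ-7/TC-2", "short", False),     # below the boundary
]

for name, value, expected in test_cases:
    actual = is_valid_password(value)
    print(name, "PASS" if actual == expected else "FAIL")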

It is also important to consider designing the tests as early in the project lifecycle as
possible. This helps prevent defects from being introduced into the software code,
before it is even written. A similar result can be achieved by reviewing the documents
that will be used to create the code.

Testing objectives can also be strongly influenced by the originating environment. For
example, if you were working within a development environment (e.g. component,
integration and system testing), you would probably find that the testing is
focused on finding as many defects as possible. This is to ensure that any
problems can be fixed very early on in the development lifecycle. If you were
performing Acceptance testing, then the focus would be on ensuring the quality of the
product, giving stakeholders confidence in the product prior to its actual release.
Maintenance testing often includes testing to ensure that no new defects have been
introduced during development of the changes. During operational testing, the main
objective may be to assess system characteristics such as reliability or availability.



A successful test is one that finds a defect!

That may sound strange: at first thought you might think that a successful test would
be one that finds no problems at all, especially as a found defect generally
causes delays in the product development. But consider that if a defect is found
during testing and rectified before release, it can save a fortune compared to a
released software product containing defects being used by a customer, and the
problems that could bring.










Is debugging the same as testing?

A common misconception is that debugging is the same as testing, but these practices
are really quite different. Testing effectively highlights the defect by identifying failures,
whereas debugging is the activity of investigating the cause of the defect. Subsequently,
fixing the code and checking that the fix actually works are also considered debugging
activities. Once the fix has been applied, a tester will often be called in to perform
confirmation testing on the fix.

Remember: testers test, developers debug!

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Debugging




Requirement




Review




Test case




Testing




Test objective





You should now be familiar with the following terms:

Debugging
The process of finding, analyzing and removing the causes of failures in software.


Requirement
A condition or capability needed by a user to solve a problem or achieve an objective
that must be met or possessed by a system or system component to satisfy a
contract, standard, specification, or other formally imposed document. [After IEEE
610]


Review
An evaluation of a product or project status to ascertain discrepancies from planned
results and to recommend improvements. Examples include management review,
informal review, technical review, inspection, and walkthrough. [After IEEE 1028]


Test case
A set of input values, execution preconditions, expected results and execution post
conditions, developed for a particular objective or test condition, such as to exercise
a particular program path or to verify compliance with a specific requirement. [After
IEEE610]


Testing
The process used to assess the quality of the item under test.


Test objective
A reason or purpose for designing and executing a test.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

1.3 General testing principles (K2)


Terms used in this section:

Exhaustive testing


General Testing Principles

In order to assist a tester, there are some general testing principles, or
guidelines. These are merely suggestions developed over the past 40 years and are
open to interpretation, but you will often come across them in various testing environments,
and may find them useful in given circumstances.



Principle 1: Testing shows the presence of defects

If defects exist within a piece of software, then testing may show that defects exist, but
testing cannot prove that no defects are present. What testing
can provide is a decrease in the probability of defects remaining in the software. In simple
terms: if the tests show up no defects, it doesn't mean that none are present.



Principle 2: Exhaustive testing is impossible

As we have discussed in a previous section, in the majority of situations it is impossible
for us to test everything. Attempting to test everything is termed Exhaustive
Testing. A sensible alternative is to prioritize the testing effort by testing what's most
important, and analyzing the risks to determine where to focus the testing.



Principle 3: Early testing

The earlier in the development lifecycle testing starts, the better. By testing early we
can ensure that the requirements can actually be tested, and also influence the way the
development proceeds. The cost of fixing a defect decreases significantly the earlier in
the project it is found, which makes the decision to involve the test team early a simple one.
Careful thought should be given to which defined objectives the test team will
focus on, though, to ensure the best productivity.

Principle 4: Defect clustering

Some areas (or modules) tested may contain significantly more defects than others. By
being aware of defect clustering, we can ensure that testing is focused on those areas
that contain the most defects. If the same area or functionality is tested again, the
knowledge previously gained about the potential risk of more defects being found can be
used to great effect, allowing a more focused test effort.



Principle 5: Pesticide paradox

If we ran the same tests over and over again, we would probably find that the number of new
defects found would decrease. This could be because all the defects findable by
those test cases have been fixed, so re-running the same tests would not show any new
defects. To avoid this, the tests should be regularly reviewed to ensure all expected
areas of functionality are covered, and new tests can be written to exercise the code in new
or different ways to highlight potential defects.



Principle 6: Testing is context dependent

Depending on the item being developed, the way the testing is carried out will often
differ. For example, an air traffic control system will undoubtedly be tested in a different
way to a children's storybook program. The air traffic control system would be tested
from a safety-critical perspective, which would involve stringent testing. The
children's storybook program would still be tested, but less stringently, purely because a
failure would not cause loss of life.



Principle 7: Absence of errors fallacy

There is no point in developing and testing an item of software only for the end user to
reject it on the grounds that it does not do what was required of it. Considerable time
may be spent testing to ensure that no errors are apparent, but it is wasted
effort if the end result does not satisfy the requirements. Early reviews of requirements
and designs can help to highlight any discrepancies between the customer's
requirements and what is actually being developed.

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Exhaustive testing





You should now be familiar with the following terms:

Exhaustive testing
A test approach in which the test suite comprises all combinations of input values
and preconditions.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

1.4 Fundamental test process (K1)


Terms used in this section:

Confirmation testing, retesting, exit criteria, incident, regression testing, test basis,
test condition, test coverage, test data, test execution, test log, test plan, test
procedure, test policy, test strategy, test suite, test summary report, testware.


Introduction

In order to perform effective testing, the testing must be planned. Once the testing has
been planned, it is also important to adhere to the plan. A common pitfall among testers is to
create a good test plan but then not follow it correctly. We already know that it is
impossible to completely test everything, but with careful planning that includes the right
selection of tests, and the way in which they are tested, we can effectively test the software
product to a high standard.

Although the diagram below displays a logical sequence, this is by no means
rigid, and it is often adapted to suit individual testing requirements. The activities may also
overlap or take place at the same time.




The Fundamental Test Process:

Test Planning & Control
Test Analysis & Design
Test Implementation & Execution
Evaluating Exit Criteria & Reporting
Test Closure Activities

Important Note: Although logically sequential, each of the above activities in the
process may overlap or occur at the same time.


Test Planning & Control (K1)


Test Planning basically involves determining what is going to be tested, why it is going to
be tested, and how it is going to be tested. It is also important to clarify what is not going
to be tested in the software product. Here are some examples of Test Planning
activities:
Determining Risks and objectives of testing

Organization of the equipment and people involved

Determining the approach to testing. A Test strategy may be required at this
stage of planning, which is a high-level description of the test levels to be
performed and the testing within those levels for the organization.

Ensuring any policies are adhered to. Some organizations produce what is
known as a Test policy. This high-level document describes the principles,
approach and major objectives of the organization regarding testing, and can be
useful as a high-level guide for a tester on how to perform their testing tasks in
accordance with their company's approach.

Organization of timescales for design, execution and evaluation stages

Specifying the Exit Criteria
In order to meet the objectives of the testing, it is very important not only to have good
plans, but also to ensure that they are adhered to. Test Control will help to achieve this,
as its purpose is to ensure that the ongoing testing progress is compared to the Test
Plan. This is achieved by taking into account information received from monitoring test
activities.
After analyzing the information, decisions may have to be made regarding the best way
forward for the testing. For example, the testing may be proceeding more slowly than planned,
and this information will have to be fed back to the managers to avoid impacting the
project timescales. Or a defect may have been found that is blocking the tester from
executing the remaining tests, which may require reprioritization of the developers' effort.

The following summarizes the activities of Test Control:

Consistent monitoring and documentation of testing activities
Analyzing the results and feedback
Decision making
Performing actions to correct mistakes or changes

Project Management should be made aware of this information.
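
The monitoring side of Test Control can be as simple as comparing the tests executed so far against the Test Plan. The Python sketch below is purely illustrative (the figures and the 90% threshold are invented), flagging slippage that Project Management should be made aware of:

Example (Python):

planned_by_week = {1: 40, 2: 80, 3: 120}    # cumulative tests planned
executed_by_week = {1: 38, 2: 61, 3: 90}    # cumulative tests executed

for week, planned in planned_by_week.items():
    executed = executed_by_week[week]
    progress = executed / planned * 100
    status = "on track" if progress >= 90 else "behind plan - inform management"
    print(f"week {week}: {executed}/{planned} executed ({progress:.0f}%) - {status}")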
Test Analysis & Design (K1)

Test analysis and design is the activity of transforming test objectives into actual test
conditions and test cases. Test objectives can come from a variety of sources, and will
often take the form of a set of requirements. Once the requirements are clearly
understood, it should be possible to design tests or conditions based upon these
requirements. The following list contains examples of Test Analysis & Design activities:

Reviewing the Test basis. The Test basis is effectively the documentation upon
which the test cases are built. This may include items such as requirements,
architecture, design, interfaces, etc.

Evaluating testability of the test basis and test objects.

Identifying and prioritizing test conditions based on analysis of test items, which
could be based on a function, transaction, feature, quality attribute, or structural
element etc.

Designing and prioritizing test cases.

Identifying necessary test data to support the test conditions and test cases. This
may take the form of, for example, preparing a database to support certain test
cases. The test data may affect the item under test, or itself be affected by the
item under test.

Designing the test environment set-up and identifying any required infrastructure
and tools.

Test coverage should also be a consideration here. Coverage is the extent to
which a structure has been exercised by a test suite, and so is an important factor
in giving stakeholders confidence that all areas expected to be tested have
actually been tested. Test coverage is expressed as a value, normally a
percentage of the items being covered. If coverage is not 100%, then more tests
may be designed to test the items that were missed and, therefore, increase
coverage.
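
As a simple illustration of the calculation, the Python sketch below (the coverage items are invented) expresses coverage as the percentage of items exercised by the test suite:

Example (Python):

coverage_items = {"branch-1", "branch-2", "branch-3", "branch-4"}
exercised = {"branch-1", "branch-3", "branch-4"}   # recorded during test runs

coverage = len(exercised) / len(coverage_items) * 100
print(f"branch coverage: {coverage:.0f}%")         # 75%
print("not yet exercised:", coverage_items - exercised)
# More tests can now be designed for branch-2 to increase coverage.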









Test Implementation and Execution (K1)

This stage is where the actual testing is performed. It can mean running the test cases
manually or by using an automated testing tool. Before this can happen, though,
everything must be put in place in relation to the test environment, including deciding
which test cases will be run in which order. This is known as a test procedure or script.
The following are all considered to be Test Implementation and Execution tasks:
Developing, implementing and prioritizing test cases.

Developing and prioritizing test procedures, creating test data and, optionally,
preparing test harnesses and writing automated test scripts.

Creating test suites from the test procedures for efficient test execution. A Test
suite is basically a set of test cases for a component or system under test, where
the post condition of one test is often used as the precondition for the next one.

Verifying that the test environment has been set up correctly.

Executing test procedures either manually or by using test execution tools,
according to the planned sequence.

The task of actually executing the test cases and recording the results is carried
out here, and is known as test execution. It additionally involves recording the
identities and versions of the software under test, the test tools and the testware.
The term 'testware' essentially means items produced during the test process
that are required to plan, design, and execute tests. These could be documents,
scripts, inputs, expected results, procedures, databases, etc. In addition to
recording the results, a Test log should also be kept, which consists of a
chronological record of relevant details about the execution of the tests.

Comparing actual results with expected results.

Reporting discrepancies as incidents and analyzing them in order to establish
their cause. The task of raising an incident may also require some investigative
work on behalf of the tester to provide useful information about the incident.

Repeating test activities as a result of action taken for each discrepancy. This
task may involve Confirmation testing, which is effectively re-testing test cases
that previously failed, to make sure corrective actions have been
successful. Additionally, Regression testing may also be performed to ensure
that defects have not been introduced or uncovered in unchanged areas of the
software as a result of any changes made (a minimal sketch of both follows this list).
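
A minimal sketch of both in Python, using invented test-case records: the confirmation suite is selected from the previously failed test cases, and the regression suite from the previously passing ones:

Example (Python):

last_run = {"TC-01": "pass", "TC-02": "fail", "TC-03": "pass", "TC-04": "fail"}

# Confirmation testing: re-run the test cases that failed last time,
# to verify the success of the corrective actions.
confirmation_suite = [tc for tc, result in last_run.items() if result == "fail"]
print("confirmation suite:", confirmation_suite)

# Regression testing: re-run previously passing tests to check that the
# changes have not broken unchanged areas of the software.
regression_suite = [tc for tc, result in last_run.items() if result == "pass"]
print("regression suite:", regression_suite)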
Evaluating Exit Criteria and Reporting (K1)



This stage is designed to ensure that any specified Exit criteria have been met by the
testing activities performed. The Exit criteria should have been specified previously, in
the Test Planning stage. The stored test results/logs can be checked in this stage
against the Exit criteria.



What if the Exit criteria have not been met?

If the Exit criteria have not been met, then more tests may be required, or changes
to the Exit criteria may even be recommended. This is also a good stage at which to create a Test
summary. The Test summary can be used by any interested parties (stakeholders) to
quickly ascertain the status of testing completeness and outcome, leading to a level of
confidence in the product.
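
A minimal sketch of this evaluation in Python follows. The specific criteria and figures are invented for illustration; real Exit criteria come from the Test Plan agreed with the stakeholders:

Example (Python):

results = {"executed": 96, "planned": 100, "passed": 92, "open_critical": 1}

criteria = [
    ("all planned tests executed", results["executed"] == results["planned"]),
    ("pass rate at least 95%", results["passed"] / results["executed"] >= 0.95),
    ("no open critical defects", results["open_critical"] == 0),
]

for name, met in criteria:
    print(f"{name}: {'met' if met else 'NOT met'}")

if not all(met for _, met in criteria):
    print("Exit criteria not met: run more tests, or propose a criteria change.")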
Test Closure Activities (K1)



This stage is concerned with collecting test results and test-related documentation in
order to achieve a milestone prior to a release of the product. At this stage we check
that the planned deliverables have in fact been delivered, and that any defects found
during the testing have been fixed and the fixes verified.


It is worth knowing that it is commonplace for defect fixes to be deferred to future
developments, predominantly due to the time constraints of the current project.

A formal handover to another department, or even to the customer, may happen at this stage.
Finalizing and archiving the testware, including the possible handover of the testware to
a support or maintenance department, may also happen at this stage. As with most
development projects there are always problems encountered, and this is a good stage at
which to evaluate those and attempt to learn lessons for future developments. This can
contribute to the improvement of test maturity.
Remember: it is rare that all defects will have been fixed within a software
development project.

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Confirmation testing




Retesting




Exit criteria




Incident




Regression testing




Test basis




Test condition





Test coverage




Test data




Test execution




Test log




Test plan




Test procedure




Test policy




Test strategy





Test suite




Test summary report




Testware





You should now be familiar with the following terms:

Confirmation testing
(same description as retesting) Testing that runs test cases that failed the last time
they were run, in order to verify the success of corrective actions.


Retesting
Testing that runs test cases that failed the last time they were run, in order to verify
the success of corrective actions.


Exit criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for
permitting a process to be officially completed. The purpose of exit criteria is to
prevent a task from being considered completed when there are still outstanding
parts of the task which have not been finished. Exit criteria are used to report against
and to plan when to stop testing. [After Gilb and Graham]


Incident
Any event occurring that requires investigation. [After IEEE 1008]


Regression testing
Testing of a previously tested program following modification to ensure that defects
have not been introduced or uncovered in unchanged areas of the software, as a
result of the changes made. It is performed when the software or its environment is
changed.


Test basis
All documents from which the requirements of a component or system can be
inferred. The documentation on which the test cases are based. If a document can
be amended only by way of formal amendment procedure, then the test basis is
called a frozen test basis. [After TMap]


Test condition
An item or event of a component or system that could be verified by one or more test
cases, e.g. a function, transaction, feature, quality attribute, or structural element.


Test coverage
The degree, expressed as a percentage, to which a specified coverage item has
been exercised by a test suite.


Test data
Data that exists (for example, in a database) before a test is executed, and that
affects or is affected by the component or system under test.


Test execution
The process of running a test on the component or system under test, producing
actual result(s).


Test log
A chronological record of relevant details about the execution of tests. [IEEE 829]


Test plan
A document describing the scope, approach, resources and schedule of intended
test activities. It identifies amongst others test items, the features to be tested, the
testing tasks, who will do each task, degree of tester independence, the test
environment, the test design techniques and entry and exit criteria to be used, and
the rationale for their choice, and any risks requiring contingency planning. It is a
record of the test planning process. [After IEEE 829]


Test procedure
(same description as test procedure specification) A document specifying a
sequence of actions for the execution of a test. Also known as test script or manual
test script. [After IEEE 829]


Test policy
A high level document describing the principles, approach and major objectives of
the organization regarding testing.


Test strategy
A high-level description of the test levels to be performed and the testing within those
levels for an organization or programme (one or more projects).


Test suite
A set of several test cases for a component or system under test, where the post
condition of one test is often used as the precondition for the next one.


Test summary report
A document summarizing testing activities and results. It also contains an evaluation
of the corresponding test items against exit criteria. [After IEEE 829]


Testware
Artifacts produced during the test process required to plan, design, and execute
tests, such as documentation, scripts, inputs, expected results, set-up and clear-up
procedures, files, databases, environment, and any additional software or utilities
used in testing. [After Fewster and Graham]


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

1.5 The psychology of testing (K2)


Terms used in this section:

Error guessing, independence.


One of the primary purposes of a software tester is to find defects and failures within
software. This can often be perceived as destructive to the development lifecycle, even
though it is very constructive in managing product risks, whereas the purpose of a
developer is often seen as a more creative one. In some circumstances this naturally
causes friction between developers and testers.





A Developer will often spend long hours working on a piece of software, sometimes for
many months. They may take great pride in their piece of work. Then a tester comes
along and finds fault with it. You can quickly see where friction might emerge from!

Good communication is essential on any project, whether it is verbal or through
documentation. The sooner the problem is understood by all parties, the sooner the
problem can be resolved. This is particularly the case when it comes to the relationship
between a Developer and a Tester. The way in which a Tester approaches a Developer
with a problem is important to the progression of the project. The following example is
how NOT to approach a Developer:


Tester: "Hey stupid, I found another bug in your software!"

You can imagine the Developer's response to that remark. How about the following, more
tactful, approach:

Tester: "Hi, I seem to be getting some strange results when running my test. Would you
mind taking a look at my setup, just in case I have configured it incorrectly?"

Even if you are convinced that your setup is correct, at least this way you are implying
that the problem could possibly be somewhere else, and not the Developer's fault. When
the Developer sees the test fail for himself, he will probably explain what he thinks is
going wrong, and you will now be in a better position to work together to resolve the
problem.


An example of an ideal Tester's attributes:

Professionalism
A critical eye
Curiosity
Attention to detail
Good communication skills
Experience (particularly useful for anticipating what defects might be present and
designing tests to expose them, known as Error guessing).



It is often more effective for someone other than the Developer themselves to test the
product; this is called independent testing, and it encourages the accomplishment of
objective testing. However, this should not be seen as a criticism of developers, as developers can
effectively find defects in their own code precisely because they are familiar with it. In
most cases, though, an independent approach is generally more effective at finding
defects and failures.

There are in fact different levels of independence used in testing, with the bottom of the
list being the most independent:

Test cases are designed by the person(s) writing the software.
Test cases are designed by another person(s).
Test cases are designed by a person(s) from a different section.
Test cases are designed by a person(s) from a different organization.
Test cases are not chosen by a person.

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Error guessing




Independence





You should now be familiar with the following terms:

Error guessing
A test design technique where the experience of the tester is used to anticipate what
defects might be present in the component or system under test as a result of errors
made, and to design tests specifically to expose them.


Independence
Separation of responsibilities, which encourages the accomplishment of objective
testing.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

Module 1 Review Questions


1) Describe what the following terms mean:

Error

Defect

Failure



2) What are the stages of the Fundamental Test Process?



3) Try and list as many Tester attributes as you can.



4) What is termed a successful test?



5) In which stage of the Fundamental Test Process would you expect to find
'Collecting the test results'?



6) Is Debugging the same as testing?



7) List the General Testing Principles



8) What is the main purpose of test analysis and design?



9) Must all defects be fixed on a project?



10) If any defects are found, rectified and subsequently successfully tested, then we
can say that the quality of the software product has ___________.

Module 1 Answers


1) Describe what the following terms mean:

Error: A human action that produces an incorrect result.

Defect: A flaw in a component or system that can cause the component or
system to fail to perform its required function.

Failure: Deviation of the software from its expected delivery or service.



2) What are the stages of the Fundamental Test Process?

Test Planning & Control
Test Analysis & Design
Test Implementation and Execution
Evaluating Exit Criteria and Reporting
Test Closure Activities



3) Try and list as many Tester attributes as you can:

Professionalism
A critical eye
Curiosity
Attention to detail
Good communication skills
Experience



4) What is termed a successful test?

A successful test is one that finds a defect



5) In which stage of the Fundamental Test Process would you expect to find
'Collecting the test results'?

Test Closure Activities



6) Is Debugging the same as testing?

A common misconception is that debugging is the same as testing, but these
practices are really quite different. Testing effectively highlights the defect by
identifying failures, whereas debugging is the activity of investigating the cause of
the defect. Subsequently, fixing the code and checking that the fix actually works
are also considered debugging activities. Once the fix has been applied, a tester
will often be called in to perform confirmation testing on the fix.



7) List the General Testing Principles

Principle 1: Testing shows the presence of defects
Principle 2: Exhaustive testing is impossible
Principle 3: Early testing
Principle 4: Defect clustering
Principle 5: Pesticide paradox
Principle 6: Testing is context dependent
Principle 7: Absence of errors fallacy



8) What is the main purpose of test analysis and design?

Test analysis and design is the activity of transforming test objectives into actual
test conditions and test cases.



9) Must all defects be fixed on a project?

No, this would vary from project to project, but it is commonplace for defect fixes
to be deferred to future developments, predominantly due to the time constraints
of the current project.



10) If any defects are found, rectified and subsequently successfully tested, then we
can say that the quality of the software product has ___________.

If any defects are found, rectified and subsequently successfully tested, then we
can say that the quality of the software product has increased.


2 Testing Throughout the Software Lifecycle (K2)


Software development models (K2)

Test levels (K2)

Test types (K2)

Maintenance testing (K2)


K1: remember, recognize, recall;

K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;

K3: apply, use.

2.1 Software development models


Terms used in this section:

Commercial off-the-shelf (COTS), iterative-incremental development model,
validation, verification, V-model.


The activity of testing is not performed in isolation: testing activities are closely related
to development activities. Different approaches to testing are needed because there are
different development life-cycle models. This chapter looks at a sample of popular
development lifecycles and their associated test approaches.


V-model (sequential development model) (K2)

The V-Model is an industry standard framework that clearly shows the software
development lifecycle in relation to testing. It also highlights the fact that the testing is
just as important as the software development itself. As you can see from the diagram
below, the relationships between development and testing are clearly defined.




(V-Model diagram: development phases on the left arm are verified by the
corresponding test levels on the right arm.)

Requirements Specification <-> Acceptance Testing
Functional Specification <-> Systems Testing
Technical Specification <-> Integration Testing
Component Design <-> Component Testing
Software Coding (the point of the V)

Looking at the diagram, we can not only see the kinds of testing activities
that we would expect to be present, but also how each testing activity ties in
with each development phase, so that verification of the design phases is included. The V-
Model raises the profile of the testing activities to display a more balanced
approach.

You will often see different names for each of the software development stages in a V-
Model. You may also see more or fewer stages, as this depends on the individual
software product and on company procedures/practices.
Although the V-Model shows clear relationships between each development level and
testing level, this is by no means rigid; for example, Integration testing can actually be
performed at any level of testing.

A common basis for testing in today's business environment is the use of software
work products. Common software work products include the Capability Maturity Model
Integration (CMMI) and Software life cycle processes (IEEE/IEC 12207). During the
development of the software work products, processes such as Verification and
Validation (often referred to as V & V) can be carried out.
Software verification and validation can involve analysis, reviewing, demonstrating or
testing of all software developments. When implementing this model, we must be sure
that everything is verified: this includes both the development process and the
development product itself. Verification and validation should be carried out at the end of
the development lifecycle (after all software development is complete).

Verification would normally involve meetings and reviews to evaluate the
documents, plans, requirements and specifications. Validation involves the actual
testing, and should take place after the verification phase has been completed.
Verification and validation, if planned and implemented correctly, can be very cost-effective.






Verification: Are we building the product right?
Validation: Are we building the right product?


Iterative-incremental development models (K2)

Once requirements have been established, an iterative-incremental process can be used,
designing, building and testing the system in a series of short development cycles. The
aim is for the increments together to form a complete system. Testing can be
performed on each increment at several levels. Regression testing is
important on each iteration to ensure the functionality of previous increments has not
been compromised. Validation and verification can also be performed on each
increment.



Some examples of an iterative-incremental approach are:

RAD
RAD stands for Rapid Application Development. In order to implement a RAD
development, all of the requirements must be known in advance. With RAD, the
requirements are formally documented, and each requirement is categorised into individual
components. Each component is then developed and tested in parallel, all within
a set period of time.

RUP:

Rational Unified Process (RUP) is an object-oriented and Web-enabled program
development methodology. RUP works by establishing four separate phases of
development, each of which is organised into a number of separate iterations that must
satisfy defined criteria before the next phase is undertaken.

Inception phase: developers define the scope of the project and its business
case

Elaboration phase: developers analyze the project's needs in greater detail

Construction phase: developers create the application design and source code

Transition phase: developers deliver the system to users.

RUP provides a prototype at the completion of each iteration.

Agile
Agile Software Development is a conceptual framework for software development that
promotes development iterations throughout the life-cycle of the project. Many different
types of Agile development method exist today, but most aim to minimize risk by
developing software in short periods of time. Each period of time is referred to as an
iteration, which typically lasts from one to four weeks. Each iteration will normally go
through each of the following phases:

Planning
Requirements analysis
Design
Coding
Testing
Documentation

The goal of each iteration is to have a functional release without bugs. After each
iteration, the team can re-evaluate priorities to decide what functionality to include in the
next release. Agile methods emphasize face-to-face communication over written
documents, and most Agile teams are located in the same office, typically referred to as a
Scrum.




Testing within a life cycle model (K2)

When thinking about testing within a life cycle, there are some good practices that can
be observed to ensure the best result comes from the testing:

For every development activity there is a corresponding testing activity.

Each test level has test objectives specific to that level.

The analysis and design of tests for a given test level should begin during the
corresponding development activity.

Testers should be involved in reviewing documents as soon as drafts are
available in the development life cycle.

When considering the test levels to apply to a given project, careful consideration must
be given to ensure they are right for that project. An example of this would be a software
product developed for the general market, otherwise known as a COTS
(Commercial Off-The-Shelf) product, where the customer may decide to perform integration
testing at the system level.

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Commercial off-the-shelf (COTS)




Iterative-incremental Development Model




Validation




Verification




V-model





You should now be familiar with the following terms:

Commercial off-the-shelf (COTS)
A software product that is developed for the general market, i.e. for a large number of
customers, and that is delivered to many customers in identical format.


Iterative-incremental Development Model
Iterative-incremental development is the process of establishing requirements,
designing, building and testing a system, done as a series of shorter development
cycles. Examples are: prototyping, rapid application development (RAD), Rational
Unified Process (RUP) and agile development models.


Validation
Confirmation by examination and through provision of objective evidence that the
requirements for a specific intended use or application have been fulfilled. [ISO 9000]


Verification
Confirmation by examination and through provision of objective evidence that
specified requirements have been fulfilled. [ISO 9000]


V-model
A framework to describe the software development life cycle activities from
requirements specification to maintenance. The V-model illustrates how testing
activities can be integrated into each phase of the software development life cycle.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.


2.2 Test levels (K2)



Terms used in this section:

Alpha testing, beta testing, component testing (also known as unit, module or
program testing), driver, field testing, functional requirement, integration, integration
testing, non-functional requirement, robustness testing, stub, system testing, test
level, test-driven development, test environment, user acceptance testing.




Component Testing (K2)

Component testing is also known as Unit, Module, or Program Testing. In simple terms,
this type of testing focuses on the individual components themselves, which are
separately testable. Component testing can include testing of functional and also
non-functional characteristics, such as resource behaviour (for example, memory
leaks) or robustness testing (testing to determine the robustness of the software product),
as well as structural testing, for example branch coverage. As access to the code is
highly likely to be required, it is common for component testing to be carried
out by the Developer of the software. This, however, has a very low rating of testing
independence. A better approach would be to use what we call the buddy system, which
simply means that two Developers test each other's work, giving a higher rating of
independence.

One approach to component testing is to create automated test cases before the code
has been written (the test-driven approach, sketched below). This is an iterative approach,
based on cycles of creating test cases, then creating pieces of code, followed by running
the test cases until they pass. The test cases themselves are normally derived from work
products such as component specifications, the software design or a data model.
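
As a rough illustration of the test-driven approach described above, here is a minimal
Python sketch. The function name calculate_vat, the rate and the test itself are
hypothetical, invented purely to show the test-first cycle.

    import unittest

    # Step 1: write the test case first; it fails until the code exists.
    class TestCalculateVat(unittest.TestCase):
        def test_adds_20_percent_vat(self):
            self.assertEqual(calculate_vat(100.0), 120.0)

    # Step 2: write just enough code to make the test pass.
    def calculate_vat(net_amount):
        return round(net_amount * 1.20, 2)

    # Step 3: re-run the tests, then repeat the cycle for the next
    # piece of functionality.
    if __name__ == "__main__":
        unittest.main()

The cycle repeats: each new test case is written first, then code is added until the
whole suite passes again.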

An explanation of stubs and drivers:

If we need to test a low-level module, then something called a driver can be used. A
driver is a high-level routine that calls lower-level sub-programs. A driver works by
passing input data to the item under test and comparing the output to the expected
result. In order to test a high-level module of a program, we should use something
called a stub. A stub works by taking the place of a real routine. This saves the
complexity of actually having the real routines configured, and can efficiently show
whether or not the item under test is interfacing with the routine as expected,
without the real routine being present.
Note: a group of test activities that are organized and managed together is known as a
Test level, and is directly linked to the responsibilities in a project.

Example:

Suppose you have a program that plays MP3 files from a stored MP3 collection on the
user's computer. The user enters the name of the artist they want to listen to, a
procedure then locates the folder containing the MP3s by that artist's name, and it
then instructs the MP3 player program to play the MP3 files in that folder.

So what happens if the wrong MP3s are played?

You could check that the procedure that locates the artist's folder is correct by creating
a test driver that simply asks for the artist's name and then prints out the location of the
folder where it thinks the MP3s are stored. This effectively checks the lookup part of
the program.

If you think that the lookup part of the program may not be getting the correct input, i.e.
the artist's name you entered is not being received correctly, then you can replace that
part of the program with a stub. When it is called, it could display something like:

'Where are MP3s for artists named XYZ located?'

You would then enter the correct folder location 'ABC'.

The program would then send this information to the MP3 player. The above stub would
show you whether 'XYZ' was actually the artist name you originally entered, and whether
the MP3 file location was correctly passed to the MP3 player.
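
To make the stub and driver concepts concrete, here is a minimal Python sketch of the
MP3 scenario above. Every name in it (find_artist_folder, the library dictionary, the
folder paths) is hypothetical and exists only for illustration.

    # The low-level routine under test: looks up the folder for an artist.
    def find_artist_folder(artist_name, library):
        return library.get(artist_name, "NOT FOUND")

    # A driver: a high-level routine written only to pass input data to
    # the item under test and compare the output to the expected result.
    def lookup_driver():
        library = {"XYZ": "/music/XYZ"}
        result = find_artist_folder("XYZ", library)
        print("PASS" if result == "/music/XYZ" else "FAIL: got " + result)

    # A stub: stands in for the real lookup routine so that the
    # high-level code (sending a folder to the player) can be tested
    # without the real routine being present.
    def find_artist_folder_stub(artist_name):
        print("Stub called with artist: " + artist_name)  # shows input received
        return "/music/ABC"  # the tester supplies the 'correct' folder

    # High-level code under test, wired to use the stub.
    def play_artist(artist_name, lookup=find_artist_folder_stub):
        folder = lookup(artist_name)
        print("Telling MP3 player to play files in: " + folder)

    if __name__ == "__main__":
        lookup_driver()
        play_artist("XYZ")

Running it prints the driver's PASS/FAIL verdict and shows, via the stub, both the artist
name the high-level code received and the folder it passed on to the player.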





Integration testing (K2)

Integration testing is commonly defined as:

Testing performed to expose defects in the interfaces and in the interactions between
integrated components or systems.

In simple terms, Integration testing is basically placing the item under test in a test
environment (an environment containing the hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test) and exercising its
interactions with other components or systems. The main purpose of Integration testing
is to find defects in the way the item under test carries out these interactions. Integration
testing can be thought of as two separate entities: Component Integration testing and
System Integration testing.

Component Integration Testing

This type of Integration testing is concerned with ensuring the interactions between the
software components behave as expected. It is commonly performed after any
Component Testing has completed.





System Integration Testing

This type of Integration testing is concerned with ensuring the interactions between
systems behave as expected. It is commonly performed after any Systems testing has
completed. Typically, not all systems referenced in the testing are controlled by the
developing organization. Some systems may be controlled by other organizations, but
interface directly with the system under test.


The greater the amount of functionality involved within a single integration phase, the
harder it will be to track down exactly what has gone wrong when a problem is found.
It makes good sense to increment the amount of functionality in a structured manner.
This way, when a problem arises, you will already have a rough idea of where the
problem may be. Integration strategies may be based on the system architecture, for
example Top-down and Bottom-up.


Top-Down Integration

Top-down integration testing is considered to be an incremental integration testing
technique. It works by testing the top-level module first, and then progressively adding
lower-level modules one at a time. Near the beginning of the process, the lower-level
modules may not be available, and so they are normally simulated by stubs, which
effectively stand in for them. As the development progresses, the stubs can be replaced
with the actual real components.


Advantages:

Design defects can be found early

Drivers are not a requirement



Disadvantages:

Stubs must be accurate, as they directly affect the output parameters of the tests

Developer will need to perform the testing, or at least be heavily involved








Bottom-Up Integration

Bottom-up integration testing works by first testing each module at the lowest level of the
hierarchy. This is then followed by testing each of the modules that call the previously
tested ones. The process is repeated until all of the modules have been tested.
Bottom-up integration also uses test drivers to drive and pass data to the lower-level
modules. When code for the remaining modules becomes available, the drivers are
replaced with the actual real modules. With this approach, the lower-level modules are
tested thoroughly, with the aim of making sure that the highest-level modules are tested
to a reasonable level to provide confidence.


Advantages:

The behaviour of the interactions on the modules' interfaces is apparent, as each
component is added in a controlled way and tested repetitively


Disadvantages:

Cannot be used with software developed using a top-down approach

Drivers are traditionally more difficult than stubs to create and maintain











Big-Bang Integration

This type of integration testing involves waiting until all modules are available, and then
testing all modules at once as a complete system. This method is not normally
recommended and is typically used by inexperienced developers/testers. If testing
a simple sequential program, the method can sometimes work, but due to the complexity
of modern systems/software, it would more often than not provide meaningless results
and more investigative work to track down defects. This is because when a defect is
found, it is difficult to know exactly where the problem is, and so the modules would
probably have to be separated out and tested individually. This process may have to be
repeated for each defect found, and so may lead to confusion and delay.


Disadvantages:

Defects are discovered at a very late stage

Isolating the defects can be difficult and time consuming

Likelihood of critical defects being missed






Incremental Test Strategy Comparisons


                          Top-Down    Bottom-Up    Big-Bang
Integration               Early       Early        Late
Development Lifecycle     Early       Late         Late
Requires Drivers          No          Yes          Yes
Requires Stubs            Yes         No           Yes


When performing Integration testing, it is important to remember the goal. When
integrating two modules, for example, the goal is to test the communication between the
two modules, not the individual modules' functionality. Aim to avoid waiting for all
components to be ready and integrating everything at the end (Big Bang), as this will
normally result in defects being found late in the project, and potentially a great deal of
work pin-pointing the problem, followed of course by re-development and re-testing.

System testing (K2)

System Testing is defined as:

The process of testing an integrated system to verify that it meets specified
requirements.

Standard glossary of terms used in Software Testing


System testing is used to test the behaviour of a complete system or product, the scope
of which is defined by the project/programme. In order to reduce risk, the System test
environment should closely match a real-world environment. The aim of this is to
reduce the chance of defects being missed by testing and found by the end user.

Systems testing should be used to investigate the two separate entities of Functional
and Non-Functional requirements of the system. A Functional requirement is a
requirement that specifies a function that a component or system must perform. A Non-
functional requirement is a requirement that does not relate to functionality, but to
attributes such as reliability, efficiency, usability, maintainability and portability etc.

System testing of functional requirements starts by using the most appropriate
specification-based (black-box) techniques for the part of the system that is going to be
tested. An example of this is the use of a decision table, which may be created to assist
with testing the combinations of possible outcomes from a set of business rules; a small
sketch follows this paragraph. Structure-based (white-box) techniques may then be used
to check the thoroughness of the testing performed with respect to a structural element,
such as a menu structure or web page navigation. System testing may also include tests
based upon risks, business processes and use cases. In many organizations, the
System testing is performed by a separate dedicated team.
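
As a hedged sketch of the decision-table idea mentioned above, the Python fragment
below exercises every combination of two invented business-rule conditions; the rules,
names and outcomes are purely illustrative.

    # A hypothetical decision table: each row maps a combination of
    # conditions (has_account, over_limit) to the expected outcome.
    DECISION_TABLE = [
        (True,  False, "approve"),
        (True,  True,  "refer"),
        (False, False, "reject"),
        (False, True,  "reject"),
    ]

    # The business logic under test (also invented for this example).
    def decide(has_account, over_limit):
        if not has_account:
            return "reject"
        return "refer" if over_limit else "approve"

    # One test case per decision-table row covers every rule combination.
    for has_account, over_limit, expected in DECISION_TABLE:
        actual = decide(has_account, over_limit)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: ({has_account}, {over_limit}) -> {actual}")

Each row of the table becomes one test case, so the combinations of business-rule
outcomes are covered systematically rather than ad hoc.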


Types of Functional System Testing:

Requirements-based Testing:

This is simply testing the functionality of the software/system based on the requirements.
The tests themselves should be derived from the documented requirements and not
based on the software code itself. This method of functional testing ensures that the
users will be getting what they want, as the requirements document basically specifies
what the user has asked for. In simple terms, Functional systems testing focuses on
what the system is actually supposed to do. So how do we know exactly what it is
supposed to do? It is defined in what is known as a Functional requirement.

The IEEE defines functional requirement as:

A requirement that specifies a function that a system or component must perform.

An example of a requirement may be:

"The system must process the user input and print out a report."

Business Process Functional Testing:

Different types of users may use the developed software in different ways. These ways
are analysed and business scenarios are then created. User profiles are often used in
Business Process Functional Testing. Remember that all of the functionality should be
tested, not just the most commonly used areas.




Types of Non-Functional Systems Testing

Non-functional systems testing covers areas of the system that are not directly related
to its functionality, but to how well it performs. It is important to remember that
non-functional requirements are just as important as functional requirements. The
following are all considered to be non-functional areas of Systems Testing:


Load Testing:

Testing the ability of the system to bear loads. An example would be testing
that a system could process a specified number of transactions within a specified time
period. You are effectively loading the system up to a high level, then ensuring it can
still function correctly while under this heavy load.


Performance Testing:

A program/system may have requirements to meet certain levels of performance. For a
program, this could be the speed at which it can process a given task. For a networking
device, it could mean the throughput of network traffic. Often, Performance testing is
designed to be negative, i.e. to prove that the system does not meet its required level of
performance.


Stress Testing:

Stress testing simply means putting the system under stress. The testing is not normally
carried out over a long period, as that would effectively be a form of duration testing.
Imagine a system was designed to process a maximum of 1,000 transactions in an hour.
A stress test would see whether the system could actually cope with that many
transactions in a given time period. A useful test in this case would be to see how the
system copes when asked to process more than 1,000, as in the sketch below.
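
As a rough sketch of that idea, the Python fragment below pushes a hypothetical
process_transaction function 20% beyond its specified limit of 1,000 transactions and
reports how it coped; every name and number here is invented for illustration.

    import time

    # Hypothetical system under test; a real stress test would drive
    # the actual system's interface instead.
    def process_transaction(tx_id):
        time.sleep(0.001)  # simulate a small amount of work
        return "OK"

    def stress_test(limit=1000, overload_factor=1.2):
        attempted = int(limit * overload_factor)  # push beyond the spec
        start = time.time()
        results = [process_transaction(i) for i in range(attempted)]
        elapsed = time.time() - start
        failures = len(results) - results.count("OK")
        print(f"{attempted} transactions in {elapsed:.2f}s, {failures} failures")

    if __name__ == "__main__":
        stress_test()

The interesting observation is not simply pass/fail, but how the system behaves at the
point of overload: does it slow down gracefully, reject excess transactions cleanly, or
crash?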
Security Testing:

A major requirement in today's software/systems is security, particularly since the
internet revolution. Security testing is focused on finding loopholes in the program's
security checks. A common approach is to create test cases based on known problems
from a similar program, and run these against the program under test.


Usability Testing:

This is where consideration is given to how the user will use the product. It
is common for considerable resources to be spent on defining exactly what the customer
requires and how simple it is to use the program to achieve their aims. For example, test
cases could be created based on the Graphical User Interface, to see how easy it would
be to use in relation to a typical customer scenario.


Storage Testing:

This type of testing may focus on the actual memory used by a program or system under
certain conditions. Disk space used by the program/system could also be a factor.
These factors may actually come from a requirement, and should be approached from a
negative testing point of view.


Volume Testing:

Volume testing is a form of Systems testing. Its primary focus is to test the system while
subjecting it to heavy volumes of data. Testing should be approached with the aim of
showing that the program/system can operate correctly when using the volume of data
specified in the requirements.


Installability Testing:

A complicated program may also have a complicated installation process. Consideration
should be given as to whether the program will be installed by a customer or an
installation engineer. Customer installations commonly use some kind of automated
installation program. This would obviously have to undergo significant testing in itself, as
an incorrect installation procedure could render the target machine/system useless.


Documentation Testing:

Documentation in today's environment can take several forms: it could be a printed
document, an integral help file or even a web page. Depending on the documentation
media type, some example areas to focus on could be spelling, usability and technical
accuracy.
Recovery Testing:

Recovery Testing is normally carried out by using test cases based on specific
requirements. A system may be designed to fail under a given scenario; for example, if
attacked by a malicious user, the program/system may have been designed to shut
down. Recovery testing should focus on how the system handles the failure and how it
handles the recovery process.






Acceptance testing (K2)

The IEEE refers to acceptance testing as:

Formal testing conducted to enable a user, customer, or other authorized entity to
determine whether to accept a system or component.

Acceptance testing (also known as User acceptance testing) is commonly the last testing
performed on the software product before its actual release. It is common for the
customer to perform this type of testing, or at least be partially involved. Often, the
testing environment used to perform acceptance testing is based on a model of the
customer's environment. This is done to try and simulate as closely as possible the way
in which the software product will actually be used by the customer. Finding defects is
not the goal here; acceptance testing is really aimed at ensuring the system is ready to
be deployed. Acceptance testing may often contain more than one level, for example:

A COTS software product may be acceptance tested when it is installed or
integrated.

Acceptance testing of the usability of a component may be done during
component testing.

Acceptance testing of a new functional enhancement may come before system
testing.




Typical forms of acceptance testing include the following:


Contract and Regulation Acceptance Testing:

This type of Acceptance testing is aimed at ensuring the acceptance criteria within the
original contract have indeed been met by the developed software. Normally, any
acceptance criteria are defined when the contract is agreed. Regulation acceptance
testing is performed when there are specific regulations that must be adhered to, for
example safety regulations or legal regulations.
Operational Acceptance Testing:

This form of acceptance testing is commonly performed by a System administrator and
would typically be concerned with ensuring that functionality such as backup/restore,
maintenance, disaster recovery and security is present and behaves as expected.


Alpha & Beta Testing:

Once the developed market/COTS software product is stable, it is often good practice to
allow representatives of the customer market to test it. Often the software will not contain
all of the features expected in the final product and will commonly contain defects, but
the resulting feedback could be invaluable.

Alpha Testing should be performed at the developer's site, and predominantly performed
by internal testers. Often, personnel from other company departments can act as testers;
the marketing or sales departments are often chosen for this purpose.

Beta Testing (sometimes known as Field testing) is commonly performed at the
customer's site, and normally carried out by the customers themselves. Potential
customers are often eager to trial a new product or new software version. This allows the
customer to see any improvements at first hand and ascertain whether or not the product
satisfies their requirements. On the flip side, it gives invaluable feedback to the developer,
often at little or no cost.


In summary: Alpha testing takes place at the developer's site and is performed by
internal testers and staff from other departments, while Beta testing takes place at the
customer's site and is performed by the customers themselves; in both cases the
feedback goes back to the developers.

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Alpha testing




Beta testing




Component testing (also known as unit, module or program testing),




Driver




Field testing




Functional requirement




Integration





Integration testing




Non-functional requirement




Robustness testing




Stub




System testing




Test level




Test-driven development




Test environment




User acceptance testing





You should now be familiar with the following terms:

Alpha testing
Simulated or actual operational testing by potential users/customers or an
independent test team at the developers' site, but outside the development
organization. Alpha testing is often employed for off-the-shelf software as a form of
internal acceptance testing.


Beta testing
Operational testing by potential and/or existing users/customers at an external site
not otherwise involved with the developers, to determine whether or not a component
or system satisfies the user/customer needs and fits within the business processes.
Beta testing is often employed as a form of external acceptance testing for off-the-
shelf software in order to acquire feedback from the market.


Component testing (also known as unit, module or program testing)
The testing of individual software components. [After IEEE 610]


Driver
A software component or test tool that replaces a component that takes care of the
control and/or the calling of a component or system. [After TMap]


Field testing
(Same description as beta-testing) Operational testing by potential and/or existing
users/customers at an external site not otherwise involved with the developers, to
determine whether or not a component or system satisfies the user/customer needs
and fits within the business processes. Beta testing is often employed as a form of
external acceptance testing for off-the-shelf software in order to acquire feedback
from the market.


Functional requirement
A requirement that specifies a function that a component or system must perform.
[IEEE 610]


Integration
The process of combining components or systems into larger assemblies.


Integration testing
Testing performed to expose defects in the interfaces and in the interactions between
integrated components or systems.



Non-functional requirement
A requirement that does not relate to functionality, but to attributes such as reliability,
efficiency, usability, maintainability and portability.


Robustness testing
Testing to determine the robustness of the software product.


Stub
A skeletal or special-purpose implementation of a software component, used to
develop or test a component that calls or is otherwise dependent on it. It replaces a
called component. [After IEEE 610]


System testing
The process of testing an integrated system to verify that it meets specified
requirements. [Hetzel]


Test level
A group of test activities that are organized and managed together. A test level is
linked to the responsibilities in a project. Examples of test levels are component test,
integration test, system test and acceptance test. [After TMap]


Test-driven development
A way of developing software where the test cases are developed, and often
automated, before the software is developed to run those test cases.


Test environment
An environment containing hardware, instrumentation, simulators, software tools, and
other support elements needed to conduct a test. [After IEEE 610]


User acceptance testing.
(Same description as Acceptance Testing) Formal testing with respect to user needs,
requirements, and business processes conducted to determine whether or not a
system satisfies the acceptance criteria and to enable the user, customers or other
authorized entity to determine whether or not to accept the system. [After IEEE 610]


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

2.3 Test types (K2)



Terms used in this section:

Black-box testing, code coverage, functional testing, interoperability testing, load
testing, maintainability testing, performance testing, portability testing, reliability
testing, security testing, specification-based testing, stress testing, structural testing,
usability testing, white-box testing.



Background

What we mean by a test type is a focus on a specific test objective. This could
be a specific action that the software is expected to perform, or verifying that a defect has
been fixed (verifying that defects have been fixed is also referred to as Confirmation
testing). Common terms in today's software development lifecycles are White-box testing
and Black-box testing. It is important to fully understand what these terms mean.


Testing of function (functional testing) (K2)

In simple terms, a function is basically what the system or piece of software actually
does. The functions that a system or a piece of software is required to perform can be
tested by referring to items such as a work product or a requirements specification.
Functional testing is based on the external behaviour of the system and is referred to as
Black-box testing or Specification-based testing. Black-box testing means that the
internal workings of the system are not known to the tester. This method is typically used
for testing functional requirements, is often used in Systems testing and Acceptance
testing by a dedicated tester, and can be used effectively throughout the lifecycle of the
product development.

Example functional test cases (see the sketch after this list):

A software program designed to calculate mortgage repayments.

Test that the program does not accept invalid entries
Test that the program does accept valid entries
Test that the program calculates the correct results
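
A minimal sketch of how those three test cases might look in Python's unittest, assuming
a hypothetical monthly_repayment(principal, annual_rate, years) function; the function,
its standard annuity formula and the expected value are illustrative, not taken from any
real product.

    import unittest

    # Hypothetical implementation of the program's core calculation.
    def monthly_repayment(principal, annual_rate, years):
        if principal <= 0 or annual_rate <= 0 or years <= 0:
            raise ValueError("invalid entry")
        r = annual_rate / 12              # monthly interest rate
        n = years * 12                    # number of monthly payments
        return principal * r / (1 - (1 + r) ** -n)

    class TestMortgageCalculator(unittest.TestCase):
        def test_rejects_invalid_entries(self):
            with self.assertRaises(ValueError):
                monthly_repayment(-1000, 0.06, 30)

        def test_accepts_valid_entries(self):
            self.assertGreater(monthly_repayment(100000, 0.06, 30), 0)

        def test_calculates_correct_result(self):
            # Expected value from the standard annuity formula.
            self.assertAlmostEqual(
                monthly_repayment(100000, 0.06, 30), 599.55, places=2)

    if __name__ == "__main__":
        unittest.main()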


Some types of functional testing investigate the functions relating to the detection of
threats, such as viruses from malicious outsiders, which is termed Security testing.
Another type of functional testing evaluates the capability of the software product to
interact with one or more specified components or systems, which is termed
Interoperability testing.


Testing of non-functional software characteristics (non-functional testing) (K2)

While Functional Testing concentrates on what the system does, Non-functional Testing
concentrates on how the system works. Some examples of this type of testing are:


Performance testing
The process of testing to determine the performance of a software
product.

Load testing
A type of performance testing conducted to evaluate the behavior of a
component or system with increasing load, e.g. numbers of parallel users
and/or numbers of transactions, to determine what load can be handled
by the component or system.

Stress testing
A type of performance testing conducted to evaluate a system or
component at or beyond the limits of its anticipated or specified
workloads, or with reduced availability of resources such as access to
memory or servers.

Usability testing
Testing to determine the extent to which the software product is
understood, easy to learn, easy to operate and attractive to the users
under specified conditions.

Maintainability testing
The process of testing to determine the maintainability of a software
product.

Reliability testing
The process of testing to determine the reliability of a software product.

Portability testing
The process of testing to determine the portability of a software product.


Non-functional testing can be performed at all stages of development and includes
objectives such as response times. For example, Load testing evaluates the behavior of
a component or system with increasing load, e.g. numbers of parallel users and/or
numbers of transactions, to determine what load can be handled by the component or
system.

Example:

A software program designed to calculate mortgage repayments.

Test that the program does not crash

Test that the program can process results within a set time period

Test that the program can be upgraded


These tests can be referenced to a quality model such as the one defined in ISO/IEC
9126 (Software Engineering - Software Product Quality).




Testing of software structure/architecture (structural testing) (K2)

Structural testing requires some knowledge of the inside of the box to design the test
cases. This can be particularly useful for creating any required test data, as you would
have access to how the data will actually be processed. Structural testing is also known
as White-box testing. It is commonly performed by Developers and is used
predominantly for Unit testing and Sub-system testing. As the development lifecycle
continues, the effectiveness of White-box testing decreases. This method is a prime
candidate for automated test tools, and can effectively increase the quality of the
developed work. Glass-box testing is also a term in use today, and is a synonym for
White-box testing.

Code coverage is an important analysis method in Structural testing, and evaluates
which parts of the code have effectively been covered by the tests. In other words,
Code coverage is the extent to which a structure has been exercised by a test suite. The
aim of Structural Testing is to achieve 100% coverage, and tools are available to assist
in calculating coverage; a small illustration follows.
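
A small, hypothetical illustration of the idea: the function below has two branches, so a
test suite needs at least one test per branch to reach 100% branch coverage. In practice
a coverage tool (for example coverage.py, run as 'coverage run' followed by
'coverage report') would measure this automatically.

    # A function with two branches (invented for illustration).
    def classify(amount):
        if amount >= 1000:
            return "high"   # branch taken when the condition is true
        return "low"        # branch taken when the condition is false

    # Two test cases, one per branch, give 100% branch coverage here.
    assert classify(1500) == "high"  # exercises the true branch
    assert classify(10) == "low"     # exercises the false branch
    print("both branches exercised")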

An overview of the main differences between Black-box and White-box testing:




Black-Box Testing (Functional)
Object under test: the device, user interface or program as seen from the outside
Performed by: a dedicated tester
Used in: Systems Testing, User Acceptance Testing
Effective: throughout the whole lifecycle

White-Box Testing (Structural)
Object under test: the program code itself
Performed by: the Software Developer
Used in: Unit Testing, Sub-systems Testing
Effective: towards the start of the project

Testing related to changes (confirmation testing (retesting) and regression
testing) (K2)

It is imperative that when a defect is fixed, it is re-tested to ensure the defect has indeed
been correctly fixed. This is typically termed Confirmation testing or Retesting.

Many tools used in test environments today allow a priority to be assigned to a defect
when it is initially logged. You can use this priority again when it comes to verifying a fix
for a defect, particularly when deciding how much time to spend on verifying the fix. For
example, if you are verifying that a typo has been fixed in a help file, it would probably
have been raised as a low-priority defect, so you can quickly come to the conclusion that
it would probably only take a few minutes to verify the fix. If, however, a high-priority
defect was initially raised that wiped all of the customer's stored data, then you would
want to make sure that sufficient time was allocated to make absolutely sure that the
defect was fixed. The possible consequences of the defect not being fixed properly
should be considered during verification.

To help decide what additionally to look for when Confirmation testing, it is always a
good idea to communicate with the Developer who created the fix. They are in a good
position to tell you how the fix has been implemented, and it is much easier to test
something when you have an understanding of what changes have been made.

Another important consideration is when there is suspicion that the modified software
could affect other areas of software functionality. For example, suppose the original
defect was a field on a user input form not accepting data. Not only should you focus on
re-testing that field, you should also check that other functionality on the form has not
been adversely affected. This is referred to as Regression testing.

For example, there may be a sub-total box that uses the data in the field in question for
its calculation. That is just one example; the main point is not to focus exclusively on the
fixed item, but also to consider the effects on related areas. If you had a complete test
specification for a software product, you might decide to completely re-run all of the test
cases, but often sufficient time is not available to do this. What you can do instead is
cherry-pick relevant test cases that cover all of the main features of the software, with a
view to proving that existing functionality has not been adversely affected. This would
effectively form a Regression test.

Regression test cases are often combined to form a Regression test suite. This can then
be run against any software that has undergone modification, with the aim of providing
confidence in the overall state of the software. Common practice is to automate
Regression tests, as in the sketch below.
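
A minimal sketch of an automated regression suite using Python's unittest; the
form-validation functions are hypothetical stand-ins for the cherry-picked test cases
described above.

    import unittest

    # Hypothetical application code: the fixed input field, and a
    # related sub-total calculation that could be affected by the fix.
    def accept_field(value):
        return value.strip() != ""

    def sub_total(values):
        return sum(float(v) for v in values if accept_field(v))

    class RegressionSuite(unittest.TestCase):
        def test_fixed_field_accepts_data(self):
            # Confirmation test: the original defect stays fixed.
            self.assertTrue(accept_field("42"))

        def test_sub_total_still_correct(self):
            # Regression test: related functionality is unaffected.
            self.assertEqual(sub_total(["10", "5", ""]), 15.0)

    if __name__ == "__main__":
        # Run against every modified build to provide confidence.
        unittest.main()

Because the same suite is re-run unchanged against every new build, it lends itself
naturally to automation, as noted above.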




Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Black-box testing




Code coverage




Functional testing




Interoperability testing




Load testing




Maintainability testing




Performance testing





Portability testing




Reliability testing




Security testing




Specification-based testing




Stress testing




Structural testing




Usability testing




White-box testing






You should now be familiar with the following terms:

Black-box testing
Testing, either functional or non-functional, without reference to the internal structure
of the component or system.


Code coverage
An analysis method that determines which parts of the software have been executed
(covered) by the test suite and which parts have not been executed, e.g. statement
coverage, decision coverage or condition coverage.


Functional testing
Testing based on an analysis of the specification of the functionality of a component
or system.


Interoperability testing
The process of testing to determine the interoperability of a software product.


Load testing
A type of performance testing conducted to evaluate the behavior of a component or
system with increasing load, e.g. numbers of parallel users and/or numbers of
transactions, to determine what load can be handled by the component or system.


Maintainability testing
The process of testing to determine the maintainability of a software product.


Performance testing
The process of testing to determine the performance of a software product.


Portability testing
The process of testing to determine the portability of a software product.


Reliability testing
The process of testing to determine the reliability of a software product.


Security testing
Testing to determine the security of the software product.


Specification-based testing
(Same description as Black box testing) Testing, either functional or non-functional,
without reference to the internal structure of the component or system.


Stress testing
A type of performance testing conducted to evaluate a system or component at or
beyond the limits of its anticipated or specified workloads, or with reduced availability
of resources such as access to memory or servers. [After IEEE 610]


Structural testing
(Same description as White box testing) Testing based on an analysis of the internal
structure of the component or system.


Usability testing
Testing to determine the extent to which the software product is understood, easy to
learn, easy to operate and attractive to the users under specified conditions. [After
ISO 9126]


White-box testing
Testing based on an analysis of the internal structure of the component or system.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

2.4 Maintenance testing (K2)


Terms used in this section:

Impact analysis, maintenance testing.


Maintenance Testing (K2)

Often, after its release, a software system can be in use for years, even decades. During
its lifespan it will commonly be subjected to changes, which can include changes to its
environment and changes to its functionality. Maintenance testing may also be required
if, for example, a system is going to be retired; in this case, testing of data migration
and/or data archiving should be a consideration. If, for example, the software is being
migrated to another platform, then operational tests of the new environment should be
considered, and functional tests of the software should be considered after the migration
has completed; a sketch of such a migration check follows.
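
As a hedged illustration of such a migration check, the Python sketch below compares
records before and after a hypothetical data migration; the record sets and names are
invented, and a real test would read them from the old and new systems.

    # Hypothetical records exported from the old system and loaded
    # into the new one.
    old_records = {"cust-1": "Alice", "cust-2": "Bob", "cust-3": "Carol"}
    new_records = {"cust-1": "Alice", "cust-2": "Bob"}

    def verify_migration(old, new):
        missing = sorted(set(old) - set(new))  # records lost in migration
        changed = sorted(k for k in old
                         if k in new and old[k] != new[k])  # records altered
        if missing or changed:
            print("FAIL - missing:", missing, "changed:", changed)
        else:
            print("PASS - all", len(old), "records migrated intact")

    verify_migration(old_records, new_records)  # prints FAIL - missing: ['cust-3'] ...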

For Maintenance testing, the supporting documentation plays an important role. You
would probably need to look at the original design documentation for the software prior
to the new update. This documentation could be quite old and may not conform to the
standards that you are used to; it could easily be a poor-quality document that does not
provide enough suitable information to derive test cases from. In a worst-case scenario,
there may not be any original design documents at all.

An important part of Maintenance Testing is Regression Testing. This is because any
update to an existing system or software could adversely affect other existing
functionality. To determine how much Regression testing we should perform, we should
consider the items that have been updated or added, and their level of risk to the
functionality of the rest of the system. This is typically referred to as Impact analysis.

When performing Impact analysis, the following points should be considered when
deciding how much (regression) testing should be carried out.

What exactly are the changes?

Which existing functionality could be affected by the changes?

What are the potential risks?

Who is currently using the System? (in case of a required reboot etc.)

Should I back up any part of the current system?

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Impact analysis




Maintenance testing





You should now be familiar with the following terms:

Impact analysis
Determining how the existing system may be affected by changes is called impact
analysis, and is used to help decide how much regression testing to do.


Maintenance testing
Testing the changes to an operational system or the impact of a changed
environment to an operational system.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

Module 2 Review Questions


1) What does RAD represent?



2) Which type of testing can be termed Testing performed to expose defects in
the interfaces and in the interactions between integrated components or
systems.?



3) Which type of Testing checks the ability of the system to bear loads?



4) Is the V-Model Sequential or Iterative?



5) Which type of Integration Testing involves testing all individual modules at
once as a complete system?



6) What does verification mean?



7) What is defined as: The process of testing an integrated system to verify that
it meets specified requirements.?



8) What does validation mean?



9) What type of testing requires some knowledge of the inside of the box to
design the test cases?



10) What type of testing is predominantly performed at the developer's site?

Module 2 Review Answers



1) What does RAD represent?

RAD represents Rapid Application Development



2) Which type of testing can be termed Testing performed to expose defects in
the interfaces and in the interactions between integrated components or
systems.?

Integration Testing



3) Which type of Testing checks the ability of the system to bear loads?

Load Testing:

Testing the ability of the system to bear loads.



4) Is the V-Model Sequential or Iterative?

The V-Model is known as a Sequential development model.



5) Which type of Integration Testing approach involves testing all individual
modules at once as a complete system?

Big-Bang:

This involves testing all individual modules at once as a complete system.



6) What does verification mean?

Are we building the product right?





7) What is defined as: The process of testing an integrated system to verify that
it meets specified requirements.?

System testing



8) What does validation mean?

Are we building the right product?



9) What type of testing requires some knowledge of the inside of the box to
design the test cases?

Structural testing



10) What type of testing is predominantly performed at the developer's site?

Alpha testing









3 Static Techniques (K2)


Static techniques and the test process (K2)

Review process (K2)

Static analysis by tools (K2)


K1: remember, recognize, recall;

K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;

K3: apply, use.


3.1 Static techniques and the test process (K2)


Terms used in this section:

Dynamic testing, static testing, static technique.


Background

Static testing techniques rely on the manual examination of the software code or project
documentation, commonly referred to as reviews. Additionally, the automated analysis
of software code or project documentation (commonly referred to as static analysis) can
also be considered a static technique. This is in contrast to dynamic testing, which
requires the actual execution of the software to fulfil its purpose.

Reviews are an excellent way of testing software work products (including code) and can
be performed well before dynamic test execution. Defects detected during reviews early
in the life cycle are often much cheaper to remove than those detected while running
tests. For example:

Imagine a software product is released without any testing at all. Customers may find
defects which affect their own business. The customers would never buy software
products from the development company again, word would spread quickly, and before
long the software development company would go out of business. A dramatic example,
but it gets the point across that spending some of the development cost on testing could
save money in the long run. The cost of testing is nearly always lower than the cost of
releasing a poor-quality product. As a general rule of thumb, the earlier in the
development lifecycle a defect is found, the cheaper it is to rectify.

The following examples show the effects of when a defect is found:

Prior to testing
Additional work: Defects found at this stage will generally be documentation based.
Defects in Test Specifications and Design documents can pretty much be eliminated by
an effective review process.
Potential effect: If the defects are not found at this stage, the development would go
ahead. This could in itself create additional defects.

Just prior to the product's release
Additional work: A defect found at this stage would probably be a software defect.
There could be many reasons for this, but effective testing leading up to this stage
should prevent it from occurring.
Potential effect: When the defect is eventually found, software re-work, re-design and
additional testing would be required, wasting a considerable amount of time and
possibly delaying the product's release.

Found by a customer
Additional work: If the customer finds the defect, additional manpower would be
required to resolve the problem, involving additional development work and probably
resulting in a patch being created.
Potential effect: If the defect occurs after the product has been released, the potential
cost of the defect could be devastating.


Traditionally, reviews are performed manually, but there are some tools to support this.
The main manual activity is to examine a work product and make comments about it.
Any software work product can be reviewed, including requirements specifications,
design specifications, code, test plans, test specifications, test cases, test scripts, user
guides or web pages. Reviews can also find omissions, for example in requirements,
which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing effectively have the same objective:
identifying defects. They are complementary to each other and, with proper use, can
each find different types of defects. Typical defects that are easier to find in a review
than in dynamic testing include deviations from standards, requirement defects, design
defects, insufficient maintainability and incorrect interface specifications.



Note: compared to dynamic testing, static techniques find causes of failures (defects)
rather than the failures themselves.



Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Dynamic testing




Static testing




Static technique






You should now be familiar with the following terms:

Dynamic testing
Testing that involves the execution of the software of a component or system.


Static testing
Testing of a component or system at specification or implementation level without
execution of that software, e.g. reviews or static code analysis.


Static technique
Static testing techniques rely on the manual examination (reviews) and automated
analysis (static analysis) of the code or other project documentation.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.



3.2 Review process (K2)



Terms used in this section:

Entry criteria, formal review, informal review, inspection, metric, moderator/inspection
leader, peer review, reviewer, scribe, technical review, walkthrough.



Review: A process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users or other interested parties for comment
or approval. [IEEE]

It is important to remember that anything can be reviewed. Documentation, for example
requirement specifications, design documents and test specifications, can be, and
frequently is, reviewed. It is common knowledge that reviews are cost effective. The
actual cost of an ongoing review process is considered to be approximately 15% of the
development budget. This may at first sound quite considerable, but compared with not
performing reviews, and the associated risk of producing products containing errors, the
advantages that a review process can bring are obvious. These advantages include
development productivity improvements, reduced product development time and, above
all, a reduction in the number of defects.

Reviews commonly find errors that cannot be detected by regular testing. Reviews also
provide a form of training, both technical and standards-related, for every participant.

From a testing point of view, we can use reviews to get involved much earlier in the
development lifecycle. Obviously, at the beginning of the project there is nothing we can
physically test, but what we can do is be involved in the review process of various
documents. For example, we could get involved in the review process of the product
requirements. From our involvement at this very early stage of the development
lifecycle, we would have an idea of what will be required to be tested, which would also
give us a head start on thinking about how we might approach testing the requirements.

Ideally, a review should only take place when the source documents (documents that
specify the requirements of the product to be reviewed) and standards to which the
product must conform are available. If the source documents are not available, then the
review may be limited to just finding simple errors within the product under review.

Note: many organisations have their own practices for review techniques, all of which
may be perfectly valid without strictly adhering to a known standard.


Phases of a formal review (K1)



Planning: selecting the personnel and allocating roles; defining the entry and exit
criteria for more formal review types (e.g. inspection); and selecting which parts of
documents to look at.

Kick-off: distributing documents; explaining the objectives, process and documents to
the participants; and checking entry criteria (for more formal review types).

Preparation: work done by each of the participants on their own before the review
meeting, noting potential defects, questions and comments.

Meeting: discussion or logging, with documented results or minutes (for more formal
review types). The meeting participants may simply note defects, make
recommendations for handling the defects, or make decisions about the defects.

Rework: fixing defects found, typically done by the author.

Follow-up: checking that defects have been addressed, gathering metrics and checking
on exit criteria (for more formal review types).

Exit criteria: these can take the form of ensuring that all actions are completed, or that
any uncorrected items are properly documented, possibly in a defect tracking system.

Roles and responsibilities (K1)


A typical formal review will include the roles below:

Manager

The Manager is the person who makes the decision to hold the review. The Manager
will ultimately decide if the review objectives have been met. Managing people's time
with respect to the review is also a Manager's responsibility.


Moderator

The Moderator effectively has overall control and responsibility of the review. They will
schedule the review, control the review, and ensure any actions from the review are
carried out successfully. Training may be required in order to carry out the role of
Moderator successfully.


Author

The Author is the person who has created the item to be reviewed. The Author may also
be asked questions within the review.


Reviewer

The reviewers (sometimes referred to as checkers or inspectors) are the attendees of
the review who are attempting to find defects in the item under review. They should
come from different perspectives in order to provide a well balanced review of the item.


Scribe

The Scribe (or Recorder) records each defect mentioned and any suggestions for
process improvement during a review meeting, on a logging form. The scribe has to
ensure that the logging form is readable and understandable.

Types of review (K2)

When deciding which type of review to use on a product, we don't have to use only one
review type; we could use several. In order to provide early feedback to the author of a
document, for example, an informal review could be used against an early draft, and a
formal review could be held later on. It is important to know which types of review to
use, and when to use them, in order to get the most benefit from them and to review
products effectively.




Informal Review

An Informal review is not based on a formal (documented) procedure. This type of
review is an extremely popular choice early on in the development lifecycle of both
software and documentation. Often, pieces of work in software product development can
be lengthy, whether it's a piece of software or a detailed Test Specification. You don't
want to present your completed piece of work at a formal review, only to find that you
have completely misunderstood the requirements and wasted the last two months' work.
Think of starting a journey at point 'A' with an aim of arriving at point 'Z' when your piece
of work is complete.

This is where informal reviews can be invaluable. Why not have an informal review at
point 'B'?

Suppose you are working on creating a detailed test specification which you know will
take several weeks to complete. You have just completed the first section, and you are
thinking 'should I continue writing the rest of the specification in the same way?'. Now is
the perfect time for an informal review. You can then ascertain whether you are
travelling in the right direction, and maybe take on additional suggestions to incorporate
in the remaining work. The review is commonly performed by a peer or someone with
relevant experience, and should be informal and brief.


Summary:

Low cost
No formal process
No documentation required
Widely used review

Technical Review

A Technical review is a type of Peer review (a review of a software work product by
colleagues of the producer of the product for the purpose of identifying defects and
improvements) and is considered to be a formal type of review (one characterized by
documented procedures and requirements), even though no Managers are
expected to attend. It involves a structured encounter, in which one or more peers analyse
the work with a view to improving the quality of the original work.

The actual review itself is driven by checklists. These checklists are normally derived
from the software requirements and provide a procedure for the review. If the piece of
work is software code, the reviewer will read the code, and may even develop and run
some unit tests to check that the code works as advertised.

The documentation from the outcome of the review can provide invaluable information to
the author relating to defects. On the other side of the fence, it also provides information
to peers on how the development is being implemented. The status of the product can
also be obtained by Managers from this type of review.

Summary:

Ideally led by the Moderator
Attended by peers / technical experts
Documentation is required
No Management presence
Decision making
Solving technical problems






Walkthrough

A walkthrough is a set of procedures and techniques designed for a peer group, led by
the author, to review software code. It is considered to be a fairly informal type of Peer
review. The walkthrough takes the form of a meeting, normally between one and two hours
in length. It is recommended that between three and five people attend. The defined roles
for the walkthrough attendees would be a Moderator, a Scribe and a Tester. Who
actually attends can vary based upon availability, but a suggested list to pick
from would be:


Highly experienced programmer
Programming language expert
Less experienced programmer
Future maintenance engineer of the software
Someone external to the project
Peer programmer from the same project


It is important for the participants of the walkthrough to have access to the materials that
will be discussed several days prior to the meeting. This gives them essential time to read
through the available documentation, digest it and make some notes; in short, it
prepares them for the meeting.

When the walkthrough starts, the person acting as the Tester provides some scenario
test cases. These test cases should include a representative set of input data and also
the expected output from the program. The test data is then walked through the logic of
the program. The test cases themselves simply assist in generating dialogue
between members of the walkthrough. In effect, the test cases should provoke the
attendees into raising questions directed towards the program itself. The aim of the
walkthrough is to find fault not in the programmer but in the program itself.


Summary:

Led by the Author
Attended by a peer group
Varying level of formality
Knowledge gathering
Defect finding




Inspection

This formal type of Peer review relies on visual examination of documents to detect
defects, e.g. violations of development standards and non-conformance to higher-level
documentation. It requires preparation on the part of the review team members before the
inspection meeting takes place. A person will be in charge of the inspection process,
making sure the process is adhered to correctly. This person is called the Moderator. The
Moderator is normally a technical person and may have Quality Assurance
experience. It is also suggested that the Moderator comes from an unrelated project.
This is to ensure an unbiased approach and prevent a conflict of interests.

At the beginning of the process, the Entry criteria will be defined: a set of
generic and specific conditions for permitting a process to go forward. The aim of entry
criteria in this case is to prevent an Inspection from starting that would entail more
(wasted) effort than the effort needed to remove the failed Entry criteria. The
Moderator will be responsible for arranging the inspection meeting and inviting the
attendees. An agenda will be sent out by the Moderator to the attendees containing a
checklist of items to be dealt with at the inspection meeting. At the inspection meeting,
the producer of the item to be reviewed will present it to the meeting attendees. As the
item is presented, items on the checklist will be addressed accordingly. Someone, known
as the Scribe, will be assigned the task of documenting any findings. Inspection metrics
(a measurement scale and the method used for measurement) will also play a part
during the meeting. These are basically a set of measurements taken from the
inspection in order to assist with quality prediction and with preventing defects in the future.

When all checklist items have been addressed, the inspection meeting naturally draws to
a close. At this stage a Summary Report will be created, basically outlining what has
happened during the inspection and what is to be done next. A follow-up stage is also a
requirement of the inspection. This ensures that any re-working is carried out correctly.
Once any outstanding work has been completed and checked, either by a re-inspection or
just by the Moderator, the inspection will be considered complete.


Summary:

Led by a Moderator
Attended by specified roles
Metrics are included
Formal process
Entry and Exit Criteria
Defect finding




Walkthroughs, technical reviews and inspections can be performed within a peer group
(colleagues at the same organizational level). This type of review is referred to as a
"peer review".





Success factors for reviews (K2)

Goals:
The goals of a review should be to validate and verify the item under review against
specifications and standards, with an aim to achieve consensus.


Pitfalls:
The pitfalls of a review could be a lack of training, insufficient documentation, and a lack
of support from Management.


Success factors for reviews:

Have an objective for the review
Ensure you invite the right people to the review
Views should be expressed positively to the author
Apply tried and tested review techniques
All actions are documented clearly
Training is available in reviewing techniques if required


[Illustration: each of the success factors above is shown with a cartoon
counter-example, e.g. a Scribe relying on "a good memory" instead of
documenting actions clearly, and a Reviewer telling the Author "I will be
honest, it's the worst document I have ever seen" rather than expressing
views positively.]

Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Entry criteria




Formal review




Informal review




Inspection




Metric




Moderator/inspection leader




Peer review





Reviewer




Scribe




Technical review




Walkthrough






You should now be familiar with the following terms:

Entry criteria
The set of generic and specific conditions for permitting a process to go forward with
a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from
starting which would entail more (wasted) effort compared to the effort needed to
remove the failed entry criteria. [Gilb and Graham]


Formal review
A review characterized by documented procedures and requirements, e.g.
inspection.


Informal review
A review not based on a formal (documented) procedure.


Inspection
A type of peer review that relies on visual examination of documents to detect
defects, e.g. violations of development standards and non-conformance to higher
level documentation. The most formal review technique and therefore always based
on a documented procedure. [After IEEE 610, IEEE 1028]


Metric
A measurement scale and the method used for measurement. [ISO 14598]


Moderator/inspection leader
The leader and main person responsible for an inspection or other review
process.


Peer review
A review of a software work product by colleagues of the producer of the product for
the purpose of identifying defects and improvements. Examples are inspection,
technical review and walkthrough.


Reviewer
The person involved in the review that identifies and describes anomalies in the
product or project under review. Reviewers can be chosen to represent different
viewpoints and roles in the review process.





Scribe
The person who records each defect mentioned and any suggestions for process
improvement during a review meeting, on a logging form. The scribe has to ensure
that the logging form is readable and understandable.


Technical review
A peer group discussion activity that focuses on achieving consensus on the
technical approach to be taken. [Gilb and Graham, IEEE 1028]


Walkthrough
A step-by-step presentation by the author of a document in order to gather
information and to establish a common understanding of its content. [Freedman and
Weinberg, IEEE 1028]


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.


3.3 Static analysis by tools (K2)


Terms used in this section:

Compiler, complexity, control flow, data flow, static analysis.


Static analysis is a set of methods designed to analyse software code in an effort to find
defects prior to actually running it. As we already know, the earlier we find a defect, the
cheaper it is to fix. By using Static Analysis, a certain amount of useful testing of the
program can be achieved even before the program has reached the stage of being able to
run. The software's complexity (the degree to which a component or
system has a design and/or internal structure that is difficult to understand, maintain and
verify) can also be assessed using Static analysis.


This would obviously only find a limited number of problems, but at least it is something
that can be done early on in the development lifecycle. Similar to a review, Static
analysis will find defects rather than failures. A Static analysis tool will analyse the
program code by examining the control flow and data flow, and can generate outputs in
formats including XML and HTML. Static analysis tools are normally used by developers
during component and integration testing. Often, static analysis tools will generate a
large number of warning messages. These messages require careful management to
allow the most effective use of the tool.

Some advantages of using Static analysis are:

Finding defects before any tests are even run
Early warning of unsatisfactory/suspicious code design by the calculation of
metrics (e.g. high complexity).
Finding dependency issues, such as bad links etc.
Identifying defects that are not easily found by dynamic testing
Code maintainability improvement


The types of errors that can be detected by Static analysis tools are:

Unreachable code
Uncalled functions
Undeclared variables
Programming standard violations
Syntax errors
Inconsistent interfaces between components
!
Static Analysis can effectively test the program,
even before it has actually been run.
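
As a rough illustration, the following Python fragment (invented for this example)
contains three of the defect types listed above; a static analysis tool can report all
of them without the code ever being executed:

def calculate_discount(price):
    return price * 0.9
    print("discount applied")        # unreachable code: follows the return

def unused_helper():                 # uncalled function: nothing references it
    return 42

def total_order(prices):
    total = 0
    for p in prices:
        total += p
    return totl                      # undeclared variable: typo for 'total'
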
Compiler:

A compiler is a program or set of programs that translates text written in a computer
language (source) into another computer language (target). The input is usually called
the source code and the output called the object code. Most people use a compiler to
translate source code to create an executable program. The actual term "compiler" is
primarily used for programs that will translate source code from a high-level
programming language to a lower level language (e.g., assembly language or machine
language). A compiler is said to perform Static Analysis when it detects problems, an
example of this would be syntax errors.



Control Flow:

This refers to the sequence of events (paths) in the execution through a component or
system. Within a programming language, a control flow statement is an instruction that,
when executed, can cause the subsequent control flow to differ from the natural
sequential order in which the instructions are listed.
Some example control flow statement types, illustrated in the sketch after this list:
Continuation at a different statement (jump)

Executing a set of statements only if some condition is met (choice)

Executing a set of statements zero or more times, until a condition is met (loop)

Stopping the program, preventing any further execution (halt)
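
A minimal Python sketch, invented for this example, exercising all four statement types:

import sys

def classify(numbers):
    for n in numbers:                      # loop: repeats until the list is exhausted
        if n < 0:                          # choice: body runs only if the condition is met
            continue                       # jump: continue at a different statement
        if n > 100:
            sys.exit("value too large")    # halt: stop the program entirely
        print(n, "is in range")

classify([5, -3, 42])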


Dataflow:

Dataflow can be thought of as a representation of the sequence and possible changes of
the state (creation, usage, or destruction) of data objects. A good example of dataflow is
a spreadsheet: you can specify a cell formula which depends on other cells, and when
any of those cells is updated, the first cell's value is automatically
recalculated. It's possible for one change to initiate a whole sequence of changes, if one
cell depends on another cell which depends on yet another cell, and so on. Dataflow is
also sometimes referred to as reactive programming.
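
The spreadsheet analogy can be sketched in a few lines of Python; the cell names
and the recalculation scheme below are invented purely for illustration:

# Each cell is a plain value or a formula (a function over other cells).
sheet = {
    "A1": 10,
    "A2": 20,
    "B1": lambda get: get("A1") + get("A2"),  # B1 depends on A1 and A2
    "C1": lambda get: get("B1") * 2,          # C1 depends on B1, hence on A1 and A2
}

def get(name):
    cell = sheet[name]
    return cell(get) if callable(cell) else cell

print(get("C1"))   # 60
sheet["A1"] = 15   # one change to A1...
print(get("C1"))   # 70 ...propagates through B1 into C1 when recalculated
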
Compiler functions:

    Source language       ->  Target language
    Source code           ->  Executable program
    High-level language   ->  Low-level language

Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Compiler




Complexity




Control flow




Data flow




Static analysis





You should now be familiar with the following terms:

Compiler
A software tool that translates programs expressed in a high order language into their
machine language equivalents. [IEEE 610]


Complexity
The degree to which a component or system has a design and/or internal structure
that is difficult to understand, maintain and verify.


Control flow
A sequence of events (paths) in the execution through a component or system.


Data flow
An abstract representation of the sequence and possible changes of the state of data
objects, where the state of an object is any of: creation, usage, or destruction.
[Beizer]


Static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without
execution of these software artifacts.


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.


Module 3 Review Questions


1) The _______ is the person who is responsible for documenting issues raised
during the process of the review meeting.



2) Which type of review is also known as a peer review?



3) What types of errors can be found by Static Analysis?



4) A ___________ is a program or set of programs that translates text written in a
computer language (source) into another computer language (target).



5) List as many success factors for a review as you can.



6) List some examples of control flow statement types.



7) ________ can effectively test the program even before it has actually been run.



8) ________ commonly find errors that cannot be detected by regular
testing.



9) Reviews, static analysis and dynamic testing effectively have the same
objective. What is it?



10) The _______ will be the person who makes the decision to hold a review.

Module 3 Review Answers


1) The _______ is the person who is responsible for documenting issues raised
during the process of the review meeting.

Scribe (or recorder)



2) Which type of review is also known as a peer review?

A Technical Review (also known as a peer review)



3) What types of errors can be found by Static Analysis?

The types of errors that can be detected by Static Analysis are:

Unreachable code
Uncalled functions
Undeclared variables
Programming standard violations
Syntax errors
Inconsistent interfaces between components



4) A ___________ is a program or set of programs that translates text written in
a computer language (source) into another computer language (target).

A compiler is a program or set of programs that translates text written in a
computer language (source) into another computer language (target).



5) List as many success factors for a review as you can:

Have an objective for the review
Ensure you invite the right people to the review
Views should be expressed positively to the author
Apply tried and tested review techniques
All actions are documented clearly
Training is available in reviewing techniques if required



6) List some examples of control flow statement types.

Continuation at a different statement (jump)

Executing a set of statements only if some condition is met (choice)

Executing a set of statements zero or more times, until a condition is met
(loop)

Stopping the program, preventing any further execution (halt)



7) ________ can effectively test the program even before it has actually been run.


Static analysis can effectively test the program even before it has actually
been run.



8) ________ commonly find errors that cannot be detected by regular
testing.

Reviews commonly find errors that cannot be detected by regular
testing.



9) Reviews, static analysis and dynamic testing effectively have the same
objective. What is it?

Reviews, static analysis and dynamic testing effectively have the same
objective: identifying defects.



10) The _______ will be the person who makes the decision to hold a review.


The Manager will be the person who makes the decision to hold the
review. The Manager will ultimately decide if the review objectives have
been met. Managing people's time with respect to the review is also a
Manager's responsibility.

4 Test Design Techniques (K2)


The test development process (K2)

Categories of test design techniques (K2)

Specification-based or black-box techniques (K3)

Structure-based or white-box techniques (K3)

Experience-based techniques (K2)

Choosing test techniques (K2)


K1: remember, recognize, recall;

K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;

K3: apply, use.



4.1 The test development process (K2)



Terms used in this section:

Test case specification, test design, test execution schedule, test procedure
specification, test script, traceability.



A test development process can consist of varying levels of formality. This depends
on the context of the testing, including the organization, the maturity of the testing and
development processes, time constraints and the actual people involved with the testing.
The following section outlines the way a more formal test development process would
unfold.


Test Analysis:

During Test analysis, we need to ensure we have appropriate supporting documentation,
such as the test basis documentation. The next step is to analyse this documentation to
determine each test condition. A test condition is basically something that can be verified
by a test case, for example a function or an element. A good approach is to consider
traceability when designing test cases, which allows for impact analysis and for
determining requirements coverage. We can achieve this in a number of ways, for
example by referencing a requirement within the test case itself. This gives any reader of
the test case a good idea of why the test case has been written, and an idea of what it is
trying to achieve. During test analysis, the detailed test approach is implemented to
allow the selection of the most appropriate test design techniques to use, based on, for
example, any identified risks.


Test Design:

During test design the test cases and test data are created and specified. The test
design effectively records what needs to be tested (test objectives), and is derived from
the documents that come into the testing stage, such as designs and requirements. It
also records which features are to be tested, and how successful testing of these
features would be recognised. The aim of Test design is to transform the test objectives
into actual test conditions and test cases.

As an example, here is a shopping cart project from which the following testing
requirements may be defined:
A single item can be added to the cart
A total amount can be produced
A multiple item discount can be applied

The test design does not need to record the specific values to be used during testing,
but should simply describe the requirements for defining those values.

The test cases are produced when the test design is completed. Test cases should
ideally specify for each testing requirement:
A set of input values
Execution preconditions
Expected results
Execution post-conditions
Features from the test design do not have to be tested in separate test cases. The aim is
for a set of test cases to test each feature from the Test Design at least once. Taking the
shopping cart project example, all three requirements could actually be tested using just
two test cases:
Test Case 1 could test both that a single item could be added, and a total is
produced

Test Case 2 could check that multiple items could be added, and a discount is
applied

Furthermore, we could create a single test case to cover all requirements. But care
should be taken not to overly complicate a test case, as this can lead to
confusion, mistakes and misleading results. The test case design should allow the
tester to easily determine the aim of the test case and to execute it with
ease. Individual test cases are normally grouped together and contained in a collective
document known as a Test case specification. The Standard for Software Test
Documentation (IEEE829) describes the content of test design specifications
(containing test conditions) and test case specifications.
[Illustration: a shopping cart form with "Add to Cart" buttons for Products
001-003 and Sub-total, Discount and Total Amount fields.]
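
Here is a minimal sketch of the two test cases in Python's unittest style. The
ShoppingCart class below is a hypothetical stand-in for the real cart, and the
10% multiple-item discount is an assumed business rule for the sketch:

import unittest

class ShoppingCart:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self.items = []
    def add_item(self, name, price):
        self.items.append((name, price))
    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Assumed rule for this sketch: 10% discount on orders of 3+ items
        return subtotal * 0.9 if len(self.items) >= 3 else subtotal

class TestShoppingCart(unittest.TestCase):
    def test_single_item_and_total(self):
        # Covers: a single item can be added; a total amount can be produced
        cart = ShoppingCart()
        cart.add_item("Product 001", price=10.00)
        self.assertEqual(cart.total(), 10.00)

    def test_multiple_items_and_discount(self):
        # Covers: a multiple item discount can be applied
        cart = ShoppingCart()
        for product in ("Product 001", "Product 002", "Product 003"):
            cart.add_item(product, price=10.00)
        self.assertAlmostEqual(cart.total(), 27.00)

if __name__ == "__main__":
    unittest.main()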



Expected Results:

One of the most important aspects of any kind of testing is the expected result. If we do
not know what the expected result should be prior to testing, then we cannot really
say whether the test has passed or failed. Often expected results are based on results
from previous iterations of the test cases; this is a good way of defining expected results,
as we can directly compare new results against old results.

When new test cases are created, and we are not entirely sure of the expected result, a
good place to look would be in the design documentation related to the new
software/feature. Further information could be obtained from the design documentation
author.

The way in which the expected result is worded is also very important. You need
to be as specific as possible and describe exactly what is expected to happen. This
ensures the expected result cannot be misinterpreted and prevents confusion.



Example of a bad expected result:

The sub-total on the graphical user interface is correct.

The above example is far too vague, as different testers may have a different idea of
what is correct. Describe exactly what you want to be checked!


Example of a good expected result:

The sub-total displays a figure equal to the sum of all three input boxes to two decimal
places.



Test Implementation:

During test implementation, the test cases are developed, implemented, prioritized and
organized in the test procedure specification. The test procedure (often referred to as
test script or manual test script) specifies the sequence of action for the execution of a
test.

Sometimes, the tests will be run using a test execution tool. In this situation the
sequence of actions would be specified in a test script, which is effectively an automated
version of the test procedure. The various test procedures and automated test scripts
are then formed into a test execution schedule. The test execution schedule defines the
order in which the various test procedures and/or automated test scripts are executed,
when they are to be run and by which person. The test execution schedule should also
take into account factors such as regression tests, prioritization, and technical and
logical dependencies.

Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Test case specification




Test design




Test execution schedule




Test procedure specification




Test script




Traceability





You should now be familiar with the following terms:

Test case specification
A document specifying a set of test cases (objective, inputs, test actions, expected
results, and execution preconditions) for a test item. [After IEEE 829]


Test design
The process of transforming general testing objectives into tangible test conditions
and test cases.


Test execution schedule
A scheme for the execution of test procedures. The test procedures are included in
the test execution schedule in their context and in the order in which they are to be
executed.


Test procedure specification
A document specifying a sequence of actions for the execution of a test. Also known
as test script or manual test script. [After IEEE 829]


Test script
Commonly used to refer to a test procedure specification, especially an automated
one.


Traceability
The ability to identify related items in documentation and software, such as
requirements with associated tests.


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.

4.2 Categories of test design techniques (K2)


Terms used in this section:

Black-box test design technique, experience-based test design technique,
specification-based test design technique, structure-based test design technique,
white-box test design technique.


The function of a good test design technique is to identify the test conditions and test
cases to effectively test a product. We commonly place test design techniques in one of
two categories; these are Black-box and White-box techniques.


Black box test design techniques:

These techniques are a way to derive and select tests based upon the test basis
documentation. They can also be based upon the experience of the testers, developers
and even end users. The tests can be either functional or non-functional, as long as no
reference is made to the internal workings of the code or system.


White-box test design techniques:

These techniques are also referred to as structural or structure-based techniques. They
are based upon an analysis of the structure of the component or system. This can be
thought of as the internal workings of the software or system.

Sometimes we have techniques that do not clearly fit into one of these categories.
Although these techniques have different, or overlapping, qualities, you will find that the
following techniques are commonly associated with a specific category.



Specification-based techniques   ->  Black-box
Structure-based techniques       ->  White-box
Experience-based techniques      ->  Black-box

Common features of specification-based test design techniques:

Models, either formal or informal, are used for the specification of the problem to
be solved, the software or its components.
From these models test cases can be derived systematically.
Considered a Black-box test design technique.



Common features of structure-based test design techniques:

Information about how the software is constructed is used to derive the test
cases, for example, code and design.
The extent of coverage of the software can be measured for existing test cases,
and further test cases can be derived systematically to increase coverage.
Considered a White-box test design technique.



Common features of experience-based test design techniques:

The knowledge and experience of people are used to derive the test cases.
Knowledge of testers, developers, users and other stakeholders about the
software, its usage and its environment.
Knowledge about likely defects and their distribution.
Considered a Black-box test design technique.



Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Black-box test design technique




Experience-based test design technique




Specification-based test design technique




Structure-based test design technique




White-box test design technique





You should now be familiar with the following terms:

Black-box test design technique
Procedure to derive and/or select test cases based on an analysis of the
specification, either functional or non-functional, of a component or system without
reference to its internal structure.


Experience-based test design technique
Procedure to derive and/or select test cases based on the testers experience,
knowledge and intuition.


Specification-based test design technique
(Same description as Black-box test design technique) Procedure to derive and/or
select test cases based on an analysis of the specification, either functional or non-
functional, of a component or system without reference to its internal structure.


Structure-based test design technique
(Same description as White-box test design technique) Procedure to derive and/or
select test cases based on an analysis of the internal structure of a component or
system.


White-box test design technique
Procedure to derive and/or select test cases based on an analysis of the internal
structure of a component or system.


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.

4.3 Specification-based or black-box techniques (K3)


Terms used in this section:

Boundary value analysis, decision table testing, equivalence partitioning, state
transition testing, use case testing.


In this section we will focus on the techniques used by Specification-based or Black-box
Testing methodology.


Equivalence partitioning (K3)

Imagine you were assigned the task of manually testing a software program that
processed product orders. The following information is an extract of what is to be
processed by the program:

Order Number: 0 - 10000

If you decided you wanted to test this thoroughly, it would take a large amount of effort,
as you would need to enter every number between 0 and 10000.

An alternative to the time-consuming goal of testing every possible value is the Black-
box test design technique called Equivalence partitioning. What this method allows you
to do is effectively partition the possible program inputs. For each of the input fields, it
should not matter which values are entered, as long as they are within the correct range
and of the correct type, because they should all be handled the same way by the
program (this would obviously need to be clarified with the developer, as it is an important
assumption). For example, you could test the order number by choosing just a few
random values in the specified range:


So, the point of Equivalence partitioning is to reduce the amount of testing by choosing a
small selection of the possible values to be tested, as the program should handle them
all in the same way.
Range        Test Values
0 - 10000    7, 409, 9450
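
A minimal sketch of the idea in Python, drawing representatives from each partition
(the valid range, below it, and above it); the validate_order_number function is a
hypothetical stand-in for the real program:

def validate_order_number(n):
    """Hypothetical stand-in for the program's input handling."""
    return 0 <= n <= 10000

valid_partition = [7, 409, 9450]   # representatives of the range 0 - 10000
below_partition = [-500]           # representative of values below the range
above_partition = [20000]          # representative of values above the range

for value in valid_partition:
    assert validate_order_number(value), f"{value} should be accepted"

for value in below_partition + above_partition:
    assert not validate_order_number(value), f"{value} should be rejected"

print("All equivalence partitioning tests passed")
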
Boundary value analysis (K3)

By the use of Equivalence partitioning, a tester can perform effective testing without
testing every possible value. This method can be enhanced further by a Black-box test
design technique called Boundary value analysis. In time, an experienced Tester will
realise that problems often occur at the boundaries of the input and output
spaces. When testing only a small number of possible values, the minimum and
maximum possible values should be amongst the first items to be tested.

Order Number: 0 - 10000

For the order number, we would test 0 as the minimum value and 10000 as the
maximum value.




So, those values are what the software program would expect. But let's approach this in
a more negative way. Let's add tests that are effectively out of range, i.e. -1 and
10001. This gives us confidence that the range is clearly defined and that unexpected
values are handled correctly. A sketch of the resulting boundary tests follows.
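
This is a minimal sketch, reusing the hypothetical validate_order_number stand-in
from the equivalence partitioning example:

def validate_order_number(n):
    """Hypothetical stand-in for the program's input handling."""
    return 0 <= n <= 10000

# Boundary value analysis for the range 0 - 10000:
assert not validate_order_number(-1)       # just below the lower boundary
assert validate_order_number(0)            # on the lower boundary
assert validate_order_number(10000)        # on the upper boundary
assert not validate_order_number(10001)    # just above the upper boundary

print("All boundary value tests passed")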













?
Need more help?
Please read Worksheet B included with this package.

[Diagram: the Order Number range 0 - 10000, with boundary test values -1 and 0
at the lower boundary and 10000 and 10001 at the upper boundary.]

Decision table testing (K3)

Decision tables are a Black-box test design technique used as a way to capture system
requirements that contain logical conditions, and also as a method to document
internal system designs. They are created by first analyzing the specification, from which
conditions and subsequent system actions can be identified. These input conditions
and actions are commonly presented in a true or false way, referred to as Boolean.



The decision table will have within it all of the conditions that can be triggered, as well as
all the possible actions resulting from the triggered conditions. Each column in the
decision table corresponds to a specific rule that defines a combination of
conditions, which in turn results in the execution of an associated action. Ideally, the
coverage should aim to have at least one test per column, to achieve the goal of covering
all combinations of triggering conditions.

Example decision table:

Fax Machine Troubleshooter
                                      Rules
Conditions:
  Fax is not read                     Y  Y  Y  Y  N  N  N  N
  Warning light is flashing           Y  Y  N  N  Y  Y  N  N
  Fax number is not recognized        Y  N  Y  N  Y  N  Y  N
Actions:
  Check fax machine is powered on     X
  Ensure correct number is entered    X  X  X  X
  Check phone line is connected       X  X  X  X
  Check paper is not jammed           X  X


Decision Tables are typically divided into four quadrants:

    Condition Statements  |  Condition Entries
    ----------------------+---------------------
    Action Statements     |  Action Entries

The upper half lists the conditions being tested, while the
lower half lists the possible actions to be taken. Each of the
columns represents a specific rule.
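
One way to turn such a table into tests is to drive a check from the rule columns,
one test per rule. The sketch below is invented for illustration: the troubleshoot
function is a hypothetical system under test, and the expected actions are
illustrative rather than taken from the table above:

# Conditions: (fax_not_read, light_flashing, number_not_recognized) -> expected actions
rules = {
    (True,  True,  True):  {"check phone line", "ensure correct number"},
    (True,  True,  False): {"check phone line", "check paper"},
    # ... one entry per remaining column of the decision table
}

def troubleshoot(fax_not_read, light_flashing, number_not_recognized):
    """Hypothetical system under test: returns the set of suggested actions."""
    actions = set()
    if number_not_recognized:
        actions.add("ensure correct number")
    if fax_not_read:
        actions.add("check phone line")
    if fax_not_read and light_flashing and not number_not_recognized:
        actions.add("check paper")
    return actions

for conditions, expected in rules.items():
    assert troubleshoot(*conditions) == expected, f"rule {conditions} failed"
print("All decision table rules tested")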

State transition testing (K3)

This type of Black-box test design technique is based on the concepts of states and
finite-state machines, and relies on the tester being able to view the software's states,
the transitions between states, and what will trigger a state change. Test coverage can
include tests designed to cover a typical sequence of states, to cover every state, to
exercise every transition, to exercise specific sequences of transitions or to test invalid
transitions.


Example state transition table:

from State \ to State
            Received  Denied  Pending  Active  Dormant  Closed
Received       N        Y       Y        Y       N        N
Denied         Y        N       N        Y       N        N
Pending        N        Y       N        Y       N        N
Active         N        N       N        N       Y        Y
Dormant        N        N       N        Y       N        Y
Closed         N        N       N        Y       N        N

The example table above is based on the possible states of an electric company's
customer account system, and indicates which state transitions are possible and which
are impossible. If a state transition is possible, a Y is placed in the cell where the "from"
and "to" states intersect. If a state transition is not possible, an N is placed in the cell
where the "from" and "to" states intersect.
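
A sketch of how the table can drive tests: the valid transitions below are copied
from the table, while the transition function is a hypothetical stand-in for the
account system. Every cell of the table is exercised, both possible and impossible:

STATES = ["Received", "Denied", "Pending", "Active", "Dormant", "Closed"]

# valid[from_state] = set of legal 'to' states, taken from the table above
valid = {
    "Received": {"Denied", "Pending", "Active"},
    "Denied":   {"Received", "Active"},
    "Pending":  {"Denied", "Active"},
    "Active":   {"Dormant", "Closed"},
    "Dormant":  {"Active", "Closed"},
    "Closed":   {"Active"},
}

def transition(current, target):
    """Hypothetical system under test: accept or reject a state change."""
    if target not in valid[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

for src in STATES:
    for dst in STATES:
        if dst in valid[src]:
            assert transition(src, dst) == dst          # possible transition succeeds
        else:
            try:
                transition(src, dst)                    # impossible transition must fail
                raise AssertionError(f"{src} -> {dst} should have been rejected")
            except ValueError:
                pass
print("All state transitions checked")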









?
Need more help?
Please read Worksheet B included with this package.

Use Case Testing (K2)

A Use case is defined as:

A sequence of transactions in a dialogue between a user and the system with a tangible
result.

Standard glossary of terms used in Software Testing


Put another way, a use case is basically a description of a system's behaviour as it
responds to a request that comes from outside of that system. Use cases effectively
describe the interaction between a Primary actor (the interaction initiator) and
the system itself, and are normally represented as a set of clear steps. An Actor is
basically something or someone from outside the system that participates
in a sequence of activities with the system to achieve some goal. For example, an Actor
could be another system or an end user.

A Use case will typically have preconditions, which need to be met for the use case to
work successfully. Additionally, post-conditions exist to ensure the Use case has some
kind of observable result and that the system is in a defined state after the use case has
been completed.

Test cases can be derived from Use cases with the purpose of finding defects by using the
product in a similar way to how it will be used in the real world. For example, you may have
a piece of software that processes banking transactions, and you may have tested that the
software can process every expected type of transaction. But the customer may require
that they process 10 transactions at once, so from that information we could create a
Use case test, as in the sketch below.
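
A minimal sketch of such a use case test in Python; the process_batch function and
the transaction format are hypothetical stand-ins:

def process_batch(transactions):
    """Hypothetical system under test: process a batch of transactions."""
    return [{"id": t["id"], "status": "processed"} for t in transactions]

# Use case: a customer submits 10 transactions at once.
# Precondition: ten well-formed transactions are prepared.
batch = [{"id": i, "amount": 100 + i} for i in range(10)]

results = process_batch(batch)

# Post-condition: every transaction in the batch has an observable result.
assert len(results) == 10
assert all(r["status"] == "processed" for r in results)
print("Use case test passed: batch of 10 transactions processed")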

Use cases are considered to be a Black-box design technique; they not only describe
the way that the product is likely to be used, but also describe the way that
processes flow through the system. Also referred to as scenarios, Use cases are a
valuable way of finding defects associated with integration.


!
Use Cases are also known as Scenarios
!
It is common for Use cases to be used when designing
Acceptance tests, and this may involve the
customer or end user.


Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Boundary value analysis




Decision table testing




Equivalence partitioning




State transition testing




Use case testing





You should now be familiar with the following terms:

Boundary value analysis
A black box test design technique in which test cases are designed based on
boundary values.


Decision table testing
A black box test design technique in which test cases are designed to execute the
combinations of inputs and/or stimuli (causes) shown in a decision table.
[Veenendaal]


Equivalence partitioning
A black box test design technique in which test cases are designed to execute
representatives from equivalence partitions. In principle test cases are designed to
cover each partition at least once.


State transition testing
A black box test design technique in which test cases are designed to execute valid
and invalid state transitions.


Use case testing
A black box test design technique in which test cases are designed to execute user
scenarios.


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.

4.4 Structure-based or white-box techniques (K3)


Terms used in this section:

Code coverage, decision coverage, statement coverage, structure-based testing.


In this section we will focus on the techniques used by Structure-based (sometimes
referred to as White-box) techniques as seen in the following example areas:

Component level: the structure of the actual code itself, for example, statements,
decisions or branches.

Integration level: the structure may be a call tree (a diagram in which modules
call other modules).

System level: the structure may be a menu structure, business process or even a
web page structure.





In this section, two code-related structural techniques for code coverage, based on
statements and decisions, are discussed. Code-coverage is an analysis method that
determines which parts of the software have been executed (covered) by the test suite
and which parts have not been executed, e.g. statement coverage, decision coverage or
condition coverage etc.
!
Remember: Structure-based testing is based on an analysis
of the internal structure of the component or system.

Statement testing and coverage (K3)

This testing method involves using a model of the source code which identifies
statements. These statements are then categorized as being either executable or non-
executable. In order to use this method, the input to each component must be identified,
each test case must be able to identify each individual statement, and the
expected outcome of each test case must be clearly defined.

A statement should be executed completely or not at all. For instance:

IF x THEN y ENDIF

The above example is considered to be more than one statement, because y may or
may not be executed, depending upon the condition x.


Code example:

Read a
Read b
IF a+b > 100 THEN
Print "Combined values are large"
ENDIF
IF a > 75 THEN
Print "Input a is large"
ENDIF


It is often useful to work out how many test cases will be required to fully test a program
for statement coverage. Using the above example, we can determine that we need only
one test case to achieve full statement coverage.

The test would be to set variable a to a value of 76 and variable b to 25, for example.
This would then exercise both of the IF statements within the program.


To calculate statement coverage of a program we can use the following formula:

    Statement Coverage =  (number of executable statements executed /
                           total number of executable statements) * 100%
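
A Python rendering of the pseudocode above (the function wrapper is added for the
sketch); the single test input executes every statement, giving 100% statement coverage:

def check_values(a, b):
    if a + b > 100:
        print("Combined values are large")
    if a > 75:
        print("Input a is large")

# One test case covers every executable statement:
# 76 + 25 = 101 > 100 triggers the first print, and 76 > 75 the second.
check_values(76, 25)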



?
Need more help?
Please read Worksheet A included with this package.

Decision testing and coverage (K3)

This test method uses a model of the source code which identifies individual decisions
and their outcomes. A decision is defined as an executable statement containing
its own logic. This logic may also have the capability to transfer control to another
statement. Each test case is designed to exercise the decision outcomes. In order to use
this method, the input to each component must be identified, each test case must
be able to identify each individual decision, and the expected outcome of each test
case must be clearly defined.

Branch coverage measures the number of executed branches. A branch is an outcome
of a decision, so an IF statement, for example, has two branches: True and
False. Remember, though, that code that has 100% branch coverage may still contain
errors.


To calculate the decision coverage of a program we can use the following formula:

    Decision Coverage =  (number of executed decision outcomes /
                          total number of decision outcomes) * 100%


Code example:

Read a
Read b
IF a+b > 100 THEN
Print "Combined values are large"
ENDIF
IF a > 75 THEN
Print "Input a is large"
ENDIF


Using the above example again, we can determine that we need two test cases to
fully test for branch coverage. The first test could again set variable a to a value
of 76 and variable b to 25; this exercises both of the True
outcomes of the IF statements within the program. A second test could set
variable a to a value of 70 and variable b to 25; this then
exercises both of the False outcomes of the IF statements within the program.
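
Continuing the Python sketch from the statement coverage section, the two test
cases together exercise all four decision outcomes (4/4, i.e. 100% decision coverage):

def check_values(a, b):
    if a + b > 100:                        # decision 1: True / False
        print("Combined values are large")
    if a > 75:                             # decision 2: True / False
        print("Input a is large")

check_values(76, 25)   # 101 > 100 and 76 > 75: both True outcomes
check_values(70, 25)   # 95 <= 100 and 70 <= 75: both False outcomes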

Decision testing can be considered a form of control flow testing as it generates a
specific flow of control through the decision points. Decision coverage is considered to
be stronger than statement coverage based on the following statement:

!
100% decision coverage guarantees 100% statement
coverage, but not vice versa.

In order to assist with determining the alternatives of a decision, Control flow diagrams
can be used. The diagrams show the pure logic of the structure, for example:


Some examples of common control structures:

[Diagrams: control flow graphs for IF, IF-ELSE and WHILE structures, with a
key showing nodes, edges and regions.]

Other structure-based techniques (K1)

In addition to Decision coverage, certain more sophisticated levels of structural coverage
exist, for example:


Condition Coverage:

Condition coverage reports the true or false outcome of each Boolean sub-expression.
Condition coverage will measure the sub-expressions independently of each other. The
level of measurement is similar to decision coverage, but it has a greater sensitivity to
the control flow.
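
For instance, here is a small Python sketch (invented for this example) of a decision
with two Boolean sub-expressions. Two tests give each sub-expression both a True and
a False outcome:

def grant_discount(age, is_member):
    # Decision with two Boolean sub-expressions: (age > 65) and (is_member)
    if age > 65 or is_member:
        return True
    return False

grant_discount(70, False)   # age > 65 is True,  is_member is False
grant_discount(30, True)    # age > 65 is False, is_member is True

Note that these two tests achieve 100% condition coverage even though the decision
as a whole evaluates to True in both cases, which is why condition coverage and
decision coverage are measured separately.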


Multiple Condition Coverage:
Multiple condition coverage uses test cases that ensure each possible combination of
inputs to a decision is executed at least once. This is achieved by exhaustive testing of
the input combinations to a decision. At first glance this seems thorough, which
indeed it is, but it has the drawback of being impractical, due to the number of test cases
required to test a complex program or system.

The idea behind coverage can also be applied at other test levels, for example the
integration level, where the percentage of modules, components or classes that have
been exercised by a set of test cases could be expressed as module, component or
class coverage. Tools are also available to assist with the structural testing of code.




Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Code coverage




Decision coverage




Statement coverage




Structure-based testing






You should now be familiar with the following terms:

Code coverage
An analysis method that determines which parts of the software have been executed
(covered) by the test suite and which parts have not been executed, e.g. statement
coverage, decision coverage or condition coverage.


Decision coverage
The percentage of decision outcomes that have been exercised by a test suite. 100%
decision coverage implies both 100% branch coverage and 100% statement
coverage.


Statement coverage
The percentage of executable statements that have been exercised by a test suite.


Structure-based testing
(Same description as white-box testing) Testing based on an analysis of the internal
structure of the component or system.


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.

4.5 Experience-based techniques (K2)


Terms used in this section:

Exploratory testing, fault attack.


In this section we will focus on the techniques used by Experience-based testing,
commonly associated with the Black-box Testing methodology.


Error Guessing:

Why can one Tester find more errors than another Tester in the same piece of software?

More often than not, this is down to a technique called Error Guessing. To be successful
at Error Guessing, a certain level of knowledge and experience is required. A Tester can
then make an educated guess at where potential problems may arise. This could be
based on the Tester's experience with a previous iteration of the software, or just a level
of knowledge in that area of technology.

This test case design technique can be very effective at pin-pointing potential problem
areas in software. It is often used by creating a list of potential problem
areas/scenarios, then producing a set of test cases from it. This approach can often find
errors that would otherwise be missed by a more structured testing approach.

An example of how to use the Error Guessing method would be to imagine you had a
software program that accepted a ten digit customer code. The software was designed
to only accept numerical data.


Here are some example test case ideas that could be considered as Error Guessing:





Input of a blank entry

Input of greater than ten digits

Input of mixture of numbers and letters

Input of identical customer codes
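
These ideas translate directly into tests. In the sketch below, validate_customer_code
is a hypothetical stand-in for the real program's input handling:

def validate_customer_code(code, existing_codes=()):
    """Hypothetical stand-in: accept only a new, ten-digit numeric code."""
    return len(code) == 10 and code.isdigit() and code not in existing_codes

assert not validate_customer_code("")               # blank entry
assert not validate_customer_code("12345678901")    # more than ten digits
assert not validate_customer_code("12345ABCDE")     # mixture of numbers and letters
assert not validate_customer_code("1234567890",
                                  existing_codes=("1234567890",))  # identical code
assert validate_customer_code("1234567890")         # well-formed code is accepted
print("All error-guessing tests passed")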


What we are effectively trying to do when designing Error Guessing test cases is to
think about what could have been missed during the software design. This testing
approach should only be used to complement an existing formal test method, and should
not be used on its own, as it cannot be considered a complete form of testing software.
A more structured use of Error Guessing is to create a list of possible failures, and then
design test cases that specifically focus effort on particular areas of functionality with a
view to evaluating their reliability. The aim of this is to attempt to force the failures to
occur. This approach is termed Fault Attack.



Exploratory Testing:

This informal test design technique is typically governed by time. It consists of using
tests based on a test charter that contains test objectives. It is most effective when there
are few or no specifications available. It should only really be used to assist with, or
complement, a more formal approach. It can basically ensure that major functionality is
working as expected without fully testing it. The tester can also use the information
gained while testing to design new and better tests for the future.



Random Testing:

A Tester typically selects test input data from what is termed an input domain. Random
testing is simply when the tester selects data from the input domain randomly. In order
for random testing to be effective, there are some important open questions to be
considered:

Is the chosen random data sufficient to prove the module meets its specification
when tested?

Should random data only come from within the input domain?

How many values should be tested?


As you can tell, there is little structure involved in Random testing. In order to avoid
dealing with the above questions, a more structured Black-box test design could be
implemented instead. However, using a random approach could save valuable time and
resources if used in the right circumstances. There has been much debate over the
effectiveness of using random testing techniques compared with some of the more
structured techniques. Most experts agree that using random test data provides little
chance of producing an effective test.

There are many tools available today that are capable of selecting random test data from
a specified data value range. This approach is especially useful when it comes to tests
at the system level. You will often find in the real world that Random Testing is
used in association with other structured techniques, to provide a compromise between
targeted testing and testing everything.
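
A minimal random-testing sketch in Python, reusing the hypothetical
validate_order_number stand-in; the seed is fixed so any failure can be reproduced:

import random

def validate_order_number(n):
    """Hypothetical stand-in for the program's input handling."""
    return 0 <= n <= 10000

random.seed(42)   # fixed seed so failures can be reproduced

# Draw random samples from within (and just outside) the input domain:
for _ in range(100):
    value = random.randint(-100, 10100)
    expected = 0 <= value <= 10000   # oracle; in practice, from the specification
    assert validate_order_number(value) == expected, f"failed for {value}"
print("100 random tests passed")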

Know your terms

Write in your own words the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Exploratory testing




Fault attack





You should now be familiar with the following terms:

Exploratory testing
An informal test design technique where the tester actively controls the design of the
tests as those tests are performed and uses information gained while testing to
design new and better tests. [After Bach]


Fault attack
Directed and focused attempt to evaluate the quality, especially reliability, of a test
object by attempting to force specific failures to occur.


Term descriptions are extracts from the 'Standard glossary of terms used in Software Testing' version 2.0, written by
the Glossary Working Party of the International Software Testing Qualifications Board.

4.6 Choosing test techniques (K2)


Careful consideration should be taken when it comes to choosing a Test technique, as
the wrong decision could result in a meaningless set of results, undiscovered critical
defects, and so on. We can make a judgement on which techniques to use if we have tested a
similar product before. That's where the importance of good test documentation comes in,
as we could quickly ascertain the right techniques to use based on what was the most
productive in previous testing projects.

If we are testing something new, then the following list contains points to consider when
choosing a technique:


Are there any regulatory standards involved?

Is there a level of risk involved?

What is the test objective?

What documentation is available to us?

What is the Testers level of knowledge?

How much time is available?

Do we have any previous experience testing a similar product?

Are there any customer requirements involved?


These are a sample of the type of questions you should be asking and are in no
particular order of importance. Some of them can be applied only to certain testing
situations, while some of them can be applied to every situation.










Module 4 Review Questions



1) Using the following example, list some possible Boundary Testing and
Equivalence Partitioning test data values:

xx = 1 - 100

yy = 500 - 1000



2) Which type of Testing is commonly based on typical scenarios from the receiver
of the developed product?



3) Structure-based Testing is also known as what?



4) Give examples of what may be included in a test case using Error Guessing.



5) Which test technique reports the true or false outcome of each Boolean sub-
expression?



6) The following questions are associated with which testing task?

Are there any regulatory standards involved?
Is there a level of risk involved?



7) Should exploratory testing be used in isolation when testing software?


8) What are the four quadrants of a decision table?


9) The function of a good __________________ is to identify the test conditions
and test cases to effectively test a product.


10) The test procedure is often referred to as a what?

Module 4 Review Answers



1) Using the following example, list some possible Boundary Testing and
Equivalence Partitioning test data values:

xx = 1 - 100
yy = 500 - 1000


Equivalence Partitioning examples:
2, 66, 88 etc.
507, 777, 993 etc.

Boundary Value Analysis examples:
0, 1, 100, 101
499, 500, 1000, 1001
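
These values can also be derived mechanically. Below is a minimal Python sketch using
the same two-value boundary convention as the answers above (one value either side of
each boundary); the function and variable names are invented:

    def equivalence_and_boundary_values(low, high):
        """Return one representative value from the valid partition plus
        the classic boundary values around the range [low, high]."""
        partition_sample = (low + high) // 2   # any value inside the range
        boundaries = [low - 1, low, high, high + 1]
        return partition_sample, boundaries

    for low, high in [(1, 100), (500, 1000)]:
        sample, boundaries = equivalence_and_boundary_values(low, high)
        print(f"range {low}-{high}: partition sample {sample}, "
              f"boundaries {boundaries}")

Run as-is, this prints boundaries [0, 1, 100, 101] and [499, 500, 1000, 1001], matching
the answers above.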



2) Which type of Testing is commonly based on typical scenarios from the receiver
of the developed product?

Use Case Testing



3) Structure-based Testing is also known as what?

White-box Testing



4) Give examples of what may be included in a test case using Error Guessing.

Here are some example test case ideas that could be considered as Error
Guessing:

Input of a blank entry

Input of greater than ten digits

Input of mixture of numbers and letters

Input of identical customer codes



5) Which test technique reports the true or false outcome of each Boolean sub-
expression?

Condition Coverage


6) The following questions are associated with which testing task?

Are there any regulatory standards involved?
Is there a level of risk involved?

Choosing a test technique



7) Should exploratory testing be used in isolation when testing software?

It should only really be used to assist with, or complement, a more formal
approach.



8) What are the four quadrants of a decision table?

Condition Statements
Condition Entries
Action Statements
Action Entries
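
To make the four quadrants concrete, here is a small invented decision table (a
hypothetical cash-withdrawal example; the conditions, actions and rules are illustrative
only):

                             Rule 1   Rule 2   Rule 3
    Condition statements   (condition entries)
      Card valid?               T        T        F
      PIN correct?              T        F        -
    Action statements      (action entries)
      Dispense cash             X        -        -
      Reject transaction       -        X        X

The condition and action statements label the rows on the left; the condition and action
entries fill in the columns under each rule.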



9) The function of a good __________________ is to identify the test conditions
and test cases to effectively test a product.

The function of a good test design technique is to identify the test conditions and
test cases to effectively test a product.



10) The test procedure is often referred to as a what?

The test procedure is often referred to as a manual test script.


5 Test Management (K3)


Test organization (K2)

Test planning and estimation (K2)

Test progress monitoring and control (K2)

Configuration management (K2)

Risk and testing (K2)

Incident management (K3)


K1: remember, recognize, recall;

K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;

K3: apply, use.



5.1 Test organization (K2)


Terms used in this section:

Tester, test leader, test manager.


Test Organization (K2)

Most companies will choose an organisational structure that is specific to their own
testing requirements. There could even be multiple organisational structures for different
stages of testing. A developer is normally responsible for testing at the component level.
Using the V-Model approach, as you progress up the right-hand side of the model,
changes arise as to who will perform the testing; for example, when Systems testing
is performed, a separate team may be used. Acceptance testing, on the other hand,
could be performed by developers, another department, or even another company. So
as you can see, there are no hard and fast rules as to who can perform which type of
testing. It is important that whoever performs the testing is made aware of their
testing responsibilities.


The following list represents some examples of the type of people/roles within an
organization that may be tasked with performing some specific type of testing:

Project manager

Quality manager

Developer

Business expert

Domain expert

Infrastructure

IT operations

Test Independence (K2)

At first glance it may seem logical for a Developer to test their own work. It would
probably be quicker, and it may even save money. But unfortunately there are several
factors that adversely influence a Developer testing their own work:

It is not in the best interest of the Developer to find defects in their own work
Obvious defects will not be easily spotted in their own piece of work
It's easy to incorrectly assume parts of the software are correct, when in fact they are not

An independent tester could avoid the pitfalls mentioned above:

Their job is to find defects in Developers' software
It is much easier to spot defects in somebody else's software
A good Tester never assumes; they check for themselves

Some drawbacks though are:

They can become isolated from the development team
The Developer may lose their sense of responsibility
An independent tester may become a bottleneck in the project

If testing is to be performed by an independent tester, then there are several options for
testing independence as follows:

No independent testers, developers test their own code
Independent testers within a development team
An independent test team within the organization that reports to the project manager
Independent testers from the business organization or user community
Specifically skilled Independent test specialists
Outsourced independent testers
Tasks of the test leader and tester (K1)

It is always a good idea to have a multi-disciplined Test Team. Situations will always
arise during a project where individual testers' skills will be called upon. Having
multi-skilled testers brings a level of balance to the team. Let's now take a look at two
specific individual roles within the organisation:


The Test Leader:

The Test leader may also be called a Test manager or Test coordinator. A Test leader's
role can also be performed by a Project manager, a Development manager, a Quality
assurance manager or the manager of a test group. Larger projects may require the
role to be split between a Test leader and a Test manager. The Test leader will
typically plan the testing, whilst the Test manager monitors and controls the testing activities.
Ideally, a Test leader would come from a testing background and have a full
understanding of how testing is performed. They will also possess good managerial
expertise. They are also responsible for ensuring that test coverage is sufficient and will
be required to produce reports. The following list shows some example activities you
might expect a Test leader to perform:

Coordinate the test strategy and plan with project managers and others.

Write or review a test strategy for the project, and test policy for the organization.

Contribute the testing perspective to other project activities, such as integration
planning.

Plan the tests, considering the context and understanding the test objectives
and risks, including selecting test approaches, estimating the time, effort and
cost of testing, acquiring resources, defining test levels and cycles, and planning
incident management.

Initiate the specification, preparation, implementation and execution of tests,
monitor the test results and check the exit criteria.

Adapt planning based on test results and progress (sometimes documented in
status reports) and take any action necessary to compensate for problems.

Set up adequate configuration management of testware for traceability.

Introduce suitable metrics for measuring test progress and evaluating the quality
of the testing and the product.

Decide what should be automated, to what degree, and how.

Select tools to support testing and organize any training in tool use for testers.

Decide about the implementation of the test environment.

Write test summary reports based on the information gathered during testing.

The Tester:

Each individual software development project may require numerous testing levels and
will have varying associated risks. This can influence who performs the role of the Tester
whilst keeping a level of independence. For example:

Component and Integration Testing performed by Developers

Acceptance Testing performed by Business experts and Users

Operational Acceptance Testing performed by Operators


Over recent years the importance of testing has increased, giving rise to an increase in
the number of dedicated professional software testers. Today, a Tester is known as a
skilled professional who is involved in the testing of a component or system, and may
specialise in areas including test analysis, test design or test automation. The following
list shows some example activities you might expect a Tester to perform:

Review and contribute to test plans

Analyze, review and assess user requirements, specifications and models for
testability

Create test specifications

Set up the test environment (often coordinating with system administration and
network management)

Prepare and acquire test data

Implement tests on all test levels, execute and log the tests, evaluate the results
and document the deviations from expected results


Use test administration or management tools and test monitoring tools as
required

Automate tests (may be supported by a developer or a test automation expert)

Measure performance of components and systems (if applicable)

Review tests developed by others

Other Testing Roles (*Note: not a requirement of the exam syllabus):

The Client
The client is effectively the project sponsor, and will provide the budget for the project.
The Client can also be the business owner.

The Project Manager
Management skills are provided by the Project Manager. The Project Manager will be
actively involved throughout the project and will provide feedback to the client.

The User
The User will provide knowledge from the existing system/software and will define
requirements for the new system/software.

The Business Analyst
The Business Analyst will provide knowledge of the business and analysis skills. The
Business Analyst will also be responsible for creating User Requirements based on talks
with the Users.

The Systems Analyst
Systems design will be provided by the Systems Analyst. The Systems Analyst will also
be responsible for developing the Functional Specification from the User Requirements.

The Technical Designer
Technical detail and support to the system design is the responsibility of the Technical
Designer. This role may include database administration.

The Developer
A Developer will provide the skills to write the actual software code and perform Unit
Testing. They may also be called upon at a later stage to provide bug fixes and technical
advice.





A team does not have to have all of the above members, as
each testing project will have different requirements. Some
of the roles mentioned above may be carried out by a single
person, while other roles may require several people.


Know your terms

Write, in your own words, the meanings of the terms below. Then check your answers
against the correct meanings that follow. Read through this section again until
you are confident you understand the listed terms.


Tester




Test leader




Test manager





You should now be familiar with the following terms:

Tester
A skilled professional who is involved in the testing of a component or system.


Test leader
(Same description as test manager) The person responsible for project management
of testing activities and resources, and evaluation of a test object. The individual who
directs, controls, administers, plans and regulates the evaluation of a test object.


Test manager
The person responsible for project management of testing activities and resources,
and evaluation of a test object. The individual who directs, controls, administers,
plans and regulates the evaluation of a test object.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

5.2 Test planning and estimation (K2)


Terms used in this section:

Test approach


Test planning (K2)

One of the most important parts of any form of testing is the planning. All of the testing
that is to be carried out is dependent on the Test plan. The success of the project may
also rely on the Test plan, so it is imperative to spend adequate time ensuring this
element of the testing process is carefully thought through.

The planning phase is often influenced by factors such as a testing policy, objectives,
risks and resources. Often, information will not be readily available when first creating
the Test plan, but this can be added in a later issue of the Test plan. Unforeseen issues
that may affect testing will need to be added to the plan as and when they happen.
Indeed, test planning should be seen as an ongoing process throughout the development
of the product. Planning may be documented in a project or master test plan, and in
separate test plans for test levels, such as system testing and acceptance testing. It is
important to be aware of any issues (provided by feedback) that could affect the progress
of the testing. That way, planning activities can be adjusted to prevent potential
bottleneck situations.


When deciding what to consider regarding a Test Plan, we can turn to the IEEE 829-
1998 Test Plan Outline. The 16 clauses of the IEEE Test Plan consist of:

1. Test Plan identifier.
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.
Test Plan clauses in more detail:
1. Test Plan Identifier: This is simply the name of the document. Ensure that the
document has a unique name followed by a version number.

2. Introduction: This should be a high level description about the document and
why it was created.

3. Test Items: This section should include all the hardware and software required to
perform the testing.

4. Features to Be Tested: The specific parts of the software specification that will
be tested.

5. Features Not to Be Tested: The specific parts of the software specification to be
excluded from the testing.

6. Approach: Details of how the testing process will actually be followed.

7. Item Pass/Fail Criteria: Defines the pass and failure criteria for an item being
tested.

8. Suspension Criteria and Resumption Requirements: This is a particular risk
clause to define under what circumstances testing would stop and restart.

9. Test Deliverables: Which test documents and other deliverables will be
produced.

10. Testing Tasks: The tasks themselves, their dependencies, the elapsed time
they will take, and the resources required.

11. Environmental Needs: What is needed in the way of testing software, hardware,
offices etc.

12. Responsibilities: Who has responsibility for delivering the various parts of the
plan.

13. Staffing and Training Needs: The people and skills needed to deliver the plan.

14. Schedule: When the tasks will take place.

15. Risks and Contingencies: This defines all other risk events, their likelihood,
impact and counter measures to overcome them.

16. Approvals: The signatures of the various stakeholders in the plan, to show they
agree in advance with what it says.





Test planning activities (K2)

The following list represents some example activities that Test planning may include:

Determining the scope and risks, and identifying the objectives of testing.

Defining the overall approach of testing (the test strategy), including the definition
of the test levels and entry and exit criteria.

Integrating and coordinating the testing activities into the software life cycle
activities: acquisition, supply, development, operation and maintenance.

Making decisions about what to test, what roles will perform the test activities,
how the test activities should be done, and how the test results will be evaluated.

Scheduling test analysis and design activities.

Scheduling test implementation, execution and evaluation.

Assigning resources for the different activities defined.

Defining the amount, level of detail, structure and templates for the test
documentation.

Selecting metrics for monitoring and controlling test preparation and execution,
defect resolution and risk issues.

Setting the level of detail for test procedures in order to provide enough
information to support reproducible test preparation and execution.

One important thing to remember about the above Test plan outline is that
it is not restrictive. You can easily remove or modify the existing
clauses to suit the organisation, or add additional clauses. With a
well-balanced Test plan in place, you dramatically increase the
likelihood of a successful test campaign.

Exit criteria (K2)

The reason we have Exit Criteria is that we need to know when to stop testing. If we had
no exit criteria, we would find ourselves testing until we ran out of time. At that point,
how could we have confidence in the product if we had no tangible target to meet?

The Exit criteria can contain a variety of information, and will differ from project to
project. Some points to consider when defining the Exit criteria are listed below (a small
illustrative check follows the list):

Ensuring sufficient coverage of code

Ensuring sufficient coverage of functionality

Testing all of the high risk areas

The reliability of the product

The amount and severity of acceptable defects

The testing completeness deadline and associated cost
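
As a minimal sketch, an exit criteria check can be expressed directly in code. The metric
names and thresholds below are invented for illustration, not prescribed by the syllabus:

    def exit_criteria_met(metrics):
        """Evaluate a hypothetical set of exit criteria."""
        return (metrics["code_coverage"] >= 0.80          # sufficient coverage
                and metrics["tests_passed_ratio"] >= 0.95
                and metrics["open_high_severity_defects"] == 0)

    status = exit_criteria_met({"code_coverage": 0.83,
                                "tests_passed_ratio": 0.97,
                                "open_high_severity_defects": 0})
    print("Exit criteria met" if status else "Keep testing")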
Test estimation (K2)

Part of the project planning process is to have an idea of the kind of effort required to
adequately test the product. Once this is known, the required resources can be
allocated for the work, ensuring that there are neither too few nor too many Testers
involved. If this allocation is not performed correctly, there is a risk of not detecting
critical errors in the software, and the effect of that on the project could be devastating.


The following two approaches can be used effectively to produce a test estimate:


The metrics-based approach:

This approach focuses on estimating the testing effort based on metrics of former or
similar projects or based on typical values.


The expert-based approach:

This approach focuses on estimating the tasks by the task owners or by experts in an
associated field. Once the test effort is estimated, resources can be identified and a
schedule can be drawn up.
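
As a minimal sketch of the metrics-based approach, the historical figures below are
invented; a real estimate would use your own organisation's records:

    def metrics_based_estimate(planned_test_cases, history):
        """Estimate effort (person-days) from former projects' throughput.
        'history' maps project name -> (test cases executed, person-days)."""
        rates = [cases / days for cases, days in history.values()]
        average_cases_per_day = sum(rates) / len(rates)
        return planned_test_cases / average_cases_per_day

    history = {"project_a": (400, 40),   # 10 test cases per person-day
               "project_b": (600, 50)}   # 12 test cases per person-day
    print(round(metrics_based_estimate(500, history), 1), "person-days")

With an average throughput of 11 test cases per person-day, 500 planned test cases
give an estimate of roughly 45.5 person-days.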

The specific testing effort may depend on the following three factors:

1. Characteristics of the product:

Quality of the specification and other information used for test models
Size of the product
Complexity of the problem domain
Requirements for reliability and security
Requirements for documentation


2. Characteristics of the development process:

Stability of the organization
Tools used
Test process
Skills of the people involved
Time pressure

3. The outcome of testing: the number of defects and the amount of rework required.

Creating a Test Estimate (*Note: not a requirement for the exam)

A test estimate will need to be completed before the actual testing commences on the
project. The estimate details the effort required to perform the activities specified within
the high-level test plan. When creating the estimate, it is wise to gather information from
similar projects. It is also worthwhile approaching the owners of the tasks within the
project for their estimates, and using their knowledge to assist you with timescales etc.

Project Managers will look at the test estimate and expect details of how much time
will be required to perform the required testing tasks. So what types of tasks should we
consider for a test estimate? Here are some examples of tasks that require varying
amounts of time to complete:

Documentation*
Test Plan
Test Specifications
Test Report

*Don't forget that with each item of documentation there will inevitably be a review process. This
can significantly lengthen the completion date for the document in question.


Set-up
Build Test networks
Set-up automated Tests

Training
Train Testers on new technology/features
Test Automation training

Execution
Dry runs
All Test execution phases
Re-testing

Additional
Error investigations
Error fix verifications


The test estimate should be a concise document detailing only the facts that will be of
interest to the Project Managers. There is no need to go into too much detail about how
the testing will be performed, as this will go into the Test Plan.
Test approaches (test strategies) (K2)

A test approach is simply the implementation of the test strategy for a specific project.
Normally, it will include decisions based on the project's goal and the risk assessment
carried out, test design techniques, entry and exit criteria and test types to be performed.
For example, if we were going to design tests early in the development lifecycle, this
could be considered a preventative approach, as the tests would be written before
any software has been produced. If we designed the test cases after the software had
been written, this could be considered a reactive approach, as we would be designing
tests in reaction to the already written code.

Examples of typical test approaches & strategies:

Analytical approaches, such as risk-based testing where testing is directed to
areas of greatest risk.

Model-based approaches, such as stochastic testing using statistical information
about failure rates (such as reliability growth models) or usage (such as
operational profiles).

Methodical approaches, such as failure-based (including error guessing and
fault-attacks), experienced-based, check-list based, and quality characteristic
based.

Process- or standard-compliant approaches, such as those specified by industry-
specific standards or the various agile methodologies.

Dynamic and heuristic approaches, such as exploratory testing where testing is
more reactive to events than pre-planned, and where execution and evaluation
are concurrent tasks.

Consultative approaches, such as those where test coverage is driven primarily
by the advice and guidance of technology and/or business domain experts
outside the test team.

Regression-averse approaches, such as those that include reuse of existing test
material, extensive automation of functional regression tests, and standard test
suites.


When deciding on a test approach, there are other important factors to consider. We
should consider the risk, such as:

If the product fails will it endanger human life?

The level of knowledge or skill of the people involved in testing the project.

The specific nature of the product and the business.

There may also be regulations or policies involved in testing a given product too.

Know your terms

Write, in your own words, the meanings of the terms below. Then check your answers
against the correct meanings that follow. Read through this section again until
you are confident you understand the listed terms.


Test approach





You should now be familiar with the following terms:

Test approach
The implementation of the test strategy for a specific project. It typically includes the
decisions made that follow, based on the (test) project's goal and the risk assessment
carried out, starting points regarding the test process, the test design techniques to
be applied, exit criteria and test types to be performed.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

5.3 Test progress monitoring and control (K2)


Terms used in this section:

Defect density, failure rate, test control, test monitoring, test report.


Test progress monitoring (K1)

Test progress monitoring is a test management task that deals with the activities related
to periodically checking the status of a test project.

Once the testing has started, from a tester's point of view all activity will be focused on
the actual testing. In a typical scenario, as time goes on, some tests have been
completed whilst others remain to be completed. Then the Project Manager asks the
Test Team: "What state is the software in?"

In order to properly answer the Project Manager's question, we need some way of
monitoring the tests in progress. There is no one perfect way to document this, as every
company will probably have its own way of doing things. But here are some suggested
items to document during a test phase:


Percentage of work done in test case preparation (or percentage of planned test
cases prepared).

Percentage of work done in test environment preparation.

Test case execution (e.g. number of test cases run/not run, and test cases
passed/failed).

Defect information (e.g. defect density, defects found and fixed, failure rate, and
retest results).

Test coverage of requirements, risks or code.

Subjective confidence of testers in the product.

Dates of test milestones.

Testing costs, including the cost compared to the benefit of finding the next
defect or to run the next test.



All of the above information can be gathered manually, but there are automated tools
available that can assist with this task; a small sketch of the kind of calculation involved
follows.
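
The sketch below, with invented figures and function names, shows how the execution
items above might be summarized in Python:

    def execution_progress(run, passed, failed, total_planned):
        """Summarize test case execution status (counts in, percentages out)."""
        return {
            "run %":    round(100 * run / total_planned, 1),
            "passed %": round(100 * passed / run, 1) if run else 0.0,
            "failed %": round(100 * failed / run, 1) if run else 0.0,
            "not run":  total_planned - run,
        }

    # Example: 80 of 120 planned test cases run so far, 70 passed, 10 failed.
    print(execution_progress(run=80, passed=70, failed=10, total_planned=120))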

Some of this information is commonly stored on a results sheet for the test cases being
performed. These details should be updated as often as possible during the testing.
This way an accurate picture of the testing can be obtained at any time. It is a good idea
to store the information in a place where other interested parties can view it, as this
promotes a greater flow of information. This is also where a Test Matrix can
be used, not only to store a list of the Test Specifications that will be run, but also the
results, including statistics obtained from the above list of items combined from each
Test Specification. For example, if someone wants an idea of the quality of the code or
the test progress, they can simply view the Test Matrix for an overall picture. If they are
interested in specific test cases, they can view the individual results sheet for the
Test Specification in question.

[Figure: a progress chart plotting Defects Found against Defects Fixed & Verified over time.]

Note: Defects Found will be documented by the Testers, but Defects Fixed & Verified
details will commonly be controlled by the Development Team or Project Manager.

Often the Test Leader will have to report to the Test Manager any deviations from the
Project/Test Plans, for reasons such as running out of time to test the product. If the
Test Leader has details of the above specified items, it will make monitoring the test
process an easier task. It is also a good idea to have progress meetings with the Test
Leader to ensure not only that sufficient test progress is being made, but also that
feedback on the quality of the code is provided.

What is Failure rate?
The ratio of the number of failures of a given category to a
given unit of measure.

What is Defect density?
The number of defects identified in a component or system
divided by the size of the component or system.
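
Both measures are simple ratios, so they can be computed directly; the numbers below
are invented for illustration:

    def defect_density(defects, size_kloc):
        """Defects per thousand lines of code (KLOC is one common size measure)."""
        return defects / size_kloc

    def failure_rate(failures, units):
        """Failures per unit of measure, e.g. per hour of operation."""
        return failures / units

    print(defect_density(defects=30, size_kloc=15))   # 2.0 defects per KLOC
    print(failure_rate(failures=4, units=200))        # 0.02 failures per hour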
Test Reporting (K2)

After a phase of testing has reached its conclusion, or indeed all of the testing has been
completed on the project, we need to summarize what happened. By using a Test
Report, we can show exactly what happened during the testing, including whether or not
target dates and exit criteria were achieved. By analysing information and metrics, we
can also provide suggestions for future testing of similar projects based on outstanding
risks, level of confidence, the number of outstanding defects etc. The point of the
Test Report is to provide a way of summarizing the events and conclusion of the testing
in a document that can be read by anyone, without the need to trawl through results
sheets and Test Specifications. Some example metrics that should be collected during
and after a test level include:

The adequacy of the test objectives for that test level.
The adequacy of the test approaches taken.
The effectiveness of the testing with respect to its objectives.




Test control (K2)

Test control is a test management task that deals with creating and applying a set of
corrective actions to get a test project back on track if monitoring shows a deviation from
the plan. The Test Leader will have detailed in the Test Plan some form of Exit
Criteria in relation to the tests being run for a particular phase, or for the project testing
in its entirety. This will often include an acceptable percentage of test cases that must
have been completed. If, for example, time is running out during the project and there is
a risk of not achieving this target, then the Test Leader, in association with the Project
Manager, may have to take action.

Examples of the types of action that may occur:

Changing the schedule to allow more time.

Allocate more Testers.

Set an entry criterion requiring fixes to have been retested by a developer before
accepting them into a build.

Reduce test coverage on low priority test cases.

Re-prioritize tests when an identified risk occurs.

In order for any of the above actions to be carried out, it is imperative that any deviation
from the Test Plan or potential risks to the successful completion of the testing are
highlighted as soon as possible.
For further information, an outline of a test summary report is provided
in the Standard for Software Test Documentation (IEEE 829).

Know your terms

Write, in your own words, the meanings of the terms below. Then check your answers
against the correct meanings that follow. Read through this section again until
you are confident you understand the listed terms.


Defect density




Failure rate




Test control




Test monitoring




Test report





You should now be familiar with the following terms:

Defect density
The number of defects identified in a component or system divided by the size of the
component or system (expressed in standard measurement terms, e.g. lines-of code,
number of classes or function points).


Failure rate
The ratio of the number of failures of a given category to a given unit of measure, e.g.
failures per unit of time, failures per number of transactions, failures per number of
computer runs. [IEEE 610]


Test control
A test management task that deals with developing and applying a set of corrective
actions to get a test project on track when monitoring shows a deviation from what
was planned.


Test monitoring
A test management task that deals with the activities related to periodically checking
the status of a test project. Reports are prepared that compare the actual to that
which was planned.


Test report
(Same description as test summary report) A document summarizing testing
activities and results. It also contains an evaluation of the corresponding test items
against exit criteria. [After IEEE 829]


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

5.4 Configuration management (K2)


Terms used in this section:

Configuration management, version control.


Configuration Management is defined as a discipline applying technical and
administrative direction and surveillance to: identify and document the functional and
physical characteristics of a configuration item, control changes to those characteristics,
record and report change processing and implementation status, and verify compliance
with specified requirements.

In simpler terms, Configuration Management is the approach used in managing the
individual components (software and hardware) that make up the System. It is important
not to confuse Configuration Management with Change Management. Change
management is concerned with changes made to an item, whereas Configuration
Management is concerned with managing all of the individual items, and the items
as a whole (the System).

Software exists in two forms: non-executable code (source code) and executable code
(object code). When errors are found in the software, changes may need to be made to
the source code. When this situation occurs, it is imperative to be able to identify which
version of the code to change. A situation may also arise where two Developers need
to make changes to the same code. If each is unaware that the other is updating it,
both updated versions could be saved, causing lost changes or worse.
Also, Testers may not be aware of which version of the code to test, causing further
problems.

With regards to testing and configuration management, all items of testware should be:
identifiable, version controlled, tracked for changes, related to each other and related to
development items (test objects). This is to ensure that traceability can be maintained
throughout the test process. Additionally, all identified documents and software items
should be referenced unambiguously in any test documentation.

The goal of configuration management for a tester should be to
help uniquely identify and reproduce the tested item, test
documents, the tests and the test harness.
Further Configuration Management Information (*Note - not an exam requirement)


Configuration Management traditionally consists of the following four parts:

Configuration Identification
Configuration Control
Status Accounting
Configuration Auditing


Configuration Identification

This is the process of identifying all of the individual items that will be subject to version
control within the project. Details such as version and status may be recorded.


Configuration Control

This element of Configuration management consists of the evaluation, co-ordination,
approval or disapproval, and implementation of changes to configuration items after
formal establishment of their configuration identification. The activities basically ensure
that any changes are controlled and monitored. A master copy should be kept so that
people can check out the latest version of the document, avoiding two people
working on the same document version. Details such as dates, version numbers and
"updated by" may be recorded. Once the item has been updated, it can be checked
back in, becoming the new master copy. A history is kept when multiple versions exist.
A toy sketch of this check-out/check-in cycle follows.
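
This is a toy Python model only; real configuration management tools add locking
policies, merging, secure storage and much more. The class and method names are
invented:

    class ConfigurationItem:
        """Toy model of the check-out/check-in cycle described above."""

        def __init__(self, name):
            self.name = name
            self.version = 1
            self.checked_out_by = None             # nobody holds the lock yet
            self.history = [(1, "initial version")]

        def check_out(self, user):
            if self.checked_out_by is not None:    # the master copy is locked
                raise RuntimeError(f"Already checked out by {self.checked_out_by}")
            self.checked_out_by = user

        def check_in(self, user, note):
            if self.checked_out_by != user:
                raise RuntimeError("Item was not checked out by this user")
            self.version += 1                      # updated copy becomes the master
            self.history.append((self.version, note))
            self.checked_out_by = None

    item = ConfigurationItem("system_test_plan.doc")
    item.check_out("alice")
    item.check_in("alice", "updated schedule")
    print(item.version, item.history)   # 2 [(1, 'initial version'), (2, 'updated schedule')]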


Status Accounting

This is the process of recording and reporting on the current status of the item. It is in
effect the ability to be able to view the current state of the item.


Configuration Auditing

Configuration Auditing is used to ensure that the control process in use is being
correctly adhered to. Configuration management tools such as PVCS and AccuRev
now exist to assist with this task.

Know your terms

Write, in your own words, the meanings of the terms below. Then check your answers
against the correct meanings that follow. Read through this section again until
you are confident you understand the listed terms.


Configuration management




Version control





You should now be familiar with the following terms:

Configuration management
A discipline applying technical and administrative direction and surveillance to:
identify and document the functional and physical characteristics of a configuration
item, control changes to those characteristics, record and report change processing
and implementation status, and verify compliance with specified requirements. [IEEE
610]


Version control
An element of configuration management, consisting of the evaluation, co-ordination,
approval or disapproval, and implementation of changes to configuration items after
formal establishment of their configuration identification. [IEEE 610]


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

5.5 Risk and testing (K2)


Terms used in this section:

Product risk, project risk, risk, risk-based testing.


When we talk about risk in relation to testing, what we mean is the chance of
something happening, and the effect it might have when it does happen. We can
define different levels of risk by either the likelihood of it occurring or the severity of the
impact if it does occur. Risks associated with software development are commonly placed
into the categories of Project risk and Product risk.


Project risks (K2)

Project risks are the risks associated with the management and control of the
project. Risks that are associated with a project will affect the capability of the project to
deliver its objectives. When analyzing and managing the risks, the test manager should
follow well-established project management principles. The following list highlights
potential project-related risks:


Organizational factors:
o Skill and staff shortages
o Personnel and training issues
o Political issues, such as:
Problems with testers communicating their needs and test results;
Failure to follow up on information found in testing and reviews
o Improper attitude toward or expectations of testing (e.g. not appreciating
the value of finding defects during testing).

Technical issues:
o Problems in defining the right requirements;
o The extent that requirements can be met given existing constraints;
o The quality of the design, code and tests.

Supplier issues:
o Failure of a third party;
o Contractual issues.
Product risks (K2)

When referring to product-associated risks, we are talking about risks directly related
to the object under test: for example, the risk to people if the product fails (e.g. Air Traffic
Control software), or the risk that the software does not do what it was designed to do.
The following list highlights potential product-related risks:

Failure-prone software delivered.

Potential of the software/hardware causing harm to an individual or company.

Poor software characteristics (functionality, reliability, usability, performance etc).

Software that does not perform its intended functions.


We can use the known risks to decide where to start testing and where to concentrate
more test effort. By using a risk-based approach to testing, and starting at the beginning
of a project, the opportunities to proactively reduce the levels of product risk are
increased. Once the product risks have been identified, they can be used to guide
the planning, control and execution of the tests. The following list represents what can
be influenced by identified product risks:

Determine the test techniques to be employed.

Determine the extent of testing to be carried out.

Prioritize testing in an attempt to find the critical defects as early as possible.

Determine whether any non-testing activities could be employed to reduce risk.



Risk-based testing should use the collective knowledge and insight of the project
stakeholders to determine the risks and the levels of testing required to address those
risks. To reduce the chance of the product failing, risk management activities can
provide a relatively disciplined approach to:

Assess and reassess what can go wrong (risks).

Determine what risks are important to deal with.

Implement actions to deal with those risks.

Additionally, testing can help to identify new risks, assist in determining which risks
should be reduced, and lower uncertainty about risks. A small sketch of risk-based
prioritization follows.
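
A minimal sketch of one common way to rank product risks, where exposure is taken as
likelihood multiplied by impact; the areas and scores below are invented for illustration:

    # Risk exposure = likelihood x impact; higher exposure is tested first.
    risks = [
        {"area": "payment processing", "likelihood": 4, "impact": 5},
        {"area": "report layout",      "likelihood": 2, "impact": 1},
        {"area": "user login",         "likelihood": 3, "impact": 4},
    ]

    for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"],
                       reverse=True):
        print(risk["area"], "-> exposure", risk["likelihood"] * risk["impact"])

Sorting by exposure gives a defensible order in which to plan and execute the tests,
although stakeholder judgement should always review the ranking.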

Know your terms

Write, in your own words, the meanings of the terms below. Then check your answers
against the correct meanings that follow. Read through this section again until
you are confident you understand the listed terms.


Product risk




Project risk




Risk




Risk-based testing





You should now be familiar with the following terms:

Product risk
A risk directly related to the test object.


Project risk
A risk related to management and control of the (test) project, e.g. lack of staffing,
strict deadlines, changing requirements, etc.


Risk
A factor that could result in future negative consequences; usually expressed as
impact and likelihood.


Risk-based testing
An approach to testing to reduce the level of product risks and inform stakeholders
on their status, starting in the initial stages of a project. It involves the identification of
product risks and their use in guiding the test process.


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

5.6 Incident management (K3)


We term an incident any significant, unplanned event that occurs during testing and
requires subsequent investigation and/or correction. At first glance an incident seems
very similar to a software defect, but at the time of finding it we usually cannot
determine whether it is a defect without further investigation. An incident should be
raised whenever the actual result differs from the expected result. After the inevitable
investigation of the incident, there may be a reason other than a software
defect, for example:


Test environment incorrectly set up

Incorrect Test Data used

Incorrect Test Specification


An incident does not have to be raised only against a piece of software. It can also be
raised against an item of documentation.

The incident should be raised by someone other than the author of the product
under test. Most companies use some form of software to create and store each
incident. The incident management software should be simple to use, and training should
be provided to all users if required. It should also provide the facility to update each
incident with additional information. This is especially useful when a simple way to
reproduce the incident has been found, which can then be made available to the person
assigned to investigate the incident.

When documenting the incident, it is important to be thoughtful and diplomatic to avoid
any conflicts between the involved parties (e.g. Testers and Developers). The incident will
also need to be graded. This is basically a way of stating how important you think it
might be. This can initially be done by the person raising the incident and can be
updated at a later time. Most companies have their own idea of grading; some are more
complex than others. Once the incident has been stored, it is important for a Tester to
continue with the next task in hand. It is easy to discover an incident and spend too
much time trying to work out why it has happened. This can impact the test progress and
should be avoided unless you are authorised to investigate the incident further.

The incidents themselves should be tracked from inception through all stages until they
are eventually resolved. It is common practice for regular meetings to occur to discuss
the incidents raised. This has the advantage of ascertaining the severity of the problem
and assigning appropriate personnel to deal with the incidents in a timely fashion. It is
also helpful for management to see the potential impact on the project of the
incidents raised.

Incident reports have the following objectives:

Provide developers and other parties with feedback about the problem to enable
identification, isolation and correction as necessary.

Provide test leaders a means of tracking the quality of the system under test and
the progress of the testing.

Provide ideas for test process improvement.



Details of the incident report may include:

Date of issue, issuing organization, and author.

Expected and actual results.

This section should clearly define the difference between the actual result
and the expected result, and should document where the expected result
came from.


Identification of the test item (configuration item) and environment.

This is normally a code or name assigned to the item under test by the
company. It is important to include any version details here too.


Software or system life cycle process in which the incident was observed.

For example, Unit testing, System testing etc.

Description of the incident to enable reproduction and resolution, including logs,
database dumps or screenshots.

This is an important section as it contains vital information. An ideal
incident would be documented with sufficient information to clearly
explain what the problem is, together with simple steps enabling someone
else to reproduce it. Make sure that the incident can be understood by
someone with limited knowledge of the product, as the person assigned to
it will not always have a high level of technical knowledge.

Unfortunately, it is a common occurrence for the incident investigator to
have to contact the originator to ask what exactly the problem is, or how
to reproduce it. This can waste a lot of time, but can easily be avoided
if care is taken when documenting the incident. It is also common for
poorly documented incidents to be misunderstood by other parties, which
can lead to the wrong action, or no action at all, being taken, resulting
in serious defects slipping through the net.

This section is also a good place to detail a work-around if one exists.
This can be important as some incidents can effectively block the
remainder of the testing to be carried out. Also, a documented
work-around may help other testers who come across the problem to
avoid the same situation.


Scope or degree of impact on stakeholder(s) interests.

The scope normally details in what area the incident was found and which
areas may be affected by it. Information here may also include information
specifically targeted at certain stakeholders, for example a
business analyst. An example of this would be a reason why the defect
would be unacceptable to a specific customer in their unique real-world
environment.


Severity of the impact on the system.

Will this problem cause loss of data, or crash the system etc.? The
severity value can vary from company to company, but a commonly used
severity grading is Low - Medium - High.


Urgency/priority to fix.

The problem may block further testing, and/or affect the project
completion deadline. The priority will normally be specified by a Manager,
as it can dictate who will investigate the problem and the timescales
arising from that investigation.


Status of the incident

For example: open, deferred, duplicate, waiting to be fixed, fixed awaiting
retest, closed.


Conclusions, recommendations and approvals.

Examples of this type of information are: additional research required, or
calling on knowledge that is external to the project.


Global issues

Fixing the defect may affect other areas of the product, or may adversely
affect specific customers. Information here may relate to a change
affecting future releases of the product or related products.


Change history

Such as the sequence of actions taken by project team members with
respect to the incident to isolate, repair, and confirm it as fixed.


References

The location and version of the documents containing the test case(s)
used to highlight the problem should be detailed here.
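
Pulling the fields above together, here is a minimal Python sketch of an incident record;
the field set, names and sample values are illustrative only, not a prescribed format:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class IncidentReport:
        """Minimal sketch of an incident record using the fields described above."""
        identifier: str
        author: str
        date_raised: date
        test_item: str              # configuration item, including version
        lifecycle_phase: str        # e.g. "System testing"
        expected_result: str
        actual_result: str
        steps_to_reproduce: str
        severity: str = "Medium"    # e.g. Low / Medium / High
        priority: str = "Medium"
        status: str = "open"        # open, deferred, fixed awaiting retest, closed...
        change_history: list = field(default_factory=list)

    incident = IncidentReport(
        "INC-042", "tester_a", date.today(),
        "billing module v1.3", "System testing",
        "invoice total shows 100.00", "invoice total shows 0.00",
        "1. create an order for 100.00  2. open the invoice screen")
    print(incident.identifier, incident.status, incident.severity)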












The structure of an incident report is also covered in the Standard for
Software Test Documentation (IEEE 829).

Need more help? Please read Worksheet C included with this package.

Module 5 Review Questions



1) Give one negative and one positive of using an Independent Tester.



2) Name as many individual roles within an organization that may be tasked with
performing some specific type of testing:



3) The ________ will provide knowledge from the existing system/software and will
define requirements for the new system/software.



4) What is termed "Details of how the testing process will actually be followed"?



5) In which activity would; making decisions about what to test, what roles will
perform the test activities and how the test activities should be done, be
performed?



6) Which test estimation approach focuses on estimating the testing effort based
on metrics of former or similar projects or based on typical values?



7) Defects fixed & verified details within the progress and monitoring phase will
commonly be controlled by whom?



8) Changing the schedule to allow more time is common to which task?



9) Software that does not perform its intended functions is considered to be which
type of risk?



10) Providing test leaders a means of tracking the quality of the system under test
and the progress of the testing, is an objective of which type of report?

Module 5 Review Answers



1) Give one negative and one positive of using an Independent Tester.

Negative:
They can become isolated from the development team
The Developer may lose their sense of responsibility
An independent tester may become a bottleneck in the project

Positive:
Their job is to find defects in Developers' software
It is much easier to spot defects in somebody else's software
A good Tester never assumes; they check for themselves.


2) Name as many individual roles within an organization that may be tasked with
performing some specific type of testing:

Project manager
Quality manager
Developer
Business expert
Domain expert
Infrastructure
IT operations


3) The ________ will provide knowledge from the existing system/software and will
define requirements for the new system/software.

The User will provide knowledge from the existing system/software and will
define requirements for the new system/software.



4) What is termed "Details of how the testing process will actually be followed"?

The Approach



5) In which activity would; making decisions about what to test, what roles will
perform the test activities and how the test activities should be done, be
performed?

Test Planning
6) Which test estimation approach focuses on estimating the testing effort based
on metrics of former or similar projects or based on typical values?

The metrics-based approach



7) Defects fixed & verified details within the progress and monitoring phase will
commonly be controlled by whom?

Defects Fixed & Verified details will commonly be controlled by the Development
Team or Project Manager.



8) Changing the schedule to allow more time is common to which task?

Test control



9) Software that does not perform its intended functions is considered to be which
type of risk?

A product risk



10) Providing test leaders a means of tracking the quality of the system under test
and the progress of the testing, is an objective of which type of report?

An incident report

6 Tool support for testing (K2)


Types of test tool (K2)

Effective use of tools: potential benefits and risks (K2)

Introducing a tool into an organization (K1)


K1: remember, recognize, recall;

K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;

K3: apply, use.



6.1 Types of test tool (K2)


Terms used in this section:

Configuration management tool, coverage tool, debugging tool, dynamic analysis
tool, incident management tool, load testing tool, modelling tool, monitoring tool,
performance testing tool, probe effect, requirements management tool, review tool,
security tool, static analysis tool, stress testing tool, test comparator, test data
preparation tool, test design tool, test harness, test execution tool, test management
tool, unit test framework tool.


Test tool classification (K2)

There are many tools available today in relation to testing. Some tools are designed only
to fulfil a very specific role in the testing environment, while other tools can adequately
perform multiple tasks. Test tools are classified under the activity to which they most
closely belong. You will find that some Test tool companies may provide and support
only a single tool, while others will offer complete suites in order to satisfy several
requirements at once. Testing tools are often used to improve the efficiency of testing
activities by automating repetitive tasks. Testing tools can also be used to improve the
reliability of testing by, for example, automating large data comparisons or simulating
certain types of behavior.

Some test tools can affect the outcome of the test itself. For example, the timing results
produced by performance testing may differ depending on which tool is used to take the
measurement, and you may get a different measure of code coverage depending on
which coverage tool (a tool that provides objective measures of what structural
elements, e.g. statements and branches, have been exercised by a test suite) you use.
This effect is called the probe effect. A small illustration follows.
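
The probe effect can be demonstrated in a few lines of Python: timing the same function
with and without line-level tracing (a crude stand-in for a coverage or profiling tool) gives
noticeably different results. The workload and names below are invented for illustration:

    import sys
    import time

    def tracer(frame, event, arg):
        return tracer                   # keep tracing every line in every frame

    def work():
        total = 0
        for i in range(20_000):
            total += i * i
        return total

    def timed(instrumented):
        sys.settrace(tracer if instrumented else None)
        start = time.perf_counter()
        work()
        elapsed = time.perf_counter() - start
        sys.settrace(None)              # always switch tracing off afterwards
        return elapsed

    print(f"without instrumentation: {timed(False):.4f}s")
    print(f"with line tracing:       {timed(True):.4f}s  (the probe effect)")

The instrumented run is measurably slower: the act of observing the program has
changed the very timing being measured.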

Tool support for management of testing and tests (K1)


Test Management Tools:

Test Management tools commonly have multiple features. Test Management is mainly
concerned with the management, creation and control of test documentation. More
advanced tools have additional capabilities such as result logging and test scheduling.
Characteristics of test management tools include:

Support for the management of tests and the testing activities carried out.

Interfaces to test execution tools, defect tracking tools and requirement
management tools.

Independent version control or interface with an external configuration
management tool.

Support for traceability of tests, test results and incidents to source documents,
such as requirements specifications.

Logging of test results and generation of progress reports.

Quantitative analysis (metrics) related to the tests (e.g. tests run and tests
passed) and the test object (e.g. incidents raised), in order to give information
about the test object, and to control and improve the test process.



Requirements Management Tools:

Requirements management tools are designed to support the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and to facilitate traceability through layers of requirements and requirements change management. They also allow requirements to be prioritized and enable individual tests
to be traceable to requirements, functions and/or features. Traceability is most likely to
be reported in a test management progress report. The coverage of requirements,
functions and/or features by a set of tests may also be reported.


Incident Management Tools:

This type of tool stores and manages any incident reports, for example defects, failures
and anomalies. They often have workflow-oriented facilities to track and control the
allocation, correction and re-testing of incidents and provide reporting facilities. Incident
management tools also allow for the progress of incidents to be monitored over time,
and often provide support for statistical analysis, and can provide reports about
incidents. Incident management tools are also known as defect tracking tools.

Configuration Management Tools:

Although not strictly testing tools, configuration management tools are useful for version control of software development builds and software tests. They provide support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items. They can be used to ensure traceability between testware and software work products. This type of tool can be particularly useful when testing on multiple hardware/software environments, as information relating to versions of operating systems, libraries, browsers, computers etc. can be recorded alongside the testware.





Tool support for static testing (K1)

Review Tools:

Review tools are also known as review process support tools. This type of tool provides features such as storing review comments, supporting the review process, and maintaining traceability between documents and source code. Review tools are particularly useful when the review team is spread across remote locations, as the tool may support online reviews.


Static Analysis Tools:

Although primarily developer-orientated tools, static analysis tools can help testers, developers and quality assurance people find defects before any dynamic testing has begun. One of this tool's purposes is to help ensure that coding standards are enforced. The tool can also be used to analyse the structures and dependencies of the code, and it promotes a greater understanding of the code itself.
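
As a toy illustration of the idea (a deliberately simple checker, not any particular commercial tool), the Python sketch below inspects source code without executing it and flags a variable that is assigned but never used:

    import ast

    SOURCE = """
    def apply_discount(price, rate):
        total = price + rate   # assigned but never used: a typical finding
        return price * (1 - rate)
    """

    # Toy static analysis: walk the syntax tree, never running the code,
    # and report local names that are written but never read.
    tree = ast.parse(SOURCE)
    for func in ast.walk(tree):
        if not isinstance(func, ast.FunctionDef):
            continue
        assigned, read = set(), set()
        for node in ast.walk(func):
            if isinstance(node, ast.Name):
                (assigned if isinstance(node.ctx, ast.Store) else read).add(node.id)
        for name in assigned - read:
            print(f"warning: '{name}' in {func.name}() is assigned but never used")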


Modelling Tools:

Primarily developer-orientated tools, modelling tools are used to validate models of the software or system. Several different types of modelling tools exist today, capable of finding defects and inconsistencies in state models, object models and data models. The tool may also assist with test case generation. Valuable defects can be found using modelling tools, with the added benefit of finding them early in the development lifecycle, which can be cost effective.

Tool support for test specification (K1)

Test Design Tools:

This type of tool can generate test inputs or test cases from items such as requirements,
interfaces, design models or actual code. In some cases, the expected test result may
also be generated. Tests created by this tool for state or object models are useful only for verifying the implementation of the model, as they would not be sufficient for verifying all aspects of the software/system. Other types of this tool can
assist in supporting the generation of tests by providing structured templates, often
called a test frame. These generate tests or test stubs, which speed up the test design
process.


Test Data Preparation Tools:

These tools allow test data to be selected and manipulated from test databases, files or data transmissions to set up and prepare data for specific tests. Advanced types of this tool can utilise a range of database and file formats. An added advantage of these tools is that live data transferred to a test environment can be made anonymous, which is ideal for data protection.
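
A minimal Python sketch of the anonymization idea (the field names are hypothetical; real tools handle many data formats and preserve relationships between records):

    import hashlib

    def anonymize(record):
        # Replace personally identifying fields with stable pseudonyms so the
        # data still behaves realistically in tests but reveals no identities.
        token = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
        return {**record, "name": f"Customer-{token}", "email": f"{token}@example.test"}

    live_row = {"name": "Alice Smith", "email": "alice@example.com", "balance": 120.50}
    print(anonymize(live_row))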




Tool support for test execution and logging (K1)

Test Execution Tools:

By using a scripting language, test execution tools allow tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes. The scripting language allows the tests to be manipulated with little effort: for example, repeatedly running a test with different test data. Most tools of this type provide dynamic comparison functionality and may provide test logging. Some test execution tools have capture/playback functionality, which allows test inputs to be directly captured and then played back repeatedly; this can be useful when trying to reproduce the conditions under which a specific failure occurs.
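
The core replay loop of a test execution tool can be sketched in a few lines of Python (the system under test here is a stand-in; real tools wrap the same loop with recording, logging and reporting):

    # Stored test steps: (input, expected outcome) pairs recorded earlier.
    stored_steps = [
        (("login", "alice", "secret"), "welcome"),
        (("login", "alice", "wrong"),  "denied"),
    ]

    def system_under_test(action, user, password):
        # Stand-in for the real application being driven by the tool.
        return "welcome" if password == "secret" else "denied"

    for inputs, expected in stored_steps:
        actual = system_under_test(*inputs)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {inputs} -> {actual} (expected {expected})")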


Test harness/unit test framework tools:

The purpose of a test harness is to facilitate the testing of components or part of a
system by attempting to simulate the environment in which that test object will run. The
reason for this could be that the other components of that environment are not yet
available and are replaced by stubs and/or drivers. Alternatively, they may be used to
provide a predictable and controllable environment where any faults can be localized to
the specific item under test. Primarily used by developers, a framework may be created in which part of the code (an object, method, function, unit or component) can be executed by calling the object to be tested and/or by giving feedback to that object. It achieves this by providing artificial means of supplying input to the test object, and/or by supplying stubs
to take output from the object, in place of the real output targets. Test harness tools can
also be used to provide an execution framework in middleware, where languages,
operating systems or hardware must be tested together. They may be called unit test
framework tools when they have a particular focus on the component test level. This
type of tool aids in executing the component tests in parallel with building the code.
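
A minimal sketch using Python's built-in unittest framework: the component under test depends on a payment gateway that is assumed to be unavailable, so a hand-written stub stands in for it and keeps any faults localized to the code under test.

    import unittest

    class GatewayStub:
        # Stub replacing the real, unavailable payment gateway: it returns
        # predictable answers so test results depend only on the code under test.
        def charge(self, amount):
            return "approved" if amount <= 100 else "declined"

    def process_order(amount, gateway):
        # The component under test.
        return {"amount": amount, "status": gateway.charge(amount)}

    class ProcessOrderTest(unittest.TestCase):
        def test_small_order_is_approved(self):
            self.assertEqual(process_order(50, GatewayStub())["status"], "approved")

        def test_large_order_is_declined(self):
            self.assertEqual(process_order(500, GatewayStub())["status"], "declined")

    if __name__ == "__main__":
        unittest.main()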


Test Comparators:

This type of tool is used to automatically highlight differences between files, databases or test results. They can be useful when multiple complicated sets of results must be compared to see whether any changes have occurred. Similarly, databases can be compared, saving vast amounts of manual effort. Off-the-shelf comparison tools can normally deal with a range of file and database formats. This type of tool often has filter capabilities that allow rows or columns of data, or even areas on a screen, to be ignored.
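
A minimal sketch of automated results comparison using Python's standard difflib module (real comparators add format awareness and the filtering described above):

    import difflib

    expected = ["order=1001 status=shipped", "order=1002 status=pending"]
    actual   = ["order=1001 status=shipped", "order=1002 status=cancelled"]

    # Produce a unified diff of expected vs. actual results; an empty diff
    # means the test outcome matched.
    diff = list(difflib.unified_diff(expected, actual,
                                     fromfile="expected", tofile="actual",
                                     lineterm=""))
    print("\n".join(diff) if diff else "results match")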


Coverage Measurement:

Primarily used by developers, this type of tool provides objective measures of structural test coverage when the actual tests are executed. The programs are first instrumented, typically before or during compilation, and can then be tested. The instrumentation allows coverage data to be logged whilst the program is running. Once testing is complete, the logs can provide statistics on which structural elements the tests covered. Coverage measurement tools can be either intrusive or non-intrusive depending on the measurement techniques used, what is measured and the coding language.
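
In interpreted languages the instrumentation can be a run-time hook rather than a compile-time step. The toy Python sketch below logs every executed line and can therefore report that one branch was never exercised (dedicated coverage tools do the same job far more efficiently):

    import sys

    executed = set()

    def tracer(frame, event, arg):
        # Record every (file, line) pair executed while tracing is active.
        if event == "line":
            executed.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer

    def absolute(x):
        if x < 0:
            return -x
        return x

    sys.settrace(tracer)   # "instrument" the program with a run-time hook
    absolute(5)            # exercises only the non-negative branch
    sys.settrace(None)

    print(f"{len(executed)} line(s) executed; the 'x < 0' branch was never reached")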


Security Tools:

A security testing tool supports operational security. Security testing has become an important step in testing today's products. Security tools exist to assist with testing for viruses, denial of service attacks etc. The purpose of this type of tool is to expose any vulnerabilities of the product. Although not strictly a security tool, a firewall may be used when security testing a system.



Tool support for performance and monitoring (K1)

Dynamic Analysis Tools:

Primarily used by developers, dynamic analysis tools gather run-time information on the state of the executing software. These tools are ideally suited for monitoring the use and allocation of memory. Defects such as memory leaks and unassigned pointers can be found,
which would otherwise be difficult to find manually. Traditionally, these types of tools are
of most use when used in component and component integration testing.
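
A minimal sketch of run-time memory monitoring using Python's standard tracemalloc module (tools for compiled languages work on the same principle, watching allocations while the program executes):

    import tracemalloc

    leaky_cache = []

    def handle_request(payload):
        # A defect: every request's payload is kept forever, so memory grows.
        leaky_cache.append(payload * 100)

    tracemalloc.start()
    for i in range(1_000):
        handle_request(f"request-{i}")

    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:3]:
        print(stat)   # the leaking line shows up with the largest allocation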

Performance testing/load testing/stress testing tools:

These types of tools report on the behavior a system exhibits under certain load conditions. They typically have two main facilities: load generation and test
transaction measurement. Load generation can simulate either multiple users or high
volumes of input data. During execution, response time measurements are taken from
selected transactions and these are logged. The tools are often named after the aspect
of performance that they measure, such as load or stress, so are also known as load
testing tools or stress testing tools. These tools will commonly be able to display reports
and graphs based on the applied loads. They are sometimes also based on automated
repetitive execution of tests, controlled by parameters.
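
The two main facilities can be sketched together in Python (the transaction is a simulated stand-in; real tools drive protocols such as HTTP and chart the logged timings):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        # Stand-in for a real request to the system under test.
        start = time.perf_counter()
        time.sleep(0.01)                     # simulated server work
        return time.perf_counter() - start   # response time for this transaction

    # Load generation: simulate 50 concurrent virtual users running 200 transactions.
    with ThreadPoolExecutor(max_workers=50) as pool:
        timings = list(pool.map(lambda _: transaction(), range(200)))

    timings.sort()
    print(f"transactions: {len(timings)}")
    print(f"median response: {timings[len(timings) // 2] * 1000:.1f} ms")
    print(f"95th percentile: {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")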


Monitoring Tools:

Although not strictly testing tools, monitoring tools can provide information that can be used for testing purposes and that is not available by other means. A monitoring tool is a software tool or hardware device that runs concurrently with the component or system under test; it can provide continuous reports about system resources and can even warn of imminent problems. Traceability is normally catered for with this tool by storing software version details.




Tool support for specific application areas (K1)

Many software developments will have a very specific purpose, and this can make it
difficult to choose an off-the-shelf testing tool that will suit the testing needs. What you
will find is that many of the examples we have outlined in this chapter can actually be
specialized to fulfil a given task. For example, there exist test execution tools
specifically made for web pages.




Tool support using other tools (K1)

Testers will also use various other tools in their work, not just tools specifically made for
testing. For example, SQL may be used by a tester for checking fields within a database
to verify a test case result. Spreadsheets are also popular with testers. Some testers use spreadsheets to design test cases, while others may use them to design test data.
Another type of tool is a Debugging tool, which is a tool used by programmers to
reproduce failures, investigate the state of programs and find the corresponding defect.
Debuggers enable programmers to execute programs step by step, to halt a program at
any program statement and to set and examine program variables.
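
A minimal sketch of the SQL check described above, using Python's standard sqlite3 module and an invented orders table:

    import sqlite3

    # Set up a throwaway database standing in for the application's real one.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
    db.execute("INSERT INTO orders VALUES (1001, 'shipped')")

    # The tester's verification query: did the test case leave the expected state?
    (status,) = db.execute(
        "SELECT status FROM orders WHERE id = ?", (1001,)
    ).fetchone()
    print("PASS" if status == "shipped" else f"FAIL: status was {status!r}")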

Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Configuration management tool




Coverage tool




Debugging tool




Dynamic analysis tool




Incident management tool




Load testing tool




Modelling tool




Monitoring tool





Performance testing tool




Probe effect




Requirements management tool




Review tool




Security tool




Static analysis tool




Stress testing tool




Test comparator




Test data preparation tool





Test design tool




Test harness




Test execution tool




Test management tool




Unit test framework tool





You should now be familiar with the following terms:

Configuration management tool
A tool that provides support for the identification and control of configuration items,
their status over changes and versions, and the release of baselines consisting of
configuration items.


Coverage tool
A tool that provides objective measures of what structural elements, e.g. statements,
branches have been exercised by a test suite.


Debugging tool
A tool used by programmers to reproduce failures, investigate the state of programs
and find the corresponding defect. Debuggers enable programmers to execute
programs step by step, to halt a program at any program statement and to set and
examine program variables.


Dynamic analysis tool
A tool that provides run-time information on the state of the software code. These
tools are most commonly used to identify unassigned pointers, check pointer
arithmetic and to monitor the allocation, use and de-allocation of memory and to flag
memory leaks.


Incident management tool
A tool that facilitates the recording and status tracking of incidents. They often have
workflow-oriented facilities to track and control the allocation, correction and re-
testing of incidents and provide reporting facilities.


Load testing tool
A tool that supports load generation, which is typically one of the two main facilities of performance testing. Load generation can simulate either multiple users or high
volumes of input data. Load testing tools normally provide reports based on test logs
and graphs of load against response times.


Modelling tool
A tool that supports the validation of models of the software or system [Graham].



Monitoring tool
A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]


Performance testing tool
A tool to support performance testing that usually has two main facilities: load
generation and test transaction measurement. Load generation can simulate either
multiple users or high volumes of input data. During execution, response time
measurements are taken from selected transactions and these are logged.
Performance testing tools normally provide reports based on test logs and graphs of
load against response times.


Probe effect
The effect on the component or system by the measurement instrument when the
component or system is being measured, e.g. by a performance testing tool or
monitor. For example performance may be slightly worse when performance testing
tools are being used.


Requirements management tool
A tool that supports the recording of requirements, requirements attributes (e.g.
priority, knowledge responsible) and annotation, and facilitates traceability through
layers of requirements and requirements change management. Some requirements
management tools also provide facilities for static analysis, such as consistency
checking and violations to pre-defined requirements rules.


Review tool
A tool that provides support to the review process. Typical features include review
planning and tracking support, communication support, collaborative reviews and a
repository for collecting and reporting of metrics.


Security tool
A tool that supports operational security.


Static analysis tool
A tool that carries out static analysis.


Stress testing tool
A tool that supports stress testing.


Test comparator
A test tool to perform automated test comparison of actual results with expected
results.


Test data preparation tool
A type of test tool that enables data to be selected from existing databases or
created, generated, manipulated and edited for use in testing.


Test design tool
A tool that supports the test design activity by generating test inputs from a
specification that may be held in a CASE tool repository, e.g. requirements
management tool, from specified test conditions held in the tool itself, or from code.


Test harness
A test environment comprised of stubs and drivers needed to execute a test.


Test execution tool
A type of test tool that is able to execute other software using an automated test
script, e.g. capture/playback. [Fewster and Graham]


Test management tool
A tool that provides support to the test management and control part of a test
process. It often has several capabilities, such as testware management, scheduling
of tests, the logging of results, progress tracking, incident management and test
reporting.


Unit test framework tool
A tool that provides an environment for unit or component testing in which a
component can be tested in isolation or with suitable stubs and drivers. It also
provides other support for the developer, such as debugging capabilities. [Graham]


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

6.2 Effective use of tools: potential benefits and risks (K2)


Terms used in this section:

Data-driven (testing), keyword-driven (testing), scripting language.


Potential benefits and risks of tool support for testing (for all tools) (K2)

In today's testing environment there are normally multiple types of testing activities to be performed throughout a project. If all of these tasks are currently performed manually, you might at first think: why don't we automate them all?

Having a fully automated test environment can take an enormous amount of resources to develop. Not only will every test case have to be converted to an automated script (which will itself require testing), but automated test tool training will also be required. Once your automated test environment is in place, it will require constant maintenance: every test case change will have to be re-automated, and every change to the software product may force the scripts to be updated. Those are just a few things worth bearing in mind!

A suggested approach is to pinpoint which of your current activities could benefit from tool support. A sensible place to start implementing an automated tool is often regression tests, for the following reasons:

They are infrequently updated

They are easily repeatable

The expected results are easily comparable


Faced with a wide selection of tools to choose from, it is imperative that the right choice is made in order to get the most benefit. Testers are not the only people to benefit from these tools; there are test-related tools to assist developers, test leaders and managers too.


Potential benefits of using tools include:

Repetitive work is reduced (e.g. running regression tests, re-entering the same
test data, and checking against coding standards).

Greater consistency and repeatability (e.g. tests executed by a tool, and tests
derived from requirements).

Objective assessment (e.g. static measures, coverage).

Ease of access to information about tests or testing (e.g. statistics and graphs
about test progress, incident rates and performance).



Risks of using tools include:

Unrealistic expectations for the tool (including functionality and ease of use).

Underestimating the time, cost and effort for the initial introduction of a tool
(including training and external expertise).

Underestimating the time and effort needed to achieve significant and continuing
benefits from the tool (including the need for changes in the testing process and
continuous improvement of the way the tool is used).

Underestimating the effort required to maintain the test assets generated by the
tool.

Over-reliance on the tool (replacement for test design or where manual testing
would be better).
Special considerations for some types of tool (K1)

Test execution tools:

Test execution tools work by replaying scripts designed to implement tests that are
electronically stored. These tools often require large amounts of effort in order to achieve
noticeable benefits. The thought of capturing tests by simply recording the actions of a
manual tester may at first seem extremely efficient. But this approach should be avoided
when large numbers of automated tests are required, as it does not scale well. A
captured script is a linear representation with specific data and actions as part of each
script. This type of script may be unstable when unexpected events occur.

A data-driven approach separates out the test inputs, normally into a spreadsheet or table, and then uses a simple generic script that reads the test data and performs the same test with different data. Testers who are not familiar with the scripting language (a programming language in which executable test scripts are written, used by a test execution tool) can still use these scripts effectively by supplying their own test data. In a keyword-driven approach, the spreadsheet contains keywords describing the actions to be taken, as well as test data. Testers, even if they don't have any experience with the scripting language, can still define tests using the keywords, which can be modified to suit the application being tested.
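
Both approaches can be sketched together in Python (the table contents and keyword names are invented; in practice the table would live in a spreadsheet maintained by testers). A generic control script reads each row and dispatches on its keyword, so adding a test means adding a row rather than writing code:

    # The "spreadsheet": each row is keyword, test data, expected outcome.
    table = [
        ("login",  ("alice", "secret"), "welcome"),
        ("login",  ("alice", "wrong"),  "denied"),
        ("logout", ("alice",),          "goodbye"),
    ]

    # Keyword implementations, maintained by the automation specialist.
    actions = {
        "login":  lambda user, pw: "welcome" if pw == "secret" else "denied",
        "logout": lambda user: "goodbye",
    }

    # The generic control script: the same loop runs every row of data.
    for keyword, data, expected in table:
        actual = actions[keyword](*data)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {keyword}{data} -> {actual}")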

Technical expertise in the scripting language is still needed for all approaches, whether held by testers or by test automation specialists. With any of these scripting techniques, the expected results for each test need to be stored, so that actual results can be compared against them to detect any differences.


Performance testing tools:

Performance testing tools will require someone with good experience/expertise in
performance testing to help design the tests and interpret the results. Otherwise the
tests may provide inadequate/meaningless results.


Static analysis tools:

Static analysis tools applied to source code can enforce coding standards, but when they are applied to existing code in particular, they may generate a large number of warning messages. These warning messages should ideally be addressed so that future maintenance of the code is easier. By using filters, some of these messages can be excluded initially to provide a more effective, gradual approach; later, more and more of the filters can be removed.


Test management tools:

When test management tools are in use, they need to interface with other tools or with spreadsheets in order to provide information in the format required by the user. The reports should be designed, and subsequently monitored, so that they provide the most benefit.


Know your terms

Write in your own words, the meanings of the terms below. Then check your answers
against the correct meanings on the next page. Read through this section again until
you are confident you understand the listed terms.


Data-driven (testing)




Keyword-driven (testing)




Scripting language






You should now be familiar with the following terms:

Data-driven (testing)
A scripting technique that stores test input and expected results in a table or
spreadsheet, so that a single control script can execute all of the tests in the table.
Data driven testing is often used to support the application of test execution tools
such as capture/playback tools. [Fewster and Graham]


Keyword-driven (testing)
A scripting technique that uses data files to contain not only test data and expected
results, but also keywords related to the application being tested. The keywords are
interpreted by special supporting scripts that are called by the control script for the
test.


Scripting language
A programming language in which executable test scripts are written, used by a test
execution tool (e.g. a capture/playback tool).


Term descriptions are extracts from the Standard glossary of terms used in Software Testing version 2.0, written by the
Glossary Working Party International Software Testing Qualifications Board.

6.3 Introducing a tool into an organization (K1)


Introducing a Tool into an Organization (K1)

When choosing a tool, try to use a disciplined approach. Don't instantly choose the tool that has the most features, as it could prove to be over-complicated and may require additional training. Also, consider the tool's ability to integrate with your existing environment, for example database connectivity. Another point to consider is the future of your automated environment: a plan of where you expect automation to eventually take your test process may have an impact on the type of tool you are considering.



A suggested tool selection process is:

1) Create a list of potential tools that may be suitable
2) Arrange for a demonstration or free trial
3) Test the product using a typical scenario (pilot project)
4) Organise a review of the tool



It is a good idea to create a pilot project to test the tool for suitability. The benefits of using a pilot project are: gaining experience using the tool, and identification of any changes that may be required in the test process.

Roll-out of the tool should only occur following a successful pilot project or evaluation period.


Module 6 Review Questions


1) ___________ tools can help Testers and Developers find defects before any
dynamic testing has begun.



2) What type of tool provides objective measures of structural test coverage when
the actual tests are executed?



3) List the suggested tool selection process.



4) Run-time information on the state of the executing software is achieved by using
___________ Tools.



5) When should the roll-out of a tool occur?



6) ______________ tools work by replaying scripts designed to implement tests
that are electronically stored.



7) List the reasons why regression tests are suitable for automating.




8) ____________ can provide information that can be used for testing purposes
and which is not available by other means.



9) The purpose of which type of tool is to expose any vulnerability of the product?



10) The benefits of using a ___________ are: gaining experience using the tool, and identification of any changes that may be required in the test process.


Module 6 Review Questions


1) _______ ______ tools can help Testers and Developers find defects before any
dynamic testing has begun.

Static Analysis



2) What type of tool provides objective measures of structural test coverage when
the actual tests are executed?

Coverage Measurement



3) List the suggested tool selection process.

Create a list of potential tools that may be suitable
Arrange for a demonstration or free trial
Test the product using a typical scenario (pilot project)
Organise a review of the tool



4) Run-time information on the state of the executing software is achieved by using
______ _____ Tools.

Dynamic Analysis Tools



5) When should the roll-out of a tool occur?

Roll-out of the tool should only occur following a successful pilot project or
evaluation period.



6) ______________ tools work by replaying scripts designed to implement tests
that are electronically stored.

Test execution

7) List the reasons why regression tests are suitable for automating.

They are infrequently updated
They are easily repeatable
The expected results are easily comparable



8) ____________ can provide information that can be used for testing purposes
and which is not available by other means.

Monitoring tools



9) The purpose of which type of tool is to expose any vulnerability of the product?

Security Tools



10) The benefits of using a ___________ are: gaining experience using the tool, and identification of any changes that may be required in the test process.

Pilot Project
