Levels of tests
How do the different levels of tests differ? The focus shifts from early
component testing to late acceptance testing, and it is important that
everybody on the team understands this.
There are generally four recognized levels of tests: unit/component testing,
integration testing, system testing, and acceptance testing. Tests are frequently
grouped by where they are added in the software development process, or by
the level of specificity of the test.
Unit/component testing
The most basic type of testing is unit, or component, testing.
Unit testing aims to verify each part of the software in isolation, performing
tests that demonstrate that each individual component fulfils its requirements
and delivers the desired functionality.
This type of testing is performed at the earliest stages of the development
process, and in many cases it is executed by the developers themselves before
handing the software over to the testing team.
The advantage of detecting errors early is that the team minimises software
development risks, as well as the time and money wasted in having to go back
and undo fundamental problems in the program once it is nearly completed.
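As a minimal sketch of such an early, developer-run test (the `add` function and the test class below are hypothetical examples, not taken from any particular project), a unit test isolates one component and checks its behaviour:

```python
import unittest

# Hypothetical component under test: a tiny, isolated unit of functionality.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test exercises the unit in isolation; no other components involved.
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -4), -5)
```

Such tests are typically run with `python -m unittest` before the code is handed over to the testing team.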
Integration testing
Integration testing aims to test different parts of the system in combination in
order to assess whether they work correctly together. By testing the units in
groups, any faults in the way they interact can be identified.
There are many ways to test how different components of the system function
at their interfaces; testers can adopt either a bottom-up or a top-down
integration method.
It is recommended that testers start with the bottom-up approach, which tests
the lower-level modules first, before applying the top-down approach, which
tests higher-level modules first and lower-level ones later.
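As an illustrative sketch of the top-down method (the classes below are hypothetical), the high-level module is exercised first, with the not-yet-integrated lower-level module replaced by a stub:

```python
# Hypothetical high-level module: a report service that depends on a
# lower-level data-access component.
class ReportService:
    def __init__(self, repository):
        self.repository = repository  # lower-level dependency

    def summary(self):
        orders = self.repository.fetch_orders()
        return {"count": len(orders), "total": sum(o["amount"] for o in orders)}

# Stub standing in for the real repository, which is integrated and
# tested later in the top-down sequence.
class StubOrderRepository:
    def fetch_orders(self):
        return [{"amount": 10}, {"amount": 15}]

# Integration test of the high-level module against the stub.
service = ReportService(StubOrderRepository())
assert service.summary() == {"count": 2, "total": 25}
```

In the bottom-up method the roles are reversed: the real repository would be tested first, and a driver would stand in for the service that calls it.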
System testing
The next level of testing is system testing. As the name implies, all the
components of the software are tested as a whole in order to ensure that the
overall product meets the requirements specified.
System testing is a very important step as the software is almost ready to ship
and it can be tested in an environment which is very close to that which the
user will experience once it is deployed.
System testing enables testers to ensure that the product meets business
requirements, as well as determine that it runs smoothly within its operating
environment. This type of testing is typically performed by a specialized testing
team.
Acceptance testing
Finally, acceptance testing is the level in the software testing process where a
product is given the green light or not. The aim of this type of testing is to
evaluate whether the system complies with the end-user requirements and
whether it is ready for deployment.
The testing team will utilise a variety of methods, such as pre-written scenarios
and test cases, to test the software and use the results obtained from these tools
to find ways in which the system can be improved.
The scope of acceptance testing ranges from simply finding spelling mistakes
and cosmetic errors, to uncovering bugs that could cause a major error in the
application.
By performing acceptance tests, the testing team can find out how the product
will perform when it is installed on the user's system. There are also various
legal and contractual reasons why acceptance testing has to be carried out.
Any testing team should know that testing is important at every phase of the
development cycle.
The four levels of tests shouldn't only be seen as a hierarchy that extends from
simple to complex, but also as a sequence that spans the whole development
process from the early to the later stages. Note, however, that later does not
imply that acceptance testing is done only after, say, six months of development
work. In a more agile approach, acceptance testing can be carried out as often
as every 2-3 weeks, as part of the sprint demo. In an organization working
more traditionally, it is quite typical to have 3-4 releases per year, each
following the cycle described here.
Conclusion
Testing early and testing frequently is well worth the effort.
Detecting software errors early is important, since more effort is needed to fix
bugs when the system is nearing launch, and, due to the interactions between
components in the system, one small bug in a particular component hidden
deep within layers of code can result in an effect that is magnified several times
over at the system level.
A list of 100 Software Testing Types along with definitions. A must-read
for any QA professional.
instead of just directly testing a specific method. Can be performed by
testing or development teams.
27.Configuration Testing: Testing technique which determines minimal and
optimal configuration of hardware and software, and the effect of adding
or modifying resources such as memory, disk drives and CPU. It is usually
performed by Performance Testing engineers.
28.Condition Coverage Testing: Type of software testing where each condition
is exercised by making it both true and false, each at least once. It is
typically performed by Automation Testing teams.
29.Compliance Testing: Type of testing which checks whether the system was
developed in accordance with standards, procedures and guidelines. It is
usually performed by external companies which offer "Certified OGC
Compliant" brand.
30.Concurrency Testing: Multi-user testing geared towards determining the
effects of accessing the same application code, module or database records.
It is usually done by performance engineers.
31.Conformance Testing: The process of testing that an implementation
conforms to the specification on which it is based. It is usually performed
by testing teams.
32.Context Driven Testing: An Agile Testing technique that advocates
continuous and creative evaluation of testing opportunities in light of the
potential information revealed and the value of that information to the
organization at a specific moment. It is usually performed by Agile testing
teams.
33.Conversion Testing: Testing of programs or procedures used to convert
data from existing systems for use in replacement systems. It is usually
performed by the QA teams.
34.Decision Coverage Testing: Type of software testing where each
condition/decision is executed by setting it to true/false. It is typically
performed by the automation testing teams.
35.Destructive Testing: Type of testing in which the tests are carried out to the
specimen's failure, in order to understand a specimen's structural
performance or material behavior under different loads. It is usually
performed by QA teams.
36.Dependency Testing: Testing type which examines an application's
requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality. It is usually performed by testing
teams.
37.Dynamic Testing: Term used in software engineering to describe the
testing of the dynamic behavior of code. It is typically performed by testing
teams.
38.Domain Testing: White box testing technique which checks that the
program accepts only valid input. It is usually done by software
development teams and occasionally by automation testing teams.
39.Error-Handling Testing: Software testing type which determines the ability
of the system to properly process erroneous transactions. It is usually
performed by the testing teams.
40.End-to-end Testing: Similar to system testing, involves testing of a
complete application environment in a situation that mimics real-world
use, such as interacting with a database, using network communications,
or interacting with other hardware, applications, or systems if appropriate.
It is performed by QA teams.
41.Endurance Testing: Type of testing which checks for memory leaks or
other problems that may occur with prolonged execution. It is usually
performed by performance engineers.
42.Exploratory Testing: Black box testing technique performed without
planning and documentation. It is usually performed by manual testers.
43.Equivalence Partitioning Testing: Software testing technique that divides
the input data of a software unit into partitions of data from which test
cases can be derived. It is usually performed by the QA teams.
44.Fault injection Testing: Element of a comprehensive test strategy that
enables the tester to concentrate on the manner in which the application
under test is able to handle exceptions. It is performed by QA teams.
45.Formal verification Testing: The act of proving or disproving the
correctness of intended algorithms underlying a system with respect to a
certain formal specification or property, using formal methods of
mathematics. It is usually performed by QA teams.
46.Functional Testing: Type of black box testing that bases its test cases on the
specifications of the software component under test. It is performed by
testing teams.
47.Fuzz Testing: Software testing technique that provides invalid, unexpected,
or random data to the inputs of a program - a special area of mutation
testing. Fuzz testing is performed by testing teams.
48.Gorilla Testing: Software testing technique which focuses on heavily
testing of one particular module. It is performed by quality assurance
teams, usually when running full testing.
49.Gray Box Testing: A combination of Black Box and White Box testing
methodologies: testing a piece of software against its specification but
using some knowledge of its internal workings. It can be performed by
either development or testing teams.
50.Glass box Testing: Similar to white box testing, based on knowledge of the
internal logic of an application's code. It is performed by development
teams.
51.GUI software Testing: The process of testing a product that uses a graphical
user interface, to ensure it meets its written specifications. This is
normally done by the testing teams.
52.Globalization Testing: Testing method that checks proper functionality of
the product with any of the culture/locale settings using every type of
international input possible. It is performed by the testing team.
53.Hybrid Integration Testing: Testing technique which combines top-down
and bottom-up integration techniques in order to leverage the benefits of
both kinds of testing. It is usually performed by the testing teams.
54.Integration Testing: The phase in software testing in which individual
software modules are combined and tested as a group. It is usually
conducted by testing teams.
55.Interface Testing: Testing conducted to evaluate whether systems or
components pass data and control correctly to one another. It is usually
performed by both testing and development teams.
56.Install/uninstall Testing: Quality assurance work that focuses on what
customers will need to do to install and set up the new software
successfully. It may involve full, partial or upgrades install/uninstall
processes and is typically done by the software testing engineer in
conjunction with the configuration manager.
57.Internationalization Testing: The process which ensures that the product's
functionality is not broken and all the messages are properly externalized
when used in different languages and locales. It is usually performed by the
testing teams.
58.Inter-Systems Testing: Testing technique that focuses on testing the
application to ensure that the interconnection between applications
functions correctly. It is usually done by the testing teams.
59.Keyword-driven Testing: Also known as table-driven testing or action-
word testing, is a software testing methodology for automated testing that
separates the test creation process into two distinct stages: a Planning
Stage and an Implementation Stage. It can be used by either manual or
automation testing teams.
60.Load Testing: Testing technique that puts demand on a system or device
and measures its response. It is usually conducted by the performance
engineers.
61.Localization Testing: Part of software testing process focused on adapting a
globalized application to a particular culture/locale. It is normally done by
the testing teams.
62.Loop Testing: A white box testing technique that exercises program loops.
It is performed by the development teams.
63.Manual Scripted Testing: Testing method in which the test cases are
designed and reviewed by the team before executing them. It is done
by Manual Testing teams.
64.Manual-Support Testing: Testing technique that involves testing of all the
functions performed by people while preparing the data and using the
data from the automated system. It is conducted by testing teams.
65.Model-Based Testing: The application of Model based design for designing
and executing the necessary artifacts to perform software testing. It is
usually performed by testing teams.
66.Mutation Testing: Method of software testing which involves modifying
programs' source code or byte code in small ways in order to test sections
of the code that are seldom or never accessed during normal tests
execution. It is normally conducted by testers.
67.Modularity-driven Testing: Software testing technique which requires the
creation of small, independent scripts that represent modules, sections,
and functions of the application under test. It is usually performed by the
testing team.
68.Non-functional Testing: Testing technique which focuses on testing of a
software application for its non-functional requirements. Can be
conducted by the performance engineers or by manual testing teams.
69.Negative Testing: Also known as "test to fail" - testing method where the
tests' aim is showing that a component or system does not work. It is
performed by manual or automation testers.
70.Operational Testing: Testing technique conducted to evaluate a system or
component in its operational environment. Usually it is performed by
testing teams.
71.Orthogonal array Testing: Systematic, statistical way of testing which can
be applied in user interface testing, system testing, Regression Testing,
configuration testing and Performance Testing. It is performed by the
testing team.
72.Pair Testing: Software development technique in which two team
members work together at one keyboard to test the software application.
One does the testing and the other analyzes or reviews the testing. This
can be done between one Tester and Developer or Business Analyst or
between two testers with both participants taking turns at driving the
keyboard.
73.Passive Testing: Testing technique that consists of monitoring the results of
a running system without introducing any special test data. It is performed
by the testing team.
74.Parallel Testing: Testing technique which has the purpose to ensure that a
new application which has replaced its older version has been installed
and is running correctly. It is conducted by the testing team.
75.Path Testing: Typical white box testing whose goal is to satisfy coverage
criteria for each logical path through the program. It is usually performed
by the development team.
76.Penetration Testing: Testing method which evaluates the security of a
computer system or network by simulating an attack from a malicious
source. It is usually conducted by specialized penetration testing
companies.
77.Performance Testing: Testing conducted to evaluate the compliance of a
system or component with specified performance requirements. It is
usually conducted by the performance engineer.
78.Qualification Testing: Testing against the specifications of the previous
release, usually conducted by the developer for the consumer, to
demonstrate that the software meets its specified requirements.
79.Ramp Testing: Type of testing that consists of raising an input signal
continuously until the system breaks down. It may be conducted by the
testing team or the performance engineer.
80.Regression Testing: Type of software testing that seeks to uncover
software errors after changes to the program (e.g. bug fixes or new
functionality) have been made, by retesting the program. It is performed
by the testing teams.
81.Recovery Testing: Testing technique which evaluates how well a system
recovers from crashes, hardware failures, or other catastrophic problems.
It is performed by the testing teams.
82.Requirements Testing: Testing technique which validates that the
requirements are correct, complete, unambiguous, and logically consistent
and allows designing a necessary and sufficient set of test cases from those
requirements. It is performed by QA teams.
83.Security Testing: A process to determine that an information system
protects data and maintains functionality as intended. It can be performed
by testing teams or by specialized security-testing companies.
84.Sanity Testing: Testing technique which determines if a new software
version is performing well enough to accept it for a major testing effort. It
is performed by the testing teams.
85.Scenario Testing: Testing activity that uses scenarios based on a
hypothetical story to help a person think through a complex problem or
system for a testing environment. It is performed by the testing teams.
86.Scalability Testing: Part of the battery of non-functional tests which tests a
software application for measuring its capability to scale up - be it the user
load supported, the number of transactions, the data volume etc. It is
conducted by the performance engineer.
87.Statement Testing: White box testing which satisfies the criterion that each
statement in a program is executed at least once during program testing. It
is usually performed by the development team.
88.Static Testing: A form of software testing where the software isn't actually
executed; it checks mainly the sanity of the code, algorithm, or document.
It is usually done by the developer who wrote the code.
89.Stability Testing: Testing technique which attempts to determine if an
application will crash. It is usually conducted by the performance engineer.
90.Smoke Testing: Testing technique which examines all the basic
components of a software system to ensure that they work properly.
Typically, smoke testing is conducted by the testing team, immediately
after a software build is made.
91.Storage Testing: Testing type that verifies the program under test stores
data files in the correct directories and that it reserves sufficient space to
prevent unexpected termination resulting from lack of space. It is usually
performed by the testing team.
92.Stress Testing: Testing technique which evaluates a system or component
at or beyond the limits of its specified requirements. It is usually
conducted by the performance engineer.
93.Structural Testing: White box testing technique which takes into account
the internal structure of a system or component and ensures that each
program statement performs its intended function. It is usually performed
by the software developers.
94.System Testing: The process of testing an integrated hardware and
software system to verify that the system meets its specified requirements.
It is conducted by the testing teams in both development and target
environment.
95.System integration Testing: Testing process that exercises a software
system's coexistence with others. It is usually performed by the testing
teams.
96.Top Down Integration Testing: Testing technique that involves starting at
the top of the system hierarchy, at the user interface, and using stubs to test
from the top down until the entire system has been implemented. It is
conducted by the testing teams.
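To make two of the techniques in the list concrete, here is a small sketch of equivalence partitioning (item 43) combined with negative testing (item 69); the `classify_age` function and its partitions are invented for illustration:

```python
# Hypothetical function under test: classifies an age value.
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

# Equivalence partitioning: one representative value per partition is
# enough, since all values in a partition should behave the same.
valid_partitions = {
    "minor": 10,  # partition: 0 <= age < 18
    "adult": 40,  # partition: age >= 18
}
for expected, representative in valid_partitions.items():
    assert classify_age(representative) == expected

# Negative testing ("test to fail"): the invalid partition (age < 0)
# must be rejected, not silently accepted.
try:
    classify_age(-5)
    raise AssertionError("negative age should have been rejected")
except ValueError:
    pass
```

Deriving one test case per partition keeps the test suite small while still covering the whole input domain.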
Difference between defect, error, bug, failure and fault: “A mistake in coding is
called an error; an error found by a tester is called a defect; a defect accepted by the
development team is called a bug; if the build does not meet the requirements, it is a failure.”
Bug:
An error found BEFORE the application goes into production, i.e. in the development
environment before the product is shipped to the customer. A programming error in
software or hardware that causes a program to work poorly, produce incorrect results,
malfunction, or crash.
Defect:
The difference (variance) between the expected and actual result in the context of
testing; a deviation from the customer requirement. An error found AFTER the
application goes into production, i.e. in the product itself after it is shipped to the
customer.
Error: A mistake generated because of wrong logic, a wrong loop, or wrong syntax.
An error normally arises in the software and changes the functionality of the
program.
A typical Software Development Life Cycle (SDLC) consists of the
following phases:
Requirement Phase:
Requirement gathering and analysis is the most important phase in the
software development life cycle. The Business Analyst collects the
requirements from the Customer/Client as per the client's business
needs and documents them in the Business Requirement Specification
(the document name varies depending upon the organization; some
examples are Customer Requirement Specification (CRS), Business
Specification (BS), etc.) and provides the same to the Development Team.
Analysis Phase:
Once the requirement gathering and analysis is done, the next step
is to define and document the product requirements and get them
approved by the customer. This is done through the SRS (Software
Requirement Specification) document, which consists of all the
product requirements to be designed and developed during the
project life cycle. Key people involved in this phase are the Project
Manager, Business Analyst and senior members of the team. The
outcome of this phase is the Software Requirement Specification.
Design Phase:
It has two steps:
HLD – High Level Design – It gives the architecture of the software
product to be developed and is done by architects and senior
developers
LLD – Low Level Design – It is done by senior developers. It
describes how each and every feature and component in the product
should work. Here, only the design will be there and not the code.
The outcome from this phase is the High Level Document and the Low Level
Document, which work as input to the next phase.
Development Phase:
Developers of all levels (senior, junior, fresher) are involved in this
phase. This is the phase where we start building the software and
writing the code for the product. The outcome from this phase
is the Source Code Document (SCD) and the developed product.
Testing Phase:
When the software is ready, it is sent to the testing department,
where the test team tests it thoroughly for different defects. They
test the software either manually or using automated testing tools,
depending on the process defined in the STLC (Software Testing Life
Cycle), and ensure that each and every component of the software
works fine. Once QA makes sure that the software is error-free,
it goes to the next stage, which is Implementation. The outcome of
this phase is the quality product and the testing artifacts.
Some of the SDLC Models are as follows:
1. Waterfall Model
2. Spiral
3. V Model
4. Prototype
5. Agile
The different phases of Software Testing Life Cycle are:
1. Requirement Analysis
2. Test Planning
3. Test Design
4. Test Environment Setup
5. Test Execution
6. Test Closure
Every phase of STLC (Software Testing Life Cycle) has a definite
Entry and Exit Criteria.
Requirement Analysis:
Entry criteria for this phase is BRS (Business Requirement
Specification) document. During this phase, the test team studies and
analyzes the requirements from a testing perspective. This phase
helps to identify whether the requirements are testable or not. If
any requirement is not testable, test team can communicate with
various stakeholders (Client, Business Analyst, Technical Leads,
System Architects etc) during this phase so that the mitigation
strategy can be planned.
Test Planning:
Test planning is the first step of the testing process. In this phase,
the Test Manager/Test Lead is typically involved in determining the
effort and cost estimates for the entire project. Preparation of the
Test Plan will be done based on the requirement analysis. Activities
like resource planning, determining roles and responsibilities, tool
selection (if automation) and training requirements are carried out in
this phase. The deliverables of this phase are the Test Plan and Effort
Estimation documents.
Test Design:
The deliverables of this phase are Test Cases, Test Scripts, Test Data
and the Requirements Traceability Matrix.
Test Execution:
The test team starts executing the test cases based on the test plan.
The result (Pass/Fail) of each test case should be updated in the test
cases. A defect report should be prepared for failed test cases and
reported to the Development Team through a bug tracking tool (e.g.,
Quality Center) for fixing the defects. Retesting is performed once a
defect is fixed.
Entry Criteria: Test Plan document, Test cases, Test data, Test
Environment.
Deliverables: Test case execution report, Defect report, RTM
Test Closure:
The final stage, where we prepare the Test Closure Report and Test Metrics.
The testing team will meet to evaluate cycle completion criteria based
on test coverage, quality, time, cost, software and business objectives.
The test team analyses the test artifacts (such as test cases, defect
reports, etc.) to identify strategies that should be implemented in the
future, which will help to remove process bottlenecks in upcoming
projects. Test metrics and the test closure report are prepared based
on the above criteria.
Entry Criteria: Test case execution report (make sure there are no
high-severity defects open), Defect report
Deliverables: Test Closure report, Test metrics
Open: The development team starts analyzing and works on the
defect fix
Test: If the status is “Test”, it means the defect is fixed and is ready
for the tester to verify the fix.
Verified: The tester re-tests the bug after it got fixed by the
developer. If there is no bug detected in the software, then the bug
is fixed and the status assigned is “verified.”
Closed: After the fix is verified, if the bug no longer exists, then the
status of the bug is set to “Closed.”
Reopen: If the defect remains the same after the retest, then the
tester reports it through the defect retesting document and changes
the status to “Reopen”. The bug then goes through the life cycle again
to be fixed.
Rejected: If the system is working according to specifications and the
bug is just due to some misinterpretation (such as referring to old
requirements or extra features), then the team lead or developers can
mark such bugs as “Rejected”.
This is all about the Bug Life Cycle / Defect Life Cycle. Some companies
use these bug IDs in the RTM to map them to test cases.
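The life cycle above can be sketched as a simple state machine. The status names come from the text; the exact transition table is an illustrative assumption, since workflows vary between bug-tracking tools:

```python
# Allowed defect status transitions, following the descriptions above.
TRANSITIONS = {
    "Open":     {"Test", "Rejected"},    # dev fixes it, or rejects it
    "Test":     {"Verified", "Reopen"},  # tester verifies the fix, or reopens
    "Verified": {"Closed"},
    "Reopen":   {"Open"},                # goes through the cycle again
    "Rejected": set(),                   # terminal states
    "Closed":   set(),
}

def move(status, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# A defect that is fixed, verified, and closed:
status = "Open"
for step in ["Test", "Verified", "Closed"]:
    status = move(status, step)
assert status == "Closed"
```

Encoding the allowed transitions explicitly makes it easy to reject illegal status changes (e.g. reopening a rejected bug) in a tracking tool.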
Software Test Metrics:
Software test metrics fall into two broad categories:
1. Process metrics
2. Product metrics
Process Metrics:
Software test metrics used during the test preparation and test
execution phases of the STLC.
Formula:
Test Case Preparation Productivity = (No. of test cases) / (Effort spent on test case preparation)
Formula:
Test Design Coverage = ((Total number of requirements mapped to test cases) / (Total number of requirements)) * 100
Formula:
Test Execution Productivity = (No. of test cases executed) / (Effort spent on execution of test cases)
E.g.:
Test Execution Productivity = 180/10 = 18 test cases/hour
Formula:
Test Execution Coverage = (Total no. of test cases executed / Total no. of test cases planned to execute) * 100
Formula:
Test Cases Pass = (Total no. of test cases passed) / (Total no. of test cases executed) * 100
E.g.:
Test Cases Pass = (80/90)*100 = 88.9% ≈ 89%
Test Cases Failed:
It measures the percentage of test cases that failed.
Formula:
Test Cases Failed = (Total no. of test cases failed) / (Total no. of test cases executed) * 100
Test Cases Blocked:
Formula:
Test Cases Blocked = (Total no. of test cases blocked) / (Total no. of test cases executed) * 100
Product Metrics:
Software test metrics used during the defect analysis phase of the
STLC.
Error Discovery Rate:
It determines the effectiveness of the test cases.
Formula:
Error Discovery Rate = (Total number of defects found / Total no. of test cases executed) * 100
Defect Fix Rate:
Formula:
Defect Fix Rate = ((Total no. of defects reported as fixed - Total no. of defects reopened) / (Total no. of defects reported as fixed + Total no. of new bugs due to fix)) * 100
Defect Density:
It is defined as the ratio of defects to the size of the software (for
example, the number of requirements).
Formula:
Defect Density = (Total no. of defects found) / (Actual size)
E.g.:
Actual Size = 10
Defect Leakage:
It is used to review the efficiency of the testing process before UAT.
Formula:
Defect Leakage = ((Total no. of defects found in UAT) / (Total no. of defects found before UAT)) * 100
Defect Removal Efficiency:
Formula:
Defect Removal Efficiency = ((Total no. of defects found pre-delivery) / ((Total no. of defects found pre-delivery) + (Total no. of defects found post-delivery))) * 100
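The defect metrics can be sketched the same way (function and parameter names are my own; the example figures are invented for illustration, not taken from the text):

```python
def defect_fix_rate(fixed, reopened, new_bugs_due_to_fix):
    # Percentage of reported fixes that actually stuck.
    return (fixed - reopened) / (fixed + new_bugs_due_to_fix) * 100

def defect_leakage(found_in_uat, found_before_uat):
    # Percentage of defects that leaked past pre-UAT testing.
    return found_in_uat / found_before_uat * 100

def defect_removal_efficiency(pre_delivery, post_delivery):
    # Share of all defects that were caught before delivery.
    return pre_delivery / (pre_delivery + post_delivery) * 100

# Illustrative numbers: 90 defects caught before delivery, 10 after.
assert defect_removal_efficiency(90, 10) == 90.0
assert defect_leakage(5, 50) == 10.0
```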
Here I have hand-picked a few posts which will help you to learn more
interview-related stuff:
Test Automation Framework Interview Questions
TestNG Interview Questions
SQL Interview Questions
Manual Testing Interview Questions
Agile Interview Questions
Why You Choose Software Testing As A Career
General Interview Questions
If you have any more questions, feel free to ask via comments. If
you find this post useful, do share it with your friends on Social
Networking.
Like all other test artifacts, the RTM varies between organizations.
Most organizations use just the Requirement IDs and Test Case IDs in
the RTM. It is possible to add other fields, such as Requirement
Description, Test Phase, Test Case Result, Document Owner, etc. It is
necessary to update the RTM whenever there is a change in a
requirement.
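A minimal RTM can be represented as a mapping from requirement IDs to the test case IDs that cover them (the IDs below are invented for illustration):

```python
# Requirement ID -> list of test case IDs covering it (illustrative data).
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # not yet covered: a gap the RTM makes visible
}

# Test design coverage, as defined in the metrics section:
covered = sum(1 for tcs in rtm.values() if tcs)
coverage = covered / len(rtm) * 100
assert round(coverage, 1) == 66.7  # 2 of 3 requirements covered
```

Keeping the RTM in a machine-readable form like this makes it trivial to spot uncovered requirements whenever a requirement changes.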
Test Deliverables are the test artifacts which are given to the
stakeholders of a software project during the SDLC (Software
Development Life Cycle). A software project which follows the SDLC
undergoes different phases before delivery to the customer. In this
process there will be some deliverables in every phase: some are
provided before the testing phase commences, some during the testing
phase, and the rest after the testing phase is completed.
Interview Question: What are test deliverables? List the test
deliverables you have come across in the process of the STLC.
This is one of the most important QA interview questions for
freshers.
5. Test Cases/Scripts: Test cases are the set of positive and
negative executable steps of a test scenario, with a set of pre-conditions,
test data, expected results, post-conditions and actual results.
6. Test Data: Test data is the data used by the testers to run the
test cases. While running the test cases, testers need to enter some
input data; to do so, they prepare test data in advance. It can be
prepared manually or by using tools.
12. Test incident report: It contains all the incidents, resolved or
unresolved, which are found while testing the software.
14. Release Note: Release notes will be sent to the client, customer
or stakeholders along with the build. It contains the list of new
features and bug fixes in the release.
16. User guide: This guide gives assistance to the end user on
accessing the software application.
The Complete Guide To Writing Test Strategy
Test Strategy is a high-level, static document, usually developed by
the project manager. It captures the approach to how we go about
testing the product and achieving the goals. It is normally derived
from the Business Requirement Specification (BRS). Documents like the
Test Plan are prepared with this document as their base.
Usually the test team starts writing the detailed Test Plan and
continues with the further phases of testing once the test strategy is
ready. In the Agile world, some companies do not spend time on test
plan preparation, due to the minimal time available for each release,
but they do maintain a test strategy document. Maintaining this
document for the entire project helps mitigate unforeseen risks.
Sections of Test Strategy Document:
1. Scope and overview
2. Test Approach
3. Testing tools
4. Industry standards to follow
5. Test deliverables
6. Testing metrics
7. Requirement Traceability Matrix
8. Risk and mitigation
9. Reporting tool
10. Test summary
Test Approach:
In this section, we usually define the following:
Test levels
Test types
Roles and responsibilities
Environment requirements
Test Levels:
This section lists the levels of testing that will be performed
during QA testing, such as unit testing, integration testing, system
testing and user acceptance testing. Testers are responsible for
integration testing, system testing and user acceptance testing.
Test Types:
This section lists out the testing types that will be performed during
QA Testing.
Environment requirements:
This section lists out the hardware and software for the test
environment in order to commence the testing activities.
Testing tools:
This section describes the testing tools necessary to conduct the
tests.
Test deliverables:
This section lists the deliverables that need to be produced before,
during and at the end of testing.
Testing metrics:
This section describes the metrics that should be used in the project
to analyze the project status.
Reporting tool:
This section outlines how defects and issues will be tracked using a
reporting tool.
Conclusion:
The test strategy document gives a clear vision of what the test team
will do for the whole project. It is a static document, meaning it
won't change throughout the project life cycle. The person who
prepares it must have good experience in the product domain, as this
is the document that is going to drive the entire team. The test
strategy document should be circulated to the entire testing team
before the testing activities begin. Writing a good test strategy
improves the complete testing process and leads to a high-quality
system.