BY
YOGESH KHAIRNAR
1. Testing can make sure that the product is as per the specifications.
2. Testing helps in modifying features to make them more usable and friendly.
3. Testing can provide an indication of the software's reliability and quality.
4. Testing starts from the Requirements phase and continues till Maintenance.
Verification: -
Verification involves reviews and meetings (informal and formal)
to evaluate documents, plans, code, requirements, and specifications. This can be
done with checklists and issues lists. Verification is done to ensure that the
software meets the required specifications. Verification is a QA / Static /
Preventive process. [QA means monitoring, like an audit; verification is its baseline.]
Validation: -
Validation involves actual testing and takes place after verification is
completed. Validation is done to ensure that the software meets the
requirements of the customer. Validation is a
QC / Dynamic / Detective and Corrective process. [QC means actual testing.]
Quality: -
It is a character or attribute of something. OR It is the degree of excellence.
Quality Assurance: -
It involves the entire software development process – monitoring and improving
the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. QA is a Static /
Verification / Preventive / Monitoring process.
Quality Control: -
Here the organization's focus is on testing a group of quality-related attributes such as
Correctness, Security, Portability, Interoperability, Usability, and Maintainability. QC
is a Dynamic / Validation / Detective and Corrective / Actual Testing process.
Software Life cycle: -
The life cycle begins when an application is first conceived and ends when it is no
longer in use.
[Diagram: life-cycle phases from Requirements through Release, with test levels such as Unit Testing shown alongside.]
Test Cycle: -
A Test Cycle is the period in which the product is tested and defects are verified.
Test Case: -
A test case is a document that describes an input, action, or event, and an
expected response, to determine whether a feature of an application is working
correctly or not.
It contains particulars such as Test Case Identifier, Objective, Steps, Input Data,
Expected Result, etc.
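The particulars above can be sketched as a structured record; the field names and values below are illustrative, not a standard:

```python
# A minimal sketch of a test-case record carrying the particulars
# listed above. All names and values are made up for illustration.
test_case = {
    "id": "TC-001",                        # Test Case Identifier
    "objective": "Verify login with valid credentials",
    "steps": [
        "Open the login page",
        "Enter username and password",
        "Click the Login button",
    ],
    "input_data": {"username": "alice", "password": "secret"},
    "expected_result": "User is redirected to the dashboard",
}

def is_complete(tc):
    """Check that a test case carries every required particular."""
    required = {"id", "objective", "steps", "input_data", "expected_result"}
    return required <= tc.keys()

print(is_complete(test_case))  # → True
```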
Test Plan: -
A Test Plan is a document that describes the objective, scope, approach, and focus
of all software testing efforts.
OR
A Test Plan is a document that describes the objective, scope, approach,
methodology to be used, task to be performed, resources, schedules, risks, and
dependencies.
Test Script: -
Test Script commonly refers to the automated test procedure used with a testing
tool.
Test Specification: -
Test Specification defines exactly what tests will be performed and what their
scope and objective will be.
Test Suite: -
Test Suite is a group / set / collection of test cases.
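The grouping above can be sketched with Python's unittest module; the add() function and the test names are illustrative:

```python
import unittest

# A hypothetical feature under test.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# A test suite groups individual test cases so they run together.
suite = unittest.TestSuite()
suite.addTest(TestAdd("test_positive"))
suite.addTest(TestAdd("test_negative"))

runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(suite)
print(result.wasSuccessful())  # → True
```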
Test Bed: -
A test bed is nothing but the pre-requisite environment for testing.
Bug / Defect: -
A bug / defect is a manifestation / sign / appearance of an error in the software.
Status of Defect: - New, Open, Rejected, Reopen, Fixed, Closed.
Priority: -
Priority of a bug or defect indicates how fast it gets fixed.
Priority depends on two factors:
i. Impact of the error on the business,
ii. How much the user is going to use the functionality.
Priority statuses: 1 - Low, 2 - Medium, 3 - High, 4 - Very High, 5 - Urgent. [According to
Test Director.]
Defect Density: -
Defect Density is the number of defects per KLOC.
[KLOC – Kilo (thousand) Lines of Code.] According to Six Sigma, the defect rate should be no more than 3.4 defects per million opportunities.
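The calculation above is a simple ratio; the figures below are made up for illustration:

```python
# Defect density = defects found / size in KLOC (thousand lines of code).
def defect_density(defects, lines_of_code):
    kloc = lines_of_code / 1000
    return defects / kloc

# Hypothetical project: 30 defects found in 15,000 lines of code.
print(defect_density(30, 15000))  # → 2.0 defects per KLOC
```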
A Good Test: -
A good test is one which reveals an error.
Debugging: -
Debugging is the process of finding and removing the causes of failures in the
software.
Testing Strategies: -
Types of Testing: -
1.Unit Testing: -
It is the ‘micro’ scale of testing; it is used to test particular functions or code
modules. It is done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code.
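A minimal sketch of a unit test: one function is exercised in isolation with a handful of representative inputs. leap_year() is a hypothetical module function, not from the text:

```python
# Hypothetical function under test.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Unit test: the programmer checks each known edge case of the function.
def test_leap_year():
    assert leap_year(2000) is True    # divisible by 400
    assert leap_year(1900) is False   # divisible by 100 but not 400
    assert leap_year(2024) is True    # divisible by 4
    assert leap_year(2023) is False   # not divisible by 4

test_leap_year()
print("unit tests passed")
```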
2.Integration Testing: -
It is the testing of combined parts of an application to determine whether
they function together correctly or not.
OR
It is the testing in which different parts of the system are combined together
and focus is only on integrated part or integration point.
The parts can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client /
server and distributed systems.
3.Functionality Testing: -
It is the testing done to check whether the system meets its specified
functional requirements or not. It is a black-box type of testing.
4.System Testing: -
It is the testing of an integrated or whole system to verify that
it meets the specified requirements. It is a negative type of testing
because it is aimed at showing that the software does not work. It is a black-box
type of testing.
A. Usability Testing: -
It is the testing done to check whether the system is user-friendly
or not (easy to use, easy to learn, look and feel, navigation, help,
etc.).
B. Compatibility Testing: -
It is the testing done to check whether the system is
compatible with all software platforms [like different OSes, different versions
of a specific OS, and different application software] or not.
C. Configuration Testing: -
It is the testing done to check whether the system is compatible
with all hardware platforms [like different Processors, HDD, FDD, and RAM
sizes] or not.
D. Performance Testing: -
It is the testing done to check the response time of the system
against a large amount of data during a short time period.
F. Stress Testing: -
It is the testing done to check the system's response after lowering
its resources.
G. Security Testing: -
It is the testing done to check whether the system meets its
specified security objectives or not.
H. Recovery Testing: -
It is the testing done to check the system's ability to recover from
disasters or varying degrees of failure.
I. Installability Testing: -
It is the testing done to check whether the system follows the
installation procedures correctly or not. OR
It is the testing done to check the features of the installer.
J. Uninstallability Testing: -
It is the testing done to check whether the system is removed cleanly
by the uninstaller or not.
K. Maintainability Testing: -
It is the testing done to check whether the system meets its
specified maintainability objectives or not.
L. Portability Testing: -
It is the testing done to check whether the system is compatible
with all software and hardware platforms or not.
It is the combination of Compatibility and Configuration Testing.
5.Alpha Testing: -
It is the testing of an application when development is nearing completion;
minor design changes may still be made as a result of this testing. It is done by end-
users or others and not by programmers or testers.
6.Beta Testing: -
It is the testing of an application when development and testing are
essentially completed and final bugs or problems need
to be found before the final release. It is done by end-users or others and not by
programmers or testers.
7.Regression Testing: -
It is the retesting done after modifications of the software or its environment
to check that the changed functionality does not affect any unchanged
functionality. Automated testing tools are especially used for this type of testing.
8.Exploratory Testing: -
It is the testing in which no test cases are designed; the tester goes according to
his imagination and creativity in terms of finding out problems with the product.
9.Branch Testing: -
It is the testing done to satisfy the coverage criterion that, for each
decision point, each possible branch is executed at least once.
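The criterion above can be sketched in Python; grade() and its threshold are illustrative:

```python
# Branch-coverage sketch: grade() has one decision point with two
# branches; the two test inputs below execute each branch at least once.
def grade(score):
    if score >= 40:       # decision point
        return "pass"     # branch 1 (condition true)
    return "fail"         # branch 2 (condition false)

assert grade(75) == "pass"   # exercises the true branch
assert grade(10) == "fail"   # exercises the false branch
print("both branches covered")
```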
10.Conversion Testing: -
It is the testing of programs or procedures used to convert data from
an existing system to another system.
11.Isolation Testing: -
It is the component testing of an individual component in isolation
from its surrounding components.
12.Feature Testing: -
It is the testing in which test case selection is based on the analysis of the
specification of the component without reference to its internal working.
13.Arc Testing: -
It is a test case design technique for a component in which test cases are
designed to execute branch outcomes.
14.Domain Testing: -
It is a test case design technique for a component in which
test cases are designed to execute representatives from equivalence classes.
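Equivalence classes and their representatives can be sketched as follows; the age field and its valid range of 18 to 60 are assumptions for illustration:

```python
# Domain-testing sketch: for an age field valid in 18..60, one
# representative is picked from each equivalence class rather than
# testing every possible value.
def accepts_age(age):
    return 18 <= age <= 60

# Representatives, one per class.
assert accepts_age(10) is False   # invalid class: age < 18
assert accepts_age(35) is True    # valid class: 18..60
assert accepts_age(70) is False   # invalid class: age > 60
print("one representative per equivalence class tested")
```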
Version Control: -
If there are major changes, like changes in the structure and functionality of the product, then a whole
version change takes place, e.g. Version 1.0 to Version 2.0. And if there are minor
changes in the present version, then a small version change takes place, e.g. Version
1.0 to Version 1.1.
Testing Process: -
FSD/SRS/ Use Cases
↓
Prepare Test Cases
↓
Review (Peer/ Lead/ Senior)
↓
Modifications (If Any)
↓