
7.1. Introduction to Testing - Correctness of Software, Testing Approaches

What is testing?
Testing is the process of evaluating a system and/or its components with the intent of finding out
whether it satisfies the stated requirements. The testing activity produces the actual results, the
expected results, and the difference between the two. In other words, testing means executing a
system in order to identify any gaps, errors or missing requirements when compared with the actual
or desired requirements.
Consider a situation where a client has asked for a banking application to be developed with
functionality such as cash withdrawal, funds transfer, account summary, SMS facility and so on.
Suppose the application was developed and delivered to the client with a few bugs in it. The end users
(the bank's customers) will face problems caused by the bugs that were missed in the absence of proper
testing. The customers will be dissatisfied and may even sue the bank. Testing is done to prevent
these kinds of scenarios.
Software testing can be stated as the process of validating and verifying that a computer
program/application/product:

Meets the requirements that guide its design and development.
Works as expected.
Can be implemented with the same characteristics.
Satisfies the needs of the stakeholders.

Testing is thus part of overall Quality Assurance (QA), through which we are able to measure the
quality of a software application. The software should meet its specified standards throughout its
development cycle.
Software testing can be carried out at any time in the development process, depending on the
testing method employed. Test design starts at the beginning of the project, well before coding,
and thus helps to save a large amount of rework effort and resources.

7.2. Who does testing?


It depends on the process and the associated stakeholders of the project(s). The term stakeholder
refers to all the people who are part of the project being executed, that is, those who have an
interest in its successful completion. The important thing to remember is that the stakeholders
should also have some say in defining the project objectives, since they are the people who will be
affected by the outcome. When identifying project stakeholders, the project manager and the members
of his/her team should carefully think through who the end users of the product will be. Project
stakeholders usually include customers/clients, the company, the project team including the project
manager, senior managers, and so on.
In the IT industry, large companies have teams responsible for evaluating the developed software
against the stated requirements. In addition, developers conduct their own testing, called unit
testing, which is explained later in this document. In most cases, the following professionals are
involved in testing a system:

Test Analyst / Test Manager.
Software Tester.
Software Developer.
End User.

Test Analyst / Test Manager
During the test planning phase, the activities identified in the test strategy are detailed and more
application-specific data is collected. (A test strategy document is a high-level document, normally
developed by the project manager, which defines the testing approach and clarifies the major tasks
and challenges of the test project.) These details, along with the project schedule, are documented
in the test plan.
Software Tester
The role of a software tester is mainly to write test cases and to perform integration/system
testing.
End User
Acceptance testing is done by the end user at the end of the project.

7.3. Test Approaches - Top-down and Bottom-up


Bottom-Up Testing is an approach to integration testing in which the lowest-level components -
modules, procedures or functions - are tested first and then integrated, and this is used to facilitate
the testing of higher-level components. After the integration testing of the lower-level modules, the
next level of modules is taken up for further integration testing. The process is repeated until the
components at the top of the hierarchy are tested. This approach is helpful only when all or most of
the modules of the same development level are ready. This method also helps to determine the readiness
of each software module and makes it easier to report testing progress as a percentage.
Top-Down Testing is an approach to integration testing in which the top-level integrated modules are
tested first, and each branch of the module is then tested step by step until the end of the related
module is reached.
In both methods, stubs and drivers are used to stand in for missing components and are replaced
as and when those levels are completed. For example, suppose we have modules x, y and z. Module x is
ready and needs to be tested, but it calls functions from y and z, which are not ready. To test module
x we write dummy pieces of code that simulate y and z and return values to x. These pieces of dummy
code are called stubs, and they are used in top-down integration.
Now consider that modules y and z are ready and module x is not, and we need to test y and z, which
are normally called by x and receive their inputs from it. To supply those calls, we write a dummy
piece of code that plays the role of x and feeds values to y and z. This piece of dummy code is called
a driver, and it is used in bottom-up integration.
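As an illustration, the sketch below shows, in Python, what a stub and a driver might look like for the x, y, z example above. The module and function names are hypothetical and are only meant to show the idea:

# --- Top-down integration: module x is ready, y and z are not ---

def stub_y(account_id):
    # Stub for module y: returns a fixed, hard-coded balance
    # instead of the real (unfinished) implementation.
    return 1000

def stub_z(account_id):
    # Stub for module z: returns a fixed list of transactions.
    return ["deposit 500", "withdraw 200"]

def module_x(account_id, get_balance=stub_y, get_transactions=stub_z):
    # Module x under test; it calls y and z through the stubs.
    balance = get_balance(account_id)
    history = get_transactions(account_id)
    return {"balance": balance, "transactions": history}

print(module_x("ACC-1"))   # exercises x using the stubs

# --- Bottom-up integration: y and z are ready, x is not ---

def real_y(account_id):
    return 1000              # assume y is implemented

def real_z(account_id):
    return ["deposit 500"]   # assume z is implemented

def driver_for_y_and_z():
    # Driver: a dummy caller that plays the role of module x
    # and feeds inputs to y and z so they can be tested.
    assert real_y("ACC-1") == 1000
    assert real_z("ACC-1") == ["deposit 500"]
    print("y and z behave as expected when called by the driver")

driver_for_y_and_z()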

7.4. Levels of Testing - Overview, Unit Testing, Integration Testing, System Testing, Acceptance Testing

Levels of testing include the different methodologies that can be used while conducting software
testing. Software testing is carried out from the following two aspects:
Functional Testing.
Non-Functional Testing.
Functional Testing:
Functional testing means testing the application against the business requirements. It is done using
the functional specifications provided by the client, or using design specifications such as the use
cases provided by the client.

Functional Testing covers:


Unit Testing.
Integration Testing (Top Down and Bottom up Testing).
System Testing.
User Acceptance Testing.

Example
Consider the example of a banking ATM, which has different functionalities such as cash withdrawal,
deposit, balance inquiry and account summary.
Unit testing includes developing programs for each functionality mentioned above and testing them.
Integration testing includes combining the modules and checking that information is passed correctly
from one module to another. For instance, if an account has an initial amount of 1000 INR and we
deposit 2000 INR to the account, then the balance inquiry should show 3000 INR.
System testing involves comprehensive black box testing of the banking system, with transactions
initiated, validations performed on the databases, and reports generated, for example while producing
the account balance summary.
Unit Testing
This type of testing is performed by the developers before the product/application is handed over
to the testing team to formally execute the test cases. Unit testing is performed by the software
developers on the individual units of source code assigned to them. The developers use test data
that is separate from the test data of the quality assurance team.

The goal of unit testing is to isolate each part of the program and show that the individual parts are
working correctly in terms of requirements and functionality.
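For example, a unit test for the deposit functionality of the banking example might look like the sketch below, written with Python's unittest module; the BankAccount class shown is a hypothetical stand-in for the real unit under test:

import unittest

class BankAccount:
    # Hypothetical unit under test: a minimal account with a deposit operation.
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self.balance += amount
        return self.balance

class TestDeposit(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = BankAccount(balance=1000)
        self.assertEqual(account.deposit(2000), 3000)

    def test_deposit_rejects_non_positive_amount(self):
        account = BankAccount()
        with self.assertRaises(ValueError):
            account.deposit(0)

if __name__ == "__main__":
    unittest.main()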
The limitations of unit testing are as follows:
Testing cannot catch each and every bug in an application, because it is impossible to evaluate every
execution path in a software application; the same is true of unit testing.
There is a limit to the number of scenarios and the amount of test data that a developer can use to
verify the source code. After the developer has exhausted these options, there is no choice but to
stop unit testing and combine the code segments with other units.
Integration Testing
Integration testing is the testing of combined parts of an application to determine whether they
function correctly together. There are two methods of doing integration testing: bottom-up integration
testing and top-down integration testing, which were explained earlier.
Bottom-up integration:
This type of testing begins with unit testing, followed by tests of progressively higher level
combinations of units/programs called modules.
Top-Down integration:
In this testing, the highest/top level modules are first tested and then progressively lower-level
modules are tested.
In a comprehensive software development environment, bottom-up testing is usually done first,
followed by top-down testing. The process concludes with multiple tests of the complete application,
preferably in scenarios designed to mimic the environment that will be encountered on the customers'
computers, systems and networks.
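Continuing the banking example, an integration test checks that two units that were tested separately also work together. The sketch below, in Python, uses hypothetical Account and StatementPrinter modules; it passes data from one module to the other and verifies the combined result:

import unittest

class Account:
    # Lower-level module: holds the balance.
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class StatementPrinter:
    # Higher-level module: formats information obtained from Account.
    def summary(self, account):
        return "Current balance: %d INR" % account.balance

class TestAccountStatementIntegration(unittest.TestCase):
    def test_deposit_is_reflected_in_statement(self):
        account = Account(balance=1000)
        account.deposit(2000)
        printer = StatementPrinter()
        # The statement must reflect the value passed on by the Account module.
        self.assertEqual(printer.summary(account), "Current balance: 3000 INR")

if __name__ == "__main__":
    unittest.main()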
System Testing
This is the next level in testing in which we test the whole system. Once all the components are
integrated, the entire application is tested rigorously to see whether it meets the quality standards.
This type of testing is usually performed by a specialized testing team.
System testing is very important because of the following reasons:

1. System testing is the first level of testing at which the application is tested as a whole.
2. The application is tested thoroughly to verify that it meets the functional and technical
specifications.
3. The application is tested in a test environment that is very close to the production environment
in which it will later be deployed.
4. System testing enables us to test, verify and validate both the business requirements and the
application's architecture.
Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the quality assurance team,
who gauge whether the application meets the intended specifications and satisfies the client's
requirements. The QA team has a set of pre-written scenarios and test cases that are used to test the
application. Alpha testing is a form of internal acceptance testing performed mainly by in-house
software QA and testing teams. Beta testing is the final testing phase, in which companies release the
software to a few external user groups outside the company's test teams and employees.
More ideas are shared about the application, and more tests can be performed on it to judge its
accuracy against the reasons for which the project was initiated. Acceptance testing is done not only
to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any
bugs in the application that would result in system crashes or major errors at a later stage. By
performing acceptance tests on an application, the testing team gets to know how the application will
perform in the production environment. There may also be legal and contractual requirements for
acceptance of the system; in such cases the application should satisfy those requirements before it
is accepted by the client.
Non-Functional Testing:
Non-functional testing means testing the application against the client's performance and other
non-functional requirements. It is done based on the requirements and test scenarios given by the
client.

Non-Functional Testing covers:


Load and Performance Testing
Stress & Volume Testing
Compatibility & Migration Testing
Data Conversion Testing
Operational Readiness Testing
Security Testing
Performance Testing

You will learn more about non-functional testing once you work on projects.

7.5. Test Techniques - Black Box Testing, White Box Testing

Black box testing
Black box testing takes an external perspective of the test object (the entire application) to derive
test cases. These tests can be functional or non-functional, though they are usually functional. The
test designer selects valid and invalid inputs and determines the correct outputs. There is no
knowledge of the test object's internal structure, which means the tester has no idea of the
application's code but knows how the application behaves in response to a particular input.
The black box testing technique is applicable to all levels of software testing: unit, integration,
functional, system and acceptance testing. The higher the level, the bigger and more complex the box
becomes, and one is forced to use black box testing to simplify. However, one can never be 100 percent
sure that all existing paths have been tested.

Black box testing is a testing strategy rather than a single type of testing, and it does not require
any knowledge of the internal design or code. As the name "black box" suggests, the tester does not
need any knowledge of the internal logic or code structure. This kind of testing is entirely focused
on testing the requirements and functionality of the product/software application. Black box testing
is also called "opaque testing", "functional/behavioural testing" or "closed box testing".
In order to apply the black box testing strategy, the tester should be thorough with the requirement
specifications of the system and, as a user, should know how the system should behave in response to
a particular action. Some of the black box testing methods are boundary value analysis, equivalence
partitioning, etc.

Example: Equivalence Partitioning and Boundary Value Analysis

We will illustrate the two methods mentioned above with an example.
A store has introduced discounts for its customers based on their purchase amounts; the discount
depends on the purchase amount range and on whether the customer is a member. We have to test this
system.
To find test cases based on equivalence partitioning, we first need to identify the variables (the
inputs and outputs of the system) and then partition them based on their input ranges.
Inputs: purchase amount and customer type.
Output: discounted bill amount.
Variable 1: Purchase_Amount (p_amt)
The total input range is anything that can be typed on the keyboard, but the valid range is numbers
only. Partitioning this gives the following classes:

Non-numeric - invalid class (C1)
Numeric:
    Zero or negative (<= 0) - invalid class, since a purchase amount of 0 or less is not a valid input (C2)
    Positive (> 0):
        [1..499] - valid class (C3)
        [500..4999] - valid class (C4)
        [5000..MAX] - valid class (C5)
        > MAX - invalid class (C6)

Variable 2: Customer Type (cust_type)

M (member) - valid class (C1)
NM (non-member) - valid class (C2)
So test cases are formed by combining the classes of the two variables.

Now for boundary value analysis:

Pick the valid classes; their boundary values are:

Purchase amount: the boundaries of the valid ranges, i.e. 1, 499, 500, 4999, 5000 and MAX.
Customer type: [M, NM]

Test cases can then be formed from the permutations and combinations of the boundary values of the
two variables.
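A sketch of how these classes and boundaries could be turned into executable test cases is shown below, in Python. The compute_discounted_bill function, the MAX_AMOUNT limit and the discount rate used inside it are hypothetical placeholders, since the actual discount table is not reproduced here:

import unittest

MAX_AMOUNT = 100000   # hypothetical upper limit for a purchase amount

def compute_discounted_bill(p_amt, cust_type):
    # Hypothetical implementation of the store's billing rule.
    # The real percentages come from the store's discount table;
    # a flat 10% member discount is used here purely as a placeholder.
    if not isinstance(p_amt, (int, float)) or p_amt <= 0 or p_amt > MAX_AMOUNT:
        raise ValueError("invalid purchase amount")
    if cust_type not in ("M", "NM"):
        raise ValueError("invalid customer type")
    discount = 0.10 if cust_type == "M" else 0.0
    return p_amt * (1 - discount)

class TestDiscountPartitions(unittest.TestCase):
    def test_valid_classes(self):
        # One representative value from each valid class C3, C4, C5,
        # combined with both customer types.
        for p_amt in (250, 2500, 50000):
            for cust_type in ("M", "NM"):
                bill = compute_discounted_bill(p_amt, cust_type)
                self.assertLessEqual(bill, p_amt)

    def test_invalid_classes(self):
        # Representatives of the invalid classes: non-numeric, zero/negative, > MAX.
        for p_amt in ("abc", 0, -10, MAX_AMOUNT + 1):
            with self.assertRaises(ValueError):
                compute_discounted_bill(p_amt, "M")

    def test_boundary_values(self):
        # Boundary value analysis on the valid ranges.
        for p_amt in (1, 499, 500, 4999, 5000, MAX_AMOUNT):
            for cust_type in ("M", "NM"):
                compute_discounted_bill(p_amt, cust_type)  # must not raise

if __name__ == "__main__":
    unittest.main()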
Advantages of Black Box Testing

The tester can be non-technical.
It can be used to find contradictions between the actual system and the specifications.
Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing

Test inputs must be drawn from a large sample space, i.e. from a huge set of data, which takes
time. It is also difficult to identify all possible inputs in a limited testing time, so writing
test cases can be slow and difficult.
There is a higher chance of paths remaining unidentified and untested during this kind of testing.

White box testing

White box testing is a testing method, often applied in security testing, that can be used to validate
whether the implemented code follows the intended design, to validate necessary security functionality,
and to find other vulnerabilities.
The first step in planning the white box testing is to develop a test strategy which is based on risk
analysis. The purpose of a test strategy is to clarify the major activities involved, key decisions to
be made, and challenges faced in the testing effort. This includes identifying testing scope, testing
techniques, coverage metrics, test environment, and skill of the testing resources.

The test strategy must account for the fact that time and budget constraints prevent testing each and
every component of a software system, and it should balance test effectiveness against test efficiency
based on the risks to the system. The required level of effectiveness depends on how the software is
used and on the consequences of its failure: the higher the cost of failure, the more rigorous and
sophisticated the testing approach must be. Risk analysis provides the right context and information
to derive the test strategy.
Advantages of White Box Testing:

There is no need to wait for the user interface (UI) to be completed before starting white box
testing of the application.
It covers all possible paths of the code, which helps ensure thorough testing.
It helps in checking coding standards.
The tester can question the implementation of each section, so it may be possible to remove
unused/dead lines of code; this also helps reduce the number of test cases to be executed during
black box testing.
As the tester is aware of the internal code structure, it is easier to derive which type of input
data is needed to test the software application effectively.
White box testing helps in code optimization.

Disadvantages of White Box Testing:

A highly skilled resource with good knowledge of the internal structure of the code is required to
carry out the testing, which increases the cost.
Test scripts have to be updated whenever the requirements change, which can happen frequently.
If the application to be tested is large, exhaustive testing is impossible.
It is not possible to test each and every path/condition of the program, so defects in the code may
be missed.
White box testing is a very expensive type of testing.
Testing each path or condition may require different input conditions, so in order to test the full
application the tester needs to create a range of inputs, which can be time consuming.

How do you perform White Box Testing?


Consider the example of finding the sum of two numbers.
Pseudocode:
Read a.
Read b.
sum = a + b.
print sum.
We check code for two types of errors:

Syntactic Errors.
Logical Errors

Syntactic Errors:
Every programming language has its own grammar rules; violations of these rules are syntactic errors.
For example, in the C language a semicolon is mandatory at the end of each statement.
Logical Errors:
A logical error is an error in the logic applied. For instance, to add two numbers a and b, a developer
may mistakenly write a - b instead of a + b; that is a logical error. Code is verified for both
syntactic and logical errors by passing in multiple sets of data.
From the above example we can see that performing white box testing involves two basic steps. The
following points explain what testers do in the white box testing technique:

Step 1) Understand the source code


The first thing a tester will do is to learn and understand the source code of the application. Since
white box testing involves the testing of the inner workings of an application, the tester must have
good knowledge in the programming languages used in the applications they are testing. Also, the
tester must be highly aware of secure coding practices. Security is often one of the primary
objectives of testing software. The tester should be able to find security issues and prevent attacks
from hackers and inexperienced users who might inject malicious code into the application either
knowingly or unknowingly.
Step 2) Create test cases and execute them
The second step in white box testing involves testing the application's source code for proper flow
and structure. One way of achieving this is by writing more code to test the application's source
code. The tester develops small tests for each process or series of processes in the application.
This method requires that the tester have thorough knowledge of the source code, and it is often done
by the developers. Other methods include manual testing, trial-and-error testing, the use of testing
tools, and so on.
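As a concrete sketch of step 2, the Python test below exercises the "sum of two numbers" example above; if a developer had mistakenly written a - b instead of a + b, the assertions would fail and expose the logical error:

import unittest

def add(a, b):
    # Unit under test. A logical error such as "return a - b"
    # would be caught by the assertions below.
    return a + b

class TestAddWhiteBox(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_and_zero(self):
        # Inputs chosen by looking at the code: a - b and a + b only
        # differ when b is non-zero, so b = 0 alone is not enough.
        self.assertEqual(add(-2, 3), 1)
        self.assertEqual(add(5, 0), 5)

if __name__ == "__main__":
    unittest.main()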

7.6. When to start and stop testing


When to Start Testing?
An early start to testing reduces cost and rework time and helps deliver error-free software to the
client. In the Software Development Life Cycle (SDLC), testing can start as early as the Requirements
Gathering phase and continue until the deployment of the software. However, this also depends on the
development model being used. In the Waterfall model, one of the commonly used development models,
formal testing is conducted in the testing phase, whereas in an incremental model testing is performed
at the end of every increment/iteration and, at the end, the entire application is tested again.
Testing is done in different forms at every phase of the SDLC, as given below:

During the requirements gathering phase, the analysis and verification of requirements is also
considered testing.
In the design phase, reviewing the design with the intent of improving it is also considered testing.
Testing performed by a developer on completion of the code is a form of unit testing.

When to Stop Testing?
It is more difficult to determine when to stop testing than when to start it, as testing is a
never-ending process and no one can claim that any software is 100% tested. Testing should be
stopped when it has met the completion criteria.
How can we find the completion criteria? Completion criteria can be derived from the test plan and
the test strategy document, and also by rechecking the test coverage.

Completion criteria should usually be based on risk. Testing should be stopped when:

Test cases have been completed with a certain percentage passed and the required test coverage has
been achieved.
There are no known critical bugs open after testing.
Coverage of code, functionality or requirements has reached a specified point.
The bug rate falls below a certain level, which means the testers are no longer finding priority 1
(highest priority), priority 2 or priority 3 (low priority) bugs.
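As a rough sketch, criteria like these can even be expressed as a simple automated check; the thresholds and parameter names below are illustrative assumptions, not fixed industry values:

def can_stop_testing(passed, executed, coverage, open_critical_bugs,
                     min_pass_rate=0.95, min_coverage=0.90):
    # Illustrative exit-criteria check combining pass rate, coverage and
    # open critical bugs; the thresholds are project-specific.
    pass_rate = passed / executed if executed else 0.0
    return (pass_rate >= min_pass_rate
            and coverage >= min_coverage
            and open_critical_bugs == 0)

# Example: 96% of test cases passed, 92% coverage, no critical bugs open.
print(can_stop_testing(passed=96, executed=100, coverage=0.92,
                       open_critical_bugs=0))   # True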

As testing is a never-ending process, we can never assume that 100% testing has been done; we can
only minimize the risk of delivering the product to the client/customer by testing it. The risk can
be measured by risk analysis, but for short-duration / low-budget / low-resource projects, the risk
can be estimated simply by considering the points below:

Measured test coverage
Number of test cycles
Number of high-priority bugs open

7.7. Debugging
In computing, debugging is the process of finding and fixing or bypassing bugs (errors) in program
source code or in the engineering of a hardware device. To debug a program or hardware device, the
programmer starts with a problem, isolates the source of that problem, and then fixes it. A user of a
program who does not know how to fix a problem must learn enough about it to be able to work around
it until it is given a permanent fix. When someone says they have debugged a program or "worked the
bugs out" of a program, they mean that they have fixed the problems and the bugs no longer exist in
the application.
For any new software or hardware development process, debugging is a necessary step, whether the
product is a commercial product or an enterprise or personal application program. For complex
products, debugging is done at the unit test of the smallest unit of a system, again at component or
module test when parts are brought together, then at system test when the product is used together
with other existing products, and finally during the customer beta test (explained earlier), where
users try the product out in a real-world situation. As most computer programs and programmed
hardware devices contain thousands of lines of code, almost any new product is likely to contain a
few bugs.

Debugging tools (called debuggers) help identify coding errors at various development stages. Some
programming language packages include a facility for checking the code for errors as it is being
written.
Some of the debugging tools available are GDB (for Unix/C++), Expeditor (MF) and so on. Please do a
search on Google for some of the most commonly used debugging tools in the software industry.
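For instance, Python ships with a built-in debugger, pdb; the short sketch below shows one common way of using it (this is offered only as one example of a debugger, alongside tools such as GDB mentioned above):

import pdb

def average(values):
    total = sum(values)
    # Pause execution here and inspect variables interactively:
    # commands such as p total, n (next), s (step) and c (continue)
    # can then be used at the (Pdb) prompt.
    pdb.set_trace()
    return total / len(values)

print(average([10, 20, 30]))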
Links :

http://www.worldcolleges.info/College/EBooks/download/software%20testing%20life%20cycle(STLC).pdf
http://www.ipl.com/pdf/p0820.pdf
http://www.cs.swan.ac.uk/~csmarkus/CS339/presentations/20061202_Oladimeji_Levels_of_Testing.pdf
