
Integration Testing

The objective of integration testing is to make sure that the interaction of two or more components produces
results that satisfy the functional requirements. In integration testing, test cases are developed with the express
purpose of exercising the interfaces between the components. Integration testing can also be treated as testing
the assumptions of fellow programmers. During the coding phase, many assumptions are made: assumptions
about how data will be received from other components and how data must be passed to them.
Unit testing does not exercise these assumptions, so one purpose of integration testing is to make sure that
they are valid. Integration can go wrong for several reasons, among them:
Interface Misuse: a calling component calls another component and makes an error in its use of the interface,
for example by calling or passing parameters in the wrong sequence.

Interface Misunderstanding: a calling component makes incorrect assumptions about another component's
behavior.
Integration testing can be performed in four different ways, based on where you start testing and in
which direction you progress:

Big Bang Integration Testing

Top Down Integration Testing

Bottom Up Integration Testing

Hybrid Integration testing

Top-down testing can proceed in a depth-first or a breadth-first manner. In depth-first integration, each
module is tested in increasing detail, replacing more and more levels of stubs with actual code. Breadth-first
integration instead refines all the modules at the same level of control throughout the application. In practice
a combination of the two techniques is used.
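As a sketch of how stubs support top-down integration (the component names and prices here are hypothetical), a lower-level pricing module can be replaced with a canned stub while the top-level component is exercised, before the real pricing code is integrated:

```python
from unittest.mock import Mock

# Hypothetical top-level component that depends on a lower-level
# pricing module which is not yet integrated.
class OrderProcessor:
    def __init__(self, pricing):
        self.pricing = pricing  # lower-level dependency

    def total(self, items):
        return sum(self.pricing.price_of(i) for i in items)

# Top-down step: exercise OrderProcessor with the pricing module
# replaced by a stub that returns canned values.
pricing_stub = Mock()
pricing_stub.price_of.side_effect = lambda item: {"apple": 2, "pear": 3}[item]

processor = OrderProcessor(pricing_stub)
result = processor.total(["apple", "pear"])
print(result)  # 5
```

In a later iteration the stub is swapped for the real pricing module and the same test is re-run, which is exactly the "replacing stubs with actual code" step described above.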
Entry Criteria
The main entry criterion for integration testing is the completion of unit testing. If the individual units have not
been properly tested for their functionality, integration testing should not be started.
Exit Criteria
Integration testing is complete when all the interfaces where components interact with each other have been
covered. It is important to cover negative cases as well, because components may make assumptions about
the data they receive.
Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of
little or no value if it cannot be successfully integrated with the rest of the application.
Once unit-tested components are delivered, we integrate them together and test the integrated components to
weed out errors and bugs caused by the integration. This is a very important step in the software development
life cycle.

It is possible that different programmers developed different components.


A lot of bugs emerge during the integration step.
In most cases a dedicated testing team focuses on Integration Testing.
Prerequisites for Integration Testing:
Before we begin Integration Testing it is important that all the components have been successfully unit tested.
Integration Testing Steps:
Integration Testing typically involves the following Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable create scripts to run test cases
Step 4: Once the components have been integrated, execute the test cases
Step 5: Fix any bugs and retest the code
Step 6: Repeat the test cycle until the components have been successfully integrated
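The core of steps 2 and 4 can be sketched with two hypothetical, already unit-tested components and a small piece of test data that exercises the interface between them:

```python
# Two hypothetical components, each assumed already unit tested.
def parse_record(line):
    # Component A: parse a "name, quantity" line into a record.
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

def summarise(records):
    # Component B: aggregate the quantities of parsed records.
    return sum(r["qty"] for r in records)

# Step 2: test data.  Step 4: once A and B are integrated,
# run the data through both and check the end-to-end result.
test_data = ["apple, 2", "pear, 3"]
records = [parse_record(line) for line in test_data]
total = summarise(records)
print(total)  # 5
```

The point of the test is the hand-off: component B must accept exactly the record shape component A produces, which is the kind of assumption unit testing alone does not verify.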
What is an Integration Test Plan?
As you may have read in the other articles in the series, this document typically describes one or more of the
following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
How to write an Integration Test Case?
Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from one component to
the other.
So the Integration Test cases should typically focus on scenarios where one component is being called from
another. Also the overall application functionality should be tested to make sure the app works when the
different components are brought together.
The various integration test cases clubbed together form an Integration Test Suite.
Each suite may have a particular focus; in other words, different test suites may be created to focus on
different areas of the application.
As mentioned before, a dedicated testing team may be created to execute the integration test cases. Therefore
the integration test cases should be as detailed as possible.
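A minimal sketch of grouping integration test cases into a suite, using Python's unittest; the order-to-invoice hand-off shown is a hypothetical example of one component's output feeding another:

```python
import unittest

class OrderToInvoiceTests(unittest.TestCase):
    # Hypothetical integration test: the order component's total
    # must arrive intact at the invoice component.
    def test_order_total_reaches_invoice(self):
        order = {"items": [10, 20]}
        invoice_amount = sum(order["items"])  # stand-in for the real hand-off
        self.assertEqual(invoice_amount, 30)

# Different suites can be created to focus on different areas
# of the application; this one focuses on billing.
suite = unittest.TestSuite()
suite.addTest(OrderToInvoiceTests("test_order_total_reaches_invoice"))
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(suite)
print(result.wasSuccessful())
```

Because a separate team may execute the suite, each test method name and assertion should spell out exactly which interface it covers.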
Working towards Effective Integration Testing:

There are various factors that affect Software Integration and hence Integration Testing:
1) Software Configuration Management: Since integration testing focuses on the integration of components,
and components can be built by different developers and even different development teams, it is important
that the right versions of the components are tested. This may sound very basic, but the biggest problem faced
in n-tier development is integrating the right version of each component. Integration testing may run through
several iterations, and components may change as bugs are fixed. Hence it is important that a good Software
Configuration Management (SCM) policy is in place: we should be able to track the components and their
versions, so that each time we integrate the application components we know exactly what versions go into
the build process.
2) Automate the Build Process where Necessary: Many errors occur because the wrong version of a
component was sent for the build, or components are missing altogether. If possible, write a script to integrate
and deploy the components; this helps reduce manual errors.
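A pre-build version check of the kind points 1 and 2 describe might look like this sketch; the component names and version numbers are invented for illustration:

```python
# Verify that every required component is present at the expected
# version before integrating.  Names and versions are hypothetical.
required = {"auth": "2.1", "billing": "1.4", "reports": "3.0"}
delivered = {"auth": "2.1", "billing": "1.3", "reports": "3.0"}

problems = []
for name, version in required.items():
    if name not in delivered:
        problems.append(f"missing component: {name}")
    elif delivered[name] != version:
        problems.append(f"{name}: expected {version}, got {delivered[name]}")

# An automated build would stop here instead of integrating
# the wrong versions and producing misleading test results.
print(problems)  # ['billing: expected 1.4, got 1.3']
```

In practice the `required` map would come from the SCM system rather than being hard-coded; the point is that the build script, not a person, catches the mismatch.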
3) Document: Document the integration/build process to help eliminate errors of omission or oversight.
Otherwise the person responsible for integrating the components may forget to run a required script, in which
case integration testing will not yield correct results.
4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect
should be documented and tracked. Information should be captured as to how the defect was fixed. This is
valuable information. It can help in future integration and deployment processes.

Hybrid Integration Testing


Top-down and bottom-up integration testing each have their advantages and disadvantages. Top-down
integration testing follows a top-down software development process naturally, while in bottom-up testing the
most heavily used code is tested repeatedly.
The hybrid integration testing approach tries to leverage the benefits of both top-down and bottom-up
testing.
While it is important to take advantage of both approaches through hybrid integration, we need to do so
methodically. Otherwise it will be very difficult to identify which modules were tested top-down and which
were tested bottom-up, and without proper caution some modules may be missed altogether.

Regression Testing
Regression testing means retesting the unchanged parts of the application: test cases are re-executed to
check that the previous functionality of the application still works and that the new changes have not
introduced any new bugs.
It is a method of verification: verifying that the bugs are fixed and that the newly added features have not
created problems in the previously working version of the software.

Why Regression Testing?


Regression testing is initiated when a programmer fixes a bug or adds new code for new functionality to the
system. It is a quality measure to check that the new code works with the old code and that unmodified code
is not affected.
Often the testing team is tasked with checking last-minute changes to the system. In such situations, testing
only the affected application areas is necessary to complete the testing process in time while still covering all
major system aspects.
How much regression testing?
This depends on the scope of the newly added feature. If the scope of the fix or feature is large, then the
affected application area is also large and testing should be thorough, including all the application's test cases.
This can be decided effectively when the tester gets input from the developer about the scope, nature and
amount of the change.
What do we do in regression testing?

Rerunning the previously conducted tests

Comparing current results with previously executed test results.
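These two activities can be sketched as re-running a set of test cases and diffing the results against a stored baseline; the test names and the simulated failure are hypothetical:

```python
# Baseline: results from the previous test run, one entry per test case.
baseline = {"login": "ok", "checkout": "ok", "report": "ok"}

def run_test(name):
    # Stand-in for actually executing the test case; here we pretend
    # the latest change broke the report feature.
    return "fail" if name == "report" else "ok"

# Rerun every previously conducted test and compare with the baseline.
regressions = [name for name, expected in baseline.items()
               if run_test(name) != expected]
print(regressions)  # ['report']
```

Any name appearing in `regressions` marks previously working functionality that the new change has broken, which is exactly what regression testing is meant to catch.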

Regression Testing Tools:


Regression testing is the area where we can automate most of the testing effort. We re-run all the previously
executed test cases, which means a test-case set is already available, and running these test cases manually is
time consuming. Since the expected results are known, automating these test cases is a time-saving and
efficient regression testing method. The extent of automation depends on how many test cases remain
applicable over time; if test cases keep changing as the application's scope grows, then automating the
regression procedure will be a waste of time.
Most regression testing tools are of the record-and-playback type: you record the test cases by navigating
through the application under test (AUT) and verify whether the expected results appear.
Example regression testing tools are:

WinRunner

QTP

AdventNet QEngine

Regression Tester

vTest

Watir

Selenium

actiWate

Rational Functional Tester

SilkTest

Most of these tools are both functional and regression testing tools.
Regression Testing of GUI Applications:
It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI structure is modified.
Test cases written against the old GUI either become obsolete or need to be reused. Reusing them means
modifying the GUI test cases to match the new GUI, but this task becomes cumbersome if you have a large
set of GUI test cases.
Regression testing is also known as validation testing and provides a consistent, repeatable validation
of each change to an application under development or being modified. Each time a defect is fixed, the
potential exists to inadvertently introduce new errors, problems, and defects. An element of uncertainty is
introduced about the ability of the application to repeat everything that went right up to the point of failure.
Regression testing is the selective retesting of an application or system that has been modified, to
ensure that no previously working components, functions, or features fail as a result of the repairs. Regression
testing is conducted in parallel with other tests and can be viewed as a quality control tool to ensure that the
newly modified code still complies with its specified requirements and that unmodified code has not been
affected by the change. It is important to understand that regression testing doesn't test that a specific defect
has been fixed. Regression testing tests that the rest of the application up to the point of repair was not
adversely affected by the fix.
Entry and Exit Criteria for Regression
Entry Criteria
The defect is repeatable and has been properly documented
A change control or defect tracking record was opened to identify and track the regression testing effort
A regression test specific to the defect has been created, reviewed, and accepted
Exit Criteria
Results of the test show no negative impact to the application

Requirements Traceability Matrix


Requirements tracing is the process of documenting the links between the user
requirements for the system you're building and the work products developed to
implement and verify those requirements. These work products include software
requirements, design specifications, software code, test plans and other artifacts of the
systems development process. Requirements tracing helps the project team to
understand which parts of the design and code implement the user's requirements, and
which tests are necessary to verify that the user's requirements have been
implemented correctly.
The RTM Template shows the Mapping between the actual Requirement and User
Requirement/System Requirement.

For any change that happens after the system has been built, we can trace its impact on
the application through the RTM. The RTM also maps actual requirements to design
specifications, which helps us trace changes that may happen to the design document
during the development of the application. Each document is given a unique ID
associated with the particular requirement, so that the document can be traced easily.
If you want to change a requirement in the future, you can use the RTM to make the
respective changes and easily judge how many associated test scripts will change.
Elements of RTM:
a. Requirements ID
b. Requirements Description
c. Test Case ID
d. Status [Open, Closed, Defer (Later), On hold]
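A minimal sketch of an RTM carrying the elements above as data, with one forward and one backward trace; all requirement and test-case IDs are hypothetical:

```python
# Each row holds the four RTM elements: requirement ID, description,
# linked test-case IDs, and status.
rtm = [
    {"req_id": "REQ-1", "description": "User can log in",
     "test_cases": ["TC-1", "TC-2"], "status": "Closed"},
    {"req_id": "REQ-2", "description": "User can reset password",
     "test_cases": ["TC-3"], "status": "Open"},
]

# Forward trace: which test cases verify REQ-1?
forward = next(r["test_cases"] for r in rtm if r["req_id"] == "REQ-1")

# Backward trace: which requirement does TC-3 verify?
backward = next(r["req_id"] for r in rtm if "TC-3" in r["test_cases"])

print(forward, backward)  # ['TC-1', 'TC-2'] REQ-2
```

Real RTMs usually live in a spreadsheet or a requirements management tool, but the two lookups above are the backward and forward traceability the matrix exists to provide.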
Where can a Traceability Matrix be used?
Is the Traceability Matrix applicable only for big projects?
The traceability matrix is an essential part of any software development process, and
hence, irrespective of the size of the project, this concept comes into focus whenever
there is a requirement to build software.
The biggest advantage of the traceability matrix is backward and forward traceability:
at any point in the development life cycle, the status of the project and the modules
that have been tested can be determined easily, reducing speculation about the status
of the project.
Developing a Traceability Matrix
How is the Traceability Matrix developed?
In the design document, a design description A can be traced back to
requirement specification A, implying that design A takes care of requirement
A. Similarly, in the test plan, test case A takes care of testing design A, which in
turn takes care of requirement A, and so on.

There have to be references from the design document back to the requirements document,
from the test plan back to the design document, and so on.
Usually unit test cases have traceability to the design specification, and system test
cases/acceptance test cases have traceability to the requirement specification. This
helps ensure that no requirement is left uncovered (either un-designed or un-tested).
Requirements traceability enhances project control and quality. It is a process of
documenting the links between user requirements for a system and the work products
developed to implement and verify those requirements. It is a technique supporting an
objective of requirements management: making certain that the application will meet
end users' needs.
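The uncovered-requirement check that this traceability enables can be sketched as simple set differences; all the IDs below are hypothetical:

```python
# All requirements, plus the links recorded in the RTM.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
design_trace = {"REQ-1": "DES-A", "REQ-2": "DES-B"}   # requirement -> design
test_trace = {"REQ-1": ["TC-1"], "REQ-3": ["TC-4"]}   # requirement -> test cases

# A requirement with no design link is un-designed;
# one with no test link is un-tested.
undesigned = sorted(requirements - design_trace.keys())
untested = sorted(requirements - test_trace.keys())

print(undesigned, untested)  # ['REQ-3'] ['REQ-2']
```

Running such a check at each milestone turns "no requirement is left uncovered" from an aspiration into a mechanical report.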
