
Black box testing takes an external perspective of the test object to derive test cases. These tests can be functional or non-functional, though usually functional.


The test designer selects valid and invalid input and determines the correct
output. There is no knowledge of the test object's internal structure.
This method of test design is applicable to all levels of software testing: unit, integration, functional, system and acceptance. The higher the level, and hence the bigger and more complex the box, the more one is forced to use black box testing to simplify. While this method can uncover unimplemented parts of the specification, one cannot be sure that all existing paths are tested.
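
For illustration, black-box test cases at the unit level can be derived purely from a specification. Here is a minimal sketch, assuming a hypothetical is_leap_year function specified to follow the Gregorian calendar rules; Python's calendar.isleap stands in as the implementation under test:

    # Black-box tests for a hypothetical is_leap_year(year) function.
    # Cases come from the specification alone; the tests know nothing
    # about how the function is implemented internally.
    import unittest
    from calendar import isleap as is_leap_year  # stand-in implementation

    class TestIsLeapYear(unittest.TestCase):
        def test_year_divisible_by_4_is_leap(self):
            self.assertTrue(is_leap_year(2024))

        def test_century_year_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_year_divisible_by_400_is_leap(self):
            self.assertTrue(is_leap_year(2000))

        def test_ordinary_year_is_not_leap(self):
            self.assertFalse(is_leap_year(2023))

    if __name__ == "__main__":
        unittest.main()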

Regression testing is any type of software testing that seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the way that was planned. Typically, regression bugs occur as an unintended consequence of program changes.
Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.
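
As a minimal sketch, a regression test is often pinned to a previously fixed fault, so that re-running the suite reveals whether the fault has re-emerged; the bug number and function below are hypothetical:

    # Regression test pinned to a previously fixed fault (hypothetical
    # bug #123: quantity 0 used to be charged as quantity 1). If a later
    # change reintroduces the fault, this test fails again.
    import unittest

    def total_price(unit_price, quantity):
        return unit_price * quantity

    class TestBug123Regression(unittest.TestCase):
        def test_zero_quantity_costs_nothing(self):
            self.assertEqual(total_price(9.99, 0), 0)

    if __name__ == "__main__":
        unittest.main()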

Gray Box Testing: Gray box testing is a combination of black box and white box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system.

In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge.

For example, consider a hypothetical case wherein you have to test a web application. The functionality of this web application is very simple: you just need to enter your personal details, like email and field of interest, on the web form and submit the form. The server gets these details and, based on the field of interest, picks some articles and mails them to the given email address. Email validation happens on the client side using JavaScript.

In this case, in the absence of implementation detail, you might test the web form with valid/invalid mail IDs and different fields of interest to make sure that the functionality is intact.

But if you know the implementation details, you know that the system is making the following assumptions:
• The server will never get an invalid mail ID
• The server will never send mail to an invalid ID
• The server will never receive a failure notification for this mail
So as part of gray box testing, in the above example you will have a test case for clients where JavaScript is disabled. This could happen for any number of reasons, and if it happens, validation cannot take place on the client side. In that case, the assumptions made by the system are violated and:
• The server will get an invalid mail ID
• The server will send mail to an invalid mail ID
• The server will receive a failure notification
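
A test for this scenario might also bypass the client entirely and submit an invalid address straight to the server; in the following sketch the endpoint, field names and expected status code are all assumptions:

    # Gray-box test: post an invalid email directly to the server,
    # bypassing the client-side JavaScript validation. The URL, form
    # fields and expected status code are hypothetical.
    import requests

    def test_server_rejects_invalid_email():
        response = requests.post(
            "http://example.com/subscribe",  # hypothetical endpoint
            data={"email": "not-an-email", "interest": "testing"},
        )
        # The server must not rely on client-side validation alone.
        assert response.status_code == 400  # assumed rejection code
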
Hope you understood the concept of gray box testing and how it can be used to
create different test cases or data points based on the implementation details of
the system.

Integration Testing: The objective of integration testing is to make sure that the interaction of two or more components produces results that satisfy the functional requirements. In integration testing, test cases are developed with the express purpose of exercising the interfaces between the components.
Integration testing can also be treated as testing the assumptions of fellow programmers. During the coding phase, lots of assumptions are made: about how you will receive data from different components and how you have to pass data to different components. These assumptions are not exercised during unit testing, so a purpose of integration testing is to make sure that they are valid. There are many reasons for integration to go wrong, for example:
• Interface Misuse - A calling component calls another component and makes an error in its use of the interface, probably by calling/passing parameters in the wrong sequence.
• Interface Misunderstanding - A calling component makes assumptions about the other component's behavior which are incorrect.

Integration testing can be performed in four different ways, based on where you start testing and in which direction you progress:
• Big Bang Integration Testing
• Top Down Integration Testing
• Bottom Up Integration Testing
• Hybrid Integration Testing
Top-down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration, each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Alternatively, breadth-first integration proceeds by refining all the modules at the same level of control throughout the application. In practice, a combination of the two techniques would be used.
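
As an illustrative sketch of this idea, a higher-level component can be exercised while a lower-level dependency is still a stub, and the interface contract checked explicitly; all names here are hypothetical:

    # Top-down integration sketch: OrderService is real, the payment
    # gateway beneath it is still a stub. All names are hypothetical.
    from unittest.mock import Mock

    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            # Interface contract under test: amount is passed in cents.
            return self.gateway.charge(amount_cents=amount)

    def test_order_service_charges_gateway():
        stub_gateway = Mock()
        stub_gateway.charge.return_value = "ok"
        service = OrderService(stub_gateway)
        assert service.place_order(500) == "ok"
        # Verify the contract: correct parameter name and value.
        stub_gateway.charge.assert_called_once_with(amount_cents=500)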

Entry Criteria
The main entry criterion for integration testing is the completion of unit testing. If individual units have not been tested properly for their functionality, integration testing should not be started.

Exit Criteria

Integration testing is complete when you have made sure that all the interfaces where components interact with each other are covered. It is important to cover negative cases as well, because components might make assumptions about the data they exchange.

Smoke Testing:
Smoke testing is done as soon as the application is deployed. The smoke test is the entry point for the entire test execution: only when the application passes the smoke test can further system testing or regression testing be carried out.

In general, smoke testing is done whenever a newer version of the build is deployed, and it is repeated every time the build is deployed. In smoke testing, the main functionalities are tested and the stability of the system is validated.
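
A minimal smoke suite might do no more than confirm that the main pages respond after deployment; in this sketch the base URL and paths are placeholders:

    # Smoke test sketch, run after every deployment: check that the
    # main entry points respond at all before deeper testing begins.
    import requests

    BASE = "http://example.com"  # placeholder base URL

    def test_home_page_responds():
        assert requests.get(BASE + "/", timeout=10).status_code == 200

    def test_login_page_responds():
        assert requests.get(BASE + "/login", timeout=10).status_code == 200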

Sanity Testing:
Sanity testing is similar to smoke testing, but with some minor differences. Sanity testing is done when the application is deployed into testing for the very first time; also, whereas smoke testing validates only positive scenarios, sanity testing validates both positive and negative scenarios.

For example, if the new software is crashing systems every 5 minutes, bogging
down systems to a crawl, or destroying databases, the software may not be in a
'sane' enough condition to warrant further testing in its current state.

Ad-hoc testing
Testing carried out using no recognised test case design technique (a method used to determine test cases). Here the testing is done based on the tester's knowledge of the application: he tests the system randomly, without any test cases, specifications or requirements.

Security Testing: Security testing is very important in today's world because of the way computers and the Internet have affected individuals and organizations. Today, it is very difficult to imagine a world without the Internet and modern communication systems. These communication systems increase the efficiency of individuals and organizations many times over.

Since everyone, from individuals to organizations, uses the Internet or communication systems to pass information, do business and transfer money, it becomes very critical for the service provider to make sure that information and networks are secured from intruders.

The primary purpose of security testing is to identify vulnerabilities and subsequently repair them. Typically, security testing is conducted after the system has been developed, installed and made operational. Unlike other types of testing, network security testing is performed on the system on a periodic basis, to make sure that all the vulnerabilities of the system are identified.

Network security testing can be further classified into the following types:
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• File Integrity Checkers
• Virus Detection
• War Dialing
• Penetration Testing

None of these tests provides a complete picture of network security. You will need to perform a combination of these techniques to ascertain the status of your network testing activities.
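
As a small sketch of the first technique, network scanning at its simplest probes which TCP ports on a host accept connections; the host and port range below are placeholders, and such scans should only be run against systems you are authorized to test:

    # Minimal TCP port-scanning sketch. The host is a placeholder
    # (an RFC 5737 documentation address); scan only with permission.
    import socket

    HOST = "192.0.2.10"

    for port in range(20, 1025):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            # connect_ex returns 0 when the port accepts the connection
            if sock.connect_ex((HOST, port)) == 0:
                print(f"port {port} open")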

Apart from network security testing, you should also take care of application security testing. Intruders can target specific applications for unauthorized access or for other malicious reasons. This becomes even more critical for web applications, because of their visibility and accessibility through the Internet. Web application security testing is covered in a different section.

System testing: System testing is probably the most important phase of the complete testing cycle. This phase is started after the completion of the other phases, like unit, component and integration testing. During the system testing phase, non-functional testing also comes into the picture: performance, load, stress and scalability testing are all performed in this phase.

By definition, system testing is conducted on the complete, integrated system and in a replicated production environment. System testing also evaluates the system's compliance with both functional and non-functional requirements.
It is very important to understand that not many test cases are written for system testing. Test cases for system testing are derived from the architecture/design of the system, from end-user input and from user stories. It does not make sense to exercise extensive testing in the system testing phase, as most of the functional defects should have been caught and corrected during earlier testing phases.

Utmost care is exercised with the defects uncovered during the system testing phase, and a proper impact analysis should be done before fixing a defect. Sometimes, if the business permits, defects are just documented and mentioned as known limitations instead of being fixed.

Progress of system testing also instills and builds confidence in the product teams, as this is the first phase in which the product is tested in a production-like environment.

The system testing phase also prepares the team for more user-centric testing, i.e. User Acceptance Testing.

Entry Criteria
• Unit, component and integration testing are complete
• Defects identified during these test phases are resolved and closed by the QE team
• Teams have sufficient tools and resources to mimic the production environment
Exit Criteria
• Test case execution reports show that functional and non-functional requirements are met.
• Defects found during system testing are either fixed after a thorough impact analysis or are documented as known limitations.


Performance test:
Load test
Load tests are performance tests which are focused on determining or validating
performance characteristics of the product under test when subjected to workload
models and load volumes anticipated during production operations.
What are the benefits?
It helps to:
• Evaluate the adequacy of a load balancer.
• Detect functionality errors under load.
• Determine the scalability of the application, or support capacity planning, as the need may be.
What risks does it address?
• How many users can the application handle before “bad stuff” happens?
• How much data can my database/file server handle?
• Are the network components adequate?
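
A minimal load-test driver can be sketched with a thread pool that fires a fixed number of concurrent requests and summarizes response times; the endpoint and user count below are assumptions:

    # Load-test driver sketch: N concurrent virtual users, one request
    # each, then a summary of response times and server errors.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://example.com/api/articles"  # hypothetical endpoint
    USERS = 50                               # concurrent virtual users

    def one_request(_):
        start = time.monotonic()
        response = requests.get(URL, timeout=10)
        return response.status_code, time.monotonic() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_request, range(USERS)))

    times = sorted(t for _, t in results)
    errors = sum(1 for status, _ in results if status >= 500)
    print(f"median {times[len(times) // 2]:.3f}s, server errors {errors}/{USERS}")
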
Stress test
Stress tests are performance tests focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes beyond those anticipated during production operations.

These tests are all about determining under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.

What are the benefits?
It helps in:
• Determining if data can be corrupted by over-stressing the system
• Estimating how far beyond the target load an application can go before causing failures and errors in addition to slowness
• Establishing application monitoring triggers to warn of impending failures
• Ensuring that security holes are not opened up by stressful conditions
• Determining the side effects of common hardware or supporting-application failures
What risks does it address?
• What happens if we underestimated the peak load?
• What kind of failures should we plan for?
• What indicators should we be looking for to intervene prior to failure?
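
A stress test can be sketched as a ramp that keeps doubling the concurrency until the error rate crosses a threshold, locating the approximate breaking point; the endpoint, threshold and cap below are assumptions:

    # Stress-test sketch: double the concurrent load until more than 5%
    # of requests fail, to find where the application starts to break.
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://example.com/api/articles"  # hypothetical endpoint

    def error_rate(users):
        def hit(_):
            try:
                return requests.get(URL, timeout=10).status_code >= 500
            except requests.RequestException:
                return True  # timeouts and connection errors count as failures
        with ThreadPoolExecutor(max_workers=users) as pool:
            return sum(pool.map(hit, range(users))) / users

    users = 10
    while error_rate(users) < 0.05 and users < 100_000:  # 5% threshold, safety cap
        users *= 2
    print(f"failure rate exceeded 5% at roughly {users} concurrent users")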

Equivalence Class Partitioning (ECP): An equivalence class partitioning approach divides the input domain of the software to be tested into a finite number of partitions, or equivalence classes. This method can be used to partition the output domain as well, but that is not commonly done.
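
As a sketch, suppose an input field is specified to accept integers from 1 to 100; ECP then tests one representative per class instead of every possible value (the accepts function is a hypothetical stand-in for the software under test):

    # ECP sketch for a hypothetical accepts(n) that should return True
    # only for integers 1..100. One representative is chosen per class.
    def accepts(n):
        return 1 <= n <= 100

    assert accepts(50) is True    # valid class: 1 <= n <= 100
    assert accepts(-5) is False   # invalid class: n < 1
    assert accepts(250) is False  # invalid class: n > 100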

Boundary value analysis (BVA) is a technique to find whether the application accepts the expected range of values and rejects values which fall outside that range.

Example: a user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
BVA is done like this:
max = 10: pass
max-1 = 9: pass
max+1 = 11: fail
min = 4: pass
min+1 = 5: pass
min-1 = 3: fail

Likewise, we check the corner values and determine whether the application accepts the correct range of values.
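
The corner checks above translate directly into test data; in this sketch the validator for the user ID example is a hypothetical stand-in:

    # BVA sketch for the user ID example: alphabetic, length 4 to 10.
    import re

    def valid_user_id(s):  # hypothetical validator under test
        return re.fullmatch(r"[a-z]{4,10}", s) is not None

    assert valid_user_id("a" * 10)      # max = 10: pass
    assert valid_user_id("a" * 9)       # max-1 = 9: pass
    assert not valid_user_id("a" * 11)  # max+1 = 11: fail
    assert valid_user_id("a" * 4)       # min = 4: pass
    assert valid_user_id("a" * 5)       # min+1 = 5: pass
    assert not valid_user_id("a" * 3)   # min-1 = 3: fail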

Sanity testing: Sanity testing checks whether the product or project is sane enough (marked by sound judgment) to proceed: just checking the major functionality, the navigation, and whether screens display or not.

Retesting: Retesting is a type of testing performed to check the functionality of an application, using different inputs, after fixes have been made for the bugs recorded during earlier testing.

Fuzz testing or fuzzing: Fuzz testing is a software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.
The great advantage of fuzz testing is that the test design is extremely simple,
and free of preconceptions about system behavior. Fuzz testing was developed at
the University of Wisconsin-Madison in 1989 by Professor Barton Miller and the
students in his graduate Advanced Operating Systems class.
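
A minimal fuzzer can be sketched in a few lines: generate random bytes, feed them to the program under test, and record anything other than a clean, handled rejection; here json.loads stands in for the program under test:

    # Fuzzing sketch: random byte strings are fed to a parser. A clean
    # ValueError is a handled rejection; anything else is noted as a
    # possible defect. json.loads is a stand-in target.
    import json
    import random

    def fuzz_once(max_len=100):
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(1, max_len)))
        try:
            json.loads(data)
        except ValueError:
            pass  # expected, handled rejection of malformed input
        except Exception as exc:
            print(f"possible defect: {type(exc).__name__} on {data!r}")

    for _ in range(10_000):
        fuzz_once()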

Bug Life Cycle & Guidelines


In this tutorial you will learn about Bug Life Cycle & Guidelines, Introduction, Bug
Life Cycle, The different states of a bug, Description of Various Stages, Guidelines
on deciding the Severity of Bug, A sample guideline for assignment of Priority
Levels during the product test phase and Guidelines on writing Bug Description.

Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on the software. A bug is a specific concern about the quality of the Application under Test (AUT).

Bug Life Cycle:


In the software development process, a bug has a life cycle. The bug should go through the life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.

The different states of a bug can be summarized as follows:


1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Various Stages:


1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to “OPEN”.
3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.
4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. This specifies that the bug has been fixed and released to the testing team.
5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. A bug may be deferred for many reasons: its priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. The state of the bug is then changed to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after being fixed by the developer, the tester changes the status to “REOPENED”. The bug then traverses the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that
the bug no longer exists in the software, he changes the status of the bug to
“CLOSED”. This state means that the bug is fixed, tested and approved.
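
One way to make the life cycle concrete is as a transition table that rejects illegal state changes; the transitions below are a sketch based on the stage descriptions above, and the exact set varies by organization:

    # Bug life cycle as a transition table (a sketch; transitions are
    # inferred from the stage descriptions and vary by organization).
    TRANSITIONS = {
        "NEW":       {"OPEN", "REJECTED", "DUPLICATE"},
        "OPEN":      {"ASSIGN"},
        "ASSIGN":    {"TEST", "DEFERRED"},
        "TEST":      {"VERIFIED", "REOPENED"},
        "DEFERRED":  {"ASSIGN"},
        "REOPENED":  {"ASSIGN"},
        "VERIFIED":  {"CLOSED"},
        "DUPLICATE": set(),
        "REJECTED":  set(),
        "CLOSED":    set(),
    }

    def move(current, new):
        if new not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {new}")
        return new

    state = move("NEW", "OPEN")      # lead approves the bug
    state = move(state, "ASSIGN")    # assigned to a developer
    state = move(state, "TEST")      # fix released to the testing team
    state = move(state, "VERIFIED")  # tester confirms the fix
    state = move(state, "CLOSED")    # bug fixed, tested and approved
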
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

Guidelines on deciding the Severity of Bug:


Severity indicates the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects.
A sample guideline for the assignment of priority levels during the product test phase:
1. Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a missing security permission required to access a function under test.
2. Major / High — A defect that does not function as expected/designed, or that causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations and the wrong field being updated.
3. Average / Medium — Defects which do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve the functionality objectives. Examples include matching visual and text links which lead to different end points.
4. Minor / Low — Cosmetic defects which do not affect the functionality of the system can be classified as Minor bugs.
