
Q: What is state-based testing? Why is it suitable for object-oriented testing?

State-based testing is one of the most useful object-oriented software testing techniques. It uses the concept of state machines, which model the behavior of objects: a state machine represents the various states an object is expected to visit during its lifetime in response to events or method calls.
During state-based testing, the unit testers typically perform the following steps:
Create a state transition diagram for each relevant class with logical states. Transform the state transition diagrams into transition trees, whereby the root of the tree is the start state, and each branch is made up of a series of transitions between states that terminates when a transition returns the object under test to a previous state. Create a state-based test suite for the class under test, whereby each test case corresponds to a branch of the transition tree. Each test case starts by instantiating an object under test in the start state. The test case then sends the object under test a series of test stimuli (either messages with appropriate parameters or exceptions with appropriate attributes) that are intended to walk the object under test down the corresponding branch of the transition tree. The test oracle is the state transition diagram: each test case compares each actual post-stimulus state of the object under test with the state that is predicted by the branch of the transition tree that generated the test case.
In the object-oriented testing literature, a class is considered to be the basic unit of testing. A major characteristic of classes is the interaction between data members and member functions. This interaction is represented as definitions and uses of data members in member functions and can be properly modeled with statechart diagrams. The data members represent the allowable states of an object and the member functions represent the changes of state, i.e. transitions. Therefore, classes can be specified more effectively using state diagrams.
Object-oriented technologies can reduce or eliminate some problems typical of procedural software, but they may introduce new problems that result in classes of faults hardly addressable with traditional testing techniques. In particular, state-dependent faults tend to occur more frequently in object-oriented software than in procedural software: almost all objects have an associated state, and the behavior of methods invoked on an object generally depends on the object's state. Such faults can be very difficult to reveal because they cause failures only when the objects are exercised in particular states. State-based testing therefore provides a better approach to testing such software.
(a) What are metrics? Why do we need to measure software?
Metrics can be defined as the continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products.
Need to measure software
The software industry, like every other industry, has as its main goal to deliver products and services to the market. This must be done at a price that the customer accepts as reasonable and at a quality that the customer wants. One of the main characteristics of the market is that it changes, not by itself but through the competition among those who provide the goods being sold. The changes will not happen at the same time for all industries, but only when the players can no longer expand in any other way. The main results of this competition are lower prices and higher quality, not because the producers want it this way, but because it is the only way they can survive.
The software industry, like any other industry, can improve in only one way: by understanding the product and the process that produces it. Thus, we can define the first goal of software measurement:
We need measurement in software development to improve the development process so that we can increase product quality and thus increase customer satisfaction.
In addition to this long-term need, we also need measurement in order to agree on the quality of a delivered product. The time when the customer watched the blinking lights in awe is gone forever. Customers now want quality, and as a result we need to be able to establish a common ground for specifying, developing and accepting a software product with an agreed quality. Even though we accept the ISO definition of quality as the product's degree of satisfying the customer's needs and expectations, we need something more concrete when we start to write a contract and develop a software system. The definition given in the standard ISO 9126 is a good starting point even though it concentrates on the product and leaves out important factors such as price, time of delivery and quality of service. Thus, our second need is:
We need measurement of software so that we can understand and agree on product quality.
(b) Discuss some methods of integration testing with examples.
The main approaches to integration testing are:
Top-down integration
Top-down integration testing is an incremental approach to the construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. Consider the following example:

Depth-first integration integrates all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2 and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built.
Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3 and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
Bottom-up integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e. components at the lowest levels in the program structure). Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated. Consider the following example:
Integration follows the pattern illustrated in the figure. Components are combined to form clusters 1, 2 and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.


(E) What are the major problems that we face during object-oriented testing?
1. White-box testing methods can be applied for testing the code used to implement class operations, but only to a limited extent.
2. Only black-box testing methods are appropriate for testing OO systems.
3. Object-oriented programming brings additional testing concerns:
- classes may contain operations that are inherited from superclasses
- subclasses may contain operations that were redefined rather than inherited
- all classes derived from a previously tested base class need to be thoroughly tested
(e) What is Localization Testing?
Ans: Localization of software is a complex process. There are many potential failure points in any localization endeavor, both grammatical and cultural; for this reason localization testing is an essential component in the development cycle of a successfully localized product. Localization testing consists of three primary components:
Cosmetic Testing
Functional Testing
Linguistic Testing
Cosmetic testing - a "look and feel" review. Cosmetic testing is a crucial part of the localization testing process, as unexpected problems may occur due to text expansion and other UI changes.
Functional testing - functional validation of the localized product. Functional testing is usually not necessary for document translation projects, except for interactive PDFs and online help.
Linguistic testing - validation of the translated content within context.
(b) What is class testing? Discuss the issues related to class testing.
Class Testing
Testing of a class is very significant and critical in object-oriented testing, where we want to verify the implementation of a class with respect to its specifications. If the implementation is as per the specifications, then it is expected that every instance of the class will behave in the specified way. Class testing is similar to the unit testing of a conventional system. One method of generating test cases is from the pre- and post-conditions specified in the use cases. Every method of a class has a pre-condition that needs to be satisfied before its execution. Similarly, every method of a class has a post-condition that describes the resultant state after the execution of the method. Consider a class Stack with two attributes (x and top) and three methods (Stack(), push(x) and pop()):

Stack
x: integer
top: integer
Stack()
push(x)
pop()

We should first specify the pre- and post-conditions for every operation/method of the class. We may identify requirements for all possible combinations of situations in which a pre-condition can hold and post-conditions can be achieved. We may also generate test cases to address what happens when a pre-condition is violated. We consider the Stack class given above and identify the following pre- and post-conditions for all the methods of the class:
(i) Stack::Stack()
(a) Pre: true
(b) Post: top=0
(ii) Stack::push(x)
(a) Pre: top<MAX
(b) Post: top=top+1
(iii) Stack::pop()
(a) Pre: top>0
(b) Post: top=top-1
After the identification of pre- and post-conditions, we may establish logical relationships between pre- and post-conditions. Every logical relationship may generate a test case. We consider the push() operation and establish the following logical relationships:
1. (pre-condition: top<MAX; post-condition: top=top+1)
2. (pre-condition: not (top<MAX); post-condition: exception)
Similarly, for the pop() operation, the following logical relationships are established:
3. (pre-condition: top>0; post-condition: top=top-1)
4. (pre-condition: not (top>0); post-condition: exception)


We may identify test cases for every operation/method using its pre- and post-conditions. We should generate test cases both when a pre-condition is true and when it is false; both are equally important to verify the behavior of a class. We may generate test cases for the push(x) and pop() operations as shown in the table below:
Table: Test cases of function push()

Test input | Condition | Expected output
23         | top<MAX   | Element 23 inserted successfully
34         | top=MAX   | Stack overflow
Issues related to class testing
Classes cannot always be tested in isolation; they may require additional source code (similar to stubs and drivers) to be tested independently. An operation is not a testable component in itself, i.e. it can only be tested through an instance of a class. It is impossible, unlike in procedure-oriented (traditional) testing, to build either a bottom-up or a top-down testing strategy based on a sequence of invocations, since there is no sequential order in which the operations of a class must be invoked, unless suggested by common sense (for instance, invoking the operation that creates an object before invoking any other operation of the object). Every object carries a state. The context in which an operation is executed is defined not only by its possible parameters, but mainly by the values of the features of the object on which it is invoked (i.e. its state). Furthermore, any operation can modify these values. Therefore, the behavior of an operation cannot be considered independent of the object for which it is defined.
What is a fault prediction model?
Software fault prediction models are used to identify fault-prone classes automatically before software testing. These models can reduce the testing duration, project risks, and resource and infrastructure costs. Such a model helps in planning and executing testing by focusing resources on the fault-prone parts of the design and code. A fault prediction model can also be used to identify classes that are prone to have severe faults; one can use the model with respect to high-severity faults to focus the testing on those parts of the system that are likely to cause serious failures.

(a) Explain how object-oriented testing is different from procedural testing. Explain with an example.
Ans: Procedural testing models the procedural requirements of the software system as a complete and delivered unit. Procedural requirements define what is expected of any procedural documentation and shall be written in the form of procedural instructions. These procedural instructions will normally come in the form of one of the following documents:
A User Guide
An Instruction Manual
A User Reference Manual
This information will normally define how the user is meant to:
Set up the system for normal usage
Operate the system in normal conditions
Become a competent user of the system (tutorial files)
Trouble-shoot the system when faults arise
Re-configure the system
Procedural testing shall be measured in the following ways:
Static testing - measured as the percentage of the total specified procedural requirements that have been covered by reviewed procedural instructions.
Dynamic testing - measured as the percentage of the specified procedural requirements that have been executed.

Object-oriented testing is much more similar to the way the real world works; it is analogous to the human brain. Each program is made up of many entities called objects. Objects become the fundamental units and have behavior, or a specific purpose, associated with them. Objects cannot directly access another object's data. Instead, a message must be sent requesting the data, just as people must ask one another for information; we cannot see inside each other's heads. Object-oriented testing includes testing techniques such as analysis and design testing, class tests, integration tests, validation tests, and system tests. Benefits of object-oriented programming which make it different from procedural programming are as follows:
the ability to simulate real-world events much more effectively
code is reusable, so less code may have to be written
data becomes active
better ability to create GUI (graphical user interface) applications
programmers are able to reach their goals faster
programmers are able to produce faster, more accurate and better-written applications (in the case of a veteran programmer, by a factor of as much as 20 times compared with a procedural program)
Levels of Testing
Unit testing
Integration testing
System testing
Unit testing
Reasons to support unit testing:
- It is easy to locate a bug.
- Exhaustive testing is possible up to some extent.
- Interaction of multiple errors can be avoided.
Unit testing requires overhead code for drivers and stubs, called scaffolding; scaffolding can be generated automatically by means of a test harness. It requires detailed knowledge of the internal program design; thus, unit testing is typically carried out by programmers. The tests performed can be behavioral or structural.
A sample executable test program:
- works on a table in the test database
- typical structure: Input, Expected_Result, Actual_Result

(b) Explain function-oriented metrics and compare with size-oriented metrics with examples.
Ans: Metrics measure the technical process that is used to develop a product: the process is measured to improve it and the product is measured to increase its quality. Measurements can be either direct or indirect. Direct measures are taken from a feature of an item (e.g. length). Direct measures of a product include lines of code (LOC), execution speed, memory size, and defects reported. Indirect measures associate a measure with a feature of the object being measured (e.g. quality is based upon counting rejects). Indirect measures include functionality, quality, complexity, efficiency, reliability, and maintainability. Direct measures are generally easier to collect than indirect measures. The metrics are divided into:
Size-oriented metrics, used to collect direct measures of software engineering output and quality.
Function-oriented metrics, which provide indirect measures.
Size-oriented metrics are a direct measure of software and the development process. These metrics can include:
effort (time)
money spent
KLOC (1000s of lines of code)
pages of documentation created
errors
people on the project
From this data some simple size-oriented metrics can be generated:
Productivity = KLOC / person-month
Quality = defects / KLOC
Cost = cost / KLOC
Documentation = pages of documentation / LOC
Function-Oriented Metrics: Function-oriented metrics are indirect measures of software which focus on functionality and utility. The first function-oriented metric was proposed by Albrecht (IBM, 1979), who suggested a productivity measurement approach called the function point method. Function points are derived from countable measures and assessments of software complexity. An unadjusted function point count (UFC) is calculated based on five characteristics:
1. number of user inputs
2. number of user outputs
3. number of user inquiries (on-line inputs)
4. number of files
5. number of external interfaces (tape, disk)

Q9(a) Explain the testing process for object-oriented programs.

Ans: Object-oriented systems are built out of two or more interrelated objects. The correctness of OO systems is determined by testing the methods that change or communicate the state of an object. These testing methods present in an object-oriented system are similar to testing subprograms in process-oriented systems. In order to cover the strategies and tools associated with object-oriented testing, the following tests are applied:
Analysis and Design Testing
Unit/Class Tests
Integration Tests
Validation Tests
System Tests
The entire process is:

The test case design is as follows:
1. Identify each test case uniquely; associate each test case explicitly with the class and/or method to be tested.
2. State the purpose of the test.
3. Each test case should contain:
- a list of specified states for the object that is to be tested
- a list of messages and operations that will be exercised as a consequence of the test
- a list of exceptions that may occur as the object is tested
- a list of external conditions for setup (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)
- supplementary information that will aid in understanding or implementing the test
Automated unit testing tools facilitate these requirements in object-oriented testing.

(b) Write a C program for calculation of the roots of a quadratic equation. Find all of its software science metrics.
Ans:
/* program to find the roots of a quadratic equation */
#include <stdio.h>
#include <math.h>

int main()
{
    int a, b, c;
    float r1, r2, up;
    printf("Enter value of a : ");
    scanf("%d", &a);
    printf("Enter value of b : ");
    scanf("%d", &b);
    printf("Enter value of c : ");
    scanf("%d", &c);
    up = (b*b) - (4*a*c);    /* discriminant */
    if (up > 0)
    {
        printf("\n ROOTS ARE REAL ROOTS\n");
        r1 = ((-b) + sqrt(up)) / (2*a);
        r2 = ((-b) - sqrt(up)) / (2*a);
        printf("\n ROOTS: %f, %f\n", r1, r2);
    }
    else if (up == 0)
    {
        printf("\n ROOTS ARE EQUAL\n");
        r1 = -b / (2.0*a);
        printf("\nROOT IS...: %f\n", r1);
    }
    else
        printf("\n ROOTS ARE IMAGINARY ROOTS\n");
    return 0;
}
The Halstead metrics routine computes Maurice Halstead's software science metrics for each procedure or module contained in the parameter. Altogether there are six metrics, all of which are functions of the following four variables:
n1 = the number of unique operators = 9
n2 = the number of unique operands = 6
N1 = the total number of operators = 17
N2 = the total number of operands = 24
The six software science metrics include:
Vocabulary: n = n1 + n2 (the total number of unique operators and operands)
Vocabulary = n = n1 + n2 = 9 + 6 = 15
Length: N = N1 + N2 (the sum of all occurrences of operators and operands)
Length = N = N1 + N2 = 17 + 24 = 41
Volume: V = N log2(n)
Volume = V = 41 * log2(15) = 41 * 3.907 = 160.2
Program Difficulty: D = (n1/2) * (N2/n2) (reflects the effort required to understand, code, and maintain a given procedure)
Program Difficulty = D = (9/2) * (24/6) = 18
Language Level: L = V/D^2 (indicates how well a programmer uses the features of the language)
Language Level = L = V/D^2 = 160.2/(18*18) = 0.49
Effort: E = D*V (indicates a level of program complexity in units of the time that it takes to write, modify, or maintain a piece of code)
Effort = E = 18 * 160.2 = 2883.5
(g) Explain Recovery testing with examples.
Ans: Recovery testing is the activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. Recovery testing should not be confused with reliability testing, which tries to discover the point at which failure occurs. Some examples of recovery testing are:
1. While the application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.
2. While the application is receiving data from the network, unplug the cable, then plug it in again after some time, and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.
3. Restart the system while the browser holds a definite number of sessions, and after rebooting check that it is able to recover all of them.

Integration Testing is a level of the software testing process where individual units are combined and tested as a group.
The purpose of this level of testing is to expose faults in the interaction between integrated units.
Test drivers and test stubs are used to assist in Integration Testing.
Note: The definition of a unit is debatable and it could mean any of the following:
1.the smallest testable part of a software
2.a module which could consist of many of 1
3.a component which could consist of many of 2
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the
ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled
and Integration Testing is performed. For example, whether the cap fits into the body or not.
Integration Testing Techniques

Decomposition-Based Integration
- Decomposition is based on packaging partitions.
- The packaging basis is the functional decomposition tree.
- It is based on interfaces, i.e. inputs and outputs.
Big Bang Approach
All the units are compiled together and tested at once. This works well for small systems with well-defined components and interfaces.
Drawbacks: It is difficult to isolate defects, and very difficult to make sure that all the cases for integration testing are covered.
Top-Down Integration Testing: Top-down integration, as the term suggests, always starts at the top of the program hierarchy and travels towards its branches. This can be done in either a depth-first or breadth-first manner.
Bottom-Up Integration Testing: Bottom-up integration, as its name implies, starts at the lowest level in the program structure.
Sandwich/Hybrid is an approach to integration testing which is a combination of the top-down and bottom-up approaches.
Call Graph-Based Integration
Call graph-based integration is based upon messages (function calls) and not on the decomposition tree. The program is represented pictorially through a directed graph termed the call graph. This approach moves in the direction of structural testing.

Pair-wise Integration

The idea is to eliminate the stub/driver development effort. Instead, the actual code is used. One testing session is limited to testing a pair of units in the call graph. Thus, the number of sessions is equal to the number of edges in the call graph. This is not much of a reduction in sessions from either top-down or bottom-up integration, but it is a drastic reduction in stub/driver development.
Neighbourhood Integration
Each session tests the neighbourhood of one node, where the neighbourhood of a node is defined as the nodes that are one edge away from that node. The neighbourhood includes all the predecessor and successor nodes of a node.
Path-Based Integration
Path-based integration tests whether the behaviour of the system-level threads is correct. It is motivated by the overall system behaviour, rather than by the structure. It is a smooth preparation for system-level testing.
It is a smooth preparation for system level testing
Debugging
Debugging is the activity of locating and correcting errors.
Characteristics of bugs:
- The symptom and the cause may be geographically remote.
- The symptom may disappear when another error is corrected.
- The symptom may be caused by non-errors.
- A symptom caused by human error is difficult to trace.
- The symptom may be the result of a timing problem rather than a processing problem.
- It may be difficult to accurately reproduce the input conditions.
- The symptom may be intermittent.
Debugging Process: replicate the bug, understand the bug, locate the bug, fix the bug, and re-test the program.
Debugging Approaches: trial and error; backtracking; brute force (memory dumps and run-time traces are examined for clues to error causes); cause elimination (induction and deduction).

System Testing
System testing is testing conducted on the complete, integrated system. It is done after unit, component and integration testing, and evaluates system compliance with its requirements. It brings out defects that are not directly attributable to a module or an interface, but are based on issues related to the design and architecture of the whole product. According to Petschenik, the guidelines for choosing test cases during system testing are:
- Testing the system's capabilities is more important than testing its components.
- Testing the usual is more important than testing the exotic.
- In case of modifications, test old capabilities rather than new ones.
Attributes evaluated during system testing: usable, secure, compatible, dependable, documented.

System Test Cases

System test cases are written according to information collected from:
- detailed architecture/design documents
- module specifications
- the SRS
They are created after looking at component and integration test cases, and are based on user stories, customer discussions and observation of typical customer usage.
System Testing
Functional Testing - validates functional requirements
Performance Testing - validates non-functional requirements
Acceptance Testing - validates client expectations
Functional Testing
- Objective: assess whether the application does what it is supposed to do.
- Basis: the behavioral/functional specification.
- Test case: a sequence of ASFs (a thread).
Threads
We view system testing in terms of threads of system-level behavior. There are many possible views of a thread:
- a scenario of normal usage
- a system-level test case
- a stimulus/response pair
- behavior that results from a sequence of system-level inputs
- an interleaved sequence of port input and output events
- a sequence of transitions in a state machine description of a system
- an interleaved sequence of object messages and method executions
- a sequence of machine instructions
- a sequence of source instructions
- a sequence of atomic system functions
Performance Testing
Goal: try to violate the non-functional requirements; test how the system behaves when overloaded.
- Can bottlenecks be identified? (These are the first candidates for redesign in the next iteration.)
- Try unusual orders of execution: call a receive() before a send().
- Check the system's response to large volumes of data: if the system is supposed to handle 1000 items, try it with 1001 items.
Types of Performance Testing
Stress testing - stress the limits of the system
Security testing - try to violate security requirements
Volume testing - test what happens if large amounts of data are handled
Environmental testing - test tolerances for heat, humidity and motion
Configuration testing - test the various software and hardware configurations
Quality testing - test reliability, maintainability and availability
Compatibility testing - test backward compatibility with existing systems
Recovery testing - test the system's response to the presence of errors or loss of data
Timing testing - evaluate response times and the time to perform a function
Human factors testing - test with end users
Acceptance Testing
Goal: demonstrate that the system is ready for operational use.
- The choice of tests is made by the client; many tests can be taken from integration testing.
- The acceptance test is performed by the client, not by the developers.
Alpha test: the client uses the software in the developer's environment; the software is used in a controlled setting, with the developer always ready to fix bugs.
Beta test: conducted at the client's environment (the developer is not present); the software gets a realistic workout in the target environment.

Issues in Object-Oriented Software Testing

- The unit of testing
- Implications of encapsulation and composition: how to test encapsulation?
- Implications of inheritance: does the superclass need to know its subclasses?
- Implications of polymorphism: behavioral equivalence
(h) Explain End-to-End Testing.
Ans: End-to-end testing is a methodology used to test whether the flow of an application performs as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between the various system components and systems. This basically means exercising an entire "workflow", such as accepting an order from a customer and then processing that order all the way through fulfillment, accounting and shipping. However, by the end of system testing most test environments will exercise (or should exercise) an entire end-to-end test.
The entire process of end-to-end testing:

(k) Explain Object-oriented testing.

Ans: Object-oriented testing essentially means testing software developed using an object-oriented methodology. The process of testing OO software is more difficult than the traditional approach, since programs are not executed in a sequential manner. OO components can be combined in an arbitrary order; thus defining test cases becomes a search for the order of routines that will cause an error. A method has been proposed for evolutionary testing of classes in object-oriented software.
An object-oriented language provides an abstract way of thinking about a problem. The OOP language features of inheritance and polymorphism present new technical challenges to testers. Its main focus is on objects instead of functions. The various kinds of testing performed under OO testing are:
Unit testing: the smallest testable unit is the encapsulated class or object.
Component testing: to make sure individual subsystems work correctly.
Integration testing: to make sure subsystems work correctly together.
System testing: to verify that requirements are met.
Regression testing: to make sure previous functionality still works after new functionality is added.
OBJECT-ORIENTED TESTING METRICS
Testing metrics can be grouped into two categories: encapsulation and inheritance.
Encapsulation
Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states have to be tested.
Percent public and protected (PAP) - This number indicates the percentage of class attributes that are public, and thus the likelihood of side effects among classes.
Public access to data members (PAD) - This metric shows the number of classes that access other classes' attributes, and thus violations of encapsulation.
Inheritance
Number of root classes (NOR) - A count of distinct class hierarchies.
Fan-in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.
Number of children (NOC) and depth of the inheritance tree (DIT) - For each subclass, its superclass has to be re-tested.
The above metrics (and others) are different from those used in traditional software testing; however, the metrics collected from testing should be the same (i.e. number and type of errors, performance metrics, etc.).
What is Path Testing?
Path testing is a structural testing method based on the source code or algorithm and NOT based on the specifications. It can be applied at different levels of granularity.
Path Testing Assumptions:
- The specifications are accurate.
- The data is defined and accessed properly.
- There are no defects in the system other than those that affect control flow.
Path Testing Techniques:
Control Flow Graph (CFG) - The program is converted into a flow graph by representing the code as nodes, regions and edges.
Decision-to-decision path (DD-path) - The CFG can be broken into various decision-to-decision paths and then collapsed into individual nodes.
Independent (basis) paths - An independent path is a path through a DD-path graph which cannot be reproduced from other paths by other methods.
