I.
Ans.
Phases within the fundamental test process can be divided into the following steps:
o Planning and control
o Analysis and design
o Implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities
Though logically sequential, these activities may overlap, occur concurrently or even
repeat.
Quality: Quality is the degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations.
10
II.
Ans.
Iterative Model:
Requirements that change at an earlier stage of development can be taken care of
in a subsequent iteration
Feedback loop at every stage of development
Regression testing becomes increasingly important with each successive iteration
Testing plan allows for more testing at each subsequent delivery phase
More practical than the waterfall model
Limitation:
There are many cycles of the waterfall model
Fixed-price projects have problems with estimation
Product architecture and design become fragile due to many iterations
RAD:
Usable software created at fast speed with the involvement of the user for every
development stage (functionality)
Miniature form of spiral model where requirements are added in small chunks and
refined with every iteration
Components or functions are time-boxed, delivered and then assembled into working
prototype
Early validation of technical risks and rapid response to changing customer
requirements
Limitation:
Refactoring is the main constraint
Involves huge cycles of retesting and regression testing
Efforts of integration are huge
b. What is Integration Testing? List and explain the approaches of Integration Testing.
Ans.
Definition: Testing performed to expose defects in the interfaces and in the interactions
between integrated components or systems
Integration testing starts once two components have been individually tested (i.e. after
performing unit testing) and ends when all component interfaces have been tested
Approaches/Methodologies/Strategies of Integration Testing:
o Approaches of integration depend upon how the system is integrated
There are two ways to integrate modules:
o Non Incremental Software Integration
Big Bang Integration approach
o Incremental Software Integration
Top-down Software Integration
Bottom-up Software Integration
Sandwich Integration
Non Incremental Approach
Big Bang Approach:
o All or most of the developed modules are coupled together to form a
complete software system or major part of the system and then integration
testing is performed
o Effective in saving time
o If test cases and results are not recorded then the integration process
becomes difficult and the testing team may not achieve the goal of integration
testing
o Effective for small systems but not the large ones
Advantages:
o Simple and convenient for small systems
o To check the data flow from one module to another
o Communication between various modules is checked
o Needs less planning, less formal development process
o No stub/driver is required to be designed and coded
o No Cost incurred in creating stubs and drivers
o Effective in terms of time as it is fast
Disadvantages:
o Since all modules are tested at once, high risk modules are not isolated and
tested on priority
o Integration testing can only commence when all modules are ready
o Fault localization is difficult
o Probability to miss interface faults is high
o If a defect is found in the integrated code then there can be a lot of
throwaway code
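A hedged illustration of how the incremental approaches above work in practice: the sketch below
(in Python; the module and function names and the 10% tax rate are assumptions made purely for
illustration) shows a test driver exercising the interface between a high-level billing module and a
stub that stands in for a lower-level tax module that is not yet ready.

# Hypothetical top-down integration sketch: the high-level 'billing' function is
# integrated and tested while its lower-level dependency (a tax service) is still
# under development and is therefore replaced by a stub.

def tax_service_stub(amount):
    """Stub standing in for the real tax-calculation module (fixed 10% rate)."""
    return round(amount * 0.10, 2)

def billing_total(amount, tax_fn):
    """High-level module under integration test; tax_fn is the interface under test."""
    return round(amount + tax_fn(amount), 2)

# Driver: exercises the billing/tax interface with the stub in place.
if __name__ == "__main__":
    assert billing_total(100.0, tax_service_stub) == 110.0
    print("billing/tax interface behaves as expected with the stub")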
10
III.
Ans.
10
Kick-off Meeting:
o Optional step in review procedure
o Goal: to get all participants on the same wavelength regarding the document under
review
o Results of the entry check and the exit criteria are discussed
o Kick-off has a positive effect on the motivation of the reviewers
o Introduction, document relation, role assignments, checking rate, pages to be checked,
process changes, any other discussion are highlights of the Kick-off Meeting
o Documents under review distributed to participants
Preparation:
o Participants to work individually on the documents under review using related
documents, procedures, rules and checklist
o Participants to identify questions, defects and comments as per their role
o Issues to be logged
o Spelling mistakes recorded but not discussed
o Checklists based on the perspectives of developer, tester or maintainer may be used to make
reviews efficient and effective
o Annotated document to be handed to the author after meeting
o A critical success factor for preparation is the checking rate (pages checked per hour),
ranging from 5-10 pages/hr and not exceeding the rate set as an exit criterion
Review Meeting:
o Meeting consists of:
o Logging phase:
Issues identified in the preparation phase are mentioned page by page, reviewer by
reviewer and logged by the author or a scribe (during inspection)
No real discussion allowed in this phase
Focus is on logging as many defects as possible within a certain timeframe
Moderator keeps up the logging rate (well-led & disciplined: 1-2 defects/min)
For informal reviews there may be no separate logging phase and the meeting may start with a
discussion
o Discussion phase:
Detailed discussion on issue or defect and its impact
Participant proposes the severity of the defect
Moderator paces this part of the meeting
Meeting ends with decision on the document under review based on the formal exit
criteria
If defects per page exceed the average the document must be reviewed again
Average defects per page is quality indicator and ensures thoroughness of review
process
o Rework:
Author will improve the document under review
Not every defect found leads to rework
The author decides whether each defect should be fixed and reports the outcome of that
consideration accordingly
Changes made should be identified in the follow-up (Track Changes)
o Follow-up
Moderator to ensure that satisfactory actions are taken on all logged defects,
process improvement suggestions and change requests
Moderator /participants check for compliance with exit criteria
Moderator keeps track of all measures and stores for future analysis
(students may explain any two above phases)
b. Explain the characteristics and objectives of: i) Walkthrough ii) Inspection iii) Audit
iv) Technical review v) Peer review
Ans.
i) Walkthrough
o Meeting led by Author, Scribe may be present
o Scenarios and dry runs may be used to validate the content
o To present the document to the stakeholder to gather more information regarding
the document
o To evaluate the contents of the document and for knowledge transfer
o To examine and discuss the validity of proposed solution and viability of
alternatives, establishing consensus
ii) Inspection
o Usually led by a trained moderator (not the author)
o Uses defined roles during the process
o Involves peers to examine the product
o Rules and checklist are used during the preparation phase
iii) Audit
o Independent assessment of the process or product to ensure that the product as well as the
processes used to build it meet predefined criteria. Auditors may not be experts in
the subject matter but may have experience in the field.
o The Audit report comprises:
o Major and Minor Non-conformances
o Observations
o Achievements or good practices
iv) Technical review
o Technical review is a discussion meeting that focuses on achieving consensus about the
technical content of a document
o Less formal as compared to Inspection
o Assesses the value of technical concepts and alternatives in the product and project
environment
v) Peer Review
o Conducted frequently at various stages in the SDLC, could be done by a fellow developer,
and is also called a Desk review. Checklists and processes may be defined for tracking
peer reviews.
o Types of Peer Reviews:
o Online Peer Review (Peer-to-Review)
o Offline Peer Review (Peer Review)
c. Explain the factors for successful performance of Reviews.
Ans.
o Find a Champion
o Pick things that really count
o Explicitly plan and track review activities
o Train participants
o Manage people issues
o Follow the rules but keep it simple
o Continuously improve process and tools
o Report results
o Just do it!
Code Metrics:
Code analysis uses Structural attributes of the code such as
o Comment frequency
o Depth of nesting
o Cyclomatic number
o Number of LOC
Code Structure:
Control Flow Structure:
o Addresses sequence in which instructions are executed
o Reflects the iterations and loops in a program's design
o Control Flow Analysis identifies unreachable code or dead code
o Many code metrics depend on the Control Flow Structure
Data Flow Structure:
o Follows the trail of data items as they are accessed and modified by the code
o Number of times transactions applied to the data item
o Data Flow measures show how data act as they are transformed by the program
o Defects like undefined symbol, referencing variables, undefined values may be
found
Data Structure:
o Refers to the organization of the data itself, independent of the program
o Data arranged as lists, queues, stacks or other well-defined structures have well-defined algorithms for creating, modifying or deleting them
o Helps design test cases to show correctness of a program
(students may list and explain any two points of each type)
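As a minimal sketch of how two of the structural attributes above might be measured, the Python
fragment below computes lines of code and comment frequency for a source file; the file path and the
treatment of blank lines are assumptions made for illustration only.

# Hypothetical sketch: compute two simple code metrics for a Python source file.
def simple_code_metrics(path):
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f]
    loc = sum(1 for line in lines if line)                # non-blank lines
    comments = sum(1 for line in lines if line.startswith("#"))
    frequency = comments / loc if loc else 0.0            # comments per line of code
    return {"LOC": loc, "comments": comments, "comment_frequency": frequency}

# Example usage (the path is illustrative):
# print(simple_code_metrics("module_under_test.py"))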
Equivalence Partitioning:
A Black-box test design technique in which test cases are designed to execute
representatives from equivalence partitions
Equivalence Partition : A portion of an input or output domain for which the behavior of a
component or system is assumed to be the same, based on the specification.
Equivalence Partitioning (EP) is applied at all levels of testing
The technique divides (i.e. partitions) a set of test conditions into groups or sets that can be
considered the same (i.e. the system handles them equivalently), hence the name Equivalence Partitioning
Equivalent partitions are also known as equivalence classes.
Equivalence testing considers that for each partition, a single test case associated with that
partition is sufficient to evaluate the behaviour of the system. More test cases for a partition
may not find new defects or faults in the behaviour of the system. Equivalence classes are
formed by considering each condition specified on an input value. The input values are then
partitioned into a valid EC and one or more invalid ECs.
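For example, for a hypothetical input field that accepts ages from 18 to 60, the sketch below derives
one valid and two invalid equivalence classes and picks a single representative value from each; the
range and the function name are assumptions, not part of the syllabus.

# Hypothetical equivalence partitioning for an 'age' input accepting 18..60.
VALID_MIN, VALID_MAX = 18, 60

def is_valid_age(age):
    return VALID_MIN <= age <= VALID_MAX

# One representative value per partition is assumed to be sufficient:
partitions = {
    "invalid_below": 10,   # any value below 18
    "valid":         35,   # any value from 18 to 60
    "invalid_above": 75,   # any value above 60
}

for name, value in partitions.items():
    expected = (name == "valid")
    assert is_valid_age(value) == expected, f"partition {name} failed"
print("one test case per equivalence class executed")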
Boundary Value Analysis:
Boundary Value Testing (BVT) or BVA is a black box testing technique that refines
equivalence class testing. BVT is a functional testing technique that partitions the
inputs and outputs into ordered sets with distinct boundaries. Values within the same set are
treated in a similar manner. Test values are selected such that they are inside, on and just outside
the boundaries. Test cases are designed to focus near the limits of valid ranges or boundaries
of equivalence classes. The software testing effort is focused at the extreme boundaries of the
equivalence classes. Errors are easily tracked because, when input values change from valid to
invalid at the extreme ends, there is a higher probability of occurrence of errors.
The purpose of BVT is to concentrate the testing effort on error-prone areas by accurately pinpointing
the boundaries of conditions (e.g. a programmer may interpret a requirement of the SRS as >=, <=,
>, < or = in the program)
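Continuing the same hypothetical 18-60 age range, boundary value analysis would select values on and
just around each boundary, as in the sketch below (the range is again an assumption).

# Hypothetical boundary value analysis for the same 18..60 'age' range:
# test values just below, on and just above each boundary.
VALID_MIN, VALID_MAX = 18, 60

def is_valid_age(age):
    return VALID_MIN <= age <= VALID_MAX

def boundary_values(low, high):
    return [low - 1, low, low + 1, high - 1, high, high + 1]

for value in boundary_values(VALID_MIN, VALID_MAX):
    print(value, "->", "valid" if is_valid_age(value) else "invalid")
# Expected: 17 invalid, 18 valid, 19 valid, 59 valid, 60 valid, 61 invalid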
(students may give any 5 points of difference from above giving examples of each type)
10
State transition tables are a useful tool to capture certain types of system requirements
and also to document internal system design
State transition tables document events that are processed by the system as well as
the system's responses
Easy to use and provide information that is complete and systematic
It consists of 4 columns:
o Current State
o Event
o Action
o Next State
The procedure to build the state transition table is as follows:
For each state of the system
For each event/trigger of the system
Find the corresponding (Action, Next State, if any)
Document (State, Event, Action, Next State)
The benefits of using state transition table in testing are:
o It enumerates all possible state transition combinations, not just the valid ones but
also the invalid ones
o Testing of critical, high risk systems including aviation, financial or medical
applications requires testing every state transition pair including the invalid pairs
o Knowledge of valid and invalid state transitions provides necessary and complete
information that makes decisions more stable and consistent in a given environment
Example:
Digital watch:
The software responds to input requests to change the display mode for a time display device.
The display mode can be set to one of the four values:
Two corresponding to displaying either time or date.
The other two correspond to altering either time or date.
Four possible input requests:
Change mode (CM)
Reset (R)
Time Set (TS)
Date Set (DS)
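A hedged sketch of how the state transition table for this watch could be encoded and used to drive
tests; the exact transitions shown are one plausible reconstruction for illustration, and Date Set (DS)
is assumed to be the fourth input request.

# Hypothetical state transition table for the digital watch example.
# States: S1 = display time, S2 = display date, S3 = altering time, S4 = altering date.
# Events: CM = change mode, R = reset, TS = time set, DS = date set.
TRANSITIONS = {
    ("S1", "CM"): ("S2", "show date"),
    ("S2", "CM"): ("S1", "show time"),
    ("S1", "R"):  ("S3", "enter time-set mode"),
    ("S2", "R"):  ("S4", "enter date-set mode"),
    ("S3", "TS"): ("S1", "store new time"),
    ("S4", "DS"): ("S2", "store new date"),
}

def next_state(state, event):
    """Return (next state, action); an invalid state/event pair raises KeyError."""
    return TRANSITIONS[(state, event)]

print(next_state("S1", "CM"))        # valid pair: ('S2', 'show date')
try:
    next_state("S3", "CM")           # invalid pair, useful as a negative test case
except KeyError:
    print("invalid state/event pair rejected")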
Ans.
The metric (Cyclomatic Complexity) helps determine the number of independent paths that
provides the number of test cases that have to be designed:
o The Cyclomatic Complexity of a strongly connected graph is provided by the formula:
o Method 1: V(G) = e-n+2 (where e represents the number of edges, n
represents the number of nodes)
o Method 2: V(G) = P+1 (where P = no. of predicate nodes with out-degree = 2)
o Method 3: V(G) = number of enclosed regions +1
1  Dim i as Integer
2  Dim n as Integer
3  i = 1
4  n = 1
5  while (i <= 5)
6  {
7      n = n * i
8      i = i + 1
9  }
10 output n
[Control flow graph of the above code: 10 nodes, 10 edges and one enclosed region R1]
Cyclomatic complexity :
Method 1: V(G) = e - n + 2, where e = edges and n = nodes
= 10 - 10 + 2
= 2
Method 2: V(G) = P + 1, where P = Predicate nodes
=1+1
= 2
Method 3 : V(G) = no. of enclosed regions + 1
=1+1
=2
Cyclomatic Complexity = 2
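A hedged Python rendering of the pseudocode above: with V(G) = 2 there are two independent paths, so
two test cases are enough for basis-path coverage. The loop limit is made a parameter here (an
assumption, since the original hard-codes 5) so that the path skipping the loop can also be exercised.

# Python rendering of the pseudocode, with the loop limit parameterised.
def compute(limit=5):
    i, n = 1, 1
    while i <= limit:        # the single decision node, so V(G) = 1 + 1 = 2
        n = n * i
        i = i + 1
    return n

# Two test cases, one per independent path:
assert compute(0) == 1       # loop body never entered
assert compute(5) == 120     # loop body entered: 1*2*3*4*5
print("both basis paths exercised")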
V.
Ans.
Risk Based Testing: Testing oriented towards exploring and providing information about
product risks.
Risk based testing Involves:
o Mitigation: testing provides a chance to reduce defects
o Contingency: testing to identify work-arounds for defects and reduce the impact of risk
o Measuring and removing risks in critical areas
o Risk analysis to proactively remove or prevent defects
o Risks classified as:
o Project Risks
o Product Risks
Project Risk: A risk related to management and control of the test project
o Late delivery of test items to test team
o Availability issues of test environment or lack of test effort
o Indirect issues of budget, administrative support, changing business needs etc.
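A minimal sketch of the risk-analysis step described above: each product risk is scored by likelihood
and impact and the highest-scoring areas are tested first. The 1-5 scales and the risk items are
assumptions made for illustration.

# Hypothetical risk-based prioritisation: risk level = likelihood x impact (1-5 scales).
risks = [
    {"item": "payment calculation", "likelihood": 4, "impact": 5},
    {"item": "report layout",       "likelihood": 2, "impact": 2},
    {"item": "login security",      "likelihood": 3, "impact": 5},
]

for r in risks:
    r["risk_level"] = r["likelihood"] * r["impact"]

# Test the highest-risk areas first:
for r in sorted(risks, key=lambda r: r["risk_level"], reverse=True):
    print(r["item"], "->", r["risk_level"])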
Incident reports are managed from discovery to resolution. All incident reports move through a
series of transition states:
1. Incident reported
2. Peer testers or reviewers review the report
3. Review successful, report opened
4. Project team decides whether to repair the defect or not
5. If defect is to be repaired, programmer is assigned to repair it
6. Programmer completes repair and sends to tester for confirmation testing
7. If confirmation testing fails it is reopened and then reassigned
8. Once the tester confirms a good repair, the incident report is closed - no other work is required
9. In any state other than rejected, deferred or closed, further work is required on the incident
prior to ending the project, and the incident is assigned to an owner
10. The owner of the incident is responsible for transitioning the incident to the subsequent
allowed state
[State transition diagram of the incident report lifecycle: states include reported, reviewed, opened,
assigned, fixed/repaired, reopened, closed, rejected and deferred; transitions include approved for
repair, declined for repair, bad report, rewritten, failed confirmation test, not a problem, gathered
new information, confirmed to be repaired and problem returned]
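A hedged sketch of how the allowed report-state transitions above might be encoded, for example in a
defect-tracking tool; the state names follow the lifecycle described above, while the exact set of
allowed moves is an illustrative assumption.

# Hypothetical encoding of the incident report lifecycle: each state maps to the
# states it may legally move to.
ALLOWED = {
    "reported": {"reviewed"},
    "reviewed": {"opened", "rejected"},           # bad report -> rejected
    "opened":   {"assigned", "deferred", "rejected"},
    "assigned": {"fixed"},
    "fixed":    {"closed", "reopened"},           # confirmation test passes or fails
    "reopened": {"assigned"},
    "deferred": {"opened"},                       # e.g. gathered new information
    "rejected": {"opened"},                       # e.g. problem returned
    "closed":   set(),                            # no further work
}

def transition(current, target):
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

state = "reported"
for nxt in ["reviewed", "opened", "assigned", "fixed", "closed"]:
    state = transition(state, nxt)
print("final state:", state)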
10
Test Control: A test management task that deals with developing and applying a set of
corrective actions to get the project on track when monitoring shows a deviation from what was
planned
Configuration Management:
o A discipline applying technical and administrative direction and surveillance to
identify and document the functional and physical characteristics of a configuration item
o Control changes to those characteristics
o Records and reports change processing and implementation status
o Verifies compliance with specified requirements
o Determine items that make up the software or system
o E.g. Source code, test scripts, software, hardware, data, development and test
documentation
o Configuration management is making sure that these items are managed throughout
entire project or product lifecycle
o Testers can manage test ware and test results
o Configuration management allows testers to map what is being tested, so that defects are
version controlled & mapped to the component
o Advantages:
o Avoids wrong test execution for wrong software
o Avoids receiving builds that cannot be uninstalled
o Avoids reporting defects against the wrong version of code, or against code not existing in the test
environment
d. Explain test organization. What are the factors affecting test efforts?
Ans.
o Tools perform repetitive tasks like regression testing, checking code standards, creating
logs more effectively
o Tools have consistent performance for repetitive tasks like debugging, entering test inputs,
generating tests (from requirements)
o Tools have no subjective bias
o Tools represent information more effectively using charts, graphs etc.
o Potential Benefits of using Tool support:
o Reduction of repetitive work
o Greater consistency and repeatability
10
o Objective assessment
o Ease of access to information about tests or testing
o Specific tools provide effective point solutions
o Tools are associated with potential risks:
o Unrealistic expectations from the tools
o Underestimating time, cost and effort for initial introduction of tool
o Underestimating time, cost and effort to achieve significant benefits from the tool
o Underestimating the effort required to maintain test assets generated by the tool
o Over-reliance on the tool
o Limited skill available to use the tool
b. Explain dynamic analysis tools with performance testing.
Ans.
Tools that support testing that can be carried out on a system while it is executing or running,
or after a system is released and is in live operation
o Dynamic Analysis Tools
o Performance-Testing, Load-Testing and Stress-testing Tools
o Monitoring Tools
Dynamic Analysis Tools:
o Code should be in execution or running
o Tools used for analysis of the software during execution/operation
o E.g. performance of the computer after installation of new software, website
response time
o These tools may be used by developers in component testing and integration testing
e.g. testing middle ware, security testing, checking dead links in a website
Features of Dynamic Analysis tools include support for:
o Detecting memory leaks
o Identifying pointer arithmetic errors such as null pointers
o Identifying time dependencies
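A minimal sketch of the kind of memory-growth check such a tool performs, using Python's standard
tracemalloc module; the deliberately leaky function is a contrived example, not a real component.

# Contrived memory-leak detection sketch using tracemalloc.
import tracemalloc

_cache = []                               # grows forever -> simulated leak

def leaky_operation():
    _cache.append(bytearray(100_000))     # about 100 KB retained per call

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(50):
    leaky_operation()
after = tracemalloc.take_snapshot()

# Report the lines with the largest allocation growth between the two snapshots.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)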
Features of Performance-Testing Tools include support for:
o Generating a load on the system to be tested
o Measuring characteristics like response times, throughput or mean time between
failures
o Measuring the time of specific transactions as the load on the system varies
o Producing graphs or charts of response over time
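A hedged sketch of what a performance-testing tool does at its core: generate concurrent load and
measure response times and throughput. The simulated transaction, the 20 virtual users and the 200
requests are assumptions made purely for illustration.

# Minimal load-generation sketch: run concurrent "transactions" and measure timings.
import time, statistics
from concurrent.futures import ThreadPoolExecutor

def transaction():
    start = time.perf_counter()
    time.sleep(0.05)                      # stands in for a real request to the system
    return time.perf_counter() - start

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:      # 20 virtual users
    times = list(pool.map(lambda _: transaction(), range(200)))
wall = time.perf_counter() - wall_start

print("mean response time:", round(statistics.mean(times), 4), "s")
print("95th percentile   :", round(sorted(times)[int(len(times) * 0.95)], 4), "s")
print("throughput        :", round(len(times) / wall, 1), "transactions/s")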
d. List down the tool support for test management. Explain any two.
Ans.
Risk: A factor that could result in future negative consequences, usually expressed as impact
and likelihood.
o Possibility of negative or undesirable outcome
o Likelihood of risk in future: 0% to 100%
o Likelihood in past: 0% to 100% or has become outcome or issue
Error (mistake) : a human action producing an incorrect result
Defect (bug, flaw, fault): a flaw in a component /system due to which it fails to perform its
required function
Failure: deviation of component / system from its expected delivery, service or result
Quality: the degree to which a component, system or process meets specified requirements
and/or user/customer needs and expectations.
15
V-Model of Testing:
The V-Model was developed to address the problems of the traditional waterfall approach
The V-model is based on the Early Testing principle; testing is integrated into each phase of the life cycle
The V-model shows that testing is not just an execution-based activity
Various testing activities are performed in parallel with development activities
[V-model diagram: development phases (Need/Wish/Policy/Law, User Requirements, System Requirements,
Global Design, Detailed Design, Implementation) on the left arm; preparation and execution of the
Acceptance Test, System Test and Integration Test on the right arm, leading to the Operational System]
d. Justify that 100% Condition Coverage implies 100% Decision Coverage, giving an example.
Conditional coverage and relational coverage are two popular forms of coverage.
Ans. In conditional coverage, the individual components of binary logical operators (&&, ||) are
evaluated in every possible combination of true and false
e.g.
Conditional coverage = (no. of true and false values taken by all basic conditions / (2 * no. of basic
conditions)) * 100%
In Decision Coverage, the decision outcomes, i.e. true and false, are considered without considering the
combinations of conditions leading to such outcomes.
Decision coverage = (no. of decision outcomes executed / total no. of decision outcomes) * 100%
Thus we can see that when evaluating all combinations of conditions, the decision outcomes true and
false will each be executed at least once. Therefore 100% condition coverage ensures 100% decision coverage.
(Students are required to give example using a program or control flow graph to explain the
above)
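One possible example of the kind asked for above, assuming a single decision made of two basic
conditions; under the definition of conditional coverage used here (every true/false combination of
the basic conditions), the four cases below also force the decision to both outcomes.

# Decision with two basic conditions: (a > 0) and (b > 0).
def both_positive(a, b):
    if a > 0 and b > 0:                   # the decision under test
        return True
    return False

# All four true/false combinations of the two basic conditions:
cases = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
outcomes = {both_positive(a, b) for a, b in cases}

# The decision takes both outcomes, so 100% decision coverage is also achieved.
assert outcomes == {True, False}
print("decision outcomes exercised:", outcomes)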
e. Explain the various types of test strategies. How do you pick the best type of strategy for
success?
Ans.
The test approach or strategy is a powerful factor in the success of test planning and the estimation
of cost and effort
It is under the control of the testers and the test leader
Major types of test strategies:
o Analytical: risk-based or requirement-based - Preventive approach
o Model-based: mathematical models - Preventive approach
o Methodical: checklist-based, preplanned, ISO 9126, systematic - Preventive approach
o Process- or standard-compliant: IEEE 829 standard - Preventive/reactive
o Dynamic: exploratory, during execution - Reactive approach
o Consultative or directed - Preventive/reactive
o Regression-averse: automated tests - Preventive
o GUI standards
o Layout specifications adopted
Code Metrics:
Code analysis uses Structural attributes of the code such as
o Comment frequency
o Depth of nesting
o Cyclomatic number
o Number of LOC
These metrics help to design alternatives when redesigning a piece of code
Complexity metrics identify high risk and complex areas
Cyclomatic Complexity Metric (CCM) is based on the number of decisions in the program
Code Structure:
Control Flow Structure:
o Addresses sequence in which instructions are executed
o Reflects the iterations and loops in a programs design
o Control Flow Analysis identifies unreachable code or dead code
o Many code metrics depend on the Control Flow Structure
Data Flow Structure:
o Follows the trail of data items as they are accessed and modified by the code
o Number of times transactions applied to the data item
o Data Flow measures show how data act as they are transformed by the program
o Defects like undefined symbol, referencing variables, undefined values may be
found
Data Structure:
o Refers to the organization of the data itself, independent of the program
o Data arranged as lists, queues, stacks or other well-defined structures have well-defined algorithms for creating, modifying or deleting them
o Helps design test cases to show correctness of a program
(students may list and explain any two of the above in details)
~~~~~~~~~~~~~~~~~~~