
(2.5 Hours)                                                          [Total Marks: 75]

N.B.: (1) All questions are compulsory.
      (2) Make suitable assumptions wherever necessary and state the assumptions made.
      (3) Answers to the same question must be written together.
      (4) Numbers to the right indicate marks.
      (5) Draw neat labeled diagrams wherever necessary.

I. Answer any two of the following: (10 Marks)


a. State phases of Fundamental Test Process. Explain Planning and Control.

Ans.

Phases within the fundamental test process can be divided into the following steps:
o Planning and control
o Analysis and design
o Implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities
Though logically sequential, these activities may overlap, occur concurrently or even repeat.

Test planning has the following series of major tasks:


Determine the scope and risks and identify the objectives of testing
Determine the test approach
o (Techniques, test items, coverage, identifying and interfacing with testing teams,
testware)
Implement the test policy and/or test strategy
Determine the required test resources
o People, test environment, hardware
Schedule test analysis and design tasks, test implementation, execution and evaluation
Determine exit criteria
o e.g. percentage of code (LOC) of the software to be executed during testing
Major tasks of test control:
o Measure and analyze the results of reviews and testing
o Monitor and document progress, test coverage and exit criteria
o Provide information on testing
o Initiate corrective actions
o Make decisions: e.g. continue testing, stop testing, release software, retain it for
further improvement
(Students may also draw a diagram to show the sequence of the phases.)
b. What is Quality? Explain Quality Viewpoints for producing and buying software.
Ans.

Quality: Quality is the degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations.

Customer: Quality is fitness for use


Software Engineer: Quality is Conformance to specifications
Other beliefs:
o Meet customer standards (defined by general usage, or national or international
body)
o Meet and fulfill customer needs (business requirements)
o Meet customer expectations (more than requirement for loyalty)
o Meet anticipated /unanticipated future needs (understand future business needs)

Quality must be defined in measurable terms to use it as a reference


Some view-based definitions of quality are:
o Customer-based Definition of Quality (DOQ): Quality is fitness for use
o Manufacturing-based DOQ: Quality is conformance to specification
o Product-based DOQ: A quality product has at least one unique attribute suiting
customer needs, in addition to generic attributes
o Value-based DOQ: A quality product must provide value for the investment, as people
do not buy products, they buy attributes
o Transcendent Quality: The customer's perception of a quality product or service;
customer-centric quality that delights the owner


c. Write a note on Psychology of Testing


Ans.

Psychological factors influencing testing & its success:


o Clear objectives for testing
o Proper Roles and balance of Self-testing
o Independent Testing
o Clear and courteous communication
o Feedback on defects
o Independent Testing: Different mind-set of developer and tester
Developers look positively to solve any problems in the design / code /
software to meet specified needs
Testers / reviewers critically look at the product / work-product with the
intention of finding defects
o Several stages of reviews and testing are carried out throughout the lifecycle of the software;
these may be independent reviews / tests
o Degree of independence avoids author bias and is more effective in finding defects
and failures
o Levels of independence from lowest to highest:
Testing by the author/developer of the software
Testing by another member of the developer's team
Testing by a tester from another organizational group, such as an independent testing
team
Testing by a tester from a different organization or company, such as outsourced
testing or certification by an external agency

d. State Principles of software testing. Explain in detail any two.


Ans.

Principle 1: Testing shows presence of Defects


o Testing is checking for correctness or performance or functionality with an
intention of finding defects / errors/ bugs/ mistakes / faults / failures
Principle 2: Exhaustive Testing is impossible
o Testing everything is not feasible except for trivial cases.
o Instead of exhaustive testing, we use risks and priorities to focus testing efforts
Principle 3: Early Testing
o Testing activities should start as early as possible in SDLC and should be
focused on defined objectives
Principle 4: Defect Clustering
o A small number of modules contain most of the defects discovered during pre-release testing or show the most operational failures
o Phenomenon of Defect Clustering
Complex or tricky code
Changing software
Knock-on defects
o Testers focus on hot spots for bug fixing
Principle 5: Pesticide Paradox
o Repeated tests and test cases eventually no longer find new bugs
Test cases should be reviewed and revised
New tests should be added to exercise different parts of the software to potentially find
more defects
o Debugging removes defects
Debugging: the process of finding, analyzing and removing the cause of
failure in software
o After fixing bugs in the hot spots, focus on the next set of risks
Principle 6: Testing is context dependent
o Software Testing is testing a Software System to detect and avoid potential
risks
o Levels of risks differ for every software
o E.g. safety critical software tested differently than an e-commerce website
Principle 7: Absence of Errors fallacy
o Finding and fixing defects does not help if the system built is unusable and
does not fulfill the user's needs and expectations

(students may explain any two of the above principles)

II. Answer any two of the following: (10 Marks)


a. Explain Lifecycle Testing for Iterative model & RAD model.

Ans.

Iterative Model:
Requirements that change at a previous stage of development may be taken care of
in the next iteration
Feedback loop at every stage in development
Regression testing becomes increasingly important with each successive iteration
Test planning should allow more testing at each subsequent delivery phase
More practical than the waterfall model
Limitations:
There are many cycles of the waterfall model
Fixed-price projects have a problem of estimation
Product architecture and design become fragile due to the many iterations
RAD:
Usable software created at fast speed with the involvement of the user for every
development stage (functionality)
Miniature form of spiral model where requirements are added in small chunks and
refined with every iteration
Components or functions are time-boxed, delivered and then assembled into working
prototype
Early validation of technical risks and rapid response to changing customer
requirements
Limitation:
Refactoring is the main constraint
Involves huge cycles of retesting and regression testing
Efforts of integration are huge

b. What is Integration Testing? List and explain the approaches of Integration Testing.
Ans.

Definition: Testing performed to expose defects in the interfaces and in the interactions
between integrated components or systems
Integration testing starts when two components have been individually tested (i.e. after
performing unit testing) and ends when all component interfaces have been tested
Approaches/Methodologies/Strategies of Integration Testing:
o Approaches of integration depend upon how the system is integrated
There are two ways to integrate modules:
o Non Incremental Software Integration
Big Bang Integration approach
o Incremental Software Integration
Top-down Software Integration
Bottom-up Software Integration
Sandwich Integration
Non Incremental Approach
Big Bang Approach:
o All or most of the developed modules are coupled together to form a
complete software system or major part of the system and then integration
testing is performed
o Effective in saving time
o If test cases and results are not recorded then the integration process
becomes difficult and the testing team may not achieve goal of integration
testing
o Effective for small systems but not the large ones
Advantages:
o Simple and convenient for small systems
o To check the data flow from one module to another
o Communication between various modules is checked
o Needs less planning, less formal development process
o No stub/driver is required to be designed and coded
o No Cost incurred in creating stubs and drivers
o Effective in terms of time as it is fast
Disadvantages:
o Since all modules are tested at once, high risk modules are not isolated and
tested on priority
o Integration testing can only commence when all modules are ready
o Fault localization is difficult
o Probability to miss interface faults is high
o If a defect is found in the integrated code then there can be a lot of
throwaway code


Incremental Software Integration:


Top-Down Approach:
Top-level of application is tested first and then modules are tested till the
component level
Involves testing the topmost component interface with other components in the
order of top-down navigation until all components are covered
It takes help of stubs for testing
Only Highest level modules are tested in isolation
After a module is tested, the modules directly called by it are merged and the
combination is tested, until all subsystems are integrated and finally tested as a
whole
Modules subordinate to the main control module are incorporated into the system
in either depth-first or breadth-first manner
Bottom Up Approach
Opposite of top-down integration
Components for new product development become available in reverse order,
starting from bottom
Drivers are used for testing
An approach to integration testing where the lowest-level components are tested first,
then used to facilitate the testing of higher-level components
Process repeated until the component at top hierarchy is tested
Module driver required to feed test case input to the interface of the module being
tested
Sandwich or Hybrid Approach:
The Sandwich / Hybrid approach is a combination of the Top-down and Bottom-up
approaches of integration testing, as defined by Myers (1979)
Combines the advantages of both the approaches
Top-down approach starts from middle layer and goes downward
Bottom-up testing starts from middle layer and goes upward to the top layer
Sub teams or individuals conduct bottom-up testing of the parts or modules built by
them before releasing them
Integration team assembles them together for top-down testing
(students can give brief explanation of the above approaches)
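As a small illustration of the stubs and drivers mentioned in the top-down and bottom-up approaches above (a hedged sketch in Python; the function names calculate_bill and get_discount are assumed purely for illustration, not part of the syllabus):

# Driver: temporary code that calls the module under test, used in bottom-up
# integration when the real higher-level caller is not yet ready.
def test_driver():
    result = calculate_bill(items=[100, 250], customer_type="regular")
    assert result == 350 - get_discount("regular", 350)

# Stub: a skeletal replacement for a lower-level module called by the module
# under test, used in top-down integration when that module is not yet ready.
def get_discount(customer_type, amount):
    # The stub returns a fixed, predictable value instead of real discount logic
    return 0

# Module under test
def calculate_bill(items, customer_type):
    total = sum(items)
    return total - get_discount(customer_type, total)

test_driver()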
c. Explain Triggers for Maintenance Testing.
Ans.

Maintenance testing is triggered by modifications, migration or retirement of the system


Maintenance testing for Modifications include:
o Planned enhancements
o Corrective and emergency changes
o Changes of environment like upgrades of OS or databases
o Patches to newly exposed or vulnerable parts of OS
Maintenance testing for Migrations include:
o Operational testing of the new environment
o Testing of the changed software
Maintenance testing for Retirement of System include:
o Testing of data migration or archiving
Modification is major part of Maintenance Testing
Two types of modifications include:
o Planned Modifications may be
Perfective Modifications
Adaptive Modifications
Corrective Planned Modification
o Ad-hoc Corrective Modifications: concerned with defects requiring an
immediate solution
A structured approach to testing may be impossible
Perform a risk analysis of the system and specify a set of standard tests

d. Explain Conformance directed Testing Levels.

Ans.

Levels of testing may be categorized as:


o Fault-Directed Testing
Component / Unit Testing
Integration Testing
o Conformance Directed Testing
System Testing
Acceptance Testing
System Testing: System testing is the final level of the software testing process on behalf of the
developers, where the complete system / software is tested
Testing carried out to analyse the behaviour of the whole system according to the
requirement specification is known as system testing
The computer-based system is checked for validity and whether its objectives are met
System testing should investigate both functional and non-functional requirements of the
system
System test may be based on risks, SRS, Business processes, use cases or other high level
descriptions of system behaviour, interaction with operating systems and system resources
System testing includes:
o Performance Testing
Load testing
Stress testing
o Security Testing
Authentication testing
Authorization testing
o Volume Testing
o Sanity Testing
o Auxiliary Testing
o Recovery Testing
(Students may give Brief explanation of above )
Acceptance Testing:
Application undergoes a series of rigorous tests to ensure that the program passes the
requirement of the client and has no bugs that may cause serious problems later on
Before the software is released to public, there are testing stages that ensure its
appropriateness
User Acceptance testing
Operational Acceptance Testing
Contract and regulation Acceptance testing
Business Acceptance Testing
Alpha and Beta / Gamma Testing
(Students may give Brief explanation of above )

III. Answer any two of the following: (10 Marks)


a. List down the phases of Formal Reviews. Explain any two phases in detail.

Ans

Phases of a Formal Review: Typical format consists of 6 steps:


o Planning
o Kick-off Meeting
o Preparation
o Review Meeting
o Rework
o Follow-up
Planning:
o The review process begins with a request for review by the author to the moderator (or
inspector)
o The moderator is responsible for scheduling (dates, time, place and invitations) of the review
o Project planning should allow time for review and rework
o For formal reviews, e.g. inspection, the moderator performs the entry check and defines
formal exit criteria
o The entry check ensures that the reviewers' time is not wasted on a document that is not ready
for review
o The moderator, in co-ordination with the author, decides the composition of the review team
o Team: 4 to 6 participants including the moderator and the author
o The moderator assigns roles to the reviewers
o Different roles are assigned to participants to improve the effectiveness of the review


Kick-off Meeting:
o Optional step in review procedure
o Goal: to get all participants on the same wavelength regarding the document under
review
o Result of entry check and exit criteria are discussed
o Kick-off has a positive effect on the motivation of the reviewers
o Introduction, document relation, role assignments, checking rate, pages to be checked,
process changes, any other discussion are highlights of the Kick-off Meeting
o Documents under review distributed to participants
Preparation:
o Participants to work individually on the documents under review using related
documents, procedures, rules and checklist
o Participants to identify questions, defects and comments as per their role
o Issues to be logged
o Spelling mistakes recorded but not discussed
o Check lists based on perspectives of developer, tester or maintainer be used to make
reviews efficient and effective
o Annotated document to be handed to the author after meeting
o A critical success factor for preparation is the checking rate (number of pages checked per hour),
usually ranging from 5-10 pages/hr, not exceeding the exit criterion
Review Meeting:
o Meeting consists of:
o Logging phase:
Issues identified in preparation phase are mentioned page by page, reviewer by
reviewer and logged by author or scribe(during inspection)
No real discussion allowed in this phase
Focus is on logging as many defects in a certain timeframe
Moderator keeps up the logging rate (well-led and disciplined: 1-2 defects/min)
For informal reviews there may be no separate logging phase and may start with a
discussion
o Discussion phase:
Detailed discussion on issue or defect and its impact
Participant proposes the severity of the defect
Moderator paces this part of the meeting
Meeting ends with decision on the document under review based on the formal exit
criteria
If defects per page exceed the average the document must be reviewed again
Average defects per page is quality indicator and ensures thoroughness of review
process
o Rework:
Author will improve the document under review
Not every defect found leads to rework
The author decides whether a defect should be fixed and should accordingly record the
decision on each issue
Changes made should be identified in the follow-up (Track Changes)
o Follow-up
Moderator to ensure that satisfactory actions are taken on all logged defects,
process improvement suggestions and change requests
Moderator /participants check for compliance with exit criteria
Moderator keeps track of all measures and stores for future analysis
(students may explain any two above phases)
b. Explain the characteristics and objectives of: i) Walkthrough ii) Inspection iii) Audit
iv) Technical Review v) Peer Review
Ans.
i) Walkthrough
o Meeting led by the author; a scribe may be present
o Scenarios and dry runs may be used to validate the content
o To present the document to the stakeholders to gather more information regarding
the document
o To evaluate the contents of the document and for knowledge transfer
o To examine and discuss the validity of the proposed solution and the viability of
alternatives, establishing consensus
ii) Inspection
o Usually led by a trained moderator (not the author)
o Uses defined roles during the process
o Involves peers to examine the product
o Rules and checklists are used during the preparation phase
o Defects found are documented in a logging list or issue log
o Remove defects efficiently, as early as possible
o Create common understanding by exchanging information among the inspection
participants
iii) Audit
o Independent assessment of the process or product to ensure that the product as well as
the processes used to build it meet predefined criteria. Auditors may not be experts in
the subject matter but may have experience in the field.
o The audit report comprises:
o Major and minor non-conformances
o Observations
o Achievements or good practices
iv) Technical Review
o A technical review is a discussion meeting that focuses on achieving consensus about the
technical content of a document
o Less formal as compared to an inspection
o Assesses the value of technical concepts and alternatives in the product and project
environment
v) Peer Review
o Conducted frequently at various stages in the SDLC; could be done by a fellow developer,
and is also called a desk review. Checklists and processes may be defined for tracking
peer reviews.
o Types of peer reviews:
o Online peer review (peer-to-peer review)
o Offline peer review (peer review)
c. Explain the factors for successful performance of Reviews.

Ans.

o Find a champion
o Pick things that really count
o Explicitly plan and track review activities
o Train the participants
o Manage people issues
o Follow the rules but keep it simple
o Continuously improve the process and tools
o Report results
o Just do it!

(Students may elaborate the same)

d. List down the types of Static Analysis by Tools. Explain them.


Ans.

Most of the Static Analysis tools focus on software code


Tools typically used by developers before or during component and integration testing and
by designers during s/w modeling
Tools :
o can not only show structural attributes such as depth of nesting or cyclomatic
complexity and check against coding standards
o but can also give graphic depictions of control flow, data relationships and the number of
distinct paths from one line of code to another
Static Analysis tools are important because:
o All languages are prone to recognizable fault modes
o These faults may escape conventional scrutiny by dynamic testing and may show up
only when the commercial product is in use
o All programming languages have problem areas and programmers cannot be relied upon
to protect against them
o Programming languages cannot be standardized
Coding Standards:
Coding standards consist of:
o A set of programming rules (e.g. rules for dynamic memory allocation for arrays)
o Naming conventions (e.g. class names to begin with 'C')
o GUI standards
o Layout specifications adopted

Code Metrics:
Code analysis uses Structural attributes of the code such as
o Comment frequency
o Depth of nesting
o Cyclomatic number
o Number of LOC

These metrics help to design alternatives when redesigning a piece of code


Complexity metrics identify high risk and complex areas
Cyclomatic Complexity Metric (CCM) is based on the number of decisions in the program

Code Structure:
Control Flow Structure:
o Addresses sequence in which instructions are executed
o Reflects the iterations and loops in a programs design
o Control Flow Analysis identifies unreachable code or dead code
o Many code metrics depend on the Control Flow Structure
Data Flow Structure:
o Follows the trail of data items as they are accessed and modified by the code
o Number of times transactions are applied to the data item
o Data flow measures show how data act as they are transformed by the program
o Defects like undefined symbols and references to variables with undefined values may be
found
Data Structure:
o Refers to the organization of the data itself, independent of the program
o Data arranged as lists, queues, stacks or other well-defined structures have well-defined algorithms for creating, modifying or deleting them
o Helps design test cases to show the correctness of a program

(students may list and explain any two points of each type)
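As an assumed illustration of what a static analysis tool can report without executing the code (the snippet deliberately contains the two kinds of defects described above; the function names and findings are hypothetical, not part of the prescribed answer):

def grade(score):
    if score >= 40:
        return "Pass"
    else:
        return "Fail"
    print("done")        # control flow analysis: unreachable (dead) code after return

def total_marks(marks):
    result = 0
    for m in marks:
        result = result + m
    return reslt          # data flow analysis: 'reslt' is undefined; 'result' is defined but never used here

A typical static analysis tool (or a linter built into a compiler/interpreter) would flag the unreachable print statement and the reference to an undefined name, and could also report code metrics such as cyclomatic complexity (2 for grade) and depth of nesting.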

IV. Answer any two of the following: (10 Marks)


a. Explain, how is Equivalence Class Partitioning different from BVA?
Ans.

Equivalence Partitioning:
A Black-box test design technique in which test cases are designed to execute
representatives from equivalence partitions
Equivalence Partition : A portion of an input or output domain for which the behavior of a
component or system is assumed to be the same, based on the specification.
Equivalence Partitioning (EP) is applied at all levels of testing
The technique divides (i.e. partitions) a set of test conditions into groups or sets that can be
considered the same (i.e. the system handles them equivalently) --- Equivalence Partitioning
Equivalence partitions are also known as equivalence classes.
Equivalence testing considers that, for each partition, a single test case associated with that
partition is sufficient to evaluate the behaviour of the system. More test cases for a partition
may not find new defects or faults in the behaviour of the system. Equivalence classes are
formed by considering each condition specified on an input value. The input values are then
partitioned into a valid EC and one or more invalid ECs.
Boundary Value Analysis:
Boundary Value Testing (BVT) or BVA is a black-box testing technique that refines
equivalence class testing. BVT is a functional testing technique that partitions the
inputs and outputs into ordered sets with distinct boundaries. Values in the same set are
treated in a similar manner. Test values are selected such that they are inside, on and just outside
the boundaries. Test cases are designed to focus near the limits of valid ranges or boundaries
of equivalence classes. Software testing effort is focused at the extreme boundaries of the
equivalence classes. Errors are easily tracked because when input values change from valid to
invalid at the extreme ends, there is a higher probability of errors occurring.
The purpose of BVT is to concentrate the testing effort on error-prone areas by accurately
pinpointing the boundaries of conditions (e.g. a programmer may interpret a requirement of the
SRS as >=, <=, >, <, or = in the program)
(students may give any 5 points of difference from above giving examples of each type)
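A hedged illustration of the difference (the requirement of an age field accepting 18-60 and the chosen values are assumed, purely for illustration): equivalence partitioning picks one representative per partition, while BVA picks values on and just around each boundary.

# Assumed requirement: valid age is 18 to 60 inclusive
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence Partitioning: one representative value per partition
ep_tests = {
    "invalid: below range": 10,     # partition age < 18
    "valid: within range": 35,      # partition 18 <= age <= 60
    "invalid: above range": 75,     # partition age > 60
}

# Boundary Value Analysis: values just below, on and just above each boundary
bva_tests = [17, 18, 19, 59, 60, 61]

for name, value in ep_tests.items():
    print(name, value, is_valid_age(value))
for value in bva_tests:
    print("boundary value", value, is_valid_age(value))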


b. Explain Experience based testing and its types.


Ans.

Experience-based testing: non-systematic techniques based on the tester's knowledge,
experience, imagination and intuition
o Good bug-hunter can be creative at finding some elusive errors
Error Guessing:
o Used as a complement to other, more formal techniques
o The success of error guessing depends on the skill of the tester and their experience as a
tester or in working with the system
o No rules for error guessing
o E.g. division by zero, empty fields, numeric/alphabetic data, or any assumption "that could
never happen"
o Structured approach to error-guessing includes
listing possible defects or failures based on testers experience
available history of defects or failures
common knowledge of system failure
Exploratory Testing:
o It is about exploring the software for what it does / does not do, etc.
o Used to complement other formal techniques
o Hands-on approach where testers are involved in minimum planning and maximum
test execution
o Test design and execution are performed in parallel without formally documenting
test conditions, test cases or test scripts
o E.g. tester may choose BVA and test important boundaries without documenting all
cases
o Test logging, documenting the key aspects of testing, defects found or further
possible testing used for reporting
o Key aspect of exploratory testing is Learning
o Tester decides what to test next at every stage
o Used when specifications are limited and poorly stated or to establish confidence in
the software

c. Explain State Transition based testing with an example.


Ans.

State transition tables are useful tool to capture certain types of system requirements
and also document internal system design
State transition tables document the events that are processed by the system as well as the
system's responses
Easy to use and provide information that is complete and systematic
It consists of 4 columns:
o Current State
o Event
o Action
o Next State
The procedure to build the state transition table is as follows:
For each state of the system
For each event/trigger of the system
Find the corresponding (Action, Next State, if any)
Document (State, Event, Action, Next State)
The benefits of using a state transition table in testing are:
o It enumerates all possible state transition combinations, not just the valid ones but
also the invalid ones
o Testing of critical, high-risk systems, including aviation, financial or medical
applications, requires testing every state transition pair, including the invalid pairs
o Knowledge of valid and invalid state transitions provides the necessary and complete
information that makes decisions more stable and consistent in a given environment
Example:
Digital watch:
The software responds to input requests to change the display mode for a time display device.
The display mode can be set to one of the four values:
Two corresponding to displaying either time or date.
The other two when altering either time or date.
Four possible input requests:
Change mode (CM)
Reset (R)
Time Set (TS)

Date Set (DS)


Change Mode (CM):
Activation of this shall cause the display mode to move between Display Time (T) and
Display Date (D)
Reset (R):
If display mode is set to T or D, then a reset shall cause the display mode to be set to Alter
time (AT) or Alter Date (AD) modes.
Time Set (TS):
Activation of this shall cause the display mode to return to T from AT.
Date Set (DS):
Activation of this shall cause the display mode to return to D from AD.
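Based on the description above, the state transition table can be sketched as follows (a minimal sketch; reset is assumed to take T to AT and D to AD respectively, and any event not listed for a state is assumed to leave the state unchanged):

# State transition table for the digital watch: (current state, event) -> next state
# States: T = display time, D = display date, AT = alter time, AD = alter date
# Events: CM = change mode, R = reset, TS = time set, DS = date set
transitions = {
    ("T", "CM"): "D",     # change mode toggles between time and date display
    ("D", "CM"): "T",
    ("T", "R"):  "AT",    # reset from a display mode enters the corresponding alter mode
    ("D", "R"):  "AD",
    ("AT", "TS"): "T",    # time set returns from alter time to display time
    ("AD", "DS"): "D",    # date set returns from alter date to display date
}

def next_state(state, event):
    # Invalid (state, event) pairs are assumed to cause no change
    return transitions.get((state, event), state)

assert next_state("T", "CM") == "D"
assert next_state("AT", "TS") == "T"
assert next_state("T", "DS") == "T"   # invalid pair: no change (assumption)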

d. Define Cyclomatic Complexity:


Draw a Program Graph to Graphically represent the given Code
Dim i as Integer
Dim n as Integer
i=1
n=1
while(i<=5)
{
n=n*i
i=i+1
}
output n
Also find the Cyclomatic Complexity of the Program Graph.

Ans.

The metric (Cyclomatic Complexity) helps determine the number of independent paths, which
gives the number of test cases that have to be designed:
o The cyclomatic complexity of a strongly connected graph is given by the formulae:
o Method 1: V(G) = e - n + 2 (where e represents the number of edges and n
represents the number of nodes)
o Method 2: V(G) = P + 1 (where P = number of predicate nodes with out-degree = 2)
o Method 3: V(G) = number of enclosed regions + 1
Numbering the statements as nodes of the program graph:
1   Dim i as Integer
2   Dim n as Integer
3   i=1
4   n=1
5   while(i<=5)
6   {
7   n=n*i
8   i=i+1
9   }
10  output n
[Program graph: nodes 1 to 10 joined in sequence; node 5 (the while decision) has edges to
node 6 (condition true) and node 10 (condition false), and node 9 has a back edge to node 5,
enclosing one region R1. Total: 10 nodes and 10 edges.]
Cyclomatic complexity:
Method 1: V(G) = e - n + 2, where e = edges and n = nodes
              = 10 - 10 + 2
              = 2
Method 2: V(G) = P + 1, where P = number of predicate nodes
              = 1 + 1
              = 2
Method 3: V(G) = number of enclosed regions + 1
              = 1 + 1
              = 2
Cyclomatic Complexity = 2
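The edge and node counts above can be cross-checked with a small sketch (the edge list below encodes the program graph described above, with node 5 as the decision node; this is an illustration, not a required part of the answer):

# Program graph of the given code: nodes 1..10, node 5 is the while decision
edges = [(1, 2), (2, 3), (3, 4), (4, 5),
         (5, 6), (6, 7), (7, 8), (8, 9),
         (9, 5),               # back edge of the while loop
         (5, 10)]              # exit edge taken when i <= 5 is false

nodes = {n for edge in edges for n in edge}
e, n = len(edges), len(nodes)

print("V(G) = e - n + 2 =", e - n + 2)        # 10 - 10 + 2 = 2
predicate_nodes = 1                            # only node 5 has out-degree 2
print("V(G) = P + 1 =", predicate_nodes + 1)   # 2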

V. Answer any two of the following: (10 Marks)


a. What is Risk Based Testing? How is product risk different from project risk?

Ans.

Risk Based Testing: Testing oriented towards exploring and providing information about
product risks.
Risk-based testing involves:
o Mitigation: testing that provides a chance to reduce defects
o Contingency: testing to identify work-arounds for defects and reduce the impact of risk
o Measuring and removing risks in critical areas
o Risk analysis to proactively remove or prevent defects
o Risks classified as:
o Project Risks
o Product Risks
Project Risk: A risk related to management and control of the test project
o Late delivery of test items to test team
o Availability issues of test environment or lack of test effort
o Indirect issues of budget, administrative support, changing business needs etc.

Product Risk: A risk directly related to the test object


o Possibility that system or software might fail to satisfy some reasonable customer or user
or stakeholder expectation
o Product Risk is a Quality Risk
o Unsatisfactory software may
o Omit key functions
o Be unreliable and frequently fail to behave normally
o Fail and cause financial damages
o Have quality issues with respect to security, reliability, usability, maintainability or
performance
b. Explain incident report lifecycle with the help of a diagram.
Ans.

Incident reports managed from discovery to resolution. All incident reports move through a
series of transition states:
1. Incident reported
2. Peer testers or reviewers review the report
3. Review successful, report opened
4. Project team decides whether to repair the defect or not
5. If defect is to be repaired, programmer is assigned to repair it
6. Programmer completes repair and sends to tester for confirmation testing
7. If confirmation testing fails it is reopened and then reassigned
8. Once the tester confirms a good repair, the incident report is closed and no other work remains
9. In any state like rejected, deferred or closed, no further work is required prior to the end of
the project and the incident is not assigned to an owner
10. Owner of the incident is responsible for transitioning the incident to subsequent
allowed state

[Incident report lifecycle diagram: states are reported, reviewed, opened, assigned, fixed,
closed, reopened, rejected and deferred; transition labels include approved for repair, declined
for repair, rewritten / bad report, repaired, confirmed to be repaired, failed confirmation test,
not a problem, gathered new information and problem returned.]
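The allowed transitions implied by the steps above can be summarized in a small sketch (state names follow the diagram; the mapping is an interpretation of the lifecycle described, not an exhaustive workflow):

# Incident report lifecycle: state -> states it may move to next
allowed_transitions = {
    "reported":  ["reviewed"],
    "reviewed":  ["opened", "rejected"],      # bad reports may be rejected or rewritten
    "opened":    ["assigned", "deferred", "rejected"],
    "assigned":  ["fixed"],
    "fixed":     ["closed", "reopened"],      # closed once the confirmation test passes
    "reopened":  ["assigned"],                # a failed confirmation test goes back to a programmer
    "rejected":  ["opened"],                  # may be reopened if new information is gathered
    "deferred":  ["opened"],
    "closed":    [],                          # terminal state, no further work
}

def can_move(current, target):
    return target in allowed_transitions.get(current, [])

assert can_move("fixed", "reopened")
assert not can_move("closed", "assigned")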


c. What is test control and explain configuration management?


Ans.

Test Control: A test management task that deals with developing and applying a set of
corrective actions to get the project on track when monitoring shows a deviation from what was
planned
Configuration Management:
o A discipline applying technical and administrative direction and surveillance to
identify and document the functional and physical characteristics of a configuration item
o Control changes to those characteristics
o Record and report change processing and implementation status
o Verify compliance with specified requirements
o Determine items that make up the software or system
o E.g. Source code, test scripts, software, hardware, data, development and test
documentation
o Configuration management is making sure that these items are managed throughout
entire project or product lifecycle
o Testers can manage testware and test results
o Configuration management allows mapping of what is being tested, so that defects are
version-controlled and mapped to the right component
o Advantages:
o Avoids wrong test execution for the wrong software
o Avoids receiving builds that cannot be uninstalled
o Avoids reporting defects against the wrong version of code, or code not existing in the test
environment

d. Explain test organization. What are the factors affecting test efforts?
Ans.

o Testing is a complex and distinct activity


o Accounts for substantial overall project budget
o Test Organization is about:
1. Organizing testers and testing
2. Estimation, planning and strategizing test effort
3. Test progress Monitoring, Test Reporting and Test Control
4. Configuration Management
5. Product and project Risks
6. Management of Incidents
During estimation of the test effort, a variety of factors need to be considered, which include:
1. Product factors:
o Product documentation
o Non-functional quality characteristics like usability, reliability, security etc.
o Complexity
Difficulty of comprehending and handling the problem e.g. avionics
Innovative technologies used
Need for multiple test configurations
Stringent security rules and regulations
Geographical distribution of the team
o Size of the Product and Project Team
2. Process factors:
o Availability of test tools and debugging tools reduces the effort of testing and development
o Life cycle influences the process: the V-model is more fragile, while the incremental model
relies on heavy regression testing cost
o Process maturity and test process maturity: plan adequately
o Time pressure: no excuse to take unwarranted risks
o People factor: most important, as people have the skills and teams do the testing
o Test results appropriate at an early stage

VI. Answer any two of the following: (10 Marks)


a. What are the benefits and risks of using tools? Explain.
Ans.

o Tools perform repetitive tasks like regression testing, checking code standards, creating
logs more effectively
o Tools have consistent performance for repetitive tasks like debugging, entering test inputs,
generating tests (from requirements)
o Tools have no subjective bias
o Tools represent information more effectively using charts, graphs etc.
o Potential Benefits of using Tool support:
o Reduction of repetitive work
o Greater consistency and repeatability


o Objective assessment
o Ease of access to information about tests or testing
o Specific tools provide effective point solutions
o Tools are associated with potential risks:
o Unrealistic expectations from the tools
o Underestimating time, cost and effort for initial introduction of tool
o Underestimating time, cost and effort to achieve significant benefits from the tool
o Underestimating the effort required to maintain test assets generated by the tool
o Over-reliance on the tool
o Limited skill available to use the tool
b. Explain dynamic analysis tools with performance testing.
Ans.

Tools that support testing carried out on a system while it is executing or running,
or after a system is released and is in live operation:
o Dynamic Analysis Tools
o Performance-Testing, Load-Testing and Stress-testing Tools
o Monitoring Tools
Dynamic Analysis Tools:
o Code should be in execution or running
o Tools used for analysis of the software during execution/operation
o E.g. performance of the computer after installation of new software, website
response time
o These tools may be used by developers in component testing and integration testing,
e.g. testing middleware, security testing, checking dead links in a website
Features of Dynamic Analysis tools include support for:
o Detecting memory leaks
o Identifying pointer arithmetic errors such as null pointers
o Identifying time dependencies
Features of Performance-Testing Tools include support for:
o Generating a load on the system to be tested
o Measure characteristics like response times, throughput or mean time between
failures
o Measuring time of specific transaction as load on the system varies
o Producing graphs or charts of response over time
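A minimal sketch of what a performance-testing tool does internally: generate a load with concurrent virtual users and measure response times (the URL, number of users and use of the Python standard library are assumptions for illustration, not a recommendation of any specific tool):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"      # assumed system under test
VIRTUAL_USERS = 10               # size of the generated load

def one_request(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.time() - start   # response time of this transaction

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    times = list(pool.map(one_request, range(VIRTUAL_USERS)))

print("average response time:", sum(times) / len(times))
print("maximum response time:", max(times))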

c. State features of test design tools and data preparation tools.


Ans.

Tools that support various testing activities:


Test Design Tools
Test Data Preparation Tools
Test Design Tools features:
Help build test cases, inputs and expected results if automated oracle is available
If requirements are provided to requirement management tools / test management tools / CASE tools,
then input fields with valid ranges of input data may be identified
The tool can distinguish between valid and invalid inputs (to generate error messages)
If expected inputs as well as results are known then test design tools may build good
test cases
Tools may help select various possible combination of techniques (using orthogonal
array) to ensure adequate testing
Some tools have a partial oracle to find whether inputs are accepted/rejected, but not the exact
result/output
Screen Scraper Tool: a structured template or test frame, to test all input fields and
other GUI interfaces like buttons, lists etc., but it may not be able to handle GUI
events; it checks functionality only
Some design tools are bundled with a coverage tool to test branch coverage and path coverage,
and to calculate inputs to increase coverage, as the presence of an oracle is less likely
Test Data Preparation tools include features that include:
Extracts selected data records from files or databases
Massage data records for data protection (not identified by people but identified by
machines)
Enable records to be sorted or arranged in different order
Generate new records populated with pseudo-random data, or data set up with a
suitable guideline e.g. MIS data
Construct large number of records from a template e.g. set of records for volume
testing
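A small sketch of the last two features, generating pseudo-random records from a template, e.g. for volume testing (the field names, value ranges and output file name are assumed purely for illustration):

import csv
import random
import string

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length)).title()

# Template-driven generation of a large number of records
def generate_records(count):
    for i in range(1, count + 1):
        yield {
            "id": i,
            "name": random_name(),
            "age": random.randint(18, 60),           # pseudo-random but within a guideline
            "balance": round(random.uniform(0, 10000), 2),
        }

with open("volume_test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "age", "balance"])
    writer.writeheader()
    writer.writerows(generate_records(1000))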

d. List down the tool support for management of testing. Explain any two.
Ans.

Features of Test Management Tools include support for:


Management of tests: keeping track of tests planned, written, run, passed or failed
Scheduling of tests to be executed by a test execution tool
Management of testing activities: time for test design, budget, schedules, test
execution etc.
Interfaces to other tools like
o Test execution tools, requirement management tools, configuration
management tools, incident management tools
Requirement Management Tools:
May find defects in requirement document by checking for ambiguous or contradicting
words like may, might, certain level, as required as per decision etc.
Storing information of requirement attributes
Checking consistency of requirements
Identifying undefined, missing or to be defined later requirements
Prioritize requirements for testing
Trace requirements to tests and tests to requirements, functions or features
Traceability through levels
Incident Management Tools:
Storing information about attributes of incidents (for severity)
Storing attachments (screen shots)
Prioritizing incidents
Assigning actions to people (fix, confirmation tests)
Status (open, rejected, duplicate, deferred, closed etc.)
Reporting of statistics/ metrics about the incident (average time to open, number of
incidents with their status, total number raised, opened and closed)
Configuration Management Tools :
Storing information about versions and builds of the software and test ware
Traceability between software and test ware, versions or variants
Keep track of which version belongs to which configuration (operating system,
libraries, browsers)
Build and release management
Baselining (all configuration items that make up a specific release)

(Students may explain any two of the above management tools)


VII. Answer any three of the following: (15 Marks)
a. Define the terms Risk, Error, Defect, Failure and Quality.
Ans

Risk: A factor that could result in future negative consequences, usually expressed as impact
and likelihood.
o Possibility of negative or undesirable outcome
o Likelihood of a risk in the future: between 0% and 100%
o Likelihood in the past: 0% or 100%, i.e. it has either not occurred or has become an outcome or issue
Error (mistake) : a human action producing an incorrect result
Defect (bug, flaw, fault): a flaw in a component /system due to which it fails to perform its
required function
Failure: deviation of component / system from its expected delivery, service or result
Quality: the degree to which a component, system or process meets specified requirements
and/or user/customer needs and expectations.


b. Explain V- Model of testing? Explain its advantages.


Ans

V-Model of Testing:
The V-model was developed to address problems of the traditional waterfall approach
The V-model is based on the Early Testing principle, with testing integrated into each phase of the life cycle
The V-model shows that testing is not just an execution-based activity
Various testing activities are performed in parallel with development activities
[V-model diagram: the left (development) side proceeds from Need / Wish / Policy / Law through
User Requirements, System Requirements, Global Design and Detailed Design to Implementation;
the right (testing) side rises from Component Test Execution through Integration Test Execution,
System Test Execution and Acceptance Test Execution to the Operational System. Preparation of
the acceptance test, system test and integration test runs in parallel with the corresponding
development phases on the left side.]

Different Levels of testing identified for different work products


V-model uses 4 test levels:
o Component Testing: functioning of software components (e.g. modules,
programs, objects, classes etc.)
o Integration Testing: interfaces and interactions of different products or projects
o System Testing: whole system/product
o Acceptance Testing: validation w.r.t. requirements, business processes and
needs
Advantages of V-model:
o Allows early testing.
o Can be used with all the development models
o Allows static and dynamic testing
o Cost of failure can be minimized
o Software Release can be well planned.
c. What is Static Testing? Explain the importance of Static Testing.
Ans.

Static Testing: Testing of a component or system at the specification or implementation level,
without execution of that software, e.g. reviews of code
Software work products are examined manually or with a set of tools, but not executed
Not all work products can be subjected to execution
Dynamic and static testing are complementary methods as they tend to identify different
defects effectively and efficiently
In addition to finding defects the objectives of static testing are :
o Informational
o Communicational and
o Educational
Reviews represent project milestones and support to establish test basis for the product
Review feedback helps testers focus their testing and are a means of customer/user
communication with developers
Early feedback on quality issues can be established
E.g. early reviews of user requirements, as against finding the defects later during user acceptance testing
Allows for process improvements, avoiding similar errors being repeated in future
Static tests contribute to an increased awareness of quality

d. Justify 100% Condition Coverage implies 100% Decision Coverage giving an example.

Ans.
Condition coverage and decision coverage are two popular forms of coverage.
In condition coverage, the individual components of a compound condition (joined by binary
logical operators such as && and ||) are evaluated for every possible combination of true and false.
e.g.
Condition coverage = (number of Boolean operand values exercised / (2 * number of basic
conditions)) * 100%
In decision coverage, only the decision outcomes, i.e. true and false, are considered, without
considering the combinations of conditions leading to those outcomes.
Decision coverage = (number of decision outcomes executed / total number of decision outcomes) * 100%
Thus we can see that when all conditions are evaluated, the decision outcomes of true and false will be
executed at least once. Therefore 100% condition coverage ensures 100% decision coverage.
(Students are required to give an example using a program or control flow graph to explain the
above.)
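A minimal illustration (the program and test values are assumed, not part of the original answer): with the three test values below, each basic condition takes both a true and a false value, and in the process the decision itself is also executed as both true and false.

def in_range(x):
    # decision with two basic conditions
    if x >= 0 and x <= 100:
        return "accepted"
    return "rejected"

# Test values chosen so that each basic condition is evaluated as both true and false:
#   x = 50  -> (x >= 0) True,  (x <= 100) True   -> decision True
#   x = 150 -> (x >= 0) True,  (x <= 100) False  -> decision False
#   x = -5  -> (x >= 0) False                    -> decision False
for x in (50, 150, -5):
    print(x, in_range(x))

# Each basic condition has taken both truth values (100% condition coverage),
# and the decision has taken both outcomes (100% decision coverage).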
e. Explain various types of test strategies. How do you pick the best type of strategy for
success?
Ans.

The test approach or strategy is a powerful factor in the success of test planning and the
estimation of cost and effort
It is under the control of the testers and test leader
Major types of test strategies:
o Analytical: risk-based or requirement-based - preventive approach
o Model-based: mathematical models - preventive approach
o Methodical: checklist-based, preplanned (e.g. ISO 9126), systematic - preventive approach
o Process- or standard-compliant: e.g. the IEEE 829 standard - preventive / reactive
o Dynamic: exploratory, during execution - reactive approach
o Consultative or directed - preventive / reactive
o Regression-averse: automated tests - preventive

The choice of strategies (preventive or reactive, blended or borrowed) is influenced by factors like:

Risks: if regression is an important risk - regression-averse strategy
Skills: if the skill of the testers is important - standard-compliant strategy
Objectives: if the objectives of stakeholders are important - dynamic strategy
Regulations: if the regulations of the organization are important - methodical test strategy
Product: if well-specified requirements are important - requirement-based analytical strategy
Business: if a model of the legacy system can be used - model-based strategy
f. Describe any two tools in static testing.
Ans.

Review Process Support Tools:


When people are from different geographical locations, tool support becomes important
Information may be kept in Excel sheets or Word documents,
but tools designed for the purpose work better:
o Tools measure checking rate, recommend checking rate, flag exceptions
o Tools can be tailored for a specific review process or type
o Common reference of review process or processes in different situations
o Storing and sorting review comments
o Communicating comments to relevant people
o Coordinating online reviews
o Keeping track of comments, defects found and providing statistical information
about them
o Providing traceability between comments and documents
o Repository of rules, procedures, checklists, entry and exit criteria used in reviews
o Monitoring review status
o Collecting metrics and reports on key factors
Static analysis tools are more likely to be used by developers for development and
component testing
Source code is input data for the tool
E.g. Compiler, interpreter, parsers also offer static analysis features
Static analysis tools may also be used for static analysis of various documents (e.g. SRS)
and websites
Static analysis tools help to understand structure of the code and enforce standards for
coding
Coding Standards:
Coding standards consist of:
o A set of programming rules (e.g. rules for dynamic memory allocation for arrays)
o Naming conventions (e.g. class names to begin with 'C')

o GUI standards
o Layout specifications adopted
Code Metrics:
Code analysis uses Structural attributes of the code such as
o Comment frequency
o Depth of nesting
o Cyclomatic number
o Number of LOC
These metrics help to design alternatives when redesigning a piece of code
Complexity metrics identify high risk and complex areas
Cyclomatic Complexity Metric (CCM) is based on the number of decisions in the program
Code Structure:
Control Flow Structure:
o Addresses sequence in which instructions are executed
o Reflects the iterations and loops in a programs design
o Control Flow Analysis identifies unreachable code or dead code
o Many code metrics depend on the Control Flow Structure
Data Flow Structure:
o Follows the trail of data items as they are accessed and modified by the code
o Number of times transactions are applied to the data item
o Data flow measures show how data act as they are transformed by the program
o Defects like undefined symbols and references to variables with undefined values may be
found
Data Structure:
o Refers to the organization of the data itself, independent of the program
o Data arranged as lists, queues, stacks or other well-defined structures have well-defined algorithms for creating, modifying or deleting them
o Helps design test cases to show the correctness of a program
(students may list and explain any two of the above in details)

~~~~~~~~~~~~~~~~~~~
