
Quality & Configuration Management

Testing
 Software testing is the process of executing a program or application with the intent of finding software bugs.

 It is the activity of checking whether the actual results match the expected results, and of ensuring that the software system is defect free.

 Software testing also helps to identify errors, gaps, or missing requirements relative to the actual requirements.

 It can be done manually or using automated tools.

 Static testing is performed during the verification process.

 Dynamic testing is performed during the validation process.
Testing Techniques

 Black-box testing

 Knowing the specified function that a product has been designed to perform, tests are conducted to see whether that function is fully operational and error free

 Includes tests that are conducted at the software interface

 Not concerned with the internal logical structure of the software
Testing Techniques contd..

 White-box testing

 Knowing the internal workings of a product, tests are conducted to ensure that all internal operations are performed according to specifications and all internal components have been exercised

 Involves tests that concentrate on close examination of procedural detail

 Logical paths through the software are tested

 Test cases exercise specific sets of conditions and loops
White-box Testing
 Uses the control structure part of component-level design to derive the test cases

 These test cases

 Guarantee that all independent paths within a module have been exercised at least once

 Exercise all logical decisions on their true and false sides

 Execute all loops at their boundaries and within their operational bounds

 Exercise internal data structures to ensure their validity
Basis Path Testing

 A white-box testing technique proposed by Tom McCabe

 Enables the test case designer to derive a logical complexity measure of a procedural design

 Uses this measure as a guide for defining a basis set of execution paths

 Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing
Flow Graph Notation

 A circle in a graph represents a node, which stands for a sequence of one or more procedural statements

 A node containing a simple conditional expression is referred to as a predicate node

 Each simple condition in a compound conditional expression (i.e., one containing one or more Boolean operators such as and, or) is represented by a separate predicate node

 A predicate node has two edges leading out from it (True and False)
Flow Graph Notation contd..

 An edge, or link, is an arrow representing flow of control in a specific direction

 An edge must start and terminate at a node

 An edge does not intersect or cross over another edge

 Areas bounded by a set of edges and nodes are called regions

 When counting regions, include the area outside the graph as a region, too
Flow Graph Example

[Figure: a flow chart and its equivalent flow graph with nodes 0 through 11; the flow graph has four regions, R1-R4]
Independent Program Paths

 Defined as a path through the program, from the start node to the end node, that introduces at least one new set of processing statements or a new condition (i.e., new nodes)

 Must move along at least one edge that has not been traversed before by a previous path

 Basis set for the flow graph on the previous slide:

 Path 1: 0-1-11

 Path 2: 0-1-2-3-4-5-10-1-11

 Path 3: 0-1-2-3-6-8-9-10-1-11

 Path 4: 0-1-2-3-6-7-9-10-1-11

 The number of paths in the basis set is determined by the cyclomatic complexity
Cyclomatic Complexity

 Provides a quantitative measure of the logical complexity of a program

 Defines the number of independent paths in the basis set

 Provides an upper bound for the number of tests that must be conducted to ensure all statements have been executed at least once

 Can be computed three ways:

 The number of regions

 V(G) = E – N + 2, where E is the number of edges and N is the number of nodes in graph G

 V(G) = P + 1, where P is the number of predicate nodes in the flow graph G

 This yields the following results for the example flow graph:

 Number of regions = 4

 V(G) = 14 edges – 12 nodes + 2 = 4

 V(G) = 3 predicate nodes + 1 = 4
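As a minimal illustration (this function is invented for illustration, not taken from the slides), counting the predicate nodes of a small C function gives its cyclomatic complexity directly via V(G) = P + 1:

    /* Count how many of 1..n are multiples of 3 or 5.
     * Predicate nodes: the loop condition plus the two simple
     * conditions in the compound if, so P = 3 and V(G) = 3 + 1 = 4.
     * A basis set for this function therefore has four paths. */
    int count_multiples(int n)
    {
        int count = 0;
        for (int i = 1; i <= n; i++) {        /* predicate 1 */
            if (i % 3 == 0 || i % 5 == 0)     /* predicates 2 and 3 */
                count++;
        }
        return count;
    }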
Deriving the Basis Set and Test Cases

1) Using the design or code as a foundation, draw a corresponding flow graph

2) Determine the cyclomatic complexity of the resultant flow graph

3) Determine a basis set of linearly independent paths

4) Prepare test cases that will force execution of each path in the basis set
A Second Flow Graph Example

1  int functionY(void)
2  {
3     int x = 0;
4     int y = 19;
5  A: x++;
6     if (x > 999)
7        goto D;
8     if (x % 11 == 0)
9        goto B;
10    else goto A;
11 B: if (x % y == 0)
12       goto C;
13    else goto A;
14 C: printf("%d\n", x);
15    goto A;
16 D: printf("End of list\n");
17    return 0;
18 }

[Figure: flow graph for functionY, with one node per statement at lines 3 through 17]
A Sample Function to Diagram and Analyze

1  int functionZ(int y)
2  {
3     int x = 0;
4     while (x <= (y * y))
5     {
6        if ((x % 11 == 0) &&
7            (x % y == 0))
8        {
9           printf("%d", x);
10          x++;
11       } // End if
12       else if ((x % 7 == 0) ||
13                (x % y == 1))
14       {
15          printf("%d", y);
16          x = x + 2;
17       } // End else
18       printf("\n");
19    } // End while
20    printf("End of list\n");
21    return 0;
22 } // End functionZ
A Sample Function to Diagram and Analyze contd..

[Figure: flow graph for functionZ, with nodes corresponding to statement lines 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 18, 20, and 21]
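A worked check (the arithmetic is ours, not on the slide): functionZ has five predicate nodes – the while condition at line 4 and the four simple conditions at lines 6, 7, 12, and 13 – so V(G) = P + 1 = 5 + 1 = 6. Equivalently, the flow graph above has 13 nodes and 17 edges, giving V(G) = E – N + 2 = 17 – 13 + 2 = 6, so a basis set for functionZ needs six independent paths.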
Graph Matrices

 To develop a software tool that assists in basis path testing, a data structure called the graph matrix is quite useful

 A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph

 Each node is identified by a number, and each edge is identified by a letter

 A letter entry is made in the matrix to represent a connection between two nodes

 When each link weight is 1 (a connection exists) or 0 (no connection), the graph matrix is called a connection matrix

 Provides another way to determine cyclomatic complexity
Graph Matrices contd..

[Figure: an example flow graph with its graph matrix and connection matrix]
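A minimal sketch (the 5-node matrix below is invented for illustration) of how a tool could derive cyclomatic complexity from a connection matrix: for each row with more than one outgoing link, add (links – 1) to a running total, then add 1.

    #include <stdio.h>

    #define N 5   /* number of flow-graph nodes in this made-up example */

    int main(void)
    {
        /* Connection matrix: m[i][j] = 1 if an edge runs from node i to j. */
        int m[N][N] = {
            {0, 1, 0, 0, 0},   /* node 0 -> 1                    */
            {0, 0, 1, 1, 0},   /* node 1 branches: predicate node */
            {0, 0, 0, 0, 1},   /* node 2 -> 4                    */
            {0, 0, 0, 0, 1},   /* node 3 -> 4                    */
            {0, 0, 0, 0, 0}    /* node 4 is the exit node        */
        };

        int vg = 1;
        for (int i = 0; i < N; i++) {
            int out = 0;                  /* out-degree of node i */
            for (int j = 0; j < N; j++)
                out += m[i][j];
            if (out > 1)
                vg += out - 1;            /* each extra branch adds one */
        }
        printf("V(G) = %d\n", vg);        /* prints V(G) = 2 */
        return 0;
    }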
When is White-box Testing Appropriate?

"The rigour that white-box testing employs is quite useful – yes, but not all the time." It is most valuable for mission-critical systems.

By mission critical we mean, for instance, the core banking system that provides the IT backbone to operate a bank. The core banking system houses all transactions and corresponding customer data, and helps run the bank day-to-day. Another example could be the IT systems that help governments run defence operations.

Any system that provides such critical utility to a company, organization, or government needs to be bug-free. Any level of bugs or downtime is unacceptable for such systems, as they perform extremely vital functions for the stakeholders involved.
Black-box Testing

Black-box Testing – Steps

 Here are the generic steps followed to carry out any type of black-box testing:

 Initially, the requirements and specifications of the system are examined.

 The tester chooses valid inputs (positive test scenarios) to check whether the SUT (system under test) processes them correctly. Some invalid inputs (negative test scenarios) are also chosen to verify that the SUT is able to detect them.

 The tester determines the expected outputs for all those inputs.

 The software tester constructs test cases with the selected inputs.

 The test cases are executed.

 The software tester compares the actual outputs with the expected outputs.

 Defects, if any, are fixed and re-tested.
Black-box Testing Categories

 Incorrect or missing functions

 Interface errors

 Errors in data structures or external database access

 Behavior or performance errors

 Initialization and termination errors
Black-box Testing contd..

 Practically, due to time and budget considerations, it is not possible to perform exhaustive testing for each set of test data, especially when there is a large pool of input combinations.

 We need special techniques that select test cases intelligently from the pool of test cases, such that all test scenarios are covered.

 We use two techniques – Equivalence Partitioning and Boundary Value Analysis – to achieve this.
What is Boundary Testing?

 Boundary testing is the process of testing at the extreme ends, or boundaries, between partitions of the input values.

 These extreme ends – Start-End, Lower-Upper, Maximum-Minimum, Just Inside-Just Outside values – are called boundary values, and the testing is called "boundary testing".

 The basic idea in boundary value testing is to select input variable values at their:
 Minimum
 Just above the minimum
 A nominal value
 Just below the maximum
 Maximum
What is Equivalence Class Partitioning?

 Equivalence class partitioning is a black-box technique (the code is not visible to the tester) which can be applied to all levels of testing: unit, integration, system, etc. In this technique, you divide the set of test conditions into partitions whose members can be considered the same.

 It divides the input data of the software into different equivalence data classes.

 You can apply this technique wherever there is a range in an input field.
Example 1: Equivalence and Boundary Value

 Consider the behavior of the ticket count in a flight reservation application while booking a new flight.

 Ticket values 1 to 10 are considered valid, and the ticket is booked. Values 11 to 99 are considered invalid for reservation, and an error message appears: "Only ten tickets may be ordered at one time."
 Here are the test conditions:

 Any number greater than 10 entered in the reservation column (say 11) is considered invalid.

 Any number less than 1 (that is, 0 or below) is considered invalid.

 Numbers 1 to 10 are considered valid.

 Any 3-digit number, say -100, is invalid.

 We cannot test all the possible values, because if we did, the number of test cases would be more than 100. To address this problem, we use the equivalence partitioning hypothesis, where we divide the possible values of tickets into groups, or sets, as shown below, within which the system behavior can be considered the same.

[Figure: ticket values divided into equivalence partitions – below 1 (invalid), 1 to 10 (valid), and 11 to 99 (invalid)]

 The divided sets are called equivalence partitions, or equivalence classes. We then pick only one value from each partition for testing. The hypothesis behind this technique is that if one condition/value in a partition passes, all others will also pass. Likewise, if one condition in a partition fails, all other conditions in that partition will fail.
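A minimal sketch of this "one value per partition" idea (booking_accepted is a hypothetical stand-in for the reservation field's validation logic):

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical validator: 1..10 tickets may be booked at a time. */
    static int booking_accepted(int tickets)
    {
        return tickets >= 1 && tickets <= 10;
    }

    int main(void)
    {
        /* One representative value from each equivalence partition. */
        assert(!booking_accepted(-100));  /* invalid partition: below 1  */
        assert( booking_accepted(5));     /* valid partition: 1 to 10    */
        assert(!booking_accepted(42));    /* invalid partition: above 10 */
        printf("One test per partition: all passed\n");
        return 0;
    }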
 Boundary Value Analysis: in boundary value analysis, you test the boundaries between the equivalence partitions.
Example 2: Equivalence and Boundary Value

 Suppose a password field accepts a minimum of 6 characters and a maximum of 10 characters. That means results for values in the partitions 0-5, 6-10, and 11-14 should be equivalent.

Test Scenario #   Test Scenario Description                     Expected Outcome
1                 Enter 0 to 5 characters in password field     System should not accept
2                 Enter 6 to 10 characters in password field    System should accept
3                 Enter 11 to 14 characters in password field   System should not accept
Example 3: Input Box Should Accept Numbers 1 to 10

 Here are the boundary value test cases:

Test Scenario Description   Expected Outcome
Boundary Value = 0          System should NOT accept
Boundary Value = 1          System should accept
Boundary Value = 2          System should accept
Boundary Value = 9          System should accept
Boundary Value = 10         System should accept
Boundary Value = 11         System should NOT accept
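A minimal sketch of how these boundary cases could be automated (accepts_quantity is a hypothetical stand-in for the input box's validation logic):

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical validator for the input box: accepts 1..10. */
    static int accepts_quantity(int n)
    {
        return n >= 1 && n <= 10;
    }

    int main(void)
    {
        /* Boundary value cases from the table above. */
        assert(!accepts_quantity(0));    /* just below the minimum: reject */
        assert( accepts_quantity(1));    /* minimum: accept                */
        assert( accepts_quantity(2));    /* just above the minimum: accept */
        assert( accepts_quantity(9));    /* just below the maximum: accept */
        assert( accepts_quantity(10));   /* maximum: accept                */
        assert(!accepts_quantity(11));   /* just above the maximum: reject */
        printf("All boundary value tests passed\n");
        return 0;
    }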
Why Equivalence & Boundary Analysis Testing?

 This testing is used to reduce a very large number of test cases to manageable chunks.

 It gives very clear guidelines for determining test cases without compromising the effectiveness of testing.

 It is appropriate for calculation-intensive applications with a large number of variables/inputs.
Equivalence Partitioning

 A black-box testing method that divides the input domain of a program into classes of data from which test cases are derived

 An ideal test case single-handedly uncovers a complete class of errors, thereby reducing the total number of test cases that must be developed

 Test case design is based on an evaluation of equivalence classes for an input condition

 An equivalence class represents a set of valid or invalid states for input conditions
Guidelines for Defining Equivalence Classes

 If an input condition specifies a range, one valid and two invalid equivalence classes are defined

 Input range: 1 – 10; Eq classes: {1..10}, {x < 1}, {x > 10}

 If an input condition requires a specific value, one valid and two invalid equivalence classes are defined

 Input value: 250; Eq classes: {250}, {x < 250}, {x > 250}

 If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined

 Input set: {-2.5, 7.3, 8.4}; Eq classes: {-2.5, 7.3, 8.4}, {any other x}

 If an input condition is a Boolean value, one valid and one invalid class are defined

 Input: {true condition}; Eq classes: {true condition}, {false condition}
Boundary Value Analysis

 A greater number of errors occur at the boundaries of the input domain than in the "center"

 Boundary value analysis is a test case design method that complements equivalence partitioning

 It selects test cases at the edges of a class

 It derives test cases from both the input domain and the output domain
Guidelines for Boundary Value Analysis

 1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b as well as values just above and just below a and b

 2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers; values just above and just below the minimum and maximum are also tested

 3. Apply guidelines 1 and 2 to output conditions: produce output that reflects the minimum and the maximum values expected, and also test the values just below and just above them

 4. If internal program data structures have prescribed boundaries (e.g., an array), design a test case to exercise the data structure at its minimum and maximum boundaries
Software Testing Strategies

Introduction

 A strategy for software testing integrates the design of software test cases into a well-planned series of steps that result in the successful development of the software

 The strategy provides a road map that describes the steps to be taken, when they will be taken, and how much effort, time, and resources will be required

 The strategy incorporates test planning, test case design, test execution, and test result collection and evaluation
A Strategic Approach to Testing

General Characteristics of Strategic Testing

 To perform effective testing, a software team should conduct effective formal technical reviews

 Testing begins at the component level and works outward toward the integration of the entire computer-based system

 Different testing techniques are appropriate at different points in time

 Testing is conducted by the developer of the software and (for large projects) by an independent test group

 Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
Verification and Validation

 Software testing is part of a broader group of activities called verification and validation that are involved in software quality assurance

 Verification (Are the algorithms coded correctly?)

 The set of activities that ensure that software correctly implements a specific function or algorithm

 Validation (Does it meet user requirements?)

 The set of activities that ensure that the software that has been built is traceable to customer requirements
A Strategy for Testing Conventional Software

[Figure: the testing strategy as a spiral – development proceeds from system engineering to requirements, design, and code, while testing moves outward from unit testing through integration testing and validation testing to system testing]
Levels of Testing for Conventional Software
 Unit testing
 Concentrates on each component/function of the software as implemented in the source code

 Integration testing
 Focuses on the design and construction of the software architecture

 Validation testing
 Requirements are validated against the constructed software

 System testing
 The software and other system elements are tested as a whole

Unit Testing

 Focuses testing on the function or software module

 Concentrates on the internal processing logic and data structures

 Is simplified when a module is designed with high cohesion

 Reduces the number of test cases

 Allows errors to be more easily predicted and uncovered

 Concentrates on critical modules and those with high cyclomatic complexity when testing resources are limited
Targets for Unit Test Cases

 Module interface
 Ensure that information flows properly into and out of the module

 Local data structures
 Ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution

 Boundary conditions
 Ensure that the module operates properly at boundary values established to limit or restrict processing

 Independent paths (basis paths)
 Paths are exercised to ensure that all statements in a module have been executed at least once

 Error-handling paths
 Ensure that the algorithms respond correctly to specific error conditions
Integration Testing

 Defined as a systematic technique for constructing the software architecture

 At the same time integration is occurring, tests are conducted to uncover errors associated with interfaces

 The objective is to take unit-tested modules and build a program structure based on the prescribed design

 Two approaches:
 Non-incremental integration testing
 Incremental integration testing
Sample Integration Test Cases

Scenario: an application has three modules, say 'Login Page', 'Mail Box' and 'Delete Mails', and each of them is integrated logically.

Here, do not concentrate much on Login Page testing, as it has already been done in unit testing; instead, check how it is linked to the Mail Box page. Similarly for the Mail Box: check its integration with the Delete Mails module.

Test Case ID   Test Case Objective                    Test Case Description                  Expected Result
1              Check the interface link between       Enter login credentials and click     To be directed to the Mail Box
               the Login and Mail Box modules         on the Login button
2              Check the interface link between       From the Mail Box, select an email    Selected email should appear in
               the Mail Box and Delete Mails module   and click the delete button           the Deleted/Trash folder
Non-incremental Integration Testing

 Commonly called the "Big Bang" approach

 All components are combined in advance

 The entire program is tested as a whole

 Chaos results

 Many seemingly unrelated errors are encountered

 Correction is difficult because isolation of causes is complicated

 Once a set of errors is corrected, more errors occur, and testing appears to enter an endless loop
Incremental Integration Testing

 Three kinds:
 Top-down integration
 Bottom-up integration
 Sandwich integration

 The program is constructed and tested in small increments

 Errors are easier to isolate and correct

 Interfaces are more likely to be tested completely

 A systematic test approach is applied
Top-down Integration

 Modules are integrated by moving downward through the control hierarchy, beginning with the main module

 Subordinate modules are incorporated in either a depth-first or breadth-first fashion
Top-down Integration contd..

 Advantages

 This approach verifies major control or decision points early in the test process

 Disadvantages

 Stubs need to be created to substitute for modules that have not been built or tested yet; this code is later discarded

 Because stubs are used to replace lower-level modules, no significant data flow can occur until much later in the integration/testing process
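A minimal sketch of a stub (the module names are invented): while the real get_tax_rate module is still unbuilt, the higher-level module can be integration-tested against a hard-coded stand-in.

    #include <stdio.h>

    /* Stub standing in for an unbuilt lower-level module.  It returns
     * a fixed, plausible value so the caller can be tested now; the
     * stub is discarded once the real module is ready. */
    double get_tax_rate(const char *region)
    {
        (void)region;      /* the stub ignores its input */
        return 0.08;       /* canned answer              */
    }

    /* Higher-level module under test. */
    double price_with_tax(double net, const char *region)
    {
        return net * (1.0 + get_tax_rate(region));
    }

    int main(void)
    {
        printf("%.2f\n", price_with_tax(100.0, "EU"));   /* prints 108.00 */
        return 0;
    }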
Bottom-up Integration

 In the bottom-up strategy, each module at the lower levels is tested with the modules above it, until all modules have been tested. It takes the help of drivers for testing.
Bottom-up Integration contd..

 Advantages:

 Fault localization is easier.

 No time is wasted waiting for all modules to be developed, unlike in the Big Bang approach.

 Disadvantages:

 Critical modules (at the top level of the software architecture) which control the flow of the application are tested last and may be prone to defects.

 An early prototype is not possible.
Bottom-up Integration contd..

 Integration and testing start with the subsystems, which are then added to the modules above them.

 Advantages

 This approach verifies low-level data processing early in the testing process

 The need for stubs is eliminated

 Disadvantages

 Driver modules need to be built to test the lower-level modules; this code is later discarded or expanded into a full-featured version

 Drivers inherently do not contain the complete algorithms that will eventually use the services of the lower-level modules; consequently, testing may be incomplete, or more testing may be needed later when the upper-level modules are available
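A minimal sketch of a driver, the mirror image of a stub (clamp is an invented low-level module): the throwaway main plays the role of the not-yet-built upper-level module, feeding inputs to the tested module and checking its outputs.

    #include <assert.h>
    #include <stdio.h>

    /* Low-level module that has already been built and unit tested. */
    int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    /* Driver: temporary code standing in for the upper-level modules. */
    int main(void)
    {
        assert(clamp(-5, 0, 10) == 0);    /* below the range  */
        assert(clamp( 5, 0, 10) == 5);    /* inside the range */
        assert(clamp(50, 0, 10) == 10);   /* above the range  */
        printf("clamp driver: all checks passed\n");
        return 0;
    }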
Sandwich Integration

 Consists of a combination of both top-down and bottom-up integration

 Occurs both at the highest-level modules and at the lowest-level modules

 Proceeds using functional groups of modules, with each group completed before the next

 High- and low-level modules are grouped based on the control and data processing they provide for a specific program feature

 Reaps the advantages of both types of integration while minimizing the need for drivers and stubs
Regression Testing

 Each new addition or change to baselined software may cause problems with functions that previously worked flawlessly

 Regression testing re-executes a small subset of tests that have already been conducted

 Ensures that changes have not propagated unintended side effects

 Helps to ensure that changes do not introduce unintended behavior or additional errors

 May be done manually or through the use of automated capture/playback tools

 The regression test suite contains three different classes of test cases:

 A representative sample of tests that will exercise all software functions

 Additional tests that focus on software functions that are likely to be affected by the change

 Tests that focus on the actual software components that have been changed
Smoke Testing

 Taken from the world of hardware

 Power is applied and a technician checks for sparks, smoke, or other dramatic signs of fundamental failure

 Designed as a pacing mechanism for time-critical projects

 Allows the software team to assess its project on a frequent basis

 Includes the following activities:

 The software is compiled and linked into a build

 A series of breadth tests is designed to expose errors that will keep the build from properly performing its function

 The goal is to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule

 The build is integrated with other builds, and the entire product is smoke tested daily

 Daily testing gives managers and practitioners a realistic assessment of the progress of integration testing

 After a smoke test is completed, detailed test scripts are executed
Benefits of Smoke Testing

 Integration risk is minimized

 Daily testing uncovers incompatibilities and show-stoppers early in the testing process, thereby reducing schedule impact

 The quality of the end product is improved

 Smoke testing is likely to uncover both functional errors and architectural and component-level design errors

 Error diagnosis and correction are simplified

 Smoke testing will probably uncover errors in the newest components that were integrated

 Progress is easier to assess

 As integration testing progresses, more software has been integrated and more has been demonstrated to work, giving managers a good indication that progress is being made
Validation Testing

Background

 Validation testing follows integration testing

 Focuses on user-visible actions and user-recognizable output from the system

 Demonstrates conformity with requirements

 Designed to ensure that:
 All functional requirements are satisfied
 All behavioral characteristics are achieved
 All performance requirements are attained
 Documentation is correct
 Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability)

 After each validation test, either:
 The function or performance characteristic conforms to specification and is accepted, or
 A deviation from specification is uncovered and a deficiency list is created

 A configuration review, or audit, ensures that all elements of the software configuration have been properly developed, cataloged, and have the necessary detail for entering the support phase of the software life cycle
Alpha and Beta Testing

 Alpha testing
 Conducted at the developer's site by end users
 Software is used in a natural setting with developers watching intently
 Testing is conducted in a controlled environment

 Beta testing
 Conducted at end-user sites
 The developer is generally not present
 It serves as a live application of the software in an environment that cannot be controlled by the developer
 The end user records all problems that are encountered and reports these to the developers at regular intervals

 After beta testing is complete, software engineers make software modifications and prepare for release of the software product to the entire customer base
System Testing
Different Types

 Recovery testing
 Tests for recovery from system faults
 Forces the software to fail in a variety of ways and verifies that recovery is properly performed
 Tests reinitialization, checkpointing mechanisms, data recovery, and restart for correctness

 Security testing
 Verifies that protection mechanisms built into a system will, in fact, protect it from improper access

 Stress testing
 Executes a system in a manner that demands resources in abnormal quantity, frequency, or volume

 Performance testing
 Tests the run-time performance of software within the context of an integrated system
 Often coupled with stress testing; usually requires both hardware and software instrumentation
What is Quality Assurance?

 Quality =
 – ...meeting the customer's requirements,
 – ...at the agreed cost,
 – ...within the agreed timescales.

 Quality = "Fitness for purpose"

 Quality = Customer satisfaction!

"Success" v "Failure": quality assurance helps us avoid failure!

[Figure: a project is a success only when cost, timescale, and content targets are all met; falling short on any one of them is failure]
Quality Concepts

 Quality control: a series of inspections, reviews, and tests

 Quality assurance: analysis, auditing, and reporting activities

 Cost of quality
Software Quality Assurance

 Software quality is defined as conformance to:

 explicitly stated functional and performance requirements,

 explicitly documented development standards,

 and implicit characteristics that are expected of all professionally developed software.
Software Quality Assurance contd..

[Figure: the elements of SQA – process definition & standards, formal technical reviews, test planning & review, measurement, and analysis & reporting]
Software Quality Assurance contd..

1. SQA is an umbrella activity that is applied throughout the software process.

2. SQA encompasses:

 1. A quality management approach,
 2. Effective software engineering technology,
 3. Formal technical reviews that are applied throughout the software process,
 4. A multitier testing strategy,
 5. Control of software documentation and the changes made to it,
 6. A procedure to ensure compliance with software development standards (when applicable), and
 7. Measurement and reporting mechanisms.


SQA Activities

1. Prepares an SQA plan for a project, covering:
 Evaluations to be performed
 Audits and reviews to be performed
 Documents to be produced by the SQA group
 Procedures for error reporting and tracking

2. Participates in the development of the project's software process description.

3. Audits designated software work products to verify compliance with those defined as part of the software process.

4. Reviews software engineering activities to verify compliance with the defined software process.

5. Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
What Are Reviews?

 A meeting conducted by technical people for technical people

 A technical assessment of a work product created during the software engineering process

 A software quality assurance mechanism
The Players

[Figure: review meeting roles – review leader, producer, recorder, and reviewers, including the standards bearer (SQA), a maintenance representative, and a user representative]
Conducting the Review

1. Be prepared: evaluate the product before the review
2. Review the product, not the producer
3. Keep your tone mild; ask questions instead of making accusations
4. Stick to the review agenda
5. Raise issues, don't resolve them
6. Avoid discussions of style; stick to technical correctness
7. Schedule reviews as project tasks
8. Record and report all review results
Software Quality Factors

• Product revision (ability to change)

• Product transition (adaptability to new environments)

• Product operations (basic operational characteristics)
The SQA Attributes

 There is a list of attributes that describes the step-by-step approach to achieving software quality assurance. The attributes are given in the diagram below:

[Figure: diagram of the SQA attributes discussed on the following slides]
The SQA Attributes contd..

 Functionality: This attribute considers the set of all the functions provided by the software.
 ◦ Suitability: Ensures the functions of the software are appropriate.
 ◦ Accuracy: Ensures the accurate usage of the functions.
 ◦ Interoperability: Ensures the effective interaction of the software with other components.
 ◦ Security: Ensures the software is capable of handling any security issues.
The SQA Attributes contd..

 Reliability: The purpose of this attribute is to check the capability of the system to perform without failure under stated conditions.
 ◦ Maturity: A low possibility of failure of the software in any activity.
 ◦ Recoverability: The ability of the software to recover once a failure occurs.

 A simple measure of reliability is mean time between failure (MTBF), where

 MTBF = MTTF + MTTR

 MTTF = mean time to failure
 MTTR = mean time to repair
Software Availability

 Software availability is the probability that a program is operating according to requirements at a given point in time, and is defined as

 Availability = [MTTF / (MTTF + MTTR)] * 100%
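 For instance (made-up numbers, not from the slide): if a program has MTTF = 950 hours and MTTR = 50 hours, then Availability = [950 / (950 + 50)] * 100% = 95%.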
 Usability: The purpose of this attribute is to ensure the ease of use of the software's functions.
 ◦ Understandability: How much effort a user needs to understand the functions.
The SQA Attributes contd..

 Maintainability: The ability to analyze and fix a fault/issue in the software.
 ◦ Analyzability: Finding out the cause of a failure.
 ◦ Changeability: How the system responds to necessary changes.
 ◦ Stability: How stable the system remains when changes are made.
 ◦ Testability: The effort needed to test the system.

 Adaptability: The ability of the system to adapt to changes in its environment.
ISO 9000 Quality Standards

 ISO 9000 describes quality assurance elements in generic terms that can be applied to any business, regardless of the products or services offered.

 To become registered to one of the quality assurance system models contained in ISO 9000, a company's quality system and operations are scrutinized by third-party auditors for compliance with the standard and for effective operation.
ISO 9000 Quality Standards contd..

 ISO 9000 describes the elements of a quality assurance system in general terms.

 These elements include the organizational structure, procedures, processes, and resources needed to implement quality planning, quality control, quality assurance, and quality improvement.

 However, ISO 9000 does not describe how an organization should implement these quality system elements.
The ISO 9001 Standard

 ISO 9001 is the quality assurance standard that applies to software engineering.

 The standard contains 20 requirements that must be present for an effective quality assurance system. [These include management responsibility, quality system, contract review, design control, document and data control, product identification and traceability, process control, inspection and testing, corrective and preventive action, control of quality records, internal quality audits, training, servicing, and statistical techniques.]
Software Configuration Management

 SCM is an umbrella activity, i.e. one applied throughout the software process, because change can occur at any time.

 SCM activities are developed to:
 1. Identify change,
 2. Control change,
 3. Ensure that change is being properly implemented, and
 4. Report changes to others who may have an interest.
Fundamental Sources of Change

 New business or market conditions dictate changes to product requirements or business rules

 New customer needs demand modification of data, functionality, or services

 Reorganization or business growth/downsizing causes changes in project priorities or software engineering team structure

 Budgetary or scheduling constraints cause the system to be redefined
What Are These Changes?

[Figure: changes in business requirements, technical requirements, and user requirements ripple through the software configuration – the project plan, data, test documents, code, software models, and other documents]
Software Configuration Items

 A software configuration item (SCI) is information that is created as part of the software engineering process.

 An SCI may be:
 • A computer program (both source and executable)
 • Documentation (both technical and user)
 • Data (contained within the program or external to it)
Baseline

• A work product or specification becomes a baseline only after it has been reviewed and approved; thereafter it serves as the basis for further development.

• A baselined product or specification can be changed only through a formal change control procedure.

• A baseline is a milestone in the development of software.
Baseline (Continued)

 For example: the elements of a Design Specification have been documented and reviewed, and errors are found and corrected. Once all parts of the specification have been reviewed, corrected, and then approved, the Design Specification becomes a baseline.

 Further changes to the Design Specification can be made only after each one has been evaluated and approved.
Software Configuration Management contd..

 SCM is an important element of software quality assurance. Its primary responsibilities are:

 Identification (tracking multiple versions to enable efficient changes)
 Version control (controlling changes before and after release to the customer)
 Change control (the authority to approve and prioritize changes)
 Configuration auditing (ensuring changes have been made properly)
 Reporting (telling others about changes made)
The Evolution Graph

 The evolution graph describes the change history of an object.
Version Control

 Combines procedures and tools to manage the different versions of configuration objects created during the software process

 Configuration management allows a user to specify alternative configurations of the software system through the selection of appropriate versions

 The evolution graph can be used to describe the different versions of a system

 Each version of the software is a collection of SCIs
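A minimal sketch (field names invented) of how a version-control tool might represent one node of an evolution graph:

    /* One node in an evolution graph (a hypothetical model). */
    typedef struct Version {
        char            id[16];       /* e.g. "1.3" or "1.3.1"             */
        const char    **sci_names;    /* the SCIs making up this version   */
        int             sci_count;
        struct Version *parent;       /* the version this one evolved from */
        struct Version *variant;      /* first variant branched from here  */
    } Version;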


Change Control Process – I

 The need for change is recognized
 A change request is submitted by the user
 The developer evaluates it
 A change report is generated
 The change control authority decides:
 either the request is queued for action (proceed to change control process – II),
 or the change request is denied and the user is informed
Change Control Process – II

 Assign people to SCIs
 Check out the SCIs
 Make the change
 Review/audit the change
 Establish a "baseline" for testing
 Proceed to change control process – III


Change Control Process – III

 Perform SQA and testing activities
 Check in the changed SCIs
 Promote the SCIs for inclusion in the next release
 Rebuild the appropriate version
 Review/audit the change
 Include all changes in the release
Configuration Audit

• How do we ensure a change has been properly implemented? Two mechanisms are used:

• the formal technical review (FTR), and

• the software configuration audit.

• The FTR focuses on the technical correctness of the configuration object that has been modified.

• In an FTR, the reviewers assess the SCI to determine consistency with other SCIs, omissions, or potential side effects.
Configuration Audit (Continued)

 The audit asks and answers the following questions:

• Has the change specified by the ECO (engineering change order) been made without additional modifications?

• Has an FTR been conducted to assess technical correctness?

• Was the software process followed, and were software engineering standards applied?

• Have the SCM standards for recording and reporting the change been followed?

• Were all related SCIs properly updated?
Configuration Status Reporting

 Configuration status reporting (or status accounting) is an SCM task that answers the following questions:

• What happened?

• Who did it?

• When did it happen?

• What else will be affected by the change?
Configuration Status Reporting contd..

 Each time an SCI is assigned new or updated identification, a CSR entry is made.

 After a configuration audit, the results are reported as part of the CSR task.