
Glossary of Software QA Terms

Software Quality Assurance


Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.

Software Quality Assurance


(1) A planned and systematic pattern of all actions necessary to provide adequate
confidence that an item or product conforms to established technical requirements.
(2) A set of activities designed to evaluate the process by which products are developed
or manufactured.

Software 'quality'
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the
scheme of things. A wide-angle view of the 'customers' of a software development project
might include end-users, customer acceptance testers, customer contract officers,
customer management, the development organization's
management/accountants/testers/salespeople, future software maintenance engineers,
stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant
on 'quality' - the accounting department might define quality in terms of profits while an
end-user might define quality as user-friendly and bug-free.

Software Testing
Testing involves operation of a system or application under controlled conditions and
evaluating the results (e.g., 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn't or things don't
happen when they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common
are project teams that include a mix of testers and developers who work closely together,
with overall QA processes monitored by project managers. It will depend on what best
fits an organization's size and business structure.

Quality Assurance
(1) The planned and systematic activities necessary to ensure that a component, module, or
system conforms to established technical requirements.
(2) All actions that are taken to ensure that a development organization delivers products
that meet performance requirements and adhere to standards and procedures.
(3) The policy, procedures, and systematic actions established in an enterprise for the
purpose of providing and maintaining some degree of confidence in data integrity and
accuracy throughout the life cycle of the data, which includes input, update,
manipulation, and output.
(4) (QA) The actions, planned and performed, to provide confidence that all systems and
components that influence the quality of the product are working as expected individually
and collectively.

Quality Control
The operational techniques and procedures used to achieve quality requirements.

Traceability matrix
(1) A matrix that records the relationship between two or more products; e.g., a matrix
that records the relationship between the requirements and the design of a given software
component. See: traceability, traceability analysis.
(2) A traceability matrix maps requirements to test cases in order to verify whether all
test cases cover all of the stated requirements. The purpose of the traceability matrix is
to identify all business requirements and to trace each requirement through to the
project's completion.
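As a rough sketch (the requirement and test case identifiers below are hypothetical), such
a matrix can be kept as a simple mapping and scanned for requirements that no test case
covers yet:

    # Hypothetical requirements-to-test-case traceability matrix.
    traceability = {
        "REQ-001": ["TC-001", "TC-002"],   # covered by two test cases
        "REQ-002": ["TC-003"],
        "REQ-003": [],                     # no coverage yet - a gap to flag
    }

    # Report any requirement that is not yet traced to a test case.
    uncovered = [req for req, cases in traceability.items() if not cases]
    print("Requirements without test coverage:", uncovered)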

Validation
Establishing documented evidence which provides a high degree of assurance that a
specific process will consistently produce a product meeting its predetermined
specifications and quality attributes. Validation typically involves actual testing and takes
place after verifications are completed.

Verification
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings.

Validation and verification testing


Used together, these terms define a procedure of review, analysis, and testing applied
throughout the software life cycle to discover errors, determine functionality, and ensure
the production of quality software.

Software verification
In general, the demonstration of consistency, completeness, and correctness of the
software at each stage and between each stage of the development life cycle. See:
software validation.

Software life cycle


The life cycle begins when an application is first conceived and ends when it is no longer
in use. It includes aspects such as initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, retesting, phase-out, and other
aspects.

Audit
(1) An independent examination of a work product or set of work products to assess
compliance with specifications, standards, contractual agreements, or other criteria.
(2) To conduct an independent review and examination of system records and activities in
order to test the adequacy and effectiveness of data security and data integrity procedures,
to ensure compliance with established policy and operational procedures, and to
recommend any necessary changes.

Boundary value
(1) A data value that corresponds to a minimum or maximum input, internal, or output
value specified for a system or component.
(2) A value which lies at, or just inside or just outside, a specified range of valid input
and output values.

Boundary value analysis


A selection technique in which test data are chosen to lie along "boundaries" of the input
domain [or output range] classes, data structures, procedure parameters, etc. Choices
often include maximum, minimum, and trivial values or parameters. This technique is
often called stress testing.
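A minimal sketch of the idea, assuming a hypothetical input field that accepts values from
18 to 65, picks candidate test inputs at and just around those limits:

    def boundary_values(minimum, maximum):
        # Candidate test inputs at and just around the limits of a numeric range.
        return sorted({minimum - 1, minimum, minimum + 1,
                       maximum - 1, maximum, maximum + 1})

    print(boundary_values(18, 65))   # [17, 18, 19, 64, 65, 66]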

Branch coverage
Test coverage criteria which requires that for each decision point each possible branch be
executed at least once. Syn: decision coverage. Contrast with condition coverage,
multiple condition coverage, path coverage, statement coverage.
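For example, a hypothetical function with one decision point needs a test for each of its
two branches to satisfy this criterion:

    def classify(amount):
        if amount >= 100:          # single decision point, two branches
            return "bulk"
        return "standard"

    assert classify(150) == "bulk"       # exercises the True branch
    assert classify(10) == "standard"    # exercises the False branch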

Bug
(1) A fault in a program which causes the program to perform in an unintended or
unanticipated manner. See: anomaly, defect, error, exception, and fault.
(2) A bug is a glitch in computer software or hardware (where something doesn't do what
it is supposed to do). Since computers and computer software are very complicated to
design, human beings will make mistakes in the design. Unfortunately, in the rush to
market, many of these mistakes are not found until after a product has shipped. This is
why fixes (also called patches) are often posted on web sites. When considering the
quality of a product, one must consider not only the number of bugs, but also the value of
the features of a program, since a feature-rich program is likely to have more bugs than a
"plain-vanilla" program.

Cause effect graph


A Boolean graph linking causes and effects. The graph is actually a digital-logic circuit (a
combinatorial logic network) using a simpler notation than standard electronics notation.

Cause effect graphing
(1) Test data selection technique. The input and output domains are partitioned into
classes and analysis is performed to determine which input classes cause which effect. A
minimal set of inputs is chosen which will cover the entire effect set.
(2)A systematic method of generating test cases representing combinations of conditions.

Walkthrough
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.

Inspection
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the purpose is to
find problems and see what's missing, not to fix anything. Attendees should prepare for
this type of meeting by reading through the document; most problems will be found during
this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most
cost effective methods of ensuring quality. Employees who are most skilled at inspections
are like the 'eldest brother' in the parable in 'Why is it often hard for management to get
serious about quality assurance?' Their skill may have low visibility but they are
extremely valuable to any software development organization, since bug prevention is far
more cost-effective than bug detection.

Code inspection
A manual [formal] testing [error detection] technique where the programmer reads source
code, statement by statement, to a group who ask questions analyzing the program logic,
analyzing the code with respect to a checklist of historically common programming
errors, and analyzing its compliance with coding standards. Contrast with code audit,
code review, code walkthrough. This technique can also be applied to other software and
configuration items.

Code review
A meeting at which software code is presented to project personnel, managers, users,
customers, or other interested parties for comment or approval. Contrast with code audit,
code inspection, code walkthrough.

Code walkthrough
A manual testing [error detection] technique where program [source code] logic
[structure] is traced manually [mentally] by a group with a small set of test cases, while
the state of program variables is manually monitored, to analyze the programmer's logic
and assumptions. Contrast with code audit, code inspection, code review.

Coverage analysis
Determining and assessing measures associated with the invocation of program structural
elements to determine the adequacy of a test run. Coverage analysis is useful when
attempting to execute each statement, branch, path, or iterative structure in a program.
Tools are available that capture this data and provide reports summarizing the relevant
information.

Crash
The sudden and complete failure of a computer system or component.

Criticality
The degree of impact that a requirement, module, error, fault, failure, or other item has on
the development or operation of a system. Syn: severity.

Cyclomatic complexity
(1) The number of independent paths through a program.
(2) The cyclomatic complexity of a program is equivalent to the number of decision
statements plus 1.
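Applying definition (2) to a small, hypothetical function:

    def grade(score):
        if score >= 90:        # decision 1
            return "A"
        elif score >= 75:      # decision 2
            return "B"
        elif score >= 60:      # decision 3
            return "C"
        return "F"

    # 3 decision statements + 1 = cyclomatic complexity of 4, matching the
    # 4 independent paths (A, B, C, F) through the function.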

Error
A discrepancy between a computed, observed, or measured value or condition and the
true, specified, or theoretically correct value or condition.

Error guessing
Test data selection technique. The selection criterion is to pick values that seem likely to
cause errors.

Error seeding
(IEEE) The process of intentionally adding known faults to those already
in a computer program for the purpose of monitoring the rate of detection and removal,
and estimating the number of faults remaining in the program. Contrast with mutation
analysis.
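A common way to turn the detection counts into an estimate (a sketch using hypothetical
numbers) assumes seeded and real faults are detected at roughly the same rate:

    def estimate_remaining_faults(seeded, seeded_found, real_found):
        # Estimated total real faults ~= real_found * seeded / seeded_found.
        if seeded_found == 0:
            raise ValueError("no seeded faults detected; estimate undefined")
        estimated_total = real_found * seeded / seeded_found
        return estimated_total - real_found   # real faults estimated to remain

    # 20 faults seeded, 15 of them detected, 30 real faults detected:
    print(round(estimate_remaining_faults(20, 15, 30)))   # about 10 remaining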

Exception
An event that causes suspension of normal program execution. Types include addressing
exception, data exception, operation exception, overflow exception, protection exception,
and underflow exception.

Failure
The inability of a system or component to perform its required functions within specified
performance requirements.

Fault
An incorrect step, process, or data definition in a computer program which causes the
program to perform in an unintended or unanticipated manner.

Review
A process or meeting during which a work product or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment
or approval. Types include code review, design review, formal qualification review,
requirements review, test readiness review. Contrast with audit, inspection.

Risk
A measure of the probability and severity of undesired effects. Often taken as the simple
product of probability and consequence.
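Taking risk as that simple product, a small sketch with hypothetical figures:

    def risk_exposure(probability, consequence):
        # Risk as the product of probability (0..1) and consequence (e.g. cost).
        return probability * consequence

    # A 10% chance of a failure that would cost 50,000 to remediate:
    print(risk_exposure(0.10, 50_000))   # 5000.0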

Risk Assessment
A comprehensive evaluation of the risk and its associated impact.

Software Review
An evaluation of software elements to ascertain discrepancies from planned results and to
recommend improvement. This evaluation follows a formal process.

Static analysis
(1) Analysis of a program that is performed without executing the program.
(2) The process of evaluating a system or component based on its form, structure, content,
and documentation. Contrast with dynamic analysis.

Test
An activity in which a system or component is executed under specified conditions, the
results are observed or recorded and an evaluation is made of some aspect of the system
or component.

Testability
(1) The degree to which a system or component facilitates the establishment of test
criteria and the performance of tests to determine whether those criteria have been met.
(2) The degree to which a requirement is stated in terms that permit establishment of test
criteria and performance of tests to determine whether those criteria have been met.

Test case
Documentation specifying inputs, predicted results, and a set of execution conditions for
a test item.
A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements
or design of an application, since it requires completely thinking through the operation of
the application. For this reason, it's useful to prepare test cases early in the development
cycle if possible.
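The particulars listed above might be recorded, for a hypothetical login test, roughly as
follows:

    # All identifiers and values below are hypothetical.
    test_case = {
        "id": "TC-042",
        "name": "Login rejects an invalid password",
        "objective": "Verify that authentication fails for a wrong password",
        "setup": "A registered user 'alice' exists; application is at the login page",
        "input_data": {"username": "alice", "password": "wrong-password"},
        "steps": ["Enter the username and password", "Click the Login button"],
        "expected_result": "An 'invalid credentials' message is shown and no session is created",
    }
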
Test case generator
A software tool that accepts as input source code, test criteria, specifications, or data
structure definitions; uses these inputs to generate test input data; and, sometimes,
determines expected results.

Test design
Documentation specifying the details of the test approach for a software feature or
combination of software features and identifying the associated tests.

Test documentation
Documentation describing plans for, or results of, the testing of a system or component.
Types include test case specification, test incident report, test log, test plan, test
procedure, test report.

Test driver
A software module used to invoke a module under test and, often, provide test inputs,
control and monitor execution, and report test results.
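A minimal sketch of a test driver, using a hypothetical add function as the module under
test:

    def add(a, b):               # stand-in for the module under test
        return a + b

    def run_tests():
        cases = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]   # (inputs, expected)
        for args, expected in cases:
            actual = add(*args)
            status = "PASS" if actual == expected else "FAIL"
            print(f"add{args} -> {actual} (expected {expected}): {status}")

    if __name__ == "__main__":
        run_tests()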

Test incident report


A document reporting on any event that occurs during testing that requires further
investigation.

Test item
A software item which is the object of testing.

Test log
A chronological record of all relevant details about the execution of a test.

Test phase
The period of time in the software life cycle in which the components of a software
product are evaluated and integrated, and the software product is evaluated to determine
whether or not requirements have been satisfied.

Test plan
Documentation specifying the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing tasks,
responsibilities, required resources, and any risks requiring contingency planning.

Test procedure
A formal document developed from a test plan that presents detailed instructions for the
setup, operation, and evaluation of the results for each defined test.

Test report
A document describing the conduct and results of the testing carried out for a system or
system component.

Test result analyzer
A software tool used to test output data reduction, formatting, and printing.

Testing
(1) The process of operating a system or component under specified conditions, observing
or recording the results, and making an evaluation of some aspect of the system or
component.
(2) The process of analyzing a software item to detect the differences between existing
and required conditions, i.e. bugs, and to evaluate the features of the software items.

Acceptance Testing
Testing conducted to determine whether or not a system satisfies its acceptance criteria
and to enable the customer to determine whether or not to accept the system. Contrast
with testing, development; testing, operational.

Alpha Testing
Acceptance testing performed by the customer in a controlled environment at the
developer's site. The software is used by the customer in a setting approximating the
target environment with the developer observing and recording errors and usage
problems.

Assertion Testing
A dynamic analysis technique which inserts assertions about the relationship between
program variables into the program code. The truth of the assertions is determined as the
program executes.
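A small sketch, using a hypothetical function whose embedded assertions are checked as the
program executes:

    def apply_discount(price, rate):
        assert 0.0 <= rate <= 1.0, "rate must be a fraction between 0 and 1"
        discounted = price * (1 - rate)
        assert 0 <= discounted <= price, "result must not exceed the original price"
        return discounted

    apply_discount(100.0, 0.2)    # assertions hold
    # apply_discount(100.0, 1.5)  # would raise AssertionError during execution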

Beta Testing
(1) Acceptance testing performed by the customer in a live application of the software, at
one or more end user sites, in an environment not controlled by the developer.
(2) For medical device software such use may require an Investigational Device
Exemption [IDE] or Institutional Review Board [IRB] approval.

Boundary Value Testing


A testing technique using input values at, just below, and just above, the defined limits of
an input domain; and with input values causing outputs to be at, just below, and just
above, the defined limits of an output domain.

Branch Testing
Testing technique to satisfy coverage criteria which require that for each decision point,
each possible branch [outcome] be executed at least once. Contrast with testing, path;
testing, statement.

Compatibility Testing
The process of determining the ability of two or more systems to exchange information.
In a situation where the developed software replaces an already working program, an
investigation should be conducted to assess possible compatibility problems between the
new software and other programs or systems.

Exhaustive Testing
Executing the program with all possible combinations of values for program variables.
Feasible only for small, simple programs.
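For instance, a two-input Boolean function has only four input combinations, so it can be
tested exhaustively (sketch with a hypothetical function):

    from itertools import product

    def xor(a, b):
        return (a or b) and not (a and b)

    # All 4 combinations of the two boolean inputs are executed.
    for a, b in product([False, True], repeat=2):
        assert xor(a, b) == (a != b), f"unexpected result for {a}, {b}"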

Functional Testing
(1) Testing that ignores the internal mechanism or structure of a system or component and
focuses on the outputs generated in response to selected inputs and execution conditions.
(2) Testing conducted to evaluate the compliance of a system or component with
specified functional requirements and corresponding predicted results.

Integration Testing
An orderly progression of testing in which software elements, hardware elements, or both
are combined and tested, to evaluate their interactions, until the entire system has been
integrated.

Interface Testing
Testing conducted to evaluate whether systems or components pass data and control
correctly to one another. Contrast with testing, unit; testing, system.

Mutation Testing
A testing methodology in which two or more program mutations are executed using the
same test cases to evaluate the ability of the test cases to detect differences in the
mutations.
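A rough sketch: the same hypothetical test cases are run against the original program and
against a mutant in which one operator was changed; a useful test suite should distinguish
the two:

    def maximum(a, b):           # original program
        return a if a >= b else b

    def maximum_mutant(a, b):    # mutation: '>=' changed to '<='
        return a if a <= b else b

    tests = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]
    for args, expected in tests:
        assert maximum(*args) == expected        # suite passes on the original
    killed = any(maximum_mutant(*args) != expected for args, expected in tests)
    print("mutant killed:", killed)              # True: the suite detects the mutant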

Operational Testing
Testing conducted to evaluate a system or component in its operational environment.
Contrast with testing, development; testing, acceptance.

Parallel Testing
Testing a new or an altered data processing system with the same source data that is used
in another system. The other system is considered as the standard of comparison.

Path Testing
Testing to satisfy coverage criteria that each logical path through the program be tested.
Often paths through the program are grouped into a finite set of classes. One path from
each class is then tested.

Performance Testing
Functional testing conducted to evaluate the compliance of a system or component with
specified performance requirements.

Qualification Testing
Formal testing, usually conducted by the developer for the consumer, to demonstrate that
the software meets its specified requirements.

Regression Testing
Rerunning test cases which a program has previously executed correctly in order to detect
errors spawned by changes or corrections made during software development and
maintenance.

Statement Testing
Testing to satisfy the criterion that each statement in a program be executed at least once
during program testing.

Storage Testing
This is a determination of whether or not certain processing conditions use more storage
[memory] than estimated.

Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its
specified requirements.

Structural Testing
(1) Testing that takes into account the internal mechanism [structure] of a system or
component. Types include branch testing, path testing, statement testing.
(2) Testing to ensure each program statement is made to execute during testing and that
each program statement performs its intended function.

System Testing
The process of testing an integrated hardware and software system to verify that the
system meets its specified requirements. Such testing may be conducted in both the
development environment and the target environment.

Unit Testing
(1) Testing of a module for typographic, syntactic, and logical errors, for correct
implementation of its design, and for satisfaction of its requirements. (2) Testing
conducted to verify the implementation of the design for one software element; e.g., a
unit or module; or a collection of software elements.

Usability Testing
Tests designed to evaluate the machine/user interface. Are the communication device(s)
designed in a manner such that the information is displayed in an understandable fashion
enabling the operator to correctly interact with the system?

Volume Testing
Testing designed to challenge a system's ability to manage the maximum amount of data
over a period of time. This type of testing also evaluates a system's ability to handle
overload situations in an orderly fashion.

SEI
SEI = 'Software Engineering Institute' at Carnegie Mellon University; initiated by the
U.S. Defense Department to help improve software development processes.

CMM and CMMI


CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model
Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that
determine effectiveness in delivering quality software. It is geared to large organizations
such as large U.S. Defense Department contractors. However, many of the QA processes
involved are appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMMI ratings by undergoing assessments by qualified
auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes in place; successes
may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to oversee
software processes, and training programs are used to ensure understanding and
compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of
those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings
during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4%
at 5.) The median size of organizations was 100 software engineering/maintenance
personnel; 32% of organizations were U.S. federal contractors or agencies. For those
rated at Level 1, the most problematical key process area was in Software Quality
Assurance.

ISO
ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard
(which replaces the previous standard of 1994) concerns quality systems that are assessed
by outside auditors, and it applies to many kinds of production and manufacturing
organizations, not just software. It covers documentation, design, development,
production, testing, installation, servicing, and other processes. The full set of standards
consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-
2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 -
Quality Management Systems: Guidelines for Performance Improvements. To be ISO
9001 certified, a third-party auditor assesses an organization, and certification is typically
good for about 3 years, after which a complete reassessment is required. Note that ISO
certification does not necessarily indicate quality products - it indicates only that
documented processes are followed. Also see http://www.iso.ch/ for the latest
information. In the U.S. the standards can be purchased via the ASQ web site at
http://e-standards.asq.org/

IEEE
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI
Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008),
'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and
others.

ANSI
ANSI = 'American National Standards Institute', the primary industrial standards body in
the U.S.; publishes some software-related standards in conjunction with the IEEE and
ASQ (American Society for Quality).
Other software development/IT management process assessment methods besides CMMI
and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.