
CBSQ 4103




Software quality assurance (SQA) is a means of monitoring the software engineering
processes and methods used to ensure quality. There are many techniques by which this can be
accomplished, including ensuring conformance to one or more standards, such as ISO 9000, or
to a model such as CMMI. Below are the attachments of ISO/IEC 9126 and Capability Maturity
Model Integration (CMMI) documentation:

i. ISO/IEC 9126

ii. Capability Maturity Model Integration (CMMI)


In the early 1990s, there was an attempt in software engineering to consolidate many
aspects of quality into a single model that would act as a global standard for measuring the
quality of software. One of the main objectives of international standards is to establish
consistency and compatibility in a specific field. This standard, known in the literature as ISO 9126
(ISO 9126, 1991), was intended to help the client and the software manufacturer understand
their negotiation, and to recommend to what extent, and which, quality characteristics the
software must have. McCall's model was recommended as a basis for an international standard of
software quality. ISO 9126 defines product quality as a set of product characteristics
(Maryoly, 2003). One of the main differences between the ISO model and the models by McCall and
Boehm is that the ISO model is a strictly hierarchical model.
The first part, ISO/IEC 9126-1, was related to concepts, i.e. the preferred model of
software quality. The novelty was that the quality model was twofold: it consisted of a model
of internal/external quality and a model of quality in use. The quality model
divides the quality attributes into six characteristics: functionality, reliability, usability,
efficiency, maintainability and portability. These characteristics are further divided into 27
sub-characteristics that can be measured by internal or external metrics.
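The two-level hierarchy described above can be sketched as a simple data structure. The sub-characteristic names below follow common summaries of ISO/IEC 9126-1 and are illustrative rather than a verbatim extract of the standard:

```python
# Sketch of the ISO/IEC 9126-1 quality model: six characteristics,
# each refined into measurable sub-characteristics (names follow
# common summaries of the standard; treat them as illustrative).
ISO_9126_QUALITY_MODEL = {
    "functionality": ["suitability", "accuracy", "interoperability",
                      "security", "functionality compliance"],
    "reliability": ["maturity", "fault tolerance", "recoverability",
                    "reliability compliance"],
    "usability": ["understandability", "learnability", "operability",
                  "attractiveness", "usability compliance"],
    "efficiency": ["time behaviour", "resource utilisation",
                   "efficiency compliance"],
    "maintainability": ["analysability", "changeability", "stability",
                        "testability", "maintainability compliance"],
    "portability": ["adaptability", "installability", "co-existence",
                    "replaceability", "portability compliance"],
}

def count_subcharacteristics(model):
    """Total number of measurable sub-characteristics in the model."""
    return sum(len(subs) for subs in model.values())
```

Counting the leaves of this sketch gives the 27 sub-characteristics mentioned above.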

The models of internal and external quality had identical characteristics and
sub-characteristics. They differed only in the metrics by which they were quantified, which were
defined elsewhere in the standard. For each characteristic and sub-characteristic, the capability of
the software is determined by a set of internal/external attributes that can be measured. In the
context of the ISO/IEC 9126-1 standard, quality in use is how the complete system on which the
software runs is seen by the end user, and it is measured by the results of using the software. The
attributes of internal and external quality are the cause, and the attributes of quality in use are the
effect.

According to Bevan (Bevan, 1999): "Quality in use is the goal, and the quality of the
software product is the means by which this goal is achieved." Therefore, quality in use is the
combined effect of the characteristics of internal and external quality on the end user. It can be
measured by the extent to which specified users can achieve specified tasks with effectiveness,
productivity, safety and satisfaction (the four features of quality in use) in a specific context of use.
However, although there are three views on quality, we should not forget that these are only
different perspectives of the same thing, and that each of them has a relationship with the others.
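The idea of users achieving specified tasks with effectiveness and productivity can be illustrated with two toy measures. The formulas below are simplified sketches, not the exact metrics defined in ISO/IEC 9126-4:

```python
def task_effectiveness(tasks_completed, tasks_attempted):
    """Proportion of specified tasks that users complete correctly.
    A simplified illustration of an effectiveness measure, not the
    exact formula defined in ISO/IEC 9126-4."""
    if tasks_attempted == 0:
        return 0.0
    return tasks_completed / tasks_attempted

def productivity(tasks_completed, total_task_hours):
    """Useful output achieved per unit of user time (illustrative)."""
    return tasks_completed / total_task_hours
```

For example, completing 8 of 10 specified tasks in 4 hours gives an effectiveness of 0.8 and a productivity of 2.0 tasks per hour.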

By measuring and evaluating quality in use, external software quality can be confirmed.
Furthermore, measurement and evaluation of external quality can verify internal software
quality, and examination of internal quality can lead to conclusions about necessary
improvements to the software. Similarly, taking into account the attributes of internal
quality is a prerequisite for achieving the required external behaviour, and considering the
attributes of external quality is a prerequisite for achieving quality in use.
The second part, ISO/IEC TR 9126-2 (ISO/IEC 9126-2, 2003), defines external metrics for
measuring the characteristics of software quality. The third part, ISO/IEC TR 9126-3 (ISO/IEC
9126-3, 2003), provides internal metrics to quantify the characteristics of software quality. Finally,
the fourth part, ISO/IEC TR 9126-4 (ISO/IEC 9126-4, 2004), contains a basic set of metrics
for each quality-in-use characteristic, instructions for their application, and examples of how
they are used in the life cycle of the software product.



The ISO/IEC 9126 standards provide a comprehensive specification and evaluation model for
software quality. This software engineering standard for product quality consists of four
parts. Quality model (QM): provides a comprehensive specification and evaluation model
for software product quality. External metrics: describe the external metrics used to measure the
characteristics and sub-characteristics identified in the QM through the behaviour of the system
of which the software is a part; the metrics applied during the testing and operational stages fall
in this category. Internal metrics: describe the internal metrics used to measure the characteristics
and sub-characteristics identified in the QM; these metrics apply mainly to non-executable
software and give users a means to measure the quality of intermediate deliverables. Metrics for
requirements, design and source code fall in this category.

Quality in use metrics: identify the metrics used to measure the effects of the combined
quality characteristics for the user. More specifically, these metrics concern the quality of the
software as experienced by its users, in particular their satisfaction. The metrics for effectiveness,
productivity, safety and satisfaction in the real environment fall in this category.

Further, the ISO 9126 standards provide a framework for quality definition. This
framework is organized into quality characteristics and sub-characteristics, with
functionality, reliability, usability, efficiency, maintainability and portability as the top-level
quality characteristics.
CMMI is not a software process model. It is a framework used to analyse the
approach and techniques an organization follows to develop a software product. It also
provides guidelines to further enhance the maturity of the organization's development processes.

The CMMI is designed for several disciplines/bodies of knowledge, such as systems
engineering, software engineering, integrated product and process development, and supplier
sourcing. Since the release of its first version in 2002, CMMI's goal has been to improve the
quality of software companies' processes, and it maintained this focus with the release of the
most recent version, 1.3, in November 2010.

Briefly, CMMI provides a five-level model, each level describing a degree of maturity of an
organization. This model enables software development companies to improve their
processes by progressing through the maturity levels defined in the CMMI. These levels are
named initial, managed, defined, quantitatively managed and optimizing; they range from
informal, ad hoc processes to low-risk, continuous process improvement along with
organizational innovation in software development.
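The five levels named above can be captured as a simple mapping; the comments paraphrase the usual level descriptions:

```python
# The five CMMI maturity levels named in the text, keyed by level number.
CMMI_MATURITY_LEVELS = {
    1: "Initial",                 # informal, ad hoc processes
    2: "Managed",                 # processes planned and tracked per project
    3: "Defined",                 # organization-wide standard processes
    4: "Quantitatively Managed",  # processes measured and statistically controlled
    5: "Optimizing",              # continuous process improvement and innovation
}
```

An organization moves up the scale one level at a time; level 1 requires no effort, while level 5 presumes all the practices of the levels below it.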



The purpose of IT project identification is to develop a preliminary proposal for the most
appropriate set of interventions and course of action, within a specific time frame, budget and
setting. Investment ideas can arise from many sources and contexts. They can originate from a
programme or strategy, as follow-up to an existing project, or from priorities identified in a
multi-stakeholder sector or local development dialogue.

Identification involves: a review of alternative approaches or options for addressing a set
of development problems and opportunities; the definition of project objectives and scope of
work at the degree of detail necessary to justify committing the resources for detailed
formulation and the respective preparatory studies; and the identification of the major issues that
must be tackled and the questions to be addressed before a project based on the concept can be
approved.
Sufficient information on project options must be gathered to enable the IT companies to
select a priority project and reach agreement among stakeholders on arrangements for
preparation work. The results of identification work should be summarized in a report, project
brief or concept document, the format of which will depend upon the company's requirements.

Good identification is critical to project success. If there is insufficient focus on expected
results, or if the potential of the most viable concepts has been overlooked at the identification
phase, there is little prospect that they will be retrieved at a later stage, when the emphasis shifts
from examining options to elaborating the details of a specific proposal. It can be costly and
difficult to abort or radically revise a concept once preparation is underway. Pressure to proceed
rapidly with project formulation can lead to settling quickly on a specific concept before
sufficient evidence has been assembled to confirm its validity.

Many design-related problems encountered during implementation are the result of poor
diagnosis of constraints, overly ambitious targets, time schedules and productivity projections.
Consultation and involvement of stakeholders at the identification stage are essential to ensure
appropriate concept selection and to increase the prospects for successful implementation and,
ultimately, sustainability. Concept consideration needs to focus clearly on expected results, and
these should determine the appropriate interventions. In practice, concepts often start with ideas
for interventions, and it is important to rapidly refocus the view on expected results.

Meanwhile, the IEEE Standard for Software Test Documentation describes a basic set of
software test documents. It does not specify any particular techniques for testing, nor does it
require that all the documentation within the standard be used. What it does do is: a) provide
a frame of reference for the various stakeholders in an IT project; b) serve as a checklist of the
various kinds of topics that should be considered during a software test project; and c) make the
whole test process more manageable. Below is an attached file of the IEEE Standard for Software
Test Documentation.

A test plan is a detailed document that outlines the test strategy, testing objectives, the
resources (manpower, software, hardware) required for testing, the test schedule, test estimation
and test deliverables. The test plan serves as a blueprint for conducting software testing activities
as a defined process that is minutely monitored and controlled by the test manager. A test plan
helps us determine the effort needed to validate the quality of the application under test. It also
helps people outside the test team, such as developers, business managers and customers,
understand the details of testing. A test plan guides our thinking; it is like a rule book that needs
to be followed.

Important aspects like test estimation, test scope and test strategy are documented in the test
plan, so it can be reviewed by the management team and re-used for other projects. There are
three basic sections that should always be included in a test plan: test coverage, test methods,
and test responsibilities. Test coverage defines what you will be testing and what you will not.
Test methods define how you will be testing each part defined in the coverage section. Test
responsibilities assign tasks and responsibilities to different parties; this section should also
include what data each party will record and how it will be stored and reported.
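As a rough sketch, the three basic sections could be captured in a small structure like the following; all field names, features and owners are hypothetical examples:

```python
# A rough sketch of the three basic test plan sections described above.
# All field names, features and owners are hypothetical examples.
test_plan = {
    "coverage": {
        "in_scope": ["login", "checkout", "search"],
        "out_of_scope": ["legacy reports"],
    },
    "methods": {
        "login": "automated UI tests",
        "checkout": "manual exploratory testing",
    },
    "responsibilities": {
        "login": {"owner": "QA team", "records": "test run logs"},
        "checkout": {"owner": "business analysts", "records": "session notes"},
    },
}

def untested_items(plan):
    """In-scope items that have no test method assigned yet."""
    return [item for item in plan["coverage"]["in_scope"]
            if item not in plan["methods"]]
```

In this example, `untested_items(test_plan)` would flag `search`, since it is in scope but has no method defined, which is exactly the kind of gap a coverage review should catch.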

The Institute of Electrical and Electronics Engineers (IEEE) publishes international standards
for testing and documenting software and system development. To hold our test plan to the
highest standard, consult the IEEE publications below:

 29119-1-2013, Software and Systems Engineering - Software Testing - Part 1: Concepts

and Definitions
 29119-2-2013, Software and Systems Engineering - Software Testing - Part 2: Test
 29119-3-2013, Software and Systems Engineering - Software Testing - Part 3: Test
 829-2008, IEEE Standard for Software and System Test Documentation
 1008-1987 - IEEE Standard for Software Unit Testing

According to the IEEE 829 test plan standard, the following sections go into creating a test plan:

1. Test plan identifier

As the name suggests, the 'Test Plan Identifier' uniquely identifies the test plan. It
identifies the project and may include version information. In some cases,
companies might follow a convention for the test plan identifier. The test plan identifier
also contains information about the test plan type. There can be the following types of
test plans:
 Master Test Plan: A single high level plan for a project or product that
combines all other test plans.
 Testing Level Specific Test Plans: A test plan can be created for each level of
testing i.e. unit level, integration level, system level and acceptance level.
 Testing Type Specific Test Plans: Plans for major types of testing like
Performance Testing Plan and Security Testing Plan.

2. Introduction

The introduction contains a summary of the test plan. It sets out the scope, goals
and objectives of the test plan, along with resource and budget constraints. It will
also specify any other constraints and limitations of the test plan.
3. Test items

Test items list the artifacts that will be tested. They can be one or more modules of the
project/product, along with their versions.

4. Features to be tested

In this section, all the features and functionalities to be tested are listed in detail. It
shall also contain references to the requirements specification documents that
contain the details of the features to be tested.

5. Features not to be tested

This section specifies the features and functionalities that are out of the scope for
testing. It shall contain the reasons why these features will not be tested.

6. Approach

In this section, the approach to testing is defined. It contains details of how
testing will be performed, and information on the sources of test data, inputs
and outputs, testing techniques and priorities. The approach defines the
guidelines for analysing requirements, developing scenarios, deriving acceptance
criteria, and constructing and executing test cases.

7. Item pass/fail criteria

This section describes the success criteria for evaluating the test results. It describes
the success criteria in detail for each functionality to be tested.
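A minimal sketch of such an evaluation, assuming criteria are recorded as named boolean results (all criterion names below are hypothetical examples, not prescribed by IEEE 829):

```python
def item_passes(results, criteria):
    """Pass/fail for one test item: every criterion named in the test
    plan must be satisfied in the recorded results.
    Criterion names are hypothetical examples."""
    return all(results.get(criterion, False) for criterion in criteria)

# Example criteria for a hypothetical checkout feature.
checkout_criteria = ["all_test_cases_pass", "no_open_critical_defects"]
recorded = {"all_test_cases_pass": True, "no_open_critical_defects": True}
```

Here `item_passes(recorded, checkout_criteria)` returns `True`; if any listed criterion is missing or `False`, the item fails.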

8. Suspension criteria and resumption requirements

This section describes any criteria that may result in suspending the testing activities,
and the requirements that must be met to resume the testing process.
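One possible sketch of a suspension check, assuming a defect-count threshold and environment availability as the agreed criteria (both criteria and the threshold are illustrative, not mandated by the standard):

```python
def should_suspend(open_critical_defects, environment_available,
                   max_critical_defects=0):
    """Suspend testing when open critical defects exceed the agreed
    threshold or the test environment is unavailable; testing resumes
    once both conditions clear. Criteria and threshold are illustrative."""
    return (open_critical_defects > max_critical_defects
            or not environment_available)
```

For example, one open critical defect, or a down environment, would suspend testing until the resumption requirements are met.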
9. Test deliverables

Test deliverables are the documents that will be delivered by the testing team at the
end of the testing process. These may include test cases, sample data, test reports and the issue
log.

10. Testing tasks

In this section, testing tasks are defined. It will also describe the dependencies
between tasks, the resources required and the estimated completion time for each task.
Testing tasks may include creating test scenarios, creating test cases, creating test
scripts, executing test cases, reporting bugs and creating the issue log.
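The dependencies between testing tasks can be ordered with a topological sort, so that no task is scheduled before its prerequisites; the task names and dependency edges below are illustrative:

```python
# Ordering testing tasks by their dependencies with a topological sort.
# Task names and dependency edges are illustrative examples.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each task maps to the set of tasks that must finish before it starts.
dependencies = {
    "create test scenarios": set(),
    "create test cases": {"create test scenarios"},
    "create test scripts": {"create test cases"},
    "execute test cases": {"create test scripts"},
    "report bugs": {"execute test cases"},
}

# static_order() yields each task only after all of its prerequisites.
order = list(TopologicalSorter(dependencies).static_order())
```

With this chain of dependencies, `order` runs from creating test scenarios through to reporting bugs, matching the sequence of tasks described above.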

11. Environmental needs

This section describes the requirements for the test environment. It includes hardware,
software or any other environmental requirements for testing. The plan should
identify what test equipment is already present and what needs to be procured.

12. Responsibilities

In this section of the test plan, roles and responsibilities are assigned to the testing team members.

13. Staffing and training needs

This section describes the training needs of the staff for carrying out the planned
testing activities successfully.

14. Schedule

The schedule is created by assigning dates to testing activities. This schedule shall
be in agreement with the development schedule to make a realistic test plan.
15. Risks and contingencies

It is very important to identify the risks and their likelihood and impact. The test plan
shall also contain mitigation techniques for the identified risks. Contingencies shall
also be included in the test plan.

16. Approvals

This section contains the signature of approval from stakeholders.
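The sixteen sections above can double as a checklist for reviewing a draft plan; a minimal sketch:

```python
# The sixteen IEEE 829 test plan sections enumerated above, in order.
IEEE_829_SECTIONS = [
    "Test plan identifier", "Introduction", "Test items",
    "Features to be tested", "Features not to be tested", "Approach",
    "Item pass/fail criteria",
    "Suspension criteria and resumption requirements",
    "Test deliverables", "Testing tasks", "Environmental needs",
    "Responsibilities", "Staffing and training needs", "Schedule",
    "Risks and contingencies", "Approvals",
]

def missing_sections(draft_sections):
    """Checklist sections absent from a draft test plan."""
    present = set(draft_sections)
    return [s for s in IEEE_829_SECTIONS if s not in present]
```

A draft containing only an introduction and an approach, for instance, would be flagged as missing the pass/fail criteria, deliverables, approvals and the rest.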


Azuma, M. 2001. SQuaRE: The next generation of the ISO/IEC 9126 and 14598 international
standards series on software product quality.

Bevan, N. 1999. Common Industry Format usability tests. Serco Usability Services, 4 Sandy Lane,
Teddington, Middx, TW11 0DU, UK.

Boehm, B.W., Brown, J.R., & Lipow, M. 1976. Quantitative Evaluation of Software Quality. In:
Proceedings of the 2nd International Conference on Software Engineering, pp. 592-605.

Boehm, B.W., et al. 1978. Characteristics of Software Quality. TRW Series on Software
Technologies, Vol. 1. North Holland.

IEEE, 1996. IEEE/EIA 12207: Information technology - software life cycle processes. The
Institute of Electrical and Electronics Engineers, New York, USA.

ISO, 1991. ISO/IEC IS 9126 - software product evaluation - quality characteristics and
guidelines for their use. International Organization for Standardization, Geneva, Switzerland.