
UNIT I

Introduction to Software Engineering


Origins of Software Engineering

The term software engineering was first used around 1960, as researchers, managers, and practitioners tried to improve software development practice.

The NATO Science Committee sponsored two conferences on software engineering, in 1968 (Garmisch, Germany) and 1969 (Rome, Italy), which gave the field its initial boost. Many consider these conferences to be the start of the field, and software engineering has never looked back since.

Software engineering is concerned with the

 Theories,
 Methods, and
 Tools

which are needed to develop high-quality, complex software in a cost-effective way on a predictable schedule.

Software crisis

Software engineering arose out of the so-called software crisis of the 1960s, 1970s, and 1980s, when many software projects had bad endings. Many software projects ran over budget and schedule, some caused property damage, and a few caused loss of life. As software becomes more pervasive, we all recognize the need for better software. The software crisis was originally defined in terms of productivity, but evolved to emphasize quality.

Problem and the cause

According to a study by the Standish Group in 2000, only 28 percent of software projects could be classed as complete successes (meaning they were executed on time and on budget), while 23 percent failed outright (meaning that they were abandoned).
Software still comes in late, exceeds its budget and is full of residual faults. According to IBM reports, "31% of the projects get cancelled before they are completed, 53% overrun their cost estimates, and for every 100 projects there are 94 restarts." This is the essence of the software crisis.

Due to the complexity of software and the lack of proper methodology, many projects of the 1960s, 1970s and 1980s failed to deliver. It was at the NATO conferences mentioned above that the term software engineering was coined. It is always a better option to adopt software engineering principles to get a quality software product in due time and within our cost constraints.

For several decades, numerous approaches have emphasized the importance of managing the so-called software crisis, which generally refers to the problem of developing cost-effective software products.

To overcome the software crisis, many different techniques have been proposed, such as programming
languages, software engineering methods, frameworks, patterns, and architectures. Each technique
addresses the software crisis problem from a certain perspective.

Introduction to Software Engineering

Software Engineering is a systematic approach to software development, operation, maintenance and its retirement. It is the combination of models of software science, management science and mathematics to develop a quality software product.

By software product, we mean complete software (programs, operating procedures and documentation) delivered to the customer within scheduled time and estimated cost.

According to Barry Boehm [Boe81], Software Engineering is the application of science and mathematics by which the capabilities of computer equipment are made useful to man via computer programs, procedures and associated documentation.

At the first NATO conference on software engineering in 1968, Fritz Bauer defined software engineering as "The establishment and use of sound engineering principles in order to obtain economically developed software that is reliable and works efficiently on real machines".

Stephen Schach defined it as "A discipline whose aim is the production of quality software, software that is delivered on time, within budget and that satisfies our requirements".

Software engineering (SE) is the profession concerned with creating and maintaining software
applications by applying computer science, project management, domain knowledge, and other
skills and technologies (From Wikipedia, the free encyclopedia).

Software applications (including ATMs, compilers, databases, email, embedded systems, graphics, office suites, operating systems, robotics, video games, and the World Wide Web) embody social and economic value, by making people more productive, improving their quality of life, and enabling them to do things that would otherwise be impossible. SE technologies and practices (including databases, languages, libraries, patterns, platforms, processes, standards, and tools) help developers by improving productivity and quality.

Software myths

There are a number of myths associated with software development and its process. Some of them affect the way in which software development should take place, and they show the difference of opinion between developers and management. As a result, developers are often the ultimate sufferers. Management will often have the following myths in mind:

1. Software is easy to change: It is easy to edit a program and change it, but that may result in the introduction of errors, which may upset all our calculations. Management hardly considers the problems associated with software changes and still maintains that software changes are easy to perform.

2. Testing or "proving" software correct can remove all the errors: Testing can show the presence of errors in our programs, but it cannot guarantee their absence. Our aim is to develop effective test cases to find the maximum possible number of errors. Management or the people concerned often believe that after testing no more problems can surface. This is a fallacy, as errors can creep into the system at a later stage.
3. Reusing software increases safety: Re-use of software may pose integration, logical or syntax problems in new software even if the modules were working correctly in the old software. Code re-use may be a powerful tool, but it requires analysis to determine its suitability and testing to determine whether it works. Management will say, "How come the system is not performing when it was working fine the previous time?" Reusability can introduce its own set of problems.

4. Software can work right the first time: Software developers are sometimes asked to build the system without making a prototype, but a successful and efficient system usually takes more than one attempt.

5. Software can be designed thoroughly enough to avoid most integration problems: There is an old saying among software developers: "Too bad, there is no compiler for specifications." This points out the fundamental difficulty with detailed specifications. They always have inconsistencies, and there is no computer tool to perform a consistency check, therefore special care must be taken to understand the specifications, and if there is an ambiguity, it should be resolved before proceeding to design.

6. Software with more features is better software: Don’t try too hard with the software and
incorporate too much. The best programs are those that do one thing very well.

7. Addition of more software engineers will make up the delay: “Too many cooks spoil the
dish” applies to software also. This may further delay the software.

8. The aim is to develop working programs (rather than good quality maintainable programs): The aim has shifted to developing good quality maintainable programs. Maintaining software has become a crucial area of software engineering.

Process of Software System Development

In this part, we are going to study the process of system development, which is a long and time-consuming procedure and requires thorough planning and a systematic approach to problem solving. The SDLC (Software Development Life Cycle) is an important step in this direction. This life cycle approach covers steps like System study (problem definition), Feasibility study, Analysis, Design (coding), Testing, Implementation and Maintenance.
After studying this part of the chapter, we should be able to

1) Study the problem properly so that all user requirements can be acquired
2) Conduct a feasibility study on the basis of requirements and constraints
3) Develop logical design on paper which is referred to as Design
4) Do physical designing (coding) on the basis of Logical design which is referred to as
Development
5) Implement the system on user’s site
6) Maintain the system considering corrective, adaptive and perfective issues.

Software Development Life Cycle

Everybody on this earth follows a life cycle, be it plants, human beings or animals. From the time of their birth they pass through certain stages, from adolescence to maturity and old age. The same is the case with systems, or rather information systems. In today's business world information systems are becoming an integral part of organizations, so they should be carefully planned and developed. For this purpose information systems should follow a standard procedure, which is termed the Systems Development Life Cycle. The life cycle of a system has seven steps, and they are performed one after the other.

When an organization needs some change in the present system for the sake of better performance, the idea of developing a computerized system is conceived in the minds of the management of the organization. This abstract idea passes through various stages, called the phases of the SDLC, to take a physical form that will fulfill the requirements of the users of the system and ultimately benefit the organization through better performance.

Definition of Software Development Life Cycle (IEEE Standard Glossary of Software Engineering Terminology, 1983):

The period of time that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, logical design phase, physical design phase, test phase, implementation (installation) and checkout phase, operation and maintenance phase, and sometimes, a retirement phase.

A software life-cycle model is either a descriptive or prescriptive characterization of software evolution.

A prescriptive life-cycle model describes how software systems should be developed. Most of
these models are intuitive and many software development details are ignored, glossed over, or
generalized. This raises concern for the relative validity and robustness of such models.

Descriptive life-cycle models characterize how software systems are actually developed. One
must observe or collect data throughout the development of a software system. These models
are much less common than prescriptive models. Descriptive models are specific to the system
observed, and only generalizable through systematic analysis.

How can software life-cycle models be used?


1. As a means to organize, plan, staff, budget, schedule and manage software projects,
2. As prescriptive outlines for what documents to produce,
3. As a basis for determining what software engineering tools and methodologies will be most
appropriate,
4. As frameworks for analyzing or estimating patterns of resource allocation and consumption
during the software life-cycle,
5. As comparative descriptive or prescriptive accounts of how software systems come to be the way they are, and
6. As a basis for conducting empirical studies to determine what affects software productivity,
cost, and overall quality.

Let's have a quick glance at the phases/steps of the SDLC and what is done in each phase:

System Study (Problem definition): The working of present system (manual or computerized) is
studied to gain in-depth knowledge of the system and also of the problems inherent in the
present system due to which management is seeking change.

Examples of computerized information systems are: Accounting information system, Personnel information system, Payroll system, Library information system, Student information system, etc. The working of these systems may vary from organization to organization.
Feasibility Study: This phase is also known as the feasibility analysis phase. In this phase the requirements of the proposed system are compared with the constraints, like cost, time, technical resources, physical resources, manpower, etc., to get an idea of whether all the requirements can be fulfilled within those constraints or not.

Before beginning a project, a short, low-cost study is carried out to identify the clients, scope, potential benefits, resources needed (staff, time, equipment, etc.), potential obstacles, and so on. Further, we may enquire where the risks are and how they can be minimized.

A feasibility study leads to a decision:

 either go ahead with the project,
 do not go ahead, or
 think again.

Several types of feasibility are checked:

a) Technical feasibility
b) Economic feasibility
c) Behavioral feasibility
d) Operational feasibility

In Technical feasibility, it is checked whether the proposed system is technically feasible (possible) or not. It is very easy to say that the system should be developed in Visual Basic with Oracle at the back end. The analyst will have to find out whether we are technically sound (do we have the technical manpower to handle the situation?) and whether we have the required hardware configuration to run Visual Basic and Oracle. For example, if we only have an old 16-bit machine then it is not possible to run Visual Basic and Oracle, as they require a 32-bit environment.

In Economic feasibility, we check whether we are economically sound enough to go for a new system. A cost-benefit analysis is performed to see whether we can expect benefits from the system after spending such a large amount of money. If the system does not appear to be profitable then it is not feasible to move ahead.
In Behavioral feasibility, we check whether the system is behaviorally feasible. People are often resistant to change, and when they hear of computerization they fear that their jobs will go and be replaced by computer professionals, so they may go on strike, putting the software proposal in jeopardy. It is therefore always better to look into this kind of feasibility before starting the actual system.

In Operational feasibility, we check to see if the system is operationally feasible i.e. is it possible
to make an operational system.

System Analysis: After the feasibility study is conducted and the system analyst gets a go-ahead
from the organization, the logical designing of the system is done on paper in the form of Data
Flow Diagrams, Systems flow charts etc.

System Design/Development: It is the conversion of the logical design to the physical design in the form of computer programs, databases, files, etc. Coding is performed for input, output, process and menu forms. If the system has been properly analyzed then there is very little chance of system failure arising from the design phase.

System Testing: The system is tested for errors and to see whether it fulfils the requirements of the users properly or not. There are various kinds of testing, like unit testing, system testing, string testing, white box testing, black box testing, etc.

System implementation: After the system is developed and tested, the users are trained and the users' manual is prepared to help the users use the proposed system properly and efficiently. After this the proposed system takes the place of the present working system, i.e. it is installed at the users' site. This comes under the implementation phase. There are various kinds of implementation procedures, like direct conversion (the new system is directly implemented), phased conversion (implementation of the software in a phased manner, i.e. modules are implemented separately) and parallel conversion (where the new system is run in parallel with the old system).

System maintenance: After the user has used the system for a reasonable time, he evaluates the performance of the system, and if there is any need for improvement then the system is modified. There are three kinds of maintenance: corrective, adaptive and perfective; they will be dealt with in detail in the chapter on maintenance.

Important Note: The terms used in the SDLC, like analysis, design, development and implementation, vary from literature to literature. The fact is that the steps of the SDLC are so closely tied together that it is always difficult to separate them. Some authors define analysis as requirement analysis, design as logical design (drawing on paper) and implementation as coding of the system, whereas I've taken analysis as the logical design, design as the physical design (i.e. coding of the system) and implementation as loading the newly made software product at the user's site.

Software Engineering models


Over the years, a number of different models have been developed, beginning with the oldest and simplest, the Waterfall Model. However, as software has become larger and more complex, this method of development has been found to be counterproductive, especially when large teams are involved. Iterative models have since evolved, including Prototyping, Evolutionary Prototyping, Incremental Development and the Spiral model.

There are basically four models in software engineering, and they more or less make use of the SDLC steps. The difference lies in their applicability: the waterfall model is used for smaller projects, while the spiral model is used for bigger projects.

1. Waterfall model: This is the simplest SDLC model and is mostly used for small projects (in software engineering, there are many kinds of projects). The waterfall model derives its name from the cascading effect from one phase to the next, as illustrated in the figure. In this model each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase.
Note that this model is sometimes referred to as the linear sequential model or the software life cycle.

The steps of waterfall model are the following:

1. Problem definition
2. Feasibility study

3. Analysis (Design on paper)

4. Design (Development i.e. coding on the system)

5. Testing

6. Implementation (Loading the software)

7. Maintenance

These steps are performed one after the other, cascading like water falling from the sky. Though the waterfall model is not used for big projects, some of its steps (analysis, design, implementation and maintenance) are followed in the other models also.

Advantages:
The biggest advantage of waterfall model is its simplicity. Other advantages are
 Testing is inherent to every phase of the waterfall model
 It is an enforced disciplined approach
 It is documentation driven, that is, documentation is produced at every stage

Limitations:

1. All the requirements of the system have to be specified in advance before proceeding with
the actual design of the system.
2. There is very little interaction of the user with the Systems/ software engineer when it
comes to the ultimate development of the system
3. Problems are not discovered until system testing.
4. Requirements must be fixed before the system is designed - requirements evolution makes
the development method unstable.
5. Design and code work often turn up requirements inconsistencies, missing system
components, and unexpected development needs.
6. System performance cannot be tested until the system is almost fully coded; under-capacity may be difficult to correct.

The standard waterfall model is associated with the failure or cancellation of a number of large systems. It can also be very expensive. As a result, the software development community has experimented with a number of alternative approaches.

2. Prototype model: In this type of model, we develop a prototype (dummy software) of the system and show it to the user for feedback about the system. The software prototype is analogous to the scale model built by an architect when designing a big complex. The prototype (in the form of a menu with some forms attached, along with a few tables) is shown to the user and his views are taken to improve the design of the system.

A prototype is a toy implementation of a system with

-Limited functional capabilities,

-Low reliability,

-Inefficient performance.

The reasons for Rapid (Evolutionary) prototyping are:

 To increase effective communication.
 To decrease development time.
 To decrease costly mistakes.
 To minimize sustaining engineering changes.
 To extend product lifetime by adding necessary features and eliminating redundant features early in the design.

Rapid prototyping decreases development time by allowing corrections to a product to be made early in the process. By giving engineering, manufacturing, marketing, and purchasing a look at the product early in the design process, mistakes can be corrected and changes can be made while they are still inexpensive.
The biggest advantage of the prototype model is that it reduces the communication gap between the developer and the client, so that systems are built in a better way and are successful. Prototyping is catching on. It is well suited for projects where requirements are hard to determine and the confidence in the obtained requirements is low. It doesn't require all requirements to be known in advance.

Even though construction of a working prototype involves additional cost, the overall development cost might be lower for systems with unclear user requirements and systems with unresolved technical issues. Many user requirements get properly defined and many technical issues get resolved early; otherwise they would have appeared later as change requests and resulted in massive redesign costs.

Limitation:

Since two systems are built instead of one, extra money is required to develop two systems.
Design and code for the prototype is usually thrown away.

3. Iterative enhancement model: Also called the Iterative Refinement model or Evolutionary Development, it counters another drawback of the waterfall model and combines the benefits of both prototyping and the waterfall model. The basic idea is that the software should be developed in increments, with each increment adding some functionality to the software till a complete system is implemented.

An advantage of this approach is that it can result in better testing, because testing each increment is likely to be easier than testing the entire system. Further, increments provide feedback to the client that is useful for determining the final requirements of the system.

The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what was learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and the use of the system, where possible. The key steps in the process are to start with a simple implementation of a subset of the software requirements and to iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made along with the addition of new functional capabilities.
Concept: an initial implementation is produced for user comment, followed by refinement until the system is complete.

[Figure: the iterative cycle of Requirements, Design, Implementation and Evaluation.]

The biggest advantage of iterative enhancement is that the prototype is not thrown away but is further refined according to the client's needs. It is an evolutionary model that uses the prototype as the base and iterates till the complete system is ready.

4. Spiral model: Proposed by Boehm, this model deals with uncertainty (the risk factor), which is inherent in software projects. Risk analysis is performed along with every step, as can be seen from the figure. Barry Boehm considered risk factors and tried to incorporate project risk into the life cycle model. This model was proposed in 1986.
[Figure: Boehm's spiral model - each loop of the spiral includes risk analysis, rapid prototyping, design and implementation, and a verification (review) step.]

This model is well suited to accommodate any mixture of a specification-oriented, prototype-oriented, simulation-oriented or some other type of approach, as it covers the risk factor. For a high-risk project this might be the preferred model.

Another important feature of this model is that each cycle of the spiral is completed by a review (under the verify category in the figure above) that covers all the products developed during that cycle.
Phases of the spiral model:

1. Determine objectives, alternatives and constraints: Specific objectives for this phase of the project are identified.

 Objectives
o Procure software component catalogue
 Constraints
o Within a year
o Must support existing component types
o Total cost less than $100,000
 Alternatives
o Buy existing information retrieval software
o Buy database and develop catalogue using database
o Develop special purpose catalogue
 Risks
 Risk resolution
 Results
 Plans
 Commitment
2. Evaluate alternatives; identify, resolve risks: Key risks are identified, analysed and
information is sought to reduce these risks

 Objectives
 Constraints
 Alternatives
o Buy existing information retrieval software
o Buy database and develop catalogue using database
o Develop special purpose catalogue
 Risks
o May be impossible to procure within constraints
o Catalogue functionality may be inappropriate
 Risk resolution
o Develop prototype catalogue (using existing 4GL and an existing DBMS) to
clarify requirements
o Commission a consultants' report on existing information retrieval system capabilities.
o Relax time constraint
 Results
 Plans
 Commitment

3. Develop, verify next-level product: An appropriate model is chosen for the next
phase of development.

 Objectives
 Constraints
 Alternatives
o Buy existing information retrieval software
o Buy database and develop catalogue using database
o Develop special purpose catalogue
 Risks
 Risk resolution
 Results
o Information retrieval systems are inflexible. Identified requirements cannot be
met.
o Prototype using DBMS may be enhanced to complete system
o Special purpose catalogue development is not cost-effective
 Plans
 Commitment

4. Planning: Plan next phase of the spiral model

 Objectives
 Constraints
 Alternatives
o Buy existing information retrieval software
o Buy database and develop catalogue using database
o Develop special purpose catalogue
 Risks
 Risk resolution
 Results
 Plans
o Develop catalogue using existing DBMS by enhancing prototype and improving
user interface
 Commitment
o Fund further 12 month development

We have studied four models along with their advantages and disadvantages. The spiral model is the best among these if risk is involved in the project; otherwise iterative enhancement is the best choice. The next topic deals with Boehm's risk factors (the risk factors involved in a software project).

Boehm’s identified Software Risks

Barry Boehm has identified the following risk factors that play an important role during the project duration. These risks are not necessarily associated with every project, but most projects face more or less the same risk factors.

1. Personnel shortfalls: Manpower shortage, for whatever reason, is the biggest risk involved with a project. Professionals can leave the job midway or be fired, and there can be many other reasons for shortfalls.
2. Unrealistic schedules and budgets: Schedules and budgets that have not been properly made. There can be many reasons for this; for example, the budget was prepared just to win a tender, or the schedule was wrongly put across.
3. Developing the wrong software functions: Anybody can commit an error ("To err is human"), so it is always possible that a software function has been incorrectly developed.
4. Developing the wrong user interface: The developer may also develop a wrong user interface (input/output screens), and this is always a risk.
5. "Gold plating": Developers may keep polishing the product and adding features that go beyond what the requirements actually call for, dressing the product up from the outside without adding real value.
6. Continuing stream of requirements changes: This again is a big problem encountered by software professionals. Requirements keep changing and are always a hindrance to the work of developers. If requirements change midway through a project, it is a big loss in terms of money and time.
7. Real-time performance shortfalls: When the software is ready, it is not certain whether it will perform 100%; there can be real-time performance bottlenecks.

Comparison of Waterfall, Prototype, Iterative enhancement and Spiral model

Waterfall
- Strengths: Disciplined approach; document driven.
- Weaknesses: Delivered product may not meet the client's needs; does not scale down (it is difficult to go back up a phase).

Rapid Prototyping
- Strengths: Ensures that the delivered product meets the client's needs.
- Weaknesses: A need to build twice; cannot always be used.

Spiral
- Strengths: Incorporates features of all the above models; adds risk assessment.
- Weaknesses: Can be used only for large-scale products; developers have to be competent at risk analysis.

Incremental
- Strengths: Maximizes early return on investment; lower risk of project failure.
- Weaknesses: Requires open architecture; may degenerate into build-and-fix.

Software Quality factors


When a client needs a software system, he has some quality attributes in mind. These quality attributes have to be considered while designing the system. What will happen if management asks the software engineer to port (load) the new system onto another hardware configuration and the developer had not kept the portability factor in mind while developing the software system? The result would be some unnecessary modifications to the system. Software developers are therefore advised to keep the following factors in mind to get a good, efficient and universal software system. A quality software product has the following attributes:

Correctness – The software should behave according to specifications specified by the user

Reliability – It is a statistical property and states the probability that the software system will not fail during a specified period of time under certain constraints (often measured as the mean time between failures).

Robustness – It states that software should behave “reasonably” in unanticipated circumstances.

Performance – It states that the software should use resources economically (in an efficient
manner)

Usability – It states that the software should be easy to use and we should keep in mind the
intended audience who may not be computer literate or expert for that matter.

Maintainability – It states that software should be conducive to corrective, adaptive, and perfective maintenance

– Corrective – removal of bugs
– Adaptive – adjustment to a changing environment
– Perfective – changes to improve qualities
Evolvability – It states that it should be easy to add functionality or modify existing functions (which
area of maintainability does this address?)

Reusability – It states how easily can we use it for another application (minor modifications allowed)
or use some part ourselves from another software.

Portability – It states how easily it can run on different environments and is one of the most desirable properties these days.

Verifiability – It states that satisfaction of desired properties should be easily determined


UNIT II

Requirement Engineering

Software requirement & requirement specification

What are Requirements?

The Institute of Electrical and Electronics Engineers (IEEE) defines a requirement as "(1) A condition or capability needed by a user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document."

These definitions are pretty good in that they help balance the fine line between requirements describing the basics of what needs to be done as opposed to the implementation. What these definitions do not necessarily address, however, is the idea that requirements are a multidimensional set of product end-state needs, and all such needs are vying for the same project resources. Any time you have things vying for finite resources you run into issues of risk and prioritization, and this applies to requirements as well. With risk and prioritization comes the understanding that deriving requirements is an engineering task and, as with any form of engineering, there should be a stable process in place.

Types of Requirements
There are business requirements and there are user requirements. The former are
generally a high-level set of requirements that pertain to the needs and wants of the
organization such that they can adequately construct, maintain, and support the product(s).
Business requirements can also cover aspects of the business-customer relationship. The
user requirements pertain to the tasks and goals that the user should be able to accomplish
with the product(s). These tasks and goals then feed directly into functional requirements,
which cover what types of functionality must be put into the product in order to allow the
users to accomplish those tasks and goals. In other words, functional requirements cover
the external behavior of the product(s).

Note that functional requirements often cover the type of functionality and not the exact
implementation of the functionality, which is more of a design and implementation issue.
Another way to look at it is that functional requirements define what will be constructed,
but not necessarily how it will be constructed. Relating to this, according to Karl Wiegers, a
feature is "a set of logically related functional requirements that provides a capability to
the user and enables the satisfaction of a business requirement." Wiegers also states that one of the key things to understand is that user requirements must align with the business requirements. Not only that, but all functional requirements should be traceable to user requirements.

In many organizations, functional requirements are defined and documented in a Software Requirements Specification (SRS). The business requirements are often defined and documented in what is known as a Vision and Scope document.

So to take a brief look at some common conceptions of requirement types, consider:

1. System Requirements: Such a set of requirements will usually refer to the requirements that describe the capabilities of the system with which, through which, and on which a certain product (or group of products) will function.

2. User Requirements: User requirements will describe the needs, goals, and
tasks of the user (or of various users) as they are thought to be by the
business. Usually this will be based on industry statistics for such things or
focus groups that were conducted.

3. Functional Requirements: Functional requirements describe what the product is supposed to do by defining specific functional areas and high-level logic as to how users move between those functional areas.

4. Functional Specifications: Functional specifications describe the necessary functions at the level of particular units and components within given functional areas of the product.
5. Design specifications : The design specification will address the overall
"look and feel" of the interface, with rules for the display of global elements,
like screen names, and particular elements such as icons, buttons, hyperlinks,
screen layouts, etc.

6. Technical Specifications: Technical specifications are typically written by developers or engineers and will describe exactly how they will build the functionality.

Gathering requirements
Gathering data/information/requirements from the user is an uphill task in the life cycle model. There are plenty of reasons why gathering requirements is always difficult, and it becomes even more challenging as the user's requirements keep changing again and again.

After gathering the user requirements, they are analysed and then put together in a document called the "Software Requirement Specification" document. This document also serves as a contract between the developer and the client.

Techniques for Gathering requirements from users


Asking the potential clients about their requirements and needs is the basic approach; the following are time-proven techniques.

- Interviewing the users/clients: Both structured and unstructured interviews may be planned. In the case of a structured interview, a list of specific closed-ended questions is posed and the answers recorded, e.g. "How long does it take to perform activity X?", "How many people are in the marketing dept?" or "How much was spent on the current system last month?"

In the case of the unstructured interview open-ended questions are asked which allow the
interviewee to outline broad areas or express views/opinions/convictions that may be hard to
quantify. E.g. “Explain why the current product is unsatisfactory?” or “What are the best features of
the current system?” or “What would be the most effective way to accomplish task X?”

At the end of the interview process the interviewer prepares a written report outlining the results of
the interview. Those interviewed should be given copies and allowed to add/clarify statements
made.

- Questionnaire: A well-structured questionnaire is a useful tool for gathering information. In the case of large departments/organizations it may be impractical to conduct numerous interviews. Unlike the interview process, which is interactive in nature, there is no way to pose new questions (follow-ups) based on the answers given to previous questions. This means a skilful and methodical interviewer will obtain better information than that obtained using a well-developed and well-worded questionnaire.

- "Brainstorming" sessions.

- Analysing an existing system to learn more about it and its limitations so that they can be rectified. We must understand how the new system will differ from any such old system.

- Analysing the environment gives insight into the system, e.g. process analysis.

- Prototyping can be performed to gather requirements; it gives the best feedback and more formal specifications, but can be expensive.

Tips for Getting Useful Information from Users (IEEE Software, July 1988, Human Factors)

1. Include real end users, not their representatives.
2. Don't ask users to do your job.
3. Overcome resistance to change.
4. Use data to settle differences of opinion.
5. Leave room for users to change their minds.
6. Keep an open mind.
7. Live in their camp for a while.
8. Get some communications help.
9. Don't rely on memory or general impressions.
10. Don't rush to write things off as too difficult.

Requirements Analysis

This step is also known as problem analysis. Here we are analyzing the problem so that we can proceed towards our goal of software project development. The members of a software development team must have a clear understanding of what the software product must do. The first step is to perform a thorough analysis of the client's current situation, taking care to define the situation as precisely as possible. This analysis may require examination of a current manual system being operated, or may need an appraisal of some existing computerized system. Once a clear picture of the current situation is obtained, the question "What must the new product be able to do?" may be answered.
Requirements analysis begins with the requirements team meeting with members of the client
organization. The initial meeting may be used to plan subsequent interviews or techniques for
soliciting the relevant information from the client’s organization.

Following are the steps involved in requirement analysis:

1. Understand the requirements in detail

• Domain understanding: The problem domain (area) for which the system is being developed
should be properly studied in detail to pick up each and every requirement of the system so that
none of the requirement is missed out.

• Stakeholders: We should know exactly who are the stakeholders (all persons interested in the
system) and what are their expectations from the system.

Once this much work has been done, we have a lot of collected requirements at our disposal, which may be correct, incorrect or duplicated.

2. Organize the requirements:

Once we have collected the requirements, they are now classified and organized according to some
scheme. Duplicated and incorrect requirements are removed from the list.

• Classification into coherent clusters: Clusters of requirements are made that help in later stages
(e.g., legal requirements)

• Recognize and resolve conflicts: If conflicts occur between requirements then they are recognized
and resolved (e.g., functionality vs. cost vs. timeliness)
3. Model the requirements:

Once we have organized the requirements, they can now be modeled using any of the methods
given below. Modeling of requirements helps in understanding the system in a better way. (Process
modeling will be covered later in analysis chapter using DFD, decision tables, Pseudo code etc.)

Requirements can be modeled using

• Informal methods:

Prose: Requirements can be stated in the form of English prose, which everybody can understand easily.

• Systematic methods:

Procedural models (Flowcharts, System diagrams etc)

Data-centric models (ER Diagram, Data Flow Diagrams etc)

Object models (Use case diagrams, class diagrams etc.)

Software Requirement Specification

Following points should be kept in mind regarding requirements if we want to succeed in the
development process:

 Requirements tell us what the software system should do and not how it should do it. Stress should be given only to what is to be done to solve the problem, and not to how it is done.
 Requirements are independent of the implementation tools, programming paradigm, etc. They
can therefore be stated in English representation also.

 However, the requirements are then analysed with the intended implementation methodology
in mind.

Definition of Software Requirement Specification

 a set of precisely stated properties or constraints which a software system must satisfy.
 a software requirements document establishes boundaries on the solution space of the problem
of developing a useful software system.

A software requirements document allows a design to be validated - if the constraints and properties specified in the document are satisfied by the software design, then that design is an acceptable solution to the problem.

The task should not be underestimated, e.g. the requirements document for a ballistic missile
defense system (1977) contained over 8000 distinct requirements and support paragraphs and
was 2500 pages in length.

Requirement specification – motivation and basics

• Requirement specification is generally the most crucial phase of an average software project - if
it succeeds then a complete failure is unlikely.

• The requirements specification can be used as a basis for a contract.

• The requirements specification can (and should) also be eventually used to evaluate if the
software fulfills the requirements.

• As users generally cannot work with formal specifications, natural language specifications often must or should be used.

Good Requirements Specification Qualities

• SRS should be complete: Fully describe the functionality to be delivered.
• SRS should be accurate: Must accurately describe the functionality to be built.
• SRS should be unambiguous: Each requirement should allow for a single, consistent interpretation. Writing should be succinct and straightforward, in the language of the user domain.
• SRS should be verifiable: A small number of tests or verifications should be possible to
determine if the requirement was properly implemented.
• SRS should be Prioritized: Implementation priority should be given for each requirement
• SRS should be modifiable: It should be changeable as the requirements change
• SRS should be feasible: Capable of being implemented with known capabilities and
limitations of the system and its environment.

Who should do requirement specification?


• Someone who can communicate with the users: It should always be written by somebody who has good communication skills, otherwise we may face problems later.

• Someone who has experience: Experienced persons should be given preference, as they can handle the problems faced during requirement specification in a more efficient manner.

• Someone who knows similar systems and/or the application area: Again the stress is on finding somebody who has experience of having worked on similar systems or application areas.

• Someone who knows what is possible and how (and roughly how much work is needed).

Typical Documents used for SRS

• Basic textual document, e.g. according to the ANSI/IEEE Standard 830 and will be discussed next
• A conceptual model of the domain, which may be already available or built separately
• A description of the processes, e.g. a data flow diagram or a system flowchart.

What can go wrong ?

• Missing specifications in the SRS
- Happens often
- Experience helps
- Sometimes it is impossible to notice

• Contradictions can occur in the SRS
- Do not document the same thing many times
- Integrate different users' views with the users
- Sometimes the users disagree strongly.

• Noise
- Do not include material which does not contain relevant information

• Documenting a solution rather than the problem

- If the users know some information technology, they want to start solving the problem as they
express it

- Many formal (also graphical) methods tend to direct the process into this.

• Unrealistic requirements can be stated

- Although we model the problem rather than the solution, it is good to have some idea of what is
possible.

Overall Structure For Req. Spec. (ANSI/IEEE Standard 830)

1. Introduction
1.1. Purpose
1.2. Scope
1.3. Definitions, Acronyms and Abbreviations
1.4. References
1.5. Overview

2. General Description
2.1. Product Perspective
2.2. Product Functions
2.3. User Characteristics
2.4. General Constraints
2.5. Assumptions and Dependencies

3. Specific Requirements
3.1. Functional Requirements
3.2. External Interface Requirements
3.3. Performance Requirements
3.4 Design Constraints
3.4.1. Standards Compliance
3.4.2. Hardware Limitations …
3.5. Attributes
3.5.1. Security
3.5.2. Maintainability …
3.6. Other Requirements
3.6.1. Data Base …

4. Extensions (acceptance criteria, other material...)

ANSI/IEEE: Functional Requirements

• 3.1. Functional Requirements
3.1.1. Functional Requirement 1
3.1.1.1 Introduction
3.1.1.2 Inputs
3.1.1.3 Processing
3.1.1.4 Outputs
3.1.2 Functional Requirement 2
...
3.1.n Functional Requirement n
UNIT III
System Design
Introduction

After we have designed the system on paper (logical design), it is now time to translate that paper work into a physical system, i.e. a coded software system. There are many options available as far as the choice of computer languages or packages is concerned. The recent trend in programming (coding) is CLIENT SERVER computing, where we make use of a front end (Visual Basic) and a back end (Oracle), supported by middleware (e.g. ODBC). Visual Basic is an ideal choice for coding the system and is supported by Oracle at the back end. Above all, we code the system in a modular manner. This is also called MODULAR DESIGN or STRUCTURED DESIGN.

STRUCTURED DESIGN

After the analysis phase is complete, structured design uses DFDs, data dictionaries, flow charts, etc. for the design process. The stress here is on 'how' the system will be developed. Design is a highly creative and challenging phase, and it focuses on how to make a system that is fully functional, reliable and reasonably easy to understand and operate. The purpose of the design phase is to produce a solution to the problem given in the SRS document.
[Figure: in the design phase the logical design (the "what") is transformed into the physical design (the "how").]

OBJECTIVES OF DESIGN

Design bridges the gap between specifications and coding. The design of the system is correct if a system built precisely according to the design satisfies the requirements of that system. Clearly, the goal during the design phase is to produce a correct design, or in other words the goal is to find the BEST possible design within the limitations imposed by the requirements and the physical and social environment in which the system will operate.

Some of the properties of system design are:

Verifiability: is concerned with how easily the correctness of a design can be argued.

Traceability: It helps in design verification. It requires that all design elements must be traceable to the
requirements.

Completeness: The software must be complete in all respects.

Consistency: requires that there are no inherent inconsistencies in the design.

Efficiency: is concerned with proper usage of resources by the system.

Simplicity / Understandability: are related to a simple design so that the user can easily understand and use it. A simple design is also easy to understand and maintain in the long run, i.e. through the life cycle of the software system.
DESIGN PRINCIPLES

There are certain principles that can be used for developing / coding a system. These principles are
meant to effectively handle the complexity of design process. These principles are:

i) Problem Partitioning: If the software is large then we can partition the problem. "Divide and conquer" is the policy for such systems. We can divide the software into modules and go for the development separately: one module can go to one programmer, named A, and the other module can go to programmer B.

Problem partitioning improves the efficiency of the system. It is necessary that all components/modules have well-defined interactions between them.

ii) Abstraction: Abstraction is an indispensable part of the design process and is essential for problem partitioning. Abstraction is a tool that permits a designer to consider a component at an abstract level (the outer view), without worrying about the details of the implementation of the component.

Abstraction means to look at the software components from outside. The process of establishing
the decomposition of a problem into simpler and more understood primitives is basic to science
and software engineering. This process has many underlying techniques of abstraction.

An abstraction is a model. The process of transforming one abstraction into a more detailed
abstraction is called refinement. The new abstraction can be referred to as a refinement of the
original one. Abstractions and their refinements typically do not coexist in the same system
description. Precisely what is meant by a more detailed abstraction is not well defined. There
needs to be support for substitutability of concepts from one abstraction to another. Composition
occurs when two abstractions are used to define another higher abstraction. Decomposition
occurs when an abstraction is split into smaller abstractions.
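A minimal sketch of abstraction and refinement in Java (the names Catalogue and SimpleCatalogue are invented for illustration, not taken from the text): the interface is the abstraction, and the class that fills in the detail is one possible refinement of it.

// The abstraction: callers see only what a catalogue can do, not how it does it.
interface Catalogue {
    void addItem(String title);
    boolean contains(String title);
}

// A refinement of the abstraction: one concrete, more detailed realization.
// (An in-memory list stands in for a real database to keep the sketch self-contained.)
class SimpleCatalogue implements Catalogue {
    private final java.util.List<String> titles = new java.util.ArrayList<>();

    public void addItem(String title) { titles.add(title); }
    public boolean contains(String title) { return titles.contains(title); }
}

public class RefinementDemo {
    public static void main(String[] args) {
        Catalogue c = new SimpleCatalogue();   // client code depends only on the abstraction
        c.addItem("Software Engineering");
        System.out.println(c.contains("Software Engineering")); // prints: true
    }
}

A later refinement could replace SimpleCatalogue with a database-backed class without changing any client code, which is the kind of substitutability the paragraph above refers to.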

Information management is one of the goals of abstraction. Complex features of one abstraction
are simplified into another abstraction. Good abstractions can be very useful while bad
abstractions can be very harmful. A good abstraction leads to reusable components.

Information hiding distinguishes between public and private information. Only the essential
information is made public while internal details are kept private. This simplifies interactions and
localizes details and their operations into well-defined units.
Abstraction, in traditional systems, naturally forms layers representing different levels of
complexity. Each layer describes a solution. These layers are then mapped onto each other. In
this way, high level abstractions are materialized by lower level abstractions until a simple
realization can take place.

In functional abstraction, details of the algorithms that accomplish the function are not visible to the consumer of the function. The consumer of the function needs to know only the correct calling convention and to have trust in the accuracy of the functional results.
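For instance, a small sketch in Java (the average helper is invented for illustration): the consumer sees only the calling convention, while the algorithm inside could be replaced without affecting the caller.

public class FunctionalAbstractionDemo {

    // Functional abstraction: the caller passes an array and gets the mean back.
    // Whether the sum is computed with a loop, a stream or in parallel is hidden here.
    static double average(double[] values) {
        double sum = 0.0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length;
    }

    public static void main(String[] args) {
        System.out.println(average(new double[] {2.0, 4.0, 6.0})); // prints: 4.0
    }
}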

In data abstraction, details of the data container and the data elements may not be visible to the
consumer of the data. The data container could represent a stack, a queue, a list, a tree, a graph,
or many other similar data containers. The consumer of the data container is only concerned
about correct behavior of the data container and not many of the internal details. Also, exact
details of the data elements in the data container may not be visible to the consumer of the data
element. An encrypted certificate is the ultimate example of an abstract data element. The certificate contains data that is encrypted with a key not known to the consumer. The consumer can use this certificate to be granted capabilities but cannot view or modify the contents of the certificate.

Traditionally, data abstraction and functional abstraction combine into the concept of abstract data types (ADTs). Combining an ADT with inheritance gives the essence of an object-based paradigm.
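As a small sketch of an ADT in Java (the IntStack class is invented for illustration): data abstraction hides the representation behind private fields, and functional abstraction exposes only push and pop operations.

public class IntStack {
    // Information hiding: the array representation is private; the consumer
    // cannot tell whether an array or a linked list is used underneath.
    private int[] items = new int[8];
    private int top = 0;

    public void push(int value) {
        if (top == items.length) {                       // grow when full
            items = java.util.Arrays.copyOf(items, items.length * 2);
        }
        items[top++] = value;
    }

    public int pop() {
        if (top == 0) {
            throw new IllegalStateException("stack is empty");
        }
        return items[--top];
    }

    public static void main(String[] args) {
        IntStack s = new IntStack();
        s.push(10);
        s.push(20);
        System.out.println(s.pop()); // prints: 20
    }
}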

In process abstraction, details of the threads of execution are not visible to the consumer of the process. An example of process abstraction is the concurrency scheduler in a database system. A database system can handle many concurrent queries. These queries are executed in a particular order, some in parallel and some sequentially, such that the resulting database cannot be distinguished from a database where all the queries are done in a sequential fashion. A consumer of a query, which represents one thread of execution, is only concerned about the validity of the query and not the process used by the database scheduler to accomplish the query.
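A small sketch of process abstraction using the standard Java ExecutorService (the query strings are invented placeholders): the consumer submits tasks and collects results, while the number of threads and the execution order remain the scheduler's concern.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ProcessAbstractionDemo {
    public static void main(String[] args) throws Exception {
        // Thread creation and scheduling details are hidden inside the pool.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Each "query" is just a task; the consumer cares only about its result.
        List<Callable<String>> queries = List.of(
                () -> "result of query 1",
                () -> "result of query 2");

        List<Future<String>> results = pool.invokeAll(queries);
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}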

Some other definitions

ABSTRACTION

"A view of a problem that extracts the essential information relevant to a particular purpose and ignores
the remainder of the information." [IEEE, 1983]
"An abstraction denotes the essential characteristics of an object that distinguish it from all other
kinds of object and thus provide crisply defined conceptual boundaries, relative to the perspective of the
viewer." [Booch, 1991]

[Figure: a PAYROLL SYSTEM decomposed into the top-level modules DATA ENTRY, QUERIES, PROCESSING, REPORTS and QUIT.]

In the above software system, the programmer coding the "Data Entry" module will know the internal details of his own module. From the outside he just knows that there are modules named "Queries", "Processing", "Reports" and "Quit". Similarly, the programmer coding the "Queries" module will have complete knowledge about his own module, and so on.

Abstraction is necessary when we are dividing the problem into smaller parts so that we can proceed with the design process effectively and efficiently.

Abstraction can be functional or data abstraction. In functional abstraction, we specify the module by
the function it performs.

In data abstraction, data is hidden behind functions / operations (remember C++ or Java). Data
abstraction forms the basis for object-oriented design.

TOP DOWN AND BOTTOM UP STRATEGIES


In the previous topics we studied design principles. These principles are necessary for efficient software design. Top-down and bottom-up strategies help accomplish these principles and objectives.

A system consists of components, which have components of their own. A system is a hierarchy of
components and the highest-level component corresponds to the total system. To design such a
hierarchy there are two approaches - top down and bottom up.

The top-down approach starts from the highest-level component of the hierarchy and proceeds through to the lower levels. By contrast, the bottom-up approach starts with the lowest-level components and proceeds through higher levels to the top-level component.

TOP DOWN DESIGN

This approach starts by identifying the major components of the system, decomposing them into their own lower-level components and iterating until the desired level of detail is achieved. Top-down design methods often result in some form of stepwise refinement: starting from an abstract design, in each step the design is refined to a more concrete level until we reach a level where no more refinement is needed and the design can be implemented directly.

LIBRARY INFO SYSTEM

This is the top (root) of the software system. Now it can be further decomposed:

LIBRARY INFO SYSTEM
|-- DATA ENTRY
|-- QUERIES
|-- PROCESSING
|-- REPORTS
|-- QUIT

Now we can move further down and divide the system even further, for example decomposing DATA ENTRY into STUDENT MEMBER and FACULTY MEMBER, and PROCESSING into ISSUE and RETURN.

This iterative process can go on till we have reached a complete software system. A complete software system is a system that has been coded completely using some front-end tool (e.g. Java, Visual Basic, VC++, Power Builder, etc.).
Top-down design strategies work very well for systems that are made from scratch, i.e. where the developer has no prior knowledge of the system. We can always start from the main menu and proceed down the hierarchy, designing the data entry modules, queries modules, etc.
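A rough Java sketch of how such a top-down decomposition might begin (the method names simply mirror the modules in the figure above; the bodies are placeholder stubs to be refined in later steps):

public class LibraryInfoSystem {
    public static void main(String[] args) {
        // Level 0: the whole system, expressed as one call per level-1 module.
        dataEntry();
        queries();
        processing();
        reports();
    }

    // Level 1 modules: each stub is refined in the next step, e.g. dataEntry()
    // into student-member and faculty-member entry, processing() into issue and return.
    static void dataEntry()  { System.out.println("data entry: not yet refined"); }
    static void queries()    { System.out.println("queries: not yet refined"); }
    static void processing() { System.out.println("processing: not yet refined"); }
    static void reports()    { System.out.println("reports: not yet refined"); }
}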

BOTTOM UP STRATEGIES

In bottom strategy we start from the bottom and move upwards towards the top of the software.

The approach leads to a style of design where we start with the lowest-level modules and decide how to combine them into larger ones, then combine those into even larger ones, and so on till we arrive at one big module which is the whole of the desired program.

This method has one weakness: we need to use a lot of intuition to decide exactly what functionality a module should provide. If we get it wrong, then at a higher level we will find that the modules are not as per requirements, and we have to redesign them at the lower level. If a system is to be built from an EXISTING SYSTEM, this approach is more suitable as it starts from some existing modules. For example, suppose a hospital named Dayanand Nursing Home wants to modify its existing computerized system; then we would definitely go with the bottom-up approach.

MODULARITY

By the term modularity, we mean that a system is decomposed into manageable components and these components can be coded separately. By modularity we do not mean that a system is arbitrarily chopped into smaller parts; we follow certain concepts like coupling and cohesion while breaking the system into modular pieces.

There are various variations of the term module. They range from subroutines in Fortran, to packages in Ada, to functions in C and Pascal, to classes in Java/C++. A modular design consists of well-defined manageable units with well-defined interfaces among the units.
Desirable properties of a modular system include:

1) Each module is a well-defined system that is potentially useful in other applications.

2) Each module has a single, well-defined purpose.

3) Modules can be separately compiled and stored in a library.

4) Modules can use other modules.

5) Modules are easier to use than build.

6) Modules should be simpler from outside than from the inside.

According to C. Myers, modularity is a single attribute of software that allows a program to be intellectually manageable. It increases design clarity, which in turn results in easier implementation, debugging, testing, documentation and maintenance of the software product.

MODULE COUPLING

Coupling is the measure of the degree of interdependence between modules. Two modules with high
coupling are strongly connected and thus dependent on each other. A change in one module can bring
about a change in other module too and this is not a good practice. A software system should support
LOOSE COUPLING and avoid tight coupling.

If B is tightly coupled with A then a change in A will cause a change in B also.



Coupling can be of many types:

1) Data coupling (best: loose coupling)

2) Stamp coupling

3) Control coupling

4) Common coupling

5) Content coupling (worst: tight coupling)

Uncoupled: no dependencies between the modules.

The strength of coupling between two modules is influenced by the complexity of the interface, type of
connection and type of communication.

Given two modules A and B we can identify a number of ways in which they can be coupled.

DATA COUPLING

A and B communicate only by passing parameters or data. This is highly desirable, but data should not be passed between procedures unnecessarily. If one module needs only a part of a data structure, just that part should be passed, not the whole thing.
Data coupling is the most desirable type of coupling in software systems.
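
As a hedged illustration of data coupling (the module and parameter names are invented), the called module receives only the two elementary values it actually needs:

    // Data coupling: only the data items required by the called module are passed.
    public class PayrollCalculator {
        public double netPay(double basicPay, double deductions) {
            return basicPay - deductions;
        }
    }
    // Caller passes just the two values, not the whole employee record:
    //   double pay = new PayrollCalculator().netPay(30000.0, 4500.0);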

STAMP COUPLING

When a data structure is used to pass information from one module to another and the data structure
itself is passed, modules are connected by stamp coupling. Modules A and B make use of common DATA
TYPE but perhaps perform different operations on it.

A combination of DATA and STAMP coupling is sometimes used in our software systems.
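
By contrast, a stamp-coupled sketch (again illustrative only) passes the whole Employee data structure even though only two of its fields are used:

    // Stamp coupling: the complete data structure (the "stamp") is passed between modules.
    class Employee {
        String name;
        double basicPay;
        double deductions;
        String address;
    }

    class StampCoupledPayroll {
        double netPay(Employee e) {
            // Only basicPay and deductions are needed, yet the whole record travels.
            return e.basicPay - e.deductions;
        }
    }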

CONTROL COUPLING

This type of coupling makes use of a control flag for activation purposes. A module A can transfer control to module B by a procedure call; this is the proper way to pass control around the program.

When one module passes parameters to control the activity of another module, we say there is control
coupling between the two.

Module A:

    go_on = .T.
    ...
    read (go_on)

Module B:

    If go_on = .T.
        Dataentry();
    Else
        Queries();
As seen from the example, go_on is a Boolean variable whose value is read in module A. go_on is used in module B as a control flag, so a change of its value in module A changes the course of action in module B.
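
The same idea in a small Java sketch (class and method names are assumptions): module A sets the flag and module B branches on it, so B's course of action is controlled from outside.

    // Control coupling: the boolean parameter decides which action module B performs.
    public class ModuleB {
        public void process(boolean goOn) {
            if (goOn) {
                dataEntry();
            } else {
                queries();
            }
        }

        private void dataEntry() { /* ... */ }
        private void queries()   { /* ... */ }
    }
    // Module A: boolean goOn = readFlagFromUser(); new ModuleB().process(goOn);
    // (readFlagFromUser is a hypothetical helper standing in for module A's read.)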

COMMON COUPLING
(Figure: three modules A1, A2 and A3 all access the shared global variables V1 and V2, each changing V1 in its own way, e.g. incrementing it, computing V1 = V2 + A1, or resetting V1 = 0.)

Common coupling occurs when Module A and Module B use some shared data area (e.g. global variables). This is quite undesirable because if we want to change the shared data, we have to find and modify all the procedures that access it. This is the reason why global variables are avoided nowadays, for fear of common coupling.
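
A minimal sketch of common coupling in Java, with a static field standing in for the global variable (all names are invented): every module that touches the shared data must be found and changed whenever its meaning changes.

    // Common coupling: both modules read and write the same shared (global) data.
    class SharedData {
        static int counter;                    // plays the role of a global variable
    }

    class UpdaterModule {
        void increment() { SharedData.counter = SharedData.counter + 1; }
    }

    class ResetModule {
        void reset() { SharedData.counter = 0; }
    }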

CONTENT COUPLING

It occurs when module A modifies module B (modifying its local data values or instructions). Content coupling is the least desirable coupling; we never want a module to modify another module.
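
Java does not let one module rewrite another module's instructions, so the closest approximation (an illustrative assumption, not the full definition) is one module reaching into another's internal data and overwriting it directly, bypassing its operations:

    // Content coupling (approximation): module B manipulates module A's internals directly.
    class ModuleAState {
        public int internalCounter;            // internal detail exposed by mistake
    }

    class IntrusiveModuleB {
        void corrupt(ModuleAState a) {
            a.internalCounter = -1;            // bypasses whatever rules A enforces
        }
    }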

MODULE COHESION
Cohesion is the measure of the degree to which elements of a module are functionally related.

A strongly cohesive module is the one in which all instructions of a module are related to a single
function. There are many types of cohesion:

1) Functional cohesion Best (highest cohesion)

2) Sequential cohesion

3) Communication cohesion

4) Temporal Cohesion

5) Logical cohesion

6) Co-incidental cohesion. Worst (lowest cohesion)

FUNCTIONAL COHESION

This type of cohesion is found when all instructions in a module are performing the same function or
achieving the same goal.

For example, a data entry module will contain instructions that accomplish just the one task of entering data into the database. Similarly, a module for processing would contain only the instructions needed to do the processing.

Module Data Entry

Initialize variables

Read or input values

Save to the database.
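
The data entry module above could be sketched in Java as follows (the database call is stubbed out, and all names are assumptions): every statement serves the single goal of entering one record.

    // Functional cohesion: all instructions work towards one purpose - entering a record.
    public class DataEntryModule {
        public void enterRecord(String name, String address) {
            String record = name + "," + address;   // initialize / prepare the values
            saveToDatabase(record);                  // save to the database
        }

        private void saveToDatabase(String record) {
            // stub: real code would write to the back-end database
            System.out.println("saved: " + record);
        }
    }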


SEQUENTIAL COHESION

It occurs when the output of some instructions forms the input to other instructions. It is seen quite often in software systems.

Produce purchase order

Prepare shipping order

Update inventory

Update accounts

This is the type of cohesion where output of one is the input to other instructions.
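
A hedged Java sketch of the order-processing chain above (the method names follow the steps listed, everything else is invented): each step consumes the output of the previous one.

    // Sequential cohesion: the output of each instruction is the input of the next.
    public class OrderProcessing {
        public void handleOrder(String item, int quantity) {
            String purchaseOrder = producePurchaseOrder(item, quantity);
            String shippingOrder = prepareShippingOrder(purchaseOrder);
            updateInventory(shippingOrder);
        }

        private String producePurchaseOrder(String item, int qty) { return "PO:" + item + ":" + qty; }
        private String prepareShippingOrder(String po)            { return "SHIP:" + po; }
        private void   updateInventory(String shippingOrder)      { System.out.println(shippingOrder); }
    }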

COMMUNICATION COHESION

It occurs when instructions are operating on the same data or contribute towards the same output data.

Update inventory file

Re-index inventory file

Search inventory file


There is nothing common between these instructions except that they are operating on the common
inventory file.

TEMPORAL COHESION

Temporal here relates to time. It says that we can group instructions that are performed at the same
time during the day.

Delete duplicate from inventory file

Re-index inventory file

Back up of inventory file.

All these operations have nothing in common except for the fact that all these operations are “END OF
DAY CLEAN UP ACTIVITIES” and time permitting, they can be performed separately.

LOGICAL COHESION

It is found when a module clubs together similar operations. Similarity of operations alone is not a good reason for putting them in the same module.

Sort inventory file

Sort accounts file

Sort employee file


This is a bad type of cohesion: the instructions are put together just because all the operations involve sorting.

CO-INCIDENTAL COHESION

This type of cohesion occurs when instructions of a module have no apparent relationship between
them. It should never be used.

STRUCTURE CHART

The fundamental tool of structured design is the structure chart. It is a widely used tool for designing a top-down modular design; DFDs form the basis for drawing structure charts. Structure charts are graphic descriptions: they describe the interaction between modules and the data passing between modules. These functional module specifications can in turn be passed to the programmers prior to writing program code.

NOTATIONS:

The structure chart notation includes a module symbol (a box used for both the calling module and the called module), a data couple symbol and a control flag symbol (drawn on the arrows between sender and receiver), a decision symbol and a loop (repetition) symbol. As explained with the example below, the arrow ending in a white circle is the data couple and the arrow ending in a black circle is the control flag.

For example, let's draw a structure chart to get employee details (one module is calling for the details and the other module is giving the details).

(Figure: the calling module "Get employee details" passes Emp_no down to the called module "Find employee details", which returns Emp_name, address and other details along with an "Emp is O.K." control flag.)
Here “get employee details” is the calling module while “find employee details” is the called module.

The arrow with a black circle at its end is the control flag, used for message purposes, while the arrow with a white circle at its end is used for passing data.

Another example: a structure chart in which the module "Calculate net pay" calls the lower-level module "Calculate earnings & deductions".
Database design in Information systems

In the design of Information systems, the combination of a front end tool and a back end tool is used. The front end options are Visual Basic, .NET, Java, C, C++, HTML, etc., where programmers develop (code) the Information system. The back end options are Oracle, Sybase, Informix, Access, etc., where the database designers design the tables (the only data structure in an RDBMS). The front end forms make use of the tables made at the back end to do the complete processing.

The Input forms, Output forms, Reports, Queries and Processing forms are all made in some
front end tool (These forms have been covered in unit no. 5).

Here we are concerned with the database design, so we will study what databases are and how they are created while designing Information systems. A DBMS (Database Management System) helps us in database design.

Database Management Systems

1. A database management system (DBMS), or simply a database system (DBS), consists of
o A collection of interrelated and persistent data (usually referred to as the database
(DB)).
o A set of application programs used to access, update and manage that data (which
form the data management system (MS)).
2. The goal of a DBMS is to provide an environment that is both convenient and efficient
to use in
o Retrieving information from the database.
o Storing information into the database.
3. Databases are usually designed to manage large bodies of information. This involves
o Definition of structures for information storage (data modeling).
o Provision of mechanisms for the manipulation of information (file and systems
structure, query processing).
o Providing for the safety of information in the database (crash recovery and
security).
o Concurrency control if the system is shared by users.

Data Abstraction

1. The major purpose of a database system is to provide users with an abstract view of the data.

The system hides certain details of how the data are stored, created and maintained.

Complexity should be hidden from database users.

2. There are several levels of abstraction:


1. Physical Level:
 How the data are stored.
 E.g. index, B-tree, hashing.
 Lowest level of abstraction.
 Complex low-level structures described in detail.
2. Conceptual Level:
 Next highest level of abstraction.
 Describes what data are stored.
 Describes the relationships among data.
 Database administrator level.
3. View Level:
 Highest level.
 Describes part of the database for a particular group of users.
 Can be many different views of a database.
 E.g. tellers in a bank get a view of customer accounts, but not of payroll
data.

Fig. illustrates the three levels.

Figure : The three levels of data abstraction

The E-R Model

1. The entity-relationship model is based on a perception of the world as consisting of a


collection of basic objects (entities) and relationships among these objects.
o An entity is a distinguishable object that exists.
o Each entity has associated with it a set of attributes describing it.
o E.g. number and balance for an account entity.
o A relationship is an association among several entities.
o e.g. A cust_acct relationship associates a customer with each account he or she
has.
o The set of all entities or relationships of the same type is called the entity set or
relationship set.
o Another essential element of the E-R diagram is the mapping cardinalities,
which express the number of entities to which another entity can be associated via
a relationship set.

We'll see later how well this model works to describe real world situations.

2. The overall logical structure of a database can be expressed graphically by an E-R


diagram:
o rectangles: represent entity sets.
o ellipses: represent attributes.
o diamonds: represent relationships among entity sets.
o lines: link attributes to entity sets and entity sets to relationships.

See figure for an example.

Figure 1.2: A sample E-R diagram.

The Object-Oriented Model

1. The object-oriented model is based on a collection of objects, like the E-R model.
o An object contains values stored in instance variables within the object.
o Unlike the record-oriented models, these values are themselves objects.
o Thus objects contain objects to an arbitrarily deep level of nesting.
o An object also contains bodies of code that operate on the object.
o These bodies of code are called methods.
o Objects that contain the same types of values and the same methods are grouped
into classes.
o A class may be viewed as a type definition for objects.
o Analogy: the programming language concept of an abstract data type.
o The only way in which one object can access the data of another object is by
invoking the method of that other object.
o This is called sending a message to the object.
o Internal parts of the object, the instance variables and method code, are not visible
externally.
o Result is two levels of data abstraction.

For example, consider an object representing a bank account.


o The object contains instance variables number and balance.
o The object contains a method pay-interest which adds interest to the balance.
o Under most data models, changing the interest rate entails changing code in
application programs.
o In the object-oriented model, this only entails a change within the pay-interest
method.
2. Unlike entities in the E-R model, each object has its own unique identity, independent of
the values it contains:
o Two objects containing the same values are distinct.
o Distinction is created and maintained in physical level by assigning distinct object
identifiers.
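
The bank account example above might look like this in Java (the interest rate, names and signatures are assumptions made for illustration): changing the interest rate touches only the pay-interest method, and the balance can only be reached by sending the object a message.

    public class BankAccount {
        private String number;                               // instance variables
        private double balance;
        private static final double INTEREST_RATE = 0.05;   // assumed rate

        public BankAccount(String number, double openingBalance) {
            this.number = number;
            this.balance = openingBalance;
        }

        // "Sending the pay-interest message" is the only way to add interest here.
        public void payInterest() {
            balance = balance + balance * INTEREST_RATE;
        }

        public double getBalance() {
            return balance;
        }
    }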

Instances and Schemes

1. Databases change over time.


2. The information in a database at a particular point in time is called an instance of the
database.
3. The overall design of the database is called the database scheme.
4. Analogy with programming languages:
o Data type definition - scheme
o Value of a variable - instance
5. There are several schemes, corresponding to levels of abstraction:
o Physical scheme
o Conceptual scheme
o Subscheme (can be many)

Data Independence

1. The ability to modify a scheme definition in one level without affecting a scheme
definition in a higher level is called data independence.
2. There are two kinds:
o Physical data independence
 The ability to modify the physical scheme without causing application
programs to be rewritten
 Modifications at this level are usually to improve performance
o Logical data independence
 The ability to modify the conceptual scheme without causing application
programs to be rewritten
 Usually done when logical structure of database is altered
3. Logical data independence is harder to achieve as the application programs are usually
heavily dependent on the logical structure of the data. An analogy is made to abstract data
types in programming languages.

Data Definition Language (DDL)


1. Used to specify a database scheme as a set of definitions expressed in a DDL
2. DDL statements are compiled, resulting in a set of tables stored in a special file called a
data dictionary or data directory.
3. The data directory contains metadata (data about data)
4. The storage structure and access methods used by the database system are specified by a
set of definitions in a special type of DDL called a data storage and definition language
5. basic idea: hide implementation details of the database schemes from the users

Data Manipulation Language (DML)

1. Data Manipulation is:


o retrieval of information from the database
o insertion of new information into the database
o deletion of information in the database
o modification of information in the database
2. A DML is a language which enables users to access and manipulate data.

The goal is to provide efficient human interaction with the system.

3. There are two types of DML:


o procedural: the user specifies what data is needed and how to get it
o nonprocedural: the user only specifies what data is needed
 Easier for user
 May not generate code as efficient as that produced by procedural
languages
4. A query language is a portion of a DML involving information retrieval only. The terms
DML and query language are often used synonymously.
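
A rough Java illustration of the procedural versus nonprocedural distinction above (purely illustrative, with no real DBMS behind it): the procedural style spells out how to scan the records, while the nonprocedural style only states what is wanted, much like an SQL query string handed to the system.

    import java.util.ArrayList;
    import java.util.List;

    public class DmlStyles {
        // Procedural style: the user specifies how to get the data (loop, test, collect).
        static List<Double> largeBalancesProcedural(List<Double> balances) {
            List<Double> result = new ArrayList<>();
            for (double b : balances) {
                if (b > 1000) {
                    result.add(b);
                }
            }
            return result;
        }

        // Nonprocedural style: only what is wanted is stated; the system decides how.
        static final String LARGE_BALANCES_QUERY =
            "SELECT balance FROM account WHERE balance > 1000";
    }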

Database Manager

1. The database manager is a program module which provides the interface between the
low-level data stored in the database and the application programs and queries submitted
to the system.
2. Databases typically require lots of storage space (gigabytes). This must be stored on
disks. Data is moved between disk and main memory (MM) as needed.
3. The goal of the database system is to simplify and facilitate access to data. Performance
is important. Views provide simplification.
4. So the database manager module is responsible for
o Interaction with the file manager: Storing raw data on disk using the file system
usually provided by a conventional operating system. The database manager must
translate DML statements into low-level file system commands (for storing,
retrieving and updating data in the database).
o Integrity enforcement: Checking that updates in the database do not violate
consistency constraints (e.g. no bank account balance below $25)
o Security enforcement: Ensuring that users only have access to information they
are permitted to see
o Backup and recovery: Detecting failures due to power failure, disk crash,
software errors, etc., and restoring the database to its state before the failure
o Concurrency control: Preserving data consistency when there are concurrent
users.
5. Some small database systems may miss some of these features, resulting in simpler
database managers. (For example, no concurrency is required on a PC running MS-DOS.)
These features are necessary on larger systems.

Database Administrator

1. The database administrator is a person having central control over data and programs
accessing that data. Duties of the database administrator include:
o Scheme definition: the creation of the original database scheme. This involves
writing a set of definitions in a DDL (data storage and definition language),
compiled by the DDL compiler into a set of tables stored in the data dictionary.
o Storage structure and access method definition: writing a set of definitions
translated by the data storage and definition language compiler
o Scheme and physical organization modification: writing a set of definitions
used by the DDL compiler to generate modifications to appropriate internal
system tables (e.g. data dictionary). This is done rarely, but sometimes the
database scheme or physical organization must be modified.
o Granting of authorization for data access: granting different types of
authorization for data access to various users
o Integrity constraint specification: generating integrity constraints. These are
consulted by the database manager module whenever updates occur.

Database Users

1. The database users fall into several categories:


o Application programmers are computer professionals interacting with the
system through DML calls embedded in a program written in a host language
(e.g. C, PL/1, Pascal).
 These programs are called application programs.
 The DML precompiler converts DML calls (prefaced by a special
character like $, #, etc.) to normal procedure calls in a host language.
 The host language compiler then generates the object code.
 Some special types of programming languages combine Pascal-like
control structures with control structures for the manipulation of a
database.
 These are sometimes called fourth-generation languages.
 They often include features to help generate forms and display data.
o Sophisticated users interact with the system without writing programs.
 They form requests by writing queries in a database query language.
 These are submitted to a query processor that breaks a DML statement
down into instructions for the database manager module.
o Specialized users are sophisticated users writing special database application
programs. These may be CADD systems, knowledge-based and expert systems,
complex data systems (audio/video), etc.
o Naive users are unsophisticated users who interact with the system by using
permanent application programs (e.g. automated teller machine).
Overall System Structure

1. Database systems are partitioned into modules for different functions. Some functions
(e.g. file systems) may be provided by the operating system.
2. Components include:
o File manager manages allocation of disk space and data structures used to
represent information on disk.
o Database manager: The interface between low-level data and application
programs and queries.
o Query processor translates statements in a query language into low-level
instructions the database manager understands. (May also attempt to find an
equivalent but more efficient form.)
o DML precompiler converts DML statements embedded in an application
program to normal procedure calls in a host language. The precompiler interacts
with the query processor.
o DDL compiler converts DDL statements to a set of tables containing metadata
stored in a data dictionary.

In addition, several data structures are required for physical system implementation:

o Data files: store the database itself.


o Data dictionary: stores information about the structure of the database. It is used
heavily. Great emphasis should be placed on developing a good design and
efficient implementation of the dictionary.
o Indices: provide fast access to data items holding particular values.
UNIT IV

Software Testing

Introduction
Because of the fallibility of its human designers and its own abstract, complex nature, software
development must be accompanied by quality assurance activities. It is not unusual for
developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight
control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities
combined. The destructive nature of testing requires that the developer discard preconceived
notions of the correctness of his/her developed software.

Definitions of ‘‘TESTING’’

According to Hetzel: Any activity aimed at evaluating an attribute or capability of a program or system. It
is the measurement of software quality.

According to Beizer: The act of executing tests. Tests are designed and then executed to demonstrate
the correspondence between an element and its specification.

According to IEEE: The process of exercising or evaluating a system or system component by manual or
automated means to verify that it satisfies specified requirements or to identify differences between
expected and actual results.

According to Myers: The process of executing a program with the intent of finding errors.

Evolving Attitudes About Testing

1950’s

- Machine languages used

- Testing is debugging

1960’s

- Compilers developed

- Testing is separate from debugging


1970’s

- Software engineering concepts introduced

- Testing begins to evolve as a technical discipline

1980’s

- CASE tools developed

- Testing grows to Verification and Validation (V&V)

1990’s

- Increased focus on shorter development cycles

- Quality focus increases

- Testing skills and knowledge in greater demand

- Increased acceptance of testing as a discipline

2000’s

- More focus on shorter development cycles

- Better integration of testing/verification/reliability ideas

- Growing interest in software safety, protection, and security

Software Testing Fundamentals


Testing objectives include

1. Testing is a process of executing a program with the intent of finding an error.


2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time
and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that
the software appears to be working as stated in the specifications. The data collected through
testing can also provide an indication of the software's reliability and quality. But, testing cannot
show the absence of defect -- it can only show that software defects are present.

Levels or Phases of Testing

Unit: testing at the lowest level of functionality (e.g., module, function, procedure, operation, method,
etc.)

Component: testing a collection of units that make up a component (e.g., program, object, package,
task, etc.)

Product: testing a collection of components that make up a product (e.g., subsystem, application, etc.)

System: testing a collection of products that make up a deliverable system

Other Types of Testing

Integration: It is the testing that takes place as sub-elements are combined (i.e., integrated) to form
higher-level elements

Regression: It is the testing to detect problems caused by the adverse effects of program change

Acceptance: It is a formal testing conducted to enable the customer to determine whether or not to
accept the system (acceptance criteria may be defined in a contract)

Alpha: It is the actual end-user testing performed within the development environment

Beta: It is the end-user testing performed within the user environment prior to general release

System Test Acceptance: It is the testing conducted to ensure that a system is ‘‘ready’’ for the system-level test phase

Waterfall Model of the Testing Process

1. Test Planning

2. Test Design

3. Test Implementation

4. Test Execution

5. Execution Analysis

6. Result Documentation

7. Final Reporting

Testing as a Profession

Software testing has become a profession -- a career choice. The testing process has evolved
considerably, and is now a discipline requiring trained professionals.

To be successful today, a SE organization must be adequately staffed with skilled testing professionals
who get proper support from management. Testing requires knowledge, disciplined creativity, and
ingenuity

Testing Techniques

Black Box: Testing based solely on analysis of requirements (specification, user documentation, etc.).
Also known as functional testing.
White Box: Testing based on analysis of internal logic (design, code, etc.). (But expected results still
come from requirements.) Also known as structural testing.

Gray-Box: Testing based on ‘‘limited knowledge’’ of internal logic.

White Box Testing


White box testing is a test case design method that uses the control structure of the procedural
design to derive test cases. Test cases can be derived that

1. Guarantee that all independent paths within a module have been exercised at least once,
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural design
and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis
set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the derivation of
the basis set. Each flow graph node represents one or more procedural statements. The edges
between nodes represent flow of control. An edge must terminate at a node, even if the node
does not represent any useful procedural statements. A region in a flow graph is an area bounded
by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic
complexity is a metric that provides a quantitative measure of the logical complexity of a
program. It defines the number of independent paths in the basis set and thus provides an upper
bound for the number of tests that must be performed.
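
As a small, hedged illustration (the method and the pass mark are invented), the routine below has three predicate nodes: the loop condition and the two if conditions. Its cyclomatic complexity is therefore 3 + 1 = 4, so a basis set of at most four independent paths is enough to execute every statement at least once.

    public class Grader {
        // Predicate nodes: the for-loop condition and the two if conditions.
        // Cyclomatic complexity = 3 + 1 = 4, giving an upper bound of 4 basis-path tests.
        static int countPasses(int[] marks) {
            int passes = 0;
            for (int m : marks) {              // predicate 1
                if (m >= 40) {                 // predicate 2
                    passes++;
                }
            }
            if (passes == 0) {                 // predicate 3
                System.out.println("no student passed");
            }
            return passes;
        }
    }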

Black Box Testing

Black Box Testing Techniques


Equivalence Partitioning

Cause-Effect Analysis

Boundary Value Analysis

Intuition and Experience

Definition of Black-Box Testing

Testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.

Black box testing concerns techniques for designing tests; it is not a level of testing. Black-box testing
techniques apply to all levels of testing (e.g., unit, component, product, and system).

Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:

1. incorrect or missing functions,


2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.

Tests are designed to answer the following questions:

1. How is the function's validity tested?


2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?

White box testing should be performed early in the testing process, while black box testing tends
to be applied during later stages. Test cases should be derived which
1. Reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. Tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can
be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors
and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence
classes for an input condition. An equivalence class represents a set of valid or invalid states for
input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes
are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence
class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.

Equivalence partitioning can be thought of as exhaustive testing, Las Vegas style... The idea is to partition the input space into a small number of equivalence classes such that, according to the specification, every element of a given class is ‘‘handled’’ (i.e., mapped to an output) ‘‘in the same manner.’’ Assuming the program is implemented in such a way that being ‘‘handled in the same manner’’ means that either (a) every element of the class would be mapped to a correct output, or (b) every element of the class would be mapped to an incorrect output (this may not be the case, of course), testing the program with just one element from each equivalence class would be tantamount to exhaustive testing.

Two types of classes are identified: valid (corresponding to inputs deemed valid from the specification) and invalid (corresponding to inputs deemed erroneous by the specification). The technique is also known as input space partitioning.
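
As a concrete sketch (the valid range 0 to 40 is borrowed from the boundary example in the next section; the method name is invented), one representative value is chosen from each equivalence class: one from the valid class inside the range and one from each invalid class outside it.

    public class EquivalencePartitionDemo {
        // Assumed specification: an input X is valid when 0 <= X <= 40.
        static boolean isValid(int x) {
            return x >= 0 && x <= 40;
        }

        public static void main(String[] args) {
            // One representative test value per equivalence class:
            System.out.println(isValid(20));   // valid class   (0 <= X <= 40): expect true
            System.out.println(isValid(-5));   // invalid class (X < 0):        expect false
            System.out.println(isValid(55));   // invalid class (X > 40):       expect false
        }
    }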

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements
equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on
input conditions solely, BVA derives test cases from the output domain also. BVA guidelines
include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above
and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise
the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.

A technique based on identifying boundary conditions and generating test cases to explore them. Boundary conditions are an extremely rich source of errors. Natural language based specifications of boundaries are often ambiguous, as in ‘‘for input values of X between 0 and 40,...’’ It may be applied to both input and output conditions, and it is also applicable to white box testing (as will be illustrated later).
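
Continuing the same 0-to-40 example, a boundary value analysis sketch (assumptions as before) exercises the edges of the range and the values just outside them:

    public class BoundaryValueDemo {
        // Same assumed specification: X is valid when 0 <= X <= 40.
        static boolean isValid(int x) {
            return x >= 0 && x <= 40;
        }

        public static void main(String[] args) {
            // Just below, at, and just above each boundary (guideline 1).
            int[] boundaryCases = { -1, 0, 1, 39, 40, 41 };
            for (int x : boundaryCases) {
                System.out.println(x + " -> " + isValid(x));
            }
        }
    }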

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions


and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.

The previous section suggests that Cause-Effect Analysis can be viewed as a logical extension of
equivalence partitioning... But it can also be described more simply as a systematic means for generating
test cases to cover different combinations of input ‘‘Causes’’ resulting in output ‘‘Effects.’’
A CAUSE may be thought of as a distinct input condition, or an ‘‘equivalence class’’ of input conditions.
An EFFECT may be thought of as a distinct output condition, or a meaningful change in program state.
Causes and Effects are represented as boolean variables and the logical relationships among them CAN
(but need not) be represented as one or more boolean graphs.

Test Planning

Planning is as important in testing as it is in any other software development activity.


It allows for the effective use of limited resources and effective management.
In a nutshell, planning forces a disciplined, timely consideration of

- what testers are going to do,

- how they are going to do it,

- how long it will take,

- what resources will be required, and

- what it will cost.

Although documenting the results of this process is obviously very important, relatively speaking, the
plan is nothing

- the planning is everything.

Levels of Test Planning


The Master Test Plan (also known as the Comprehensive Test Plan) is the highest-level test plan for a
project. It must be compatible with, and support the overall project plan.

Lower level test plans reflect specific planning for the various levels of testing specified by the MTP.

- Unit Test Plan

- Component Test Plan

- Product Test Plan

- System Test Plan

In general, testing planned first (system) is carried out last; testing planned last (unit) is carried out first.
Note that there could be more than one product test plan. Similarly for component and unit plans.
(Why?)
Test Plan Template
1. Identifier

2. Introduction and references

3. Test items (versions, media, references, etc.)

4. Features to be tested

5. Features NOT to be tested (and reasons)

6. Testing strategies and approach

7. Dependencies

8. Test case success/failure criteria

9. Pass/fail criteria for the complete test level

10. Test entry/exit criteria

11. Test suspension criteria and resumption requirements

12. Test deliverables/status communications vehicles

13. Testing tasks

14. Hardware and software requirements

15. Problem determination and correction responsibilities

16. Staffing and training needs/assignments

17. Test schedules

18. Risks and contingencies

19. Approvals

Test Plan Inspections


Formal reviews detect errors in test plans, inform other parties of what is planned, and build consensus
regarding test procedures and their purpose.
Focus of inspections is on completeness, consistency, unambiguousness, and feasibility of the plans.

Depending on the level of the plan, participants may include:


- Test planners/coordinators

- Developers (designers and coders)

- Testers

- Quality Assurance reps.

- Marketing reps.

- Publication (e.g., user manual) developers

- Usability Experts

- Performance Specialists

- End-user reps.

Test Documents (IEEE Std. 829-1998)

Test Design Specification: A document specifying the details of the test approach for a software feature
or combination of software features and identifying the associated tests.

Test Case Specification: A document specifying inputs, predicted (expected) results, and a set of
execution conditions for a test item.
Test Procedure Specification: A document specifying a sequence of actions for the execution of a test.

Test Log: A chronological record of relevant details about

the execution of tests.

Test Incident Report: A document reporting on any event that occurs during the testing process which
requires investigation.

Test Summary Report: A document summarizing testing activities and results. It also contains an
evaluation of the corresponding test items.
UNIT V

SOFTWARE MAINTENANCE

Software maintenance is an activity that is performed by every development group once the software is delivered to the customer. Maintenance occurs after the product is delivered at the user's site, installed and in an operational state. Delivery of the software product opens the gate for the maintenance process. The time spent and effort required to keep software operational after release are both very important and consume about 40-70% of the cost of the entire life cycle. It is a pity that people pay little attention to this important aspect of the life cycle.

Definition of software maintenance

Maintenance means fixing things that break or wear out. In software, nothing wears out; it is either
wrong from the beginning or we conclude later that we want to do something different. According to
D.A Lamb, the term is so common that we must live with it. According to S.Y. Stephen, Software
Maintenance is a detailed activity that includes error detection and corrections, enhancements of
capabilities, deletion of obsolete capabilities, and optimization. Since change is unavoidable,
mechanisms must be devised for evaluating, controlling and making modifications. Any work done to
change the software after it is in operation or working state is said to be Maintenance. The aim is to
maintain the value/quality of software over time.

Types of Maintenance

Change is a necessity of life and this change can be due to compulsion or our own choice of modifying
the system. As the user demands and specification of the computer systems change due to changes in
the external world, the software system should also undergo changes. Most of the maintenance
activities are enhancements and modifications requested by the users.

There are three major types of software maintenance:

Corrective Maintenance
This type of maintenance refers to modifications initiated by defects in the software product. A problem
can result from design errors, logic errors and coding errors. Defects are also caused by data processing
errors and system performance errors.

In the event of system failure due to an error, steps are taken to restore operation of the software
system. According to K. Bennett, maintenance personnel sometimes resort to emergency fixes known as
patching to reduce pressure from the management. This approach may be simple but gives rise to a
range of problems that include increased program complexity and unforeseen ripple effects. Unforeseen ripple effects mean that a change to one part of a program may affect other program sections in an unpredictable and uncontrollable manner, leading to a complete mess in the logic of the system. This is often due to lack of time to carry out an "impact analysis" before effecting the corrective change.

Adaptive Maintenance

This type of maintenance means modifying the software to match changes in the ever changing outside
environment. The term environment refers to the totality of all conditions and influences which act from
outside upon the software. For example, business rules, government policies, software and hardware
operating platforms. According to R. Brooks, a change to the whole or part of this environment will
require a corresponding modification of the software.

This type of maintenance includes any work initiated as a result of moving the software to a different hardware or software platform, compiler or operating system. Any change in government policy can have implications for the software. For example, if the government increases the HRA allowance from 30% to 40%, then payroll software systems will have to undergo adaptive changes.

Perfective Maintenance

Perfective maintenance means improving efficiency or performance of the software product. When the
software becomes operational, the user prefers to experiment with new cases beyond the scope for
which it was initially designed. For example, sometimes people are not satisfied with the GUI of the
software product, so changes can be incorporated to make it more attractive and efficient. Note that this type of maintenance is not compulsory; the changes are made just for perfection.
Problems faced during Maintenance

The most important problem faced during maintenance is that the developer must first fully understand the system to be maintained. Then, the developer must understand the impact of the intended change.

According to Lehman and Arnold, a few of the problems are:

 Often the program is written by another person or group of persons working over the years
in isolation from each other.
 Often the program is changed by the wrong person, who did not understand it clearly. That
would result in a deterioration of the program's original efficiency.
 Program source code listings, even those that are well organized, are not structured to
support reading for comprehension. Reading or inspecting a program is not like reading an
article or book. With program listing, developer/programmer reads back and forth due to
the flow of instructions.
 Some problems become clearer when the system is in use. Many users know what they
want but lack the ability to express it in a form understandable to programmers/analysts.
This is due to communication gap.
 Systems are not designed for change. If there is hardly any scope for change, maintenance
will be very difficult. Therefore approach of development should be the production of
maintainable software.

INTRODUCTION TO RISK MANAGEMENT (THE PROCESS)

Risk is the possibility of suffering loss. The losses in software development process can be vast
and can bring any business to a standstill. In the chapter on spiral model, we have seen various
risk factors as collected by Barry Boehm. These risk factors play an important role in the success
of any software project.

Here we look onto the process of risk management. The steps are depicted with the help of a
figure given below. These steps are:

1. Risk Identification: First, all the risks are identified and a list of them is made. It is very difficult to identify risks as they vary from project to project, and new risks can always appear in the software business. For example, nobody would have dreamt of the September 11 attacks and their impact on business.
2. Risk Analysis: The identified risks are analyzed and a priority is set for them. Some risks may be very critical while others may be deferred for some time.

3. Risk Planning: After having identified and analyzed the risks, proper planning is done to avoid the occurrence of the risks and to plan any contingency if required.

4. Risk Monitoring: Finally, risks are monitored to see if our risk management initiative is going in the right direction or not. Here risk assessment is done.
What are Standards?
Standards are documented agreements containing technical specifications or other precise criteria
to be used consistently as rules, guidelines, or definitions of characteristics, to ensure that
materials, products, processes and services are fit for their purpose.

International standards are supposed to contribute to making life simpler, and to increasing the reliability and effectiveness of the goods and services we use.

Product Standards

Product standards define the characteristics which all product components should exhibit.

Process Standards

Process standards define how the software process should be conducted.

Why Standards?
Standards encapsulate the best or most appropriate practice.

 they capture historical knowledge, often gained by trial and error


 they preserve and codify organizational knowledge and memory

Standards also provide a framework for quality assurance (QA).

 QA now becomes the activity for ensuring that standards have been followed

Standards help to ensure project/personnel continuity.

 over a project’s lifecycle, new team members may be added and standards help in assisting their
useful integration

Using Standards
Standards exist for

 software engineering terminology


 notations (e.g. charting symbols)
 procedures for deriving software requirements
 QA procedures
 programming languages
 software verification and validation techniques
Integrating Standards

Team members should be involved in the development of product standards.

o standards documents should include the rationale behind each standardization decision

Standards should be reviewed and modified to reflect changing technologies.

o standards documents should be dynamic, not static


o but it must be decided which standards cannot change, which are subject to revision
and which do not apply to a particular project

Software tools should be provided to support the standards being used.

o this is probably the most important factor with respect to standards acceptance

Who Writes Standards?


Standards are difficult and time-consuming to create and administer. There are many
national and international standards organizations.

Certain organizations (usually governments) insist on contractors following their own standards.

What are ISO, ITU, CCITT, ANSI, ...?

Many countries have national standards bodies where experts from industry and universities
develop standards for all kinds of engineering problems. Among them are, for instance,

ANSI: American National Standards Institute (USA)

The International Organization for Standardization, ISO, in Geneva is the head organization
of all these national standardization bodies (from some 100 countries, one from each country).

It is a non-governmental organization and was established in 1947.

ISO's work results in international agreements which are published as International Standards.

International standardization: What does it achieve?

Industry-wide standardization is a condition existing within a particular industrial sector when the large majority of products or services conform to the same standards.
It results from consensus agreements reached between all economic players in that industrial sector - suppliers, users, and often governments.

The aim is to facilitate trade, exchange and technology transfer through:

o enhanced product quality and reliability at a reasonable price,


o improved health, safety and environmental protection, and reduction of waste,
o greater compatibility and interoperability of goods and services,
o simplification for improved usability,
o reduction in the number of models, and thus reduction in costs,
o increased distribution efficiency, and ease of maintenance.

Assurance of conformity can be provided by producers' declarations, or by audits carried out by independent bodies.

ISO Certification

The International Organizations of Standards body does not itself issue certificates to
organizations. It does certify third-party Certification Bodies who it authorizes to examine
('audit' or 'assess') organizations that wish to apply for ISO 9000 compliance certification. Both
ISO and the Certification Bodies charge fees for their services.

The applying organization will be assessed based on an extensive sample of its sites, functions, products, services, and processes, and a list of problems ('action requests' or 'non-compliances') is made known to management. Provided there are no major problems on this list, the certification body will issue an ISO 900x certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how the problems will be resolved.

An ISO certificate is not a once-and-for-all award; it must be renewed at regular intervals recommended by the certification body, usually around 12 to 18 months.

ISO 9000 Auditing


Two types of auditing are required by the standard. Auditing by the external certification body,
and audits by internal staff who have been trained for this process. It is perhaps healthier if
internal auditors audit outside their usual management line to bring a degree of independence to
their judgments. Thus a continual process of assessment, leading to corrective and preventive
actions, is maintained throughout the scope of the certified organization.

Under the 1994 standard the auditing process could be adequately addressed by performing
'compliance auditing', which could be characterized simply as:-

 Tell me what you do - describe the business process


 Show me where it says that - reference the procedure manuals
 Prove that that is what happened - exhibit evidence in documented records
Under the 2000 standard the auditor performs a similar function but is required to make more value
judgments on what is effective instead of adhering safely to the formalism of what is prescribed.

ISO 9000 Document Suite


ISO 9000 is very lengthy. We offer here a brief summary of the common members of the ISO 9000 family.

 ISO 9000 covers the basic language and expectations of the whole ISO 9000 family.
 ISO 9001 is intended to be used in organizations who do design, development, installation, and
servicing of their product. It discusses how to meet customer needs effectively. This is the only
implementation for which third party auditors may grant certifications. The latest version is
:2000.
 ISO 9002 is nearly identical to 9001, except it does not incorporate design and development.
Cancelled in the ISO 9000:2000 version and replaced by the ISO 9001:2000.
 ISO 9003 is intended for organizations whose processes are almost exclusive to inspection and
testing of final products. Cancelled in the ISO 9000:2000 version and replaced by the ISO
9001:2000.
 ISO 9004 covers performance improvements. This gives you advice on what you should (or could) do in order to achieve ISO 9001 compliance and customer satisfaction.

There are over 20 different members of the ISO 9000 family, and most of them are not explicitly
referred to as "ISO 900x". For example, parts of the 10,000 range are also considered part of the 9000
family: ISO 10007:1995 talks about how to maintain a large system while changing individual
components. It is highly recommended that a serious look be taken at the ISO website and
documentation for a more in depth look at what each specification entails. Many have seemingly subtle
variations.

To the casual reader however, it is useful to understand that when someone claims to be ISO
9000 compliant, they are probably using a blanket statement meaning they conform to one of the
specifications in the ISO 9000 family. And more often than not, they are referring to ISO 9001,
ISO 9002, or ISO 9003. Certification according to ISO 9000:1994 is no longer valid after the year 2004.

Industry Specific Interpretations


As the paragraphs and clauses of the ISO 9000 standard have always been very generalized and abstract, they have to be carefully interpreted to make sense within a particular organization.
Developing software is not like making cheese, or offering counseling services, yet the ISO 9000
guidelines can potentially be interpreted in each of these industries.

Over time industry sectors have wanted to standardize their interpretations of the guidelines
within their own marketplace.
 The TICK-IT standard is an interpretation of ISO 9000 produced by the UK Board of Trade to suit the processes of the Information Technology industry, especially developing software.
 AS9000, the Aerospace Basic Quality System Standard, is an interpretation of ISO 9000 developed by the mutual agreement of major aerospace manufacturers.
 QS9000 is an interpretation agreed upon by major automotive manufacturers.

Relationship with Other Standards


ISO 9000 is quite similar to ISO 14000. Both pertain to how a product is produced, rather than
how it is designed. ISO 9000 and ISO 14000 are more general, referring to a process, rather than
any single product.

ISO 9000 is more about making sure the product -- any product or service -- has been produced
in the most efficient and effective manner possible.

ISO 14000 exists to ensure the product -- any product or service -- has the lowest possible
environmental ramifications.

Capability Maturity Model


The Capability Maturity Model (CMM) of the Software Engineering Institute (SEI) describes the maturity
of software development organizations on a scale of 1 to 5.

According to the SEI, "Predictability, effectiveness, and control of an organization's software


processes are believed to improve as the organization moves up these five levels. While not
rigorous, the empirical evidence to date supports this belief."

 CMM level 1 (initial): Software development follows little to no rules. The project may go from one crisis to the next. The success of the project depends on the skills of individual developers. They may need to finish the project in a heroic effort.
 CMM level 2 (repeatable): Software development successes are repeatable. The organization
may use some basic project management to track cost and schedule. The precise
implementation differs from project to project within the organisation.
 CMM level 3 (defined): Software development across the organisation uses the same rules and
events for project management. Crucially, the organization follows this process even under
schedule pressures, ideally because management recognizes that it is the fastest way to finish.
 CMM level 4 (managed): Using precise measurements, management can effectively control the
software development effort. In particular, management can identify ways to adjust and adapt
the process to particular projects without measurable losses of quality or deviations from
specifications.
 CMM level 5 (optimizing): Quantitative feedback from previous projects is used to improve the
project management, usually using pilot projects, using the skills shown in level 4.
The CMM was invented to give military officers a quick way to assess and describe contractors'
abilities to provide correct software on time. It has been a roaring success in this role. So much
so that it caused panic-stricken salespeople to clamor for their engineering organizations to
"implement CMM."

The CMM reliably assesses an organization's sophistication about software development.

Drawbacks:

The CMM does not describe how to create an effective software development organization. The
traits it measures are in practice very hard to develop in an organization, even though they are
very easy to recognize.

The CMM has been criticized for being overly bureaucratic and for promoting process over
substance. In particular, for emphasizing predictability over service provided to end users. More
commercially successful methodologies have focused not on the capability of the organization to
produce software to satisfy some other organization or a collectively-produced specification, but
on the capability of organizations to satisfy specific end user "use cases".

The CMM's division into levels has also been criticized in that it ignores the possibility that a
single group may exhibit all of the behaviors and may change from behavior to behavior over
time. There is also the implication that a group must move from step to step and that it is
impossible for a project group to move from one to five without going through intermediate
steps.

COMPARING CMM AND ISO 9000

1. Initiatives, Objectives and Scope

In general, the CMM and the ISO 9000 are driven by similar issues and have the common
concern of quality and process management.

ISO: Its primary focus is the customer-supplier relationship, to reduce a customer's risk in choosing a supplier.

CMM: Its focus is on the supplier, to improve the internal software process.
2. Objective

ISO:
o It is written for a wide range of industries other than software.
o Documents are more abstract.
o ISO 9001 is only 5 pages long; ISO 9000-3 is 11 pages.
o It identifies only the minimal requirements for a quality system.

CMM:
o Written specifically for the software industry.
o A detailed document: the CMM is over 500 pages long.
o It describes the software process in detail.

3. Product Development

Both models support:

1) definition and formalization of processes

2) standardized, objective evaluations of a supplier's capabilities by third parties

3) on-going self-assessment

ISO: It has a broad scope that encompasses hardware, software, processed materials, and services.

CMM: It is specific to software development.

4. Concept

ISO: The ISO 9000 concept is to follow a set of standards to make success repeatable.

CMM: The CMM emphasizes achieving "maturity" and improving the process continuously.

5. Structure

ISO: It means that some basic practices are in place and the challenge is only to maintain certification.

CMM: It emphasizes continuous improvement, even at the last level.

6. Assessments, Capability Evaluations, Audits, and Certification


In essence, the CMM's capability evaluation has the same objective as ISO 9000's third-party audits.

Both have been developed to check the overall capability of a software organization to produce software in a timely, repeatable fashion.

ISO: In an ISO 9000 audit, a software organization is checked on whether it follows a certain set of standards.

CMM: In a CMM capability evaluation, a software organization is ranked according to the five levels.

An organization did a CMM-style self assessment after an ISO 9000 audit and found that the
auditors had mistakenly perceived that certain practices were in place.

Internal Assessment:

ISO: This model requires auditors, so the value of certification depends on the expertise and experience of the auditors.

CMM: The CMM allows self-assessment.

7. Software industry’s state

ISO View: From the ISO 9000 information, no such conclusion can be extracted, since companies are either certified or not certified. A geographical conclusion can be made, in that Europe has the highest number of certified companies.

CMM View: From the CMM's statistical information, the software industry still needs a lot of improvement. Many companies are still in level one, and very few are in level four and five.

8. Time needed

ISO: It takes about one and a half years to obtain ISO 9000 certification. It shows that ISO 9000 is aiming for a general improvement.

CMM: It takes an average of two years to move between levels of the CMM. The CMM is aiming for a strong basis of software improvement.

9. Benefits

These benefits are often accompanied by impressive numbers. However, it should be noted that companies feel more comfortable reporting successes rather than failures, and that the numbers are sometimes biased, depending on how they have been calculated.

The common benefits are:

Positive cultural change

Increased productivity

Better communications

Improved customer satisfaction
