The term software engineering first was used around 1960 as researchers, management, and
practitioners tried to improve software development practice.
The NATO Science Committee sponsored two conferences on software engineering, in 1968
(Garmisch, Germany) and 1969 (Rome, Italy), which gave the field its initial boost. Many consider
these conferences to mark the start of the field, and software engineering has not looked back since.
[Figure: theories, methods, and tools applied to achieve a predictable schedule.]
Software crisis
Software engineering arose out of the so-called software crisis of the 1960s, 1970s, and 1980s,
when many software projects ended badly. Many projects ran over budget and schedule; some
caused property damage, and a few caused loss of life. As software becomes more pervasive,
the need for better software is universally recognized. The software crisis was originally defined
in terms of productivity, but evolved to emphasize quality.
According to a study by the Standish Group, in 2000, only 28 percent of software projects could be
classed as complete successes (meaning they were executed on time and on budget), while 23 percent
failed outright (meaning that they were abandoned).
Software still comes late, exceeds its budget, and is full of residual faults. According to IBM reports,
“31% of the projects get cancelled before they are completed, 53% overrun their cost estimates, and for
every 100 projects, there are 94 restarts.” This is the essence of the software crisis.
Owing to the complexity of software and the lack of proper methodology, most projects in the 1960s,
1970s, and 1980s failed to deliver. The term Software Engineering itself was coined at the 1968 NATO
conference mentioned above. It is always the better option to adopt software engineering principles in
order to obtain a quality software product on time and within our cost constraints.
For several decades, numerous approaches have emphasized the importance of managing the so-
called software crisis, which generally refers to the problem of developing cost-effective software
products.
To overcome the software crisis, many different techniques have been proposed, such as programming
languages, software engineering methods, frameworks, patterns, and architectures. Each technique
addresses the software crisis problem from a certain perspective.
According to Barry Boehm [Boe81], Software Engineering is the application of science and mathematics
by which the capabilities of computer equipment are made useful to man via computer programs,
procedures and associated documentation.
At the first conference on software engineering in 1968, Fritz Bauer defined software engineering as
“the establishment and use of sound engineering principles in order to obtain economically developed
software that is reliable and works efficiently on real machines.”
Stephen Schach defined it as “a discipline whose aim is the production of quality software, software
that is delivered on time, within budget, and that satisfies our requirements.”
Software engineering (SE) is the profession concerned with creating and maintaining software
applications by applying computer science, project management, domain knowledge, and other
skills and technologies (From Wikipedia, the free encyclopedia).
Software myths
There are a number of myths associated with software development and its process. Some of them
affect the way in which software development takes place, and they reveal the difference of opinion
between developers and management. As a result, developers are usually the ultimate sufferers.
Management often holds the following myths:
1. Software is easy to change: Editing a program and changing it is easy, but doing so may
introduce errors that upset all our calculations. Management hardly considers the problems
associated with software changes and still maintains that they are easy to perform.
2. Testing or “proving” software correct can remove all the errors: Testing can show the
presence of errors in our programs, but it cannot guarantee their absence. Our aim is to
develop effective test cases that find as many errors as possible. Management, or the
people concerned, often believe that after testing no more problems can surface. This
belief is a fallacy, as errors can creep into the system at a later stage.
3. Reusing software increases safety: Re-use of software may pose integration, logical or
syntax problems in new software even if the modules were working correctly in old
software. Code re-use may be a powerful tool but it requires analysis to determine its
suitability and testing to determine if it works. Management will ask, “How come the system
is not performing when it was working fine the previous time?” Reusability can introduce its
own set of problems.
4. Software can work right the first time: Software developers are sometimes made to build
the system without first making a prototype, but a successful and efficient system takes
more than one attempt at development.
5. Software can be designed thoroughly enough to avoid most integration problems: There is
an old saying among software developers: “Too bad there is no compiler for specifications.”
This points out the fundamental difficulty with detailed specifications: they always contain
inconsistencies, and there is no computer tool to check them for consistency. Special care
must therefore be taken to understand the specifications, and any ambiguity should be
resolved before proceeding to design.
6. Software with more features is better software: Don’t try too hard with the software and
incorporate too much. The best programs are those that do one thing very well.
7. Addition of more software engineers will make up the delay: “Too many cooks spoil the
broth” applies to software as well; adding people may delay the project further.
8. The aim is to develop working programs (rather than good quality maintainable programs):
The aim has shifted to developing good quality maintainable programs. Maintaining
software has become a crucial area of software engineering.
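Myth 2 above (that testing can remove all errors) can be made concrete with a toy example, invented here for illustration: a function whose test cases all pass even though a defect remains.

```python
# Toy example (not from the text): passing tests do not prove absence of errors.

def is_leap_year(year):
    # Buggy rule: ignores the 400-year exception of the Gregorian calendar.
    return year % 4 == 0 and year % 100 != 0

# Every test case below passes, so testing reveals no error...
assert is_leap_year(2024) is True
assert is_leap_year(1900) is False
assert is_leap_year(2023) is False

# ...yet the program is still wrong: 2000 IS a leap year.
print(is_leap_year(2000))  # prints False, a residual fault the tests missed
```

The three assertions show the absence of errors only for those particular inputs, not the absence of errors in general.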
In this part, we are going to study the process of system development, which is a long and time-
consuming procedure that requires thorough planning and a systematic approach to problem solving.
The SDLC (Software Development Life Cycle) is an important step in this direction. This life-cycle
approach covers steps like System study (problem definition), Feasibility study, Analysis, Design
(coding), Testing, Implementation and Maintenance.
After studying this part of the chapter, we should be able to
1) Study the problem properly so that all user requirements can be acquired
2) Conduct a feasibility study on the basis of requirements and constraints
3) Develop the logical design on paper, which is referred to as Analysis
4) Do the physical designing (coding) on the basis of the logical design, which is referred to as
Design/Development
5) Implement the system on user’s site
6) Maintain the system considering corrective, adaptive and perfective issues.
Everybody on this earth follows a life cycle, be it plants, human beings, or animals: from birth
they pass through stages of adolescence, maturity, and old age. The same is true of systems, or
rather Information Systems. In today's business world Information Systems are becoming an
integral part of organizations, so they should be carefully planned and developed. For this
purpose an Information System should follow a standard procedure, termed the Systems
Development Life Cycle. The life cycle of a system has seven steps, performed one after the
other.
When an organization needs some change in the present system for the sake of better
performance, the idea of developing a computerized system is conceived in the minds of its
management. This abstract idea passes through various stages, called the phases of the
SDLC, to take a physical form that fulfills the requirements of the users of the system and
ultimately benefits the organization through better performance.
The software life cycle spans from the time a software product is conceived until it is no longer
available for use. The software life cycle typically includes a requirements phase, logical design
phase, physical design phase, test phase, implementation (installation) and checkout phase,
operation and maintenance phase, and sometimes, a retirement phase.
A prescriptive life-cycle model describes how software systems should be developed. Most of
these models are intuitive and many software development details are ignored, glossed over, or
generalized. This raises concern for the relative validity and robustness of such models.
Descriptive life-cycle models characterize how software systems are actually developed. One
must observe or collect data throughout the development of a software system. These models
are much less common than prescriptive models. Descriptive models are specific to the system
observed, and only generalizable through systematic analysis.
Let’s take a quick glance at the phases/steps of the SDLC and what is done in each phase:
System Study (Problem definition): The working of present system (manual or computerized) is
studied to gain in-depth knowledge of the system and also of the problems inherent in the
present system due to which management is seeking change.
Feasibility study: Before beginning a project, a short, low-cost study is conducted to identify the
clients, scope, potential benefits, resources needed (staff, time, equipment, etc.), potential
obstacles, and so on. Further, we may enquire where the risks are and how they can be
minimized. Feasibility is usually examined from four angles:
a) Technical feasibility
b) Economic feasibility
c) Behavioral feasibility
d) Operational feasibility
In Technical feasibility, we check whether the technology, hardware, and technical expertise
needed to build and run the proposed system are available.
In Economic feasibility, we check to see if we are economically sound enough to go for a new
system. A cost-benefit analysis is performed to see whether we can expect benefits from the
system after spending such a large amount of money. If the system does not appear to be
profitable, then it is not feasible to move ahead.
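The cost-benefit analysis mentioned above can be sketched numerically. The figures, discount rate, and function below are illustrative assumptions, not data from the text:

```python
# Hypothetical cost-benefit sketch: discount each year's expected benefit back
# to today's money and compare against the up-front cost of the new system.

def net_present_value(rate, initial_cost, yearly_benefits):
    """NPV = -cost + sum of benefits discounted at the given yearly rate."""
    npv = -initial_cost
    for year, benefit in enumerate(yearly_benefits, start=1):
        npv += benefit / (1 + rate) ** year
    return npv

npv = net_present_value(rate=0.10, initial_cost=100_000,
                        yearly_benefits=[30_000, 40_000, 50_000, 50_000])
print(round(npv, 2))  # positive => economically feasible under these assumptions
```

A positive NPV suggests the system pays for itself; a negative one is the numerical form of "not feasible to move ahead".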
In Behavioral feasibility, we check to see if the system is behaviorally feasible. People are often
resistant to change; when they hear of computerization, they fear their jobs will be taken over by
computer professionals, and they may even go on strike, putting the software proposal in
jeopardy. It is therefore always better to look into this kind of feasibility before starting on the
actual system.
In Operational feasibility, we check to see if the system is operationally feasible i.e. is it possible
to make an operational system.
System Analysis: After the feasibility study is conducted and the system analyst gets a go-ahead
from the organization, the logical designing of the system is done on paper in the form of Data
Flow Diagrams, Systems flow charts etc.
System Design/Development: This is the conversion of the logical design into a physical design in
the form of computer programs, databases, files, etc. Coding is performed for the input, output,
process, and menu forms. If the system has been properly analyzed, there is very little chance of
system failure arising from the design phase.
System Testing: The system will be tested for errors and to see whether it fulfils the
requirements of users properly or not. There are various kinds of testing like unit testing, system
testing, string testing, white box testing, black box testing etc.
System implementation: After the system is developed and tested, the users are trained and
a user manual is prepared to help them use the proposed system properly and efficiently.
The proposed system then takes the place of the present working system, i.e., it is installed
at the user's site. This comes under the implementation phase. There are various kinds of
implementation procedures: direct conversion (the new system is implemented directly),
phased conversion (the software is implemented in phases, i.e., modules are implemented
separately), and parallel conversion (the new system is run in parallel with the old system).
System maintenance: After the user has used the system for a reasonable time, he evaluates its
performance, and if there is any need for improvement, the system is modified. There are three
kinds of maintenance: corrective, adaptive, and perfective; these will be dealt with in detail in
the chapter on maintenance.
Important Note: The terms used in SDLC like analysis, design, development and
Implementation vary from literature to literature. The fact is that steps of SDLC are so closely
tied together that it is always difficult to separate them. Some authors define analysis as
requirement analysis, design as logical design (drawing on paper) and implementation as coding
of the system whereas I’ve taken it as analysis for logical design, design for physical design i.e.
Coding of the system and implementation as loading the newly made software product on
user’s site.
There are basically four models in software engineering, and they all make use of the SDLC steps to a
greater or lesser extent. The difference lies in their applicability: the waterfall model is used for smaller
projects, while the spiral model is used for bigger projects.
1. Waterfall model: This is the simplest SDLC model and is mostly used for small projects. The
waterfall model derives its name from the cascading effect from one phase to the next, as illustrated in
the figure. In this model each phase has a well-defined starting and ending point, with identifiable
deliverables to the next phase.
Note that this model is sometimes referred to as the linear sequential model or the software life cycle.
1. Problem definition
2. Feasibility study
3. Analysis
4. Design/Development
5. Testing
6. Implementation
7. Maintenance
These steps are performed one after the other, cascading downward like falling water.
Though the waterfall model is not used for big projects, some of its steps (analysis, design,
implementation, and maintenance) are followed in other models as well.
Advantages:
The biggest advantage of waterfall model is its simplicity. Other advantages are
Testing is inherent to every phase of the waterfall model
It is an enforced disciplined approach
It is documentation driven, that is, documentation is produced at every stage
Limitations:
1. All the requirements of the system have to be specified in advance before proceeding with
the actual design of the system.
2. There is very little interaction of the user with the Systems/ software engineer when it
comes to the ultimate development of the system
3. Problems are not discovered until system testing.
4. Requirements must be fixed before the system is designed - requirements evolution makes
the development method unstable.
5. Design and code work often turn up requirements inconsistencies, missing system
components, and unexpected development needs.
6. System performance cannot be tested until the system is almost fully coded; under-capacity
may be difficult to correct.
The standard waterfall model is associated with the failure or cancellation of a number of large
systems, and it can also be very expensive. As a result, the software development community
has experimented with a number of alternative approaches.
2. Prototype model: In this model, we develop a prototype (dummy software) of the
system and show it to the user for feedback. The software prototype is analogous to the
scale model an architect builds when designing a large complex. The prototype (in the form
of a menu with some forms and a few tables attached) is shown to the user, and his views
are taken to improve the design of the system.
A prototype usually has limited functionality, low reliability, and inefficient performance.
Even though construction of a working prototype involves additional cost, the overall
development cost may be lower for systems with unclear user requirements or unresolved
technical issues. Many user requirements get properly defined and many technical issues get
resolved through the prototype; had they surfaced later as change requests, they would have
incurred massive redesign costs.
Limitations:
Since two systems are built instead of one, extra money is required to develop both.
The design and code for the prototype are usually thrown away.
3. Iterative enhancement model: An advantage of this approach is that it can result in better
testing, because testing each increment is likely to be easier than testing the entire system.
Further, increments provide feedback to the client that is useful for determining the final
requirements of the system.
[Figure: the iterative cycle of Requirements → Design → Implementation → Evaluation.]
The biggest advantage of iterative enhancement is that the prototype is not thrown away but is
further refined according to the client's needs. It is an evolutionary model that uses the prototype
as a base and enhances it increment by increment.
4. Spiral model: Proposed by Boehm, this model deals with uncertainty (risk factor), which is
inherent in software projects. Risk analysis is performed along with every step as can be seen
from the figure. Barry Boehm considered risk factors and tried to incorporate project risk into
life cycle model. This model was proposed in 1986.
[Figure: the spiral model — each loop passes through risk analysis, development (from rapid
prototype through design and implementation), and a verify/review step.]
This model is well suited to accommodate any mixture of a specification-oriented, prototype-
oriented, simulation-oriented, or other approach, since it explicitly covers the risk factor. For a
high-risk project this may be the preferred model.
Another important feature of this model is that each cycle of the spiral is completed by a review
(the verify step in the figure above) that covers all the products developed during that cycle.
Phases of the spiral model:
1. Determine objectives, alternatives, constraints: Specific objectives for the project phase are
identified. For example:
Objectives
o Procure software component catalogue
Constraints
o Within a year
o Must support existing component types
o Total cost less than $100,000
Alternatives
o Buy existing information retrieval software
o Buy database and develop catalogue using database
o Develop special purpose catalogue
2. Evaluate alternatives; identify, resolve risks: Key risks are identified and analysed, and
information is sought to reduce them. Continuing the example:
Alternatives
o Buy existing information retrieval software
o Buy database and develop catalogue using database
o Develop special purpose catalogue
Risks
o May be impossible to procure within constraints
o Catalogue functionality may be inappropriate
Risk resolution
o Develop prototype catalogue (using an existing 4GL and an existing DBMS) to
clarify requirements
o Commission a consultants’ report on existing information retrieval system
capabilities
o Relax the time constraint
3. Develop, verify next-level product: An appropriate model is chosen for the next
phase of development. In the example, the risk-resolution activities yield these results:
Results
o Information retrieval systems are inflexible; the identified requirements cannot
be met
o The prototype built on the DBMS may be enhanced into the complete system
o Special-purpose catalogue development is not cost-effective
4. Plan next phase: The project is reviewed and a plan is drawn up for the next loop of the
spiral. In the example:
Plans
o Develop the catalogue using the existing DBMS, enhancing the prototype and
improving the user interface
Commitment
o Fund a further 12 months of development
We have studied four models along with their advantages and disadvantages. The spiral model is the
best among them if risk is involved in the project; otherwise iterative enhancement is the best choice.
The next topic deals with Boehm’s risk factors (the risk factors involved in a software project).
Barry Boehm has identified the following risk factors that play an important role during a project.
These risks are not necessarily all present in every project, but most projects face some of them:
1. Personnel shortfalls: A manpower shortage, for whatever reason, is the biggest risk to a
project. Professionals may leave mid-project, be fired, or become unavailable for many
other reasons.
2. Unrealistic schedules and budgets: Schedules and budgets that have not been prepared
properly. There can be many reasons for this; for example, a budget may have been
prepared just to win a tender, or a schedule wrongly put across.
3. Developing the wrong software functions: Anybody can commit an error ("to err is
human"), so there is always the possibility that a software function has been incorrectly
developed.
4. Developing the wrong user interface: Developer may also develop a wrong user
interface (Input/ Output Screens) and this is always a risk.
5. “Gold plating”: There is always the temptation to add features or polish that the
requirements do not call for. Such gold plating consumes effort and adds risk without
adding value to the product.
6. Continuing stream of requirements changes: This again is a big problem encountered
by software professionals. Requirements keep changing and are always a hindrance in
the work of developers. If requirements change midway a project then it is a big loss in
terms of money and time.
7. Real-time performance shortfalls: When the software is ready, it is not certain that it will
perform at 100%; there can be real-time performance bottlenecks.
Model     | Advantages                                   | Limitations
Waterfall | Document driven                              | Does not scale down; it is difficult to go back up to an earlier phase
Spiral    | Incorporates features of all the above models | Can be used only for large-scale products
Software quality attributes
Correctness – The software should behave according to the specifications given by the user.
Reliability – A statistical property: the probability that the software system will not fail
during a specified period of time under stated conditions.
Performance – It states that the software should use resources economically (in an efficient
manner)
Usability – It states that the software should be easy to use, keeping in mind the intended
audience, who may not be computer literate or expert for that matter.
Maintainability – It states that software should be conducive to corrective, adaptive, and perfective
maintenance.
Reusability – It states how easily we can use the software for another application (minor
modifications allowed) or use some part of it in another piece of software.
Portability – It states how easily the software can run in different environments, and is one of the
most desirable properties these days.
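Since reliability is a statistical property, it is often quantified with the standard exponential model R(t) = e^(−λt), where λ is an assumed constant failure rate. A small sketch (the MTBF figure below is an invented assumption, not from the text):

```python
import math

# Exponential reliability model: probability of surviving t hours failure-free,
# given a constant failure rate lambda_rate (failures per hour).

def reliability(t_hours, lambda_rate):
    return math.exp(-lambda_rate * t_hours)

# Assume a mean time between failures (MTBF) of 1000 hours, so lambda = 1/1000.
r = reliability(t_hours=100, lambda_rate=1 / 1000)
print(round(r, 3))  # 0.905: about a 90% chance of running 100 hours without failure
```

The constant-failure-rate assumption is a simplification; it illustrates why reliability must be stated for "a specified period of time", since R(t) falls as t grows.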
Requirement Engineering
The Institute of Electrical and Electronics Engineers (IEEE) defines a requirement as “(1)
a condition or capability needed by a user to solve a problem or achieve an objective; (2)
a condition or capability that must be met or possessed by a system or system component
to satisfy a contract, standard, specification, or other formally imposed document.”
These definitions are good in that they walk the fine line between describing the basics of
what needs to be done and prescribing the implementation. What they do not capture,
however, is that requirements are a multidimensional set of product end-state needs, all
vying for the same project resources. Whenever things compete for finite resources, issues
of risk and prioritization arise, and this applies to requirements. With risk and prioritization
comes the understanding that deriving requirements is an engineering task and, as with any
form of engineering, there should be a stable process in place.
Types of Requirements
There are business requirements and there are user requirements. The former are
generally a high-level set of requirements that pertain to the needs and wants of the
organization such that they can adequately construct, maintain, and support the product(s).
Business requirements can also cover aspects of the business-customer relationship. The
user requirements pertain to the tasks and goals that the user should be able to accomplish
with the product(s). These tasks and goals then feed directly into functional requirements,
which cover what types of functionality must be put into the product in order to allow the
users to accomplish those tasks and goals. In other words, functional requirements cover
the external behavior of the product(s).
Note that functional requirements often cover the type of functionality and not the exact
implementation of the functionality, which is more of a design and implementation issue.
Another way to look at it is that functional requirements define what will be constructed,
but not necessarily how it will be constructed. Relating to this, according to Karl Wiegers, a
feature is "a set of logically related functional requirements that provides a capability to
the user and enables the satisfaction of a business requirement.” Wiegers also states that
one of the key things to understand is that user requirements must align with the business
requirements; moreover, all functional requirements should be traceable to user
requirements.
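Wiegers's traceability rule can be sketched as a small mechanical check. All requirement IDs and texts below are invented for illustration:

```python
# Hypothetical traceability sketch: every functional requirement (FR) traces to
# a user requirement (UR), and every UR traces to a business requirement (BR).

business = {"BR-1": "Reduce order-processing cost by 20%"}
user = {"UR-1": ("BR-1", "A clerk can enter an order in under two minutes")}
functional = {"FR-1": ("UR-1", "The system shall validate item codes as typed"),
              "FR-2": ("UR-1", "The system shall auto-fill customer details")}

def untraced(functional, user, business):
    """Return IDs of functional requirements whose trace chain is broken."""
    bad = []
    for fr_id, (ur_id, _text) in functional.items():
        if ur_id not in user or user[ur_id][0] not in business:
            bad.append(fr_id)
    return bad

print(untraced(functional, user, business))  # [] -- every FR traces up to a BR
```

In practice this bookkeeping is done in a requirements-management tool, but the underlying check is exactly this: no functional requirement may dangle without a user and business requirement above it.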
User requirements: These describe the needs, goals, and tasks of the user (or of various
users) as the business understands them. Usually this will be based on industry statistics
or on focus groups that were conducted.
Gathering requirements
Gathering data/information/requirements from the user is an uphill task in any life-cycle model.
There are plenty of reasons why gathering requirements is difficult, and it becomes even more
challenging as the user's requirements keep changing again and again.
After the user requirements are gathered, they are analysed and put together in a document called
the ”Software Requirement Specification” (SRS). This document also serves as a contract between
the developer and the client. Common requirement-gathering techniques include:
- Interviewing the users/clients: Both the structured and unstructured interview may be planned. In
the case of the structured interview a list of specific closed ended questions are posed and the
answers recorded. E.g. “How long does it take to perform activity X” or “How many people are in the
marketing dept?” or “How much was spent on the current system last month?”
In the case of the unstructured interview open-ended questions are asked which allow the
interviewee to outline broad areas or express views/opinions/convictions that may be hard to
quantify. E.g. “Explain why the current product is unsatisfactory?” or “What are the best features of
the current system?” or “What would be the most effective way to accomplish task X?”
At the end of the interview process the interviewer prepares a written report outlining the results of
the interview. Those interviewed should be given copies and allowed to add/clarify statements
made.
- ”Brainstorming” sessions
- Analysing an existing system to learn about it and its limitations so that they can be rectified;
we must understand how the new system will differ from any such old system
- Analysing the environment gives insight into the system- e.g. process analysis
- Prototyping can be performed to gather requirements; it gives the best feedback and more formal
specifications, but can be expensive
Requirements Analysis
This step is also known as problem analysis. Here we analyze the problem so that we can
proceed toward our goal of developing the software product. The members of a software
development team must have a clear understanding of what the software product must do. The
first step is to perform a thorough analysis of the client's current situation, taking care to define
the situation as precisely as possible. This analysis may require examination of a current manual
system in operation, or an appraisal of some existing computerized system. Once a clear picture
of the current situation is obtained, the question “What must the new product be able to do?”
can be answered.
1. Elicit the requirements: Requirements analysis begins with the requirements team meeting
members of the client organization. The initial meeting may be used to plan subsequent
interviews or techniques for soliciting the relevant information from the client's organization.
• Domain understanding: The problem domain (area) for which the system is being developed
should be properly studied in detail to pick up each and every requirement of the system so that
none of the requirement is missed out.
• Stakeholders: We should know exactly who are the stakeholders (all persons interested in the
system) and what are their expectations from the system.
Once this much work has been done, we have a large collection of requirements at our disposal,
which may be correct, incorrect, or duplicated.
2. Classify and organize the requirements: Once the requirements have been collected, they
are classified and organized according to some scheme, and duplicated and incorrect
requirements are removed from the list.
• Classification into coherent clusters: Clusters of requirements are made that help in later stages
(e.g., legal requirements)
• Recognize and resolve conflicts: If conflicts occur between requirements then they are recognized
and resolved (e.g., functionality vs. cost vs. timeliness)
3. Model the requirements:
Once we have organized the requirements, they can now be modeled using any of the methods
given below. Modeling of requirements helps in understanding the system in a better way. (Process
modeling will be covered later in analysis chapter using DFD, decision tables, Pseudo code etc.)
• Informal methods:
Prose: Requirements can be stated in the form of English prose, which everybody can
understand easily
• Systematic methods:
The following points should be kept in mind regarding requirements if we want the development
process to succeed:
Requirements tell us what the software system should do, not how it should do it. Stress
should be laid only on what is to be done to solve the problem, not on how.
Requirements are independent of the implementation tools, programming paradigm, etc. They
can therefore also be stated in plain English.
However, the requirements are then analysed with the intended implementation methodology
in mind.
A requirements specification is a set of precisely stated properties or constraints which a software
system must satisfy; a software requirements document thus establishes boundaries on the solution
space of the problem of developing a useful software system.
The task should not be underestimated, e.g. the requirements document for a ballistic missile
defense system (1977) contained over 8000 distinct requirements and support paragraphs and
was 2500 pages in length.
• Requirement specification is generally the most crucial phase of an average software project; if
it succeeds, a complete failure is unlikely.
• The requirements specification can (and should) also be eventually used to evaluate if the
software fulfills the requirements.
• As users generally cannot work with formal specifications, natural-language specifications
must often be used.
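The point that the specification should later be used to evaluate the software can be illustrated with a hypothetical example: a single SRS statement restated as an executable acceptance check. The requirement text, figures, and function names are invented:

```python
import time

# Hypothetical SRS statement turned into an acceptance check against the
# delivered software. The lookup function stands in for the real system.

REQUIREMENT = "The system shall answer a catalogue lookup within 2 seconds."

def lookup(catalogue, key):
    """Stand-in for the delivered lookup functionality."""
    return catalogue.get(key)

def acceptance_check():
    catalogue = {"widget": 5}
    start = time.perf_counter()
    result = lookup(catalogue, "widget")
    elapsed = time.perf_counter() - start
    # The check evaluates the requirement directly: right answer, in time.
    return result == 5 and elapsed <= 2.0

print(acceptance_check())  # True
```

Writing requirements precisely enough that such checks can be derived from them is one practical test of a good specification.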
Who should write the requirements specification?
• Someone who has experience: experienced persons should be preferred, as they can handle
the problems faced during requirement specification more efficiently.
• Someone who knows similar systems and/or the application area: again, the stress is on
finding somebody with experience of similar systems or application areas.
• Someone who knows what is possible, and how (and roughly how much work is needed).
The requirements specification typically consists of:
• A basic textual document, e.g. according to ANSI/IEEE Standard 830, discussed next
• A conceptual model of the domain, which may already be available or may be built separately
• A description of the processes, e.g. a data flow diagram or a system flowchart.
Common problems to avoid in a requirements document:
• Noise: do not include material which does not contain relevant information.
• Premature solutions: if the users know some information technology, they want to start
solving the problem as they express it, and many formal (also graphical) methods tend to
direct the process into this. Although we model the problem rather than the solution, it is
good to have some idea of what is possible.
1. Introduction
1.1. Purpose
1.2. Scope
1.3. Definitions, Acronyms and Abbreviations
1.4. References
1.5. Overview
2. General Description
2.1. Product Perspective
2.2. Product Functions
2.3. User Characteristics
2.4. General Constraints
2.5. Assumptions and Dependencies
3. Specific Requirements
3.1. Functional Requirements
3.2. External Interface Requirements
3.3. Performance Requirements
3.4. Design Constraints
3.4.1. Standards Compliance
3.4.2. Hardware Limitations …
3.5. Attributes
3.5.1. Security
3.5.2. Maintainability …
3.6. Other Requirements
3.6.1. Data Base …
After we have designed the system on paper (the logical design), it is time to translate that paper work into a physical, coded software system. Many options are available as far as the choice of computer languages or packages is concerned. The recent trend in programming (coding) is CLIENT SERVER computing, with a front end (e.g. Visual Basic) and a back end (e.g. Oracle) equally supported by middleware (e.g. ODBC). Visual Basic is an ideal choice for coding the system and is supported by Oracle at the back end. Above all, we code the system in a modular manner. This is also called MODULAR DESIGN or STRUCTURED DESIGN.
STRUCTURED DESIGN
STRUCTURED DESIGN
After the analysis phase is complete, structured design uses DFDs, data dictionaries, flow charts, etc. for the design process. The stress here is on 'how' the system will be developed. Design is a highly creative and challenging phase; it focuses on how to make a system that is fully functional, reliable, and reasonably easy to understand and operate. The purpose of the design phase is to produce a solution to the problem given in the SRS document.
[Figure: as development proceeds, the logical design (answering 'what') is transformed into the physical design (answering 'how').]
OBJECTIVES OF DESIGN
Design bridges the gap between specification and coding. The design of a system is correct if a system built precisely according to that design satisfies the requirements of the system. Clearly, the goal during the design phase is to produce a correct design; in other words, the goal is to find the BEST possible design within the limitations imposed by the requirements and by the physical and social environment in which the system will operate.
Verifiability: is concerned with how easily the correctness of a design can be argued.
Traceability: It helps in design verification. It requires that all design elements must be traceable to the
requirements.
Simplicity / Understandability: a simple design is one that users can easily understand and use. A simple design is also easier to understand and maintain in the long run, through the life cycle of the software system.
DESIGN PRINCIPLES
There are certain principles that can be used for developing / coding a system. These principles are
meant to effectively handle the complexity of design process. These principles are:
i) Problem Partitioning: If the software is large, we can partition the problem. "Divide and conquer" is the policy for such systems. We can divide the software into modules and develop them separately: one module can go to one programmer, A, and another module can go to programmer B.
Problem partitioning improves the efficiency of the development process. The components / modules must still interact with one another, so their interfaces must be well defined.
ii) Abstraction: Abstraction is an indispensable part of the design process and is essential for problem partitioning. Abstraction is a tool that permits a designer to consider a component at an abstract level (its outer view), without worrying about the details of the component's implementation.
Abstraction means to look at the software components from outside. The process of establishing
the decomposition of a problem into simpler and more understood primitives is basic to science
and software engineering. This process has many underlying techniques of abstraction.
An abstraction is a model. The process of transforming one abstraction into a more detailed
abstraction is called refinement. The new abstraction can be referred to as a refinement of the
original one. Abstractions and their refinements typically do not coexist in the same system
description. Precisely what is meant by a more detailed abstraction is not well defined. There
needs to be support for substitutability of concepts from one abstraction to another. Composition
occurs when two abstractions are used to define another higher abstraction. Decomposition
occurs when an abstraction is split into smaller abstractions.
Information management is one of the goals of abstraction. Complex features of one abstraction
are simplified into another abstraction. Good abstractions can be very useful while bad
abstractions can be very harmful. A good abstraction leads to reusable components.
Information hiding distinguishes between public and private information. Only the essential
information is made public while internal details are kept private. This simplifies interactions and
localizes details and their operations into well-defined units.
Abstraction, in traditional systems, naturally forms layers representing different levels of
complexity. Each layer describes a solution. These layers are then mapped onto each other. In
this way, high level abstractions are materialized by lower level abstractions until a simple
realization can take place.
In functional abstraction, details of the algorithms that accomplish the function are not visible to the consumer of the function. The consumer needs only to know the correct calling convention and to trust the accuracy of the functional results.
In data abstraction, details of the data container and the data elements may not be visible to the
consumer of the data. The data container could represent a stack, a queue, a list, a tree, a graph,
or many other similar data containers. The consumer of the data container is only concerned
about correct behavior of the data container and not many of the internal details. Also, exact
details of the data elements in the data container may not be visible to the consumer of the data
element. An encrypted certificate is the ultimate example of an abstract data element. The certificate contains data that is encrypted with a key not known to the consumer. The consumer can use this certificate to be granted capabilities but can neither view nor modify the contents of the certificate.
Traditionally, data abstraction and functional abstraction combine into the concept of abstract data types (ADTs). Combining an ADT with inheritance gives the essence of an object-based paradigm.
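A minimal sketch of an ADT, assuming Python and a hypothetical `Stack` class: consumers see only the public operations, while the container that actually holds the elements stays private (information hiding).

```python
class Stack:
    """A stack ADT: push/pop/peek are public; the internal list is a
    hidden implementation detail (private by Python convention)."""

    def __init__(self):
        self._items = []  # internal detail, not part of the interface

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # → 2 (LIFO behaviour is all the consumer relies on)
```

The consumer depends only on the LIFO behaviour, so the list could later be replaced by, say, a linked structure without breaking any client code.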
In process abstraction, details of the threads of execution are not visible to the consumer of the
process. An example of process abstraction is the concurrency scheduler in a database system. A
database system can handle many concurrent queries. These queries are executed in a particular
order, some in parallel while some sequential, such that the resulting database cannot be
distinguished from a database where all the queries are done in a sequential fashion. A consumer
of a query which represents one thread of execution is only concerned about the validity of the
query and not the process used by the database scheduler to accomplish the query.
ABSTRACTION
"A view of a problem that extracts the essential information relevant to a particular purpose and ignores
the remainder of the information." [IEEE, 1983]
"An abstraction denotes the essential characteristics of an object that distinguish it from all other
kinds of object and thus provide crisply defined conceptual boundaries, relative to the perspective of the
viewer." [Booch, 1991]
PAYROLL SYSTEM
In the above software system, the programmer coding the "Data Entry" module knows the internal details of his own module only. From the outside he just knows that there are modules named "Queries", "Processing", "Reports" and "Quit". Similarly, the programmer coding the "Queries" module has complete knowledge of his own module, and so on.
Abstraction is necessary when we divide a problem into smaller parts, so that we can proceed with the design process effectively and efficiently.
Abstraction can be functional or data abstraction. In functional abstraction, we specify the module by
the function it performs.
In data abstraction, data is hidden behind functions / operations (remember C++ or Java). Data
abstraction forms the basis for object-oriented design.
A system consists of components, which have components of their own. A system is a hierarchy of
components and the highest-level component corresponds to the total system. To design such a
hierarchy there are two approaches - top down and bottom up.
The top down approach starts from the highest-level component of the hierarchy and proceeds through
to lower level. By contrast, bottom up approach starts with lowest level components and proceeds
through higher levels to the top-level component.
This approach starts by identifying major components of the system and decomposing them into their
own lower level components and iterating until the desired level of detail is achieved. Top down design
methods often results in some form of stepwise refinement starting from an abstract design, in each
step the design is refined to a more concrete level until we reach a level where no more refinement is
needed and the design can be implemented directly.
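The idea of stepwise refinement can be sketched as follows (Python, with hypothetical payroll functions and an invented flat 10% deduction rule):

```python
# Top level: the abstract design of a payroll run.
def run_payroll(employees):
    for emp in employees:
        gross = compute_gross(emp)               # refined below
        net = gross - compute_deductions(gross)  # refined below
        print_payslip(emp, net)

# First refinement: how gross pay is computed.
def compute_gross(emp):
    return emp["hours"] * emp["rate"]

# Second refinement: deductions as a flat 10% (illustrative rule only).
def compute_deductions(gross):
    return gross * 0.10

# Final refinement: a concrete, directly implementable step.
def print_payslip(emp, net):
    print(f"{emp['name']}: {net:.2f}")

run_payroll([{"name": "A", "hours": 40, "rate": 10.0}])  # → A: 360.00
```

Each level is a refinement of the abstraction above it, until every step is concrete enough to implement directly.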
For example, LIBRARY INFO SYSTEM is the top (root) of the software system. It can be decomposed further, for instance into STUDENT MEMBER and FACULTY MEMBER components, and then further still into ISSUE and RETURN modules.
This iterative process can go on till we have reached a complete software system, i.e. a system that has been coded completely using some front-end tool (e.g. Java, Visual Basic, VC++, Power Builder, etc.).
Top down design strategies work very well for systems that are built from scratch, i.e. where the developer has no prior version of the system to draw on. We can always start from the main menu and proceed down the hierarchy, designing data entry modules, query modules, etc.
BOTTOM UP STRATEGIES
In the bottom up strategy we start from the bottom and move upwards towards the top of the software. The approach leads to a style of design where we decide how to combine low-level modules to provide larger ones, combine those to provide even larger ones, and so on, till we arrive at one big module which is the whole of the desired program.
This method has one weakness: we need a lot of intuition to decide exactly what functionality a module should provide. If we get it wrong, then at a higher level we will find that the result does not match the requirements, and we have to redesign at a lower level. If a system is to be built from an EXISTING SYSTEM, this approach is more suitable, as it starts from existing modules. For example, suppose a hospital named Dayanand Nursing Home wants to modify its existing computerized system; then we will definitely go with the bottom up approach.
MODULARITY
By the term modularity, we mean that a system is decomposed into manageable components and these
components can be coded separately. By modularity we do not mean that a system is chopped into
smaller parts. We follow certain concepts like coupling and cohesion while breaking the system into
modular pieces.
There are various variants of the term module. They range from subroutines in Fortran, to packages in Ada, to functions in C and Pascal, to classes in Java / C++. A modular design consists of well-defined, manageable units with well-defined interfaces among the units.
Desirable properties of a modular system include low coupling and high cohesion, discussed next.
MODULE COUPLING
Coupling is the measure of the degree of interdependence between modules. Two modules with high
coupling are strongly connected and thus dependent on each other. A change in one module can bring
about a change in other module too and this is not a good practice. A software system should support
LOOSE COUPLING and avoid tight coupling.
1) Data coupling
2) Stamp coupling
3) Control coupling
4) Common coupling
5) Content coupling
The strength of coupling between two modules is influenced by the complexity of the interface, type of
connection and type of communication.
Given two modules A and B we can identify a number of ways in which they can be coupled.
DATA COUPLING
A and B communicate only by passing parameters or data. This is highly desirable, but data should not be passed between procedures unnecessarily: if one module needs only a part of a data structure, it should be passed just that part, not the whole thing.
Data coupling is the most desirable type of coupling in software systems.
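A small sketch of the point above, assuming Python and a hypothetical employee record: the callee receives just the two values it needs, not the whole structure.

```python
employee = {"name": "Asha", "grade": 7, "hours": 160, "rate": 12.5}

# Data coupling: only the needed parts of the record are passed.
def gross_pay(hours, rate):
    return hours * rate

# Stamp coupling, by contrast, would pass the whole record:
# def gross_pay(emp): return emp["hours"] * emp["rate"]

print(gross_pay(employee["hours"], employee["rate"]))  # → 2000.0
```

Because `gross_pay` never sees `name` or `grade`, changes to those fields cannot affect it.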
STAMP COUPLING
When a data structure is used to pass information from one module to another and the data structure
itself is passed, modules are connected by stamp coupling. Modules A and B make use of common DATA
TYPE but perhaps perform different operations on it.
A combination of DATA and STAMP coupling is sometimes used in our software systems.
CONTROL COUPLING
This type of coupling makes use of a control flag for activation purposes. Module A can transfer control to module B by a procedure call; this is the proper way to pass control around the program. When one module passes parameters to control the activity of another module, we say there is control coupling between the two.
Module A:
    go_on = .T.
    ...
    read(go_on)

Module B:
    If go_on = .T.
        Dataentry();
    Else
        Queries();
As seen from the example, go_on is a Boolean variable whose value is read in module A and used in module B as a control flag. A change of its value in module A changes the course of action in module B.
COMMON COUPLING
[Figure: modules A1, A2 and A3 all access shared global variables V1 and V2; for example, one module increments V1, another computes V1 = V2 + A1, and a third resets V1 to 0.]
Modules A and B are common-coupled when they use some shared data area (e.g. global variables). This is quite undesirable, because if we want to change the shared data we have to find and modify all the procedures that access it. This is why global variables are avoided nowadays, for fear of common coupling.
CONTENT COUPLING
It occurs when module A modifies module B (modifying its local data values or instructions). Content coupling is the least desirable form of coupling; we never want one module to modify another.
MODULE COHESION
Cohesion is the measure of the degree to which elements of a module are functionally related.
A strongly cohesive module is the one in which all instructions of a module are related to a single
function. There are many types of cohesion:
1) Functional cohesion
2) Sequential cohesion
3) Communication cohesion
4) Temporal cohesion
5) Logical cohesion
6) Coincidental cohesion
FUNCTIONAL COHESION
This type of cohesion is found when all instructions in a module are performing the same function or
achieving the same goal.
For example, a data entry module will contain instructions that accomplish just one task: entering data into the database. Similarly, a processing module would contain only the instructions needed to do the processing.
SEQUENTIAL COHESION
It occurs when the output of some instructions forms the input to other instructions. It is seen quite often in software systems. For example:
    Initialize variables
    Update inventory
    Update accounts
This is the type of cohesion where the output of one step is the input to the next.
COMMUNICATION COHESION
It occurs when instructions operate on the same data or contribute towards the same output data.
TEMPORAL COHESION
Temporal here relates to time: we can group together instructions that are performed at the same time of day. For example, end-of-day operations may have nothing in common except the fact that they are all "END OF DAY CLEAN UP ACTIVITIES"; time permitting, they could be performed separately.
LOGICAL COHESION
It is found when a module clubs together similar operations. Mere similarity is not a good reason for placing operations in the same module.
COINCIDENTAL COHESION
This type of cohesion occurs when instructions of a module have no apparent relationship between
them. It should never be used.
STRUCTURE CHART
The fundamental tool of structured design is the structure chart. It is a widely used tool for designing a top-down modular design, and DFDs form the basis for drawing structure charts. Structure charts are graphic descriptions: they describe the interaction between modules and the data passed between them. These functional module specifications can in turn be handed to the programmers prior to writing program code.
NOTATIONS:
[Figure: structure-chart notation: a module symbol (box) for each calling and called module, sender/receiver arrows for the data and flags passed between them, a decision symbol, and a loop repetition symbol.]
For example, let us draw a structure chart to get employee details: one module asks for the details and the other module supplies them.
[Figure: the calling module "Get employee details" passes Emp_no down to the called module "Find employee details", which returns the flag "Emp is O.K.".]
Here "Get employee details" is the calling module while "Find employee details" is the called module. An arrow ending in a filled circle is a control flag used for message purposes, while an arrow ending in an open circle is used for passing data.
Another example:
[Figure: the module "Calculate net pay" calls the module "Calculate earnings & deductions".]
Database design in Information systems
In the design of information systems, a combination of a front end tool and a back end tool is used. The front end options include Visual Basic, .Net, Java, C, C++, HTML, etc., in which programmers develop (code) the information system. The back end options include Oracle, Sybase, Informix, Access, etc., in which database designers design the tables (the data structures in an RDBMS). The front end forms make use of the tables created at the back end to do the complete processing. The input forms, output forms, reports, queries and processing forms are all made in some front end tool (these forms have been covered in unit no. 5).
Here we are concerned with database design, so we will study what databases are and how they are created while designing information systems. A DBMS (Database Management System) helps us in database design.
Data Abstraction
1. The major purpose of a database system is to provide users with an abstract view of the data. The system hides certain details of how data is stored, created and maintained.
We'll see later how well this model works to describe real-world situations.
1. The object-oriented model is based on a collection of objects, like the E-R model.
o An object contains values stored in instance variables within the object.
o Unlike the record-oriented models, these values are themselves objects.
o Thus objects contain objects to an arbitrarily deep level of nesting.
o An object also contains bodies of code that operate on the object.
o These bodies of code are called methods.
o Objects that contain the same types of values and the same methods are grouped
into classes.
o A class may be viewed as a type definition for objects.
o Analogy: the programming language concept of an abstract data type.
o The only way in which one object can access the data of another object is by
invoking the method of that other object.
o This is called sending a message to the object.
o Internal parts of the object, the instance variables and method code, are not visible
externally.
o Result is two levels of data abstraction.
Data Independence
1. The ability to modify a scheme definition in one level without affecting a scheme
definition in a higher level is called data independence.
2. There are two kinds:
o Physical data independence
The ability to modify the physical scheme without causing application
programs to be rewritten
Modifications at this level are usually to improve performance
o Logical data independence
The ability to modify the conceptual scheme without causing application
programs to be rewritten
Usually done when logical structure of database is altered
3. Logical data independence is harder to achieve as the application programs are usually
heavily dependent on the logical structure of the data. An analogy is made to abstract data
types in programming languages.
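Extending the programming-language analogy made above, here is a Python sketch (the `MemoryStore` class is hypothetical; real DBMSs achieve this with schemes and views): physical data independence means application code written against a stable interface survives a change of storage scheme.

```python
class MemoryStore:
    """One possible physical scheme: a Python dict.
    It could be swapped for a file- or B-tree-based store without
    touching the application below, as long as put/get are kept."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

def application(store):
    # Written against put/get only: unaffected by the physical scheme.
    store.put("acct-1", 100)
    return store.get("acct-1")

print(application(MemoryStore()))  # → 100
```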
Database Manager
1. The database manager is a program module which provides the interface between the
low-level data stored in the database and the application programs and queries submitted
to the system.
2. Databases typically require lots of storage space (gigabytes). This must be stored on
disks. Data is moved between disk and main memory (MM) as needed.
3. The goal of the database system is to simplify and facilitate access to data. Performance
is important. Views provide simplification.
4. So the database manager module is responsible for
o Interaction with the file manager: Storing raw data on disk using the file system
usually provided by a conventional operating system. The database manager must
translate DML statements into low-level file system commands (for storing,
retrieving and updating data in the database).
o Integrity enforcement: Checking that updates in the database do not violate
consistency constraints (e.g. no bank account balance below $25)
o Security enforcement: Ensuring that users only have access to information they
are permitted to see
o Backup and recovery: Detecting failures due to power failure, disk crash,
software errors, etc., and restoring the database to its state before the failure
o Concurrency control: Preserving data consistency when there are concurrent
users.
5. Some small database systems may lack some of these features, resulting in simpler database managers. (For example, no concurrency control is required on a PC running MS-DOS.) These features are necessary on larger systems.
Database Administrator
1. The database administrator is a person having central control over data and programs
accessing that data. Duties of the database administrator include:
o Scheme definition: the creation of the original database scheme. This involves writing a set of definitions in a DDL (data definition language), compiled by the DDL compiler into a set of tables stored in the data dictionary.
o Storage structure and access method definition: writing a set of definitions
translated by the data storage and definition language compiler
o Scheme and physical organization modification: writing a set of definitions
used by the DDL compiler to generate modifications to appropriate internal
system tables (e.g. data dictionary). This is done rarely, but sometimes the
database scheme or physical organization must be modified.
o Granting of authorization for data access: granting different types of
authorization for data access to various users
o Integrity constraint specification: generating integrity constraints. These are
consulted by the database manager module whenever updates occur.
Overall System Structure
1. Database systems are partitioned into modules for different functions. Some functions
(e.g. file systems) may be provided by the operating system.
2. Components include:
o File manager manages allocation of disk space and data structures used to
represent information on disk.
o Database manager: The interface between low-level data and application
programs and queries.
o Query processor translates statements in a query language into low-level
instructions the database manager understands. (May also attempt to find an
equivalent but more efficient form.)
o DML precompiler converts DML statements embedded in an application
program to normal procedure calls in a host language. The precompiler interacts
with the query processor.
o DDL compiler converts DDL statements to a set of tables containing metadata
stored in a data dictionary.
In addition, several data structures are required as part of the physical system implementation.
Software Testing
Introduction
Because of the fallibility of its human designers and its own abstract, complex nature, software
development must be accompanied by quality assurance activities. It is not unusual for
developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight
control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities
combined. The destructive nature of testing requires that the developer discard preconceived
notions of the correctness of his/her developed software.
Definitions of ‘‘TESTING’’
According to Hetzel: Any activity aimed at evaluating an attribute or capability of a program or system. It
is the measurement of software quality.
According to Beizer: The act of executing tests. Tests are designed and then executed to demonstrate
the correspondence between an element and its specification.
According to IEEE: The process of exercising or evaluating a system or system component by manual or
automated means to verify that it satisfies specified requirements or to identify differences between
expected and actual results.
According to Myers: The process of executing a program with the intent of finding errors.
1950s - Testing is debugging
1960s - Compilers developed
Testing should systematically uncover different classes of errors in a minimum amount of time
and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that
the software appears to be working as stated in the specifications. The data collected through
testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that defects are present.
Unit: testing at the lowest level of functionality (e.g., module, function, procedure, operation, method,
etc.)
Component: testing a collection of units that make up a component (e.g., program, object, package,
task, etc.)
Product: testing a collection of components that make up a product (e.g., subsystem, application, etc.)
Integration: It is the testing that takes place as sub-elements are combined (i.e., integrated) to form
higher-level elements
Regression: It is the testing to detect problems caused by the adverse effects of program change
Acceptance: It is a formal testing conducted to enable the customer to determine whether or not to
accept the system (acceptance criteria may be defined in a contract)
Alpha: It is the actual end-user testing performed within the development environment
Beta: It is the end-user testing performed within the user environment prior to general release
System Test Acceptance: It is the testing conducted to ensure that a system is ‘‘ready’’ for the system-
level test phase
1. Test Planning
2. Test Design
3. Test Implementation
4. Test Execution
5. Execution Analysis
6. Result Documentation
7. Final Reporting
Testing as a Profession
Software testing has become a profession -- a career choice. The testing process has evolved
considerably, and is now a discipline requiring trained professionals.
To be successful today, an SE organization must be adequately staffed with skilled testing professionals who get proper support from management. Testing requires knowledge, disciplined creativity, and ingenuity.
Testing Techniques
Black Box: Testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.
White Box: Testing based on analysis of internal logic (design, code, etc.). (But expected results still
come from requirements.) Also known as structural testing.
1. Guarantee that all independent paths within a module have been exercised at least once,
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.
This method enables the designer to derive a logical complexity measure of a procedural design
and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis
set are guaranteed to execute every statement in the program at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of
the basis set. Each flow graph node represents one or more procedural statements. The edges
between nodes represent flow of control. An edge must terminate at a node, even if the node
does not represent any useful procedural statements. A region in a flow graph is an area bounded
by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic
complexity is a metric that provides a quantitative measure of the logical complexity of a
program. It defines the number of independent paths in the basis set and thus provides an upper
bound for the number of tests that must be performed.
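As a small illustration (the flow graph counts below are hypothetical), cyclomatic complexity can be computed as V(G) = E - N + 2 for a connected flow graph with E edges and N nodes; equivalently, the number of predicate nodes plus one.

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a connected flow graph."""
    return edges - nodes + 2

# Hypothetical flow graph of a loop containing an if/else:
# 7 edges and 6 nodes give V(G) = 3, so the basis set needs 3 paths
# (equivalently: 2 predicate nodes + 1).
print(cyclomatic_complexity(7, 6))  # → 3
```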
Black Box Testing
Testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.
Black box testing concerns techniques for designing tests; it is not a level of testing. Black-box testing
techniques apply to all levels of testing (e.g., unit, component, product, and system).
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories: incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and initialization and termination errors.
White box testing should be performed early in the testing process, while black box testing tends
to be applied during later stages. Test cases should be derived which
1. Reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. Tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can
be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors
and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence
classes for an input condition. An equivalence class represents a set of valid or invalid states for
input conditions.
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes
are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence
class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
Equivalence partitioning can be thought of as exhaustive testing, Las Vegas style... The idea is to partition the input space into a small number of equivalence classes such that, according to the specification, every element of a given class is "handled" (i.e., mapped to an output) "in the same manner." Assuming the program is implemented in such a way that being "handled in the same manner" means that either (a) every element of the class would be mapped to a correct output, or (b) every element of the class would be mapped to an incorrect output (this may not be the case, of course), testing the program with just one element from each equivalence class would be tantamount to exhaustive testing.
Two types of classes are identified: valid (corresponding to inputs deemed valid from the specification) and invalid (corresponding to inputs deemed erroneous from the specification). The technique is also known as input space partitioning.
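Following rule 1 above, a sketch in Python (the 1..100 range and the `classify` helper are invented for illustration): an input range yields one valid and two invalid classes, and one representative per class stands in for the whole class.

```python
# Hypothetical specification: the input must be an integer in 1..100.
def classify(x):
    if x < 1:
        return "invalid-below"   # invalid class 1: values under the range
    if x > 100:
        return "invalid-above"   # invalid class 2: values over the range
    return "valid"               # the single valid class

# One test case per equivalence class is enough, under the assumption
# that every member of a class is handled in the same manner.
for probe in (50, 0, 101):
    print(probe, classify(probe))
```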
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning, since it selects test cases at the edges of a class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above
and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise
the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
A technique based on identifying, and generating test cases to explore boundary conditions.
Boundary conditions are an extremely rich source of errors. Natural language based specifications of
boundaries are often ambiguous, as in ‘‘for input values of X between 0 and 40,...’’ May be applied to
both input and output conditions. Also applicable to white box testing (as will be illustrated later).
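Guideline 1 can be sketched as a small helper. This is an illustrative sketch only; the 1..120 range is a hypothetical input specification:

```python
def boundary_values(a: int, b: int) -> list[int]:
    """Guideline 1: for a range bounded by a and b, test a and b
    plus the values just below and just above each bound."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# Hypothetical input range 1..120:
print(boundary_values(1, 120))  # [0, 1, 2, 119, 120, 121]
```

The off-by-one values (0 and 121 here) are exactly the inputs that expose the common `<` versus `<=` mistakes at a boundary.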
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
The previous section suggests that Cause-Effect Analysis can be viewed as a logical extension of
equivalence partitioning... But it can also be described more simply as a systematic means for generating
test cases to cover different combinations of input ‘‘Causes’’ resulting in output ‘‘Effects.’’
A CAUSE may be thought of as a distinct input condition, or an ‘‘equivalence class’’ of input conditions.
An EFFECT may be thought of as a distinct output condition, or a meaningful change in program state.
Causes and Effects are represented as boolean variables and the logical relationships among them CAN
(but need not) be represented as one or more boolean graphs.
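The four steps above can be sketched for a hypothetical login module with two causes and two effects. Enumerating all cause combinations plays the role of the decision table; each combination is one rule, hence one test case:

```python
from itertools import product

# Hypothetical module: two causes (boolean input conditions)
#   C1: user id is valid,  C2: password is valid
# and two effects derived from them:
#   E1: login succeeds = C1 AND C2,  E2: error shown = NOT (C1 AND C2)

def effects(c1: bool, c2: bool) -> dict[str, bool]:
    return {"E1 login succeeds": c1 and c2,
            "E2 error shown": not (c1 and c2)}

# Each cause combination is one decision-table rule, hence one test case.
for c1, c2 in product([True, False], repeat=2):
    print(f"C1={c1} C2={c2} -> {effects(c1, c2)}")
```

With many causes, not every combination is feasible; in practice the graph's logical constraints prune the table before rules are turned into test cases.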
Test Planning
Although documenting the results of this process is obviously very important, relatively speaking the
plan itself is nothing; the planning activity is everything.
Lower level test plans reflect specific planning for the various levels of testing specified by the MTP.
In general, testing planned first (system) is carried out last; testing planned last (unit) is carried out first.
Note that there could be more than one product test plan. Similarly for component and unit plans.
(Why?)
Test Plan Template
1. Identifier
4. Features to be tested
7. Dependencies
19. Approvals
- Testers
- Marketing reps.
- Usability Experts
- Performance Specialists
- End-user reps.
Test Design Specification: A document specifying the details of the test approach for a software feature
or combination of software features and identifying the associated tests.
Test Case Specification: A document specifying inputs, predicted (expected) results, and a set of
execution conditions for a test item.
Test Procedure Specification: A document specifying a sequence of actions for the execution of a test.
Test Incident Report: A document reporting on any event that occurs during the testing process which
requires investigation.
Test Summary Report: A document summarizing testing activities and results. It also contains an
evaluation of the corresponding test items.
UNIT V
SOFTWARE MAINTENANCE
Software maintenance is an activity performed by every development group once the software is
delivered to the customer. Maintenance occurs after the product is delivered to the user's site,
installed, and in an operational state. Delivery of the software product opens the gate for the
maintenance process. The time and effort required to keep software operational after release are both
important and crucial, consuming about 40-70% of the cost of the entire life cycle. It is a pity that
people pay so little attention to this important aspect of the life cycle.
Maintenance means fixing things that break or wear out. In software, nothing wears out; it is either
wrong from the beginning or we conclude later that we want to do something different. According to
D.A. Lamb, the term is so common that we must live with it. According to S.Y. Stephen, Software
Maintenance is a detailed activity that includes error detection and corrections, enhancements of
capabilities, deletion of obsolete capabilities, and optimization. Since change is unavoidable,
mechanisms must be devised for evaluating, controlling and making modifications. Any work done to
change the software after it is in operation or working state is said to be Maintenance. The aim is to
maintain the value/quality of software over time.
Types of Maintenance
Change is a necessity of life and this change can be due to compulsion or our own choice of modifying
the system. As the user demands and specification of the computer systems change due to changes in
the external world, the software system should also undergo changes. Most of the maintenance
activities are enhancements and modifications requested by the users.
Corrective Maintenance
This type of maintenance refers to modifications initiated by defects in the software product. A problem
can result from design errors, logic errors and coding errors. Defects are also caused by data processing
errors and system performance errors.
In the event of system failure due to an error, steps are taken to restore operation of the software
system. According to K. Bennett, maintenance personnel sometimes resort to emergency fixes known as
patching to reduce pressure from management. This approach may be simple, but it gives rise to a
range of problems, including increased program complexity and unforeseen ripple effects: a change to
one part of a program may affect other program sections in an unpredictable and uncontrollable
manner, completely disrupting the logic of the system. This is often due to a lack of time to carry out
an "impact analysis" before effecting the corrective change.
Adaptive Maintenance
This type of maintenance means modifying the software to match changes in the ever changing outside
environment. The term environment refers to the totality of all conditions and influences which act from
outside upon the software. For example, business rules, government policies, software and hardware
operating platforms. According to R. Brooks, a change to the whole or part of this environment will
require a corresponding modification of the software.
This type of maintenance includes any work initiated as a result of moving the software to a different
hardware or software platform, such as a new compiler or operating system. Any change in government
policy can have implications for the software. For example, if the government increases the HRA
allowance from 30% to 40%, payroll systems will have to undergo adaptive changes.
Perfective Maintenance
Perfective maintenance means improving efficiency or performance of the software product. When the
software becomes operational, the user prefers to experiment with new cases beyond the scope for
which it was initially designed. For example, sometimes people are not satisfied with the GUI of the
software product, so changes can be incorporated to make it more attractive and efficient. Note that
this type of maintenance is not compulsory; the changes are made purely to perfect the system.
Problems faced during Maintenance
The most important problem faced in maintenance is that the developer must first fully understand the
system to be maintained, and then understand the impact of the intended change.
- Often the program was written by another person, or by a group of persons working over the years
  in isolation from each other.
- Often the program is changed by someone who did not understand it clearly, which results in a
  deterioration of the program's original structure and efficiency.
- Program source code listings, even well-organized ones, are not structured to support reading for
  comprehension. Reading or inspecting a program is not like reading an article or book: the
  developer must read back and forth, following the flow of instructions.
- Some problems become clear only when the system is in use. Many users know what they want but
  lack the ability to express it in a form understandable to programmers/analysts; this is a
  communication gap.
- Systems are not designed for change. If there is hardly any scope for change, maintenance will be
  very difficult. The approach of development should therefore be the production of maintainable
  software.
Risk is the possibility of suffering loss. The losses in software development process can be vast
and can bring any business to a standstill. In the chapter on spiral model, we have seen various
risk factors as collected by Barry Boehm. These risk factors play an important role in the success
of any software project.
Here we look into the process of risk management. Its usual steps are risk identification, risk
analysis, risk planning, and risk monitoring.
International standards are supposed to contribute to making life simpler, and to increasing the
reliability and effectiveness of the goods and services we use.
Product Standards
Product standards define the characteristics which all product components should exhibit.
Process Standards
Process standards define how the activities of the software process are to be carried out.
Why Standards?
- Standards encapsulate the best, or most appropriate, practice.
- QA becomes the activity of ensuring that standards have been followed over a project's lifecycle.
- New team members may be added during a project; standards help in integrating them smoothly.
Using Standards
Standards exist for
- Standards documents should include the rationale behind each standardization decision.
- This is probably the most important factor with respect to standards acceptance.
Many countries have national standards bodies, where experts from industry and universities
develop standards for all kinds of engineering problems.
The International Organization for Standardization, ISO, in Geneva is the head organization
of all these national standardization bodies (from some 100 countries, one from each country).
ISO's work results in international agreements which are published as International Standards.
ISO Certification
The International Organization for Standardization does not itself issue certificates to
organizations. Instead, it certifies third-party Certification Bodies, which it authorizes to examine
('audit' or 'assess') organizations that wish to apply for ISO 9000 compliance certification. Both
ISO and the Certification Bodies charge fees for their services.
The applying organization is assessed based on an extensive sample of its sites, functions,
products, services, and processes, and a list of problems ('action requests' or 'non-compliances')
is made known to management. Provided there are no major problems on this list, the certification
body will issue an ISO 900x certificate for each geographical site it has visited, once it receives a
satisfactory improvement plan from management showing how the problems will be
resolved.
Under the 1994 standard, the auditing process could be adequately addressed by performing
'compliance auditing': checking that documented procedures exist and that the organization
actually follows them.
ISO 9000 covers the basic language and expectations of the whole ISO 9000 family.
ISO 9001 is intended to be used by organizations that design, develop, install, and service their
product. It discusses how to meet customer needs effectively. This is the only implementation for
which third-party auditors may grant certification. The latest version is ISO 9001:2000.
ISO 9002 is nearly identical to 9001, except it does not incorporate design and development.
Cancelled in the ISO 9000:2000 version and replaced by the ISO 9001:2000.
ISO 9003 is intended for organizations whose processes are almost exclusive to inspection and
testing of final products. Cancelled in the ISO 9000:2000 version and replaced by the ISO
9001:2000.
ISO 9004 covers performance improvements. It advises on what you should (or could) do in order to
achieve ISO 9001 compliance and customer satisfaction.
There are over 20 different members of the ISO 9000 family, and most of them are not explicitly
referred to as "ISO 900x". For example, parts of the 10,000 range are also considered part of the 9000
family: ISO 10007:1995 talks about how to maintain a large system while changing individual
components. It is highly recommended that a serious look be taken at the ISO website and
documentation for a more in depth look at what each specification entails. Many have seemingly subtle
variations.
To the casual reader, however, it is useful to understand that when someone claims to be ISO 9000
compliant, they are probably using a blanket statement meaning they conform to one of the
specifications in the ISO 9000 family; more often than not, they are referring to ISO 9001,
ISO 9002, or ISO 9003. Certification against ISO 9000:1994 is not valid after 2004.
Over time industry sectors have wanted to standardize their interpretations of the guidelines
within their own marketplace.
The TickIT standard is an interpretation of ISO 9000 produced by the UK Board of Trade to suit the
processes of the Information Technology industry, especially software development.
AS9000, the Aerospace Basic Quality System Standard, is an interpretation of ISO 9000 developed by
mutual agreement of the major aerospace manufacturers.
QS9000 is an interpretation agreed upon by major automotive manufacturers.
ISO 9000 is more about making sure the product -- any product or service -- has been produced
in the most efficient and effective manner possible.
ISO 14000 exists to ensure the product -- any product or service -- has the lowest possible
environmental ramifications.
CMM level 1 (initial): Software development follows few or no rules. The project may go from one
crisis to the next. The success of the project depends on the skills of individual developers; they
may need to finish the project in a heroic effort.
CMM level 2 (repeatable): Software development successes are repeatable. The organization may use
some basic project management to track cost and schedule. The precise implementation differs from
project to project within the organization.
CMM level 3 (defined): Software development across the organization uses the same rules and events
for project management. Crucially, the organization follows this process even under schedule
pressure, ideally because management recognizes that it is the fastest way to finish.
CMM level 4 (managed): Using precise measurements, management can effectively control the
software development effort. In particular, management can identify ways to adjust and adapt
the process to particular projects without measurable losses of quality or deviations from
specifications.
CMM level 5 (optimizing): Quantitative feedback from previous projects is used to improve project
management, usually through pilot projects, building on the measurement skills established at level 4.
The CMM was invented to give military officers a quick way to assess and describe contractors'
abilities to provide correct software on time. It has been a roaring success in this role. So much
so that it caused panic-stricken salespeople to clamor for their engineering organizations to
"implement CMM."
Drawbacks:
The CMM does not describe how to create an effective software development organization. The
traits it measures are in practice very hard to develop in an organization, even though they are
very easy to recognize.
The CMM has been criticized for being overly bureaucratic and for promoting process over
substance. In particular, for emphasizing predictability over service provided to end users. More
commercially successful methodologies have focused not on the capability of the organization to
produce software to satisfy some other organization or a collectively-produced specification, but
on the capability of organizations to satisfy specific end user "use cases".
The CMM's division into levels has also been criticized in that it ignores the possibility that a
single group may exhibit all of the behaviors and may change from behavior to behavior over
time. There is also the implication that a group must move from step to step and that it is
impossible for a project group to move from one to five without going through intermediate
steps.
In general, the CMM and the ISO 9000 are driven by similar issues and have the common
concern of quality and process management.
CMM: Its focus is on the supplier, improving the internal software process.
2. Objective
ISO: It is written for a wide range of industries other than software. ISO 9001 is only 5 pages
long; ISO 9000-3 is 11 pages.
CMM: It is written specifically for the software industry. The CMM is over 500 pages long.
3. Product Development
ISO: It has a broad scope that encompasses hardware, software, processed materials, and services.
CMM: It is specific to software development.
4. Concept
ISO: The ISO 9000 concept is to follow a set of standards to make success repeatable.
CMM: The CMM emphasizes achieving "maturity" and improving the process continuously.
5. Structure
ISO: Certification means that some basic practices are in place; the challenge is only to maintain
certification.
CMM: It is structured as five maturity levels through which an organization progressively improves
its process.
Both have been developed to check the overall capability of a software organization to produce
software in a timely, repeatable fashion. An organization did a CMM-style self-assessment after an
ISO 9000 audit and found that the auditors had mistakenly perceived that certain practices were in
place.
Internal Assessment:
ISO: This model requires auditors, so the value of certification depends on the expertise and
experience of the auditors.
CMM: The model supports on-going self-assessment.
Statistical Information:
ISO: From the ISO 9000 information, no statistical conclusions can be extracted, since companies are
either certified or not certified. A geographical conclusion can be drawn: Europe has the highest
number of certified companies.
CMM: From the CMM's statistical information, the software industry still needs a lot of improvement.
Many companies are still in level one, and very few are in levels four and five.
8. Time needed
ISO: It takes about one and a half years to obtain ISO 9000 certification. This shows that ISO 9000
is aiming for a general improvement.
CMM: It takes an average of two years to move between levels of the CMM. The CMM is aiming for a
strong basis of software improvement.
9. Benefits
These benefits are often accompanied by impressive numbers. However, it should be noted that
companies are more comfortable reporting successes than failures, and that the numbers are
sometimes biased, depending on how they have been calculated.
- Increased productivity
- Better communications