
An empirical analysis of risk components and performance on software projects

Wen-Ming Han, Sun-Jen Huang *

Department of Information Management, National Taiwan University of Science and Technology, 43, Sec. 4, Keelung Road, Taipei, Taiwan

Received 1 October 2005; received in revised form 18 April 2006; accepted 29 April 2006
Available online 14 June 2006
Abstract

Risk management and performance enhancement have always been the focus of software project management studies. The present paper shows the findings from an empirical study, based on 115 software projects, analyzing the probability of occurrence and impact on project performance of 27 software risks grouped into six dimensions. The MANOVA analysis revealed that the probability of occurrence and composite impact differ significantly across the six risk dimensions. Moreover, it indicated that no association exists between the probability of occurrence and the composite impact among the six risk dimensions; this is therefore a crucial consideration for project managers when deciding on a suitable risk management strategy. A pattern analysis of risks across high-, medium-, and low-performance software projects also showed that (1) the requirement risk dimension is the primary area among the six risk dimensions regardless of whether project performance is high, medium, or low; (2) for medium-performance software projects, project managers, aside from giving importance to the requirement risk, must also continually monitor and control the planning and control and the project complexity risks so that project performance can be improved; and (3) improper management of the team, requirement, and planning and control risks is the primary factor contributing to a low-performance project.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Software project management; Software risk management; Risk exposure; Project performance
1. Introduction

Software development is a highly complex and unpredictable activity. The Standish Group CHAOS Report in 2004 indicated that 53% of software projects were unable to deliver on schedule, within budget, and with the required functions, while 18% of software projects were cancelled (Standish Group International, 2004). This stresses the fact that software projects pose various risks and daunting tasks for many organizations (Charette, 2005; Chris and Christine, 2003; Hoffman, 1999; McConnell, 1996). Now, as organizations invest substantial resources and effort in software development, controlling the risks associated with software projects becomes crucial (Kumar, 2002). Hence, understanding the nature of the various software risks and their relationship with project performance has become increasingly important, since the risk management strategy and plan depend on it.
Developing an efficient and suitable risk management strategy depends on understanding two basic components: the probability of occurrence and the impact on project performance (Boehm, 1991). The probability of occurrence of each software risk is different, and its degree of impact on project cost, schedule, and quality is also different. Hence, these two software risk components must be taken into consideration to develop a good software risk management strategy and plan. If not, the real benefits of the risk management activity may be lower than the resources invested (Boehm, 1991; Longstaff et al., 2000; Kumar, 2002; DeMarco and Lister, 2003).
Although several articles have already identified various software risks, we currently lack an understanding of the relative probability of occurrence and the various impacts of different software risks across projects in general. If project managers do not realize the natures of the different kinds of software risks and their relationship to project performance, they cannot develop an effective and appropriate strategy to mitigate or control them.

0164-1212/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.jss.2006.04.030
* Corresponding author. Tel.: +886 2 27376779; fax: +886 2 27376777.
E-mail address: huang@cs.ntust.edu.tw (S.-J. Huang).
www.elsevier.com/locate/jss
The Journal of Systems and Software 80 (2007) 42–50
Many studies have proven that proper management of software risks affects the success of software development projects (Jiang and Klein, 2000; Wallace and Keil, 2004). However, previous studies on software risk management failed to analyze the gap or pattern between software risks and project performance. By analyzing such patterns, for example, we can identify the software risks that negatively affect project performance and thus should be well controlled in order to improve project performance. Such insights regarding the patterns between software risks and project performance can assist project managers in developing a software risk management strategy that mitigates the risks associated with software development projects.
The present paper explores the relationships between software risks and their impact on project performance. The term "project performance" is defined as the degree to which the software project achieves success from the perspective of process and product (Nidumolu, 1996). In the present study, we began to address this issue by analyzing the probability of occurrence and impact on project performance of 27 software risks. These risks are classified into six dimensions, namely user, requirement, project complexity, planning and control, team, and organizational environment. A dataset of 115 historical software projects was analyzed using the MANOVA and Two-Step Cluster Analysis methods. These analyses addressed the following questions: (1) What are the different patterns of both the probability of occurrence and the impact in the six software risk dimensions? (2) What relationships exist among the software risks in the three project performance clusters (high, medium, and low)? (3) What are the top 10 software risks, and what are their associated probability of occurrence and impact?
2. Background

This section discusses the various risks affecting the performance of software projects. Next, three well-known risk assessment methods from the literature are introduced and compared. The Department of Defense (DoD) Risk Management Guide, which was adopted to perform this empirical study, is also illustrated in this section.
2.1. Software risks

Software risks are multifaceted and thus difficult to measure, even though several studies have been conducted since 1981. McFarlan (1981) identified three dimensions of software risks, namely project size, technology experience, and project structure. He suggested that project managers need to develop an aggregated software risk profile for a software project. Boehm (1991) surveyed several experienced project managers and developed a top 10 risk identification checklist for TRW. Barki et al. (1993) conducted a comprehensive review of software risk related studies from 120 projects in 75 organizations, then proposed 35 measures for software risks, which were further classified into five dimensions: technological newness, application size, expertise, application complexity, and organizational environment.
Sumner (2000) used a structured interview to compare the differences between software risks within MIS and ERP projects, and proposed nine risks that were unique to ERP projects. Longstaff et al. (2000) proposed a framework named Hierarchically Holographic Modeling (HHM) and identified seven visions in systems integration that included 32 risks. Kliem (2001) identified 38 risks in BPR projects, which were classified into four categories: people, management, business, and technical. Addison (2003) used a Delphi technique to collect the opinions of 32 experts and proposed 28 risks for E-commerce projects. Meanwhile, Carney et al. (2003) designed a tool called COTS Usage Risk Evaluation (CURE) to predict the risk areas of COTS products, in which four categories comprising 21 risk areas were identified. A summary of the software risks identified in the literature is shown in Table 1. Wallace et al.'s (2004) work defined 27 software risks, which were classified into six dimensions as shown in Table 2. This structure was utilized in this study to investigate the effects of risks on project performance.
2.2. Risk assessment method

A risk assessment method is used to quantify the degree of importance of software risks to project performance. The importance of software risks is usually expressed as both the probability of occurrence and the impact on project performance. Three well-known software risk assessment methods in the literature are SRE, SERIM, and the DoD Guide.
Table 1
Summary of risks in software development

Author (year)              Research area         Dimensions   Software risks
McFarlan (1981)            Common                3            54
Boehm (1991)               Common                0            10
Barki et al. (1993)        Common                5            35
Sumner (2000)              ERP                   6            19
Longstaff et al. (2000)    Systems integration   7            32
Cule et al. (2000)         Common                4            55
Kliem (2001)               BPR                   4            38
Schmidt et al. (2001)      Common                14           33
Houston et al. (2001)      Common                0            29
Murthi (2002)              Common                0            12
Addison (2003)             E-commerce            10           28
Carney et al. (2003)       COTS                  4            21
Wallace et al. (2004)      Common                6            27
The Software Risk Evaluation (SRE) method was developed by the Software Engineering Institute (SEI) to identify, analyze, and develop mitigation strategies for controlling risks (Williams et al., 1999). Version 2.0 of the SRE Method, released in 1999, describes the rating of the probability of occurrence on a scale of one to three, while the impact is defined by four components: cost, schedule, support, and technical performance. The SRE impact components, however, did not consider the team issue, which has become a critical factor in modern software projects (Jiang et al., 2000).
Karolak (1997) proposed a method called Software Engineering Risk Management (SERIM) to predict risks and to provide corrective action in the software development phase. SERIM identified 10 software risks and came up with 81 related questions containing metrics to measure the software risks. The relationships between software risks were identified using three risk elements: cost, schedule, and technical performance. However, as in the case of the SRE Method, SERIM did not include the team issue.
The Risk Management Guide for DoD Acquisition (2003), released by the Secretary of Defense and the Defense Acquisition University (DAU), proposed a risk process model for how an organization identifies, assesses, and manages risks during software development. In this method, five levels were identified to assess the probability of occurrence of software risks, namely "Remote," "Unlikely," "Likely," "Highly likely," and "Near certainty." The impact of software risks represents the degree of influence on project performance. The types of impact of software risks on project performance in the DoD Guide include technical performance, cost, schedule, and team.
A comparison of the various impacts for the above-mentioned software risk assessment methods is shown in Table 3. Since the DoD method covers diverse types of impact and provides clear assessment criteria and descriptions for each probability-of-occurrence level and impact, this method, together with its four types of impact, was adopted in the present study. The assessment of the probability of occurrence and impact of software risks based on the DoD Risk Management Guide is shown in Table 4.
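To make the assessment scheme concrete, the five-level DoD scale and a single respondent's rating of one risk can be represented as a small data structure; the "composite impact" used later in this paper is then simply the mean of the four impact ratings, and risk exposure is probability times composite impact (Boehm, 1991). This is our own illustrative sketch, not tooling from the original study; the class and field names are assumptions.

```python
from dataclasses import dataclass

# Five-level probability-of-occurrence scale from the DoD Risk Management Guide
PROBABILITY_LEVELS = {1: "Remote", 2: "Unlikely", 3: "Likely",
                      4: "Highly likely", 5: "Near certainty"}

@dataclass
class RiskRating:
    """One respondent's assessment of a single software risk (levels 1-5)."""
    probability: int   # probability of occurrence
    technical: int     # impact on technical performance
    cost: int          # impact on cost
    schedule: int      # impact on schedule
    team: int          # impact on team

    def composite_impact(self) -> float:
        # Composite impact = average of the four impact types (see Section 4.1)
        return (self.technical + self.cost + self.schedule + self.team) / 4

    def risk_exposure(self) -> float:
        # Risk exposure = probability x composite impact (Boehm, 1991)
        return self.probability * self.composite_impact()

# Example: a hypothetical risk rated "Likely" with moderate impacts
r = RiskRating(probability=3, technical=3, cost=2, schedule=3, team=2)
print(PROBABILITY_LEVELS[r.probability])  # Likely
print(r.composite_impact())               # 2.5
print(r.risk_exposure())                  # 7.5
```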
3. Data collection

This study is based on data collected via a web-based survey. To increase the friendliness and usability of the survey, the 11 design principles proposed by Dillman et al. (2002) were adopted; for example, "Use a scrolling design that allows respondents to see all questions unless skip patterns are important" is one of them. The survey contained four sections. The first section explained the purpose of this study and encouraged project managers to participate in it. The second section requested the respondents to provide general information on their most recently completed software project. The third section listed the 27 software risks and asked the respondents to assess the probability of occurrence and impact for each risk according to the DoD Risk Assessment Method in Table 4. Finally, the fourth section listed the seven measures used to evaluate project performance. These are organized into process performance
Table 2
Software risks considered in this study

Risk dimension               Abbreviation   Software risk
User                         User1          Users resistant to change
                             User2          Conflict between users
                             User3          Users with negative attitudes toward the project
                             User4          Users not committed to the project
                             User5          Lack of cooperation from users
Requirement                  Reqm1          Continually changing system requirements
                             Reqm2          System requirements not adequately identified
                             Reqm3          Unclear system requirements
                             Reqm4          Incorrect system requirements
Project complexity           Comp1          Project involved the use of new technology
                             Comp2          High level of technical complexity
                             Comp3          Immature technology
                             Comp4          Project involves use of technology that has not been used in prior projects
Planning & Control           P & C1         Lack of an effective project management methodology
                             P & C2         Project progress not monitored closely enough
                             P & C3         Inadequate estimation of required resources
                             P & C4         Poor project planning
                             P & C5         Project milestones not clearly defined
                             P & C6         Inexperienced project manager
                             P & C7         Ineffective communication
Team                         Team1          Inexperienced team members
                             Team2          Inadequately trained development team members
                             Team3          Team members lack specialized skills required by the project
Organizational environment   Org1           Change in organizational management during the project
                             Org2           Corporate politics with negative effect on project
                             Org3           Unstable organizational environment
                             Org4           Organization undergoing restructuring during the project
Table 3
Various impacts of the risk assessment methods

Impact                   SRE   SERIM   DoD Guide
Cost                     X     X       X
Schedule                 X     X       X
Technical performance    X     X       X
Team                                   X
Support                  X
and product performance dimensions (Nidumolu, 1996;
Rai and Al-Hindi, 2000). Due to space limitations, the
whole questionnaire could not be included in the present
paper. However, selected parts of the questionnaire are
presented in Appendix A. The whole questionnaire can
be made available upon request.
A pretest was conducted to verify content validity and readability through personal interviews with five domain experts from academia and the software industry. Based on their feedback, the initial questionnaire was modified to improve the wording. A total of 300 project managers were invited, and 135 surveys were returned within a span of two weeks. After checking the data received, 20 incomplete questionnaires were discarded. As a result, 115 valid survey questionnaires remained, yielding a response rate of 38.33%. The number of years of work experience of the respondents ranged from 1 to 29 years, with an average of 10 years. To test the potential bias of the responses, the t-test method was used to compare the demographics of the early respondents versus the late ones (Armstrong and Overton, 1997). No significant difference was found (p = 0.983). Thus, the result indicated that the collected software projects can be combined for further analysis. Table 5 summarizes the 115 collected software projects.
4. Data analysis and results

4.1. The relationship of the probability of occurrence and impact in six risk dimensions

It is important for project managers to explore patterns in the probability of occurrence and impact among risk dimensions. For example, do different risk dimensions show a similar probability of occurrence? If not, which risk dimensions have a higher probability of occurrence? The MANOVA method is a widely used statistical technique for verifying whether samples from two or more populations have equal means. This method was adopted to test whether or not the means of the risk components (probability of occurrence and composite impact) of the six risk dimensions are statistically different. The value of the composite impact of each software risk was computed by averaging its impacts on technical performance, cost, schedule, and team. Verification of the assumptions of the MANOVA method (i.e., normality and homogeneity) was also performed, and no violation of the assumptions was found in this empirical study.
The results of the MANOVA indicated that the probability of occurrence and composite impact of the software risks of the six risk dimensions were significantly different.¹ The means of the probability of occurrence and composite impact among the six risk dimensions are shown in Table 6. A higher mean for a risk dimension indicates a higher possibility of occurrence or degree of influence. The baseline threshold, computed by averaging the values of the six risk dimensions for each risk component, serves as the comparison criterion. Risk exposure was computed by multiplying the probability of occurrence by the composite impact (Boehm, 1991).
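As a minimal sketch of these computations, the dimension-level figures in Table 6 can be reproduced from the reported means (small rounding differences aside); the code below is our own illustration using the published numbers:

```python
# Dimension-level means from Table 6: (probability of occurrence, composite impact)
dimensions = {
    "User": (2.25, 2.27),
    "Requirement": (2.76, 2.78),
    "Project complexity": (2.42, 2.30),
    "Planning and control": (2.36, 2.51),
    "Team": (2.37, 2.35),
    "Organization environment": (2.09, 2.33),
}

# Risk exposure per dimension = probability x composite impact (Boehm, 1991)
exposure = {name: p * i for name, (p, i) in dimensions.items()}

# Baseline thresholds = averages across the six dimensions
base_prob = sum(p for p, _ in dimensions.values()) / 6      # ~2.38
base_impact = sum(i for _, i in dimensions.values()) / 6    # ~2.42
base_exposure = sum(exposure.values()) / 6                  # ~5.78 (Table 6 reports 5.79)

# Only two dimensions exceed the probability-of-occurrence baseline
above = [name for name, (p, _) in dimensions.items() if p > base_prob]
print(above)  # ['Requirement', 'Project complexity']
```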
Project managers are often asked to indicate what type
of software risks frequently occur. As shown in Table 6,
the baseline threshold of the probability of occurrence is
Table 4
DoD risk assessment method

Level 1, "Remote": technical performance, minimal or no impact; cost, minimal or no impact; schedule, minimal or no impact; team, none.
Level 2, "Unlikely": technical performance, acceptable with some reduction in margin; cost, <5%; schedule, additional resources required but able to meet need dates; team, some impact.
Level 3, "Likely": technical performance, acceptable with significant reduction in margin; cost, 5–7%; schedule, minor slip in key milestone, not able to meet need dates; team, moderate impact.
Level 4, "Highly likely": technical performance, acceptable with no remaining margin; cost, 7–10%; schedule, major slip in key milestone or critical path impacted; team, major impact.
Level 5, "Near certainty": technical performance, unacceptable; cost, >10%; schedule, cannot achieve key team or major program milestone; team, unacceptable.
Table 5
Profile of the collected 115 software projects (N = 115)

Project attribute             Mean    Standard deviation   Minimum   Maximum
Project duration (months)     12      10.84                1         65
Delay time (percentage)       25.42   63.10                0         73.85
Cost overlay (percentage)     12.29   2.44                 3         20
Staff turnover (percentage)   12.14   17.80                0         100
Table 6
The probability of occurrence and composite impact among risk dimensions

Risk dimension             Probability of occurrence   Composite impact   Risk exposure
User                       2.25                        2.27               5.11
Requirement                2.76                        2.78               7.67
Project complexity         2.42                        2.30               5.57
Planning and control       2.36                        2.51               5.92
Team                       2.37                        2.35               5.57
Organization environment   2.09                        2.33               4.87
Baseline threshold         2.38                        2.42               5.79
¹ The MANOVA analysis results can be provided upon request.
2.38. Only two risk dimensions, "requirement" and "project complexity," exceeded the baseline threshold. This indicates that these two dimensions of software risks occur more frequently than the others. They are followed by the "team," "planning and control," and "user" risk dimensions. The "organizational environment" risk dimension does not occur as often as the other dimensions.
In contrast to the probability of occurrence of software risks, information about the composite impact can assist project managers in understanding the degree of negative effect on project performance when these risks occur. When the composite impact of each risk dimension was compared with the baseline threshold (2.42) in Table 6, the "requirement" and "planning and control" risk dimensions showed a higher impact on project performance. The risk dimensions of "team," "organizational environment," "project complexity," and "user" posed a lesser degree of impact on project performance. Further analysis of Table 6 reveals that the orderings of the probability of occurrence and the composite impact among the six risk dimensions were inconsistent. To confirm this point, Kendall's W rank test was used to verify whether these two orderings were significantly different. The result (p = 0.191) indicated that there is no significant association between the probability of occurrence and composite impact among the six risk dimensions.
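This check can be illustrated with a short, self-contained computation. Using the two orderings implied by Table 6 (ranking the six dimensions once by probability of occurrence and once by composite impact), Kendall's coefficient of concordance and its chi-square approximation yield a p-value close to the paper's reported 0.191. This is our own reconstruction from the published means, not the authors' code:

```python
import math

def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m rankings of n items."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]  # rank sums per item
    mean = m * (n + 1) / 2
    s = sum((t - mean) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def chi2_sf_df5(x):
    """Survival function of the chi-square distribution with 5 degrees of
    freedom (closed form for odd df via the complementary error function)."""
    h = x / 2
    return math.erfc(math.sqrt(h)) + math.exp(-h) * (
        2 * math.sqrt(h) + (4 / 3) * h ** 1.5) / math.sqrt(math.pi)

# Ranks of the six dimensions (User, Reqm, Comp, P&C, Team, Org), 1 = lowest mean
by_probability = [2, 6, 5, 3, 4, 1]   # from the probability column of Table 6
by_impact      = [1, 6, 2, 5, 4, 3]   # from the composite-impact column

w = kendalls_w([by_probability, by_impact])
chi2 = 2 * (6 - 1) * w                # chi-square approximation m(n-1)W, df = n-1
p = chi2_sf_df5(chi2)
print(round(w, 3), round(chi2, 2), round(p, 2))  # 0.743 7.43 0.19
```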
Risk exposure represents the overall importance of software risks and thus helps project managers to prioritize them and to develop an appropriate risk mitigation plan accordingly. Table 6 indicates that the "requirement" risk dimension has the highest probability of occurrence and composite impact and hence is the principal source of software risks. The "planning and control" risk dimension has the second highest composite impact but a lower probability of occurrence. In third place are the "project complexity" and "team" risk dimensions, both of which have the same value of risk exposure. However, the degrees of their probability of occurrence and composite impact differ: the "project complexity" risk dimension has a higher probability of occurrence but a lesser composite impact than the "team" risk dimension. This implies that project risk managers need to employ different risk mitigation plans or strategies for these two risk dimensions accordingly. The remaining risk dimensions, "user" and "organizational environment," exhibit even lesser probability of occurrence and composite impact.
An organization or project manager cannot avoid all software risks, since the resources of an organization or a project are limited. Hence, project managers must concentrate on those software risks with a higher risk exposure and adopt a cost-effective risk management strategy. An understanding of the nature of software risks (their probability of occurrence and composite impact on project performance) can assist project managers in adopting appropriate risk mitigation strategies for each of the different dimensions of software risk. Take the "project complexity" risk dimension as an example. It has a higher probability of occurrence but a lower composite impact. Hence, it would be better for project managers to adopt a strategy that reduces its probability of occurrence rather than one that lowers its degree of impact on project performance.
4.2. The relationship between risk dimensions and impact

The second topic of the present investigation focuses on examining the relationship between each risk dimension and its impact on project performance. This addresses the question of whether or not there is a difference among the four types of impact caused by each risk dimension. Such information can assist project managers in creating an appropriate risk management strategy. The MANOVA method was used to analyze the differences in the degree of impact of each risk dimension. The results show that five risk dimensions have statistically significant differences (p < 0.05) among the four types of impact, the exception being "organizational environment" (p = 0.550). The probable reason is that the concept of organizational environment is broader and more abstract, and hence the project managers had difficulty distinguishing the degrees of difference among the various impacts.
Table 7 shows that the degrees of influence of the "requirement" and "planning and control" risk dimensions are above the baseline threshold for all of the various impacts, while the rest are all below the baseline threshold. This reveals that if project managers do not employ an appropriate risk management strategy for each of the risk dimensions, the efficiency or possible benefits of software risk management will be negatively affected.
Table 8 depicts detailed information regarding the probability of occurrence and impact of the 27 software risks. These values can serve as baselines to assist an organization in triggering risk-handling activities and in understanding the strengths and weaknesses of its software development capability.
Table 7
The composite impact of four types among risk dimensions

Risk dimension             Technical performance   Cost   Schedule   Team
User                       2.27                    2.36   2.37       2.07
Requirement                2.82                    2.90   2.90       2.50
Project complexity         2.27                    2.40   2.33       2.18
Planning and control       2.50                    2.57   2.59       2.39
Team                       2.37                    2.39   2.40       2.25
Organization environment   2.32                    2.34   2.38       2.27
Baseline threshold         2.43                    2.49   2.50       2.28

4.3. Patterns in risk dimensions across the levels of project performance

The third topic of the present investigation concerns the relationship between risk dimensions and project performance. Project performance is determined by averaging the five product performance measures and the two process performance measures. In order to have an objective classification of the levels of project performance, the Two-Step Cluster Analysis, a tree-based technique in the SPSS package (SPSS, 2003), was used in this study. The optimal number of clusters was decided through Schwarz's Bayesian Criterion. This resulted in three performance clusters, divided into low (n = 16), medium (n = 45), and high (n = 54) project performance, as shown in Table 9. The values in Table 9 represent the risk exposure of a specific project performance cluster, obtained by averaging the risk exposures of all the projects within that cluster.
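SPSS's Two-Step procedure is not reproduced here; as a rough stand-in, the idea of partitioning projects into three performance clusters can be sketched with a plain one-dimensional k-means. The scores and initial centers below are hypothetical, not data from the study:

```python
def kmeans_1d(scores, centers, iters=20):
    """Plain k-means on one-dimensional performance scores (a simplified
    stand-in for SPSS Two-Step clustering; fixed initial centers keep it
    deterministic)."""
    for _ in range(iters):
        # Assign each score to its nearest current center
        groups = [[] for _ in centers]
        for s in scores:
            idx = min(range(len(centers)), key=lambda i: abs(s - centers[i]))
            groups[idx].append(s)
        # Recompute each center as the mean of its group
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return groups, centers

# Hypothetical averaged performance scores for nine projects (made-up data)
scores = [2.5, 2.8, 3.0, 4.2, 4.4, 4.6, 5.8, 6.0, 6.1]
groups, centers = kmeans_1d(scores, centers=[2.5, 4.4, 6.0])
print([len(g) for g in groups])  # [3, 3, 3] -> low / medium / high clusters
```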
Several interesting findings could be observed from two points of view, namely the pattern of the software risk dimensions and the individual project performance clusters. For the pattern of the software risk dimensions, the radar chart in Fig. 1 provides a clear graphical description showing that all risk dimensions obviously increase from the high-performance to the low-performance clusters. This finding provides empirical evidence that software risks decrease the degree of project performance (Jiang and Klein, 2000; Wallace and Keil, 2004). The area enclosed in the radar chart represents the risk exposure in the six risk dimensions for one of the three software project performance clusters. The radar chart shows that the difference between the regions of the low-performance and medium-performance software projects is much larger than the difference between the medium-performance and high-performance software projects. This suggests that there is an inverse and nonlinear relation between software risks and project performance.
In the high-performance cluster of software projects, the "requirement" risk dimension is the primary factor lowering project performance, since the values of the other five risk dimensions do not exceed the baseline threshold (5.34). The result suggests that managing the requirement risks is critical to achieving the best project performance, because even high-performance projects still have a high degree of the requirement risk dimension. For medium-performance projects, the "requirement," "project complexity," and "planning and control" risk
Table 8
The probability of occurrence and impact among software risks

Software risk   Probability of occurrence   Technical performance   Cost   Schedule   Team

User risk dimension
Users1 (a)      2.21                        2.58                    2.67   2.48       2.08
Users2          2.35                        2.27                    2.40   2.44       2.06
Users3          2.04                        2.17                    2.21   2.30       2.05
Users4          2.41                        2.08                    2.24   2.17       2.00
Users5          2.26                        2.25                    2.29   2.44       2.15

Requirement risk dimension
Reqm1           3.13                        3.12                    3.30   3.27       2.67
Reqm2           2.77                        2.73                    2.90   2.88       2.46
Reqm3           2.75                        2.72                    2.74   2.77       2.42
Reqm4           2.41                        2.69                    2.68   2.70       2.45

Project complexity risk dimension
Comp1           2.64                        2.29                    2.52   2.25       2.19
Comp2           2.44                        2.31                    2.46   2.43       2.23
Comp3           2.02                        2.22                    2.24   2.33       2.08
Comp4           2.57                        2.28                    2.38   2.31       2.21

Planning & Control risk dimension
P & C1          2.57                        2.70                    2.81   2.78       2.52
P & C2          2.43                        2.47                    2.53   2.61       2.37
P & C3          2.43                        2.48                    2.71   2.50       2.39
P & C4          2.38                        2.66                    2.70   2.71       2.43
P & C5          2.13                        2.38                    2.25   2.48       2.27
P & C6          2.15                        2.41                    2.48   2.52       2.32
P & C7          2.41                        2.42                    2.50   2.52       2.43

Team risk dimension
Team1           2.35                        2.34                    2.48   2.42       2.30
Team2           2.46                        2.39                    2.37   2.41       2.27
Team3           2.30                        2.37                    2.34   2.37       2.19

Organization environment risk dimension
Org1            2.18                        2.37                    2.42   2.39       2.28
Org2            2.40                        2.47                    2.48   2.57       2.43
Org3            2.00                        2.17                    2.21   2.25       2.14
Org4            1.77                        2.27                    2.27   2.32       2.23

(a) Please refer to Table 2 for the meaning of the abbreviations in this column.
Table 9
Cluster means for the six risk dimensions

Risk dimension             High performance (n = 54)   Medium performance (n = 45)   Low performance (n = 16)
User                       4.51                        5.65                          7.66
Requirement                6.96                        8.61                          12.63
Project complexity         5.21                        6.79                          7.32
Planning and control       5.30                        6.54                          10.97
Team                       5.21                        6.04                          9.70
Organization environment   4.85                        5.42                          7.00
Baseline threshold         5.34                        6.51                          9.22
Project performance        5.94                        4.41                          2.81
Fig. 1. Risk radar chart (risk exposure of the six risk dimensions for the low-, medium-, and high-performance project clusters).
dimensions are the factors that decrease project performance, because all of their values are above the baseline threshold (6.51). Hence, controlling these risks will help to improve project performance. For low-performance projects, the major risk dimensions are "requirement," "planning and control," and "team," because all their values are above the baseline threshold (9.22). Thus, continually monitoring these risk dimensions is a basic requirement for effective software risk management.
Further analysis of the data in Table 9 yields four important findings. Firstly, the "requirement" risk dimension of all three project performance clusters exceeds the baseline threshold. This means that successfully managing the requirement risk is a basic requirement for achieving the desired software project performance. Secondly, the "project complexity" risk dimension exceeds the baseline threshold only for the medium-performance projects; the other clusters do not show the same situation. Thirdly, only in the high-performance projects does the "planning and control" risk dimension not exceed the baseline threshold. This suggests that the medium and low-performance projects need to carefully manage the planning and control risk so that they have a chance of improving their project performance. Lastly, only the "team" risk dimension in the low-performance projects exceeds the baseline threshold. This suggests that failing to address the risks associated with the development team can negatively affect project performance.
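These four findings can be read directly off Table 9 with a few lines of code. The numbers below are the published cluster means; the comparison logic is our own sketch:

```python
# Cluster means of risk exposure from Table 9
clusters = {
    "high":   {"User": 4.51, "Requirement": 6.96, "Project complexity": 5.21,
               "Planning and control": 5.30, "Team": 5.21,
               "Organization environment": 4.85},
    "medium": {"User": 5.65, "Requirement": 8.61, "Project complexity": 6.79,
               "Planning and control": 6.54, "Team": 6.04,
               "Organization environment": 5.42},
    "low":    {"User": 7.66, "Requirement": 12.63, "Project complexity": 7.32,
               "Planning and control": 10.97, "Team": 9.70,
               "Organization environment": 7.00},
}

# A dimension is "major" for a cluster if it exceeds that cluster's baseline
# threshold, i.e. the mean of its six dimension values
major = {}
for name, dims in clusters.items():
    baseline = sum(dims.values()) / len(dims)
    major[name] = [d for d, v in dims.items() if v > baseline]

print(major["high"])    # ['Requirement']
print(major["medium"])  # ['Requirement', 'Project complexity', 'Planning and control']
print(major["low"])     # ['Requirement', 'Planning and control', 'Team']
```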
4.4. A list of the top 10 risks

To determine the top 10 risks leading to failure in software projects, the risk exposures of all the risks within the six dimensions were computed by multiplying the value of the probability of occurrence with the various degrees of composite impact. Table 10 presents a list of the top 10 software risks and their associated probability of occurrence, degrees of the four various impacts, and risk exposure. Past studies (Boehm, 1991; Mursu et al., 2003; Wallace and Keil, 2004) emphasized the top 10 risks by simply listing them. The present study not only presents the top 10 software risks but also includes information about their probability of occurrence and degree of impact. In this way, project managers can have a better idea of how to manage these software risks.
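For the individual risks, the published exposure values in Table 10 correspond to the probability of occurrence multiplied by the sum of the four impact ratings. Recomputing from the rounded table entries reproduces the published ranking, with small rounding drift in the exposure values; the snippet below, using the top five rows, is our own illustration:

```python
# (probability, technical, cost, schedule, team) for the top risks in Table 10
risks = {
    "Reqm1":  (3.13, 3.12, 3.30, 3.27, 2.67),
    "Reqm2":  (2.77, 2.73, 2.90, 2.88, 2.46),
    "Reqm3":  (2.75, 2.72, 2.74, 2.77, 2.42),
    "P & C1": (2.57, 2.70, 2.81, 2.78, 2.52),
    "Reqm4":  (2.41, 2.69, 2.68, 2.70, 2.45),
}

# Risk exposure = probability x (sum of the four impact ratings)
exposure = {name: p * (t + c + s + m) for name, (p, t, c, s, m) in risks.items()}

# Sort risks by exposure, highest first
ranking = sorted(exposure, key=exposure.get, reverse=True)
print(ranking)                       # ['Reqm1', 'Reqm2', 'Reqm3', 'P & C1', 'Reqm4']
print(round(exposure["Reqm1"], 2))   # ~38.69 (Table 10 lists 38.68)
```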
5. Conclusion

Achieving effective software risk management requires project managers to understand the nature of software risks. Thus, information about the probability of occurrence and impact of software risks on project performance can help project managers develop a better risk management strategy.

This empirical study considered risk information on 115 software projects. The results indicate that a positive correlation does not exist between the probability of occurrence and the impact among the six risk dimensions. The relationship between software risks and project performance was also examined in the high, medium, and low-performance projects. The results show that the "requirement" risk dimension is the principal factor affecting project performance. Aside from this, one of the ways to improve project performance is by properly planning the development activities and reducing the project complexity. Likewise, if the project manager is unable to effectively manage the requirements over the whole project life cycle, and fails to properly plan and monitor the software risk management plan, the software project is likely to perform poorly.
The performance of software projects could also benefit from exploring the relationship between risk components and other important attributes of software projects, such as the type of software system and project duration. These are possible research topics for the future.
Table 10
The top 10 software risks

Rank | Software risk | Probability of occurrence | Technical performance | Cost | Schedule | Team | Risk exposure
-----|---------------|---------------------------|-----------------------|------|----------|------|--------------
1  | Continually changing system requirements (Reqm1)           | 3.13 | 3.12 | 3.30 | 3.27 | 2.67 | 38.68
2  | System requirements not adequately identified (Reqm2)      | 2.77 | 2.73 | 2.90 | 2.88 | 2.46 | 30.32
3  | Unclear system requirements (Reqm3)                        | 2.75 | 2.72 | 2.74 | 2.77 | 2.42 | 29.25
4  | Lack of an effective project management methodology (P & C1) | 2.57 | 2.70 | 2.81 | 2.78 | 2.52 | 27.75
5  | Incorrect system requirements (Reqm4)                      | 2.41 | 2.69 | 2.68 | 2.70 | 2.45 | 25.32
6  | Poor project planning (P & C4)                             | 2.38 | 2.66 | 2.70 | 2.71 | 2.43 | 25.05
7  | Inadequate estimation of required resources (P & C3)       | 2.43 | 2.48 | 2.71 | 2.50 | 2.39 | 24.56
8  | Project involved the use of new technology (Comp1)         | 2.64 | 2.29 | 2.52 | 2.25 | 2.19 | 24.46
9  | Project progress not monitored closely enough (P & C2)     | 2.43 | 2.47 | 2.53 | 2.61 | 2.37 | 24.28
10 | Corporate politics with negative effect on project (Org2)  | 2.40 | 2.47 | 2.48 | 2.57 | 2.43 | 23.85

W.-M. Han, S.-J. Huang / The Journal of Systems and Software 80 (2007) 42–50

Appendix A

Selected parts of the questionnaire

To what degree do you believe the probability of occurrence and four impacts of Team risks exist in your most recently completed software project? Please circle the response that best represents your judgment on the following scales, based on the DoD Risk Management Assessment Method, which is provided in the attached sheet.

Inexperienced team members (Team1)
  Probability of occurrence: Remote 1 2 3 4 5 Near certainty
  Technical performance: Minimal or no impact 1 2 3 4 5 Unacceptable
  Cost: Minimal or no impact 1 2 3 4 5 >10%
  Schedule: Minimal or no impact 1 2 3 4 5 Cannot achieve key team or major program milestone
  Team: None 1 2 3 4 5 Unacceptable

Inadequately trained development team members (Team2)
  Probability of occurrence: Remote 1 2 3 4 5 Near certainty
  Technical performance: Minimal or no impact 1 2 3 4 5 Unacceptable
  Cost: Minimal or no impact 1 2 3 4 5 >10%
  Schedule: Minimal or no impact 1 2 3 4 5 Cannot achieve key team or major program milestone
  Team: None 1 2 3 4 5 Unacceptable

Team members lack specialized skills required by the project (Team3)
  Probability of occurrence: Remote 1 2 3 4 5 Near certainty
  Technical performance: Minimal or no impact 1 2 3 4 5 Unacceptable
  Cost: Minimal or no impact 1 2 3 4 5 >10%
  Schedule: Minimal or no impact 1 2 3 4 5 Cannot achieve key team or major program milestone
  Team: None 1 2 3 4 5 Unacceptable

To what degree do you believe the product and process performance measures exist in your most recently completed software project? Please circle the response that best represents your judgment on the following scales (1 = Strongly disagree, 7 = Strongly agree).

  The application developed is reliable: 1 2 3 4 5 6 7
  The application is easy to maintain: 1 2 3 4 5 6 7
  The users perceive that the system meets intended functional requirements: 1 2 3 4 5 6 7
  The system meets user expectations with respect to response time: 1 2 3 4 5 6 7
  The overall quality of the developed application is high: 1 2 3 4 5 6 7
  The system is completed within budget: 1 2 3 4 5 6 7
  The system is completed within schedule: 1 2 3 4 5 6 7

References

Addison, T., 2003. E-commerce project development risks: evidence from a Delphi survey. International Journal of Information Management 23 (1), 25–40.
Armstrong, J., Overton, T., 1997. Estimating non-response bias in mail surveys. Journal of Marketing Research 15, 396–402.
Barki, H., Rivard, S., Talbot, J., 1993. Toward an assessment of software development risk. Journal of Management Information Systems 10 (2), 203–225.
Boehm, B., 1991. Software risk management: principles and practices. IEEE Software 8 (1), 32–41.
Carney, D., Morris, E., Patrick, R., 2003. Identifying commercial off-the-shelf (COTS) product risks: the COTS usage risk evaluation. Technical Report, CMU/SEI-2003-TR-023.
Charette, R., 2005. Why software fails. IEEE Spectrum 42 (9), 42–49.
Chris, S., Christine, C., 2003. The State of IT Project Management in the UK 2002–2003. University of Oxford, Templeton College.
Cule, P. et al., 2000. Strategies for heading off IS project failure. Information Systems Management 17 (2), 65–73.
DeMarco, T., Lister, T., 2003. Waltzing with Bears: Managing Risk on Software Projects. Dorset House Publishing Company.
Dillman, D. et al., 2002. Influence of plain vs. fancy design on response rates for web surveys. Joint Statistical Meetings. Available from: <http://survey.sesrc.wsu.edu/dillman/papers/asa98ppr.pdf>.
Hoffman, T., 1999. 85% of IT departments fail to meet business needs. Computerworld 33 (41), 24.
Houston, D., Mackulak, G., Collofello, J., 2001. Stochastic simulation of risk factor potential effects for software development risk management. Journal of Systems and Software 59 (3), 247–257.
Jiang, J., Klein, G., 2000. Software development risks to project effectiveness. The Journal of Systems and Software 52, 3–10.
Jiang, J., Klein, G., Means, T., 2000. Project risk impact on software development team performance. Project Management Journal 31 (4), 19–26.
Karolak, D., 1997. Software Engineering Risk Management. IEEE Computer Society.
Kliem, R., 2001. Risk management for business process reengineering projects. Information Systems Management 17 (4), 71–73.
Kumar, R., 2002. Managing risks in IT projects: an options perspective. Information and Management 40, 63–74.
Longstaff, T. et al., 2000. Are we forgetting the risks of information technology? IEEE Computer 33 (12), 43–51.
McConnell, S., 1996. Rapid Development: Taming Wild Software Schedules. Microsoft Press.
McFarlan, F., 1981. Portfolio approach to information systems. Harvard Business Review 59 (5), 142–150.
Mursu, A. et al., 2003. Identifying software project risks in Nigeria: an international comparative study. European Journal of Information Systems 12 (3), 182–194.
Murthi, S., 2002. Preventive risk management for software projects. IT Professional 4 (5), 9–15.
Nidumolu, S.R., 1996. Standardization, requirements uncertainty and software project performance. Information and Management 31 (3), 135–150.
Rai, A., Al-Hindi, H., 2000. The effects of development process modeling and task uncertainty on development quality performance. Information and Management 37 (6), 335–346.
Schmidt, R. et al., 2001. Identifying software project risks: an international Delphi study. Journal of Management Information Systems 17 (4), 5–36.
SPSS Inc., 2003. SPSS 12.0 User Guide and Tutorial.
Standish Group International, Inc., 2004. 2004 Third Quarter Research Report.
Sumner, M., 2000. Risk factors in enterprise-wide/ERP projects. Journal of Information Technology 15 (4), 317–327.
US Department of Defense, 2003. Risk Management Guide for DoD Acquisition, fifth ed. Defense Acquisition University, Defense Systems Management College.
Wallace, L., Keil, M., 2004. Software project risks and their effect on outcomes. Communications of the ACM 47 (4), 68–73.
Wallace, L., Keil, M., Arun, R., 2004. How software project risk affects project performance: an investigation of the dimensions of risk and an exploratory model. Decision Sciences 35 (2), 289–321.
Williams, R. et al., 1999. SRE Method Description & SRE Team Members Notebook, Version 2.0. Software Engineering Institute, Carnegie Mellon University, CMU/SEI-99-TR-029.