
The Journal of Systems and Software 80 (2007) 42–50


www.elsevier.com/locate/jss

An empirical analysis of risk components and performance on software projects

Wen-Ming Han, Sun-Jen Huang *

Department of Information Management, National Taiwan University of Science and Technology, 43, Sec. 4, Keelung Road, Taipei, Taiwan

Received 1 October 2005; received in revised form 18 April 2006; accepted 29 April 2006. Available online 14 June 2006.

Abstract

Risk management and performance enhancement have always been the focus of software project management studies. The present paper reports the findings of an empirical study, based on 115 software projects, that analyzes the probability of occurrence and the impact on project performance of 27 software risks grouped into six dimensions. The MANOVA analysis revealed that both the probability of occurrence and the composite impact differ significantly across the six risk dimensions. Moreover, it indicated that no association exists between the probability of occurrence and the composite impact among the six risk dimensions, which is a crucial consideration for project managers when deciding on a suitable risk management strategy. A pattern analysis of risks across high, medium, and low-performance software projects also showed that (1) the ‘‘requirement’’ risk dimension is the primary risk area among the six dimensions regardless of whether project performance is high, medium, or low; (2) for medium-performance software projects, project managers, aside from giving importance to the ‘‘requirement’’ risk, must also continually monitor and control the ‘‘planning and control’’ and ‘‘project complexity’’ risks so that project performance can be improved; and (3) improper management of the ‘‘team’’, ‘‘requirement’’, and ‘‘planning and control’’ risks is the primary factor contributing to a low-performance project.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Software project management; Software risk management; Risk exposure; Project performance

1. Introduction

Software development is a highly complex and unpredictable activity. The Standish Group CHAOS Report in 2004 indicated that 53% of software projects were unable to deliver on schedule, within budget, and with the required functions, while 18% of software projects were cancelled (Standish Group International, 2004). This underscores that software projects pose various risks and daunting challenges for many organizations (Charette, 2005; Chris and Christine, 2003; Hoffman, 1999; McConnell, 1996). Now that organizations invest substantial resources and effort in software development, controlling the risks associated with software projects becomes crucial (Kumar, 2002).

* Corresponding author. Tel.: +886 2 27376779; fax: +886 2 27376777. E-mail address: huang@cs.ntust.edu.tw (S.-J. Huang).

0164-1212/$ - see front matter 2006 Elsevier Inc. All rights reserved.

doi:10.1016/j.jss.2006.04.030

Hence, understanding the nature of the various software risks and their relationship with project performance has become increasingly important, since the risk management strategy and plan depend on it.

Developing an efficient and suitable risk management strategy depends on understanding two basic components: the probability of occurrence and the impact on project performance (Boehm, 1991). The probability of occurrence of each software risk is different, and its degree of impact on project cost, schedule and quality is also different. Hence, these two software risk components must be taken into consideration to develop a good software risk management strategy and plan. If not, the real benefits of the risk management activity may be lower than the resources invested (Boehm, 1991; Longstaff et al., 2000; Kumar, 2002; DeMarco and Lister, 2003).

Although several articles have already identified various software risks, we currently lack an understanding of the relative probability of occurrence and the various impacts of different software risks across projects in general. If project managers do not understand the nature of the different kinds of software risks and their relationship to project performance, they cannot develop an effective and appropriate strategy to mitigate or control them.

Many studies have shown that proper management of software risks affects the success of software development projects (Jiang and Klein, 2000; Wallace and Keil, 2004). However, previous studies on software risk management did not analyze the gap or pattern between software risks and project performance. For example, such an analysis can identify the software risks that negatively affect project performance and thus should be well controlled in order to improve it. Such insights regarding the patterns between software risks and project performance can assist project managers in developing a software risk management strategy that mitigates the risks associated with software development projects.

The present paper explores the relationships between software risks and their impact on project performance. The term ‘‘project performance’’ is defined as ‘‘the degree to which the software project achieves success in the perspective of process and product’’ (Nidumolu, 1996). In the present study, we began to address this issue by analyzing the probability of occurrence and the impact on project performance of 27 software risks. These risks are classified into six dimensions, namely user, requirement, project complexity, planning and control, team, and organizational environment. A dataset from 115 historical software projects was analyzed using the MANOVA and Two-Step Cluster Analysis methods. These analyses addressed the following questions: (1) What are the different patterns for both the probability of occurrence and the impact in the six software risk dimensions? (2) What relationships exist among the software risks in the three project performance clusters (high, medium, and low)? (3) What are the top 10 software risks, and what are their associated probability of occurrence and impact?

2. Background

This section discusses the various risks affecting the performance of software projects. Next, three well-known risk assessment methods in the literature are introduced and compared. The Department of Defense (DoD) Risk Management Guide that was adopted to perform this empirical study is also described.

2.1. Software risks

Software risks are multifaceted and thus difficult to measure, even though several studies have been conducted since 1981. McFarlan (1981) identified three dimensions of software risks, namely project size, technology experience, and project structure. He suggested that project managers need to develop an aggregated software risk profile for a software project. Boehm (1991) surveyed several experienced project managers and developed a top 10 risk identification checklist for TRW. Barki et al. (1993) conducted a comprehensive review of software risk related studies from 120 projects in 75 organizations, then proposed 35 measures for software risks which were further classified into five dimensions: technological newness, application size, expertise, application complexity and organizational environment. Sumner (2000) used a structured interview to compare the differences between software risks within MIS and ERP projects, and proposed nine risks that were unique to ERP projects. Longstaff et al. (2000) proposed a framework named Hierarchically Holographic Modeling (HHM) and identified seven visions in systems integration that included 32 risks. Kliem (2001) identified 38 risks in BPR projects which were classified into four categories: people, management, business, and technical. Addison (2003) used a Delphi technique to collect the opinions of 32 experts and proposed 28 risks for e-commerce projects. Meanwhile, Carney et al. (2003) designed a tool called COTS Usage Risk Evaluation (CURE) to predict the risk areas of COTS products, in which they identified four categories comprising 21 risk areas. A summary of the software risks identified in the literature is shown in Table 1. Wallace et al. (2004) defined 27 software risks which were classified into six dimensions, as shown in Table 2. This structure was utilized in this study to investigate the effects of risks on project performance.

2.2. Risk assessment method

A risk assessment method is used to quantify the degree of importance of software risks to project performance. The importance of software risks is usually expressed as both the probability of occurrence and the impact on project performance. Three well-known software risk assessment methods in the literature are SRE, SERIM and the DoD Guide.

Table 1
Summary of risks in software development

Author (year)            Research area         Dimensions   Software risks
McFarlan (1981)          Common                3            54
Boehm (1991)             Common                0            10
Barki et al. (1993)      Common                5            35
Sumner (2000)            ERP                   6            19
Longstaff et al. (2000)  Systems integration   7            32
Cule et al. (2000)       Common                4            55
Kliem (2001)             BPR                   4            38
Schmidt et al. (2001)    Common                14           33
Houston et al. (2001)    Common                0            29
Murthi (2002)            Common                0            12
Addison (2003)           E-commerce            10           28
Carney et al. (2003)     COTS                  4            21
Wallace et al. (2004)    Common                6            27


Table 2
Software risks considered in this study

Risk dimension               Abbreviation   Software risk
User                         User1          Users resistant to change
                             User2          Conflict between users
                             User3          Users with negative attitudes toward the project
                             User4          Users not committed to the project
                             User5          Lack of cooperation from users
Requirement                  Reqm1          Continually changing system requirements
                             Reqm2          System requirements not adequately identified
                             Reqm3          Unclear system requirements
                             Reqm4          Incorrect system requirements
Project complexity           Comp1          Project involved the use of new technology
                             Comp2          High level of technical complexity
                             Comp3          Immature technology
                             Comp4          Project involves use of technology that has not been used in prior projects
Planning & Control           P & C1         Lack of an effective project management methodology
                             P & C2         Project progress not monitored closely enough
                             P & C3         Inadequate estimation of required resources
                             P & C4         Poor project planning
                             P & C5         Project milestones not clearly defined
                             P & C6         Inexperienced project manager
                             P & C7         Ineffective communication
Team                         Team1          Inexperienced team members
                             Team2          Inadequately trained development team members
                             Team3          Team members lack specialized skills required by the project
Organizational environment   Org1           Change in organizational management during the project
                             Org2           Corporate politics with negative effect on project
                             Org3           Unstable organizational environment
                             Org4           Organization undergoing restructuring during the project

The Software Risk Evaluation (SRE) method was developed by the Software Engineering Institute (SEI) to identify, analyze, and develop mitigation strategies for controlling risks (Williams et al., 1999). Version 2.0 of the SRE method, released in 1999, rates the probability of occurrence on a scale of one to three, while the impact is defined by four components: cost, schedule, support and technical performance. The SRE impact components, however, did not consider the ‘‘team’’ issue, which has become a critical factor in modern software projects (Jiang et al., 2000).

Karolak (1997) proposed a method called Software Engineering Risk Management (SERIM) to predict risks and to provide corrective action in the software development phase.

Table 3
Various impacts of the risk assessment methods

Impact                   SRE   SERIM   DoD Guide
Cost                     X     X       X
Schedule                 X     X       X
Technical performance    X     X       X
Team                                   X
Support                  X

The SERIM identified 10 software risks and came up with 81 related questions that contained metrics to measure the software risks. The relationships between software risks were identified using three risk elements: cost, schedule and technical performance. However, as in the case of the SRE method, SERIM did not include the ‘‘team’’ issue.

The Risk Management Guide for DoD Acquisition (2003), released by the Secretary of Defense and the Defense Acquisition University (DAU), proposed a risk process model for how an organization identifies, assesses, and manages risks during software development. In this method, five levels were identified to assess the probability of occurrence of software risks, namely ‘‘Remote’’, ‘‘Unlikely’’, ‘‘Likely’’, ‘‘Highly likely’’ and ‘‘Near certainty’’. The impact of a software risk represents its degree of influence on project performance; the types of impact on project performance in the DoD Guide include technical performance, cost, schedule and team. A comparison of the various impacts covered by the above-mentioned software risk assessment methods is shown in Table 3. Since the DoD method covers diverse types of impact and provides clear assessment criteria and descriptions for each probability scale of occurrence and impact, it was adopted in the present study. Hence, the four types of impact defined by the DoD Risk Management Guide were adopted in this study. The assessment of the probability of occurrence and impact of software risks based on the DoD Risk Management Guide is shown in Table 4.
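For illustration only, the scheme just described can be sketched in code. The sketch below is not part of the original study; it simply shows how a single risk item might be recorded on the DoD-style five-level probability scale and the four impact types, and how the two risk components can be combined into an exposure figure (Boehm, 1991). All names and ratings in the example are hypothetical.

```python
# Minimal sketch (not from the paper): scoring one risk item on a DoD-style
# five-level probability scale and four impact types, then combining them.
from dataclasses import dataclass

PROBABILITY_LEVELS = {1: "Remote", 2: "Unlikely", 3: "Likely",
                      4: "Highly likely", 5: "Near certainty"}
IMPACT_TYPES = ("technical_performance", "cost", "schedule", "team")

@dataclass
class RiskAssessment:
    name: str
    probability: int        # 1-5, per Table 4
    impacts: dict           # impact type -> 1-5 rating, per Table 4

    def composite_impact(self) -> float:
        # Average of the four impact ratings, as used later in Section 4.1.
        return sum(self.impacts[t] for t in IMPACT_TYPES) / len(IMPACT_TYPES)

    def risk_exposure(self) -> float:
        # Boehm (1991): exposure = probability of occurrence x impact.
        return self.probability * self.composite_impact()

# Hypothetical ratings from one respondent for one risk item.
reqm1 = RiskAssessment(
    name="Continually changing system requirements (Reqm1)",
    probability=3,
    impacts={"technical_performance": 3, "cost": 4, "schedule": 3, "team": 2},
)
print(PROBABILITY_LEVELS[reqm1.probability],
      reqm1.composite_impact(), reqm1.risk_exposure())
```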

3. Data collection

This study is based on data collected via a web-based survey. To increase the friendliness and usability of the survey, the 11 design principles proposed by Dillman et al. (2002) were adopted; for example, ‘‘Use a scrolling design that allows respondents to see all questions unless skip patterns are important’’ is one of them. The survey contained four sections. The first section explained the purpose of this study and encouraged project managers to participate in it. The second section requested the respondents to provide general information on their most recently completed software project. The third section listed 27 software risks and asked the respondents to assess the probability of occurrence and impact for each risk according to the DoD risk assessment method in Table 4. Finally, the fourth section listed seven measures used to evaluate project performance. These are organized into process performance and product performance dimensions (Nidumolu, 1996; Rai and Al-Hindi, 2000).


Table 4
DoD risk assessment method

Level   Probability of occurrence   Impact (technical performance; cost; schedule; team)
1       Remote                      Minimal or no impact; minimal or no impact; minimal or no impact; none
2       Unlikely                    Acceptable with some reduction in margin; <5%; additional resources required, able to meet need dates; some impact
3       Likely                      Acceptable with significant reduction in margin; 5–7%; minor slip in key milestone, not able to meet need dates; moderate impact
4       Highly likely               Acceptable, no remaining margin; 7–10%; major slip in key milestone or critical path impacted; major impact
5       Near certainty              Unacceptable; >10%; cannot achieve key team or major program milestone; unacceptable

Due to space limitations, the whole questionnaire could not be included in the present paper; however, selected parts of the questionnaire are presented in Appendix A, and the whole questionnaire can be made available upon request. A pretest was conducted to verify content validity and readability through personal interviews with five domain experts from academia and the software industry. Based on their feedback, the initial questionnaire was modified to improve the wording. A total of 300 project managers were invited, and 135 surveys were returned within a span of two weeks. After checking the data received, 20 incomplete questionnaires were discarded. As a result, 115 valid survey questionnaires remained, yielding a response rate of 38.33%. The number of years of work experience of the respondents ranged from 1 to 29 years, with an average of 10 years. To test the potential bias of the responses, the t-test method was used to compare the demographics of the early respondents versus the late ones (Armstrong and Overton, 1997). No significant difference was found (p = 0.983); thus, the collected software projects can be combined for further analysis. Table 5 summarizes the 115 collected software projects.
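As a rough illustration of the non-response bias check mentioned above (comparing early with late respondents), the following sketch uses SciPy's two-sample t-test. The data frame, column names and values are hypothetical and merely stand in for the demographic variables actually compared in the study.

```python
# Illustrative sketch of an early- vs. late-respondent comparison used to
# check non-response bias; the data and column names are hypothetical.
import pandas as pd
from scipy import stats

responses = pd.DataFrame({
    "years_experience": [12, 8, 15, 6, 10, 9, 11, 14, 7, 13],
    "wave": ["early"] * 5 + ["late"] * 5,   # first vs. second half of returns
})

early = responses.loc[responses["wave"] == "early", "years_experience"]
late = responses.loc[responses["wave"] == "late", "years_experience"]

t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # a large p suggests no response bias
```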

4. Data analysis and results

4.1. The relationship of the probability of occurrence and impact in six risk dimensions

It is important for project managers to explore patterns in the probability of occurrence and impact among risk dimensions. For example, do different risk dimensions show similar probabilities of occurrence? If not, which risk dimensions have a higher probability of occurrence?

Table 5
Profile of the collected 115 software projects

Project attribute             Mean    Standard deviation   Minimum   Maximum
Project duration (months)     12      10.84                1         65
Delay time (percentage)       25.42   63.10                0         73.85
Cost overlay (percentage)     12.29   2.44                 3         20
Staff turnover (percentage)   12.14   17.80                0         100

N = 115 projects

Table 6
The probability of occurrence and composite impact among risk dimensions

Risk dimension             Probability of occurrence   Composite impact   Risk exposure
User                       2.25                        2.27               5.11
Requirement                2.76                        2.78               7.67
Project complexity         2.42                        2.30               5.57
Planning and control       2.36                        2.51               5.92
Team                       2.37                        2.35               5.57
Organization environment   2.09                        2.33               4.87
Baseline threshold         2.38                        2.42               5.79

The MANOVA method is a widely-used statistical technique for verifying whether samples from two or more populations have equal means. This method was adopted to test whether or not the means of the risk components (probability of occurrence and composite impact) of the six risk dimensions are statistically different. The value of the composite impact of each software risk was computed by averaging its impacts on technical performance, cost, schedule and team. The assumptions of the MANOVA method (i.e., normality and homogeneity) were also verified, and no violation of the assumptions was found in this empirical study. The results of the MANOVA indicated that the probability of occurrence and composite impact of the software risks of the six risk dimensions were significantly different.¹ The means of the probability of occurrence and composite impact among the six risk dimensions are shown in Table 6. A higher mean for a risk dimension indicates a higher possibility of occurrence or degree of influence. The baseline threshold, which was computed by averaging the values of the six risk dimensions for each risk component, serves as the comparison criterion. Risk exposure was computed by multiplying the probability of occurrence by the composite impact (Boehm, 1991).

1 The MANOVA analysis results can be provided upon request.
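The computations behind Table 6 can be summarized in a short sketch. It assumes a long-format data frame with one row per project and risk item and with hypothetical column names; it is meant only to make the definitions of composite impact, risk exposure and baseline threshold concrete, not to reproduce the authors' analysis.

```python
# Sketch of the Table 6 quantities, assuming a long-format DataFrame with
# hypothetical columns: 'dimension', 'probability', and the four impact ratings.
import pandas as pd

def dimension_summary(df: pd.DataFrame) -> pd.DataFrame:
    impact_cols = ["technical_performance", "cost", "schedule", "team"]
    df = df.copy()
    # Composite impact: average of the four impact ratings for each response.
    df["composite_impact"] = df[impact_cols].mean(axis=1)

    # Mean probability and composite impact per risk dimension.
    summary = df.groupby("dimension")[["probability", "composite_impact"]].mean()

    # Risk exposure (Boehm, 1991): probability x composite impact.
    summary["risk_exposure"] = summary["probability"] * summary["composite_impact"]

    # Baseline threshold: average of the six dimension means for each component.
    summary.loc["Baseline threshold"] = summary.mean()
    return summary.round(2)

# Example (hypothetical data):
# print(dimension_summary(survey_long))
```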


Project managers are often asked to indicate which types of software risks frequently occur. As shown in Table 6, the baseline threshold of the probability of occurrence is 2.38. Only the two risk dimensions of ‘‘requirement’’ and ‘‘project complexity’’ exceeded the baseline threshold, indicating that these two dimensions of software risks occur more frequently than the others. They are followed by the ‘‘team’’, ‘‘planning and control’’ and ‘‘user’’ risk dimensions. The ‘‘organizational environment’’ risk dimension does not occur as often as the other dimensions. In contrast to the probability of occurrence, information about the composite impact can assist project managers in understanding the degree of negative effect on project performance when these risks occur. When the composite impact of each risk dimension was compared with the baseline threshold (2.42) in Table 6, the ‘‘requirement’’ and ‘‘planning and control’’ risk dimensions showed a higher impact on project performance, while the ‘‘team’’, ‘‘organizational environment’’, ‘‘project complexity’’ and ‘‘user’’ dimensions posed a lesser degree of impact. Further analysis of Table 6 reveals that the orderings of the probability of occurrence and composite impact among the six risk dimensions were inconsistent. To confirm this point, Kendall's W rank test was used to verify whether these two orderings were significantly associated. The result (p = 0.191) indicated that there is no significant association between the probability of occurrence and composite impact among the six risk dimensions.

Risk exposure represents the overall importance of software risks and thus helps project managers to prioritize them and to develop an appropriate risk mitigation plan accordingly. Table 6 indicates that the ‘‘requirement’’ risk dimension has the highest probability of occurrence and composite impact and hence is the principal source of software risks. The ‘‘planning and control’’ risk dimension has the second highest composite impact but a lower probability of occurrence. In third place are the ‘‘project complexity’’ and ‘‘team’’ risk dimensions, both of which have the same value of risk exposure; however, the degrees of their probability of occurrence and composite impact differ. The ‘‘project complexity’’ risk dimension has a higher probability of occurrence but a lesser composite impact as compared to the ‘‘team’’ risk dimension. This implies that project managers need to employ different risk mitigation plans or strategies for these two risk dimensions.  The remaining risk dimensions, ‘‘user’’ and ‘‘organizational environment’’, exhibit even lower probability of occurrence and composite impact.

An organization or project manager cannot avoid all software risks, since the resources of an organization or a project are limited. Hence, project managers must concentrate on those software risks with a higher risk exposure and adopt a cost-effective risk management strategy. An understanding of the nature of software risks, that is, their probability of occurrence and composite impact on project performance, can assist project managers in adopting appropriate risk mitigation strategies for each of the different dimensions of software risk.

Take the ‘‘project complexity’’ risk dimension as an example: it has a higher probability of occurrence but a lower composite impact. Hence, it would be better for project managers to adopt a strategy that reduces its probability of occurrence rather than one that lowers its degree of impact on project performance.
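For readers who wish to reproduce the concordance check between the two orderings, a minimal sketch is given below. It applies Kendall's W to the dimension means reported in Table 6 and uses a standard chi-square approximation for the p-value; the exact test procedure used in the study may differ, although the approximation here yields a value close to the reported p = 0.191.

```python
# Rough sketch of a Kendall's W concordance check between the two orderings
# (probability of occurrence vs. composite impact) over the six dimensions.
# Figures come from Table 6; the chi-square significance approximation is one
# of several options and need not match the study's exact procedure.
import numpy as np
from scipy import stats

probability = np.array([2.25, 2.76, 2.42, 2.36, 2.37, 2.09])       # Table 6, per dimension
composite_impact = np.array([2.27, 2.78, 2.30, 2.51, 2.35, 2.33])

ratings = np.vstack([probability, composite_impact])                # 2 "raters" x 6 items
ranks = np.apply_along_axis(stats.rankdata, 1, ratings)

m, n = ranks.shape
rank_sums = ranks.sum(axis=0)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (m ** 2 * (n ** 3 - n))                                # Kendall's W in [0, 1]

chi2 = m * (n - 1) * w                                              # large-sample approximation
p_value = stats.chi2.sf(chi2, df=n - 1)
print(f"W = {w:.3f}, p = {p_value:.3f}")                            # roughly W = 0.74, p = 0.19
```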

4.2. The relationship between risk dimensions and impact

The second topic of the present investigation focuses on examining the relationship between each risk dimension and its impact on project performance. This addresses the question of whether or not there is a difference among the four types of impact caused by each risk dimension. Such information can assist project managers in creating an appropriate risk management strategy. The MANOVA method was used to analyze the differences in the degree of impact of each risk dimension. The results show that five risk dimensions have statistically significant differences (p < 0.05) across the four types of impact, the exception being ‘‘organizational environment’’ (p = 0.550). The probable reason is that the concept of ‘‘organizational environment’’ is more abstract and broader, and hence the project managers had difficulty distinguishing the degrees of its various impacts.

Table 7 shows that the degrees of influence of the ‘‘requirement’’ and ‘‘planning and control’’ risk dimensions are above the baseline threshold for all of the various impacts, while the rest are all below the baseline threshold. This reveals that if project managers do not employ an appropriate risk management strategy for each of the risk dimensions, the efficiency or possible benefits of software risk management will be negatively affected.

Table 8 gives detailed information on the probability of occurrence and impact of the 27 software risks. This can be used as a baseline to assist an organization in triggering risk-handling activities and in understanding the strengths and weaknesses of its software development capability.

4.3. Patterns in risk dimensions across the levels of project performance

The third topic of the present investigation concerns the relationship between risk dimensions and project performance.

Table 7
The composite impact of four types among risk dimensions

Risk dimension             Technical performance   Cost   Schedule   Team
User                       2.27                    2.36   2.37       2.07
Requirement                2.82                    2.90   2.90       2.50
Project complexity         2.27                    2.40   2.33       2.18
Planning and control       2.50                    2.57   2.59       2.39
Team                       2.37                    2.39   2.40       2.25
Organization environment   2.32                    2.34   2.38       2.27
Baseline threshold         2.43                    2.49   2.50       2.28


Table 8
The probability of occurrence and impact among software risks

Software risk a   Probability of occurrence   Technical performance   Cost   Schedule   Team

User risk dimension
User1             2.21                        2.58                    2.67   2.48       2.08
User2             2.35                        2.27                    2.40   2.44       2.06
User3             2.04                        2.17                    2.21   2.30       2.05
User4             2.41                        2.08                    2.24   2.17       2.00
User5             2.26                        2.25                    2.29   2.44       2.15

Requirement risk dimension
Reqm1             3.13                        3.12                    3.30   3.27       2.67
Reqm2             2.77                        2.73                    2.90   2.88       2.46
Reqm3             2.75                        2.72                    2.74   2.77       2.42
Reqm4             2.41                        2.69                    2.68   2.70       2.45

Project complexity risk dimension
Comp1             2.64                        2.29                    2.52   2.25       2.19
Comp2             2.44                        2.31                    2.46   2.43       2.23
Comp3             2.02                        2.22                    2.24   2.33       2.08
Comp4             2.57                        2.28                    2.38   2.31       2.21

Planning & Control risk dimension
P & C1            2.57                        2.70                    2.81   2.78       2.52
P & C2            2.43                        2.47                    2.53   2.61       2.37
P & C3            2.43                        2.48                    2.71   2.50       2.39
P & C4            2.38                        2.66                    2.70   2.71       2.43
P & C5            2.13                        2.38                    2.25   2.48       2.27
P & C6            2.15                        2.41                    2.48   2.52       2.32
P & C7            2.41                        2.42                    2.50   2.52       2.43

Team risk dimension
Team1             2.35                        2.34                    2.48   2.42       2.30
Team2             2.46                        2.39                    2.37   2.41       2.27
Team3             2.30                        2.37                    2.34   2.37       2.19

Organization environment risk dimension
Org1              2.18                        2.37                    2.42   2.39       2.28
Org2              2.40                        2.47                    2.48   2.57       2.43
Org3              2.00                        2.17                    2.21   2.25       2.14
Org4              1.77                        2.27                    2.27   2.32       2.23

a Please refer to Table 2 for the meaning of the abbreviations in this column.

Table 9
Cluster means for the six risk dimensions

Risk dimension             High performance (n = 54)   Medium performance (n = 45)   Low performance (n = 16)
User                       4.51                        5.65                          7.66
Requirement                6.96                        8.61                          12.63
Project complexity         5.21                        6.79                          7.32
Planning and control       5.30                        6.54                          10.97
Team                       5.21                        6.04                          9.70
Organization environment   4.85                        5.42                          7.00
Baseline threshold         5.34                        6.51                          9.22
Project performance        5.94                        4.41                          2.81

Fig. 1. Risk radar chart.

Project performance is determined by averaging the five product performance measures and the two process performance measures. In order to obtain an objective classification of the levels of project performance, the Two-Step Cluster Analysis, a tree-based technique in the SPSS package, was used in this study (SPSS, 2003). The optimal number of clusters was decided through Schwarz's Bayesian Criterion. This resulted in three performance clusters, divided into low (n = 16), medium (n = 45) and high (n = 54) project performance, as shown in Table 9. The values in Table 9 represent the risk exposure of a specific project performance cluster, obtained by averaging the risk exposures of all the projects within that cluster.
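SPSS's Two-Step Cluster procedure has no direct open-source equivalent, but the idea of letting Schwarz's Bayesian Criterion choose the number of performance clusters can be approximated as follows. The sketch fits Gaussian mixtures to a hypothetical vector of per-project performance scores and keeps the model with the lowest BIC; it is an analogy to, not a reimplementation of, the procedure used in the study.

```python
# Stand-in for SPSS TwoStep: pick the number of clusters for a hypothetical
# per-project performance score by minimising BIC over Gaussian mixtures,
# letting the Bayesian Criterion play the role it plays in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_by_bic(performance_scores: np.ndarray, max_clusters: int = 6):
    X = performance_scores.reshape(-1, 1)
    best_model, best_bic = None, np.inf
    for k in range(1, max_clusters + 1):
        model = GaussianMixture(n_components=k, random_state=0).fit(X)
        bic = model.bic(X)
        if bic < best_bic:
            best_model, best_bic = model, bic
    return best_model, best_model.predict(X)

# Example with simulated scores (not the study's data):
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2.8, 0.3, 16),
                         rng.normal(4.4, 0.3, 45),
                         rng.normal(5.9, 0.3, 54)])
model, labels = cluster_by_bic(scores)
print(model.n_components, np.bincount(labels))
```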

Several interesting findings could be observed from two points of view, namely the pattern of the software risk dimensions and the individual project performance cluster. For the pattern of the software risk dimensions, the radar chart shown in Fig. 1 provides a clear graphical description showing that all risk dimensions obviously increase as one moves from the high-performance to the low-performance cluster. This finding provides empirical evidence that software risks decrease the degree of project performance (Jiang and Klein, 2000; Wallace and Keil, 2004). The area enclosed in the radar chart represents the risk exposure in the six risk dimensions for one of the three software project performance clusters. The radar chart shows that the difference between the regions of the low-performance and medium-performance software projects is much larger than the difference between the medium-performance and high-performance software projects. This suggests that there is an inverse and nonlinear relation between software risks and project performance. In the high-performance cluster, the ‘‘requirement’’ risk dimension is the primary factor lowering project performance, since the values of the other five risk dimensions do not exceed the baseline threshold (5.34). The result suggests that managing the ‘‘requirement’’ risks is critical to achieving the best project performance, because even the high-performance projects still have a high degree of the requirement risk dimension.


Table 10
The top 10 software risks

Rank   Software risk                                                 Probability of occurrence   Technical performance   Cost   Schedule   Team   Risk exposure
1      Continually changing system requirements (Reqm1)              3.13                        3.12                    3.30   3.27       2.67   38.68
2      System requirements not adequately identified (Reqm2)         2.77                        2.73                    2.90   2.88       2.46   30.32
3      Unclear system requirements (Reqm3)                           2.75                        2.72                    2.74   2.77       2.42   29.25
4      Lack of an effective project management methodology (P & C1)  2.57                        2.70                    2.81   2.78       2.52   27.75
5      Incorrect system requirements (Reqm4)                         2.41                        2.69                    2.68   2.70       2.45   25.32
6      Poor project planning (P & C4)                                2.38                        2.66                    2.70   2.71       2.43   25.05
7      Inadequate estimation of required resources (P & C3)          2.43                        2.48                    2.71   2.50       2.39   24.56
8      Project involved the use of new technology (Comp1)            2.64                        2.29                    2.52   2.25       2.19   24.46
9      Project progress not monitored closely enough (P & C2)        2.43                        2.47                    2.53   2.61       2.37   24.28
10     Corporate politics with negative effect on project (Org2)     2.40                        2.47                    2.48   2.57       2.43   23.85

For the medium-performance projects, the ‘‘requirement’’, ‘‘project complexity’’ and ‘‘planning and control’’ risk dimensions are the factors that decrease project performance, because all of their values are above the baseline threshold (6.51). Hence, controlling these risks will help to improve project performance. For the low-performance projects, the major risk dimensions are ‘‘requirement’’, ‘‘planning and control’’ and ‘‘team’’, because all of their values are above the baseline threshold (9.22). Thus, continually monitoring these risk dimensions is a basic requirement for effective software risk management.

Further analysis of the data in Table 9 yields four important findings. Firstly, the ‘‘requirement’’ risk dimension of all three project performance clusters exceeds the baseline threshold. This means that successfully managing the ‘‘requirement’’ risk is a basic requirement for achieving the desired software project performance. Secondly, the ‘‘project complexity’’ risk dimension exceeds the baseline threshold only for the medium-performance projects; the other clusters do not show the same situation. Thirdly, only in the high-performance projects does the ‘‘planning and control’’ risk dimension not exceed the baseline threshold. This suggests that the medium- and low-performance projects need to carefully manage the ‘‘planning and control’’ risk so that they have a chance of improving their project performance. Lastly, only the ‘‘team’’ risk dimension in the low-performance projects exceeds the baseline threshold. This suggests that not addressing the risks associated with the development team well can negatively affect project performance.

4.4. A list of the top 10 risks

To determine the top 10 risks that lead to failure in software projects, the risk exposure of all the risks within the six dimensions was computed by multiplying the value of the probability of occurrence with the various degrees of impact. Table 10 presents the top 10 software risks and their associated probability of occurrence, degrees of the four various impacts, and risk exposure. Past studies (Boehm, 1991; Mursu et al., 2003; Wallace and Keil, 2004) emphasized the top 10 risks by simply listing them. The present study not only presents the top 10 software risks but also includes information about their probability of occurrence and degree of impact. In this way, project managers can have a better idea of how to manage these software risks.
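As an illustration, the ranking in Table 10 can be approximated with the short sketch below. The reported exposure values appear consistent with multiplying the probability of occurrence by the sum of the four impact ratings, and that convention is assumed here; only two risks are included as sample data, and small differences from Table 10 are due to rounding.

```python
# Sketch of ranking individual risks by exposure, as in Table 10, assuming
# exposure = probability x (sum of the four impact ratings).
import pandas as pd

risks = pd.DataFrame(
    {
        "probability": [3.13, 2.77],
        "technical_performance": [3.12, 2.73],
        "cost": [3.30, 2.90],
        "schedule": [3.27, 2.88],
        "team": [2.67, 2.46],
    },
    index=["Reqm1", "Reqm2"],
)

impact_cols = ["technical_performance", "cost", "schedule", "team"]
risks["risk_exposure"] = risks["probability"] * risks[impact_cols].sum(axis=1)
top10 = risks.sort_values("risk_exposure", ascending=False).head(10)
print(top10["risk_exposure"].round(2))   # about 38.69 for Reqm1, 30.39 for Reqm2
```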

5. Conclusion

Achieving effective software risk management requires project managers to understand the nature of software risks. Thus, information about the probability of occurrence and the impact of software risks on project performance can help project managers to develop a better risk management strategy.

This empirical study considered risk information on 115 software projects. The results indicate that a positive correlation does not exist between the probability of occurrence and impact among the six risk dimensions. The relationship between software risks and project performance was also examined in the high, medium, and low-performance projects. The results show that the ‘‘requirement’’ risk dimension is the principal factor affecting project performance. Aside from this, one of the ways to improve project performance is by properly planning the development activities and reducing the project complexity. Likewise, if the project manager is unable to effectively manage the requirements over the whole project life cycle, and does not properly plan and monitor the software risk management plan, the software project is likely to perform poorly.

The performance of software projects could also benefit from exploring the relationship between risk components and some important attributes of software projects, such as the type of software system and project duration. These are possible research topics for the future.

Appendix A

Selected parts of the questionnaire

To what degree do you believe the probability of occurrence and the four impacts of Team risks exist in your most recently completed software project? Please circle the response that best represents your judgment on the following scales, based on the DoD Risk Management Assessment Method provided in the attached sheet.


Inexperienced team members (Team1)


Probability of occurrence   Remote                 1   2   3   4   5   Near certainty
Technical performance       Minimal or no impact   1   2   3   4   5   Unacceptable
Cost                        Minimal or no impact   1   2   3   4   5   >10%
Schedule                    Minimal or no impact   1   2   3   4   5   Cannot achieve key team or major program milestone
Team                        None                   1   2   3   4   5   Unacceptable

Inadequately trained development team members (Team2)

 

Probability of occurrence   Remote                 1   2   3   4   5   Near certainty
Technical performance       Minimal or no impact   1   2   3   4   5   Unacceptable
Cost                        Minimal or no impact   1   2   3   4   5   >10%
Schedule                    Minimal or no impact   1   2   3   4   5   Cannot achieve key team or major program milestone
Team                        None                   1   2   3   4   5   Unacceptable

Team members lack specialized skills required by the project (Team3)

 

Probability of occurrence   Remote                 1   2   3   4   5   Near certainty
Technical performance       Minimal or no impact   1   2   3   4   5   Unacceptable
Cost                        Minimal or no impact   1   2   3   4   5   >10%
Schedule                    Minimal or no impact   1   2   3   4   5   Cannot achieve key team or major program milestone
Team                        None                   1   2   3   4   5   Unacceptable

To what degree do you believe the product and process performance measures exist in your most recently completed software project? Please circle the response that best represents your judgment on the following scales.

 

(1 = strongly disagree, 7 = strongly agree)

The application developed is reliable                                        1   2   3   4   5   6   7
The application is easy to maintain                                          1   2   3   4   5   6   7
The users perceive that the system meets intended functional requirements    1   2   3   4   5   6   7
The system meets user expectations with respect to response time             1   2   3   4   5   6   7
The overall quality of the developed application is high                     1   2   3   4   5   6   7
The system is completed within budget                                        1   2   3   4   5   6   7
The system is completed within schedule                                      1   2   3   4   5   6   7

References

Addison, T., 2003. E-commerce project development risks: evidence from a Delphi survey. International Journal of Information Management 23 (1), 25–40.
Armstrong, J., Overton, T., 1997. Estimating non-response bias in mail survey. Journal of Marketing Research 15, 396–402.
Barki, H., Rivard, S., Talbot, J., 1993. Toward an assessment of software development risk. Journal of Management Information Systems 10 (2), 203–225.
Boehm, B., 1991. Software risk management: principles and practices. IEEE Software 8 (1), 32–41.
Carney, D., Morris, E., Patrick, R., 2003. Identifying commercial off-the-shelf (COTS) product risks: the COTS usage risk evaluation. Technical Report, CMU/SEI-2003-TR-023.
Charette, R., 2005. Why software fails. IEEE Spectrum 42 (9), 42–49.
Chris, S., Christine, C., 2003. The State of IT Project Management in the UK 2002–2003. University of Oxford, Templeton College.
Cule, P., et al., 2000. Strategies for heading off IS project failure. Information Systems Management 17 (2), 65–73.
DeMarco, T., Lister, T., 2003. Waltzing with Bears: Managing Risk on Software Projects. Dorset House Publishing Company.
Dillman, D., et al., 2002. Influence of plain vs. fancy design on response rates for web surveys. Joint Statistical Meetings.
Hoffman, T., 1999. 85% of IT departments fail to meet business needs. Computerworld 33 (41), 24.
Houston, D., Mackulak, G., Collofello, J., 2001. Stochastic simulation of risk factor potential effects for software development risk management. Journal of Systems and Software 59 (3), 247–257.
Jiang, J., Klein, G., 2000. Software development risks to project effectiveness. The Journal of Systems and Software 52, 3–10.
Jiang, J., Klein, G., Means, T., 2000. Project risk impact on software development team performance. Project Management Journal 31 (4), 19–26.
Karolak, D., 1997. Software Engineering Risk Management. IEEE Computer Society.
Kliem, R., 2001. Risk management for business process reengineering projects. Information Systems Management 17 (4), 71–73.
Kumar, R., 2002. Managing risks in IT projects: an options perspective. Information and Management 40, 63–74.
Longstaff, T., et al., 2000. Are we forgetting the risks of information technology? IEEE Computer 33 (12), 43–51.
McConnell, S., 1996. Rapid Development: Taming Wild Software Schedules. Microsoft Press.
McFarlan, F., 1981. Portfolio approach to information systems. Harvard Business Review 59 (5), 142–150.
Mursu, A., et al., 2003. Identifying software project risks in Nigeria: an international comparative study. European Journal of Information Systems 12 (3), 182–194.
Murthi, S., 2002. Preventive risk management for software projects. IT Professional 4 (5), 9–15.
Nidumolu, S.R., 1996. Standardization, requirements uncertainty and software project performance. Information and Management 31 (3), 135–150.
Rai, A., Al-Hindi, H., 2000. The effects of development process modeling and task uncertainty on development quality performance. Information and Management 37 (6), 335–346.
Schmidt, R., et al., 2001. Identifying software project risks: an international Delphi study. Journal of Management Information Systems 17 (4), 5–36.
SPSS Inc., 2003. SPSS 12.0 User Guide and Tutorial.
Standish Group International, Inc., 2004. 2004 Third Quarter Research Report.
Sumner, M., 2000. Risk factors in enterprise-wide/ERP projects. Journal of Information Technology 15 (4), 317–327.
US Department of Defense, 2003. Risk Management Guide for DoD Acquisition, fifth ed. Defense Acquisition University, Defense Systems Management College.
Wallace, L., Keil, M., 2004. Software project risks and their effect on outcomes. Communications of the ACM 47 (4), 68–73.
Wallace, L., Keil, M., Arun, R., 2004. How software project risk affects project performance: an investigation of the dimensions of risk and an exploratory model. Decision Sciences 35 (2), 289–321.
Williams, R., et al., 1999. SRE Method Description & SRE Team Members Notebook, Version 2.0. Software Engineering Institute, Carnegie Mellon University, CMU/SEI-99-TR-029.