
Planning for and Mitigating Risk

September 1999

An effective technique to plan for and mitigate risk is constructing a spreadsheet of all the risks, the likelihood of their occurrence, and their potential consequences.
by Warren Keuffel
Risk. My American Heritage dictionary tells me that the primary definition of this four-letter word is "the possibility of suffering harm or loss; danger." Software project managers, like most engineers, tend to be optimists: "Yes, we can build this application. No problem, it will only take a little while longer." (We have all heard that promise too many times.) Measuring risk for software projects is an attempt to measure the probability that specified tasks will or will not be completed on specified dates, or that specified functionality will or will not be contained within the product. But because no project has ever run exactly as planned, every project carries with it some degree of risk. That risk must be managed if your organization wants to avoid a chaotic development model.
But risk, particularly to outsiders, is often the horse on the dining room table, as a former minister of mine once described it: a situation where everyone can see something out of the ordinary, yet no one is willing to bring up the subject. One reason for this avoidance, of course, is that acknowledging that developers even considered specified risks while building a failed application could make you liable during subsequent litigation. For this reason, and also because highlighting your failures usually is not perceived as a career-enhancing move, case studies of both successful and failed projects seldom discuss risk management. As a result, most available information on software development risk management, including this article, usually focuses on the writer's opinion of how you should manage risk.
Despite the paucity of gory details in most risk management discussions, there is enough
information about risk to draw some reasonable conclusions.
Risk Variables
I'll start with a macro and micro view of risk management. A macro view of risk management
attempts to measure the probability that specified tasks will or will not be completed on
specified dates, or that specified functionality will or will not be contained within the product
being constructed. It compares the project's potential benefits with the overall costs and risks
required to reach completion.
Then what is a micro view of risk management? When you break a project into its component
parts and identify each variable, you can consider the normal distribution range that each of
those variables may occupy. Note: the normal distribution range, not the possible range. An
example of the former might be the likelihood of your lead programmer leaving to work for a
competitor; the latter might be the likelihood that a falling safe lands on the head of said
programmer. Yes, the latter is a possible occurrence, but the former can be more accurately
specified.
Separating risks into macro and micro may seem like an obvious exercise, but the link
between them is not always additive. While each software project is different, there are almost
always variables common to all projects of the same size. Unless you capture the history of
those variables on an organization-wide basis, you can't take advantage of statistical
information that you can glean from examining several projects of the same size. But if you
capture those variables, you can evaluate the risks inherent in each. For example, one variable (safe falling on the head of your lead programmer) may demonstrate a wide range of values
while in another (the project lead leaving the project) the range of values may be packed quite
compactly. The latter variable is therefore more predictable, so there is less risk in estimating
what the value of that variable will be in a new project.
Other variables are unique to the current project, and you must evaluate them on an ad-hoc
basis. While it helps to have a baseline of the same or similar variables to compare with,
sometimes you must evaluate unknown variables as well. The essence of risk management is
determining what is unique about the project you're evaluating. It's just as important to realize that you can't remove all risk.
Some activities are inherently more risky than others. These activities include introducing new
technologies, assembling an unproven development team, and so on. The Software
Engineering Institute (http://www.sei.cmu.edu/) has devoted considerable attention to risk
management and has developed a checklist of about 100 different possible risk factors, each
of which you can evaluate to include in your current project's risk portfolio. As with many other checklist-based evaluations, it introduces an additional risk: overlooking a potential problem that doesn't fit neatly into one of the provided risk categories. For that reason,
although prepared lists of risks can be useful as a starting point for discussion, I would
encourage organizations to develop their own lists of perceived and actual risks. And as with
process improvement, a bottom-up approach almost always yields superior results.
Overrun Insurance
It's interesting to note a corollary to the bottom-up vs. top-down models of risk assessment.
As you go further up the food chain, the individual risks get toned down, often because the
individuals presenting the project to those with approval authority have a vested interest in
getting the project approved. As a consequence, the executives furthest from project
management are the least likely to have adequate risk data available to them, yet they are the
ones charged with the responsibility of deciding whether or not to proceed with the project.
One way top management can help manage risk is by establishing a cost-overrun insurance
policy. Just as homeowner's insurance spreads the risk of any one policyholder suffering a fire, a corporate risk insurance policy helps spread the inherent costs of risk among several projects.
several projects. If a given project manager consistently makes claims against the policy, that
project manager will probably get demoted, terminated, or reassigned to other duties.
Identifying Risks
Remember that Murphy was an optimist. It's too easy for project managers to construct their models hoping that the things that could go wrong won't for their project. But that is not a level-headed way of looking at risk. If you're going to fudge the risk factors, you might as well
channel your energy into completing the project and hope that divine intervention will guide
your team to success. But if you want to manage your risks fairly, there are some steps that
can help.
First, name your risks. One of the best ways to do this is to hold team sessions (inspections, if you will) to identify possible risks. Once you have a list of candidate risks, you should
evaluate each risk and assign it a weight representing its likelihood. Following that, you must
determine what will happen to the project if each risk becomes an actuality.
Given that its impossible to reduce the likelihood of all risks to zero, you must identify the
most significant risks in terms of severity, likelihood, and potential consequences. Once you
prioritize your risks, develop an action plan for dealing with each of the most important ones.
Finally, your team needs to plan actions to take if risk mitigation doesn't work.
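To make those steps concrete, here is a minimal sketch of what one entry in such a risk list might capture. The field names and the 0-to-1 likelihood scale are illustrative assumptions, not a prescribed format:

from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One named risk from a team identification session (fields are illustrative)."""
    name: str            # short description of the risk
    likelihood: float    # assigned weight for how likely it is, 0.0 to 1.0
    consequence: str     # what happens to the project if the risk becomes an actuality
    mitigation: str      # action plan for dealing with the risk
    contingency: str     # what to do if mitigation doesn't work

lead_departs = RiskEntry(
    name="Project lead leaves for a competitor",
    likelihood=0.2,
    consequence="Schedule slips while a replacement comes up to speed",
    mitigation="Cross-train a second team member on the lead's duties",
    contingency="Promote from within or hire an outside lead",
)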

The Change Constant


The biggest risk management mistake isn't failing to identify risks. It's failing to continually revise the risk management plan. A software development project's environment is constantly
changing, as a result of competitive pressures, organizational strategy and personnel changes,
and technical challenges. Too frequently, a risk management plan, like a system architecture
design, is prepared at the beginning of the project and then shelved. To be valuable, risk
management needs to be revisited as frequently as schedule and technical issues.
An effective tool I've found for facilitating this process is constructing a spreadsheet into which you can enter all risks. The results of the identification, likelihood, and consequence evaluations are entered into the spreadsheet. But don't get ahead of me here and assume that
the spreadsheet does the work.
One of the biggest risk analysis mistakes is confusing ordinal rankings with actual measurements. That is, if you assign a high, medium, or low ranking to a risk variable, it's inappropriate to construct a mathematical model of probability based on those rankings. A rank of medium cannot be interpreted as statistically significant. If, however, you assert that a function point count greater than 100 will trigger a large-project risk, you are using a measurement. Maybe a function point count of 100 translates into a medium-level risk, but you can't say a medium risk that your project will be canceled is equivalent to a medium risk that your project will be too big. All you can say is that each risk is greater than low and less than high.
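A small sketch of the distinction, with the function point cutoffs below chosen purely for illustration: the threshold test operates on a measurement, while the resulting labels are ordinal and support comparison but not arithmetic.

from enum import IntEnum

class Rank(IntEnum):
    """Ordinal rankings: ordered, but the gaps between values carry no meaning."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def size_risk(function_points: int) -> Rank:
    # Measurement-based trigger; the 100 and 50 cutoffs are assumed for illustration.
    if function_points > 100:
        return Rank.HIGH
    return Rank.MEDIUM if function_points > 50 else Rank.LOW

cancellation_risk = Rank.MEDIUM            # assigned by judgment, not measured
print(size_risk(120) > cancellation_risk)  # ordinal comparison is legitimate: True
# Averaging or summing these ranks as if they were probabilities is not.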
Another common mistake is confusing risk with uncertainty. Both may find a place in your spreadsheet, but risk is something you can usually calculate. Uncertainty, by definition, defeats any effort to assign a definite value to an uncertain variable.
Any rote or mechanical method of assessing risk is bound to fail. That's why a simple spreadsheet, serving as a record-keeping tool but not as an analytical tool, is sufficient when used with a conscientiously applied program of risk hygiene. What are the details of that program? Think bubbles. Think bubble sort, that inefficient but lovable sorting technique you learned in your first programming class. Think bubbling risks to the top, as in comparing two risks adjacent to each other in your spreadsheet. Compare the combined values given to severity, likelihood, and consequence for each, and decide which risk should be ranked above the other. Repeat until all risks are sorted. Then repeat again whenever you review your project's milestones.
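As a rough sketch of that bubble-to-the-top pass, assume each spreadsheet row has been given numeric severity, likelihood, and consequence scores and that a simple sum combines them; both the scores and the combine-by-sum rule here are illustrative assumptions, and in practice the comparison at each step is a judgment call the team makes rather than a formula.

def combined_score(risk):
    # Illustrative combination rule: the sum of the three evaluations.
    return risk["severity"] + risk["likelihood"] + risk["consequence"]

def bubble_risks(risks):
    """Bubble sort adjacent risks so the most pressing ones float to the top."""
    rows = list(risks)
    for i in range(len(rows) - 1):
        for j in range(len(rows) - 1 - i):
            # This comparison is where you decide which risk ranks above the other.
            if combined_score(rows[j]) < combined_score(rows[j + 1]):
                rows[j], rows[j + 1] = rows[j + 1], rows[j]
    return rows

risks = [
    {"name": "Lead programmer leaves",       "severity": 3, "likelihood": 2, "consequence": 3},
    {"name": "New technology underperforms", "severity": 2, "likelihood": 3, "consequence": 2},
    {"name": "Requirements churn",            "severity": 2, "likelihood": 3, "consequence": 3},
]
for risk in bubble_risks(risks):
    print(risk["name"], combined_score(risk))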
Regret Matrix
There's a related investment technique worth exploring as a method for understanding a project's risks. As Ron Dembo and Andrew Freeman describe in Seeing Tomorrow: Rewriting the Rules of Risk (John Wiley and Sons, 1998), instead of quantifying risk variables, change your focus to what the authors call regret. Don't confuse regret with disappointment. Regret is
dictated by your expectations: if you have high expectations that are not met, you will likely
have a high degree of regret. If, on the other hand, your expectations are not high, failure will
not produce as much regret.
Regret is not a new concept. It is a part of a discipline known as decision theory, which is
closely related to risk management. But how can you use regret to make decisions? By attaching probabilities to future events, which changes uncertainty into risk, you can calculate your net present benefit and use that as the basis for your decision. But, you may be asking, how can you attach probabilities to uncertainty? Here's an example:
Assume you're predicting what would happen to your project if your project lead were to leave. Create a matrix of four columns: zero, low, medium, and high, each representing an increasingly negative effect on the project. Enter three rows describing the actions you could
take in response to the event, ranging from weakest response to strongest: do nothing,
promote another team member to project lead, and hire another project lead from outside, as
shown in Table 1. In each of the 12 cells, assign a percent value to the cost you would suffer if
that intersection of column and row were to occur. For example, if there is zero effect on the
project if the lead departs, and you do nothing, then the cost is zero. If you do nothing, and
the effect is high, then your cost is 100%. If there is a low effect and you promote someone
from the team, the cost might be 10%. And so on until you fill the grid with these 12
percentages, representing the damaging effect on the project for each combination of severity
and action.
Table 1: Assumptions About Risk if Project Lead Quits

Cost Matrix            Zero   Low   Medium   High
Do nothing                0    20       50    100
Promote from within      10    20       35     60
Hire from outside        40    42       45     50

Using the cost matrix you just constructed, build a similar 12-cell regret matrix, as shown in
Table 2. The headings for the columns and the names of the rows remain the same. But for
each cell in the matrix, use the values entered to determine which policy (do nothing, promote
from team, and so on) produces the best result for each of the severity columns (zero, low,
and so forth).
Table 2: Assumptions About Risk if Project Lead Quits (with Regret)

Regret Matrix          Zero   Low   Medium   High
Do nothing                0     0       15     50
Promote from within      10     0        0     10
Hire from outside        40    22       10      0

Then, perform a key calculation to arrive at regret. Take the best result for each column and
subtract it from the other values in that column to arrive at a measure of regret. For example,
in row one, column one, where both values are zero, subtracting zero from zero gives you zero
regret. But look at the possible medium cost values for each of the actions. Assume that if you
do nothing, the cost is 50%; if you promote from within, the cost is 35%; and if you bring a
lead in from outside, the cost is 45%. Since 35% is the best result of the three, you would
assign a regret value of zero to the intersection of promoting a lead and medium cost. To
arrive at the regret value for the other two actions, subtract 35% from 50% to arrive at a
regret of 15% for doing nothing and subtract 35% from 45% to arrive at a regret of 10%.
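Here is a short sketch of that calculation, using the Table 1 costs as data and deriving the regrets of Table 2 by subtracting each column's best (lowest) cost from every entry in that column:

# Cost matrix from Table 1: percentage cost of each action under each severity of impact.
columns = ["zero", "low", "medium", "high"]
costs = {
    "do nothing":          [0, 20, 50, 100],
    "promote from within": [10, 20, 35, 60],
    "hire from outside":   [40, 42, 45, 50],
}

# Regret = cost minus the best (lowest) cost in that column.
best_per_column = [min(row[i] for row in costs.values()) for i in range(len(columns))]
regret = {
    action: [cost - best for cost, best in zip(row, best_per_column)]
    for action, row in costs.items()
}

for action, regrets in regret.items():
    print(f"{action:20s}", dict(zip(columns, regrets)))
# do nothing           {'zero': 0, 'low': 0, 'medium': 15, 'high': 50}
# promote from within  {'zero': 10, 'low': 0, 'medium': 0, 'high': 10}
# hire from outside    {'zero': 40, 'low': 22, 'medium': 10, 'high': 0}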
After completing the regret matrix, it's obvious which course of action produces the least regret for each cost factor. But more important, you can say with some assurance how much regret you would feel if you followed any particular course of action. Regret analysis is a powerful tool, but the authors don't stop there. They show you how to construct graphs that help you assess the results from two situations where the regret is the same but the effects on the project are very different.
Tools

Programmers are used to thinking that a tool can solve their problems, even problems that don't lend themselves to tool-based solutions. That doesn't stop people from trying, and it doesn't stop pundits from calling the tools silver bullets. That said, if you want a silver bullet, here are a couple of leads to help you find one. At the Software Program Managers Network web site
(http://spmn.com/), you can find a downloadable Access database called Risk Radar. If you
want to use COCOMO for estimating, a version that incorporates risk evaluation is available at
http://sunset.usc.edu/ (follow the Tools link). In the commercial-product arena, risk
management forms a natural fit with project management tools. Primavera's P3 is available
with an optional add-on for risk assessment called Monte Carlo. The company also offers
TeamPlay, a project management suite. Get the details at http://www.primavera.com/. ABT
Corp.'s Results Management Suite (http://www.abtcorp.com/) also includes a risk
management module.
A Final Note
Risk management is a complex and broad field. The kinds and severity of risk vary from
industry to industry and market sector to market sector. The risks facing a game software
company are significantly different from the risks facing a Department of Defense contractor.
The risks facing the group developing new software are different from the risks facing the
group maintaining legacy systems. That's why getting simple answers to complex questions about risk can be exceedingly difficult. The best source for risk data in your shop is historical data captured from previous projects. One source of guidance belongs on the shelf of any serious risk management practitioner: Robert Charette's Software Engineering Risk Analysis and Management (McGraw-Hill, 1989). Unfortunately, this title is out of print. Also useful is Capers Jones's Assessment and Control of Software Risks (Yourdon Press, 1994).
Understanding the language and concepts of risk is a good place to start deciding what kinds
of data you need to capture.
