
Decision Analysis in 2005

Lawrence D. Phillips London School of Economics & Political Science Catalyze Limited

Abstract

This state-of-the-art review finds decision analysis continuing to grow both geographically and in the scope of its activities. More of everything: courses, consultancies, conferences, software, books, even a new journal, Decision Analysis. The seven model types I first surveyed in my 1989 review have shifted in their frequency of use. Many practitioners begin decision modelling with relevance (influence) diagrams in preference to decision trees. As corporate responsibility for risk awareness becomes mandatory, analysing and managing risks has become a hot topic. Bayesian belief networks are becoming commonplace. Multi-criteria evaluation continues to grow, while multi-criteria prioritisation and resource allocation are increasingly popular. Only negotiation modelling based on utility theory fails as yet to live up to its promise. Current trends show an increasing use of decision analysis in clinical decision making, which has spawned its own literature. The use of decision analysis as a framework for clear thinking is evident in popular treatments of the discipline and in attempts to include decision analysis among the topics taught in schools. The public sector is showing increasing use of decision analysis on both sides of the Atlantic as an aid to appraising public policy options. Decision analysis seems to be moving from a largely technical discipline to a more socio-technical approach, which sees its use as a decision support system for creating aligned commitment within an organisation.

Introduction
Not for many years has the OR Society's annual conference seen a substantial stream of papers devoted to decision analysis, so it seems appropriate to mark this event in the UK with a review of the current status of this branch of operational research. Not that decision analysis has been ignored here; in 1982 an entire issue of JORS was devoted to decision analysis, and I reviewed its status in 1989. Most OR courses include at least a module on decision analysis, new practitioners are appearing, and established consultancies like Strategic Decisions Group in Richmond are flourishing. Software supporting decision analysis is now more readily available than it was even ten years ago, and many of us are members of the Decision Analysis Society, receiving and contributing to the new INFORMS journal, Decision Analysis. Overall, the discipline continues to grow, as evidenced by a 69% increase in the average number of decision analysis applications published per year from 1970-1989 to 1990-2001 (Keefer, Kirkwood, & Corner, 2004). However, that growth appears to be focussed in just a few journals published in the United States. The figures for the European Journal of OR, the Journal of the OR Society and Omega reveal reductions of 100, 91 and 44 percent, respectively, for the same period, raising a serious question about the future of decision analysis in Europe. Membership figures support the view that decision analysis is mainly practised in North America: 660 of the 815 members of the Decision Analysis Society (DAS) reside in the United States, while 155 reside outside the US. But these figures may not reveal the full extent of interest in decision analysis. For example, for many years over 20 academics at the University of Buenos Aires have been teaching decision theory to over 2,000 students each year. Perhaps the discipline is more widespread than is revealed by membership of the DAS or by publications, for most active practitioners do not publish at all.
The purpose of this review is to summarise the state of the discipline world-wide, and point out issues, future research directions and implications for practice.

Seven model types


My earlier review (Phillips, 1989) suggested that decision analysis is characterised by seven model types, distinguishable by their relative handling of uncertainty and multiple criteria. With little modification, the taxonomy is reproduced in Figure 1. Most readers will be familiar with payoff matrices and decision trees, the centrally-located model type, and many would add influence diagrams, though they are here labelled relevance diagrams, for reasons I will discuss below. To the right are shown three model types that focus mainly on multi-criteria decision analysis, first introduced by Keeney and Raiffa (1976) as an extension of decision theory, which until then placed the emphasis on decision trees and modelling uncertainty. The three model types on the left could be considered as extensions of decision trees and relevance diagrams, but I separated them in 1989 to reflect the distinctions becoming apparent in software products.

[Figure 1: Seven model types in decision analysis, arranged from problems dominated by uncertainty to problems dominated by multiple objectives. CHOOSE option: payoff matrix, decision tree, relevance diagram; EXTEND conversation: event tree, fault tree; REVISE opinion: Bayesian nets, Bayesian statistics; SEPARATE into components: credence decomposition, risk analysis; EVALUATE options: multi-criteria decision analysis; ALLOCATE resources: multi-criteria commons dilemma; NEGOTIATE: multi-criteria bargaining.]

Of course, uncertainty can be managed in multi-criteria models, just as multiple objectives can be accommodated in the models that focus on uncertainty. But it would be misleading to suggest that decision analysts have only one tool in their kit. The new software tools reflect the biases of their originators toward modelling uncertainty or multiple objectives, and also the type of problem, as indicated by the capitalised title for each model. This simple two-dimensional taxonomy aids understanding of distinctions that are helpful in matching a particular approach to a real-world problem. The seven sub-sections to follow briefly describe each type of model, with illustrative problems, often copied from the software appropriate to the model type. I also list software that can be used in implementing the model type, though the listings are not exhaustive. For more complete listings, the reader is referred to the periodic surveys of decision analysis software that appear in OR/MS Today, the most recent appearing in the October 2004 issue. I assume the reader is acquainted with the fundamental mathematical underpinning of decision theory, the expected utility model, with its reliance on the probability calculus and with multi-criteria utility and value calculus.

Choose option

Payoff matrices appear in most OR textbooks to illustrate a decision problem. Rows are choices, columns are outcomes, and cells show the consequences of a choice and the subsequently-realised outcome, expressed as a number. Table 1 shows a simple payoff matrix describing the consequences after one year of investing £1,000 in the stock market, say, in a fund that tracks the FTSE 250, or putting £1,000 into a savings account paying a guaranteed 5% return.

Table 1: Possible consequences of investing £1,000 in either the FTSE 250 or in a bank providing a guaranteed 5% return over the year.

Choice           Market rises   Market stalls   Market declines
Invest £1,000    £1,150         £1,000          £800
Bank £1,000      £1,050         £1,050          £1,050

If you find such an example in an OR textbook, it will usually describe the outcome columns as states of nature, a term that is guaranteed to puzzle students, who rightly complain that nature is only one relevant factor affecting the stock market. Decision analysts prefer the term outcome, reserving the term consequence for the result of an outcome following a choice: here, the financial payoff. A decision tree representation, using DPL software, is shown in Figure 2.
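Given the payoffs in Table 1, choosing by expected value requires only probabilities for the three outcomes. A minimal sketch, with illustrative probabilities that are my assumption rather than figures from the paper:

```python
# Payoffs from Table 1: the consequence of each choice under each outcome.
payoffs = {
    "invest": {"rises": 1150, "stalls": 1000, "declines": 800},
    "bank":   {"rises": 1050, "stalls": 1050, "declines": 1050},
}

# Illustrative subjective probabilities for the outcomes (an assumption).
p = {"rises": 0.5, "stalls": 0.3, "declines": 0.2}

def expected_payoff(choice):
    """Expected monetary value of a choice: sum of payoff times probability."""
    return sum(payoffs[choice][outcome] * prob for outcome, prob in p.items())

for choice in payoffs:
    print(choice, expected_payoff(choice))
```

With these probabilities the bank's guaranteed £1,050 beats the market's expected £1,035, so a risk-neutral decision maker would bank; different probabilities, or a utility function over money, could reverse the choice.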

Figure 2: A decision tree representation of the investment problem given in Table 1.

The square represents a choice node whose branches are under the control of the decision maker, while the circle represents an event node with branches showing the

uncertain outcomes of the decision. The triangle attached at the end of the Bank branch indicates a single consequence, while the unattached triangle following the Market branches indicates consequences that depend on which outcome occurs. From the late 1970s, another representation, called an influence diagram, gained popularity. Howard (2004) now suggests relevance diagram as a more precise descriptor, a view with which I have some sympathy, because influence suggests causality, which may not be the case for a particular problem. The relevance of A to B indicates only that knowing something about A affects knowledge about B, just as knowing the time on my watch is relevant to my knowing the time on your watch, though there is no suggestion of causality or influence. Three types of relevance are common. First, knowing which state A is in might change my uncertainty about the state of B; I am more certain a new product will be a success if I know that it will be the first to market, an example of knowledge that is relevant to my uncertainty expressed as a probability distribution. Second, knowing A might affect the value I assign to states of B; if I subsequently decide to market the product, then I know one component of subsequent profit, namely the cost of marketing: knowledge affects a value. Third, B may follow A in time sequence. Each of these is represented in a relevance diagram by an arc, or arrow, from A to B, creating a directed graph. Figure 3 shows a relevance diagram for the investment problem.

Figure 3: A relevance diagram representation of the investment problem given in Table 1.

Once again, decision nodes are represented by squares or rectangles, uncertain events by circles or ovals, and, here, consequences by rounded rectangles. Hidden from view are the choice alternatives, the outcomes of the uncertain event and the values associated with the consequences, though they can be displayed in tabular or tree form by a mouse click or two. Not displaying all those details emphasises the structure of the problem. Here, the lack of an arc from the choice node to the market

node shows that the choice does not affect the market. For large problems this parsimonious representation enables the decision maker to see the entire structure without becoming bogged down in detail. Furthermore, the decision tree representation grows exponentially as more nodes are entered, whereas the relevance diagram grows linearly. Thus, problems whose decision tree would cover a wall can be represented as a relevance diagram on a single A3 sheet. But there is a downside to influence diagrams: they imply a symmetrical decision tree. I first created the above influence diagram in DPL, which automatically draws the corresponding decision tree, and it placed the market event node at the end of the Bank branch. I had to modify the tree with a couple of mouse clicks and a drag to obtain the decision tree in Figure 2. Even for moderately-sized problems whose decision trees are asymmetrical, modifying the influence diagram can become tedious, depending on the functionality, or lack of it, in the software. I discovered another difficulty in assigning to my students a simple problem, Prescribed Fire, drawn from Clemen's textbook (Clemen, 1996). The students' task is to draw both an influence diagram and a decision tree for the problem. Nearly all students draw the same tree, but nearly every influence diagram is different. Many influence diagrams are perfectly satisfactory representations of the problem, and I find that much time can be taken up in class discussing those differences and explaining why fewer arcs are needed. The simple admonition to represent in a decision tree the order in which things will become known to the decision maker is much simpler than interpreting the notion of relevance. Research is needed to test the claim of champions of relevance diagrams that they are a simpler, more easily understood representation than decision trees.
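The contrast between the two growth rates is easy to make concrete: a symmetric tree over n chance events, each with two branches, has 2^n endpoints, while the relevance diagram gains just one node per event. A small sketch:

```python
def tree_endpoints(n_events, branches=2):
    """Endpoints of a symmetric decision tree over n chance events in
    sequence, each with the given number of branches: exponential growth."""
    return branches ** n_events

def diagram_nodes(n_events):
    """A relevance diagram adds one node per chance event: linear growth."""
    return n_events

for n in (5, 10, 20):
    print(f"{n} events: {tree_endpoints(n):>9,} tree endpoints "
          f"vs {diagram_nodes(n)} diagram nodes")
```

Twenty binary events already mean over a million tree endpoints, which is the wall-covering tree versus the single A3 sheet.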
Extend Conversation

For some types of problem, modelling uncertainty about an event E is best done indirectly by extending the conversation to include other events, such as F and G. Mathematically, the representation takes advantage of the multiplication and addition laws of probability. These two laws are used, along with their consequence, Bayes' theorem, in solving relevance diagrams. Two special cases are worthy of separate mention: event trees and fault trees. An event tree is simply a decision tree without any decisions; it represents only uncertain events. Probabilistic risk analyses are often of this type, representing combinations of all the imaginable events that could occur in the future about some undertaking. Fault trees begin with an initial system problem, then represent all the corrective actions or system events that can be taken to correct the fault. Both the tree and relevance diagram representations are possible, so the software for these model types can also be used for event and fault trees.

Revise Opinion

Bayesian belief networks (BBNs), or Bayesian nets, are another special case of relevance diagrams, in which the main purpose is to revise the probability about some event or uncertain quantity as data become available. As an example, consider the nasogastric tube problem: determining the location of a feeding tube inserted in the nose

of a patient, who might be a child or an unconscious adult. It is essential to know whether the tube has ended up in the lung or the gut before commencing feeding, for although more than 90% of the time the tube ends up in the stomach, on the few occasions when it has not, the resulting complications can end in the patient's death. Observing the acidity level of liquid extracted through the tube can reduce uncertainty about its location: if the tube is in the stomach, a low pH (high acidity) is most likely, and if in the lung, a high pH (high alkalinity) is usually obtained. If the tube is in the intestine, a high pH is most likely, but a moderate reading is also a possibility. A Bayesian belief network of the problem is shown in Figure 4, using Netica software.

Figure 4: A Bayesian belief network of the nasogastric tube placement problem.

The Location of tube rectangle shows the three possible locations with their prior probabilities. The pH rectangle shows the possible data, ranges of pH values. The probabilities shown there represent the probabilities of observing those pH levels; they are calculated by Netica from the input data, which consist of the prior probabilities and the following likelihoods, both taken from Metheny and Meert (2004).

Each row shows the probability of observing the data given the location of the tube (the likelihoods in Bayes' theorem); row totals therefore sum to 100%. When each column of likelihoods is multiplied by the prior probabilities associated with the locations, and the products summed, the results are the three probabilities shown in the pH rectangle, the probabilities of the data (the normalising constant in Bayes' theorem). If a pH of 7 or more is observed, then instantiating that datum by clicking on the bottom row in the pH box gives the result shown in Figure 5.
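The propagation Netica performs on instantiating a datum is just Bayes' theorem. A minimal sketch of the update, with priors roughly matching the text's figures but likelihoods that are illustrative assumptions, not the published values:

```python
# Prior probabilities of tube location (illustrative, roughly as in the text).
prior = {"stomach": 0.90, "intestine": 0.07, "lung": 0.03}

# Assumed likelihoods P(pH >= 7 | location); not the published figures.
likelihood = {"stomach": 0.05, "intestine": 0.30, "lung": 0.60}

# Normalising constant: the marginal probability of observing the datum.
p_datum = sum(prior[loc] * likelihood[loc] for loc in prior)

# Bayes' theorem: posterior is prior times likelihood, renormalised.
posterior = {loc: prior[loc] * likelihood[loc] / p_datum for loc in prior}

for loc, p in posterior.items():
    print(f"{loc}: {p:.2f}")
```

Even a datum far more likely under a lung placement than a stomach one leaves stomach the most probable location here, because the strong prior dominates; this is the same qualitative behaviour as in Figure 5.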

Figure 5: New, posterior probabilities, if a pH of 7 or more is obtained.

Now the probability of a lung location has increased from 3% to 12%, although stomach is still favoured. The single datum about pH is insufficient to overcome the high prior probability associated with a stomach location. By adding more data to the BBN, such as the appearance of the fluid withdrawn from the feeding tube, a more definitive result might be possible. Best policies for feeding tube placement could be developed by incorporating the uncertainty into a decision tree that also models the possible consequences of correct or incorrect placement. Over the past 10 years, the artificial intelligence community has recognised the value of BBNs for capturing the expertise and knowledge of specialists, revised and improved as more and more data about repeated situations become available. An example is the troubleshooter wizards in Microsoft products. A BBN lurks behind each troubleshooter, with the questions at any stage determined by the BBN to be the best ones for solving the problem given the data to that point. Several academic groups in the UK are pursuing questions relating to the assessment of probabilities, the nature of evidence and the role of models such as BBNs in improving the inferences that can be drawn from fallible data. Examples include the Bayesian Expert Elicitation Project (BEEP) at the University of Sheffield, the Evidence, Inference and Enquiry project at University College London, and the Evidence Seminars at the London School of Economics. Recent world terrorism events have increased interest in BBNs within the intelligence community.

Separate into Components

Rex Brown once coined the term credence decomposition to describe situations in which uncertainty about a target event or uncertain quantity can be modelled by writing the target as a function of other events and uncertain quantities, assessing probabilities for the components, and using Monte Carlo analysis to determine the resulting probability distribution for the target.
We now know this as risk analysis, as carried out by software such as @Risk or Crystal Ball. Our OR colleagues call it simulation. It is another version of extending the conversation, or of relevance diagrams, but I've given it a separate section mainly because the software is different. Imagine the relevance diagram shown in Figure 6 for deciding whether or not to launch a new product. Profits depend on the difference between revenues and costs, while revenues depend on the size of the market and the company's share of the

market. These uncertain quantities, which statisticians call random variables (a term that, as Howard (2004) points out, is misleading to clients, because the quantities are neither random nor variable), can be written in an equation with profit a function of the other quantities, assuming the product is launched. Implemented in a spreadsheet, the cells containing the best-guess quantities are replaced with probability distributions, and the @Risk or Crystal Ball software sitting in the spreadsheet conducts a Monte Carlo analysis to determine the probability distribution associated with the Profit uncertain quantity. The result, using @Risk in Excel, might look like that shown in Figure 7.
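What the risk analysis add-ins do can be sketched directly: replace each best-guess cell with a distribution, draw repeatedly, and inspect the resulting distribution of profit. The distributions and parameters below are illustrative assumptions, not figures from the example:

```python
import random

random.seed(1)  # fix the seed so the sketch is reproducible

def simulate_profit():
    """One Monte Carlo draw of profit = market size x share x price - costs."""
    market_size = random.lognormvariate(11, 0.4)   # units sold; assumed
    share = random.betavariate(2, 8)               # company's share; assumed
    unit_price = 10.0                              # assumed fixed
    costs = random.normalvariate(120_000, 30_000)  # assumed
    return market_size * share * unit_price - costs

draws = [simulate_profit() for _ in range(10_000)]
p_loss = sum(d < 0 for d in draws) / len(draws)
print(f"P(loss) ~ {p_loss:.2f}")
```

A histogram of the draws reproduces the sort of risk profile shown in Figure 7, and the fraction of negative draws estimates the chance of making a loss.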

Figure 6: A relevance diagram for the possible launch of a new product.

The distribution for net cash flow, representing profit, shows roughly a 35% chance of making a loss, suggesting there is a fair amount of risk in launching the new product. In my experience, risk analysis by itself often gives rise to unfavourable probability distributions on the target uncertain quantity. The decision maker then considers how the risk could be reduced. The risk analysis turns into a decision analysis, so it would have been easier to use decision analysis software from the start. The best decision analysis software does everything that can be done with risk analysis software, with the added advantage of relevance diagram or decision tree representations, which provide a structured way of incorporating initial decisions and decisions subsequent to receiving further information (options analysis), thereby allowing the decision maker to compare the resulting risk profiles conditional on the decisions.

Evaluate Options

Decisions are taken to create value, so perhaps it would be wise to focus on those values, whether they are financial or non-financial, and explore how decisions can be more or less effective in creating value. Decision analysis traditionally started with alternatives, but Keeney (1992) argued for a shift to modelling value. As a result of this new emphasis, multi-criteria or multi-attribute modelling has seen a substantial rise over the past ten years, particularly in the public sector, in the military, and in

Figure 7: The probability distribution for Profit, expressed as net cash flow, for the new product launch problem, assuming the product is launched.

strategic management. Publication in the UK of Multi-Criteria Analysis: A Manual (Dodgson, Spackman, Pearman, & Phillips, 2000) was motivated by the recognition that the Treasury's Green Book (HM Treasury, 2003), which shows how cost-benefit analysis should be carried out for major central government projects, could not deal adequately with criteria that resist being turned into monetary values, a conversion that often involves hidden or dubious assumptions. The manual proposed using multi-criteria decision analysis (MCDA) (Keeney & Raiffa, 1976), and this has now been picked up by several UK government departments. One example can be found at the website of the Committee on Radioactive Waste Management (www.corwm.org). CoRWM is using MCDA to appraise thirteen options for dealing with the UK's high- and medium-level radioactive waste, on 26 criteria that differentiate the options in important ways. The MCDA will then form one input into CoRWM's recommendations to government in 2006. A similar exercise has already been carried out in Canada (NWMO Assessment Team, 2004) and is now awaiting public response to its recommendations. In the private sector, MCDA is often applied on projects where the conflict of objectives prevents agreement amongst key players about the best way forward. The main conflict is typically between costs and benefits, or risks and benefits, with the more beneficial alternatives characterised by higher costs and risks. As an example, a financial services organisation in the UK wished to introduce a new e-commerce service. They hired specialist consultants to develop alternatives, but were unable to agree which of the proposed three, A, B or C, would be best, though the four consultants and six senior executives from the organisation favoured alternative C.

An exploration of the organisation's objectives for the e-commerce business led to the value tree shown in Figure 8. Costs included both monetary values and five risk criteria, while the benefit criteria were clustered under four main objectives: to be financially rewarding, to establish and maintain leadership in the particular business, to be doable, and to be aligned with both UK and global strategy.

Figure 8: Value tree for the e-commerce business case.

The value tree represents a mixture of fundamental and means objectives, rather than just fundamental objectives, which I find is typical of MCDA problems, because information resides in the means objectives that does not seem to appear in the fundamental objectives. This is particularly the case when the fundamental objectives are financial in nature, for measures of profit, economic value or net present value seem not to accommodate adequately the important information residing in the means objectives that helps to distinguish the alternatives, a theme that will arise again in a later section. Another characteristic of the value tree is that it accommodates risk as criteria rather than as probabilities. Here risk is seen as concern over whether the project overall is doable and sustainable, how quickly the service can be brought to market, how adaptable and flexible it will be to subsequent changes in circumstances, and whether it will encounter regulatory problems. Each of these defined a criterion on which the alternatives could be judged by the group using a simple relative scale, with the best alternative scoring 100, the least preferred zero, and the third somewhere between

those limits. Note, too, that doability appears again on the benefits side, but here it was concerned with the ease of branding and marketing, and of managing the alliance with another partner, making it preference independent of the doability risk, which was about the management effort, apart from the benefit aspects, required to make the venture work. After scoring the alternatives on the criteria and weighting the criteria to ensure the comparability of the units of value from one criterion to the next, the overall results were displayed in two-dimensional space, overall benefits versus overall costs, as shown in Figure 9. Note that the x-axis represents preference for costs, so lower costs are to the right and higher costs to the left. Alternatives A and C are on the efficient frontier, with C preferred overall only if much more weight is given to costs than to benefits. Clearly, option A is overall the best option, a shift from the majority opinion at the start of the analysis. Extensive sensitivity analyses supported the superiority of option A, which became the recommendation to the Board.

Figure 9: Overall benefits versus overall costs for the e-commerce problem.

Unfortunately, the global Board did not feel that e-commerce businesses were consistent with the direction of the organisation, so the proposition was rejected. It appeared that the local managers had misjudged the strategic implications of the proposed e-commerce offering for the international business, though they had recognised in one of the Strategic Alignment criteria that global approval would be required.

Allocate Resources

Perhaps the most useful MCDA model helps organisations to overcome a problem faced by all of them: how to create portfolios of projects or programmes that make the best use of limited resources. OR practitioners will recognise the problem: even though individual projects or programmes are optimising the local resource, the collective result is far from optimal.
This is the consequence of failing to come to grips with trade-offs between the projects and programmes, a particularly serious problem when cut-backs are required and managers impose equal misery on all parts of the organisation, such as requiring every department to sustain a 10% cut.

MCDA modelling, allied to decision conferencing, as discussed below, attempts to overcome this problem. In OR, the problem is known as the knapsack problem: finding the most beneficial combination of items to be placed in a back-pack whose volume is necessarily limited, typically approached with mathematical programming. In the MCDA solution, very simple mathematics is used, along with a structural representation of the problem that imposes just a few realistic constraints (Phillips & Bana e Costa, 2005). The solution is based on the recognition, found in cost-benefit analysis and in textbooks of corporate finance (Brealey, Myers, & Marcus, 1995), that when budgets are limited, resources should be allocated according to a priority index formed by the ratio of a project's benefits to its costs. More accurately, the benefits should really be expected (probability-weighted) benefits, as represented by the prioritisation triangle, whose vertical side is the risk-adjusted benefit, whose horizontal side is the cost, and whose slope is therefore the value for money. Benefits then become the weighted average of various criteria, typically assuming mutual preference independence so that an additive representation can be applied. A risk criterion, defined as the probability of realising the benefits and incorporated into the additive benefit model using a proper scoring rule (Bernardo & Smith, 1994), provides the risk-adjustment. Each project team scores the projects within their budget category against criteria established by senior management. The scores across all areas are reviewed by a separate panel of honest brokers to ensure realism and consistency, and senior managers assess the swing weights that equate the units of value from one project to the next. To assess those weights, managers are required to make judgements of trade-offs between budget categories, the first time that they have been provided with a structured way of thinking about the trade-offs.
With scores and weights complete, computer programs such as Equity compute prioritisation triangles for all projects, stacking the triangles in order of decreasing slope, giving an efficient frontier of cumulative benefits versus cumulative costs, as shown in Figure 10.
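The logic just described, weighted additive benefits, a risk adjustment, and triangles stacked in decreasing order of slope, can be sketched in a few lines. All weights, scores, probabilities and costs below are invented for illustration, not taken from any Equity model:

```python
# Swing weights for three benefit criteria, and for each project its
# 0-100 criterion scores, probability of realising the benefits, and cost.
weights = [0.5, 0.3, 0.2]
projects = {
    "A": ([70, 50, 60], 0.80, 20),
    "B": ([40, 80, 50], 0.90, 30),
    "C": ([90, 60, 80], 0.70, 16),
    "D": ([30, 40, 30], 0.95, 30),
    "E": ([60, 70, 40], 0.60, 40),
}

def risk_adjusted_benefit(scores, p_success):
    """Additive value model: weighted sum of scores, times the probability
    of realising the benefits (the risk adjustment)."""
    return p_success * sum(w * s for w, s in zip(weights, scores))

def slope(item):
    """Priority index: slope of the prioritisation triangle (benefit/cost)."""
    scores, p, cost = item[1]
    return risk_adjusted_benefit(scores, p) / cost

# Stack triangles in decreasing order of slope to trace the efficient
# frontier of cumulative benefit versus cumulative cost.
ranked = sorted(projects.items(), key=slope, reverse=True)
frontier, cum_benefit, cum_cost = [], 0.0, 0
for name, (scores, p, cost) in ranked:
    cum_benefit += risk_adjusted_benefit(scores, p)
    cum_cost += cost
    frontier.append((name, cum_cost, round(cum_benefit, 1)))

print(frontier)
```

Reading along the frontier gives the best attainable cumulative benefit at each level of spend; a portfolio that mixes high- and low-slope projects, like the current portfolio P in the figure, typically sits inside this curve.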

The figure shows the triangles from 22 projects, some of which are in the current portfolio, and some being new, proposed projects. The location of the current portfolio is shown at point P; it could only be at point B if all the projects to the left of B were the current projects and those beyond B were new projects. But, inevitably, new and current projects are intermingled on the efficient frontier, forcing the current portfolio to be inside the frontier. Moving to point B would require giving up some current projects, which would entail a potential loss of benefit, and using the freed resource to fund new projects, which are expected to more than recover the lost benefit. By reallocating the existing resource from projects with lesser potential to projects with greater potential, added benefit is created.

Figure 10: An efficient frontier of projects, showing the overall risk-adjusted benefits and costs of the current portfolio of projects, P; a better portfolio, B, at the same cost; and a less costly portfolio, C, with the same benefits as P.

Of course, it may not be possible to reallocate the existing resource, in which case the MCDA model is used to show priorities, with the decision maker using the model to find a solution that is closer to the efficient frontier. The remaining gap then provides guidance about strategies to be followed in the future to close the gap. Organisations using this model on an annual basis have managed to improve the efficient frontier by eliminating poor value-for-money projects from the portfolio and by shifting resources to the projects with greater opportunities. Subsequent improvements in value can be substantial, with average increases of about 30%.

Negotiate

The MCDA negotiation model, as I reported in my 1989 review, was used extensively by an American consulting company, Decisions and Designs, Inc., for several years in the late 1970s and early 1980s. Since then, despite further developments in the theoretical and practical underpinnings by Howard Raiffa and his colleagues (Raiffa, 1982; Raiffa, Richardson, & Metcalfe, 2002), little of practical import has been reported in the literature. This seems a pity; the potential for improving negotiations between groups whose objectives are in partial conflict seems great. Perhaps the gap between theory and application requires further research.

Trends
Where is decision analysis heading? In Europe, I'm not so sure, but growth in North America seems assured. Are Europeans suffering from a Not Invented Here syndrome? The European-based Journal of Multi-Criteria Decision Analysis, which embraces a broader church of multi-criteria approaches than Keeney-Raiffa MCDA, suggests a greater interest in multi-criteria analysis on this side of the Atlantic (Belton & Stewart, 2002). But the crystal ball is cloudy. Some trends are clear. The medical profession has long embraced decision analysis, as evidenced by the Society for Medical Decision Making, its annual conference, and its associated journal, Medical Decision Making. The US-based Decision Education Foundation aims initially to bring decision analysis into the high school curriculum, but its broader objective is to bring decision education into prominence globally as equally important to literacy and numeracy. Accessible treatments of decision analysis such as Hammond, Keeney, & Raiffa (1999) support what may emerge as a trend to educate people about how to make smart decisions. Other trends seem fairly obvious, so despite my general scepticism about predicting the future, I think it is safe to identify three: growth in software, increasing use of group-oriented modelling approaches, and greater emphasis on value-focussed thinking and MCDA. The three trends are related.

Software

Five classes of software are now evident. First, decision tree software, such as DPL, Data and Precision Tree, has expanded to include influence diagrams, with Analytica dispensing altogether with decision trees. Hotlinks using Microsoft's DDE or OLE can be created with DPL or Analytica, enabling those programs to communicate with a spreadsheet, reading it and creating an influence diagram automatically. Precision Tree sits in Excel. Second, Bayesian belief networks created using Analytica, Hugin or Netica differ considerably in their front-end displays.
I'm partial to Netica's graphic display of bar graphs: clicking on an event state to instantiate it shows how uncertainty propagates throughout the entire model, which is particularly informative for work with groups. Third, risk analysis software like @Risk or Crystal Ball, and, yes, Analytica once again, are learned relatively quickly, and all provide either Monte Carlo or Latin Hypercube sampling as options. Unlike DPL, whose tornado diagrams are generated on a deterministic model to see which uncertain events or quantities should be turned into chance nodes, tornado diagrams in risk analysis software are generated, if at all, and less usefully, after the probability distributions have been input.
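The Monte Carlo simulation these risk-analysis tools perform, and the deterministic sweep behind a DPL-style tornado diagram, can both be sketched briefly. The profit model and every distribution parameter below are invented; real tools add Latin Hypercube sampling, correlations and richer distribution families.

```python
import random

# Illustrative Monte Carlo risk analysis of a simple profit model,
# in the spirit of @Risk or Crystal Ball; the model and all
# distribution parameters are invented.

def profit(price, volume, unit_cost):
    return (price - unit_cost) * volume

def simulate(n=10_000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        price = rng.triangular(8.0, 12.0, 10.0)   # low, high, mode
        volume = rng.triangular(900, 1500, 1100)
        unit_cost = rng.triangular(5.0, 8.0, 6.0)
        samples.append(profit(price, volume, unit_cost))
    samples.sort()
    return samples

samples = simulate()
mean = sum(samples) / len(samples)
p10, p90 = samples[len(samples) // 10], samples[9 * len(samples) // 10]
print(f"mean {mean:,.0f}, 10th pct {p10:,.0f}, 90th pct {p90:,.0f}")

# Deterministic "tornado" swings: vary one input across its range
# while holding the others at their base values, before any
# probability distributions are assessed.
base = dict(price=10.0, volume=1100, unit_cost=6.0)
ranges = dict(price=(8.0, 12.0), volume=(900, 1500), unit_cost=(5.0, 8.0))
for var, (lo, hi) in ranges.items():
    swing = sorted((profit(**{**base, var: lo}), profit(**{**base, var: hi})))
    print(var, swing)
```

Sorting the inputs by the width of their swings gives the tornado's characteristic shape, and identifies which uncertainties deserve to become chance nodes.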

Fourth, MCDA software for appraising and evaluating options, such as Logical Decisions, VISA, On Balance and Hiview3, provides rather different features, reflecting the different perspectives on MCDA of the programs' originators. For example, Logical Decisions includes an Analytic Hierarchy Process capability, VISA provides dynamic visual displays of sensitivity analyses, and Hiview3 has recently added M-Macbeth (Bana e Costa & Vansnick, 1999) to enable an MCDA model to be built with entirely qualitative judgements.

Fifth, MCDA software for resource allocation, such as Logical Decisions, Equity3 and High Priority, provides different structures and computational algorithms for creating an efficient frontier of the best options for a given level of resource. Equity provides for the options to be organised in different areas, and handles either mutually exclusive options within an area or cumulative options. High Priority models interactions between options. Both types of software require training to use confidently; they are more complex than MCDA software for appraisal and evaluation.

Most decision analytic software is designed to be used for back-room modelling. As use in group settings grows, it will become necessary to modify the software to make projected displays more readable. Too often current software does not provide for re-sizing windows, or for changing fonts and font sizes on input or output displays. Displays are often cluttered, with unnecessary boxes around every scrap of text, failing to heed Edward Tufte's admonition that ink should be reserved for data (Tufte, 2001). Worse still, little account seems to be taken of users' mental models; the software seems only to be consistent with the mental models of the designers, not with those of experienced decision analysts.
Research is badly needed on the mental models users bring to the software tools, so that the programs are designed to suit the users rather than the users having to learn the arcane structures that were in the heads of the designers.

Decision conferencing

A division of Westinghouse Corporation accidentally created the first decision conference in 1979. An entire Westinghouse team showed up at Decisions and Designs, Inc. (DDI), for a meeting at which only a few senior managers had been expected. By the end of the two-day meeting, the entire team had bought in to the recommendations that emerged from the decision analytic modelling. Dr Cameron Peterson, who had facilitated the meeting, realised that the model had provided structure to participants' thinking and enabled them to engage in a quality conversation about the issues, resulting in aligned commitment to the way forward. Extensive data collection and complex back-room modelling were not required; everything needed to resolve the issues was in the heads of participants. The MCDA model served to bring together disparate points of view, to test the sensitivity of results to imprecision in the input data and to differences of opinion, to reveal features of the problem that couldn't be seen in the detail, and to enable participants to generate new insights.

When I heard about decision conferences in 1981, I could see the potential for applying my understanding of group processes, which I had studied for some years at the Tavistock Institute of Human Relations, to decision analytic modelling. I immediately shifted from the more traditional doctor-patient role of the consultant, who diagnoses the client's problem, engages in some modelling and prescribes the solution, to a process consultancy role, in which the consultant engages the client in a helping relationship, together identifying the issues, using modelling to organise thinking about the issues, and finding solutions that are owned by the client (Schein, 1999). As interest in decision conferencing grew, I started the International Decision Conferencing Forum in 1989, an organisation of decision conferencing practitioners and clients, which meets about once a year. Thousands of decision conferences have now been conducted in over 15 countries.

Decision conference facilitators realised some years ago that many problems are too complex to be dealt with in just two days, and that sustained working with a client over a longer period is required. Decision conferencing describes that engagement, which may require a combination of interviews, workshops, decision conferences and other activities to complete an assignment. In general, a trend toward working with teams of key players, rather than individual decision makers, is evident, not just in decision conferencing, but in other decision analytic activities, such as the Dialogue Decision Process used by Strategic Decisions Group. Working with clients rather than for them is the common theme. The role of decision analytic modelling changes from achieving better decisions to creating shared understanding amongst key players, developing a sense of common purpose, and gaining commitment to the way forward.

MCDA and Value-Focussed Thinking

A final trend is identified in the survey by Keefer et al. (2004): increasing application of value-focussed thinking in a variety of applications. This trend reflects a shift from modelling decisions and the uncertainties encountered along the way to achieving financial consequences, to modelling the multiple objectives characterising the consequences of decisions, with risk and uncertainty handled more flexibly as confidence or risk criteria, as proper scoring rules, as scenarios, or in sensitivity analysis. No longer is shareholder value or some other measure of financial consequences seen as the sole criterion. Instead, means objectives are seen as more directly achievable and under the control of decision makers; financial consequences are viewed as flowing from achieving the means objectives. Decision makers drive their decisions to achieve the means objectives, and use the financial objectives (in for-profit organisations) to keep score on how well they are doing. This perspective is consistent with the finding that visionary organisations drive their decisions on their core values, not on expected financial results, and as a result achieve higher shareholder value than those companies that drive their decisions to realise financial value (Collins & Porras, 1996; Kay, 2004). The trend for decision analysis is a shift to multi-criteria decision analysis, helping decision makers to deal with conflicting objectives, which are often more troubling than difficulties in managing uncertainty.
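At the heart of this trend sits the weighted additive value model of the Keeney-Raiffa tradition: score each option on each criterion, weight the criteria, and sum. A minimal sketch follows; the criteria, weights and 0-100 scores are all invented for illustration.

```python
# Sketch of the weighted additive value model underlying Keeney-Raiffa
# MCDA appraisal; criteria, weights and 0-100 scores are invented.

weights = {"benefit": 0.5, "risk": 0.2, "cost": 0.3}   # sum to 1

options = {
    "Option A": {"benefit": 80, "risk": 60, "cost": 40},
    "Option B": {"benefit": 60, "risk": 90, "cost": 70},
}

def overall_value(scores):
    """Weighted sum of 0-100 preference scores. Higher is better:
    the 'risk' and 'cost' scores are assumed already converted to
    preference values, so 100 is the most preferred level."""
    return sum(weights[c] * s for c, s in scores.items())

for name, scores in options.items():
    print(name, overall_value(scores))
```

Sensitivity analysis then amounts to varying the weights and watching whether the ordering of the overall values changes, which is exactly the kind of exploration packages like Hiview3 or VISA make visual.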

References
Bana e Costa, C. A., & Vansnick, J.-C. (1999). The Macbeth approach: basic ideas, software and application. In N. Meskens & M. Roubens (Eds.), Advances in Decision Analysis (pp. 131-157). Dordrecht: Kluwer Academic Publishers.
Belton, V., & Stewart, T. J. (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Boston/Dordrecht/London: Kluwer Academic Publishers.
Bernardo, J. M., & Smith, A. F. M. (1994). Bayesian Theory. Chichester: John Wiley & Sons.
Brealey, R. A., Myers, S. C., & Marcus, A. J. (1995). Fundamentals of Corporate Finance. New York: McGraw Hill.
Clemen, R. T. (1996). Making Hard Decisions: An Introduction to Decision Analysis (2nd ed.). Belmont, CA: Duxbury Press.
Collins, J. C., & Porras, J. I. (1996). Built to Last: Successful Habits of Visionary Companies. London: Century Limited.
Dodgson, J., Spackman, M., Pearman, A., & Phillips, L. (2000). Multi-Criteria Analysis: A Manual. London: Department of the Environment, Transport and the Regions.
Hammond, J. S., Keeney, R. L., & Raiffa, H. (1999). Smart Choices: A Practical Guide to Making Better Decisions. Boston, MA: Harvard University Press.
HM Treasury. (2003). The Green Book: Appraisal and Evaluation in Central Government. London: The Stationery Office.
Howard, R. A. (2004). Speaking of decisions: Precise decision language. Decision Analysis, 1(2), 71-78.
Kay, J. (2004, 17 January). Forget how the crow flies. Financial Times Magazine, pp. 17-21.
Keefer, D. L., Kirkwood, C. W., & Corner, J. L. (2004). Perspective on decision analysis applications, 1990-2001. Decision Analysis, 1(1), 4-22.
Keeney, R. L. (1992). Value-Focused Thinking: A Path to Creative Decisionmaking. Cambridge, MA: Harvard University Press.
Keeney, R. L., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: John Wiley.
Metheny, N. A., & Meert, K. L. (2004). Monitoring feeding tube placement. Nutrition in Clinical Practice, 19, 487-495.
NWMO Assessment Team. (2004). Assessing the Options: Future Management of Used Nuclear Fuel in Canada. NWMO.
Phillips, L. D. (1989). Decision analysis in the 1990s. Birmingham: The Operational Research Society.
Phillips, L. D., & Bana e Costa, C. (2005). Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Annals of Operations Research, accepted.
Raiffa, H. (1982). The Art and Science of Negotiation. Cambridge, MA: The Belknap Press of Harvard University Press.
Raiffa, H., Richardson, J., & Metcalfe, D. (2002). The Science and Art of Collaborative Decision Making. Cambridge, MA: The Belknap Press of Harvard University Press.
Schein, E. H. (1999). Process Consultation Revisited: Building the Helping Relationship. Reading, MA: Addison-Wesley.
Tufte, E. R. (2001). The Visual Display of Quantitative Information (2nd ed.). Cheshire, CT: The Graphics Press.
