
Ann Oper Res (2012) 195:163–187

DOI 10.1007/s10479-011-0945-9
An introduction and survey of the evidential reasoning
approach for multiple criteria decision analysis
Dong-Ling Xu
Published online: 6 September 2011
© Springer Science+Business Media, LLC 2011
Abstract The Evidential Reasoning (ER) approach is a general approach for analyzing
multiple criteria decision problems under various types of uncertainty using a unified
framework, the belief structure. In this paper, the ER approach is surveyed from two aspects:
theoretical development and applications. After a brief account of its development and
extension over a twenty-year period, the ER approach is outlined with a focus on the links
among its various developments. Future research directions in the area are also explored in
the survey.
Keywords Multiple criteria decision analysis · Uncertainty · Evidential reasoning ·
Evidence theory · Belief structure · Belief decision matrix
1 Introduction
The Evidential Reasoning (ER) approach is a general approach for analyzing multiple
criteria decision making (MCDM) problems under uncertainties. Traditionally, MCDM
problems are represented or modeled by decision matrices, including the pairwise
comparison matrices used in AHP (Saaty 1988; Farkas and Rózsa 2001), in which exact
numbers without uncertainties are frequently used as elements and are incapable of
explicitly modeling uncertainties such as ignorance. The subsequent outcomes from
analyses based on such models appear to be free of uncertainties, which could be
misleading to the inexperienced. Even for the experienced, although further sensitivity
analysis can be carried out to reveal some of the effects of uncertainties that were not
modeled in the first place, the anchoring effects (Bazerman 2005) of the outcomes could be
significant and lead to biased decisions. At the same time, sensitivity analysis is far from
ideal for identifying the combined effects of the various types of uncertainty that often
co-exist in a decision making problem.
D.-L. Xu ()
Manchester Business School, The University of Manchester, Manchester M15 6PB, UK
e-mail: L.Xu@mbs.ac.uk
The ER approach is developed on the basis of Dempster-Shafer evidence theory (Shafer
1976) and decision theory. By introducing the concepts of belief structure (Yang and Xu
2002a; Yang and Singh 1994; Zhang et al. 1989) and belief decision matrix (Xu and Yang
2003), for the first time it becomes possible to model uncertainties of various natures
in a unified format for further analysis without resorting to sensitivity analysis.
This initiative provides a new avenue for exploring how various types of uncertainty
can be handled in an integrated way. Since the introduction of the modeling technique using
belief structure in 1989 (Zhang et al. 1989) and the development of the ER approach in 1994
(Yang and Singh 1994), a significant amount of work in this area has emerged in the
literature, including:
– the extension of the modelling technique to model other types of uncertainty, such as
interval uncertainties (Fan and Deer 2008; Xu et al. 2006b), fuzzy uncertainties (Yang et
al. 2006b) and uncertainties in other parameters of a decision problem such as criterion
weights and belief degrees (Guo et al. 2007);
– the use of the belief structure to extend traditional rule based expert systems to belief
rule based expert systems, which not only allows uncertainties to be explicitly modelled
in rules, but also equips a belief rule based system with the capability to learn and model
complex causal relationships (Yang et al. 2006a; Xu et al. 2007);
– applications of the ER approach to decision problems in various areas (more about this in
Sect. 2.4);
– incorporation of the ER algorithm in various decision support tools, such as IDS
(Intelligent Decision System¹) (Xu and Yang 2003, 2005), a web based assessment tool
(Xu 2010) and groupware (Iourinski and Ramalingam 2005); and
– comparison studies of the approach with other approaches such as AHP (Saaty 1988) and
neural networks (Xu and Yang 2001; Wang and Elhag 2007).
This survey aims to provide readers with a holistic view of the development of the ER
approach and its related research in theory and applications so far, and to explore future
research directions in the area. As a result of this exploration, six indicative directions
are suggested, which may inspire readers to identify further areas for exploration.
The survey is organized as follows. In Sect. 2, the development stages of the ER
approach and the composition of the approach are outlined, including problem modeling
with belief structures and belief decision matrices, the ER algorithm (Yang and Xu 2002a,
2002b), the information transformation techniques (Yang 2001) and its software
implementation, the Intelligent Decision System. Applications of the approach, and its
various extensions developed over the past ten years or so to handle more complex
uncertainties in various parameters, are also briefly listed in this section. In the final
section, the limitations of the ER approach and further research directions are explored.
2 The ER approach and its development process
The ER approach comprises the modelling framework for MCDM problems using belief
structures and belief decision matrices (Xu and Yang 2003), the ER algorithm for
information aggregation (Yang and Singh 1994; Yang and Xu 2002a, 2002b), and the rule and
utility based information transformation techniques for dealing with various types of
information of both a quantitative and qualitative nature under the necessary conditions
of utility and value equality (Yang 2001). In this section, some of the background of
those developments and the links among them are discussed after a brief outline of the
timeline of the developments.

¹ A student version of IDS, which handles up to 50 criteria without other restrictions, can
be downloaded from www.e-ids.co.uk.
2.1 The development of the ER approach
The timeline of the development of the ER approach, including some of its milestones over
the past 20 years, is outlined below.
1989: The concept of belief degrees was first introduced to MCDM for describing the
performance of an alternative over qualitative attributes (Zhang et al. 1989),
while the authors were working with practitioners in a large oil refinery
on its production planning and strategic decision making, in response to the
needs and requirements arising from the increasingly competitive oil market.
1994: The first version of the Evidential Reasoning approach was published (Yang and
Singh 1994; Yang and Sen 1994a), providing the first innovative link of its kind
between MCDM and the theory of evidence.
– This version of the ER approach provides the original concept of evidential
reasoning for criteria aggregation, but the reasoning process is approximate
rather than exact, and the generated outcomes assume limited compensation
among criteria, which may not be appropriate in decision environments
where more complete compensation among criteria is required and
multiple pieces of evidence are in conflict with each other.
– As Huynh et al. (2006) point out, this ER algorithm does not satisfy all
four synthesis axioms proposed by Yang and Xu (2002a) to check the
rationality of an information aggregation rule. Those axioms are outlined in
Sect. 2.3.2.
– This version of the ER approach does not handle incomplete information
explicitly, in the sense that it does not separate an unassigned probability
mass into the part caused by incompleteness and the part caused by
criterion weights, so ignorance may be exaggerated in the aggregated outcome.
– In this version of the ER approach, it is assumed that the performances of
alternatives are measured qualitatively only. Quantitative criteria are not
modelled within the same belief framework.
– In this version of the ER approach, random numbers and interval
uncertainties are not handled.
– It should be noted that without the further developments in subsequent
years, this version of the ER approach would be of limited practical use.
A more detailed comparison of this version of the ER approach and the one
published in 2002 (Yang and Xu 2002a) can be found in Huynh et al. (2006).
2001: A set of rule and utility based information transformation techniques is
developed (Yang 2001). The techniques play an essential role in enabling the ER
approach to handle different types of information, including a mix of quantitative
and qualitative information with uncertainties. A new normalisation process
was also introduced to normalise weights, providing a basis for a rigorous
reasoning process.
2002: The original Evidential Reasoning algorithm is modified so that the reasoning
process is rigorous and rational for MCDM where appropriate compensation
among criteria is necessary (Yang and Xu 2002a). To check the rationality of
a reasoning process, four synthesis axioms are proposed and it is proved that
the ER algorithm satisfies them all. The nonlinearity property of the ER
aggregation process is also studied (Yang and Xu 2002b). The new ER approach,
equipped with the new ER algorithm and the information transformation
techniques, is capable of properly handling
– both qualitative and quantitative information,
– probabilistic uncertainty,
– incomplete information, and
– complete ignorance in some assessments.
1998–2006: A software tool, IDS (Intelligent Decision System), is developed, tested and
applied to facilitate applications of the ER approach. A demonstration and
student version is available from http://www.php.portals.mbs.ac.uk/Portals/49/
docs/IDS50StudentVersion_000.rar.
2006–2007: The ER approach is extended to handle uncertainties of an interval or fuzzy
nature, or the co-existence of those uncertainties in various parameters, such as
when
(a) a belief degree is assigned to an interval covering several grades instead
of to a single grade; in terms of evidence theory, the focal elements in
this extension can be singletons, or any subsets of a number of adjacent
singletons in a pre-defined order, such as ordered according to their utility
values (Xu et al. 2006b),
(b) assessment grades are defined as fuzzy linguistic terms instead of crisp
terms (Yang et al. 2006b),
(c) belief degrees in a belief structure can be intervals (Wang et al. 2006b,
2007),
(d) interval uncertainties co-exist in both the weights of criteria and belief
degrees (Guo et al. 2007), or
(e) assessment grades are defined as fuzzy linguistic terms and a belief degree
is assigned to a grade interval, which is the combination of (a) and (b)
(Guo et al. 2009).
(f) In addition, the analytical format (vs. the recursive format) of the new ER
algorithm (Yang and Xu 2002a) is obtained (Wang et al. 2006a), which
enables researchers to further explore the mathematical properties of the
algorithm and use it for nonlinear optimization and system modeling with
both quantitative and qualitative parameters (Zhou et al. 2010; Guo et al. 2007).
2.2 MCDM problem modelling in the ER approach: the belief decision matrix
2.2.1 Inadequacy of using averages for MCDM problem modelling
Decision making is one of the most common human activities. In many situations, decision
makers may have to consider a range of different criteria, which are often of both a
quantitative and qualitative nature. In addition to the hybrid nature of decision criteria,
the information collected to measure decision alternatives against the criteria is often
plagued with uncertainties and unknowns.
Table 1 Example of decision matrix (car assessment)

        Price  Engine quality
               Quietness   Fuel consumption  Vibration  Responsiveness
Car 1   8000   Very Quiet  40 mile/gallon    Normally   Good
Car 2   9000   Quiet       45 mile/gallon    Lightly    Excellent
. . .
Car N   7000   Noisy       47 mile/gallon    Heavily    Good
MCDM problems can be modelled using decision matrices in which each element represents
the outcome of an alternative course of action (or simply an alternative or a decision)
measured against a criterion. In most applications, the element is simply an average value,
such as a score, a grade or a rating. For example, the decision matrix for a car purchase
problem may look like Table 1.
Many methods have been developed to deal with MCDM problems based on such decision
matrices. Additive utility (value) function methods (Keeney and Raiffa 1976) and
outranking approaches (Guitouni et al. 2008; Roy 1990, 1996; Brans and Mareschal 1990;
Vincke 1992; Hwang and Yoon 1981) are among the most commonly used.
Using decision matrices with single valued elements is a relatively simple way to model
MCDM problems. Average numbers, however, are incapable of modelling uncertainties in
a decision problem. For example, the fuel consumption of a car could be a random variable
depending on road, weather and traffic conditions. An average consumption may have to
be calculated for the analysis when a decision matrix is used, which could lead to a flawed
decision (Savage 2009). Another commonly encountered problem is missing data (Kabak
and Ruan 2010; Yang et al. 2011), such as unanswered questions in responses to
questionnaire surveys. Although there are many approaches for handling missing data, such
as those proposed by Guitouni et al. (2008) and Greco et al. (2011), in practice the two
most common approaches are either replacing missing values with estimates or discarding
the whole set of data. Neither approach is ideal: the former may introduce distortion
and the latter leads to loss of valuable information (Nishisato and Ahn 1995).
As uncertainties cannot be modelled explicitly and taken fully into account in an
analysis using average numbers, methods based on such a modelling technique cannot be
used to analyse and reveal the true impact of uncertainties on decision outcomes. As
recognised by many decision analysts and practitioners, one of the most important purposes
of decision analysis is to understand the problem and the effects of any uncertainties
associated with it (Phillips 1984; French 1988, 1989; Tsoukiàs 2007). Hence the very
purpose of decision analysis may not be achieved satisfactorily by using average numbers,
even with partial sensitivity analysis.
2.2.2 Belief structure for MCDM problem modelling
To overcome the limitations of modelling MCDM problems using average numbers as
elements of decision matrices, other types of data have been introduced. The two most
frequently mentioned data types in the literature are fuzzy and interval numbers (Figueira
et al. 2005; Tavana et al. 2010; Deng et al. 2011), used to model fuzzy and interval
uncertainties respectively. Although in decision theory probabilities or probability
intervals are generally used
to model uncertainties (Gonzales and Jaffray 1998), they are not widely applied to model
uncertainties in MCDM problems (Stewart 2005 in Figueira et al. 2005, p. 445). Among
the five MCDM software packages studied by French and Xu (2005), only one deals with
probability uncertainty, through Monte Carlo simulation.
In the ER approach, MCDM problems are modelled using belief structures. A belief
structure is an extension of a traditional probability distribution (Shafer 1976; Beynon
et al. 2000). The main difference between the two is that in a belief structure, probability
can be assigned not only to a single event, but also to a subset of events. From this point
of view, the ER approach can be viewed as a probabilistic approach.
The idea of using belief structures for modelling uncertainties in MCDM problems was
initially conceived and proposed in response to needs arising from the strategic planning
processes in a large oil refinery (Zhang et al. 1989). Subsequent research found that it is
a powerful and unified tool for modelling various types of uncertainty, such as probability
uncertainty and uncertainties caused by partial or missing information (Yang and Singh
1994; Yang 2001).
Concept of belief structure A belief structure is a distributed assessment using belief
degrees. It is used to represent the performance of an alternative assessed against a
criterion. To illustrate the concept, suppose there is an MCDM problem in which M cars
(alternatives) are evaluated against L criteria, one of which is Engine Quality.
The engine quality of the m-th (m = 1, . . . , M) car may be assessed to be Excellent to
some degree (e.g. 0.6) due to its low fuel consumption, low vibration and high
responsiveness. At the same time, the quality may also be assessed to be only Poor to some
degree (e.g. 0.4 or less) because its quietness and starting quality can still be improved.
Such an assessment can be modeled as:

a_ml = {(Excellent, 0.6), (Poor, 0.4)}     (1)

where {(Excellent, 0.6), (Poor, 0.4)} is referred to as a belief structure, in which
Excellent and Poor are assessment grades, whilst 0.6 and 0.4 are degrees of belief.
More generally, suppose an MCDM problem has M alternatives assessed on L criteria. Let

H = {H_1, . . . , H_N}     (2)

be a collectively exhaustive and mutually exclusive set of assessment grades, where N is
the number of grades in the set. Then a belief structure can be expressed as

a_ml = {(H_1, β_{l,1}), . . . , (H_N, β_{l,N}), (H_{1N}, β_{l,1N})}     (3)

where m = 1, . . . , M, and β_{l,i} ≥ 0 (i = 1, . . . , N; l = 1, . . . , L) is the belief
degree to which the performance of the alternative is assessed to the grade H_i on
criterion l, β_{l,1N} = 1 − Σ_{i=1}^{N} β_{l,i} ≥ 0, and H_{1N} is the set of grades from
H_1 to H_N. H_{1N} is used to represent the unknown in the assessment. When
Σ_{i=1}^{N} β_{l,i} = 1, or equivalently β_{l,1N} = 0, the assessment is said to be complete;
otherwise it is incomplete.
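Equation (3) maps naturally onto a small data structure. The sketch below is a minimal
illustration in Python; the names `BeliefStructure` and `unknown_mass` are my own, not the
paper's. It stores the assessed belief degrees and derives β_{l,1N} as the unassigned
remainder:

```python
# Minimal sketch of the belief structure of Eq. (3): a distributed
# assessment over grades H_1..H_N plus the unassigned mass on H_1N.
# Class and attribute names are illustrative, not from the paper.

class BeliefStructure:
    def __init__(self, assessment):
        """assessment: dict mapping grade name -> belief degree (>= 0)."""
        if any(b < 0 for b in assessment.values()):
            raise ValueError("belief degrees must be non-negative")
        if sum(assessment.values()) > 1 + 1e-9:
            raise ValueError("belief degrees must sum to at most 1")
        self.assessment = dict(assessment)

    @property
    def unknown_mass(self):
        """beta_{l,1N} = 1 - sum_i beta_{l,i}: mass assigned to H_1N."""
        return 1.0 - sum(self.assessment.values())

    @property
    def is_complete(self):
        """Complete when the belief degrees sum to 1 (Eq. (3) condition)."""
        return abs(self.unknown_mass) < 1e-9

# The engine-quality assessment of Eq. (1): complete
a_ml = BeliefStructure({"Excellent": 0.6, "Poor": 0.4})
print(a_ml.is_complete)

# An incomplete assessment: 10% of the mass is unassigned (unknown)
partial = BeliefStructure({"Excellent": 0.2, "Good": 0.3, "Average": 0.4})
print(partial.unknown_mass)
```

Representing the unknown mass as a derived quantity, rather than a stored one, keeps the
constraint β_{l,1N} = 1 − Σ β_{l,i} true by construction.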
Uncertainty represented by belief structure Using a belief structure, the performance of
an alternative on a criterion can be modelled as follows, whether it is measured by precise
data or by data with uncertainties.
Precise data
If the performance can be precisely assessed to a grade, such as Excellent, without
any doubt, then it can be represented by the belief structure a_ml = {(Excellent, 1.0)}.
Therefore precise data (including qualitative data such as an assessment grade) can be
seen as a special case of a belief structure. This leads to the conclusion, later in this
subsection, that a decision matrix is a special case of a belief decision matrix.
Absence of data
Absence of data describes a situation where there is no data available to assess the
performance of an alternative against a criterion. Such a case can be represented by a
belief structure in which the sum of the belief degrees is 0, i.e.
Σ_{i=1}^{N} β_{l,i} = 0 or β_{l,1N} = 1.
Partial data or incomplete data
This is a situation where data for measuring the performance of an alternative against
a criterion are partially available. In this case, the sum of the belief degrees
in the distributed assessment for that attribute will be between 0 and 1, i.e.
0 < Σ_{i=1}^{N} β_{l,i} < 1.
Probability uncertainty
Some outcomes measured against a criterion may be of a random nature. For example,
the fuel consumption of a car in miles per gallon is not a deterministic number. Depending
on road conditions, traffic conditions and the season of the year, the figure can vary. The
nature of fuel consumption can be described by a probability distribution, which is a
belief structure in nature. Other common sources of uncertain data are subjective
judgements and questionnaire surveys. For example, in a customer satisfaction survey, if
20% of the customers evaluate the after sale service of a computer shop to be excellent,
30% good, 40% average and 10% express no opinion, this piece of evidence can be
represented by a belief structure as follows:

{(Excellent, 0.2), (Good, 0.3), (Average, 0.4), (unknown, 0.1)}.

It should be noted that, theoretically, a belief structure can be a continuous probability
or belief distribution. In practice, such as in IDS, the software implementation of the ER
approach, a continuous distribution is approximated by a discrete one of up to 20 data
points.
Concept of belief decision matrix If each element a_ml in a decision matrix is a belief
structure, then the matrix is called a belief decision matrix (Table 2).
As discussed earlier, precise data are special cases of belief structures. It can easily
be seen that a decision matrix using only average numbers as its elements is a special case
of a belief decision matrix in which every belief degree in each belief structure is either
1 or 0, subject to the condition that the belief degrees in each belief structure sum to 1.
Table 2 Belief decision matrix

Alternative   Attribute
              1      . . .   l                                                                  . . .   L
1             a_11   . . .   a_1l                                                               . . .   a_1L
. . .
m             a_m1   . . .   a_ml = {(H_1, β_{l,1}), . . . , (H_N, β_{l,N}), (H_{1N}, β_{l,1N})}  . . .   a_mL
. . .
M             a_M1   . . .   a_Ml                                                               . . .   a_ML
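The special-case relationship just stated can be shown directly: a conventional
decision-matrix element becomes a belief structure that puts all of its mass on a single
grade. A small illustrative sketch (function names are mine, not the paper's):

```python
# A conventional decision-matrix element (a single grade) is the special
# case of a belief structure with all mass on that grade.
# Helper names are illustrative, not from the paper.

def to_belief_structure(grade):
    """Embed a precise assessment grade as a one-point belief structure."""
    return {grade: 1.0}

def is_degenerate(belief_structure, tol=1e-9):
    """True if the structure is equivalent to a plain decision-matrix entry:
    every belief degree is 0 or 1 and the degrees sum to 1."""
    degrees = belief_structure.values()
    return (abs(sum(degrees) - 1.0) < tol
            and all(abs(b) < tol or abs(b - 1.0) < tol for b in degrees))

# Car 1's qualitative entries from Table 1, lifted into belief structures
row = {"Quietness": "Very Quiet", "Responsiveness": "Good"}
belief_row = {crit: to_belief_structure(g) for crit, g in row.items()}
print(belief_row["Quietness"])   # {'Very Quiet': 1.0}
```

Lifting every entry of Table 1 this way yields a belief decision matrix whose aggregate
behaviour coincides with the original decision matrix.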
2.3 The ER approach (Yang and Xu 2002a)
To synthesize distributed assessment information, the ER approach was first published in
1994 (Yang and Singh 1994; Yang and Sen 1994a). It was further developed and published
in 2001 and 2002 (Yang 2001; Yang and Xu 2002a). The earlier version of the ER approach
employs Dempster's evidence combination rule (Shafer 1976) for criteria aggregation,
with the introduction of criteria weights in the probability mass assignment, while the
later version employs a new evidence combination rule established by revising Dempster's
rule. Unlike Dempster's rule, the new rule allows normal compensation among criteria,
whilst both rules employ the orthogonal sum as the basis for evidence aggregation. The
outline of this new rule below is based on a paper published in 2002 (Yang and Xu 2002a).
The ER approach for MCDM problem analysis involves the following steps:
(1) Identifying and analysing a decision problem having multiple criteria in conflict,
including the identification of its stakeholders and decision makers, and modelling the
MCDM problem using a belief decision matrix, including the acquisition of decision
makers' preferences in terms of criteria weights and utilities or values;
(2) Transforming the various belief structures into a unified belief structure using the
rule or utility based information transformation techniques;
(3) Aggregating information using the ER algorithm. Note that the ER algorithm referred to
in the literature after 2002 is normally the version published in 2002 (Yang and Xu
2002a) unless otherwise specified;
(4) Performing sensitivity analysis, and generating distributed assessment outcomes,
utility scores or utility intervals (if there is missing information), and analysis
reports.
In this section, the example shown in Table 3 is frequently referred to in order to
illustrate the ER approach. In other words, it is assumed that step (1) has already been
performed and the belief decision matrix of the problem is given. The focus of this
subsection is therefore on steps (2) to (4).
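Step (3) relies on the ER combination rule of Yang and Xu (2002a). For orientation only,
the sketch below renders the recursive combination from that paper in Python. It is my own
reading of the published formulas, not the authoritative statement: variable names are
mine, criteria weights are assumed to be normalised (summing to 1), and all assessments
are assumed to share one set of N grades.

```python
# Sketch of the recursive ER aggregation of Yang and Xu (2002a), rendered
# from the published formulas; names are mine and the cited paper remains
# authoritative. weights must sum to 1; beta[i][n] is the belief degree of
# criterion i for grade n.

def er_aggregate(weights, beta):
    N = len(beta[0])
    # Basic probability masses for the first criterion
    w = weights[0]
    m = [w * b for b in beta[0]]        # mass assigned to each grade
    mbar = 1.0 - w                      # unassigned mass due to weight
    mtil = w * (1.0 - sum(beta[0]))     # unassigned mass due to incompleteness

    for i in range(1, len(weights)):
        w = weights[i]
        mi = [w * b for b in beta[i]]
        mbari, mtili = 1.0 - w, w * (1.0 - sum(beta[i]))
        mH, mHi = mbar + mtil, mbari + mtili
        # Normalisation: redistribute the mass on conflicting grade pairs
        conflict = sum(m[t] * mi[j] for t in range(N) for j in range(N) if j != t)
        K = 1.0 / (1.0 - conflict)
        m = [K * (m[n] * mi[n] + mH * mi[n] + m[n] * mHi) for n in range(N)]
        mtil = K * (mtil * mtili + mbar * mtili + mtil * mbari)
        mbar = K * (mbar * mbari)

    # Final belief degrees: strip the residue caused by criterion weights
    betas = [mn / (1.0 - mbar) for mn in m]
    beta_H = mtil / (1.0 - mbar)        # remaining degree of ignorance
    return betas, beta_H

# Two equally weighted criteria that both fully support grade 1:
print(er_aggregate([0.5, 0.5], [[1.0, 0.0], [1.0, 0.0]]))  # -> ([1.0, 0.0], 0.0)
```

The final call illustrates one of the synthesis axioms mentioned in Sect. 2.3.2: when every
criterion is fully assessed to the same grade, the aggregate is that grade with certainty.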
2.3.1 Rule and utility based information transformation
A decision is normally made by aggregating the various types of data collected for
different alternatives, such as those shown in Table 3. The data could be either
quantitative or qualitative
Table 3 Example of belief decision matrix (car assessment)

        Price  Engine quality
               Quietness          Fuel consumption                        Vibration                      Responsiveness
Car 1   8000   Very Quiet (100%)  35 (50%), 40 (50%)                      Heavily (50%), Normally (50%)  Good (75%), Excellent (25%)
Car 2   9000   Quiet (100%)       40 (33%), 45 (33%), 50 (33%)            Lightly (100%)                 Good (35%), Excellent (65%)
. . .
Car N   7000   Noisy (100%)       45 (25%), 46 (25%), 48 (25%), 49 (25%)  Heavily (80%), Normally (20%)  Average (15%), Good (70%), Excellent (5%), Unknown (10%)
in different formats. For instance, Fuel Consumption is quantitative and can be measured
by the number of miles a car can travel per gallon of fuel (mpg). On the other hand, it is
natural to measure a qualitative criterion using a specific set of grades appropriate for
the criterion, which does not necessarily need to be the same as the one used for assessing
another criterion. In terms of Quietness, for example, it is natural to use Very Quiet,
Quiet, Normal, Noisy and Very Noisy; in terms of Engine Quality and Responsiveness, it is
common to use Excellent, Good, Average, Below Average and Poor.
Step (2) is necessary because the various types of raw data collected and recorded in
Table 3 cannot be aggregated directly using the ER algorithm without first being
transformed into a unified belief structure under certain equivalence conditions. To
maintain a sense of equivalence between the original and the transformed information, the
relationships among the various sets of grades have to be properly interpreted and the
grades must be used in a consistent manner. For instance, the performance of a car engine
is said to be Good if it is Quiet, its Responsiveness is Good, its Vibration is Lightly,
and its Fuel Consumption is quite low (44 mpg for example).
In the above statement, it is implied that a Quiet engine means that Engine Quality is
Good as far as Quietness is concerned. In other words, the grade Quiet in the Quietness
assessment is equivalent to the grade Good in the quality assessment. Similarly, in the
above statement, if the Fuel Consumption of an engine is 44 mpg then its Engine Quality is
judged to be Good as far as Fuel Consumption is concerned, and so on.
In general, if both quantitative and qualitative data are involved in a decision problem,
it is necessary to transform the various sets of assessment grades or quantitative
information to a common framework of discernment so that they can be compared and
aggregated consistently. In the following sections, four techniques are described to
facilitate the transformation. These are listed and discussed in Yang (2001) and
summarised below:
(i) the rule based qualitative information transformation technique,
(ii) the rule based quantitative information transformation technique,
(iii) the utility based qualitative information transformation technique, and
(iv) the utility based quantitative information transformation technique.
2.3.1.1 Rule-based qualitative information transformation In Table 3, different sets of
linguistic evaluation grades (or frameworks of discernment) are used, and each grade in
one set may be wholly or partially equivalent to a grade in another set in terms of the
decision makers' preferences towards the assessment outcomes represented by the grade. If
such an equivalence relationship can be established, then performances represented by one
set of grades can be represented by a common set. For sub criteria (sometimes referred to
as basic criteria) that share the same father criterion (sometimes referred to as a
general criterion), a common set can be the one used for the general criterion. One way to
establish the relationships is through rules given by decision makers.
For example, to transform a Quietness assessment to an Engine Quality assessment, the
following equivalence rule could be established:
If the evaluation grade Very Noisy in the Quietness assessment is equivalent to the
grade Poor in the Engine Quality assessment, Noisy equivalent to Below Average,
Normal to Average, Quiet to Good, and Very Quiet to Excellent, then one
could say that the set of grades {Very Noisy, Noisy, Normal, Quiet, Very Quiet} in the
Quietness assessment is equivalent to the set {Poor, Below Average, Average, Good,
Excellent} in the Engine Quality assessment.
The above equivalence rule is based on the fact that individual grades in the two sets
are judged to be equivalent on a one-to-one basis. In the case of Vibration, measured by
Heavily, Normally or Lightly, however, the grade Heavily for the Vibration criterion may
imply a Poor grade of engine quality to some degree, such as 80%, and an Average grade
to the remaining degree (i.e. 20%). In general, a grade for a basic criterion may imply
several grades for a general criterion to certain degrees, and vice versa. Suppose

H_n: the n-th grade for the assessment of a general criterion
H_{m,l}: the m-th grade of the l-th basic criterion
α_{n,m}: the degree to which H_{m,l} implies H_n
β_{m,l}: the belief degree to which the l-th basic criterion is assessed to H_{m,l}
β_{n,l}: the belief degree to which the l-th basic criterion is assessed to H_n
Then, an assessment {(H_{m,l}, β_{m,l}), m = 1, . . . , M} under a set of grades
{H_{m,l}, m = 1, . . . , M} can be equivalently transformed to an assessment
{(H_n, β_{n,l}), n = 1, . . . , N} under another set of grades {H_n, n = 1, . . . , N}
using the following matrix equation:

    [β_{1,l}]   [α_{1,1}  α_{1,2}  . . .  α_{1,M}] [β_{1,l}]
    [β_{2,l}] = [α_{2,1}  α_{2,2}  . . .  α_{2,M}] [β_{2,l}]
    [  ...  ]   [  ...      ...    . . .    ...  ] [  ...  ]     (4)
    [β_{N,l}]   [α_{N,1}  α_{N,2}  . . .  α_{N,M}] [β_{M,l}]
The values of α_{n,m} (m = 1, . . . , M and n = 1, . . . , N) should satisfy
Σ_{n=1}^{N} α_{n,m} = 1 and are determined by the following rules extracted from decision
makers:

A grade H_{m,l} implies
a grade H_1 to a degree of α_{1,m};
a grade H_2 to a degree of α_{2,m};
. . . ; and
a grade H_N to a degree of α_{N,m}, with m = 1, . . . , M.

Because the values of α_{n,m} (m = 1, . . . , M and n = 1, . . . , N) are determined using
rules, this technique is named the rule based information transformation technique. There
is another method, called the utility based method, to determine the α_{n,m}
(m = 1, . . . , M and n = 1, . . . , N), as described in the following section.
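As a quick numerical check of Eq. (4), the sketch below applies an α matrix that encodes
the Vibration rules discussed above (Heavily implying 80% Poor and 20% Average; the
Normally and Lightly rows are my own assumptions for illustration):

```python
# Rule-based transformation of Eq. (4): beta_general = alpha @ beta_basic,
# where alpha[n][m] is the degree to which basic grade m implies general
# grade n. The alpha values below are illustrative assumptions.

def transform(alpha, beta_basic):
    """Multiply the N x M rule matrix alpha by the basic belief vector."""
    return [sum(a_nm * b for a_nm, b in zip(row, beta_basic)) for row in alpha]

# Basic grades for Vibration: [Heavily, Normally, Lightly]
# General grades for Engine Quality: [Poor, Below Average, Average, Good, Excellent]
alpha = [
    [0.8, 0.0, 0.0],   # Poor
    [0.0, 0.0, 0.0],   # Below Average
    [0.2, 1.0, 0.0],   # Average  (each column sums to 1, as required)
    [0.0, 0.0, 1.0],   # Good
    [0.0, 0.0, 0.0],   # Excellent
]
# Car 1 in Table 3: Heavily (50%), Normally (50%)
beta = transform(alpha, [0.5, 0.5, 0.0])
print(beta)   # -> [0.4, 0.0, 0.6, 0.0, 0.0]
```

Note that because every column of α sums to 1, a complete basic assessment transforms into
a complete general assessment.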
2.3.1.2 Utility-based qualitative information transformation In the transformation
technique described in the previous section, it was implicitly assumed that the original
assessment is equivalent to the transformed assessment in terms of value (or utility) to
decision makers, though the values or utilities of both assessments were not explicitly
expressed. Note that there is a subtle difference between the concepts of value and utility
in the MCDM literature, which can lead to different processes for their estimation. Whilst
both value and utility are used to measure the preferences of decision makers towards
outcomes, utility is also used to gauge decision makers' risk attitudes toward uncertain
outcomes. In this paper, however, this difference is not emphasized. The utility of an
assessment is given by the weighted sum of the utilities of its grades, using the degrees
of belief as weights. The utility of a grade is a real number, normally between 0 for the
most unfavourable grade and 1 for the most favourable grade, used to measure the decision
maker's preference towards the grade. There is therefore an element of subjectivity in
utility estimation.
Suppose the utilities of all grades, denoted u(H_{m,l}) and u(H_n), are already given by a
decision maker for both sets of grades {H_{m,l}, m = 1, . . . , M} and
{H_n, n = 1, . . . , N}. Then an assessment {(H_{m,l}, β_{m,l}), m = 1, . . . , M} under
the set of grades {H_{m,l}, m = 1, . . . , M} can be equivalently transformed to another
assessment {(H_n, β_{n,l}), n = 1, . . . , N} under the set of grades
{H_n, n = 1, . . . , N} using the matrix equation of the same form as (4),

    (β_{1,l}, β_{2,l}, . . . , β_{N,l})^T = [α_{n,m}]_{N×M} (β_{1,l}, β_{2,l}, . . . , β_{M,l})^T

where

    α_{n,m} = (u(H_{n+1}) − u(H_{m,l})) / (u(H_{n+1}) − u(H_n)),
    α_{n+1,m} = 1 − α_{n,m},
    α_{i,m} = 0 (i = 1, . . . , N; i ≠ n, n + 1)
    if u(H_n) ≤ u(H_{m,l}) ≤ u(H_{n+1}) for n = 1, . . . , N − 1; m = 1, . . . , M     (5)

If the utilities of the two sets of grades are not given, they could be determined using
the following equal distance scaling equations, which represent a risk neutral attitude or
a pseudo-linear value or utility function of the decision maker:

    u(H_n) = (n − 1)/(N − 1)   for n = 1, . . . , N, if H_{n+1} is preferred to H_n     (6)

    u(H_{m,l}) = (m − 1)/(M − 1)   for m = 1, . . . , M, if H_{m+1,l} is preferred to H_{m,l}     (7)
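Equation (5) is a linear interpolation in utility space: each basic grade splits its belief
between the two general grades whose utilities bracket its own. A sketch in Python (the
function and variable names are mine, not the paper's):

```python
# Utility-based transformation, Eq. (5): each basic grade H_{m,l} with
# utility u_basic[m] distributes its belief between the two adjacent
# general grades whose utilities u_gen[n] and u_gen[n+1] bracket it.
# Names are illustrative, not from the paper.

def utility_alpha(u_gen, u_basic):
    """Build the N x M matrix alpha_{n,m} of Eq. (5).
    u_gen must be sorted ascending (H_{n+1} preferred to H_n)."""
    N, M = len(u_gen), len(u_basic)
    alpha = [[0.0] * M for _ in range(N)]
    for m, u in enumerate(u_basic):
        for n in range(N - 1):
            if u_gen[n] <= u <= u_gen[n + 1]:
                a = (u_gen[n + 1] - u) / (u_gen[n + 1] - u_gen[n])
                alpha[n][m] = a          # share given to H_n
                alpha[n + 1][m] = 1 - a  # share given to H_{n+1}
                break
    return alpha

# Equal-distance utilities of Eqs. (6)-(7): 5 general grades, 3 basic grades
u_gen = [(n - 1) / 4 for n in range(1, 6)]      # [0, 0.25, 0.5, 0.75, 1]
u_basic = [(m - 1) / 2 for m in range(1, 4)]    # [0, 0.5, 1]
alpha = utility_alpha(u_gen, u_basic)
# Each column sums to 1, as Eq. (5) requires
print([sum(alpha[n][m] for n in range(5)) for m in range(3)])  # -> [1.0, 1.0, 1.0]
```

When every column of the resulting α sums to 1, total belief is conserved by the
transformation, mirroring the rule-based case.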
2.3.1.3 Quantitative (including random) data transformation A quantitative criterion is
best assessed using numerical values initially. To aggregate a quantitative criterion
together with other qualitative criteria, equivalence rules also need to be extracted to
transform a value into an equivalent belief structure using a chosen set of grades. For
instance, a Fuel Consumption of 50 mpg may mean that the Engine Quality of the car is
Excellent as far as Fuel Consumption is concerned. In other words, the value 50 mpg is
equivalent to Excellent Engine Quality as far as Fuel Consumption is concerned. Similarly,
Fuel Consumptions of 44, 38, 32 and 25 mpg may be equivalent to Good, Average,
Below Average and Poor, respectively. Any other number between 25 and 50 mpg can
be made equivalent to a few grades with different degrees of belief. For example, a Fuel
Consumption of 42 mpg is equivalent to Good with a degree of belief of 67% and Average
with a degree of belief of 33%.
In general, to use a quantitative criterion in an assessment, we can choose a set of referential values $k_m$ $(m = 1, \ldots, M)$ between the best and worst outcomes that an alternative can take on the criterion and a set of grades $H_{m,l}$ $(m = 1, \ldots, M)$ so that $k_m$ is equivalent to the grade $H_{m,l}$, or

\[
k_m \text{ means } H_{m,l} \quad (m = 1, \ldots, M). \tag{8}
\]

Without loss of generality, we can even define a set of numerical grades $H_{m,l}$ $(m = 1, \ldots, M)$ so that $H_{m,l} = k_m$ $(m = 1, \ldots, M)$, with $H_{M,l}$ being the most favourable feasible value of the criterion and $H_{1,l}$ the least. If this is the case, then any value $k_j$ $(H_{m,l} \le k_j \le H_{m+1,l}, m = 1, \ldots, M-1)$ of the quantitative criterion, assuming that it is the $l$th criterion, can be equivalently expressed as:

\[
k_j = \sum_{m=1}^{M} (H_{m,l}\, s_{m,j}) \tag{9}
\]

where

\[
s_{m,j} = \frac{H_{m+1,l} - k_j}{H_{m+1,l} - H_{m,l}}, \quad s_{m+1,j} = 1 - s_{m,j} \quad \text{if } H_{m,l} \le k_j \le H_{m+1,l},\ m = 1, \ldots, M-1 \tag{10}
\]
\[
s_{i,j} = 0 \quad \text{for } i = 1, \ldots, M;\ i \neq m, m+1
\]

Note that in the above transformation the important equivalence in terms of average value
and piecewise linear utility is maintained for further decision analysis (Yang 2001). The
performance of $k_j$ in terms of the set of grades $H_{m,l}$ $(m = 1, \ldots, M)$ can be expressed as

\[
\{(H_{m,l}, \beta_{m,l}), m = 1, \ldots, M\} \quad \text{with } \beta_{m,l} = s_{m,j} \ (m = 1, \ldots, M) \tag{11}
\]
In many decision situations, a quantitative criterion may be measured by a random variable and take several values with different probabilities. Such assessment information can
be expressed using a random number $\{(k_j, p_j), j = 1, \ldots, P\}$, where $k_j$ $(j = 1, \ldots, P)$ are the possible values that an alternative may take on the criterion, $p_j$ the corresponding probability, and $P$ is the number of possible values that the criterion may take. Using (9)
and (10), we can calculate $s_{m,j}$ $(m = 1, \ldots, M)$ for each $k_j$ $(j = 1, \ldots, P)$. The random
number $\{(k_j, p_j), j = 1, \ldots, P\}$ can then be equivalently transformed to an assessment
$\{(H_{m,l}, \beta_{m,l}), m = 1, \ldots, M\}$ under the set of grades $\{H_{m,l}, m = 1, \ldots, M\}$ using the following matrix equation:

\[
\begin{pmatrix} \beta_{1,l} \\ \beta_{2,l} \\ \vdots \\ \beta_{M,l} \end{pmatrix}
=
\begin{pmatrix}
s_{1,1} & s_{1,2} & \cdots & s_{1,P} \\
s_{2,1} & s_{2,2} & \cdots & s_{2,P} \\
\vdots & \vdots & \ddots & \vdots \\
s_{M,1} & s_{M,2} & \cdots & s_{M,P}
\end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \\ \vdots \\ p_P \end{pmatrix}
\tag{12}
\]
When the quantitative criterion takes a deterministic value, such as $k_j$, then $p_j = 1$ and $p_i = 0$ for $i = 1, \ldots, P$ and $i \neq j$ in (12). That is, for a deterministic value, (12) becomes

\[
\begin{pmatrix} \beta_{1,l} \\ \beta_{2,l} \\ \vdots \\ \beta_{M,l} \end{pmatrix}
=
\begin{pmatrix} s_{1,j} \\ s_{2,j} \\ \vdots \\ s_{M,j} \end{pmatrix}
\tag{13}
\]

This is equivalent to (11).
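Building on the same per-value rule, (12) amounts to a probability-weighted mixture of the per-value belief vectors. A self-contained Python sketch (names are illustrative):

```python
def _value_beliefs(value, refs):
    """Per-value belief vector s_{m,j} from (10); refs must be increasing."""
    s = [0.0] * len(refs)
    for m in range(len(refs) - 1):
        if refs[m] <= value <= refs[m + 1]:
            s[m] = (refs[m + 1] - value) / (refs[m + 1] - refs[m])
            s[m + 1] = 1 - s[m]
            break
    return s

def random_to_beliefs(outcomes, refs):
    """Transform a random number {(k_j, p_j)} into a belief structure,
    following (12): a probability-weighted sum of per-value beliefs."""
    beliefs = [0.0] * len(refs)
    for value, p in outcomes:
        for m, s in enumerate(_value_beliefs(value, refs)):
            beliefs[m] += p * s
    return beliefs
```

For example, a criterion taking the value 38 or 44 with equal probability 0.5 over the referential values (25, 32, 38, 44, 50) yields beliefs of 0.5 in each of the third and fourth grades.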
In order to aggregate the basic quantitative criterion with other basic criteria, it is
necessary to transform the assessment results $\{(H_{m,l}, \beta_{m,l}), m = 1, \ldots, M\}$ under the set
of grades $\{H_{m,l}, m = 1, \ldots, M\}$ to $\{(H_n, \beta_{n,l}), n = 1, \ldots, N\}$ under the set of grades
$\{H_n, n = 1, \ldots, N\}$ of the general criterion. We can use the matrix equation described
in Sect. 2.3.1.1 to perform the transformation. Combining (4) with (12), we can equivalently transform a deterministic number $k_j$ or a random number $\{(k_j, p_j), j = 1, \ldots, P\}$ to
$\{(H_n, \beta_{n,l}), n = 1, \ldots, N\}$ using the following equation:

\[
\begin{pmatrix} \beta_{1,l} \\ \beta_{2,l} \\ \vdots \\ \beta_{N,l} \end{pmatrix}
=
\begin{pmatrix}
\alpha_{1,1} & \alpha_{1,2} & \cdots & \alpha_{1,M} \\
\alpha_{2,1} & \alpha_{2,2} & \cdots & \alpha_{2,M} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_{N,1} & \alpha_{N,2} & \cdots & \alpha_{N,M}
\end{pmatrix}
\begin{pmatrix}
s_{1,1} & s_{1,2} & \cdots & s_{1,P} \\
s_{2,1} & s_{2,2} & \cdots & s_{2,P} \\
\vdots & \vdots & \ddots & \vdots \\
s_{M,1} & s_{M,2} & \cdots & s_{M,P}
\end{pmatrix}
\begin{pmatrix} p_1 \\ p_2 \\ \vdots \\ p_P \end{pmatrix}
\tag{14}
\]
2.3.2 The ER algorithm

Once the different types of assessment information are transformed into belief structures defined
by the same set of grades, both quantitative and qualitative criteria can be aggregated together, while quantitative criteria can take either precise or random numbers and qualitative
criteria can be assessed using different sets of grades. For example, the Engine Quality criterion of a car may be obtained by aggregating the following belief structures under the same
set of grades:

S(Quietness) = {(Good, 0.5), (Excellent, 0.3)}
S(Responsiveness) = {(Good, 1.0)}
S(Fuel Consumption) = {(Below Average, 0.5), (Average, 0.5)}
S(Vibration) = {(Good, 0.5), (Excellent, 0.5)}
S(Starting) = {(Good, 1.0)}

In an ideal situation, the Engine Quality of a car will be assessed to be Good if its Responsiveness, Fuel Consumption, Quietness, and Vibration are all assessed to be exactly Good.
However, such consensus assessments are rare and performances on criteria are often assessed to different evaluation grades, as shown in the above example. A further issue is that
an assessment may not be complete. For example, the assessment for Quietness is not complete as the total degree of belief in the assessment is 0.5 + 0.3 = 0.8. In other words, 20%
of the degree of belief in the assessment is missing.
To judge the quality of an engine and compare it with other engines, a question is how to
generate a quality assessment for the engine by aggregating the various assessments of the
quality criteria as given above, which could be incomplete. This question is common to most
MCDM problems. The Evidential Reasoning (ER) algorithm (Yang and Xu 2002a) provides
a systematic and rational way of dealing with the aggregation problem. The rationality of
the ER algorithm is checked by using the following four synthesis axioms and it is proven
that the combined degrees of belief generated by the ER algorithm satisfy the axioms (Yang
and Xu 2002a; Huynh et al. 2006).
Axiom 1 Independence:
If no basic criterion is assessed to an evaluation grade at all then the general crite-
rion should not be assessed to the same grade either.
Axiom 2 Consensus:
If all basic criteria are precisely assessed to an individual grade, then the general
criterion should also be precisely assessed to the same grade.
Axiom 3 Completeness:
If all basic criteria are completely assessed to a subset of grades, then the general
criterion should be completely assessed to the same subset as well.
Axiom 4 Incompleteness:
If basic assessments are incomplete, then a general assessment obtained by aggregating the incomplete basic assessments should also be incomplete, with the
degree of incompleteness properly expressed.
The logic behind these axioms is that if an alternative has a good (or bad) performance on a
sub-criterion, then it must be good (or bad) to a certain extent overall. The extent is measured
by both the degree to which that sub-criterion is important to the overall performance of the
alternative and the degree to which the performance on the sub-criterion belongs to the good (or
bad) category.
The ER algorithm has four steps: generate basic probability masses, combine basic probability masses, generate combined degrees of belief, and generate utility intervals. They are
outlined below.
2.3.2.1 Generate basic probability masses

In the engine quality assessment problem, each
sub-criterion plays a role in the assessment and the importance of the role is normally represented by a weight assigned to the criterion. Weights can be assigned using methods such
as those based on pairwise comparisons (Saaty 1988) or directly by the decision maker.
The weight $W_i$ $(i = 1, \ldots, L)$ assigned to each of the basic criteria reflects its relative importance to the general criterion. Therefore its absolute value is not important and for this
reason weights can be normalised. In the ER algorithm, the following equation is used to
calculate the set of normalised weights $\omega_i$ $(i = 1, \ldots, L)$:

\[
\omega_i = \frac{W_i}{\sum_{j=1}^{L} W_j} \quad (i = 1, \ldots, L) \tag{15}
\]

so that

\[
0 \le \omega_i \le 1 \quad \text{and} \quad \sum_{i=1}^{L} \omega_i = 1 \tag{16}
\]
The ER algorithm operates on probability masses which take into account the relative importance of criteria and are defined as follows.

Let $\beta_{n,i}$ denote the degree of belief that the $i$th basic criterion (any of Quietness, Responsiveness, Fuel Consumption or Vibration in the numerical example) is assessed to a
grade $H_n$. Let $m_{n,i}$ be a basic probability mass representing the degree to which the $i$th basic
criterion supports the hypothesis that the general criterion (Engine Quality) is assessed to
the $n$th grade $H_n$. Let $H$ denote the whole set of grades, or $H = \{H_n, n = 1, \ldots, N\}$. Then
the probability mass $m_{n,i}$ is defined as the weighted belief degree as shown in (17). Let $m_{H,i}$
be the remaining probability mass unassigned to any individual grade after all $N$ probability
masses $m_{n,i}$ $(n = 1, \ldots, N)$ have been assessed. Then $m_{n,i}$ and $m_{H,i}$ are given by

\[
m_{n,i} = \omega_i \beta_{n,i}, \quad n = 1, \ldots, N \tag{17}
\]
\[
m_{H,i} = 1 - \sum_{n=1}^{N} m_{n,i} = 1 - \omega_i \sum_{n=1}^{N} \beta_{n,i} \tag{18}
\]

If $m_{H,i}$ is decomposed into two parts, $\bar{m}_{H,i}$ and $\tilde{m}_{H,i}$, with $m_{H,i} = \bar{m}_{H,i} + \tilde{m}_{H,i}$, where

\[
\bar{m}_{H,i} = 1 - \omega_i \quad \text{and} \quad \tilde{m}_{H,i} = \omega_i \Bigl(1 - \sum_{n=1}^{N} \beta_{n,i}\Bigr) \tag{19}
\]

then $\bar{m}_{H,i}$ can be interpreted as the remaining probability mass that is not yet assigned to
individual grades due to the fact that criterion $i$ plays only one part in the assessment, as
indicated by its weight. In other words, $\bar{m}_{H,i}$ provides the scope within which other criteria can
play a role in the assessment. It should eventually be assigned to individual grades in a way
that depends upon how all criteria are assessed and weighted. The second part of the
remaining probability mass, $\tilde{m}_{H,i}$, is due to the incompleteness in an assessment and therefore
should not be assigned to individual grades. From (19) we can see that $\tilde{m}_{H,i}$ is proportional
to $\omega_i$ and will cause the subsequent assessments to be incomplete.
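As a sketch, (17)-(19) for a single criterion can be written as follows (names are illustrative, not from the paper):

```python
def basic_masses(beliefs, weight):
    """Basic probability masses for one criterion, following (17)-(19).

    beliefs: degrees of belief beta_{n,i} over the N grades
    weight:  normalised weight omega_i of the criterion, in [0, 1]
    Returns (m_n for n = 1..N, m_bar_H, m_tilde_H).
    """
    m = [weight * b for b in beliefs]        # (17): m_{n,i} = omega_i * beta_{n,i}
    m_bar_H = 1 - weight                     # (19): scope left for other criteria
    m_tilde_H = weight * (1 - sum(beliefs))  # (19): mass due to incompleteness
    return m, m_bar_H, m_tilde_H
```

By construction the individual-grade masses and the two unassigned parts always sum to one, as (18)-(19) require.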
2.3.2.2 Combine probability masses: the new ER algorithm

In (17) to (19), the contribution of the $i$th basic criterion to the assessment of the general criterion is represented by
the basic probability masses. The ER algorithm uses the following recursive equations to
aggregate the basic probability masses. The associative property of the algorithm guarantees that the order of combination has no effect on the end result (Yang and Xu 2002a). Its
equivalent analytical format is given in Sect. 2.3.2.4.
Let $I_{n,i}$ $(n = 1, 2, \ldots, N)$, $\bar{I}_{H,i}$ and $\tilde{I}_{H,i}$ denote the combined probability masses generated by aggregating the first $i$ criteria. The $(i+1)$th criterion is then combined with the first
$i$ criteria in a recursive manner as follows:

\[
I_{n,1} = m_{n,1} \quad (n = 1, 2, \ldots, N) \tag{20a}
\]
\[
I_{H,1} = m_{H,1} \tag{20b}
\]
\[
\bar{I}_{H,1} = \bar{m}_{H,1} \tag{20c}
\]
\[
\tilde{I}_{H,1} = \tilde{m}_{H,1} \tag{20d}
\]
\[
K_{i+1} = \Biggl[1 - \sum_{t=1}^{N} \sum_{\substack{j=1 \\ j \neq t}}^{N} I_{t,i}\, m_{j,i+1}\Biggr]^{-1} \tag{20e}
\]
\[
I_{n,i+1} = K_{i+1}\bigl[I_{n,i}\, m_{n,i+1} + I_{H,i}\, m_{n,i+1} + I_{n,i}\, m_{H,i+1}\bigr] \quad (n = 1, \ldots, N) \tag{20f}
\]
\[
\tilde{I}_{H,i+1} = K_{i+1}\bigl[\bar{I}_{H,i}\, \tilde{m}_{H,i+1} + \tilde{I}_{H,i}\, \bar{m}_{H,i+1} + \tilde{I}_{H,i}\, \tilde{m}_{H,i+1}\bigr] \tag{20g}
\]
\[
\bar{I}_{H,i+1} = K_{i+1}\bigl[\bar{I}_{H,i}\, \bar{m}_{H,i+1}\bigr] \tag{20h}
\]
\[
I_{H,i+1} = \bar{I}_{H,i+1} + \tilde{I}_{H,i+1} \tag{20i}
\]
\[
i = 1, 2, \ldots, L-1
\]

The process continues until $i + 1 = L$, and $I_{n,L}$, $\bar{I}_{H,L}$ and $\tilde{I}_{H,L}$ are obtained, where $I_{n,L}$
is the combined probability mass assigned to the $n$th grade $(n = 1, \ldots, N)$, $\bar{I}_{H,L}$ the combined
probability mass that needs to be redistributed to the $N$ grades, and $\tilde{I}_{H,L}$ the remaining combined
probability mass that cannot be assigned to any specific grade due to insufficient
information, with $I_{H,L} = \bar{I}_{H,L} + \tilde{I}_{H,L}$.
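The recursive scheme (20a)-(20i) can be sketched as follows (a direct, unoptimised transcription; names are illustrative):

```python
def er_combine(masses):
    """Recursive ER combination, following (20a)-(20i).

    masses: list of (m_n list, m_bar_H, m_tilde_H) per criterion, e.g. as
    produced from the basic-mass definitions in (17)-(19).
    Returns the combined (I_n list, I_bar_H, I_tilde_H).
    """
    I, I_bar, I_tilde = list(masses[0][0]), masses[0][1], masses[0][2]  # (20a)-(20d)
    for m, m_bar, m_tilde in masses[1:]:
        N = len(I)
        I_H = I_bar + I_tilde
        # (20e): normalisation factor that redistributes conflicting mass
        conflict = sum(I[t] * m[j] for t in range(N) for j in range(N) if j != t)
        K = 1.0 / (1.0 - conflict)
        # (20f): combined mass for each individual grade
        I = [K * (I[n] * m[n] + I_H * m[n] + I[n] * (m_bar + m_tilde))
             for n in range(N)]
        # (20g)-(20h): update the two parts of the unassigned mass
        I_tilde_new = K * (I_bar * m_tilde + I_tilde * m_bar + I_tilde * m_tilde)
        I_bar = K * (I_bar * m_bar)
        I_tilde = I_tilde_new
    return I, I_bar, I_tilde
```

With two criteria of equal normalised weight 0.5 that are both fully assessed to the first of two grades, this returns I = [0.75, 0.0] with 0.25 unassigned to the weight part; the normalisation in (21a) then yields a combined belief of 1.0 in that grade, as the consensus axiom requires.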
2.3.2.3 Generate combined degrees of belief

After all $L$ basic criteria have been aggregated and $I_{n,L}$ $(n = 1, 2, \ldots, N)$, $\bar{I}_{H,L}$ and $\tilde{I}_{H,L}$ are obtained, the remaining probability mass
that needs to be assigned to individual grades, $\bar{I}_{H,L}$, is assigned to all individual grades
proportionally using the following normalisation process so as to generate the combined
degree of belief in the grade $H_n$:

\[
\beta_n = \frac{I_{n,L}}{1 - \bar{I}_{H,L}}, \quad n = 1, 2, \ldots, N \tag{21a}
\]

The degree of belief that is not assigned to any individual grade is assigned to the whole
set $H$ by

\[
\beta_H = \frac{\tilde{I}_{H,L}}{1 - \bar{I}_{H,L}} \tag{21b}
\]

It is proven that the combined degrees of belief $\beta_n$ $(n = 1, \ldots, N)$ satisfy the four synthesis axioms given earlier. The incompleteness in the original assessments is preserved and
represented by $\beta_H$. The generated assessment for a general criterion can be represented by
the belief structure $\{(H_n, \beta_n), n = 1, \ldots, N\}$, which gives the lower bound of the degree of belief to which the performance of an alternative on the general criterion is assessed to grade
$H_n$, while $\{\beta_n + \beta_H, n = 1, \ldots, N\}$ gives the upper bound. When there is no incompleteness
in the basic assessments, $\beta_H = 0$ and the lower and upper bounds merge into one.
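A one-to-one Python sketch of (21a)-(21b) (illustrative names):

```python
def combined_beliefs(I, I_bar, I_tilde):
    """Final degrees of belief, following (21a)-(21b): the weight-induced
    unassigned mass I_bar is redistributed over the grades, while the
    incompleteness-induced mass I_tilde is kept as ignorance."""
    betas = [I_n / (1 - I_bar) for I_n in I]  # (21a)
    beta_H = I_tilde / (1 - I_bar)            # (21b)
    return betas, beta_H
```

Applied to the combined masses I = [0.75, 0.0] with 0.25 of weight-induced unassigned mass and no incompleteness, it yields beliefs [1.0, 0.0] and zero ignorance.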
2.3.2.4 The analytical format of the ER algorithm

It is proven (Wang et al. 2006b; Guo
et al. 2007) that the combined degrees of belief $\beta_n$ $(n = 1, \ldots, N)$ and $\beta_H$ obtained by the
recursive ER algorithm can also be obtained in the following analytical format:

\[
\beta_n = \frac{\prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1, j \neq n}^{N} \beta_{l,j}\bigr) - \prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1}^{N} \beta_{l,j}\bigr)}
{\sum_{k=1}^{N} \prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1, j \neq k}^{N} \beta_{l,j}\bigr) - (N-1)\prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1}^{N} \beta_{l,j}\bigr) - \prod_{l=1}^{L}(1 - \omega_l)},
\quad n = 1, \ldots, N, \tag{22}
\]

and

\[
\beta_H = \frac{\prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1}^{N} \beta_{l,j}\bigr) - \prod_{l=1}^{L}(1 - \omega_l)}
{\sum_{k=1}^{N} \prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1, j \neq k}^{N} \beta_{l,j}\bigr) - (N-1)\prod_{l=1}^{L}\bigl(1 - \omega_l \sum_{j=1}^{N} \beta_{l,j}\bigr) - \prod_{l=1}^{L}(1 - \omega_l)}, \tag{23}
\]

where $\omega_l$ and $\beta_{l,j}$ $(l = 1, \ldots, L;\ j = 1, \ldots, N)$ are the same as those in (17).
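A direct Python transcription of (22)-(23) (illustrative names; `math.prod` requires Python 3.8+):

```python
from math import prod

def er_analytical(weights, beliefs):
    """Analytical ER aggregation, following (22)-(23).

    weights: normalised weights omega_l, l = 1..L
    beliefs: beliefs[l][j] = beta_{l,j} over N grades for criterion l
    Returns (combined beliefs beta_n, unassigned belief beta_H).
    """
    L, N = len(weights), len(beliefs[0])

    def term(k):
        # prod over l of (1 - omega_l * sum of beta_{l,j} for j != k)
        return prod(1 - weights[l] * sum(b for j, b in enumerate(beliefs[l]) if j != k)
                    for l in range(L))

    base = prod(1 - weights[l] * sum(beliefs[l]) for l in range(L))
    empty = prod(1 - w for w in weights)
    denom = sum(term(k) for k in range(N)) - (N - 1) * base - empty
    betas = [(term(n) - base) / denom for n in range(N)]
    beta_H = (base - empty) / denom
    return betas, beta_H
```

Its results agree with the recursive scheme (20a)-(20i) followed by the normalisation (21a)-(21b), consistent with the equivalence proofs cited above.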
2.3.2.5 Generate utility intervals

If it is necessary to rank alternatives, their performances
represented by belief structures may not be directly comparable. In such circumstances, it is
desirable to generate numerical values equivalent to the belief structures in some sense. The
concept of expected utility is used to define such a value. Suppose $u(H_n)$ is the utility of the
grade $H_n$ with $u(H_{n+1}) > u(H_n)$ if $H_{n+1}$ is preferred to $H_n$. If all assessments are complete,
then $\beta_H = 0$ and the expected utility, calculated by

\[
u = \sum_{n=1}^{N} \beta_n u(H_n), \tag{24}
\]

can be used for ranking alternatives.

Note that $\beta_H$ given in (21b) or (23) is the unassigned degree of belief representing the extent
of the incompleteness (ignorance) in the overall assessment, and the belief interval $[\beta_n, \beta_n + \beta_H]$ provides the range of the degree of belief associated with $H_n$. In such circumstances,
three values are defined to characterise a belief structure or distributed assessment, namely
the minimum, maximum and average utilities.
Without loss of generality, suppose $H_1$ is the least preferred grade having the lowest
utility and $H_N$ the most preferred grade having the highest utility. Then the maximum, minimum and average utilities are given by

\[
u_{\max} = \sum_{n=1}^{N-1} \beta_n u(H_n) + (\beta_N + \beta_H) u(H_N) \tag{25}
\]
\[
u_{\min} = (\beta_1 + \beta_H) u(H_1) + \sum_{n=2}^{N} \beta_n u(H_n) \tag{26}
\]
\[
u_{\mathrm{avg}} = \frac{u_{\max} + u_{\min}}{2} \tag{27}
\]

where $u_{\max}$, $u_{\min}$ and $u_{\mathrm{avg}}$ are the best possible, worst possible and average performance
indicators in terms of utility values, respectively, and $u(H_n)$ $(n = 1, \ldots, N)$ are the utility
values of the grades $H_n$ $(n = 1, \ldots, N)$.
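A Python sketch of (25)-(27), in which the whole of the unassigned belief is pushed to the most and least preferred grades in turn (illustrative names):

```python
def utility_interval(betas, beta_H, utils):
    """Utility bounds for an incomplete assessment, following (25)-(27).

    betas:  combined degrees of belief beta_n, n = 1..N
    beta_H: unassigned degree of belief (ignorance)
    utils:  u(H_n) in increasing order, so utils[0] belongs to the least
            preferred grade and utils[-1] to the most preferred one
    """
    expected = sum(b * u for b, u in zip(betas, utils))
    u_max = expected + beta_H * utils[-1]     # (25): ignorance to the best grade
    u_min = expected + beta_H * utils[0]      # (26): ignorance to the worst grade
    return u_max, u_min, (u_max + u_min) / 2  # (27)
```

When beta_H = 0 the three values coincide with the expected utility in (24).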
2.3.3 Software implementation of the ER approach: Intelligent Decision System

The ER approach provides a novel and rigorous means to support decision making with
both quantitative and qualitative criteria under various types of uncertainty. However, its
application involves the handling of belief structures, which is more demanding than the
handling of a decision matrix. Therefore, the method may not be widely used without the
facilitation of a software tool. Having realised this need, a computer software package called
Intelligent Decision System (IDS), which implements the ER approach, was developed in the
late 1990s and improved in the subsequent years (Xu and Yang 2003; Xu et al. 2006a).
Although there are many multiple criteria decision support tools available, such as
ExpertChoice from http://www.expertchoice.com/,
Logical Decisions from http://www.logicaldecisions.com/, and
Banxia Decision Explorer from http://www.banxia.com/,
IDS has the following unique features.
- In IDS the employment of the belief structure for problem modelling allows it to accept and facilitate the collection of different types of raw information, such as quantitative information with different units and probability uncertainty, and subjective judgements with uncertainty using different sets of grades.
- Assessment information can be completely known (e.g. 100% total degree of belief in a belief structure), partially known (less than 100% total degree of belief) or completely unknown (0% total degree of belief).
- The aggregated performance of each alternative is a belief structure instead of an average score, which provides more informative conclusions and has proved useful in many decision making situations. It provides a transparent and panoramic view of the performances of the alternatives and shows the diversity or profiles of their performances so that decision makers can easily identify the strengths and weaknesses of each alternative for the formation of improvement strategies.
The ER approach has also been implemented in some web based applications (Xu 2010)
and groupware (Iourinski and Ramalingam 2005). The ER algorithm can also be easily
implemented in an Excel spreadsheet.
The development of IDS has sped up the applied research of the ER approach significantly. A significant number of publications on its applications have appeared, as outlined in
the following section.

2.4 Applications of the ER approach

In recent years, the ER approach has been applied to decision problems in many areas.
The following list is not exhaustive.
Engineering design
- Retro-fit ferry and ship design evaluation (Yang and Sen 1994b, 1996, 1997)
- Customer key voices identification for General Motors (Yang et al. 2011)
- Consumer preference identification (Wang et al. 2009)
- Engineering evaluation (Martinez et al. 2007)
- New product design (Chin et al. 2009)
Reliability, safety and risk assessment
- Water supply risk assessment (Li 2007)
- Marine system safety analysis and synthesis (Wang et al. 1995, 1996)
- Maritime security assessment (Yang et al. 2009)
- Software safety synthesis (Wang 1997; Wang and Yang 2001)
- Offshore risk analysis (Ren et al. 2009a)
- System reliability prediction (Hu et al. 2010)
Business management
- Total quality management and organisational self-assessment (Yang et al. 2001; Siow et al. 2001; Xu and Yang 2003)
- Innovation capability assessment (Xu et al. 2006a)
- Selection of strategies for implementing an IT infrastructure (Hilhorst et al. 2008)
Project management and supply chain management
- Contractor selection and supplier assessment (Graham and Hardaker 1999; Sonmez et al. 2001, 2002; Ren et al. 2009b; Zhang and Zhou 2009)
- Transportation management (Tanadtang et al. 2005; Xie et al. 2008)
Environmental and sustainability management
- Environmental impact assessment (Wang et al. 2006a; Xu and Foster 2009; Yao and Zheng 2010)
- Evaluation of nuclear waste repository options (Xu 2009)
- Nuclear safeguards evaluation (Liu et al. 2009b; Kabak and Ruan 2010)
- Smart home management (Augusto et al. 2008)
Policy making (Xu et al. 2008)
Group decision making (Fu and Yang 2010; Liu et al. 2010; Wang et al. 1996; Yang and Fu 2009)
The large number of applications shows that the ER approach is capable of rationally
handling various types of uncertainty in an integrated way. The use of the evidence-based
modelling and reasoning process helps to improve objectivity and accountability in the data
collection and initial assessment stages, which leads to increased consistency and reliability
in analysis and thus improved confidence in, and communicability of, the decisions made.
The use of the belief structure helps to increase the transparency and quality of the analysis
process, which leads to more informative and robust decisions than those made by
approaches based on conventional decision matrices.
There is another branch of application of the belief structure and the ER algorithm:
the development of belief rule based expert systems (Yang et al. 2006a, 2007; Liu et al.
2008a, 2009a) and their applications in complex system modelling (Liu et al. 2008b; Ruan et
al. 2009; Calzada et al. 2010), fault diagnosis and prognosis (Zhou et al. 2010), and clinical
decision support (Kong et al. 2009). As those applications are in the areas of complex system
modelling and expert systems, not directly in the area of multiple criteria decision analysis,
they will not be further discussed in this paper. Interested readers may refer to the references
for details.
3 Concluding remarks and further research directions

Although the ER approach has many advantages, in applications the following concerns are
sometimes raised.
- To human brains, the data structure of a belief structure is more complex than a simple average number. Therefore, processing the data in a belief decision matrix by hand is more difficult. Although this issue is largely addressed through the development of the IDS software, those who wish to implement the algorithm themselves, for example in Excel, face a slightly more challenging task than computing a weighted sum.
- In the ER algorithm, harmonic evidence tends to strengthen the relevant beliefs that the evidence supports and make them larger than the simple sum of the beliefs before they are combined, while conflicting evidence does the opposite. Because of this nonlinear nature of the ER algorithm (Yang and Xu 2002b), the aggregated scores are normally different from those obtained using linear combinations such as the one proposed by Huynh et al. (2006). Although statistically viable, some decision makers or assessors may find it confusing when they see that the overall performance scores do not equal the weighted sum of the sub-scores.
- Interpreting an outcome represented by a belief structure is not as straightforward as interpreting a simple score. Each belief structure is a distribution. Different shapes of distributions may convey different messages even if they have the same expected score. At the moment, there are no guidelines about how to define a frame of discernment for specific decision problems and interpret the shapes of different distributions, so further research is needed.
In addition to addressing the above issues, there are many related topics that can be
further studied in the area. The following are a few specific research topics to whet the
appetite of hungry minds.
(1) Definition of rationality of combination rules
Although four synthesis axioms (see Sect. 2.3.2) were proposed by Yang and Xu
(2002a) in an attempt to check the rationality of an information aggregation rule, the
following questions may still be asked.
- Are there other definitions of rationality for evidential reasoning?
- Are there other conditions, in addition to the four axioms, that need to be satisfied for rational evidential reasoning?
It should be noted that the ER aggregation algorithm mentioned above (Yang and Xu
2002a) is not the only algorithm that satisfies the four axioms. For example, the linear
combination rules also satisfy them (Huynh et al. 2006). The question is whether
only the rules satisfying the axioms are rational whilst all other rules are irrational. For
the rational rules, what are their permissible ranges or conditions of use? In general,
is the choice of MCDM methods for solving a specific problem really an MCDM
problem itself in the evidential reasoning paradigm?
(2) Rationality check of combination rules and modification
Huynh et al. (2006) carried out a rationality check for the discounted Yager's rule of
combination (Yager 1987) and found that it does not satisfy the axioms. They went on to
modify the rule so that the axioms are satisfied. Along the same lines, there are numerous
combination rules (Denoeux 2008) and they could be put through a similar checking and
modification process theoretically. Practically, the meaning of those combination rules
needs to be properly interpreted and their suitable application areas properly identified
and classified.
(3) The extension of the ER algorithm to aggregate belief structures in which focal elements are not singletons or subsets of adjacent singletons
In the ER approach published in both 2002 and 1994, the focal elements are either
singletons or the whole set. If the frame of discernment is $H = \{H_1, \ldots, H_N\}$, the
focal elements in the ER approach are then $\{H_1, \ldots, H_N, H_{1N}\}$, where $H_{1N}$ is the set (or
interval) from $H_1$ to $H_N$.
In 2006 (Xu et al. 2006b) the ER approach was extended to handle interval
uncertainties in which the focal elements are both singletons and any subset of adjacent singletons. The focal elements in the extended ER algorithm therefore become
$\{H_{11}, H_{12}, H_{13}, \ldots, H_{1N};\ H_{22}, H_{23}, \ldots, H_{2N};\ \ldots;\ H_{(N-1)(N-1)}, H_{(N-1)N};\ H_{NN}\}$, where
$H_{ii} = H_i$, and $H_{ij}$ is the set (or interval) from $H_i$ to $H_j$ $(i = 1, \ldots, N;\ j = i+1, \ldots, N)$.
Theoretically, the focal elements can be subsets of any combination of singletons,
such as $(H_1, H_3)$. However, the ER algorithm is not yet developed to handle such types
of uncertainty. Computationally, such an extension could be quite challenging. Practically, whether such types of uncertainty arise in real world problems and the areas
of application of such ER algorithms are yet to be identified.
(4) Comparison study of the ER approach with other approaches
Wang and Elhag (2007) compared the ER approach with artificial neural networks (ANN)
in modeling a bridge maintenance decision making problem. In the study, historical
information, including maintenance decisions made over a period of time, is used to
train a neural network model and the parameters in the ER approach, such as the weights
of criteria and the shapes of the utility functions used. It was reported that ANN can
model the problem with better accuracy. However, the comparison is based on training
accuracy, not prediction accuracy. We know that an ANN may be over-trained, which could
result in high training accuracy but poor prediction. Therefore further study with various
cases is necessary to draw a clear conclusion. Other aspects of the comparison study
may include computational efficiency (Mohamed and Watada 2009) and the uncertainty
handling capability of different approaches.
While the ER approach itself is not proposed for system modeling and simulation like
ANN, there is indeed another line of research in ER-based system modeling, referred to
as belief rule based (BRB) expert systems (Yang et al. 2006a, 2007), as mentioned in the
last paragraph of the last section. It is reported that in many cases BRB systems have
the same powerful modeling capability as ANN but better transparency and prediction
power, with the desirable features of allowing human intervention in model development
supported by optimal model training (Yang et al. 2007). For example, a BRB system can
be extrapolated to predict system output with good accuracy outside the trained range,
while an ANN model may fail in such situations. However, this needs to be further
studied and validated in practice in greater detail and depth.
(5) If the D-S evidence theory is an extension of Bayesian theory, as claimed by Shafer
(1976), then the concept of Bayesian networks (Jensen 1997) may also be extended to a
causal reasoning network based on the evidence theory, perhaps an evidential reasoning
network. There are some activities in this area but no formal published work yet.
(6) ER for information fusion
With its proven rationality in handling conflicting evidence over the original Dempster's
combination rule, the ER algorithm should be applicable to multi-sensor data fusion in
artificial intelligence (Buede and Girardi 1997). However, there is no published work in
this area yet.
In addition to the ER approach, there are other MCDM approaches proposed on the basis of Dempster-Shafer theory (DST) in the literature. For example, Beynon et al. (2000)
incorporated DST with the Analytic Hierarchy Process (AHP). The method differs from the
ER approach in that its frame of discernment (FOD) is composed of alternatives and
criteria, while in the ER approach the FOD is the set of possible outcomes of an alternative
assessed against a criterion. The DST/AHP method has at least the advantage of reducing the computational workload required by the original AHP, but it may also inherit the
rank reversal problem of AHP (Triantaphyllou 2000). Another point that should be
noted is that the DST/AHP method does not seem to take advantage of the
unique feature of DST for handling ignorance explicitly.
While the ER and DST/AHP approaches are in the value function school, DST has also
been introduced to the outranking school of MCDM approaches. One such example
is given by Amor et al. (2007). They proposed an approach that allows attributes to be
measured using different data types, such as probability distributions, belief structures,
possibility distributions and fuzzy membership functions, to represent different types of
uncertainty. It differs from the ER approach in how information is aggregated: it uses
outranking approaches instead of value or utility function approaches. Therefore the
advantages and disadvantages of the two types of methods are those pertaining to the
outranking and value function schools (Figueira et al. 2005).
The above examples indicate that there are many other possible avenues to explore the potential of DST in MCDM, which could lead to a rich and fertile ground for significant theoretical breakthroughs in research on decision making under uncertainty in general.
Acknowledgements This work forms part of the project supported by the UK Engineering and Physical
Science Research Council under Grant No.: EP/F024606/1 and the Natural Science Foundation of China
under Grant No.: 60736026. The author is also grateful to the anonymous reviewers for their constructive
comments which helped to improve the paper.
References
Amor, S. B., Jabeur, K., & Martel, J. M. (2007). Multiple criteria aggregation procedure for mixed evalua-
tions. European Journal of Operational Research, 181(3), 15061515.
Augusto, J. C., Liu, J., McCullagh, P., Wang, H., & Yang, J. B. (2008). Management of uncertainty and spatio-
temporal aspects for monitoring and diagnosis in a smart home. International Journal of Computational
Intelligence Systems, 1(4), 361378.
Bazerman, M. H. (2005). Judgment in managerial decision making. New York: Wiley.
Beynon, M., Curry, B., & Morgan, P. (2000). The Dempster-Shafer theory of evidence: an alternative ap-
proach to multicriteria decision modelling. Omega, 28, 3750.
Brans, J. P., & Mareshal, B. (1990). The PROMETHEE methods for MCDM; the PROMCALC, GAIA and
bank adviser software. In C. A. B. E. Costa (Ed.), Readings in multiple criteria decision aid (pp. 253
276). Berlin: Springer.
Buede, D. M., & Girardi, P. (1997). A target identication comparison of Bayesian and Dempster-Shafer
multisensor fusion. IEEE Transactions on Systems, Man and Cybernetics. Part A. Systems and Humans,
27(5), 569577.
Calzada, A., Liu, J., Rodriguez, R. M., &Martinez, L. (2010). A belief linguistic rule based inference method-
ology for handling decision making problem in qualitative nature. In Proceeding of FLINS (Fuzzy Logic
and Intelligent Technologies in Nuclear Science) Conference on Foundations and Applications of Com-
putational Intelligence, Chengdu, China. Singapore: World Scientic.
184 Ann Oper Res (2012) 195:163187
Chin, K. S., Yang, J. B., Lam, J., & Guo, M. (2009). An evidential reasoning-interval based method for new product design assessment. IEEE Transactions on Engineering Management, 56(1), 142–156.
Deng, Y., Chan, F. T. S., Wu, Y., & Wang, D. (2011). A new linguistic MCDM method based on multiple-criterion data fusion. Expert Systems With Applications, 38(6), 6985–6993.
Denoeux, T. (2008). Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence. Artificial Intelligence, 172(2–3), 234–264.
Fan, Y., & Deer, P. (2008). Exploring evidential reasoning in multicriteria decision making under uncertainty. International Journal of Intelligent Systems Technologies and Applications, 4(3–4), 211–224.
Farkas, A., & Rózsa, P. (2001). Data perturbations of matrices of pairwise comparisons. Annals of Operations Research, 101, 401–425.
Figueira, J., Greco, S., & Ehrgott, M. (2005). Multiple criteria decision analysis: state of the art surveys. Boston: Springer.
French, S. (1988). Decision theory: an introduction to the mathematics of rationality. Chichester: Ellis Horwood.
French, S. (1989). Readings in decision analysis. London: Chapman & Hall.
French, S., & Xu, D. L. (2005). Comparison study of multi-attribute decision analysis tools. Journal of Multi-Criteria Decision Analysis, 13(2–3), 65–80.
Fu, C., & Yang, S. L. (2010). The group consensus based evidential reasoning approach for multiple attributive group decision analysis. European Journal of Operational Research, 206(3), 601–608.
Gonzales, C., & Jaffray, J. Y. (1998). Imprecise sampling and direct decision making. Annals of Operations Research, 80, 207–235.
Graham, G., & Hardaker, G. (1999). Contractor evaluation in the aerospace industry: using the evidential reasoning approach. Journal of Research in Marketing & Entrepreneurship, 3(3), 162–173.
Greco, S., Matarazzo, B., Slowinski, R., & Zanakis, S. (2011). Global investing risk: a case study of knowledge assessment via rough sets. Annals of Operations Research, 185(1), 105–138.
Guitouni, A., Martel, J. M., Bélanger, M., & Hunter, C. (2008). Multiple criteria courses of action selection. Military Operations Research, 13(1), 35–50.
Guo, M., Yang, J. B., Chin, K. S., & Wang, H. W. (2007). Evidential reasoning based preference programming for multiple attribute decision analysis under uncertainty. European Journal of Operational Research, 182(3), 1294–1312.
Guo, M., Yang, J. B., Chin, K. S., & Wang, H. W. (2009). Evidential reasoning approach for multi-attribute decision analysis under both fuzzy and interval uncertainty. IEEE Transactions on Fuzzy Systems, 17(3), 683–697.
Hilhorst, C., Ribbers, P., Heck, E. V., & Smits, M. (2008). Using Dempster-Shafer theory and real options theory to assess competing strategies for implementing IT infrastructures: a case study. Decision Support Systems, 46(1), 344–355.
Hu, C. H., Si, X. S., & Yang, J. B. (2010). System reliability prediction model based on evidential reasoning algorithm with nonlinear optimization. Expert Systems With Applications, 37(3), 2550–2562.
Huynh, V. N., Nakamori, Y., Ho, T. B., & Murai, T. (2006). Multiple-attribute decision making under uncertainty: the evidential reasoning approach revisited. IEEE Transactions on Systems, Man, and Cybernetics, 36(4), 804–822.
Hwang, C. L., & Yoon, K. (1981). Multiple attribute decision making: methods and applications. Berlin: Springer.
Iourinski, D., & Ramalingam, S. (2005). Using Dempster-Shafer theory to aggregate usability study data. In Third international conference on information technology and applications (ICITA'05).
Jensen, F. V. (1997). Introduction to Bayesian networks. Berlin: Springer.
Kabak, Ö., & Ruan, D. (2010). A cumulative belief degree-based approach for missing values in nuclear safeguards evaluation. IEEE Transactions on Knowledge and Data Engineering. http://www.computer.org/portal/web/csdl/doi/10.1109/TKDE.2010.60.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: preferences and value trade-offs. New
York: Wiley.
Kong, G. L., Xu, D. L., Liu, X. B., & Yang, J. B. (2009). Applying a belief rule-base inference methodology to a guideline-based clinical decision support system. Expert Systems, 26(5), 391–408.
Li, H. (2007). Hierarchical risk assessment of water supply systems. PhD thesis, Civil and Building Engineering, Loughborough University, Loughborough.
Liu, J., Wang, Y. M., & Martinez, L. (2008a). Extended belief rule-based inference framework. In Proceedings of 2008 international conference on intelligent systems and knowledge engineering (ISKE2008), Xiamen, China.
Liu, J., Yang, J. B., Ruan, D., Martinez, L., & Wang, J. (2008b). Self-tuning of fuzzy belief rule bases for engineering system safety analysis. Annals of Operations Research, 163(1), 143–168.
Liu, J., Martinez, L., Ruan, D., & Wang, H. (2009a). Generating consistent fuzzy belief rule base from sample data. In Proceedings of the 4th international conference on intelligent systems & knowledge engineering (ISKE2009), Hasselt, Belgium.
Liu, J., Ruan, D., Wang, H., & Martinez, L. (2009b). Improving IAEA safeguards evaluation through enhanced belief rule-based inference methodology. International Journal of Nuclear Knowledge Management, 3(3), 312–339.
Liu, X. B., Zhou, M., & Yang, J. B. (2010). Evidential Reasoning approach for MADA under group and fuzzy decision environment. In The 2nd international symposium on intelligent decision technologies (IDT-10), Baltimore, USA.
Martinez, L., Liu, J., Ruan, D., & Yang, J. B. (2007). Dealing with heterogeneous information in engineering evaluation processes. Information Sciences, 177, 1533–1542.
Mohamed, R., & Watada, J. (2009). Logical method for logical operations based on evidential reasoning. International Journal of Knowledge Engineering and Soft Data Paradigms, 1(2), 151–172.
Nishisato, S., & Ahn, H. (1995). When not to analyze data: decision making on missing responses in dual scaling. Annals of Operations Research, 55, 361–378.
Phillips, L. D. (1984). A theory of requisite decision models. London School of Economics and Political
Science, Decision Analysis Unit.
Ren, J., Wang, J., Jenkinson, I., Xu, D. L., & Yang, J. B. (2009a). An offshore risk analysis method using
fuzzy Bayesian network. Journal of Offshore Mechanics and Arctic Engineering, 131(4). Available from
http://link.aip.org/link/?JOM/131/041101.
Ren, J., Yusuf, Y. Y., & Burns, N. D. (2009b). A decision-support framework for agile enterprise partnering. The International Journal of Advanced Manufacturing Technology, 41(1–2), 180–192.
Roy, B. (1990). The outranking approach and the foundations of ELECTRE methods. In C. A. Bana e Costa (Ed.), Readings in multiple criteria decision aid (pp. 155–183). Berlin: Springer.
Roy, B. (1996). Multi-criteria modelling for decision aiding. Dordrecht: Kluwer Academic.
Ruan, D., Carchon, R., Meer, K. V. D., Liu, J., Wang, H., & Martinez, L. (2009). Belief rule-based inference methodology to improve nuclear safeguards information evaluation. In Proceedings of the 28th North America fuzzy information processing conference associated with IEEE (NAFIPS'09), Cincinnati, Ohio, USA.
Saaty, T. L. (1988). The analytic hierarchy process. University of Pittsburgh.
Savage, S. L. (2009). The flaw of averages. New York: John Wiley & Sons.
Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.
Siow, C. H. R., Yang, J. B., & Dale, B. G. (2001). A new modelling framework for organisational self-assessment: development and application. Quality Management Journal, 8(4), 34–47.
Sonmez, M., Yang, J. B., & Holt, G. D. (2001). Addressing the contractor selection problem using an evidential reasoning approach. Engineering, Construction and Architectural Management, 8(3), 198–210.
Sonmez, M., Graham, G., Yang, J. B., & Holt, G. D. (2002). Applying evidential reasoning to pre-qualifying construction contractors. Journal of Management in Engineering, 18(3), 111–119.
Stewart, T. (2005). Dealing with uncertainties in MCDA. In J. Figueira, S. Greco, & M. Ehrgott (Eds.), Multiple criteria decision analysis: state of the art surveys (pp. 445–470). London: Springer.
Tanadtang, P., Park, D., & Hanaoka, S. (2005). Incorporating uncertain and incomplete subjective judgments into the evaluation procedure of transportation demand management alternatives. Transportation, 32(6), 603–626.
Tavana, M., Sodenkamp, M. A., & Suhl, L. (2010). A soft multi-criteria decision analysis model with application to the European Union enlargement. Annals of Operations Research, 181(1), 393–421.
Triantaphyllou, E. (2000). Multi-criteria decision making methodologies: a comparative study. Dordrecht: Kluwer Academic.
Tsoukiàs, A. (2007). On the concept of decision aiding process: an operational perspective. Annals of Operations Research, 154(1), 3–27.
Vincke, P. (1992). Multicriteria decision-aid. New York: Wiley.
Wang, J. (1997). A subjective methodology for safety analysis of safety requirements specifications. IEEE Transactions on Fuzzy Systems, 5, 1–13.
Wang, Y. M., & Elhag, T. M. S. (2007). A comparison of neural network, evidential reasoning and multiple regression analysis in modelling bridge risks. Expert Systems With Applications, 32, 336–348.
Wang, J., & Yang, J. B. (2001). A subjective safety based decision making approach for evaluation of safety requirements specifications in software development. International Journal of Reliability, Quality, and Safety Engineering, 8(1), 35–57.
Wang, J., Yang, J. B., & Sen, P. (1995). Safety analysis and synthesis using fuzzy sets and evidential reasoning. Reliability Engineering & System Safety, 47(2), 103–118.
Wang, J., Yang, J. B., & Sen, P. (1996). Multi-person and multi-attribute design evaluations using evidential reasoning based on subjective safety and cost analysis. Reliability Engineering & System Safety, 52, 113–127.
Wang, Y. M., Yang, J. B., & Xu, D. L. (2006a). Environmental impact assessment using the evidential reasoning approach. European Journal of Operational Research, 174(3), 1885–1913.
Wang, Y. M., Yang, J. B., & Xu, D. L. (2006b). The evidential reasoning approach for multiple attribute decision analysis using interval belief degrees. European Journal of Operational Research, 175(1), 35–66.
Wang, Y. M., Yang, J. B., & Xu, D. L. (2007). On the combination and normalization of interval-valued belief structures. Information Sciences, 177.
Wang, Y. M., Yang, J. B., Xu, D. L., & Chin, K. S. (2009). Consumer preference prediction by using a hybrid evidential reasoning and belief rule-based methodology. Expert Systems With Applications, 36(4), 8421–8430.
Xie, X. L., Xu, D. L., Yang, J. B., Wang, J., Ren, J., & Yu, S. (2008). Ship selection using a multiple criteria synthesis approach. Journal of Marine Science and Technology, 13, 50–62.
Xu, D. L. (2009). Assessment of nuclear waste repository options using the ER approach. International
Journal of Information Technology and Decision Making, 8(3).
Xu, D. L. (2010). A Web based assessment tool via the evidential reasoning approach. In D. Ruan (Ed.),
Computational intelligence in complex decision systems. Paris: Atlantis Press.
Xu, D. L., & Foster, C. (2009). Prioritising product groups for carbon labelling. In 20th international conference on multiple criteria decision making, Chengdu, China.
Xu, D. L., & Yang, J. B. (2001). Introduction to multi-criteria decision making and the evidential reasoning approach. (Working Paper Series, Paper No.: 0106), ISBN: 1-86115-111-X.
Xu, D. L., & Yang, J. B. (2003). Intelligent decision system for self-assessment. Journal of Multi-Criteria Decision Analysis, 12, 43–60.
Xu, D. L., & Yang, J. B. (2005). An intelligent decision system based on the evidential reasoning approach and its applications. Journal of Telecommunications and Information Technology, 3, 73–80.
Xu, D. L., McCarthy, G., & Yang, J. B. (2006a). Intelligent decision system and its application in business innovation self assessment. Decision Support Systems, 42, 664–673.
Xu, D. L., Yang, J. B., & Wang, Y. M. (2006b). The ER approach for multi-attribute decision analysis under interval uncertainties. European Journal of Operational Research, 174(3), 1914–1943.
Xu, D. L., Liu, J., Yang, J. B., Liu, G. P., Wang, J., Jenkinson, I., & Ren, J. (2007). Inference and learning methodology of belief-rule-based expert system for pipeline leak detection. Expert Systems With Applications, 32(1), 103–113.
Xu, D. L., Yang, J. B., & Liu, X. B. (2008). Handling uncertain and qualitative information in impact assessment – applications of IDS in policy making support. In D. Ruan, F. Hardeman, & K. van der Meer (Eds.), Intelligent decision and policy making support systems. Berlin: Springer.
Yager, R. R. (1987). On the Dempster-Shafer framework and new combination rules. Information Sciences, 41(2), 93–137.
Yang, J. B. (2001). Rule and utility based evidential reasoning approach for multiple attribute decision analysis under uncertainty. European Journal of Operational Research, 131(1), 31–61.
Yang, S. L., & Fu, C. (2009). Constructing confidence belief functions from one expert. Expert Systems With Applications, 36(4), 8537–8548.
Yang, J. B., & Sen, P. (1994a). A general multi-level evaluation process for hybrid MADM with uncertainty. IEEE Transactions on Systems, Man, and Cybernetics, 24(10), 1458–1473.
Yang, J. B., & Sen, P. (1994b). Evidential reasoning based hierarchical analysis for design selection of ship retro-fit options. In J. S. Gero & F. Sudweeks (Eds.), Artificial intelligence in design, The Netherlands (pp. 327–344). Dordrecht: Kluwer Academic.
Yang, J. B., & Sen, P. (1996). Interactive tradeoff analysis and preference modelling for preliminary multiobjective ship design. Systems Analysis, Modelling, Simulation, 26, 25–55.
Yang, J. B., & Sen, P. (1997). Multiple attribute design evaluation of large engineering products using the evidential reasoning approach. Journal of Engineering Design, 8(3), 211–230.
Yang, J. B., & Singh, M. G. (1994). An evidential reasoning approach for multiple attribute decision making with uncertainty. IEEE Transactions on Systems, Man, and Cybernetics, 24(1), 1–18.
Yang, J. B., & Xu, D. L. (2002a). On the evidential reasoning algorithm for multiattribute decision analysis under uncertainty. IEEE Transactions on Systems, Man and Cybernetics. Part A. Systems and Humans, 32(3), 289–304.
Yang, J. B., & Xu, D. L. (2002b). Nonlinear information aggregation via evidential reasoning in multiattribute decision analysis under uncertainty. IEEE Transactions on Systems, Man and Cybernetics. Part A. Systems and Humans, 32(3), 376–393.
Yang, J. B., Dale, B. G., & Siow, C. H. R. (2001). Self-assessment of excellence: an application of the evidential reasoning approach. International Journal of Production Research, 39(16), 3789–3812.
Yang, J. B., Liu, J., Wang, J., Sii, H. S., & Wang, H. W. (2006a). A belief rule-base inference methodology using the evidential reasoning approach – RIMER. IEEE Transactions on Systems, Man and Cybernetics. Part A. Systems and Humans, 36(2), 268–285.
Yang, J. B., Wang, Y. M., Xu, D. L., & Chin, K. S. (2006b). The evidential reasoning approach for MCDA under both probabilistic and fuzzy uncertainties. European Journal of Operational Research, 171(1), 309–343.
Yang, J. B., Liu, J., Xu, D. L., Wang, J., & Wang, H. W. (2007). Optimization models for training belief rule based systems. IEEE Transactions on Systems, Man and Cybernetics. Part A. Systems and Humans, 37(4), 569–585.
Yang, Z. L., Wang, J., Bonsall, S., & Fang, Q. G. (2009). Use of fuzzy evidential reasoning in maritime security assessment. Risk Analysis, 29(1), 95–120.
Yang, J. B., Xu, D. L., Xie, X. L., & Maddulapalli, A. K. (2011). Multicriteria evidential reasoning decision modelling and analysis – prioritising voices of customer. Journal of the Operational Research Society, 62(9), 1638–1654.
Yao, R., & Zheng, J. (2010). A model of intelligent building energy management for the indoor environment. Intelligent Buildings International, 2(1), 72–80.
Zhang, X., & Zhou, X. (2009). Logistics service supplier assessment based on the ER approach. In Proceedings of 2009 international conference on electronic commerce and business intelligence, Beijing.
Zhang, Z. J., Yang, J. B., & Xu, D. L. (1989). A hierarchical analysis model for multiobjective decision making. In Analysis, design and evaluation of man-machine systems: selected papers from the 4th IFAC/IFIP/IFORS/IEA conference in 1989, Xi'an, China (pp. 13–18). Oxford: Pergamon.
Zhou, Z. J., Hu, C. H., Yang, J. B., Xu, D. L., Chen, M. Y., & Zhou, D. H. (2010). A sequential learning algorithm for online constructing belief rule based systems. Expert Systems With Applications, 37, 1790–1799.