
Available online at www.sciencedirect.com

Decision Support Systems 45 (2008) 291–309

www.elsevier.com/locate/dss

Two-level trust-based decision model for information assurance in a virtual organization

Yanjun Zuo a,⁎, Brajendra Panda b

a Department of Information Systems and Business Education, University of North Dakota, Grand Forks, ND 58203, USA
b Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR 72701, USA

Received 31 March 2006; received in revised form 19 December 2007; accepted 20 December 2007
Available online 3 January 2008

⁎ Corresponding author. E-mail addresses: yanjun.zuo@und.nodak.edu (Y. Zuo), bpanda@uark.edu (B. Panda).

doi:10.1016/j.dss.2007.12.014

Abstract

Like other unstructured decision problems, the selection of trustworthy external objects is challenging, particularly in a virtual organization (VO). Novel methods are needed to filter out invalid information as well as insecure programs. This paper presents a new, conceptual approach to support the selection of objects. It is a two-level decision model, which helps a VO participant determine whether an external object can be accepted based on the object's quality and security features. This hierarchical decision-making process complies with both practical evidence and theoretical decision models. Its underlying concepts are logically sound and comprehensible. We illustrate the approach using software selection.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Decision model; Hierarchical decision-making process; Information assurance; Trust; Virtual organization; Policy rule; Policy specification

1. Introduction

A virtual organization represents a loosely coupled community with a set of participants sharing resources based on mutually agreed upon rules. In our discussion, each participant represents an independent computing system associated with a single administrative domain. Each system contains a set of internal applications (see Fig. 1). An internal application performs dedicated functions or provides certain services. Some common features of a VO are: (1) they are self-organized by participants based on mutual interests; (2) they have a large scope spanning multiple administrative domains; (3) they have dynamic members, i.e., participants join and leave at any time; and (4) they allow resources to be shared in a controlled and accountable manner. An open source software community is an example of a virtual organization, where thousands of programmers and software engineers voluntarily contribute to developing large-scale software and offer programs they have developed to share with other participants. Such a virtual organization is decentralized and self-organized. In GNUe [8], for example, no company or corporate executive has administrative authority or resource control to determine what work will be done, what the schedule will be, and who will be assigned to perform any of the specified tasks [26]. The participants decide to offer and consume information based on their own needs and criteria. Many other types of VOs exist, including peer-to-peer systems, Grid systems, and electronic virtual markets.

Fig. 1. Architecture of a virtual organization (VO).

Information assurance has become a major concern for many VOs. Low barriers to publishing information in a VO require novel mechanisms to verify the quality and security features of available information before it can be used by a participant. In an open source software community, thousands of software programs are freely available for download. The quality of each program varies widely depending on the expertise of its developers, the software engineering practices those developers use, and the information process culture those developers have. In this paper, we focus on two aspects of information assurance: information quality and security. Information quality refers to the quality of an object, e.g., whether it correctly describes a "thing" or provides a function. For instance, the quality of a software program can be described by its functionality, usability, reliability, etc. If a program has been developed with race conditions or deadlocks, for example, then that program is considered to have poor quality. If a program produces correct results in a consistent and predictable manner given a full set of well-prepared testing inputs, then it has high quality in terms of functionality. Security features of a program refer to its safety and reliability when being executed by users. A program is safe to use if it does not contain malicious code, is free of vulnerabilities, and has no functions beyond its designed specifications.

Our framework addresses the issue of information assurance from the object trust perspective. A user evaluates the quality and security of a program based on how much the program can be trusted in two respects: (1) whether the program functions correctly, and (2) whether the program is secure and safe to use.

The core part of our framework is a two-level decision model developed to assist users in selecting external objects that satisfy the users' requirements for information assurance. As the name implies, a final decision is made based on evaluations at two levels – the system level and the internal application level. As mentioned earlier, a participant of a VO represents an independent system, which contains a set of internal applications providing different functions and services. The two-level decision model separates the specifications of selection criteria between a system and its internal applications. With different focuses and scopes at the two levels, the requirements for information assurance are specified with different degrees of detail. The decision at the system level is based on a set of general trust-related attributes for a given type of objects and their respective testing conditions. For instance, for software selection and reuse, a system may define general policy rules based on general attributes related to the software's licenses (not all open source software is created with the same licenses) and virus detection. These rules, for example, may state that "any software without appropriate licenses can't be selected" and "the software must pass the virus detection test." The goal is to quickly filter out or select an object, if possible. The decision rules defined at this level are applied within the entire scope of the system. The decision at the application level, on the other hand, is based on additional or refined trust-related attributes for the given type of objects and their respective testing conditions, which further filter and rearrange the objects that have been selected at the system level. For instance, the decision rules at the internal application level may specify, "the software selected to run on a server machine must not have hidden routines to open network connections without the system administrator's acknowledgement" and "the software selected to be used as components to build mission-critical projects cannot accept arbitrary-length files as inputs for security reasons." Any external object must satisfy the requirements defined at both the system and internal application levels in order to be used internally. A flow chart giving a high-level view of the proposed decision process is illustrated in Fig. 2.
Y. Zuo, B. Panda / Decision Support Systems 45 (2008) 291–309 293

Fig. 2. Flow chart of the two-level decision process.
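To make the decision flow of Fig. 2 concrete, the following is a minimal sketch of how the two levels could be chained in code. The attribute names and rules are hypothetical examples modeled on the sample rules quoted above, not an implementation from the paper.

```python
# Sketch of the two-level decision flow in Fig. 2 (illustrative only;
# the attribute names and rules below are hypothetical, not the paper's).

def system_level_accepts(obj):
    """General, system-wide rules: quickly reject or accept an object."""
    if not obj.get("has_valid_license", False):   # "software without appropriate licenses can't be selected"
        return False
    if not obj.get("passed_virus_scan", False):   # "the software must pass the virus detection test"
        return False
    return True

def application_level_accepts(obj, usage):
    """Refined, per-application rules applied only to system-approved objects."""
    if usage == "server" and obj.get("opens_hidden_connections", False):
        return False
    if usage == "mission_critical" and obj.get("accepts_arbitrary_length_input", False):
        return False
    return True

def select(candidates, usage):
    """Return the first candidate that satisfies both decision levels."""
    for obj in candidates:
        if system_level_accepts(obj) and application_level_accepts(obj, usage):
            return obj
    return None  # fall through to residual rules (Section 4.4)

example = [{"name": "toolA", "has_valid_license": True, "passed_virus_scan": True,
            "opens_hidden_connections": False, "accepts_arbitrary_length_input": False}]
print(select(example, "server"))
```

Returning the first candidate that passes both levels matches the "satisfying" strategy discussed in Section 2.1, where the first satisfactory alternative is chosen rather than the best one.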

Throughout this paper, we use open source software selection and reuse as an illustrative example. Although our model focuses on the security and quality features of a given object, it is open to other dimensions. Our major contributions include (1) the proposal of a trust-based hierarchical decision model, which focuses on objects' intrinsic and extrinsic features; (2) the design of key decision-making components, e.g., attribute-driven policy rules, threshold selection criteria and operators, and the balance between positive and negative features of an object; (3) the development of a utility fusion theory based on the decreasing marginal utility theory; and (4) applications of the proposed framework to software selection and reuse.

2. Related work

Two major streams of literature are related to our work: general decision-making methods and trust-based decision models. We discuss each of them next.

2.1. General decision-making methods

In terms of general decision-making methods, our framework can be classified into the "Decisions whether" [13] category, i.e., the "yes/no", "either/or" decision that must be made before one proceeds with other alternatives. Available choices for a given subject matter are evaluated one by one. A choice can be accepted as long as it satisfies the evaluator's requirements. There are several decision-making strategies, such as optimizing [13], satisfying, maximax (maximize the maximums), and maxmin (maximize the minimums) [21]. Our decision model falls into the satisfying category, where the first satisfactory alternative is chosen rather than the best alternative. It is essentially different from those decision methods [5,19] that aim at selecting the "best" option among alternatives.

The presented framework also has solid theoretical foundations in the filter theory [14] and the multi-attribute choice principle of decision-making [4,22]. According to the filter theory, users make some choices through a series of selection filters, where different criteria can be used in the successive stages. In the two-level decision model presented in this paper, decisions are made at the system and internal application levels, in this order. A software program that is obviously out of bounds is ruled out first. Other software that satisfies the system-level decision criteria can be selected. Then the refined decision criteria are applied to the selected software to organize it for different usages (e.g., use as software components for mission-critical, routine, or experimentation projects). The software in each reorganized group may also be granted appropriate privileges to access local resources. According to the multi-attribute choice principle of decision-making, various strategies are used for different types of choices. The decision at the system level in our model uses conjunctive rules and quantitative numerical variables to eliminate choices that lie outside boundaries. The decision at the application level uses qualitative analysis to address less formalized decision problems, for example, organizing objects into different groups based on their features.

Decision support systems (DSS) focus on supporting and improving managerial decision-making. There are various sub-fields in the general DSS discipline, including personal decision support systems, group support systems, negotiation support systems, etc. [2]. Artificial intelligence techniques have been employed predominantly to solve various problems in decision-making under uncertainty [30]. Rough set theory, fuzzy logic, neural networks, genetic algorithms, and intelligent agent systems have been widely used in DSS [1,27,34,37,18,10,33]. Those soft computing models have strong theoretical foundations but are sometimes complex and hard to apply to such an unstructured decision problem as selecting objects with desired features. Our model is simple and effective. When applied to software evaluation and selection, it assists users in eliminating the chances that selected software programs are embedded with malicious code or hidden vulnerabilities, or developed with incorrect functions. The model appeals to management since it is intuitive and relatively transparent, allowing users to see the logic behind the results. Two-level decision-making also mitigates the risk of using wrong, incomplete, or insecure information.

2.2. Trust-based decision models

Many researchers have investigated trust as a critical factor in decision-making in various fields [11,20,17,24,29]. Our decision model is trust-based, but it focuses on studying the intrinsic and extrinsic trust features of an object. An object is evaluated based on a set of trust-related attributes and their corresponding testing conditions. The evaluator then decides whether the object is trustworthy and, hence, can be used. Focusing on the study of information quality and security at the object level gives a user higher confidence in using an object, since the object has been directly assessed instead of the evaluator relying entirely on others' opinions about the object. Existing trust-based decision models, on the other hand, are based on subject trust management. For instance, they rely on the principle of transitive trust: a user trusts another entity because a trusted third party says so. Any process that needs to access local resources should present a set of certificates signed by some trusted parties to confirm to the local system administrators that the request submitter is trustworthy. The PolicyMaker framework [3,25], for example, defines which resources the user holding a particular public key can access. Certificates and policies play a crucial role in describing who is to be trusted and what level of rights those trusted entities should have. The REFEREE framework [6] is another trust management system for making access decisions relating to Web documents. Its core component is a recommendation-based query engine, which interprets the policy and list of arguments and returns an answer to the calling application. The IBM Trust Establishment Framework [9] is a role-based access control model. Its Trust Establishment module validates the client's certificate and then maps the certificate owner to a user role based on a set of pre-defined rules. Although those trust models are effective, they don't fully address the intrinsic features of individual objects under evaluation. Our model, on the other hand, systematically defines the intrinsic and extrinsic trust features for object evaluation and selection.

We use open source software selection as an illustrative example for our framework. Software selection is a challenging issue. The Business Readiness Rating (BRR) proposal [31] offers a standard model for rating open source software and is comparable to our work, but our model focuses more on decision-making theory and process. In addition to object selection, our model goes one step further and specifies refined rules to re-filter and group the selected objects based on their quality and security features. There is other trust-based decision-making research related to open source trustworthiness. A trust management solution is presented in [35], which can manage trust adaptively in a component-based software system. A trust model was developed to specify, evaluate, and set up the trust relationships that exist among system entities, but the focus of that paper is on trustworthy middleware architecture. The paper in [28] discusses how the EU-funded EDOS project can be used to improve the evaluation and monitoring of Free/Libre and Open Source Software (F/LOSS) security. The paper addresses various important security issues under different assumptions regarding the underlying technical infrastructure of F/LOSS. A security risk in software component technology is discussed in [16] – an application owner may falsely incriminate a component designer for damage in his application which in reality was caused by somebody else. A solution was provided where trust values of the component users are stored and each user's ratings are discounted based on his/her trust value. Our model is different from [28,16] since they attempt to solve a different set of problems in open source software evaluation and selection.

3. Terminology

This section first defines object, trust-related attributes of a type of objects, and the values of those attributes given an object. Then the coined term utility is introduced. A UML class diagram is developed to illustrate the relationships among the terms defined in this paper (see Fig. 3).

Definition 1. An object is a passive entity that represents a piece of information or knowledge in various forms, such as a software program, a data item, a statement, or a file.

Object is a generic term. In our discussion, it refers to an intelligent resource shared in a virtual organization. A set of objects which addresses the same issue belongs to one type. In terms of open source software, there are often similar software programs which all claim to provide similar functions. Such software is considered one type. For example, Ethereal, TCPdump, and SNMP software all perform network monitoring and packet sniffing. Hence, they can be classified as a type of network monitoring software. We assume a user only needs to select one object from the set of available ones for that type.

Selecting a trustworthy object relies on evaluating the trust-related features of the object. These features are quantitatively expressed by the values of a set of trust-related attributes defined for the type of objects to which the candidate object belongs.

Definition 2. A trust-related attribute of a type of objects describes a property or characteristic of the objects of this type. The value of such an attribute, given an object, reflects the object's quality and/or security features that provide insights for an evaluator to assess the trustworthiness of the object.

Trust-related attributes of a type of objects include both intrinsic and extrinsic attributes of the objects, as long as (1) the values of the attributes express the trust features of the objects and (2) those values help an evaluator assess the trustworthiness of the objects. For instance, the robustness (or fault tolerance) of a software program describes one functional feature of the program. This feature helps an evaluator in making a decision regarding the quality of the software. Hence, this attribute is considered a trust-related attribute for that type of software. Other examples of trust-related attributes include the total books published about a software package under evaluation, the difficulty level to enter the core development team of the software, and the recommendations about the software from users who have adopted it.

Defining the trust-related attributes of a type of objects is domain specific; therefore, our framework is general enough to allow designers to define those attributes in their applications. For open source software selection, trust attributes should be specific and address the quality and/or security features of the available software. Examples of software-related trust attributes are provided in Table 1 and Fig. 5.

Fig. 3. UML class diagram of terminology.
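As a reading aid for Fig. 3, the terminology can be sketched as simple data structures. The class and field names below are our own illustration; the paper defines these notions only informally and in the UML diagram.

```python
# Sketch of the terminology as data structures (hypothetical names;
# Fig. 3 is the authoritative picture of these relationships).
from dataclasses import dataclass, field

@dataclass
class TrustAttribute:
    name: str                 # e.g., "robustness" or "total books published"
    # Utility assigned to each normalized domain value of this attribute.
    utilities: dict = field(default_factory=dict)

@dataclass
class ObjectType:
    name: str                                        # e.g., "network monitoring software"
    attributes: list = field(default_factory=list)   # trust-related attributes of the type

@dataclass
class Object:
    name: str                                        # e.g., "Ethereal", "TCPdump"
    obj_type: ObjectType
    values: dict = field(default_factory=dict)       # attribute name -> observed value

monitoring = ObjectType("network monitoring software",
                        [TrustAttribute("total books published",
                                        {"few": 0, "some": 2, "many": 4})])
ethereal = Object("Ethereal", monitoring, {"total books published": "many"})
```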



The value of a trust-related attribute (or attribute for short) given an object describes either a favorable or an unfavorable feature of the object. The generic term utility [32] is coined to measure such a feature. Utility is used as a scale to express and measure an evaluator's expectation of (or satisfaction with) the quality and security features of a given object. It is unit-less and can be applied to compare and represent features evaluated from different aspects. Common utility shapes include concave, convex, and combination [36]. We discuss how to determine the utilities assigned to attribute values in Section 4.3.

4. Decision model at the system level

Since the decision model at the system level defines general rules to select objects, the decision process at this level should be quick and standard, but less specific. Based on a relatively small set of the most representative and important attributes, the decision rules at this level quickly filter out objects which are obviously out of bounds, or select an object which clearly meets the system's expectations.

The steps in developing policy rules can be summarized as: (1) identifying critical attributes, non-critical (regular) attributes, and the desired positive values (or ranges) and/or dominating negative values (or ranges) of those critical attributes; (2) defining primary negative rules based on the identified dominating negative attribute values (or ranges), defining primary positive rules based on the identified desired positive attribute values (or ranges), and defining accumulating rules based on other regular attribute values (or ranges); (3) defining positive and negative threshold selection functions (for implied rules); and (4) defining residual policy rules. Next we explain these terms and discuss the policy-making process.

Table 1
Representative refined trust-related attributes for software programs

Security-oriented negative attributes:
• Creating hidden network connections and listening at privileged ports
• Forking new processes (or threads) automatically
• Writing to local disks without the user's acknowledgement
• Reading system configuration and/or log files
• Creating a large number of temporary files
• Permitting default or weak passwords
• Manipulating hidden fields
• Dropping a backdoor and creating hidden user accounts
• Allowing exchanges of sensitive information in plain text
• Leaving executable code in memory after execution
• Permitting relative and default file and/or directory paths
• Embedding code segments outside the designed specifications of the program
• Accepting files with arbitrary lengths as inputs

Security-oriented positive attributes:
• Certified by CERT (the CERT Coordination Center hosted at Carnegie Mellon University)
• Carrying valid security-proof code supplied by the software developers
• Requiring strong data types
• Conducting rigorous checks for memory overflow
• Executing only in a sandbox with a limited scope

Quality-oriented positive attributes:
• Using a set of well-proven and widely used algorithms
• Integrating only highly trustworthy software components
• Being analyzed by leading consulting firms with positive results
• Being developed using rigorous software development practices
• Having a large number of user downloads since its release
• Having complete and informative documentation
• Containing user-friendly interfaces

Quality-oriented negative attributes:
• Having a large number of functional bugs reported by users since its release
• Having race conditions and deadlocks
• Using self-references or having infinite loops
• Having poor software scalability and portability
• Providing no exception handling routines
• Using software components without appropriate licenses
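Schematically, the rule-development steps summarized above lead to an evaluation loop of the following shape. This is an illustrative sketch only; the predicates and the implied-rule function are placeholders for the rules developed in Section 4.3.

```python
# Schematic sketch of the policy-rule ordering (primary negative rules first,
# then positive, then accumulating, then the implied balance check).
# All predicates and thresholds here are invented examples.

def evaluate(obj, negative_rules, positive_rules, accumulating_rules, implied_rule):
    # (1) Primary negative rules: any dominating negative value => reject.
    if any(rule(obj) for rule in negative_rules):
        return "reject"
    # (2) Primary positive rules: any desired positive value => accept.
    if any(rule(obj) for rule in positive_rules):
        return "accept"
    # (3) Accumulating rules: sum positive and negative utilities separately.
    pu = nu = 0.0
    for rule in accumulating_rules:
        utility = rule(obj)          # signed utility for one regular attribute
        if utility >= 0:
            pu += utility
        else:
            nu += -utility
    # (4) Implied rule: balance the two accumulated utilities.
    return implied_rule(pu, nu)      # "accept", "reject", or "indecisive"

decision = evaluate({"license_ok": False},
                    negative_rules=[lambda o: not o["license_ok"]],
                    positive_rules=[], accumulating_rules=[],
                    implied_rule=lambda pu, nu: "indecisive")
print(decision)  # -> "reject"
```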

4.1. Dominating negative values, desired positive values, and critical attributes

Most objects have both positive and negative features, as measured based on their values for the attributes defined for the type of objects. If an object has a negative feature that an evaluator cannot accept, then the object should be rejected immediately. Such a feature is expressed by a dominating negative value.

Definition 3. A dominating negative value of an object refers to a value for an attribute such that the feature represented by this value is so "negative" that a system must make a denial decision towards that object regardless of its values for other attributes. Such a value is attribute-specific.

Examples of dominating negative values for a set of software programs include (1) the value "ten" (or more) for the attribute "outstanding critical security vulnerabilities" of a software program of that type; (2) the value "yes" for the attribute "backdoor software installed in the program under evaluation;" and (3) the value "yes" for the attribute "serious errors in the algorithms used by the program."

At the other end, an attribute of a type of objects may be so attractive that if a given object has a value for such a favorable attribute and that value is good enough, then the object can be selected by the system.

Definition 4. A desired positive value of an object refers to a value for an attribute such that the feature represented by the value is so "favorable" that the object can be accepted by a system if it does not possess any dominating negative values.

Examples of desired positive attribute values include (1) the value "yes" for the attribute "security- and quality-carrying code provided by the software developers;" (2) the value "yes" for the attribute "thorough analysis and positive reports about the software's functions from a leading consulting firm;" and (3) the value "yes" for the attribute "rigid software engineering practices in developing the software program and the excellent reputation of the software development team." After a dominating negative value or a desired positive value is specified, a critical attribute can be easily defined.

Definition 5. A critical attribute refers to an attribute of a type of objects for which a given object can have either a dominating negative value or a desired positive value. The former is called a dominating negative attribute and the latter is called a desired positive attribute.

A critical attribute has at least one value (given an object) which could lead to a decision (either acceptance or rejection of the object). Such a value can be assigned an infinitely large utility (negative or positive) since it may lead to a decision. All other attributes of a given type of objects are considered regular (non-critical) because their values alone can't allow an evaluator to make an acceptance or rejection decision, no matter what values a candidate object has for those attributes. Examples of regular attributes include (1) the number of books published about a software package; (2) the average volume of the general mailing list about the software in the last six months; (3) the documentation about the software; and (4) the average number of downloads of the software per month.

4.2. Decision-making logic at the system level

Decision-making at the system level first specifies a set of dominating negative attributes and the respective testing conditions for a given type of objects under consideration. Each testing condition identifies a set (or a range) of thresholds corresponding to a dominating negative attribute. If a given object has a value for that attribute which falls within the defined range, then the feature represented by this value is not acceptable to the system. Hence, the object must be rejected. If the object has no dominating negative value and it has at least one desired positive value, then the object can be selected.

As an object can have complex features, it is difficult to decide that it is an absolutely "good" or "bad" object. On the one hand, the object may have some positive features, but those merits are not strong enough for a system to make an acceptance decision. On the other hand, the object may have some negative aspects, but those unfavorable features are not severe enough for the system to make a rejection decision. Hence the decision model at the system level applies a scoring mechanism to accumulate the positive and negative features, measured based on their respective utilities. When the balance of those accumulated utilities is beyond a certain level, a decision can be made. Since such a scoring mechanism is only applied after any dominating negative and/or desired positive values are considered, it is used for regular attribute values, non-dominating negative values, and non-desired positive values.

For a candidate object O, the accumulated positive and negative utilities can be represented as a point, say P, in a two-dimensional coordinate system (see Fig. 4). There are three regions in this system. If P falls into region (1), the accumulated utility based on O's negative features is high while the accumulated utility based on its positive features is low. In this case, O should be rejected. Region (1) is called the rejection region. If P falls into region (2), O can be selected since the accumulated utility for its positive features outweighs the accumulated utility for its negative features. Region (2) is called the acceptance region. For any object with P falling in region (3), a further analysis (see Section 4.3) is required to determine whether
the object can be selected. Region (3) is called the indecisive region.

Fig. 4. Non-linear threshold selection/rejection curves.

Along with the two coordinate axes, the two curves shown in Fig. 4 uniquely determine the three regions. The curve that determines region (1) (with the vertical axis) is called the rejection threshold curve. Similarly, the curve that determines region (2) (with the horizontal axis) is called the selection threshold curve. Each curve is determined by a general function of the form nu = f(pu), where nu represents the accumulated utility for the negative features, or negative utility for short, of an object under evaluation; pu represents the accumulated utility for the positive features of the object; and f represents the relationship function between nu and pu. f is called a selection curve function for the selection threshold curve and a rejection curve function for the rejection threshold curve. See Fig. 4 for an example of the three regions, the two threshold curves, the rejection threshold, and the selection threshold.

Striking a balance between the positive and negative features of a candidate object is subjective by nature. It depends on such factors as how strict one is in selecting objects, to what degree one accepts objects with negative features, and how one views the net results of positive and negative features. We address this issue by defining acceptance and rejection functions with flexibility. Policy makers and system administrators can then determine how strictly they want to set the policies. Two functions that an evaluator can use to balance the positive and negative features of a given object are discussed below: the constant-difference linear function and the logarithmic non-linear function.

4.2.1. Constant-difference linear function
This function assumes that the difference between the values of nu and pu is constant for any point on the two threshold curves. For a given object, its negative features are compared with its positive ones. Their difference determines the strength (or weakness) of the candidate object. Visually, both curves are straight lines at 45° angles to both axes. The functions are defined below (where the constant a represents the selection threshold and the constant d represents the rejection threshold):

Selection curve function: pu − nu = a, or nu = pu − a   (1.a)

Rejection curve function: nu − pu = d, or nu = pu + d   (1.b)

4.2.2. Logarithmic non-linear functions
This function assumes non-linear relations between the values of nu and pu on the selection or rejection threshold curves. The rationale of this function is that more positive features are needed to "cover" or "compensate for" any negative features that a candidate object may have. Compared with the linear functions, which are more tolerant, the non-linear selection/rejection functions are more conservative. Some objects that would be selected under the linear functions could be rejected if the non-linear functions are applied. The two functions are defined below (where b represents a base such as 2 (binary), e (natural logarithm), etc.). An example of the non-linear function curves is given in Fig. 4. The larger the b value is, the more conservative (towards rejection) the selection threshold curve would be. Hence a decision maker can adjust this value for flexibility.

Selection curve function: nu = log_b(pu + 1 − a)   (2.a)

Rejection curve function: nu = log_b(pu + b^d)   (2.b)
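A small sketch of how an implied rule could apply the two curve families of Eqs. (1) and (2) to classify an accumulated (pu, nu) point into the three regions. The thresholds a and d and the example values are invented for illustration, and the log-curve forms follow the reconstruction of Eqs. (2.a)/(2.b) above.

```python
import math

# Sketch of the implied-rule check using the threshold curves of Section 4.2.
# a = selection threshold, d = rejection threshold, b = logarithm base.

def classify_linear(pu, nu, a, d):
    """Constant-difference curves (1.a)/(1.b): nu = pu - a and nu = pu + d."""
    if nu <= pu - a:
        return "accept"      # region (2): below the selection curve
    if nu >= pu + d:
        return "reject"      # region (1): above the rejection curve
    return "indecisive"      # region (3)

def classify_log(pu, nu, a, d, b=2.0):
    """Logarithmic curves (2.a)/(2.b): nu = log_b(pu + 1 - a), nu = log_b(pu + b**d)."""
    if pu + 1 - a > 0 and nu <= math.log(pu + 1 - a, b):
        return "accept"
    if nu >= math.log(pu + b ** d, b):
        return "reject"
    return "indecisive"

# With a = 3 and d = 2, a candidate with pu = 9, nu = 4 is accepted under the
# linear curves but rejected under the stricter logarithmic curves (b = 2),
# illustrating how the non-linear functions are more conservative.
print(classify_linear(9, 4, a=3, d=2), classify_log(9, 4, a=3, d=2))
```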

4.3. Policy specification

Based on the decision-making logic at the system level, an attribute-driven policy specification can be defined to select only qualified objects from external sources.

Definition 6. A policy specification is a set of trust-based policy rules specifying high-level descriptions of which features of an object can't be tolerated, so that the object must be denied, as well as which features are favorable, so that the object could be accepted.

As discussed earlier, critical attributes and their dominating negative values and/or desired positive values are specified first in a decision-making process. The testing conditions can then be defined based on those threshold values. Finally, the policy rules can be developed based on the testing conditions. There are four types of policy rules in a policy specification: (1) the first type specifies a dominating negative attribute for a type of objects and the related testing condition; if a candidate object has a value for such a negative attribute and that value is tested as not acceptable, then the object must be denied. (2) The second type specifies a desired positive attribute for the type of objects and the respective testing condition; if a candidate object has a value for such a positive attribute and the value is tested as good enough, then the object can be accepted (unless it also possesses a dominating negative value for some critical attribute). (3) The third type specifies non-critical attributes of the type of objects, for which a candidate object has a value that is not significant enough for an evaluator to make an acceptance or rejection decision; the utility based on that attribute value, however, can be accumulated towards the positive or negative features of the object. (4) The fourth type verifies whether the accumulated positive and negative features (measured as their respective utilities) of the candidate object enable the system to make a decision after balancing their opposite effects. A policy rule of type (1), (2) or (3) is called a primary rule. More specifically, a rule of type (1) is called a primary negative rule and a rule of type (2) is called a primary positive rule. A rule of type (3) is called a primary accumulating rule. A rule of type (4) is called an implied rule. An implied rule considers a pair of accumulated positive and negative utilities and produces a value from the set {acceptance, rejection, indecisive}.

All the primary policy rules are evaluated and enforced in a pre-defined order. A primary negative rule, if any exists, cannot appear after any other type of primary rule. In other words, a policy specification must start with the set of primary negative rules, if any. Those primary negative rules specify the dominating negative attributes of a type of objects and their testing conditions. If a given object has one of those values as tested true by such a condition (i.e., it has a negative feature that cannot be accepted by the system), that object must be denied. If the object does not possess any negative feature as specified by a primary negative rule, it is tested based on the following primary positive rules, accumulating rules, and implied rules. Finally, if all available similar objects have been evaluated and no object has been selected for that subject matter, residual policy rules are applied (see Section 4.4).

Every critical attribute identified for a type of objects must be addressed in a policy specification. The number of non-critical (or regular) attributes to consider depends on the nature of a particular decision case. In terms of open source software selection and reuse, the Business Readiness Rating (BRR) proposal [31] suggests that 12 categories of attributes be ranked and fewer than 7 top categories be considered. The number of metrics (similar to attributes in our paper) in each category is typically no more than 7. After specifying a set of attributes, say, a1, a2, a3,…, an, the utilities (positive or negative) assigned to the domain values of those attributes can be determined by following the three steps below:

(1) Normalize and transform, if necessary, the domain values of each attribute into a range (called the attribute value range) with discrete elements. The number of normalized domain values in each range can be flexible according to different scaling policies. To make the following discussion clear, we assume that the value ranges for attributes a1, a2, a3,…, an are represented as [v11, v12,…, v1i], [v21, v22,…, v2j], [v31, v32,…, v3m],…, and [vn1, vn2,…, vnk], respectively, where the first subscript of each element identifies the attribute and the second subscript represents a domain value number for that attribute. We require that in an attribute value range, the leftmost value (e.g., v11 in [v11, v12,…, v1i]) must be the least favorable value and the rightmost value must be the most favorable value. Therefore, v1i, v2j, v3m,…, vnk represent the most favorable values for attributes a1, a2, a3,…, an, respectively, and v11, v21, v31,…, vn1 represent the least favorable values for those attributes, respectively.

(2) Rank or weight the n most favorable values v1i, v2j, v3m,…, vnk based on their significance/importance for the intended usage of those objects. Assign a maximum positive utility (e.g., 10) to the most significant value, say v2j. Then the other n − 1 most significant values, i.e., v1i, v3m,…, vnk, can be assigned their corresponding utilities according to their relative importance/significance to v2j. For instance, if v2j is evaluated as twice as significant as v3m in terms of the features they represent towards the intended usage of the objects under

consideration, then the utility assigned to v2j is roughly twice that assigned to v3m. Apply a similar procedure to the n least significant values v11, v21, v31,…, vn1. After this step, the utilities of each attribute's domain values must lie in a certain range.

(3) Determine the utilities of all the other intermediate domain values for each attribute, except the most and the least favorable ones. The utilities assigned to those intermediate values are based on their relative "distances" to the most and the least favorable values.

To further explain how the scoring mechanism scales different attribute values in the decision process of selecting an open source software program, we give a short example with only three attributes to consider: (a) A1: the number of security and quality patches for the software released in the past 12 months; (b) A2: the total books published about the software; and (c) A3: the difficulty level to enter the core development team. The three steps are specified below. Step 1: Normalize the domain values of each attribute. For instance, the domain values of attribute A2, "total books published about a software package," can be normalized into a range [few; some; many] according to the following guidelines: 0–5 → "few", 6–15 → "some", 16 or more → "many." A similar method can be applied to the other two attributes. Suppose the three normalized attribute value ranges for A1, A2, and A3 are represented as [v11: > 6; v12: 0–2 or 5–6; v13: 3–4], [v21: few; v22: some; v23: many], and [v31: easy; v32: normal; v33: difficult; v34: very difficult], respectively. Step 2: Rank v13, v23, and v34 in terms of their significance related to the intended usage of the software. Assume that v13 is ranked highest and a utility of 10 (utility is a relative measuring scale) is assigned. Relative to v13, v23 and v34 can be assigned utilities 4 and 6 based on their relative significance compared with v13. Relative ranking or weighting factors among those values can be determined using either the Analytic Hierarchy Process or the "zig–zap" method as proposed in BRR [31] (due to page limitation, we won't discuss this issue further). Similarly, utilities can be assigned to the least favorable attribute value of each attribute. In our example, v11, v21, and v31 are assigned utilities of −8, 0, and −2, respectively. Step 3: Determine the utilities for the intermediate domain values of each attribute. According to step 2, for the attribute A2 the utility assigned to its most favorable value, i.e., "many", is 4 and the utility assigned to its least favorable value, i.e., "few", is 0. The utility assigned to the intermediate value "some" can be calculated as the average of the two values, i.e., (0 + 4) / 2 = 2, if we determine (via the Analytic Hierarchy Process or the "zig–zap" method) that the value "some" is equally close to "few" and "many." Apply this procedure to the intermediate values in the other two attribute value ranges.
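The three scaling steps of the example can be transcribed directly into code. The helper names below are ours; only the A2 numbers from the text are used.

```python
# Sketch of the three-step utility scaling, using the A2 numbers from the
# example above ("few" -> 0, "many" -> 4, intermediate values interpolated).

def normalize_books(count):
    """Step 1: map raw domain values onto the range [few; some; many]."""
    if count <= 5:
        return "few"
    if count <= 15:
        return "some"
    return "many"

def interpolate(u_least, u_most, position):
    """Step 3: utility of an intermediate value from its relative 'distance'
    (position 0.0 = least favorable value, 1.0 = most favorable value)."""
    return u_least + position * (u_most - u_least)

# Step 2 fixed the endpoint utilities: U("few") = 0 and U("many") = 4.
# If "some" is judged equally close to both endpoints, its position is 0.5:
print(interpolate(0, 4, 0.5))   # -> 2.0, matching (0 + 4) / 2 in the text
print(normalize_books(12))      # -> "some"
```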

Fig. 5. Primary rules of a policy specification to evaluate a software program S.

A complete example of the primary rules of a policy specification is given in Fig. 5. Here, a system evaluates software S in order to determine if S satisfies the system's requirements for information quality and security. The policy specification starts with a primary negative rule,

which indicates S should be rejected if either S or its software components don't have appropriate licenses (not all open licenses are created equal; some licenses are much more restrictive than others). The dominating negative attribute specified in this rule is "appropriate software licenses of the software and its components, if any." For a candidate software program, if its value for this attribute is evaluated as "no", then the program should be rejected. If the value is "yes", then the evaluation moves to the next rule. Rules 2–5 are also primary negative rules. If S doesn't have any dominating negative values as tested by those rules but has a desired positive value which satisfies any of primary positive rules 6–8, then S can be accepted. Otherwise, S is evaluated based on rule 9, which is a primary accumulating rule based on the attribute "the number of security and quality patches released for the software in the past 12 months." If S's value for this attribute is greater than 6, then a negative utility (or disutility) of −8 is assigned to this value. Consequently, the software's accumulated negative utility is increased by 8. If S's value for this attribute is between 0–2 or 5–6, then its accumulated positive utility is incremented by 1. If the value is 3 or 4, S's accumulated positive utility is incremented by 9. The positive and negative accumulated utilities are counted separately for a given object under evaluation. After rule 9, the following primary accumulating rules are evaluated for S one by one. Finally, the implied rules are applied, which take the accumulated utilities for both the positive and negative features of S as inputs to determine whether S should be selected or rejected, or whether no definitive decision is reached.

4.4. Residual policy rules

After applying all the primary and implied policy rules to a candidate object, three results are possible: (1) the object has been accepted and the decision process for the subject matter is completed; (2) the object has been denied and the next object for the same subject matter should be evaluated; or (3) no decision can be made regarding the object, but the object can be added to a set Candidate, which contains the residual objects after all the primary policy rules and implied rules have been applied. Since all the residual objects are about the same subject of interest, selecting one, if any, is enough.

The residual policy rules help select an object from the Candidate set. There are several ways to design residual policy rules: (1) filter out any object with negative utility beyond a threshold; (2) rank the candidate objects based on the difference between their positive and negative utilities, then select the one with the largest net positive difference; or (3) select the object based on certain customized criteria. In terms of option (3), one criterion is to select the object with the highest trust value.

The trust value of an object can be calculated based on the aggregation of a group of recommenders' opinions towards the quality and/or security features of the object. Each recommended trust value of the object is then discounted using the trustworthiness of the recommender. This is simply because the recommender may not be fully trusted to make a reliable recommendation. Consider the case where a recommender doesn't provide his/her authentic opinion about the object, or where the recommender has imperfect knowledge about the object under evaluation. Formula (3) represents a method to calculate the trust value of an object, O, for an evaluator E in terms of its quality and security features. The trust value is denoted as T(E, O).

T(E, O) = [ Σ_{i=1..n} T(E, Ri) · T(Ri, O) ] / n   (3)

where T(E, Ri) represents the trust that evaluator E places on recommender Ri, where 1 ≤ i ≤ n (we assume n recommenders are available), and T(Ri, O) represents the recommended trust value of object O by Ri.
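Formula (3) translates directly into code; the toy trust values below are invented for illustration.

```python
# Direct transcription of formula (3): each recommendation T(Ri, O) is
# discounted by the evaluator's trust in the recommender T(E, Ri), then averaged.

def object_trust(recommender_trust, recommendations):
    """recommender_trust[i] = T(E, Ri); recommendations[i] = T(Ri, O)."""
    n = len(recommendations)
    return sum(t_er * t_ro for t_er, t_ro in zip(recommender_trust, recommendations)) / n

# Three recommenders rate object O highly, but E only partially trusts them:
print(object_trust([0.9, 0.6, 0.3], [0.8, 0.9, 1.0]))  # -> (0.72 + 0.54 + 0.3) / 3 = 0.52
```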

We next discuss the calculation of the trust value of a recommender, T(E, Ri), using a web of trust. A web of trust describes a direct trust relationship between two neighbor subjects. One subject is a neighbor to another if the latter has built and maintained a personal trust relationship with the former. Those direct trusts form a trust network. A web of trust is represented by a weighted directed graph with each vertex representing a subject and each directed edge with weight t from vertex V to vertex V′ representing that V directly trusts V′ with trust value t. Indirect trust between two subjects that are not neighbors can be calculated based on the principle of subject trust transitivity. Under the maximum principle, a user believes anything that is believed by at least one of the users she trusts. The trustworthiness of a subject can be obtained by applying the algorithm in Fig. 6. The algorithm identifies a path with the minimum sum of distrust for every pair of remote subjects. Then an appropriate aggregation of trust along that path can be applied to calculate the indirect trust values between any pair of subjects based on a trust network. Given a trust network G, for every edge e ∈ E(G) with edge weight t, where t is a real number in the range [0, 1], re-label e with t′ = 1 − t. A new graph G′ is thus generated with V(G) = V(G′) and E(G) = E(G′), but each edge of G′ has a different value from the corresponding edge of G. Intuitively, t′ specifies the degree of direct distrust between two subjects. G′ is called a distrust network. The goal of the algorithm is to find a path between any pair of nodes of G′ with the minimum sum of t′ values along that path.

Fig. 6. Subject trust calculation.
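The algorithm itself is given in Fig. 6 and is not reproduced here. One plausible reading of the description above — relabel each trust edge as distrust t′ = 1 − t and run a standard shortest-path search over the distrust network — can be sketched as follows (the graph data is invented for illustration):

```python
import heapq

# Sketch of the distrust-network idea: relabel each direct-trust edge t as
# t' = 1 - t, then find the path with minimum accumulated distrust (a standard
# Dijkstra search). The algorithm in Fig. 6 is the authoritative version.

def min_distrust_path(trust_edges, source, target):
    """trust_edges: dict mapping vertex -> list of (neighbor, t) with t in [0, 1]."""
    distrust = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, v = heapq.heappop(queue)
        if v == target:
            return d                       # minimum sum of t' = 1 - t values
        if d > distrust.get(v, float("inf")):
            continue
        for neighbor, t in trust_edges.get(v, []):
            nd = d + (1.0 - t)             # edge relabeled with its distrust
            if nd < distrust.get(neighbor, float("inf")):
                distrust[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return None                            # no trust path exists

web = {"E": [("R1", 0.9), ("R2", 0.5)], "R1": [("R3", 0.8)], "R2": [("R3", 0.9)]}
print(min_distrust_path(web, "E", "R3"))   # 0.1 + 0.2 = 0.3, via R1
```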

5. Decision model at the application level

The decision model at the system level focuses on the system-wide general requirements for information quality and security without addressing any particular concerns of the internal applications within that system. The decision model at the application level allows individual internal applications to specify and apply their additional and refined policies to filter or reorganize those objects selected at the system level.

5.1. Additional and/or refined attributes

The decision model at the application level defines additional and/or refined trust-related attributes for a type of objects. Certain attributes would be best specified by individual internal applications. For instance, for the selection of open source software, additional attributes can be defined to describe the refined technical features of the available software and restrict its usages based on different criteria. Developing mission-critical, routine, experimental, or testing projects each imposes different requirements on the quality and security features of the software components involved. Any mission-critical project requires that its software components have a high level of reliability. Our study draws its motivation from research on service quality [7,23,15], but it is different in terms of the measurements in focus.

Trust-related attributes of a type of objects can be refined from two aspects: (1) the domain-dependent technical perspective; and (2) the semantic meaning perspective. The first aspect focuses on the intrinsic features of the type of objects. Technical attributes can be classified into two major categories: security-oriented and quality-oriented. The second aspect focuses on the favorable or unfavorable semantic meaning of an attribute. An attribute which describes a desired and welcome (i.e., favorable) property of the type of objects is called a positive attribute. A positive attribute expresses a user's expectation towards a particular feature of the type of objects. Hence, users wish to see a higher value for a positive attribute given an object. Examples of positive attributes of software programs include rigorous software engineering practices in producing the software, the expertise of the software development team, and complete and informative documentation of the software. On the other hand, an attribute with a negative semantic meaning, called a negative attribute, represents an unfavorable property of the type of objects. A negative attribute expresses an evaluator's concerns, and the evaluator wishes to minimize such a property of a given object. Users would like to see a lower value for a negative attribute. One example of a negative attribute is the possibility of virus infection given a software program. A utility based on a positive feature of an object carries a positive semantic meaning; such a utility is called a positive utility. In contrast, a utility based on a negative feature of an object is called a negative utility, or disutility.

A positive or negative attribute only represents an evaluator's preferences or concerns towards a favorable or unfavorable property of a type of objects, respectively. An object under evaluation may have a "poor" value for a positive attribute. For instance, the attribute "correctness of algorithm" describes a desired property of a software program (hence, from the user's perspective, it is considered a positive attribute). But a particular software program may be assessed with a negative value for this attribute if that program was developed using incorrect or questionable algorithms. On the other hand, an object may have a positive value for a negative attribute. For instance, "memory leak" is considered a negative description of any software, which expresses a user's concern about the negative feature of the program's memory management. But, for a particular program, the user may assess a very positive value for this attribute if the program was tested or analyzed with no memory leak at all.

Combining both the technical and semantic aspects, four categories of refined and additional attributes are identified for software programs. Representatives of each category are listed in Table 1.

5.2. Attribute dimension

Although some attributes describe the quality and security features of a type of objects, their semantic meanings are close to each other. Those attributes can be considered together from the same point of view. Object trustworthiness is a multi-faceted concept. Trust-related attributes of a type of objects can be organized into different sets, called attribute dimensions.

Definition 7. An attribute dimension of a type of objects consists of a set of attributes which have similar logical meanings and all describe one feature aspect of the type of objects.

All the attributes in a dimension describe the feature(s) of a type of objects from the same aspect. For instance, one attribute dimension is about the producer information of the objects. Attributes in this dimension may include (1) the trustworthiness of the producer in faithfully completing its work, as assessed based on the evaluator's past personal experiences with the producer; (2) the reputation of the producer in producing quality products, as recognized based on public opinions; and (3) the aggregated trust value of the producer based on recommendations from some third parties. All three attributes describe the trustworthiness of the producer of a given object and hence can be grouped in one dimension.

A dimension consisting of only positive attributes is called a positive dimension. Otherwise, if it consists of only negative attributes, it is called a negative dimension. A positive dimension represents an evaluator's overall expectations towards some positive characteristics of the type of objects from a certain perspective. Any selected object must have the required minimum positive features as evaluated based on the criteria from the perspective of that positive dimension. In the meantime, an evaluator also specifies negative dimensions to describe potential negative features of objects under consideration. A candidate object's negative features, if any, can't go beyond a certain level, as defined based on those negative dimensions, in order for that object to be selected.

5.3. Specifying requirements for an object from the perspective of an attribute dimension

This section discusses the evaluation of a candidate object from the viewpoint of one dimension. A theory is first derived and then a set of threshold selection operators is defined.

5.3.1. Utility fusion theory
For a given dimension di which contains n attributes, a1, a2,…, an, an internal application quantitatively measures the "goodness" or "badness" of a candidate object, say, O. O is evaluated with a value, say, vi, where 1 ≤ i ≤ n, for every corresponding attribute ai of dimension di. The attributes of a dimension in a particular order form an attribute permutation. In the following discussion, we consider an attribute permutation unless specified otherwise. Based on ai, the evaluator measures a utility, say, U(V(ai)), called a component utility. According to the declining marginal utility theory [12], the marginal utility due to one more component utility is accordingly decreasing (see Fig. 7). Given a candidate object, the total utility that an evaluator measures based on the object's values for all attributes of dimension di, called the dimension utility and denoted as U(di), is thus less than the sum of all the individual component utilities. This is partially due to the overlaps among the features represented by different attribute values. The process of dimension utility accumulation presents a "fast start but slow growth" pattern.

Fig. 7. Relationship between accumulated utility and component utilities.

Utility fusion theory. Given a candidate object, the fused (or accumulated) utility for an attribute dimension with a set of component utilities is not greater than the mathematical addition of all the corresponding component utilities, i.e., U(di) ≤ Σ_{i=1..n} U(V(ai)), where a1,…, an

represent the candidate's values for the attributes in the attribute dimension.

Consider a dimension D which describes software scalability. D consists of four attributes: (1) reference deployment, which measures whether the software is scalable and tested through a real-world deployment; (2) design for scalability, which measures whether the components of the software were designed with scalability in mind (Is the program thread-safe? Does it run in a cluster environment?); (3) incorporation of third-party plug-ins, which measures the design for extensibility through third-party plug-ins; and (4) public API/external service, which measures the extensions via a public API and the design for customization. The corresponding testing conditions for the four attributes can be defined as C1: the scalability of a given software program is tested in real use with positive reports; C2: the program was designed with scalability in consideration, and this has been verified by reviewing the detailed software engineering practice; C3: the software program allows third-party plug-ins; and C4: the software program allows for extensions via a public API and shows design for customization. If a software program satisfies C1, then the fact that the program also satisfies C2 only adds a marginal utility in the eye of the evaluator, which is less than the value that would be assessed had C1 not been tested. For the third condition, C3, the additional utility resulting from the allowance of third-party plug-ins may not add many new "findings" regarding the software's scalability. The marginal utility of considering C3 would then be further decreased. Thus, the accumulated utility based on the four conditions is less than the mathematical sum of the four individual utilities based on their corresponding testing conditions.
Cj,…, Ck must be satisfied as well. Consider the dimension
5.3.2. Low-bound threshold selection operators D again. If the evaluator specifies an additional require-
Given a positive attribute dimension defined for a ment that any software program selected must be scalable
type of objects, an evaluator may only need to assess a and tested using real-world deployment, the conditional
candidate object's values for a subset of attributes of that low-bound threshold selection operator can be defined as
dimension. Evaluating other attribute values in the same ϴ3, [C1] (C1, C2, C3, C4).
dimension may not add too much utility (according to
the utility fusion theory). As long as the accumulated 5.3.3. Upper-bound threshold selection operators
utility based on this subset of attributes is good enough, It is rare to have perfect objects. Hence, flexibility is
the evaluator considers that the features of the object are desired to allow for a limited number of non-critical
acceptable from the viewpoint of that dimension. negative features of an object to be accepted given the
condition that the object possesses other significant and
5.3.2.1. Basic low-bound threshold selection operator. critical positive features. As discussed earlier, a negative
For a positive dimension, the basic low-bound threshold dimension expresses an evaluator's concerns about certain
selection operator, denoted as ϴi(C1, C2,…, Cn), specifies possible negative features of an object. An evaluator does
an evaluator's requirements for a candidate object from not expect any selected object to be discovered with
the viewpoint of that dimension. An object meets an negative features worse than a tolerable level.
evaluator's expectation if the object satisfies at least i out Consider a negative dimension D' about I/O features
of n testing conditions, C1, C2,…, Ck,…, Cn, where of a set of software programs with six attributes:
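The two low-bound operators, and the threshold value i for roughly equal component utilities, can be expressed in a few lines of code. The sketch below is ours and purely illustrative: the function names, the condition outcomes, and the utility figures are all assumptions.

```python
import math

# Sketch of the low-bound threshold selection operators.
# `outcomes` maps each testing condition to True when the candidate satisfies it.

def theta_basic(i, outcomes):
    """Basic operator: at least i of the n testing conditions hold."""
    return sum(outcomes.values()) >= i

def theta_conditional(i, mandatory, outcomes):
    """Conditional operator: at least i conditions hold, and every
    condition listed in `mandatory` holds as well."""
    return theta_basic(i, outcomes) and all(outcomes[c] for c in mandatory)

def expected_threshold(expected_utility, component_utility):
    """i = ceil(Ue / U(V(a_j))) when component utilities are roughly equal."""
    return math.ceil(expected_utility / component_utility)

# Dimension D (scalability) with hypothetical evaluation outcomes:
outcomes = {"C1": True, "C2": True, "C3": False, "C4": True}
print(theta_basic(3, outcomes))                # True: three conditions hold
print(theta_conditional(3, ["C1"], outcomes))  # True: C1 holds as required
print(expected_threshold(7.0, 2.5))            # i = 3
```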
5.3.3. Upper-bound threshold selection operators
It is rare to have perfect objects. Hence, flexibility is desired to allow a limited number of non-critical negative features of an object to be accepted, given the condition that the object possesses other significant and critical positive features. As discussed earlier, a negative dimension expresses an evaluator's concerns about certain possible negative features of an object. An evaluator does not expect any selected object to be discovered with negative features worse than a tolerable level.

Consider a negative dimension D′ about the I/O features of a set of software programs with six attributes: (1) allowance of arbitrary-length file input; (2) forceful browsing; (3) cross-site references; (4) hidden-field manipulation; (5) cookie poisoning; and (6) manipulation of local file systems. The corresponding testing conditions based on these attributes are defined as below:

C1′: a candidate program accepts files with arbitrary lengths as inputs
C2′: the program forcefully browses
C3′: the program uses cross-site references
C4′: the program manipulates hidden-fields
C5′: the program has the feature of cookie poisoning
C6′: the program writes to local file systems

5.3.3.1. Basic upper-bound threshold selection operator. For a negative dimension with m attributes, the basic upper-bound threshold selection operator, denoted as Ωj(C1′, C2′,…, Cm′), indicates that no more than j out of the m testing conditions C1′, C2′,…, Ck′,…, Cm′, where 1 ≤ j ≤ m, may be satisfied in order to consider that a candidate object does not go beyond an evaluator's concerns regarding negative features of the object from the perspective of that dimension. A basic upper-bound threshold selection operator defines this "upper" bound for negative features of an object under evaluation, indicating the "worst" acceptable criteria in order to select an object from the point of view of that dimension. Consider the negative dimension D′. If no more than two testing conditions may be satisfied in order for a software program to be accepted, the basic upper-bound threshold selection operator is defined as Ω2(C1′, C2′, C3′, C4′, C5′, C6′).

5.3.3.2. Conditional upper-bound threshold selection operator. If some testing conditions, e.g., Ci′,…, Ck′, where 1 ≤ i ≤ k ≤ m, must not be satisfied, then the basic upper-bound threshold operator can be refined as Ωj,[Ci′, Ci+1′,…, Ck′](C1′, C2′,…, Cm′). Since stricter conditions are specified for the selection criteria, such an operator is called a conditional upper-bound threshold selection operator. According to its semantics, an object may not satisfy more than j out of the m testing conditions, and none of Ci′, Ci+1′,…, Ck′ may be satisfied. Consider the dimension D′ again. If the selected software will be used to develop a mission-critical project, the evaluator may not select any software that has the feature of cookie poisoning. Then condition C5′ must not be satisfied. In this case, the conditional upper-bound threshold selection operator is defined as Ω2,[C5′](C1′, C2′, C3′, C4′, C5′, C6′), which indicates that no more than two of C1′, C2′, C3′, C4′, C5′ and C6′ may be satisfied and C5′ must not be one of them.
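The upper-bound operators admit the same treatment. Again, this sketch is ours and purely illustrative: the outcomes below are hypothetical, and the bracketed conditions are treated as conditions that must not be satisfied, following the definition above.

```python
# Sketch of the upper-bound threshold selection operators.

def omega_basic(j, outcomes):
    """Basic operator: no more than j of the negative conditions hold."""
    return sum(outcomes.values()) <= j

def omega_conditional(j, forbidden, outcomes):
    """Conditional operator: no more than j negative conditions hold,
    and none of the conditions listed in `forbidden` holds."""
    return omega_basic(j, outcomes) and not any(outcomes[c] for c in forbidden)

# Negative dimension D' (I/O features) with hypothetical outcomes:
outcomes = {"C1'": False, "C2'": True, "C3'": False,
            "C4'": False, "C5'": False, "C6'": True}
print(omega_basic(2, outcomes))                 # True: only two conditions hold
print(omega_conditional(2, ["C5'"], outcomes))  # True: C5' does not hold
```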
5.4. Inter-dimension evaluation

The low-bound and upper-bound threshold selection operators support decision-making from a single attribute perspective. To assess a given object from the multi-dimension point of view, an internal application must specify policies to strike a balance among different dimensions.

Recall that an attribute dimension describes one aspect of the quality or security features of a type of objects. An evaluator needs to weigh the importance of different dimensions and maintain a balance between the minimum required positive features and the most tolerable negative features. More specifically, an internal application may accept an object if it has at least certain "strong" values for the attributes of some crucial positive dimensions while, at the same time, the evaluated utilities based on its values for the attributes of other non-crucial negative dimensions are not worse than a tolerable threshold. A simple example can further illustrate this idea. The decision policies specified by an internal application may indicate that a software program can be selected if it satisfies: (1) the scalability criteria defined based on the attributes of dimension D; (2) at least two testing conditions defined based on the attributes of dimension D″ about the functional correctness of the software program; and (3) no more than two of the six testing conditions defined based on the attributes of dimension D′. Dimensions D and D′ were defined in Sections 5.3.2 and 5.3.3, respectively. Dimension D″ consists of three attributes: (a) static analysis of the algorithms used in the software program; (b) dynamic testing of the software program with a set of well-prepared inputs; and (c) user feedback about the functions of the software program. The first two dimensions represent an evaluator's expectations towards the necessary positive features that any selected software must have. At the same time, the evaluator requires that the software possesses no negative security features worse than those indicated by the third dimension. Any software can be selected only if it has positive features at least as good as those specified in the first two dimensions and at the same time it has no negative features worse than those specified in the third dimension. The balance between the positive and negative features for an object is achieved using logical operators such as AND and OR, as shown in a trust zone mapping policy below.

5.5. Trust zone and trust zone mapping

Objects are first selected at the system level; all of them satisfy the system-wide requirements for information assurance. Before they can be safely used by internal applications, those objects are further clustered into different groups. This is the second level of the decision-making process (see Fig. 2).

Organizing and clustering external objects appropriately serves two purposes: minimizing the security risks of executing external software and limiting its usage by internal applications. The first objective is to make sure that any accepted external programs will not compromise local systems or introduce security vulnerabilities. In light of this, external software should be assigned different privileges to access local resources. Privileges can be defined on a group basis. Any object mapped to one group automatically inherits the access rights assigned to this group. Internet Explorer, for instance, isolates downloaded websites from native documents in local systems. It defines five zones: Internet, Local Intranet, Trusted Sites, Restricted Sites, and My Computer. Web sites in each zone have different levels of security restrictions to access local resources. The second purpose of clustering selected objects is to make sure that they will be used appropriately to build larger-scale projects with different usages.

Definition 8. A trust zone represents a logical group of objects, all of which satisfy the requirements of an internal application for information quality and/or security and hence can be assigned appropriate privileges to access local resources and used for intended purposes.

One important aspect of information assurance is to disseminate data to different groups based on their features. Objects are mapped to different trust zones based on a set of policy rules, as discussed next.

5.5.1. Trust zone mapping policy and trust zone mapping graph
A trust zone mapping policy consists of a set of logical rules using the upper-bound and low-bound threshold selection operators and logical operators such as AND and OR. Such a policy determines the trust zone(s) to which an object can be mapped. An object can be mapped to one or more trust zones according to its values of the attributes in the dimensions under consideration. A formal specification of a trust zone mapping policy with a set of logical rules is shown in Fig. 8.

Fig. 8. Specification of a trust zone mapping policy.

An AND–OR graph-like data structure, called a trust zone mapping graph, represents a mapping policy defined for a trust zone. In such a graph, the root is labeled with the identifier of the trust zone. Each of the leaf nodes represents a testing condition defined based on an attribute in a dimension. An internal node represents an AND, OR, ϴj, or Ωi operator. A ϴj or Ωi node relates the testing conditions defined based on the attributes of the same dimension.

5.5.2. A case study
This case study illustrates how an internal application defines a trust zone mapping policy to organize a software program into the appropriate trust zone(s). The following testing conditions are defined based on the attributes identified for the type of software of interest:

C1: the algorithms used in a candidate software program are analyzed to be correct
C2: the software program is tested and produces correct outputs given a set of well-prepared inputs
C3: users provide positive feedback about the functions of the software program
C4: the software program is scalable and tested in real use through a real-world deployment with positive reports
C5: the software program was designed with scalability in mind and this has been verified by the evaluator by reviewing the detailed software engineering practices
C6: the software program was developed to allow third-party plug-ins
C7: the software program allows for extensions via a public API and shows design for customization
C8: the software program has race conditions
C9: the software program has deadlocks
C1′: the software program carries valid security-proof code supplied by the software producer
C2′: the software program has been monitored by CERT and no open security vulnerabilities are outstanding
C3′: the software program has very few security bugs reported by users since its release
C4′: the software program leaves executable code segments in memory after execution
C5′: the software program accesses memory out of its allocated address space
C6′: the software program has the problem of memory leaks
C7′: the software program creates hidden network connections and listens at privileged ports
C8′: the software program automatically turns on promiscuous mode and intercepts network traffic
C9′: the software program uses weakly seeded cryptographic keys for network communications
C10′: the software program permits default passwords
C11′: the software program permits relative and/or default directory paths
C12′: the software program exchanges sensitive information in default plain text across networks
C13′: the software program accepts arbitrary-length files as input
C14′: the software program forcefully browses
C15′: the software program uses cross-site references
C16′: the software program manipulates hidden-fields of files
C17′: the software program has the feature of cookie poisoning
C18′: the software program writes to local file systems

Testing conditions C1, C2, and C3 are defined based on the quality-oriented positive attributes of a dimension focusing on the functional correctness of a type of software programs. C4, C5, C6, and C7 are defined based on the quality-oriented positive attributes of a dimension regarding the scalability of the software programs. C8 and C9 are defined based on the quality-oriented negative attributes of a dimension describing the inter-process communications of the programs. C1′, C2′, and C3′ are related to the security-oriented positive attributes of a dimension about third-party testable security features of the programs. C4′, C5′ and C6′ are defined based on the security-oriented negative attributes of a dimension about memory management. C7′, C8′, and C9′ are associated with the security-oriented negative attributes of a dimension about networking and data communications. C10′, C11′ and C12′ correspond to the security-oriented negative attributes of a dimension about the risks of using default values. C13′, C14′, C15′, C16′, C17′ and C18′ are defined according to the security-oriented negative attributes of a dimension about I/O operations and program interfaces. An example of a mapping policy for Trust Zone 1 is defined below, and the corresponding trust zone mapping graph is shown in Fig. 9:

{ϴ2(C1, C2, C3) AND ϴ3(C4, C5, C6, C7) AND Ω0(C8, C9)} AND {ϴ1(C1′, C2′, C3′) OR [Ω0(C4′, C5′, C6′) AND Ω0(C7′, C8′, C9′) AND Ω2,[C12′](C10′, C11′, C12′) AND Ω4,[C13′, C17′](C13′, C14′, C15′, C16′, C17′, C18′)]} → Trust Zone 1

Each trust zone is assigned a set of privileges for those objects mapped to this trust zone to access local resources. Access control is expressed by a mapping Z → R × O, where Z represents a set of trust zones, R represents a set of access rights, e.g., read, write, and execute, and O represents a set of protected local resources. An object mapped to a trust zone inherits all the rights assigned to that trust zone and can only access resources as defined. If an object has been mapped to multiple trust zones, then its access rights are the accumulation of those defined for all the mapped trust zones.

Consider the following situation. According to the principle of least privilege, system administrators only allow external programs that have been mapped to Trust Zone 1 to access files on public drive D1 with a full set of rights (i.e., read, write, and execute), to read and execute files on drive D3, which hosts the system's web pages, and to read files on drive D2, which the objects need to access in order to complete their tasks. The mapping between Trust Zone 1 and the assigned access rights is illustrated below (where the symbols R, W and E represent read, write, and execute, respectively):

Trust Zone 1 → {[D1: R, W, E], [D2: R], [D3: R, E]}

Fig. 9. Mapping graph for trust zone 1.
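To illustrate how such a policy could be evaluated mechanically, the following sketch (ours; the set of satisfied conditions is hypothetical) encodes the Trust Zone 1 mapping policy defined above using the operator semantics from Sections 5.3.2 and 5.3.3:

```python
# Sketch: evaluating the Trust Zone 1 mapping policy (illustrative only).

def theta(i, held, mandatory=()):
    """Low-bound: at least i conditions hold; `mandatory` ones must hold."""
    return len(held) >= i and all(c in held for c in mandatory)

def omega(j, held, forbidden=()):
    """Upper-bound: at most j conditions hold; `forbidden` ones must not."""
    return len(held) <= j and not any(c in held for c in forbidden)

def maps_to_trust_zone_1(sat):
    """`sat` is the set of testing conditions the candidate satisfies."""
    quality = (theta(2, sat & {"C1", "C2", "C3"})
               and theta(3, sat & {"C4", "C5", "C6", "C7"})
               and omega(0, sat & {"C8", "C9"}))
    security = (theta(1, sat & {"C1'", "C2'", "C3'"})
                or (omega(0, sat & {"C4'", "C5'", "C6'"})
                    and omega(0, sat & {"C7'", "C8'", "C9'"})
                    and omega(2, sat & {"C10'", "C11'", "C12'"},
                              forbidden={"C12'"})
                    and omega(4, sat & {"C13'", "C14'", "C15'",
                                        "C16'", "C17'", "C18'"},
                              forbidden={"C13'", "C17'"})))
    return quality and security

satisfied = {"C1", "C2", "C4", "C5", "C6", "C1'"}  # hypothetical outcome
print(maps_to_trust_zone_1(satisfied))             # True
```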
Access rights can be abbreviated as "All" if there is no restriction for external objects to access local resources. This should be applied only to highly trustworthy objects. In contrast, a default trust zone is assigned "no access right", which essentially prohibits the external objects that are mapped to this zone from accessing any local resources. By default, all objects are mapped to this trust zone.

Consider another situation. An internal application reuses selected software components as sub-routines to develop large-scale software projects with different target usage settings: experimentation, internal development, routine use, and mission-critical use. Consequently, the different categories of projects have corresponding requirements for their software components. In this case, the internal application maps externally selected software with a high level of quality and security to a trust zone whose member software can be used as components to develop mission-critical projects (see below).

Trust Zone 1 → Developing mission-critical projects
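The mapping Z → R × O and the accumulation rule for objects mapped to multiple trust zones could be realized as in this sketch (ours; the zone names, drives, and rights follow the example above, and the default zone carries no rights):

```python
# Sketch of the zone-to-rights mapping Z -> R x O with rights accumulation.

ZONE_RIGHTS = {
    "Trust Zone 1": {"D1": {"R", "W", "E"}, "D2": {"R"}, "D3": {"R", "E"}},
    "Default Zone": {},  # "no access right": prohibits all local access
}

def accumulated_rights(zones):
    """Union of the rights of every trust zone an object is mapped to."""
    rights = {}
    for zone in zones:
        for resource, ops in ZONE_RIGHTS.get(zone, {}).items():
            rights.setdefault(resource, set()).update(ops)
    return rights

print(accumulated_rights(["Trust Zone 1"]))  # D1: R/W/E, D2: R, D3: R/E
print(accumulated_rights(["Default Zone"]))  # {} -> no access at all
```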
6. Conclusions

This paper addresses the issue of information assurance in a virtual organization (VO) environment. We present a two-level decision model to aid VO participants in selecting external information with the required level of quality and security. Evaluating the trustworthiness of an object is challenging since it requires the evaluator to have solid domain knowledge about that object and to have reliable resources to refer to. The proposed model guides users through two major steps to make the final decision. First, it allows users to explicitly express what features of an external object are desired and what features are not acceptable. The selection criteria are expressed as a set of policy rules, which are pre-defined based on a set of trust-related attributes of the objects of interest. Second, the initially selected objects with different characteristics are reorganized into different groups (called trust zones) in such a way that those objects can be used in appropriate ways and/or access system resources in a controlled manner. Our framework guarantees that any selected objects have the required levels of security and quality features. The framework also offers users the flexibility to specify their own decision criteria.

Acknowledgement

The authors are thankful to Dr. Robert L. Herklotz for his support and to the editors and anonymous reviewers for their valuable comments. The research effort of Dr. Brajendra Panda has been supported by the US AFOSR under grant F49620-01-10346.
Yanjun Zuo is an assistant professor at the University of North Dakota, Grand Forks, USA. He earned his Ph.D. in Computer Science from the University of Arkansas, Fayetteville, USA in 2005. He also holds two master's degrees, in Computer Science and Business Administration, from the University of Arkansas and the University of North Dakota, Grand Forks, USA, respectively. His research interests include information and computer security, trustworthy computing, survivable and self-healing systems, and information privacy protection. He has published numerous articles in refereed journals and conference proceedings in these fields.

Brajendra Panda is a professor at the University of Arkansas, Fayetteville, USA. He received his Ph.D. in Computer Science from North Dakota State University, Fargo, USA in 1994 and a master's degree in mathematics from Utkal University, India in 1985. His research interests include database systems, trusted database systems, computer security, computer forensics, and information assurance. He has published extensively in these fields.