
KNOWLEDGE MANAGEMENT SYSTEM IMPROVEMENT TOWARDS

SERVICE DESK OF IT OUTSOURCING IN BANKING BUSINESS















MR PADEJ PHOMASAKHA NA SAKOLNAKORN














A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN INFORMATION TECHNOLOGY
DEPARTMENT OF INFORMATION TECHNOLOGY
GRADUATE COLLEGE
KING MONGKUT'S UNIVERSITY OF TECHNOLOGY NORTH BANGKOK
ACADEMIC YEAR 2007
COPYRIGHT OF KING MONGKUT'S UNIVERSITY OF TECHNOLOGY NORTH BANGKOK



Name : Mr. Padej Phomasakha Na Sakolnakorn
Thesis Title : Knowledge Management System Improvement towards
Service Desk of IT Outsourcing in Banking Business
Major Field : Information Technology
King Mongkut's University of Technology North Bangkok
Thesis Advisor : Assistant Professor Dr. Phayung Meesad
Co-Advisor : Dr. Gareth Clayton
Academic Year : 2007
Abstract
In business, knowledge is an organizational asset that enables corporations to sustain
competitive advantages. With the increasing demand for IT outsourcing to deliver
world-class services, the Information Technology Infrastructure Library (ITIL) has
become a key concept for providing high-quality services, and the IT service desk is a
crucial function within the whole concept of IT service management.
Three current problems are identified: 1) technical staff turnover is very high; 2) more
than sixty percent of all resolving time is spent resolving repeat incidents; and 3) the
resolver group assigned to deal with an incident may be inaccurate due to human
error. This thesis therefore proposes a framework for a knowledge management system
with root cause analysis, called the KMRCA IT service desk system, and evaluates its
performance. The system is composed of two main functions: a searching knowledge
function and an automatic assignment function. This thesis evaluated the performance
of the searching knowledge function using a simulation study and concluded that the
system could significantly reduce the time taken to resolve incidents. Moreover, the thesis
enhances the framework to select the most suitable resolver group to deal with an
incident using text mining discovery methods. The ID3 decision tree method could
increase productivity and decrease reassignment turnaround times. Furthermore, the
rules resulting from rule generation from the decision tree could be kept
in a knowledge database in order to support and assist with future assignments.
(Total 153 pages)
Keywords : knowledge management, service desk, outsourcing, text mining, ITIL,
performance evaluation, simulation study, and decision tree.

______________________________________________________________ Advisor



Abstract (in Thai)


ACKNOWLEDGEMENTS

I wish to express my gratitude to a number of people who became involved with
this thesis. Foremost, I would like to thank my advisors, Assist. Prof. Dr. Phayung
Meesad and Dr. Gareth Clayton, for providing me with the opportunity to complete
my PhD thesis at King Mongkut's University of Technology North Bangkok.
I would especially like to thank my advisor, Assist. Prof. Dr. Phayung, whose
support and guidance made my thesis work possible. He has been actively interested
in my work and has always been available to advise me. I am very grateful for his
motivation, enthusiasm, and immense knowledge. He also helped my work reach
international publication. I would like to thank Dr. Gareth Clayton, whose advanced
research methodology, in particular statistics and simulation techniques, gave me both
the concepts and the real practice, together with the conscious and unconscious sense
of how good is good enough in experimental design; this makes him a great mentor.
Moreover, I would like to express my faithful thanks to Assoc. Prof. Dr. Utomporn
Phalavonk, whose advice on scheduling and recommendations regarding the Graduate
College's regulations enabled me to complete my planning and administrative tasks.
I would like to sincerely thank Dr. Choochart Haruechaiyasak, whose
knowledge and technical suggestions about text mining discovery algorithms, in
particular word extraction and machine learning, facilitated the approach of
automatic resolver group assignment in place of the IT service desk agents' tasks.
Thanks to Taweesak Suwanjaritkul and Pisit Thongngok, whose knowledge of
Visual Basic programming and SQL Server 2005 database management made the
prototype of the KMRCA IT service desk system work effectively.
Thanks to the members of the IT admin staff, whose work got most of my
administrative documents completed during my study at the university.
This thesis could not have been completed without my wife and all the people in my
family, in particular Dad and Mom, who have supported me since I was born.

Padej Phomasakha Na Sakolnakorn


TABLE OF CONTENTS
Page
Abstract (in English) ii
Abstract (in Thai) iii
Acknowledgements iv
List of Tables vii
List of Figures viii
Chapter 1 Introduction 1
1.1 Background and Statement of the Problem 1
1.2 Objectives 3
1.3 Hypothesis 3
1.4 Scope of the Study 3
1.5 Utilization of the Study 5
Chapter 2 Literature Review 7
2.1 Knowledge Management 7
2.2 Root Cause Analysis 10
2.3 Case-Based Reasoning 11
2.4 ITIL-Based IT Service Desk Function 14
2.5 Technologies for Service Desk 22
2.6 IT Service Desk Outsourcing 23
2.7 Decision Support System 24
2.8 Classification trees 25
2.9 Summary 28
Chapter 3 Methodology 31
3.1 Research Process 31
3.2 Information Collection and Requirement Analysis 32
3.3 Constructing an Instrument for Data Collection 34
3.4 The Proposed KMRCA IT Service Desk Framework 39
3.5 Methodology of Automatic Resolver Assignment 53
3.6 Summary 59




TABLE OF CONTENTS (CONTINUED)
Page
Chapter 4 Experimental Results 61
4.1 The Results of Text Mining Discovery Methods of
Automatic Assign Function 61
4.2 The Results of Design of Experiment 63
4.3 The Results of Performance Evaluation 67
4.4 Summary 69
Chapter 5 Conclusion 71
5.1 Conclusion 71
5.2 Discussion 72
5.3 Future Work 73
References 75
Appendix A 81
Appendix B 89
Appendix C 129
Biography 153









LIST OF TABLES

Table Page
3-1 The Rate of Incident Calls during Time in Business Day and Holiday 33
3-2 Percentage of Incident Calls by Severity 33
3-3 Classification of Calls by Incident Category 34
3-4 Summary of Probability Distributions for Computer Simulation 35
3-5 Comparison of Square Error by Function 36
3-6 A Goodness-of-fit Test of Time in Resolving Incidents by Severity 38
3-7 The Number of Incidents of System Types and Resolver Groups 53
4-1 The Number and Percentage of Correct Incident for Various Types
of Decision Trees 62
4-2 The Speed Compared with the Accuracy of Classification 62
4-3 Assigned Factor Values for Two-Level 64
4-4 2³ Full Factorial Design of DOE for Responses Y of O₁ 65
4-5 Coded Design Matrix of O₁ 65
4-6 Absolute Value of Coefficients for Average O₁ and P-Value 66
4-7 Absolute Value of Coefficients for Average O₄ and P-Value 66
4-8 Comparison Tests of KMRCA and Typical IT Service Desk Systems 68
4-9 Comparison Outputs of KMRCA and Typical IT Service Desk Systems 68












LIST OF FIGURES

Figure Page
2-1 The Case-Based Reasoning Cycle 12
2-2 Classification Hierarchy of Case-Based Reasoning Applications 13
2-3 Incident Management Process Overview 15
2-4 The Incident Life Cycle 17
2-5 First, Second, and Third Line Supports 18
2-6 Relationship between Incidents 19
2-7 Handling Incident Work-arounds and Resolutions 19
3-1 Input Analyzed Results 36
3-2 Probability Plot of Time between Arrivals 37
3-3 Probability Plot for Resolving Time by Severity 39
3-4 A Typical IT Service Desk Outsourcing Overview 40
3-5 Information Flow of IT Service Desk 41
3-6 A Conceptual Model of IT Service Desk System 42
3-7 A Proposed Framework of KMRCA IT Service Desk System 43
3-8 Information Flow of KMRCA IT Service Desk System 44
3-9 KMRCA IT Service Desk Process 45
3-10 Search Knowledge Procedure 46
3-11 Typical IT Service Desk and KMRCA IT Service Desk 48
3-12 The System Development Life Cycle (SDLC) 49
3-13 A Sample Display of Search Knowledge and Input Resolution 51
3-14 A Sample Display of Searching Results 52
3-15 A Sample Display of Assign Resolver Group 53
3-16 KMRCA IT Service Desk with Automatic Assignment Function 54
3-17 A Process of Automatic Resolver Group Assignment 54
3-18 Processes of Model Approach for Automatic Assignment 56
4-1 Pareto of Coefficients for Average Response Y of O₁ 66
4-2 Pareto of Coefficients for Average Response Y of O₄ 66

CHAPTER 1
INTRODUCTION

1.1 Background and Statement of the Problem
Knowledge management is the business process of managing the organization's
knowledge by means of systematic and organization-specific processes for
acquiring, organizing, sustaining, applying, sharing, and renewing both the tacit
knowledge and explicit knowledge of employees, not only to enhance
organizational performance but also to create value [1, 2, 3, 4].
Due to the rapid change in technology and competition among global financial
institutions, banks in Thailand also need to reduce costs and to improve their
quality of services by strategically outsourcing information technology (IT) functions,
such as data processing and system development, to third parties. IT outsourcing is
understood as a process in which certain service providers, external to the organization,
take over IT functions formerly conducted within the boundaries of the firm [5, 6].
The IT service desk is a crucial function of incident management, driven by alignment
with the business objectives of the enterprise that requires IT support, balancing its
operations and achieving desired service level targets, while the IT Infrastructure Library
(ITIL) has become a strategic tool for the efficiency and effectiveness of IT outsourcing
providers in delivering a competitive approach. The ITIL defines a set of best
practice processes to align IT services with business needs and constitutes the
framework for IT service management [7, 8].
The primary objective of the IT service desk is to resolve incidents related to IT
in the organization. In the case study, it appears that the IT service desk outsourcing's
role is not quite a single point of contact [9]. The bank takes ownership of the help
desk agents, called the first level support (FLS), which acts as more than just an
interface for internal users and external customers. Consequently, the IT service desk, as the
second level support (SLS), resolves the incidents assigned from the FLS,
ensuring that each incident is in the outsourcing scope and remains owned, tracked, and
monitored throughout its life cycle.

For the technologies regarding the service desk, many organizations have focused
on computer telephony integration (CTI). The basis of CTI is to integrate computers
and telephones so that they can work together seamlessly and intelligently [10].
The major hardware technologies are as follows: automatic call distributor (ACD),
voice response unit (VRU), and interactive voice response unit (IVR) [11]. These
technologies are used to make the existing process more efficient by focusing on
minimizing the agents' idle time. To resolve incidents effectively, IT service desk
agents must be very knowledgeable about their service supports, applications, and
support teams. Most efforts at improving service desk performance have been to make
the current system more efficient through applications of information technology.
Those technologies do not address the problem of resolving performance dropping due
to incorrect assignments.
This thesis identifies three problems as follows:
1.1.1 Employee turnover is very high, particularly for technical employees
[12]. Service desk staff hold significant knowledge regarding the
systems, such as business processes and technologies, and if they leave, their
knowledge often goes with them.
1.1.2 More than sixty percent of all resolving time is spent resolving
repeat incidents [13].
1.1.3 The resolver group assigned to deal with an incident may be mistaken
due to human error, because resolver group assignments are still performed
manually by IT service desk agents.
The first two problems can be addressed by retaining employees' knowledge
within the organization through a knowledge management approach and by preventing
recurring incidents using root cause analysis. These activities are becoming the
primary internal IT service desk functions of the outsourcing provider, and they
have the potential to provide competitive advantages. The last problem, that of
incorrect resolver group assignment, can be resolved by means of an
automatic assignment approach. Text mining discovery methods can identify
suitable methods, such as decision trees, to support correct assignment, and the rules
resulting from rule generation from the decision tree can be kept in a
knowledge database to support and assist with further assignments.


1.2 Objectives
The objectives of this dissertation are as follows:
1.2.1 To propose a framework for a knowledge management system with root
cause analysis, based on ITIL best practice, for IT service desk outsourcing in the
banking business, called the KMRCA IT service desk system.
1.2.2 To evaluate the performance of the KMRCA IT service desk system
before and after its employment, using experimental design and a simulation study.

1.3 Hypothesis
Since the performance of the KMRCA IT service desk system is expected to be
higher than that of the Typical IT service desk system in terms of speed in resolving
incidents, the alternative hypothesis (H₁) is that the average time in resolving incidents
for all calls, except critical calls, will be lower for the KMRCA IT service desk system
than for the current Typical IT service desk system, and the null hypothesis (H₀) is that
the average times in resolving incidents of the two systems are the same. The two rival
hypotheses are compared by a statistical hypothesis test.

H₀: μ₁ = μ₂, and
H₁: μ₁ < μ₂,

where μ₁ and μ₂ are the average time in resolving incidents of the
KMRCA IT service desk system and the average time in resolving incidents of the
Typical IT service desk system, respectively.
The statistical hypothesis test approach is to calculate the probability that the
observed effect will occur if the null hypothesis is true. In other words, if the p-value
is small then the result is called statistically significant and the null hypothesis is
rejected in favour of the alternative hypothesis. If not, then the null hypothesis is not
rejected. Incorrectly rejecting the null hypothesis is a Type I error; incorrectly failing
to reject it is a Type II error.
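
As an illustration only, the minimal Python sketch below carries out such a one-sided two-sample test on resolving-time samples; the sample values, the 0.05 significance level, and the use of Welch's t-test from SciPy are assumptions for the example rather than the procedure reported in this thesis.

```python
# Illustrative sketch only: a one-sided two-sample test of H0: mu1 = mu2 vs H1: mu1 < mu2,
# using hypothetical resolving-time samples (minutes) from the two simulated systems.
from scipy import stats

kmrca_times = [22.4, 18.9, 25.1, 20.3, 19.7, 23.8, 21.5, 24.2]   # hypothetical KMRCA outputs
typical_times = [28.6, 31.2, 27.4, 33.0, 29.8, 30.5, 26.9, 32.1]  # hypothetical Typical outputs

# Welch's t-test; alternative="less" tests whether the KMRCA mean is lower.
t_stat, p_value = stats.ttest_ind(kmrca_times, typical_times,
                                  equal_var=False, alternative="less")

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: KMRCA resolving time is significantly lower.")
else:
    print("Fail to reject H0: no significant difference detected.")
```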

1.4 Scope of the Study
The scope of this dissertation is as follows:
1.4.1 This study focuses on the performance evaluation in terms of throughput
and average time taken in resolving incidents.

1.4.2 The performance evaluation compares before-and-after employment of the
KMRCA IT service desk system by using a simulation study within the Arena [56]
software package and a design of experiment with a 2³ factorial design.
1.4.3 For the framework, IT service desk outsourcing includes IT service desk
agents and five resolver groups, including EOS (enterprise operating service), IE-AMS
(application management service), NWS (network service), OS-EC (operation service),
and VEN (vendor service).
1.4.4 ITIL-based KMRCA IT service desk processes include the IT service desk
function, the incident management process, and the problem management process.
1.4.5 The proposed KMRCA IT service desk system was developed based on
system analysis using the system development life cycle (SDLC) method. In addition, the
system is composed of two main functions: a searching knowledge function based on
case-based reasoning, and an automatic resolver group assignment function based on the
method generated from text mining discovery algorithms.
1.4.6 The text mining discovery algorithms are used to find the strongest
method by comparing seven decision trees within the WEKA [65] machine learning suite:
Decision Stump, ID3, J48, NBTree, Random Forest, Random Tree, and REPTree.
1.4.7 The resolver groups are always available when they receive the assigned
incidents from the IT service desk agents.
1.4.8 For performance evaluation, a sample of incident data was collected from
the Tivoli CTI system of the IT service desk outsourcing, consisting of 12,198 selected
calls (prime time on working days) over the four months from April to July 2006.
1.4.9 For the study of automatic resolver assignment, a sample of incident data
was collected from the Tivoli CTI system of the IT service desk outsourcing,
consisting of all 14,440 cases over the four months from April to July 2006.
The sample sizes differ because they serve different study objectives. For the
performance evaluation using the simulation study, the sample of 12,198 calls during
prime time on working days was selected because the aim is to make the simulation
output as realistic as possible. For the automatic resolver group assignment, the sample
is all 14,440 cases because the main purpose of that study requires all data to be run
through the system, regardless of the time of occurrence, so that assignments can be
made correctly according to the relevant symptoms of the incident.


1.5 Utilization of the Study
1.5.1 The performance evaluation using a simulation study and experimental
design can be adopted to determine the specification of the knowledge management
system. For example, the performance evaluation of the KMRCA IT service desk can be
applied to other service desk functions to identify the KMRCA specifications, which
can then be modified according to the organization's requirements.
1.5.2 The simulation study is also used to evaluate the KMRCA IT service desk
system's performance without interrupting the daily IT service desk operations.
Moreover, the simulation approach can be applied to time-critical processes in several
industries in order to manage constraints of the system (a minimal sketch of such a
service desk simulation appears after this list).
1.5.3 The ITIL-based IT service desk function in the incident management and
problem management processes can be adopted and adapted by the organization's
outsourcing provider when pursuing ITIL certification.
1.5.4 The data preparation process and the text mining discovery algorithm
method can be applied to empirical studies that need data pre-processing and
transformation of the results to find the strongest method for a classification approach.
1.5.5 The suitable decision tree-based function of the IT service desk
system provides not only automatic resolver group assignment, but also knowledge
acquisition in the form of the rules resulting from rule generation by the decision tree
method. The acquired knowledge can be kept to support and assist further
assignments.
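
As referenced in 1.5.2 above, the following minimal Python sketch illustrates the kind of discrete-event queueing simulation used to evaluate resolving performance without disturbing live operations; the arrival rate, resolve rates, and single-resolver queue are hypothetical assumptions and are much simpler than the Arena model used in the thesis.

```python
# Minimal sketch, not the Arena model from the thesis: a single-queue, single-resolver
# incident simulation with exponential interarrival and resolving times (rates hypothetical).
import random

def simulate(n_incidents=10_000, arrival_rate=1/6.0, resolve_rate=1/5.0, seed=42):
    """Return the average time (minutes) an incident spends from arrival to resolution."""
    random.seed(seed)
    clock = 0.0              # arrival clock
    server_free_at = 0.0     # time at which the resolver becomes available
    total_time = 0.0
    for _ in range(n_incidents):
        clock += random.expovariate(arrival_rate)      # next incident arrives
        start = max(clock, server_free_at)             # waits if the resolver is busy
        service = random.expovariate(resolve_rate)     # time to resolve the incident
        server_free_at = start + service
        total_time += server_free_at - clock           # waiting + resolving time
    return total_time / n_incidents

# Compare a "typical" system with a faster "KMRCA-like" system (hypothetical resolve rates).
print("Typical :", round(simulate(resolve_rate=1/5.0), 2), "minutes per incident")
print("KMRCA   :", round(simulate(resolve_rate=1/3.5), 2), "minutes per incident")
```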
The remainder of this thesis is organized as follows. Chapter 2 presents a literature
review, including knowledge management (KM), root cause analysis (RCA), case-based
reasoning (CBR), the ITIL-based IT service desk, technologies for IT service desks,
IT service desk outsourcing, decision support systems (DSS) for resource assignments,
and classification trees. The details of the proposed model frameworks are illustrated
in Chapter 3. Chapter 4 gives the results of the study and discussion. Finally, the conclusion
and future work are presented in Chapter 5.

CHAPTER 2
LITERATURE REVIEW

This chapter reviews the literature relevant to the study. Knowledge management,
root cause analysis, and case-based reasoning are covered in Sections 2.1, 2.2, and 2.3.
Sections 2.4 and 2.5 describe the ITIL-based service desk function and technologies for
service desks. IT service desk outsourcing is described in Section 2.6. Decision support
systems for resource assignment and classification trees are discussed in Sections 2.7
and 2.8. The summary is given in Section 2.9.

2.1 Knowledge Management
The study of knowledge management started from Polanyi's Tacit Dimension.
His analysis emphasized several key concepts. Firstly, the ability to identify
outside objects, and then to know, is learned through a process of personal experience.
Secondly, tacitness and explicitness are distinct dimensions; the increase of one does
not come at the decrease of the other. Thirdly, since tacit knowing is an essential
element of any kind of knowledge and is acquired through personal experience called
indwelling, any effort to achieve absolute detachment as the objective of knowledge is
misdirected and self-defeating. Polanyi's work was situated in a philosophical context
and focused on the definition of knowledge, but not on the systematic effort of
managing it [14].
The conceptualization of KM was not developed until knowledge became central
to production and innovation in the 1990s. Peter Drucker [15] was among the first to
advocate the advent of a knowledge society. In Post-Capitalist Society [15],
Drucker documented the transformation from a capitalist society to a knowledge society,
which began shortly after World War II, noting that the foremost economic resource
is no longer capital, land, or labor; rather, it is and will be knowledge [15]. The field
of knowledge management has also been developed by the experience and philosophy
of Eastern society.


Nonaka and Takeuchi's Knowledge-Creating Company [1], based on
experience in Japanese companies, is a pioneering work in mapping explicit and implicit
knowledge, as well as individual, group, and organizational knowledge, into one
matrix describing the dynamics of knowledge creation. They introduced the
socialization, externalization, combination, and internalization processes through the SECI
model, which has become popular in knowledge management today. The SECI model, or
SECI processes, explains the organizational knowledge creation theory and serves as a
method of understanding how an organization creates a new product, new process, or
new organisation structure. This concept is easily understood by focusing on a
project in the system solution business in which the creation of a new product or new
process leads to success. Though many success cases in business activity indicate
efficient and effective implementation of SECI, an innovative organization does not
simply solve the existing problems or process external information to adapt to
environmental changes. In order to find the problem or solution, it recreates a new
environment while producing new knowledge or information from inside the
organization. For this reason, the SECI processes of knowledge management may be
considered comparable to project management for organizing a project and
guiding it to success [16].
Knowledge management (KM) is the process of managing the organization's
knowledge by means of systematic and organizational processes conducted by
employees to enhance organizational performance and create value [1, 2, 3]. The
development of KM, on the other hand, has been driven by practices and developments
in information and data management [4]. Organizations should therefore seek and
share a combination of tacit and explicit knowledge with suppliers and other parties in
the value chain to satisfy customer needs in a highly competitive environment. KM is
more than just the advantage of technology, intranets, and the internet; it includes
organizational issues and assumes information resource management together with the
cultural change that is important in the KM implementation process [17].
For organizations, knowledge management is about the acquisition and
storage of employees' knowledge and making that knowledge accessible to other
employees within the organization [3, 18, 19, 20]. Nonaka and Takeuchi [1] have
extensively studied knowledge in the organization and developed a model that
describes knowledge as existing in two forms. Tacit knowledge is defined as personal,
context-specific knowledge that is difficult to formalize and communicate. Explicit
knowledge is factual and easily codified so that it can be formally documented and
transmitted. Through knowledge management, a company changes individuals'
knowledge into organizational knowledge [21]. Organizational knowledge is
knowledge held by the organization. The organization maintains organizational
knowledge in organizational knowledge resources, which are operated on by human or
computer processes that manipulate the knowledge to create value for the
organization [22]. Nonaka and Takeuchi [1] defined organizational learning as a
process that amplifies the knowledge created by individuals and crystallizes it as part
of the knowledge network of the organization. In a service desk environment, much
of the knowledge comes from experiential learning [23, 24]. A challenge is how to transfer
the knowledge gained by individuals into organizational knowledge.
Phomasakha and Meesad [9] reviewed several knowledge management systems
(KMS) in the literature and proposed a KMS composed of five processes: (1) knowledge
capturing or knowledge discovery; (2) knowledge creation; (3) knowledge inventory or
knowledge storing; (4) knowledge sharing; and (5) knowledge transfer. These processes
work in a cycle, and knowledge sharing and knowledge transfer are conveyed to the
community of practice (CoP), where people know how to use the real knowledge.
However, IT is used to support only knowledge creation and knowledge inventory,
which are conducted into the organizational memory (OM) [9].
For the service desk, the relevant knowledge management approach is that of
problem solving. Gray [25] presented a framework that categorizes knowledge
management according to a problem-solving perspective. The framework defines
four cells according to the type of problem and the process supported. Along the
horizontal axis, two classes of problems are defined: new problems and previously
solved problems. Along the vertical axis, two processes are defined: problem
recognition and problem solving. The primary function of the service desk is problem
solving of both new and previously solved problems. Gray [25] called solving new
problems knowledge creation, and solving previously solved problems knowledge
acquisition.


Several characteristics can be defined that make a KMS successful in the
service desk. The KMS must be able to gather knowledge from humans and other
sources. In an environment of IT outsourcing in the banking business, IT service desk
outsourcing is a crucial function of an IT outsourcing provider who takes over IT
functions from its customer, the bank. However, the bank sets service level
targets based on a service level agreement (SLA) to control the IT service desk
operations [26]. The purpose of the IT service desk outsourcing is to support customer
services on behalf of the bank's technology-driven business goals. The role of the IT
service desk is to ensure that IT incident tickets are owned, tracked, and monitored
throughout their life cycle.

2.2 Root Cause Analysis
A root cause analysis (RCA) is a structured investigation that aims to identify
the true cause of a problem and the actions necessary to eliminate it [27].
RCA is a process for identifying causal factors using a structured approach, with
techniques designed to provide a focus on identifying and resolving problems. RCA
also provides objectivity for problem solving, assists in developing solutions,
predicts other problems, gathers contributing incidents, and focuses attention on
preventing recurrences. The techniques of root cause analysis are often applied as
input to the decision-making process. Root cause analysis identifies and prevents
future errors in a proactive mode [28]. Moreover, root cause analysis reveals the real
reasons for problems [29]. The root causes found by RCA, when eliminated or changed, will
prevent the recurrence of the specific or similar problems; therefore, the benefits
of RCA are improved service level agreement (SLA) attainment and
enhanced quality of services as well as customer satisfaction.
This study develops not only a knowledge management system (KMS), but
also RCA embedded into the system in order to prevent recurring incidents in
the KMRCA IT service desk system. The KMS is designed to be incorporated into the
daily operation of the service desk to ensure high utilization and maintenance of the
knowledge stores [30]. Moreover, the knowledge-based library of RCA models can
be structured as hierarchically organized and interconnected failure trees; the abnormalities in
process operations and output quality can originate from abnormalities in equipment
or in process conditions, possibly due to basic failures [31].







2.3 Case-Based Reasoning
Case-Based Reasoning (CBR) is widely used in incident resolution; it resolves a
new incident by remembering a previous similar situation and by reusing
information and knowledge of that situation [32, 33]. More specifically, CBR uses a
database of incidents to resolve new incidents. The database can be built through the
knowledge management process, or it can be collected from previous cases.
In incident resolution, each case describes an incident and the resolution to that
incident. The reasoner resolves new incidents by adapting relevant cases
from the library [34]. In addition, CBR can learn from previous experiences. When an
incident is resolved, the case-based reasoner can add the incident description and the
solution to the case library. The new case, in general represented as a pair of
incident and resolution, is immediately available and can be considered a new piece
of knowledge.
According to Doyle et al. [35], Case-Based Reasoning is different from other
artificial intelligence (AI) approaches in the following ways:
(a) Traditional AI approaches rely on general knowledge of an incident
domain and tend to solve incidents from first principles, while CBR systems solve new
incidents by utilizing specific knowledge of past experiences.
(b) CBR supports incremental, sustained learning. After CBR solves an
incident, it makes that incident available for future incidents.
In 1977, Schank and Abelson's [36] work brought CBR from research into
cognitive science [37]. They proposed that general knowledge about situations be
recorded as scripts that allow us to set up expectations and perform inferences [36].
Schank [36] then investigated the role that the memory of previous situations and
situation patterns (scripts, MOPs) plays in incident solving and learning [36]. At almost
the same time, Gentner [38] investigated analogical reasoning, which is related to CBR,
while Carbonell [39] explored the role of analogy in learning and plan generalization
[38, 39]. Subsequently, increasing numbers of research papers and applications were
published, and CBR has grown into a field of widespread interest. It has proven itself
to be a methodology suited to solving weak-theory incidents, where it is difficult or
impossible to elicit first-principle rules from which solutions may be created [40].



2.3.1 The CBR Cycle
The CBR process can be represented by a schematic cycle, as shown in
Figure 2-1. Aamodt and Plaza [33] described CBR as a typical 4-RE cyclical process
comprising the following steps:
1) RETRIEVE the most similar cases; during this process, the case-based reasoner
searches the database to find the case that most closely approximates the current situation.
2) REUSE the cases to attempt to solve the incident; this process includes using
the retrieved case and adapting it to the new situation. At the end of this process, the
reasoner might propose a solution.
3) REVISE the proposed solution if necessary; since the proposed solution
could be inadequate, this process can correct the first proposed solution.
4) RETAIN the new solution as part of a new case.



FIGURE 2-1 The Case-Based Reasoning Cycle [33].

This process enables CBR to learn and create a new solution and a new case that
should be added to the case base. It should be noted that the Retrieve process in CBR
is different from retrieval in a database. A database query only retrieves data using
exact matching, while CBR can retrieve data using approximate matching. As shown
in Figure 2-1, the CBR cycle starts with the description of a new incident, which can
be solved by retrieving previous cases and reusing solved cases, if possible giving a
suggested solution or revising that solution, and then retaining the repaired case and
incorporating it into the case base.







However, this cycle rarely occurs without human intervention, which is usually
involved in the Retain step. Many application systems and tools act as case retrieval
systems, such as some help desk systems and customer support systems.
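
To make the retrieve and reuse steps concrete, the following minimal Python sketch shows an approximate retrieval over a small case base of incident/resolution pairs; the word-overlap similarity measure, the example cases, and the function names are illustrative assumptions rather than the thesis implementation.

```python
# Illustrative CBR retrieve/reuse sketch (not the thesis implementation): cases are
# (incident description, resolution) pairs and similarity is simple word overlap.
def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two incident descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

case_base = [  # hypothetical case base
    ("printer not printing on floor 3", "restart the print spooler service"),
    ("cannot login to core banking application", "reset the user password and unlock account"),
    ("disk usage threshold exceeded on file server", "archive old logs and extend the volume"),
]

def retrieve(new_incident: str):
    """RETRIEVE: return the stored case most similar to the new incident."""
    return max(case_base, key=lambda case: similarity(new_incident, case[0]))

def resolve(new_incident: str) -> str:
    """REUSE the retrieved resolution, then RETAIN the new (incident, resolution) pair."""
    _, resolution = retrieve(new_incident)
    case_base.append((new_incident, resolution))   # RETAIN (human revision is skipped here)
    return resolution

print(resolve("user cannot login to banking application after lunch"))
```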
2.3.2 A Classification of CBR Applications
Althoff [41] suggested a classification method for CBR applications, as shown in
Figure 2-2. Under this classification scheme, CBR applications can be classified into
two categories as follows:
(a) Classification tasks
(b) Synthesis tasks



FIGURE 2-2 Classification Hierarchy of Case-Based Reasoning Applications [41].

Classification tasks are very common in business and everyday life. A new case
is matched against those in the case-base from which an answer can be given. The
solution from the best matching case is then reused. In fact, most commercial CBR
tools support classification tasks.
Synthesis tasks attempt to get a new solution by combining previous solutions
and there are a variety of constraints during synthesis. Usually, they are harder to
implement. CBR systems that perform synthesis tasks must make use of adaptation
and are usually hybrid systems combining CBR with other techniques [37].



2.4 ITIL-Based IT Service Desk Function
ITIL (the Information Technology Infrastructure Library) documents industry best
practice guidance. It has proved its value from the very beginning. Initially, the OGC
collected information on how various organisations addressed Service Management,
analysed this, and filtered those issues that would prove useful to the OGC and to its
customers in UK central government. Other organisations found that the guidance was
generally applicable, and markets outside of government were very soon created by
the service industry. Being a framework, ITIL describes the contours of organizing
service management. The models show the goals, general activities, inputs, and
outputs of the various processes, which can be incorporated within IT organisations.
ITIL is a widely accepted approach to IT Service Management (ITSM). It provides a
comprehensive set of best practices for IT service management, promoting a
quality approach to achieving business effectiveness and efficiency in the use of
information systems. ITIL is based on the collective experience of commercial and
governmental practitioners worldwide. This has been distilled into one reliable,
coherent approach, which is fast becoming a de facto standard used by some of the
world's leading businesses [42].
2.4.1 IT Service Desk Function in Incident Management
The ITIL-based IT service desk in the incident management process provides a vital
day-to-day contact point between users, customers, IT services, and third-party support
organisations. Service Level Management (SLM) is a prime business enabler for this
function. Strategically, for internal users and external customers, the IT service desk is
probably the most important function in an IT organisation. For many, the IT service
desk is their only window on the level of service and professionalism offered by the
whole organisation or a department. It delivers the prime service component of
customer perception and satisfaction. The following gives a brief overview of the Incident
Management and Problem Management processes; the details are in the Service
Support book of the ITIL book series.
2.4.2 Incident Management Process
The primary goal of the Incident Management process is to restore normal
service operation as quickly as possible and minimise the adverse impact on business
operations, thus ensuring that the best possible levels of service quality and
availability are maintained. 'Normal service operation' is defined here as service
operation within Service Level Agreement (SLA) limits.
Examples of categories of incidents are as follows:
(a) application: service not available, an application bug or query
preventing the Customer from working, disk-usage threshold exceeded, and so forth;
(b) hardware: system down, automatic alert, printer not printing,
configuration inaccessible, and so forth;
(c) service requests: a request for information, advice, or documentation,
or a forgotten password.
A request for a new or additional service (i.e. software or hardware) is often not
regarded as an incident but as a Request for Change (RFC). However, practice shows
that the handling of failures in the infrastructure and of service requests is similar,
and both are therefore included in the definition and scope of the Incident
Management process. Figure 2-3 shows the Incident Management process overview,
which includes its inputs, outputs, and activities [42].


FIGURE 2-3 Incident Management Process Overview [42].



Inputs are as follows:
(a) Incident details sourced from service desk, networks or computer operations,
(b) configuration details from Configuration Management Database (CMDB),
(c) response from incident matching against problems and Known Errors
resolution details,
(d) response on RFC to effect resolution for incident(s).
Outputs are as follows:
(a) RFC for Incident resolution; updated Incident record, including resolution
and or Work-arounds,
(b) resolved and closed incidents,
(c) communication to Customers,
(d) management information reports.
Incident Management activities are as follows:
(a) Incident detection and recording,
(b) Classification and initial support,
(c) investigation and diagnosis,
(d) resolution and recovery,
(e) Incident closure,
(f) Incident ownership, monitoring, tracking and communication.
Most IT departments and specialist groups contribute to handling incidents at
some time. The service desk is responsible for monitoring the resolution
process of all registered incidents; in effect, the service desk is the owner of all
incidents. The process is mostly reactive. Incidents that cannot be resolved
immediately by the service desk may be assigned to specialist groups. A resolution or
Work-around should be established as quickly as possible in order to restore the
service to Users with minimum disruption to their work. After resolution of the cause
of the incident and restoration of the agreed service, the incident is closed. Figure 2-4
illustrates the activities during an incident life cycle.









FIGURE 2-4 The Incident Life Cycle [42].

Throughout an incident life-cycle it is important that the Incident record is
maintained. This allows any member of the service team to provide a Customer with
an up-to-date progress report. Example update activities include:
(a) update history details
(b) modify status (e.g. 'new' to 'work-in-progress' or 'on hold')
(c) modify business impact/priority
(d) enter time spent and costs
(e) monitor escalation status
An originally reported Customer description may change as the Incident
progresses. It is, however, important to retain the description of the original
symptoms, both for analysis and so that you can refer to the complaint in the same
terms used in the initial report [42].
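
As a concrete illustration of maintaining an incident record through these updates, the sketch below models a record that keeps the original symptom description unchanged while appending status and history changes; the field names and status values are illustrative assumptions, not a prescribed ITIL schema.

```python
# Illustrative incident record sketch; field names and statuses are assumed, not from ITIL.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class IncidentRecord:
    incident_id: str
    original_symptoms: str                  # retained unchanged for later analysis
    status: str = "new"                     # e.g. 'new', 'work-in-progress', 'on hold', 'closed'
    priority: int = 3                       # business impact/priority
    history: List[Tuple[str, str]] = field(default_factory=list)

    def update(self, note: str, status: Optional[str] = None,
               priority: Optional[int] = None) -> None:
        """Append a dated history entry and optionally modify status or priority."""
        if status is not None:
            self.status = status
        if priority is not None:
            self.priority = priority
        self.history.append((datetime.now().isoformat(timespec="seconds"), note))

record = IncidentRecord("INC-0001", "printer on floor 3 not printing")
record.update("assigned to EOS resolver group", status="work-in-progress")
record.update("print spooler restarted, user confirmed fix", status="closed")
print(record.status, len(record.history))   # -> closed 2
```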
Often, departments and specialist support groups other than the service desk are
referred to as second or third line support groups, having more specialist skills, time
or other resources to resolve incidents. In this respect, the service desk would be first
line support. Figure 2-5 illustrates how this terminology relates to the Incident
management activities mentioned in previous paragraphs.





FIGURE 2-5 First, Second, and Third Line Supports [42].

The service desk plays an important role in the Incident Management process,
as follows:
(a) All incidents are reported to and registered by the service desk; where
incidents are generated automatically, the process should still include registration by
the service desk.
(b) The majority of incidents, possibly up to 85% where the service desk is
highly skilled, will be resolved at the service desk.
(c) The service desk is the independent function monitoring the incident
resolution progress of all registered incidents.








Incidents, the result of failures or errors within the IT infrastructure, result in
actual or potential variations from the planned operation of the IT services. The cause
of an incident may be apparent, and that cause can be addressed without the need for
further investigation, resulting in a repair, a Work-around, or an RFC to remove the
error. Successful processing of a Problem record will result in the identification of the
underlying error, and the record can then be converted into a Known Error once a
Work-around and/or RFC has been developed [42]. This logical flow, from an initial
report to the resolution of an underlying problem, is shown in Figure 2-6.



FIGURE 2-6 Relationship between Incidents.

It can be noted that a problem is the unknown underlying cause of one or more
incidents. A Known Error is a problem that has been successfully diagnosed and for which a
Work-around is known. In addition, an RFC is a Request for Change to any
component of the IT infrastructure or to any aspect of the IT services.
When Incident Management finds a Work-around, it will be analysed by the
Problem Management team, who will update the associated Problem record, as shown
in Figure 2-7. An associated Problem record may not exist at this time; for
example, the Work-around may be to send a report by fax due to a communication
line failure, but at this point there may not be a Problem record for the communication
line failure, which the Problem Management team would have to create [42].





FIGURE 2-7 Handling Incident Work-arounds and Resolutions [42].


The process is then that the service desk will link incidents that are clearly the
result of an existing Problem record. It is also possible that the Problem Management
team, while investigating the problem associated with the incident, finds a Work-
around or a resolution for a problem and/or some related incidents [42].
In this case, the Problem Management team should inform the Incident
Management process so that open incidents have their status changed to 'Known
Error' or 'closed' as appropriate. The next part describes the Problem
Management process.
2.4.3 Problem Management Process
The goal of Problem Management is to minimise the adverse impact of incidents
and problems on the business that are caused by errors within the IT Infrastructure,
and to prevent recurrence of incidents related to these errors. In order to achieve this
goal, Problem Management seeks to get to the root cause of incidents and then initiate
actions to improve or correct the situation [42].
The Problem Management process has both reactive and proactive aspects. The
reactive aspect is concerned with solving problems in response to one or more
incidents. Proactive Problem Management is concerned with identifying and solving
problems and Known Errors before incidents occur in the first place. The process is
intended to reduce both the number and severity of incidents and problems on the
business. Therefore, part of Problem Management's responsibility is to ensure that
previous information is documented in such a way that it is readily available to first-
line and other second line staff.
The scope of Problem Management process includes Problem control, error
control and proactive Problem Management. In terms of formal definitions, a
'Problem' is an unknown underlying cause of one or more incidents, and a 'Known
Error' is a problem that is successfully diagnosed and for which a Work-around has
been identified.
Inputs to the Problem Management process are as follows:
(a) Incident details from Incident Management
(b) configuration details from the Configuration Management Database CMDB
(c) any defined Work-arounds from Incident Management.








The major activities of Problem Management are as follows:
(a) Problem control
(b) Error control
(c) Proactive prevention of problems
(d) Identifying trends
(e) Obtaining management information from Problem Management data
(f) Completion of major problem reviews.
Outputs of the process are as follows:
(a) Known Errors
(b) A Request for Change (RFC)
(c) An updated Problem record, including a solution and or any work-arounds
(d) for a resolved problem, a closed Problem record
(e) response from Incident matching to problems and Known Errors
(f) management information.
A problem is a condition often identified as a result of multiple incidents that
exhibit common symptoms. Problems can also be identified from a single significant
incident, indicative of a single error, for which the cause is unknown, but for which
the impact is significant. A Known Error is a condition identified by successful
diagnosis of the root cause of a problem, and the subsequent development of a Work-
around. Structural analysis of the IT infrastructure, reports generated from support
software, and User-group meetings can also result in the identification of problems
and Known Errors. This is proactive Problem Management. Problem control focuses
on transforming problems into Known Errors. Error control focuses on resolving
Known Errors structurally through the Change Management process [42].
Problem Management differs from Incident Management in that its main
goal is the detection of the underlying causes of an incident and their subsequent
resolution and prevention. In many situations this goal can be in direct conflict with
the goals of Incident Management, where the aim is to restore the service to the
Customer as quickly as possible, often through a Work-around, rather than through
the determination of a permanent resolution (for example, by searching for structural
improvements in the IT infrastructure in order to prevent as many future incidents as
possible). In this respect, therefore, the speed with which a resolution is found is only
of secondary (albeit still significant) importance. Investigation of the underlying
problem can require some time and can thus delay the restoration of service, causing
downtime but preventing recurrence [42].

2.5 Technologies for Service Desk
A number of technologies are available to assist the service desk function, each
with its advantages and drawbacks. It is important to ensure that the blend of
technology, process, and service desk staff will meet the needs of both the business
and the User. The technology needs to support business processes, adapting to both
current and future demands. It is also important to understand that with automation
comes an increased need for discipline and accountability. Several service desk
technologies are listed below.
(a) integrated Service Management and Operations Management systems,
(b) advanced telephone systems, for example auto-routing, computer telephony
integration (CTI), and voice over internet protocol (VoIP),
(c) interactive voice response (IVR) systems,
(d) electronic mail, such as voice, video, mobile communication, internet, and email systems,
(e) fax servers (supporting routing to email accounts),
(f) pager systems,
(g) knowledge, search and diagnostic tools, and
(h) automated operations and network management tools.
In automating the agent-centric help desk, many have focused on computer
telephony integration (CTI). The basis of CTI is to integrate computers and
telephones so they can work together seamlessly and intelligently [10]. The major
hardware technologies are as follows: automatic call distributor (ACD), voice
response unit (VRU), interactive voice response unit (IVR), predictive dialing,
headsets, and reader boards [11]. These technologies are used to make the existing
process more efficient by minimizing the agents' idle time and evenly loading the
agents in the help desk. However, these technologies do not address the problem of
knowledge loss when agents leave, nor do they provide information to help the agent
resolve problems.







2.6 IT Service Desk Outsourcing
Information technology (IT) outsourcing has been one of the critical issues in
organization management [43]. Outsourcing dismantles internal IT
departments by transferring IT employees, facilities, hardware leases, and software
licenses to third-party vendors [44]. Hirschheim and Lacity [45] defined IT
outsourcing as the practice of transferring IT assets, leases, staff, and management
responsibility. Linder [46] argued that the concept of
transformational outsourcing is an emerging practice, where companies look
outside for help for more fundamental reasons, including 1) to facilitate rapid
organizational change; 2) to launch new strategies; and 3) to reshape company
boundaries.
Most banking organizations tend to outsource IT work by hiring a
professional company to run their IT operations. The IT service desk should be the
window onto the IT service and professionalism offered by the organisation. The
intellectual capital involved in supporting users and customers is a valuable business asset
and should not be discarded without a clear understanding of the business requirement
[42]. There are two objectives of the IT service desk: one is to provide a single point of
contact for users and customers, and the other is to facilitate the restoration of normal
operational service with minimal business impact on the user or customer, within
agreed service levels and business priorities.
The IT service desk performed by the outsourcing company, called the IT service desk or
Second Level Support (SLS), is the main service function, while the Bank Help Desk or
First Level Support (FLS) provides a day-to-day contact point between customers,
users, the bank's vendors, and IT services. There are two types of incidents: non-IT and
IT incidents. The FLS and the bank's vendors handle non-IT incidents. For an IT
incident, the FLS assigns it to the IT service desk or Second Level Support (SLS) to
resolve, and the SLS may assign it to Third Level Support teams, including AMS teams,
EOS teams, NWS teams, and vendor support teams. Service Level Management
(SLM) is a prime business enabler for this function.
IT service desk outsourcing is not an actual single point of contact [9], although
general service desks or help desks serve an important role in the information
technology department by providing the primary point of contact for users to reach
analysts who help them resolve problems with information technology, including
hardware, software, and networks [30]. This is because the IT service desk takes
in the incidents assigned from the bank help desk or First Level Support (FLS) and does
not directly contact users or customers in the first instance. Hence, the IT service
desk sits between the FLS and the Third Level Support (TLS).
The authorized third level support teams should be allowed access so that they
can update the service desk records. The process of updating the records will ensure
that resource usage is properly accounted for. However, the organization should
closely monitor what its supplier is doing.

2.7 Decision Support System
In the past decade, contributions of decision support systems (DSS) for resource
assignment have been proposed in several areas. In R&D project selection, Sun [47]
presented a hybrid knowledge and model approach that integrated mathematical
decision models for the assignment of external reviewers to R&D project proposals;
the purpose of the model was to assign the most appropriate experts to the relevant
proposals. Before that research, Fan [48] proposed a decision support system for
proposal grouping, a hybrid approach in which knowledge rules were designed to deal
with proposal identification and proposal classification, and a genetic algorithm was
used to search for the expected groupings. Next, in the area of decision support for the
single-depot vehicle rescheduling problem, Li [49] presented a system whose aim was
to minimize operation and delay costs; it was designed to obtain optimal vehicle
assignments and reassignments. In Navy work, the problem of assigning navy
personnel to jobs was addressed by a guided design search in the interval-bounded
sailor assignment problem proposed by Lewis [50]; the paper offers an expanded
interval-bounded network flow model of the sailor assignment process, creating teams
of skilled sailors to be assigned to ships. In 2003, a decision support system for
multi-attribute utility evaluation based on imprecise assignments was proposed by
Jiménez et al. [51]; the paper describes a decision support system based on an additive
or multiplicative multi-attribute utility model for identifying the optimal strategy. Last
but not least, in research on a rule-based system for the automatic assignment of
technicians to service faults, Lazarov and Shoval [52] presented a model and prototype
system for the assignment of technicians to handle computer faults, including
hardware, software, and communications. Selection of the technician most suited to
deal with the reported failure was based on assignment rules, which are correlations
between the nature of the fault and the technicians' skills. The model was evaluated by
a simulation test, comparing the results of the model's assignment process against
assignments carried out by experts. The results showed that the system's assignments
were better than the experts'.
The technologies that support service desks are described in Section 2.5.
However, those technologies do not address the issue of resolving
performance dropping due to incorrect assignments. Incorrect assignment still takes
place because of human error, since the assignment of a resolver group to deal with
an incident is performed manually by IT service desk agents. In fact, technologies for
service desk management do not focus on automatic assignment, although the
ITIL framework guides the IT service desk outsourcing to resolve incidents by putting
in place best practice processes for IT service desk decision making regarding
assignment and reassignment. This thesis proposes a function for automatic resolver
group assignment based on text mining discovery methods, implementing
the strongest method as well as validating the selected method of the model.
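
For illustration, the minimal Python sketch below frames resolver group assignment as text classification over incident descriptions; the incident texts, group labels, and the scikit-learn TF-IDF plus decision tree pipeline are assumptions for the example, whereas the thesis itself uses word extraction and decision trees within WEKA.

```python
# Illustrative sketch of automatic resolver group assignment as text classification.
# Data and pipeline are hypothetical; the thesis used word extraction + decision trees in WEKA.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

incidents = [
    "core banking application error when posting transaction",
    "branch network link down, cannot reach data center",
    "batch job failed on mainframe operating system",
    "ATM vendor device out of service",
]
resolver_groups = ["IE-AMS", "NWS", "EOS", "VEN"]   # hypothetical label per incident

model = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(incidents, resolver_groups)

new_incident = ["application error in core banking posting"]
print(model.predict(new_incident)[0])   # expected to suggest the IE-AMS group
```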

2.8 Classification trees
A decision tree is a simple structure in which each branch node
represents a choice between a number of alternatives, and each leaf node represents a
classification or decision. An ordinary tree consists of one root, branches, nodes
(places where branches divide), and leaves. In the same way, a decision tree
consists of nodes, which stand for circles or cones, and branches, which stand for the
segments connecting the nodes. A decision tree is drawn from left to right or from the
root downwards, which makes it easier to draw. The first node is the root. The end of
the chain root - branch - node - ... - node is called a leaf. From each internal node (i.e. not a
leaf) two or more branches may grow. Each node corresponds to a certain
characteristic, and the branches correspond to ranges of values. These ranges of
values must give a partition of the set of values of the given characteristic [53].


Decision tree algorithms can be applied to solve the problem under
discussion. Decision trees represent a supervised approach to classification.
The decision trees studied here are from WEKA, a suite of machine learning software
written in Java and developed by the University of Waikato, New Zealand, which is
described in a book on data mining with practical machine learning tools and
techniques using the WEKA software [54].
The study implemented several decision trees, including Decision Stump, ID3,
J48, NBTree, Random Forest, Random Tree, and REPTree. Brief descriptions of the
various decision tree methods are given below.
2.8.1 Decision Stump
A Decision Stump [54] consists of a decision tree of only a single depth,
where the split at the root level is based on a specific attribute/value pair. A
decision stump is a weak machine learning model. Such models are often used as
components in ensemble learning techniques such as bagging and boosting.
2.8.2 ID3
ID3 [55] constructs simple decision trees and can be described using the
information gain criterion. At each node it splits the data on an attribute, and the exact
choice is determined by examining the entropy of the resulting subsets: the split that
yields the largest information gain, i.e. the largest decrease in entropy, is executed.
However, the greedy approach it uses cannot guarantee that better trees have not been
overlooked.
2.8.3 J48
A J48 [55, 56] classifier generates an unpruned or a pruned C4.5 decision tree,
implemented as a slightly modified C4.5 in the WEKA machine learning suite. The
C4.5 algorithm generates a classification decision tree for the given dataset by
recursive partitioning of the data. The tree is grown using a depth-first strategy. The
algorithm considers all the possible tests that can split the data set and selects the test
that gives the best information gain. For each discrete attribute, one test with as many
outcomes as the number of distinct values of the attribute is considered. For each
continuous attribute, binary tests involving every distinct value of the attribute are
considered.









2.8.4 NBTree
The naïve Bayesian tree learner, NBTree [57], combines naïve Bayesian
classification and decision tree learning. In an NBTree, a local naïve Bayes model is
deployed on each leaf of a traditional decision tree, and an instance is classified using
the local naïve Bayes model on the leaf into which it falls. The algorithm for learning
an NBTree is similar to C4.5. After a tree is grown, a naïve Bayes model is constructed
for each leaf using the data associated with that leaf. An NBTree classifies an example
by sorting it to a leaf and applying the naïve Bayes model of that leaf to assign a class
label to it. NBTree frequently achieves higher accuracy than either a naïve Bayesian
classifier or a decision tree learner.
2.8.5 Random Forest
A random forest [58] is an ensemble of unpruned classification or regression
trees induced from bootstrap samples of the training data, using random feature
selection in the tree induction process. Prediction is done by aggregating the
predictions of the ensemble: majority vote for classification or averaging for
regression. A random forest generally exhibits a substantial performance improvement
over a single tree classifier such as CART or C4.5. Its generalization error depends on
the strength of the individual trees in the forest and the correlation between them.
2.8.6 Random Tree
A random tree [54] is a tree drawn at random from a set of possible trees. Here
"random" means that each tree in the set has an equal chance of being sampled; in
other words, the distribution of trees is uniform. Random trees can be generated
efficiently, and combining large sets of random trees generally leads to accurate
models. Random tree models have been developed extensively in the field of machine
learning in recent years.
2.8.7 REPTree
A REPTree is a fast decision tree learner that builds a decision/regression tree
using information gain as the splitting criterion and prunes it using reduced-error
pruning. It sorts values for numeric attributes only once. Missing values are dealt with
using C4.5's method of fractional instances.


2.9 Summary
The objectives of the thesis are relevant to two areas. The first is the
performance evaluation of the knowledge management system, based on the search
knowledge function, in terms of speed in resolving incidents; the second is the
automatic resolver group assignment based on text mining discovery methods, namely
decision tree algorithms. The following is a summary of the review.
2.9.1 Knowledge management system and its performance evaluation
This section summarizes the reviews of knowledge management, root cause
analysis, case-based reasoning, and the ITIL-based IT service desk, which includes the
service desk function, incident management, and problem management, as well as the
technologies for the service desk, in particular the CTI system used in the IT service
desk system.
Knowledge can be categorized into two different types, tacit and explicit, which
also differ in their level of structure in the organization [1]. Knowledge management
(KM) is the business process of managing the organization's knowledge by means of
systematic and organization-specific procedures for acquiring, organizing, sustaining,
applying, sharing, and renewing both tacit knowledge and explicit knowledge by
employees to enhance organizational performance and to create value [2, 3].
In highly competitive business environments, managing tacit knowledge, which
includes the true value-added intellectual assets of an organization, is an essential task
for maintaining the organization's core competency [4]. In addition, the knowledge
base is able to support the service desk environment. Thus, it can be concluded that the
knowledge management system (KMS) is composed of five processes: (1) knowledge
capturing; (2) knowledge creation; (3) knowledge storing, or knowledge inventory;
(4) knowledge sharing; and (5) knowledge transfer, which are elaborated within the
community of practice, because this is how people develop real knowledge. Both
knowledge creation and knowledge inventory are related to IT; together they become
organizational memory (OM), which can be a source of the organization's competitive
advantage [9].
Knowledge management is a discipline that provides strategy, process, and
technology to share and leverage information and expertise that will increase humans'
level of understanding so that they can more effectively solve problems and make
decisions [20].







According to the ITIL guidance processes, the main purpose of incident
management is to minimise interruption to business activities and ensure the
availability of services. In addition, under the ITIL best practice approach, regardless
of who actually manages the various tasks, the service desk owns the entire process.
It appears unlikely that the service desk's role in incident management will extend
beyond being an interface between internal users and external customers [8].
The intention of this thesis is to propose the model of knowledge management
with root cause analysis, called the KMRCA IT service desk, and to develop the
prototype of the KMRCA IT service desk system for IT service desk outsourcing. The
system is able to improve the performance of the IT service desk function in terms of
speed in resolving incidents. Case-based reasoning, as covered in the literature review,
can be applied to search for similar previous cases to resolve an incident.
2.9.2 Decision support system of automatic resolver group assignment
This section summarizes the reviews of decision support systems focusing on
resource assignment in various areas. Although there are several papers on decision
support systems for resource assignment, no research has applied text mining
discovery methods. For example, the research on automatic assignment of technicians
to service faults [52] used a rule-based system in which the rules were created by
experts who are knowledgeable about how to solve various service faults.
The KMRCA IT service desk system requires an automatic resolver group
assignment function. The function attempts to match the most suitable resolver group
with the symptoms of the incident. Text mining discovery methods are used to find
the strongest method for the model that classifies the suitable resolver group. In fact,
text mining is data mining applied to information extracted from text. It can be broadly
defined as a knowledge-intensive process in which a user interacts with a document
collection over time by using suitable analysis tools.

CHAPTER 3
METHODOLOGY

This chapter outlines the research process, provides a rationale for the research
methodologies that were chosen, and demonstrates the proposed model and a
prototype of the KMRCA IT service desk system.

3.1 Research Process
The following are the operational steps of the research process, which this thesis
carried out step by step.
3.1.1 Formulate research problems
The thesis reviewed the literature described in Chapter 2 and then formulated the
problems and identified the hypotheses that are introduced in Chapter 1.
3.1.2 Conceptualize a research design
The purpose of the thesis is to evaluate the performance of the KMRCA IT
service desk system by using design of experiments and simulation. The main
function of the system is the Search knowledge function: when the agents use this
function, the system can resolve incidents faster than the previous system. A 2^k
factorial design of experiments is widely used to find the factors that influence the
defined variables, used as key performance indicators (KPIs). A simulation study is
used to represent both systems, and the simulation results for the two systems are
compared in terms of speed in resolving incidents.
3.1.3 Construct tools for data collection
The thesis is an empirical study; the sample of 14,440 incident records was
collected over 4 months (April-July 2006) from the Tivoli CTI system of the IT
service desk outsourcing in the bank. The selected tools used to analyse the data
include the Arena simulation software package, the Input Analyzer in Arena, Minitab
15 statistical analysis software, WEKA machine learning, and MS Excel for
spreadsheets and data filtering.




3.1.4 Select a sample
This step selects a sample, since the accuracy of the findings largely depends
upon the way the sample is selected. The thesis selected two samples to support the
two objectives of the study. Firstly, a sample of 12,198 calls was used for the
performance evaluation in the simulation study and design of experiments. Secondly,
the full sample of 14,440 cases was used for the text mining discovery methods of the
automatic resolver group assignment approach.
3.1.5 Write a research proposal
After all the preparatory work is done, this step puts everything together in a
way that provides adequate information for the advisor(s) and others. The thesis was
proposed under the topic Knowledge Management System Improvement towards
Service Desk of IT Outsourcing in Banking Business: Evaluation its Performance.
The final title is the same as the proposed topic but without "Evaluation its
Performance".
The review of the literature takes place not only in the first step of formulating a
research problem, but also in several other steps, including research design, data
collection, and writing the thesis document, because new literature continues to appear
throughout the research.

3.2 Information Collection and Requirement Analysis
3.2.1 Information Collection
The objectives of the study are to evaluate the performance of the KMRCA IT
service desk system, and the research hypothesis is that the average time in resolving
incidents of all severities, excluding severity 1, is lower than in the previous IT service
desk system. Thus, the underlying incident data of 12,198 calls were collected from
the Tivoli CTI system of the IT service desk for four separate weeks randomly selected
from the four-month period from April to July 2006. A sample of the incident data is
shown in Appendix A, A-1, Figure A-1.
In the sample, the columns contain various pieces of information about each IT
incident, including the ticket number, open date, open time, resolve date, resolve time,
severity, system-type failure, assigned resolver group, incident description, incident
resolution, caller details, and so forth. In line with the research objectives, the thesis
focuses on the performance evaluation, for which the relevant data are the columns on
time and severity.





3.2.2 Requirement Analysis
The data are analysed according to the objectives of the performance evaluation
using computer simulation. The study selected the Arena discrete-event simulation
software package to analyse the data and to build the conceptual model for the
computer simulation.
3.2.2.1 The rate of incoming calls
The data of particular interest are the inter-arrival times of calls coming to the
bank help desk, where the agents create the IT incident tickets sent to the IT service
desk to resolve, and the service time taken to resolve each incident. The analysis found
that the rates of incoming calls during business days and holidays are different.
Table 3-1 shows the rate of calls by time of day for business days and holidays.

TABLE 3-1 The Rate of Incident Calls during Time in Business Day and Holiday
Time Business Day (calls/hr.) Holiday (calls/hr.)
8:00 - 10:00 25.75 1.68
10:01 - 12:00 18.15 2.53
12:01 - 13:00 8.83 0.92
13:01 - 15:00 16.38 2.79
15:01 - 17:00 12.55 2.28
17:01 - 18:00 6.16 0.68


3.2.2.2 The percentage of incident calls by severity
Next, the percentage of incident calls by severity, i.e. the frequency of incident
calls at each severity level, is shown in Table 3-2.

TABLE 3-2 Percentage of Incident Calls by Severity
Severity Number of Calls Percentage (%)
1 86 0.71
2 395 3.24
3 11,680 95.75
4 37 0.30


As shown in Table 3-2, ranking the calls by number and percentage gives
Severity 3 (11,680, 95.75%), Severity 2 (395, 3.24%), Severity 1 (86, 0.71%), and
Severity 4 (37, 0.30%).



3.2.2.3 Incident Classification
The incidents are classified into five categories, as shown in Table 3-3, with their
frequency of occurrence recorded by the Tivoli CTI system. A Pareto phenomenon is
observed whereby the top three problem categories account for 98.02% of the total
types of calls received.

TABLE 3-3 Classification of Calls by Incident Category
Incident Category No. of Incidents Percent of Frequency
1) Hardware 6,454 52.91
2) Software 3,981 32.63
3) Network 1,522 12.48
4) Power Supply 211 1.73
5) Operations 30 0.25

3.3 Constructing an Instrument for Data Collection
3.3.1 Goodness-of-fit Test Method
Since the data consist of times between arrivals and service times for resolving
incidents, it is necessary to understand the methodology for fitting curves to the nature
of the data, so that the fitted distributions can represent the data patterns in the
computer simulation.
The quality of a curve fit is based primarily on the square error criterion, which
is defined as $\sum_i \left[ f_i - f(x_i) \right]^2$, summed over all histogram intervals. In this
expression $f_i$ refers to the relative frequency of the data for the $i$th interval, and
$f(x_i)$ refers to the relative frequency for the fitted probability distribution function.
This last value is obtained by integrating the probability density across the interval.
If the cumulative distribution is known explicitly, then $f(x_i)$ is determined as
$F(x_i) - F(x_{i-1})$, where $F$ refers to the cumulative distribution, $x_i$ is the right
interval boundary and $x_{i-1}$ is the left interval boundary. If the cumulative
distribution is not known explicitly, then $f(x_i)$ is determined by numerical
integration.
The Chi-square and Kolmogorov-Smirnov results provide goodness-of-fit tests
for non-integer data. These results are presented in the form of a p-value, which is the
largest value of the type-I error probability that allows the distribution to fit the data.
The higher the p-value, the better the fit. For example, if the p-value is greater than
0.05, then the null hypothesis of a good fit would not be rejected at the 0.05 level.
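As a reminder of the standard form of the first of these tests (a textbook formulation,
not specific to the Input Analyzer), the chi-square goodness-of-fit statistic compares
observed and expected interval counts:

\[ \chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}, \qquad E_i = n\, f(x_i), \]

where $O_i$ is the observed count in interval $i$, $E_i$ is the count expected under the
fitted distribution, $n$ is the sample size, and $k$ is the number of intervals. The
statistic is referred to a chi-square distribution whose degrees of freedom are reduced
by the number of parameters estimated from the data, and the resulting p-value is
interpreted as described above.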





Table 3-4 summarizes the probability distributions that can be fitted to the data.
Each enabled distribution function is fitted by the Input Analyzer, and the summary
file provides the most complete compilation of information describing the curve fit.
Selecting the Fit All summary item causes a dialog to appear showing the results of
the best-fit calculations. All of the applicable distribution functions are listed, along
with their corresponding square errors, ranked from best to worst. This listing permits
one function to be compared with another for the current data file.

TABLE 3-4 Summary of Probability Distributions for Computer Simulation
Distribution          Parameters
Beta (BETA)           Beta, Alpha
Continuous (CONT)     CumP1, Val1, ..., CumPn, Valn
Discrete (DISC)       CumP1, Val1, ..., CumPn, Valn
Erlang (ERLA)         ExpoMean, k
Exponential (EXPO)    Mean
Gamma (GAMM)          Beta, Alpha
Johnson (JOHN)        Gamma, Delta, Lambda, Xi
Lognormal (LOGN)      LogMean, LogStd
Normal (NORM)         Mean, StdDev
Poisson (POIS)        Mean
Triangular (TRIA)     Min, Mode, Max
Uniform (UNIF)        Min, Max
Weibull (WEIB)        Beta, Alpha


3.3.2 Goodness-of-fit Test of Time between incident arrivals
A discrete event simulation package called Arena [59] is used to imitate the
conceptual models of IT Service Desk system and KMRCA IT service desk system.
A full exposition of the simulation model is available in Simulation with Arena.
However, the time between arrivals of incident calls is analysed using the Input
Analyzer, which is a standard component of the Arena environment. Figure 3-1 shows
the pattern of the time between arrivals of incident calls fitted to a Weibull distribution.


FIGURE 3-1 Input Analyzer Results
The distribution summary from the Input Analyzer is as follows:
(a) Distribution : Weibull
(b) Expression : WEIB (3.64, 0.905)
(c) Square Error : 0.001045
(d) Chi-Square test, corresponding p-value : 0.706
The Input Analyzer can be used to determine the quality of fit of probability
distribution functions to the input data and to compare distribution functions by square
error (Sq. Error), as shown in Table 3-5.

TABLE 3-5 Comparison of Square Error by Function
Function Sq. Error
Weibull 0.00104
Gamma 0.00161
Lognormal 0.00181
Exponential 0.00279
Erlang 0.00279
Beta 0.00360
Normal 0.07030
Triangular 0.10300
Uniform 0.13200





However, the lowest square error does not mean that the distribution function is
suited to the data until the p-value from the goodness-of-fit test is evaluated. The
goodness-of-fit tests use the following hypotheses:
(a) H0: The distribution adequately describes the data
(b) H1: The distribution does not adequately describe the data
Under these hypotheses, if the p-value > 0.05 at the 95% confidence level, H0 is not
rejected, which means the distribution adequately fits the data in the test case.
Another view of the goodness-of-fit test is given by a probability plot. Figure 3-2
shows the probability plot of the time between incident arrivals; the graph was
generated with the Minitab 15 statistical analysis software package. Since the data
points follow the straight line, the p-value is > 0.250, and the AD statistic (the
Anderson-Darling statistic measures how well the data follow a particular distribution)
is 0.424, it can be concluded that, at an alpha level of 0.05, the Weibull distribution
provides a good fit for the time between incident arrivals. Therefore, the fitted
distribution can be used in the simulation instead of a default exponential inter-arrival
time.
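For completeness, the fitted expression WEIB(3.64, 0.905) corresponds to the
two-parameter Weibull density, assuming Arena's parameter order of scale $\beta$ first
and shape $\alpha$ second, as listed in Table 3-4:

\[ f(x) = \frac{\alpha}{\beta} \left( \frac{x}{\beta} \right)^{\alpha-1} e^{-(x/\beta)^{\alpha}}, \qquad x > 0, \]

with $\beta = 3.64$ and $\alpha = 0.905$ for the time between incident arrivals.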

[Probability plot of call arrivals; Weibull - 95% CI: Shape 1.011, Scale 3.318, N 98, AD 0.404, P-Value > 0.250]


FIGURE 3-2 Probability Plot of Time between Arrivals

The simulation model was verified to ensure that the IT service desk system
works properly in terms of Arena functionalities and the entities of the incident calls
follow the same path as described in the conceptual model shown in Appendix C, C-1.


The verification was done using the trace element, which is adopted within a
discrete model to generate a detailed trace report of entity processing. The simulation
was run for 4 replications of 22 working days during prime time, from 8:00 a.m. to
8:00 p.m. The trace output allows the sequence of an entity to be followed as it flows
through the system, from entity creation until entity disposal.
Each entity is an incident ticket whose process flow follows the intended design.
To verify the output, the model was run with different numbers of replications to
confirm that it works properly under different conditions. After verifying the operation
of the simulation model, it was validated. In order to reduce variation, four replications
were conducted with different random number streams on the simulation model.
A t-test with a 95% confidence level was conducted to compare the results of the
simulation model with the results recorded for the actual system, based on the data
collected from the Tivoli CTI system. For each variable, the null hypothesis of no
difference between the systems could not be rejected at the 95% confidence level,
which indicates that the simulation model adequately represents the actual system's
behaviour.
3.3.3 Goodness-of-fit Test of Service Time in Resolving Incidents
The simulation process requires an expression for the distribution fitted to the
time taken to resolve incidents; therefore, the resolving time by severity was analysed
to fit a suitable distribution using the Input Analyzer. Table 3-6 shows the results of the
goodness-of-fit tests.

TABLE 3-6 A Good-of-fit Test of Time in Resolving Incidents by Severity
Severity Distribution Expression Sq. Error p-value
1 Lognormal LOGN (2.37, 4.74) 0.002295 0.158
2 Lognormal LOGN (4.19, 6.46) 0.003581 0.078
3 Lognormal LOGN (7.87, 11.1) 0.015237 0.053
4 Beta 144*BETA(0.248,1.27) 0.037923 0.039

Likewise, the distribution fit for the service time in resolving incidents can be
assessed from a probability plot by viewing how the points fall about the fitted line,
as shown in Figure 3-3.








[Minitab probability plots (95% CI) of the resolving time by severity:
Severity 1, Lognormal: Loc 0.1583, Scale 1.099, N 84, AD 0.543, p-value 0.158
Severity 2, Lognormal: Loc 0.8753, Scale 1.071, N 90, AD 0.669, p-value 0.078
Severity 3, Lognormal: Loc 1.212, Scale 0.9604, N 89, AD 0.736, p-value 0.053
Severity 4, Beta: Shape 0.6166, Scale 38.25, N 37, AD 0.842, p-value 0.039]

FIGURE 3-3 Probability Plot for Resolving Time by Severity

3.4 The Proposed KMRCA IT Service Desk Framework
This section illustrates a typical IT service desk system, the conceptual model of the
IT service desk for simulation modelling, the KMRCA IT service desk framework,
the Incident management and Problem management processes, the search knowledge
procedure, and a comparison of the typical IT service desk and KMRCA IT service
desk systems.
3.4.1 A Typical IT Service Desk Outsourcing
The IT service desk is a crucial function of an IT outsourcing provider who takes
over IT functions from a bank. The bank sets service level targets based on the service
level agreement (SLA) to control the IT service desk operations. The purpose of the
IT service desk outsourcing is to support customer services on behalf of the bank's
technology-driven business goals.


The role of the IT service desk is to ensure that IT incident tickets are owned,
tracked, and monitored throughout their life cycle. Figure 3-4 shows a Typical IT
service desk outsourcing overview.



FIGURE 3-4 A Typical IT Service Desk Outsourcing Overview

There are three main agent levels in the end-to-end incident resolution process:
(1) first level support (FLS), the Bank help desk agents; (2) second level support (SLS),
the IT service desk outsourcing agents; and (3) third level support (TLS), the resolver
groups. This thesis focuses on the IT service desk outsourcing, which includes the IT
service desk agents and the technical resolver groups. The Tivoli CTI technology
serves as the interface among the three levels of agents so that they can work
simultaneously on the current incident ticket and resolve it by the target time. The
internal users or external customers contact the FLS agents directly with various
incident reports. They can contact the FLS in several ways, such as by telephone call,
fax, email, and the internet. The FLS divides the incident reports into two types
depending on whether the incident is IT related: Non-IT incidents and IT incidents.
Both are reported to the FLS agents, who then review the reports in terms of incident
type, assign an initial severity, complete the necessary incident descriptions, and open
the tickets one by one without duplication.







The Non-IT incident tickets are resolved by the bank's resolvers, while IT
incident tickets are assigned to the IT service desk outsourcing, the SLS agents, to
resolve. Consequently, the SLS agents review and validate the assigned IT incident
ticket for adequacy and correctness based on the outsourcing scope, incident types,
and severity criteria. If the assignment is not correct, both the FLS and SLS are
requested to resolve the issue. A valid IT incident ticket may be resolved by the SLS
agents using the knowledge management system [9] or be assigned to the resolver
groups (TLS) to resolve the incident. TLS agents include five main resolver groups:
(1) EOS, (2) IE-AMS, (3) NWS, (4) OS-EC, and (5) VEN.
To resolve incidents effectively, IT service desk agents act according to the
Incident management and Problem management processes, whose details are
described in the next section. The IT service desk agents take ownership of the
assigned incident and attempt to resolve it by searching for essential information from
several sources such as the data store, file server, and the Internet. If the incident needs
a highly technical resolver, the IT service desk agent assigns it to the technical resolver
groups. Figure 3-5 shows the information flow of the IT service desk.

[Figure content: Customers / Users; Bank Help Desk (First Level Support); IT Service Desk of IT Outsourcing (Second Level Support) with access to the Internet, File Server, and Data Store; decision "Assign Resolver?" leading to the SLS Resolution or to TLS; Resolver Groups (Third Level Support): 1) AMS Support, 2) EOS Support, 3) NWS Support, 4) Operation Support, 5) Vendor Support]


FIGURE 3-5 Information Flow of IT Service Desk




3.4.2 Conceptual Model of IT Service Desk
Figure 3-6 shows the conceptual model of the IT service desk system, in which
the incidents flow through the three agent levels: 1) FLS agents; 2) SLS agents; and
3) resolver groups. The conceptual model is conveyed to the simulation model.

FIGURE 3-6 A Conceptual Model of IT Service Desk System

The determination of severity, based on the impact on the bank's business and
the urgency required, is assigned according to the following criteria.
Severity 1 means a critical severity problem (a major system, application, or
network failure impacting a large number of users and having a critical impact on the
users' business) where no workaround is available.
Severity 2 means a high severity problem and a workaround may be available.
In other words, one component of a system application or network has failed
impacting on a small number of users; or a fault which may have a potential critical
impact if not resolved quickly; or a problem impacting 1 user and the impact is
significant, such as end of month financials.
Severity 3 means a moderate severity problem (impact is moderate and only
to 1 user) and a workaround is available.
Severity 4 means a low severity problem (no impact to the user) and a
workaround is available.
According to the severity criteria, when the FLS agents create the incident ticket
they also assign its initial severity. If the incident ticket is IT related, a so-called IT
incident ticket, it is assigned to the SLS agents to resolve. The SLS agents check
whether the incident ticket is within the outsourcing scope and whether the assigned
severity is correct, and then attempt to resolve the incident. If the ticket is solved at the
second level, the incident is closed. If the incident cannot be completed at the second
level, it is assigned to the relevant technical resolver group responsible for resolving
the incident.
3.4.3 KMRCA IT Service Desk Outsourcing Model
Because the Bank owns the first level support, the Bank help desk agents initiate
support and provide a vital day-to-day contact point between internal users and
external customers. Therefore, the IT service desk agents are not quite a single point
of contact (SPOC) [9], and the resolver groups with more specialist skills have limited
time and resources to resolve the assigned incidents. The IT service desk system also
suffers from high staff turnover, especially among the technical staff of the IT service
desk, and from recurring incidents. Thus, the thesis proposes the framework of the
KMRCA IT service desk shown in Figure 3-7.



FIGURE 3-7 A Proposed Framework of KMRCA IT Service Desk System

The model embeds the KMRCA into the IT service desk functions of the
outsourcing. In fact, the KMRCA is the KMS of organizational outsourcing memory
that provides resolutions and the results of root cause analysis in order to prevent
recurring incidents or problems. Besides, the KMS enables IT service desk agents to
increase the speed of resolving incidents. With the KMRCA, the agents can search for
similar cases in the knowledge database so that the time taken to resolve an incident
is reduced. Figure 3-8 shows the model of the KMRCA IT service desk outsourcing.



FIGURE 3-8 Information Flow of KMRCA IT Service Desk System

Figure 3-8 shows the information flow of the KMRCA IT service desk. The
KMRCA database includes knowledge of incident resolutions, results of root cause
analysis, and so forth. The IT service desk resolves incidents by accessing many
different information and knowledge sources via the KMRCA.
The KMRCA IT service desk approach serves as an intermediary between the
service desk agent and all data, information, and knowledge sources. The sources
range from files on the agent's computer and access to the database to communication
with other agents and access to the Internet. Case-based reasoning systems enable help
desks to store and share knowledge in the form of cases, but resolving an incident
remains the responsibility of the IT service desk agent.
However, the incident may be assigned to the relevant resolver group to resolve.
No matter who resolves the incident, the resolution is recorded and kept in the
knowledge database after the resolution is complete.






3.4.4 Incident Management and Problem Management processes
The IT service desk function based on ITIL lies within the Incident management
process. The implementation of the KMS IT service desk system extends the process
to incident management and problem management performed by the IT service desk
agents; the process is shown in Figure 3-9. A short process flow shows several
activities of the incident management and problem management processes. The details
of the Incident management and Problem management processes are shown in
Appendix B.



FIGURE 3-9 KMRCA IT Service Desk Process



3.4.5 Search Knowledge Procedure of KMRCA IT Service Desk
When IT service desk agents use the KMRCA IT service desk system, they
perform searches using the search knowledge procedure shown in Figure 3-10.




FIGURE 3-10 Search Knowledge Procedure

The narrative of the Search Knowledge Procedure has the following steps:
1) The IT service desk agent reviews the incident information and the urgency required.
2) The IT service desk agent determines whether the ticket requires escalation.
(a) If yes, proceed to Step 3: escalate the ticket to the relevant resolver groups.
(b) If no, proceed to Step 4: search for similar cases in the KMRCA database.
3) The IT service desk agent escalates the ticket to the relevant resolver groups.
4) The IT service desk agent searches for similar cases in the KMRCA database
(a small illustrative sketch follows this list).
5) Was the incident resolved?
(a) If yes, proceed to Step 6: provide the resolution and update the KMRCA repository.
(b) If no, proceed to Step 4: search for similar cases in the KMRCA database.
6) The IT service desk agent provides the resolution to the FLS or Bank help desk and
updates the KMRCA repository.
7) The resolver group reviews the ticket assigned by the SLS.
8) The resolver group determines whether the incident can be resolved without the KMRCA.
(a) If yes, proceed to Step 9: resolve the incident without the KMRCA.
(b) If no, proceed to Step 10: search for similar cases in the KMRCA database.
9) The resolver group resolves the incident without the KMRCA.
10) The resolver group searches for similar cases in the KMRCA database.
11) Was the incident resolved?
(a) If yes, proceed to Step 12: provide the resolution and update the KMRCA repository.
(b) If no, proceed to Step 10: search for similar cases in the KMRCA database.
12) The resolver group provides the resolution to the FLS or Bank help desk and updates
the KMRCA repository.
13) End
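The sketch below is a minimal, self-contained illustration (not the thesis's actual
implementation; the class, field, and sample-case names are hypothetical) of the core
of Steps 4 and 10: ranking stored KMRCA cases by how many of the query keywords
appear in their descriptions, so that the agent can review the closest matches first.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class KnowledgeSearch {

    /** A stored case in the KMRCA knowledge database (hypothetical structure). */
    static class KnowledgeCase {
        final String ticketId, description, resolution;
        KnowledgeCase(String ticketId, String description, String resolution) {
            this.ticketId = ticketId;
            this.description = description;
            this.resolution = resolution;
        }
    }

    /** Count how many of the query keywords occur in the stored case description. */
    static int matchScore(KnowledgeCase c, List<String> keywords) {
        int score = 0;
        String text = c.description.toLowerCase();
        for (String kw : keywords) {
            if (text.contains(kw.toLowerCase())) {
                score++;
            }
        }
        return score;
    }

    /** Return the stored cases ordered by descending keyword match score. */
    static List<KnowledgeCase> search(List<KnowledgeCase> database, List<String> keywords) {
        List<KnowledgeCase> ranked = new ArrayList<>(database);
        ranked.sort(Comparator.comparingInt((KnowledgeCase c) -> matchScore(c, keywords)).reversed());
        return ranked;
    }

    public static void main(String[] args) {
        List<KnowledgeCase> database = new ArrayList<>();
        database.add(new KnowledgeCase("T001", "Branch printer offline, spooler error", "Restart the print spooler service"));
        database.add(new KnowledgeCase("T002", "ATM network link down", "Escalate to the NWS resolver group"));

        for (KnowledgeCase c : search(database, Arrays.asList("printer", "offline"))) {
            System.out.println(c.ticketId + " -> " + c.resolution);
        }
    }
}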


3.4.6 Comparison of Typical and KMRCA IT Service Desk systems
The comparison of a typical IT service desk against the KMRCA IT service desk
is shown in Figure 3-11. The difference between the two is that the KMRCA IT
service desk includes the KMRCA system as a central point of information: IT service
desk agents search for information through the KMRCA. The KMRCA system
connects to several sources of information, such as the data store, file server, and
Internet, and also receives updated resolutions from the resolver groups. However,
essential information such as updated incident resolutions has to be validated by IT
experts via a domain expert.



FIGURE 3-11 Typical IT Service Desk and KMRCA IT Service Desk






3.4.7 Methodology of System Development
There are many methodologies for the development of information systems: the
Systems Development Life Cycle (SDLC), Data Structure-Oriented design, Object-
Oriented design, and Prototyping, among others. However, this thesis is concerned
primarily with the SDLC.
The Systems Development Life Cycle, referred to variously as the waterfall
model or the linear cycle, is a methodology that gives a coherent description of the
steps taken in the development of information systems. Figure 3-12 shows the system
development life cycle (SDLC).


FIGURE 3-12 The System Development Life Cycle (SDLC)

The SDLC methodology is closely associated with what has come to be known
as structured systems analysis and design. It involves a series of steps to be undertaken
in the development of information systems, as follows:
(a) Problem definition
On receiving a request from the user for systems development, an investigation
is conducted to state the problem to be solved; the deliverable is a problem statement.


(b) Feasibility study
The objective here is to clearly define the scope and objectives of the systems
project and to identify alternative solutions to the problem defined earlier; the
deliverable is a feasibility report.
(c) Systems analysis phase:
The present system is investigated and its specifications documented. They
should contain our understanding of HOW the present system works and WHAT it
does. In addition, the deliverables are specifications of the present system.
(d) Systems design phase
The specifications of the present system are studied to determine what changes
will be needed to incorporate the user needs not met by the present system. The output
of this phase consists of the specifications of the proposed system, which must describe
both WHAT the proposed system will do and HOW it will work. The deliverables are
the specifications of the proposed system.
(e) Systems construction
Systems construction includes Programming the system and development of
user documentation for the system as well as the programs. The deliverables are
programs, their documentation, and user manuals.
(f) System testing and evaluation
System testing and evaluation include testing, verification, and validation of the
system just built; the deliverables are the test and evaluation results and the system
ready to be delivered to the user or client.
Note that the model has many attractive features, such as 1) clearly defined
deliverables at the end of each phase so that the client can take decisions on continuing
the project; 2) incremental resource commitment, so the client does not have to make
a full commitment to the project at the beginning; and 3) isolation of problems early
in the process.










3.4.8 The Prototype of KMRCA IT Service Desk System
The prototype of the KMRCA IT service desk system was developed using the
SDLC, from problem definition to system testing and evaluation. It includes several
functions based on the whole end-to-end concept of the IT service desk's
functionalities. The GUI menus for multiple agents can be connected via the internet,
with agents logging on from client machines. In this chapter, the two core functions of
the system are the Searching knowledge function and the Decision support function
for automatic assignment.
The purpose of the searching knowledge function is to find similar cases so that
the agents can select one or more of them in resolving the incident. Figure 3-13
displays the Search knowledge and Input resolution screens. The agents can
double-click the magnifying-glass icon on the left-hand side to open the search
knowledge menu. A search pop-up is then displayed and the agents can enter keywords
in the search field; for example, entering the keyword "Printer" and clicking the search
button returns several similar cases regarding printer failures, which can be drilled
down case by case to see their details.



FIGURE 3-13 A Sample Display of Search Knowledge and Input Resolution


The knowledge in this function is organized by the scope of the incidents and
their system-type failures. This classification helps IT service desk agents to identify
how to solve the incident, and by whom, effectively. The incident scope describes the
general type of incident failure, such as software, hardware, network, operations, and
power supply.
The required knowledge is accessible through several menus, including the
search menu and input resolutions, as shown in Figure 3-14.
Some identified cases, such as previous incidents that match the present one,
may or may not help the agent in resolving the call. In this thesis, the knowledge
database stores the cases that are used in the case-based reasoning approach.



FIGURE 3-14 A Sample Display of Searching Results

The automatic resolver group function can initiate automatic resolver group
assignment by configuring which severities need automatic assignment. Figure 3-15
shows the decision support function for assigning the resolver group.







FIGURE 3-15 A Sample Display of Assign Resolver Group

3.5 Methodology of Automatic Resolver Assignment
3.5.1 Sample and requirement analysis
Raw datasets are provided by the Tivoli system in a spreadsheet of 14,440
incident cases collected over 4 months (April to July 2006). A sample of the data is
shown in Appendix A, Figure A-1. Each column (or attribute) contains information
about the IT incident tickets. In this study, we focus on the information in four
columns: incident descriptions, system-type failures, component failures, and the
assigned resolver groups related to those system-type failures. Table 3-7 shows the
number of incidents of the various system types and their resolver groups.

TABLE 3-7 The Number of Incidents of System Types and Resolver Groups
System types EOS IE-AMS NWS OS-EC VEN Total
Hardware 0 0 5,605 1,841 294 7,740
Software 376 400 3,307 148 61 4,292
Network 0 0 308 593 1,120 2,021
Operation 0 6 6 0 18 30
Power Supply 0 0 0 357 0 357
Total 376 406 9,226 2,939 1,493 14,440



3.5.2 The proposed automatic resolver group assignment
The thesis improved the KMRCA IT service desk system by proposing the
automatic resolver group assignment function within the system. Figure 3-16 shows
the KMRCA IT service desk outsourcing with the automatic resolver group
assignment function, and the details of the automatic resolver group assignment are
illustrated as a process in Figure 3-17.



FIGURE 3-16 KMRCA IT Service Desk with Automatic Assignment Function



FIGURE 3-17 A Process of Automatic Resolver Group Assignment

The automatic resolver group assignment function is one of the core functions
in the KMRCA IT service desk system. The focal point is the resolver group which
handles the proper allocation of resources to deal with the assigned incident.





The following is the narrative of the automatic resolver group assignment process.
Step 1 : Start by entering the IT incident ticket, which includes a text document.
Step 2 : Perform keyword-based word extraction.
Step 3 : Perform text measures and transform the case-term data for classification
by the model.
Step 4 : Apply the ID3-based method to generate a pattern and to identify a suitable
resolver group(s). The rules generated by the ID3 method are shown in Appendix A,
A-4 : An extended part of ID3 decision tree results and A-5 : A sample of ID3-based
generation rules.
Step 5 : Calculate the percentage of matching words for the assigned resolver group
and display the results (a small illustrative sketch follows this list).
Step 6 : Determine whether the matching percentage is equal to or greater than the
specified criteria.
(a) If yes, proceed to Step 8 Assign resolver group to deal with the incident.
(b) If no, proceed to Step 7 Notify IT service desk or SLS to make decision.
Step 7 : Notify IT service desk or SLS to make a decision
Step 8 : Assign resolver group to deal with the incident
Step 9 : Display the results of assignment
Step 10: Validate the assigned results and generated rules by IT experts
Step 11: Check if the IT expert has validated the result yet.
(a) If yes, proceed to Step 13 Check if the result is changed.
(b) If no, proceed to Step 12 Check if duration time is valid.
Step 12: Check if duration time is valid.
(a) If yes, proceed to End.
(b) If no, proceed to Step 10 Validate the assigned results.
Step 13: Check if the result is changed.
(a) If yes, parallel paths; proceed to Step 14 Update keywords
(b) If no, proceed to End.
Step 14: Update Keywords to keep generated rules and assignment results in
Knowledge database
Step 15: End of the process
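As a concrete illustration of Steps 5 and 6, the following minimal sketch (with
hypothetical keyword profiles, threshold, and names; not the thesis's actual code)
computes the percentage of the incident's extracted keywords that match each resolver
group's keyword profile and either assigns the best-matching group or defers to the
SLS when the match falls below the criterion.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AutomaticAssignment {
    public static void main(String[] args) {
        // Hypothetical keyword profiles per resolver group; in the thesis these would
        // be derived from the ID3-generated rules kept in the knowledge database.
        Map<String, List<String>> profiles = new HashMap<>();
        profiles.put("NWS", Arrays.asList("network", "router", "link", "lan"));
        profiles.put("OS-EC", Arrays.asList("printer", "monitor", "harddisk", "ups"));

        // Keywords extracted from the incident description (Step 2).
        List<String> incidentKeywords = Arrays.asList("printer", "paper", "jam");
        double criterion = 30.0; // assumed matching threshold, in percent

        String bestGroup = null;
        double bestPercent = 0.0;
        for (Map.Entry<String, List<String>> entry : profiles.entrySet()) {
            int matched = 0;
            for (String kw : incidentKeywords) {
                if (entry.getValue().contains(kw)) {
                    matched++;
                }
            }
            double percent = 100.0 * matched / incidentKeywords.size();
            if (percent > bestPercent) {
                bestPercent = percent;
                bestGroup = entry.getKey();
            }
        }

        if (bestPercent >= criterion) {
            System.out.println("Assign to resolver group " + bestGroup + " (" + bestPercent + "% match)");
        } else {
            System.out.println("Below criterion: notify the IT service desk (SLS) to decide");
        }
    }
}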



3.5.3 Data preparation and selected model procedure
The raw dataset contains structured information about incident cases as
previously described in Section 3.2.



FIGURE 3-18 Processes of Model Approach for Automatic Assignment
The six steps of the model approach for automatic assignment are: 1) data
preparation of the text documents of incident records; 2) document collection, or text
corpus; 3) division of the data into training documents and testing documents; 4) text
measures; 5) method selection based on the training documents; and 6) model
validation based on the testing documents. Figure 3-18 shows the processes of this
model approach for automatic assignment.
3.5.3.1 Data preparation
Data preparation processes [60] include data recognition, parsing, filtering, data
cleansing [61], and transformation. The study added data grouping by keywords.
Hence, in this case, the data preparation processes are as follows:
(a) Data recognition; this identifies the incident records collected
from the Tivoli CTI system as the sample of raw structured data in spreadsheet format.





(b) Data parsing; the purpose of data parsing is to resolve a
sentence into its component parts of speech. Statements must be broken down so that
the individual words of which the incident report is composed are identified. The study
modified LexTo to break down the incident documents (Thai and English) into words.
LexTo is a Java word-extraction program for both languages, developed by the
National Electronics and Computer Technology Center of Thailand (NECTEC), and
it works with the Lexitron dictionary. The study created an additional keyword
dictionary and modified the program to use both dictionaries; as a result, the
correctness of word extraction is more than 98.7% of all words. The results of the
keywords extracted from the incident dataset are shown in Appendix A, A-2,
Figure A-2.
(c) Data filtering; this involves selecting the rows and columns of
data for the document collection or text corpus. Consequently, the text corpus includes
several columns: system failure types, sub-system or component failures, incident
descriptions, and assigned resolver group.
(d) Data cleaning; the study corrects inconsistent data, checks that
the data conform across columns, and fills in missing values, in particular for the
component failures and assigned resolver groups.
(e) Data grouping; the many words produced by word extraction
are grouped into component and system-type failure terms. Two kinds of grouping are
applied: 1) words with the same meaning, for example the keyword "Hard Disk"
having the same meaning as "Hard Drive" or "HD", and 2) related words in either
singular or plural form [62] (a small illustrative sketch follows this list).
(f) Data transformation; the study transforms the data prior to data
analysis. Several steps need data transformation, such as word extraction, text
measurement, and text mining via WEKA machine learning, which is applied to
compare several decision tree algorithms and discover the most suitable method for
the nature of the incident data.
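The data grouping step described in item (e) can be pictured with the small sketch
below; the mapping table is hypothetical and only illustrates how synonyms and
singular/plural variants might be mapped onto one canonical component keyword.

import java.util.HashMap;
import java.util.Map;

public class KeywordGrouping {
    // Hypothetical synonym table: variant -> canonical keyword.
    private static final Map<String, String> CANONICAL = new HashMap<>();
    static {
        CANONICAL.put("hard drive", "hard disk");
        CANONICAL.put("hd", "hard disk");
        CANONICAL.put("printers", "printer");
    }

    /** Map a variant word onto its canonical component keyword. */
    static String normalize(String word) {
        String w = word.toLowerCase().trim();
        return CANONICAL.getOrDefault(w, w);
    }

    public static void main(String[] args) {
        System.out.println(normalize("HD"));        // prints: hard disk
        System.out.println(normalize("Printers"));  // prints: printer
    }
}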
3.5.3.2 Dataset separation for training and testing
The sample dataset is divided into two documents: (1) a training document
consisting of 66% of the cases and (2) a testing document consisting of 34% of the
cases.


3.5.3.3 Document collection
The document collection, also called the text corpus, is the database containing
the text fields of the sample data, which is a subset of the incident database. The
textual fields are selected columns such as system-type failures, component failures,
incident descriptions, and assigned resolver group [63].
3.5.3.4 Text Measures
The purpose of text measures is to find attributes that describe the text, in order
to know how many keywords (KW1, KW2, ..., KWn, where n is the number of
keywords) related to the assigned groups occur in the documents. The study developed
a program that computes text measures based upon word counts across the sample of
text documents and displays them.
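A minimal sketch of the text-measure idea is shown below (the keyword list and
sample description are invented for illustration; the thesis's own program worked on
the LexTo word-extraction output): it counts how often each keyword KW1..KWn
occurs in an incident description, producing the attribute values that feed the decision
tree learners.

import java.util.LinkedHashMap;
import java.util.Map;

public class TextMeasures {

    /** Count occurrences of each keyword in an already word-extracted description. */
    static Map<String, Integer> measure(String description, String[] keywords) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        String[] tokens = description.toLowerCase().split("\\s+");
        for (String keyword : keywords) {
            int n = 0;
            for (String token : tokens) {
                if (token.equals(keyword)) {
                    n++;
                }
            }
            counts.put(keyword, n);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] keywords = { "printer", "network", "harddisk" }; // hypothetical KW1..KW3
        System.out.println(measure("branch printer error printer offline", keywords));
        // prints: {printer=2, network=0, harddisk=0}
    }
}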
3.5.3.5 Method Selection
Method discovery is the core of text mining algorithms. Several decision tree
methods, Decision Stump, ID3, J48, NBTree, Random Forest, Random Tree, and
REPTree, were implemented within the WEKA framework by Witten and Frank [54],
based upon the training dataset. Finally, the ID3 decision tree method was found to be
the strongest method for the nature of this dataset.
Text mining is data mining applied to information extracted from text. It can be
broadly defined as a knowledge-intensive process in which a user interacts with a
document collection over time by using suitable analysis tools [64]. A text mining
handbook written by Feldman and Sanger [64] presents a comprehensive discussion
of text mining and link detection algorithms and their operations.
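The comparison described above can be reproduced, in outline, with the short WEKA
sketch below. It is a minimal example, not the thesis's actual experiment code: the
ARFF file name is hypothetical, the class names follow the WEKA 3.4.x API (they may
differ in other versions), and ID3 in particular requires all attributes to be nominal with
no missing values.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.DecisionStump;
import weka.classifiers.trees.Id3;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.NBTree;
import weka.classifiers.trees.REPTree;
import weka.classifiers.trees.RandomForest;
import weka.classifiers.trees.RandomTree;
import weka.core.Instances;

public class CompareTreeLearners {
    public static void main(String[] args) throws Exception {
        // Load the training document (66% split) prepared from the text measures.
        Instances data = new Instances(new BufferedReader(new FileReader("incidents.arff")));
        data.setClassIndex(data.numAttributes() - 1); // the resolver group is the class attribute

        Classifier[] learners = { new Id3(), new RandomTree(), new RandomForest(),
                new J48(), new NBTree(), new REPTree(), new DecisionStump() };

        for (Classifier learner : learners) {
            long start = System.currentTimeMillis();
            learner.buildClassifier(data);                // time taken to build the model
            long buildMillis = System.currentTimeMillis() - start;

            Evaluation evaluation = new Evaluation(data); // 10-fold cross-validation
            evaluation.crossValidateModel(learner, data, 10, new Random(1));

            System.out.println(learner.getClass().getSimpleName()
                    + "  build=" + buildMillis + " ms"
                    + "  accuracy=" + evaluation.pctCorrect() + " %");
        }
    }
}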
3.5.3.6 Model Validation
The ID3-based model is proposed for the function of automatic resolver group
assignment; the model is illustrated in Figure 3-13. In order to validate the model, the
thesis implemented ID3 within WEKA on the testing dataset; the details of the
validation results of the ID3 method are shown in Appendix A, A-3: Evaluation result
of the ID3 decision tree method.
To estimate classification performance, 10-fold cross-validation is commonly
used [57]. The 10-fold cross-validation helps prevent overfitting: the sample is split
into 10 parts, each part is used once as the testing set while the remaining 9 parts form
the training set, and the reported accuracy is the average over the 10 runs.





3.6 Summary
The purpose of the IT service desk is to support services on behalf of the bank's
technology-driven business goals. The role of the IT service desk is to ensure that IT
incident tickets are owned, tracked, and monitored throughout their life cycle.
Knowledge management is used as the framework to integrate the technology, people,
and process for improved service desk performance.
The purpose of this methodology chapter is to demonstrate the proposed model
and a prototype of the KMRCA IT service desk system. In addition, the descriptions of
information collection and data analysis focus on the simulation study used in the
performance evaluation. To operate the new system, IT service desk agents and
resolver groups have to follow the proposed processes, in particular the search
knowledge procedure, so that the agents can leverage the organization's knowledge
and solve incidents faster than when working without the knowledge management
system.
For the automatic assignment, this is another core function of the system. The
aim of this function is to demonstrate the proposed enhanced model of a decision
support system for automatic resolver group assignment and a prototype of the
ARGA-ID3 IT service desk system. The system was improved from the KMRCA IT
service desk system by embracing the automatic resolver group assignment. A sample
is analysed in terms of the correlation between the system-type failures and the
resolver groups related to those failures. In addition to the core methodology of text
mining discovery methods using classification trees, the strongest method is evaluated
by 10-fold cross-validation, which helps to prevent overfitting.
The text mining discovery algorithm provides an optimized pattern discovery
framework for text. In particular, it considers the class of simple combinatorial
patterns over phrases and the problem of finding the patterns that optimize a given
statistical measure within the whole class of patterns in a large collection of
unstructured texts.

CHAPTER 4
EXPERIMENTAL RESULTS

This chapter presents the experimental results in terms of performance
evaluation. Section 4.1 shows the results of the text mining discovery methods for the
automatic assignment function. The results of the experimental design, with a
screening design used to identify which factors are important for each response
variable, are illustrated in Section 4.2. Section 4.3 shows the performance evaluation
of the KMRCA IT service desk system, which is analysed and compared against the
previous typical IT service desk system using a simulation study based on actual data.
The summary is presented in Section 4.4.

4.1 The Results of Text Mining Discovery Methods of Automatic Assign Function
In this section, the results are divided into two parts: (1) the comparison results
and (2) the selected method evaluation. The experimental results, in particular the
time taken to build the models, are based on a notebook computer, an IBM ThinkPad
model R50e with 768 MB of memory and an 80 GB hard disk running at 5,400 rpm.
In addition, the software tool used in the experiment is the WEKA machine learning
software version 3.4.12, with the maxheap parameter in RunWeka.ini changed from
the default of 128 MB to the maximum value of 1,280 MB in order to support our
large dataset.
4.1.1 Comparison results
The comparison of various decision tree methods was conducted within the
WEKA framework. Based on the 66% training portion of the sample dataset, 9,530
records, seven classification trees were implemented: Random Tree, Random Forest,
ID3, J48, NBTree, REPTree, and Decision Stump, within WEKA [54] with default
parameters. In the experiment, the accuracy on the sample was obtained using 10-fold
cross-validation, which helps to prevent overfitting. All the experimental results are
shown in Tables 4-1 and 4-2. Table 4-1 shows the number and percentage of correct
incidents for the various types of decision trees. Table 4-2 shows the speed of building
the models, the size of the trees, and the accuracy of classification for the individual
classifiers, respectively.

TABLE 4-1 The Number and Percentage of Correct Incidents for Various Types of
Decision Trees
Decision Tree Classifiers   No. of Correct Instances   No. of Incorrect Instances   Accuracy of Classification (%)
ID3 8914 616 93.5362
Random Tree 8914 616 93.5362
Random Forest 8913 617 93.5257
J48 8896 634 93.3473
NBTree 8890 640 92.2844
REPTree 8866 664 92.0325
Decision Stump 7587 1943 80.3746

From Table 4-1, it can be seen that ID3 and Random Tree were equally good in
terms of the proportion of correct allocations, with Random Forest not far behind.
Decision Stump was the worst.

TABLE 4-2 The Speed Compared with the Accuracy of Classification
Decision Tree Classifiers   Time Taken to Build Models (seconds)   Size of Tree   Accuracy of Classification (%)
ID3              5.15     134   93.5362
Random Tree      20.89    167   93.5362
Random Forest    46.96    10    93.5257
J48              19.58    83    93.3473
NBTree           190.54   1     92.2844
REPTree          10.39    85    92.0325
Decision Stump   0.59     1     80.3746

From Table 4-2, Decision Stump is by far the fastest classifier, by an order of
magnitude, but it has the highest proportion of misclassifications and produces only a
single-node tree. ID3 is the second fastest classifier, about twice as fast as the next
one, and it also had the lowest proportion of misclassifications.


The comparison of decision tree methods is considered in terms of accuracy and
performance, as shown in Tables 4-1 and 4-2, respectively. ID3 and Random Tree give
the highest accuracy of all the methods. However, Random Tree is not well suited to
imbalanced samples, although, like Random Forest, it makes it easy to obtain rules
from large datasets. Random Tree gives high accuracy, but its performance is poor in
terms of the speed of building the model. The performance of ID3, J48, NBTree,
REPTree, and Decision Stump is therefore comparable. Decision Stump gives the
highest speed but the lowest accuracy, and like NBTree it generates only one tree,
which cannot support knowledge-based classification. Thus, considering both
accuracy and speed, ID3 is the best choice.
4.1.2 Method evaluation
To validate the method for the automatic assignment function, the testing data
were evaluated with the default 10-fold cross-validation within the WEKA platform.
The testing data consist of 34% of the sample dataset, 4,910 cases. In addition, the IT
experts who participated in the experiments validated the result of the validation.
The results show that the assignment accuracy was 93.06% of the cases, which
indicates that the ID3-based method is well suited for the model of the decision
support system for automatic resolver group assignment. The details of the results
generated by WEKA machine learning are shown in Appendix A, A-3.

4.2 The Results of Design of Experiment
4.2.1 Design of Experiment and Analysis
The design of experiment (DOE) and optimization techniques were applied by
executing simulation models of both the current typical IT service desk and the
KMRCA IT service desk configurations and comparing their results.
The experiments study three factors, which are often used to study the
performance of the process and the system [65]. The objective of the experimental
design is to determine which factors are most influential on the response of the system.
The experimental 2^3 full-factorial design is used to identify the effects of three
interesting factors on eight dependent variables. Each factor has two levels, so eight
treatment combinations are run in the 2^3 design. Screening experiments are
performed to select the key factors affecting a response.
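For reference, the response in a 2^3 full factorial design can be written in the standard
regression (effects) form, with the coded factor levels x = -1 (low) and x = +1 (high);
this is the textbook formulation, not a formula taken from the thesis data:

\[ Y = \beta_0 + \beta_A x_A + \beta_B x_B + \beta_C x_C + \beta_{AB} x_A x_B + \beta_{AC} x_A x_C + \beta_{BC} x_B x_C + \beta_{ABC} x_A x_B x_C + \varepsilon \]

Each coefficient equals one half of the corresponding factor effect, and the ANOVA
tests whether these coefficients are zero, i.e. whether the corresponding factor or
interaction influences the response.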

4.2.2 The Key Factors and Output Variables
González [30] argued that the dependent variables are performance variables
tracked by the service desk, which are common performance measurements. The three
factors are as follows:
(a) Factor A: Time to type incident information and search the relevant
knowledge from the KMRCA system (minutes).
(b) Factor B: Time to resolve an incident using the KMRCA system (minutes).
(c) Factor C: Time to add new information into the KMRCA system (minutes).
In addition, the dependent output variables are as follows:
O1: Throughput, the total number of calls resolved in a period of time
O2: Time in resolving incidents of Severity 1 (minutes)
O3: Time in resolving incidents of Severity 2 (minutes)
O4: Time in resolving incidents of Severity 3 (minutes)
O5: Time in resolving incidents of Severity 4 (minutes)
O6: Number of incident calls in the AMS queue
O7: Number of incident calls in the EOS queue
O8: Number of incident calls in the NWS queue
The factor values were calculated from the average time consumed by the five
IT service desk staff who used the KMRCA IT service desk system in searching,
resolving, and keeping resolutions. In addition, the IT service desk manager, as an IT
expert, confirmed the results. Table 4-3 shows the assigned factor values for the two levels.

TABLE 4-3 Assigned Factor Values for Two-Level
Factor Low (minutes) High (minutes)
A 0.8 1.2
B TRIA(1.0, 1.6, 3.3) TRIA(2.0, 3.0, 4.8)
C 1.5 2.4
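The TRIA(a, b, c) entries denote triangular distributions, read here as (min, mode, max) in the style of the Arena simulation tool used in the thesis. Purely as an illustrative sketch (not the thesis simulation model), Factor B could be sampled in Python as follows:

# Illustrative sketch: drawing Factor B (time to resolve an incident with the
# KMRCA system) from the triangular distributions of Table 4-3.
import random

def factor_b(level):
    # Sample one resolving time (minutes) for the low or high factor level.
    if level == "low":
        return random.triangular(1.0, 3.3, 1.6)   # TRIA(min=1.0, mode=1.6, max=3.3)
    return random.triangular(2.0, 4.8, 3.0)       # TRIA(min=2.0, mode=3.0, max=4.8)

random.seed(1)
print([round(factor_b("low"), 2) for _ in range(5)])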


However, a different output variable is needed for each incident severity
since they follow different paths through the IT service desk. The analysis of variance
(ANOVA) for the full-factorial design tests whether the main effects or interaction
parameters are equal to zero. In the statistical analysis, factors with a p-value lower
than 0.05 are considered important factors that significantly influence the results.


The ANOVA shows that throughput (O1) and the average time in resolving incidents of
severity 3 (O4) are the responses significantly influenced by the key factors, with p-values
lower than 0.05, while the other dependent variables have no factors that affect them
significantly, their p-values being greater than 0.05 in all cases. Thus, the study focused on
five variables: throughput and the average time in resolving incidents of severities 1, 2, 3,
and 4. Table 4-4 shows the 2³ factorial design of experiment (DOE) for the throughput
response. The details of these results are shown in Appendix C, C-3 and C-4.

TABLE 4-4 2³ Full Factorial Design of DOE for Responses Y of O1

Run Order   A  B  C   Throughput (no. of calls / time period): Yrep 1  Yrep 2  Yrep 3  Yrep 4
1 - - - 3585 3628 3585 3558
2 + - - 3585 3626 3585 3558
3 - + - 3584 3616 3584 3556
4 + + - 3584 3615 3584 3556
5 - - + 3584 3624 3585 3558
6 + - + 3584 3620 3584 3556
7 - + + 3584 3581 3583 3555
8 + + + 3533 3487 3513 3529

Table 4-5 shows the coded design matrix of Throughput (O1).

TABLE 4-5 Coded Design Matrix of O1

Run Order   A  B  C  AB  AC  BC  ABC   Ave.(Y)   SD.(Y)   Var.(Y)
1 - - - + + + - 3589.0 28.9 838.0
2 + - - - - + + 3588.5 28.1 787.0
3 - + - - + - + 3585.0 24.5 601.3
4 + + - + - - - 3584.8 24.1 580.9
5 - - + + - - + 3587.8 27.2 470.3
6 + - + - + - - 3586.0 26.2 688.0
7 - + + - - + - 3575.8 13.9 192.9
8 + + + + + + + 3515.5 20.9 435.7

Table 4-6 summarizes the absolute values of the coefficients for the average Throughput
response (O1) and the p-values by factors and their interactions. From Table 4-6, Factor B,
Factor C, and the interaction BC have the greatest influence on the Throughput, and they are
the only effects with p-values below 0.05. In addition, Figure 4-1 shows the Pareto chart of the
coefficients for the average response Y of O1.

TABLE 4-6 Absolute Value of Coefficients for Average O1 and P-Value
A B C AB AC BC ABC
Absolute of Coeff. 7.844 11.281 10.281 7.281 7.656 9.344 7.344
p-value 0.0845 0.0161 0.0268 0.1078 0.0918 0.0424 0.1050


FIGURE 4-1 Pareto of Coefficients for Average Response Y of O1
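Each tabulated coefficient is half the difference between the mean responses at the high and low settings of the corresponding coded column. The following Python sketch, using the Ave.(Y) column of Table 4-5, reproduces the |A| entry of Table 4-6 (7.844) up to rounding of the tabulated averages; it is an illustration of the calculation, not the thesis analysis code.

# Sketch: main-effect coefficient for factor A from the coded design matrix of
# Table 4-5 (coefficient = half the difference of mean responses at + and -).
signs_a = [-1, +1, -1, +1, -1, +1, -1, +1]            # column A, runs 1-8
avg_y   = [3589.0, 3588.5, 3585.0, 3584.8,
           3587.8, 3586.0, 3575.8, 3515.5]            # Ave.(Y) column of Table 4-5

high = [y for s, y in zip(signs_a, avg_y) if s > 0]
low  = [y for s, y in zip(signs_a, avg_y) if s < 0]
coeff_a = (sum(high) / len(high) - sum(low) / len(low)) / 2
print(round(abs(coeff_a), 3))   # about 7.85, matching the |A| entry of Table 4-6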


Another response is the time in resolving incidents of severity 3. Table 4-7 shows the
absolute values of the coefficients for the average time in resolving incidents of severity 3
(O4); all three factors are significant for this response. A Pareto chart of the coefficients for
the average time in resolving incidents of severity 3 (O4) is shown in Figure 4-2.

TABLE 4-7 Absolute Value of Coefficients for Average of O4 and P-Value
A B C AB AC BC ABC
Absolute of Coeff. 0.188 0.638 0.438 0.012 0.012 0.012 0.012
p-value 3e-33 6e-46 5e-42 7e-07 7e-07 7e-07 7e-07


FIGURE 4-2 Pareto of Coefficients for Average Response Y of O4



4.3 The Results of Performance Evaluation
One objective of the thesis is to evaluate the performance of the KMRCA IT service desk
system using a simulation study. The concept is that the KMRCA IT service desk system
resolves incidents faster than the previous Typical IT service desk system; the research
hypothesis is therefore that the new system will have a shorter incident resolution time. A
shorter incident resolution time is expected because the knowledge management system with
root cause analysis facilitates organizational learning and enables IT service desk agents and
resolver groups to share knowledge sources to resolve incidents faster, as well as preventing
recurring incidents. Because the time in resolving incidents is reduced, the throughput is
expected to be higher.
The hypothesis is that the time in resolving incidents of all severities, except for critical
incidents, will be lower in the KMRCA IT service desk system than in the previous Typical IT
service desk system.
The simulation model is developed to test this hypothesis and describes both the Typical IT
service desk system and the KMRCA IT service desk system. A simulation enables service
desk agents to perform analysis that captures the entire interrelationship between callers,
agents, skills, and technology [66]. In this case, the simulation model research approach is
adopted so that experiments can be conducted and the knowledge management system
evaluated without interrupting the IT service desk's daily operations. Furthermore, the
simulation model helps to analyze the advantages that can be obtained from implementing the
knowledge management system.

4.3.1 Comparison of Test of KMRCA and Typical IT Service Desk Systems
The factors are analyzed at two levels (low, "-", and high, "+"), and their values replace the
incident resolving times, by severity, assigned in the simulation model; the resulting responses
are shown in Table 4-8. Four replications of each experiment were run for 22 working days in
a random order, and the results were recorded for further statistical analysis. The details of the
comparison test are shown in Appendix C, C-5 and C-6.

TABLE 4-8 Comparison Tests of KMRCA and Typical IT Service Desk Systems

Variables   Observed t-value   Critical t-value   p-value
1) Throughput 22.68 3.182 0.001
2) Average Resolving Time of severity 1 -0.83 3.182 0.466
3) Average Resolving Time of severity 2 0.16 3.182 0.882
4) Average Resolving Time of severity 3 3.26 3.182 0.047
5) Average Resolving Time of severity 4 -0.40 3.182 0.716


The hypothesis is that the average time in resolving incidents for all calls, except for critical
calls, will be lower in the KMRCA IT service desk system than in the current IT service desk
system. Table 4-8 shows the observed t-values and the critical t-value for a two-tailed test
(α/2 = 0.025, degrees of freedom = 3) for each dependent variable. As shown in Table 4-8, for
Throughput and Time in resolving incidents of Severity 3 the observed t-value is higher than
the critical t-value, which means that H0 is rejected; in other words, the means are not equal.
On the other hand, for Time in resolving incidents of Severity 1 and Time in resolving
incidents of Severity 2 the observed t-value is lower than the critical t-value, so H0 is not
rejected, and it is concluded that those means are equal.
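With four replications per configuration, each comparison uses a t statistic with 3 degrees of freedom against the two-tailed critical value of 3.182. The Python sketch below illustrates the test procedure only, assuming the replications are paired by run; the replication values in it are hypothetical, not the thesis data.

# Illustrative paired t-test with four replications (df = 3), as used for the
# comparisons in Table 4-8. The throughput values below are hypothetical.
from statistics import mean, stdev
from math import sqrt

kmrca   = [3531, 3520, 3545, 3528]   # hypothetical replication results
typical = [3019, 3025, 3010, 3022]

diffs = [k - t for k, t in zip(kmrca, typical)]
n = len(diffs)
t_obs = mean(diffs) / (stdev(diffs) / sqrt(n))
t_crit = 3.182                        # two-tailed, alpha/2 = 0.025, df = 3
print(t_obs, "reject H0" if abs(t_obs) > t_crit else "fail to reject H0")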
4.3.2 Comparison Output of KMRCA and Typical Service Desk Systems
Table 4-9 shows the comparison of outputs of the KMRCA and Typical IT service desk
systems. The simulation of the KMRCA IT service desk system gave 16.9% more throughput
and decreased the average resolving time of severity 3 by 4.8%, while the results for the other
variables were not significant because they failed the t-test.

TABLE 4-9 Comparison Outputs of KMRCA and Typical IT Service Desk Systems

Variables   KMRCA IT service desk   Typical IT service desk
Throughput (no. of calls per period) 3,531 3,019
Average Resolving Time of severity 1 (min.) 2.75 1.84
Average Resolving Time of severity 2 (min.) 4.26 5.43
Average Resolving Time of severity 3 (min.) 7.11 6.77
Average Resolving Time of severity 4 (min.) 25.22 21.54


4.4 Summary
In this chapter, the thesis shows the results of the Text mining discovery methods for the
automatic assignment function and the results of the performance evaluation of the KMRCA
IT service desk system.
For the Text mining discovery methods, the aim was to discover suitable decision tree
methods based on WEKA machine learning by comparing several decision tree methods. The
ID3 decision tree proved the strongest algorithm. The comparison of decision tree methods
shows correctly classified instances in more than 93% of the cases. The ID3 classifier has the
best performance in terms of speed to build the model combined with high classification
accuracy. The model was validated within the WEKA platform with 10-fold cross-validation,
and the accuracy of the model's results was 93.06% of the cases.
For the performance evaluation of the KMRCA IT service desk system, a computer
simulation was used to quantitatively compare the current Typical IT Service Desk and the
proposed KMRCA IT Service Desk systems. The simulation study showed an almost 17%
increase in throughput and a 4.8% decrease in the average time in resolving incidents of
severity 3. For the average time in resolving incidents of severities 1, 2, and 4, the t-tests failed,
and no statistically significant difference could be concluded with confidence for critical,
high, and low priority incidents. The improvements are significant and provide justification
for implementing the knowledge management system with root cause analysis for the
moderate-priority incidents, i.e. incidents of severity 3. The design of experiment can be used
to define the specifications of the knowledge management system. Furthermore, an advantage
of the simulation is that the study can be performed without interrupting the daily IT service
desk operations.














CHAPTER 5
CONCLUSION

This chapter concludes the experimental results from evaluating the performance and
comparing the methods, and discusses the advantages of the proposed framework. It also
suggests ways to improve the system, which are proposed as further work.

5.1 Conclusion
This thesis makes three contributions. Firstly, the thesis proposes a framework
of a knowledge management system with root cause analysis, the so-called KMRCA IT
service desk system. Secondly, the thesis evaluates the performance of the KMRCA IT
service desk system using a simulation study based on actual incident data and compares the
results with the previous typical IT service desk system. Thirdly, the thesis proposes a process
of text mining to discover methods, which includes data preparation, document collection,
text measurement, method selection, and method evaluation through a classification approach.
The proposed framework of the KMRCA IT service desk system is composed of two
main functions: 1) a searching knowledge function; and 2) an automatic resolver group
assignment function. The performance of the KMRCA IT service desk system was evaluated
in terms of speed in resolving incidents. The experimental results indicated that the KMRCA
IT service desk approach significantly enhances the performance of the typical IT service desk
system by giving more throughput and reducing the time in resolving incidents. In the study, a
computer simulation was conducted to compare the typical IT service desk system against the
KMRCA IT service desk system. The simulation study showed an almost 17% increase in
throughput and a 4.8% decrease in the average resolving time of Severity 3. For Severity 1,
Severity 2, and Severity 4 the t-tests failed, and no statistically significant difference can be
concluded with confidence for critical, high, and low priority incidents. Thus, the advantages
are significant and provide justification for implementing the knowledge management system
with root cause analysis on the moderate priority incidents.

For the Text mining discovery methods, the thesis discovers the suitable methods within
WEKA machine learning by comparing several decision tree methods. The ID3 decision tree
method proved the strongest algorithm: the comparison of decision tree methods shows
correctly classified instances in more than 93% of the cases. In addition, the ID3 classifier has
better performance in terms of speed to build the model, while the size of the tree does not
affect the classification accuracy. The proposed ID3-based model is used for automatic
resolver group assignment for IT service desk outsourcing in the bank. The comprehensibility
of the ID3 decision tree indicates the appropriate resolver group to assign to each type of
incident. The method was validated within the WEKA platform with 10-fold cross-validation,
and the correctness of the model's results was 93.03% of the cases. The experimental results
indicate that ID3, in terms of generated tree rules and speed, is the optimal method for the
automatic resolver assignment model; it would significantly increase productivity through
more correct assignments and thus decrease reassignment turnaround time. Furthermore, the
rules resulting from rule generation from the decision tree could be kept in a knowledge
database in order to support and assist with future incident resolver assignments.

5.2 Discussion
The simulation output shows that the KMRCA IT service desk system yielded 17% higher
throughput, but the t-test failed at the critical and high priority levels, since the resolving time
for those incidents is very limited, which makes IT service desk agents assign them urgently
to the resolver group without using the knowledge management system. For severity 4, there
is ample time for resolving a low priority incident, so the agent leaves the incident until a
resolver is available to resolve it. Thus, the KMRCA IT service desk system is not designed to
support those severities. However, the throughput can be improved by training the staff
before they use the KMRCA system, so that their skill reduces the time in resolving incidents
more than without training.
Although the thesis proved that knowledge management with root cause analysis is able
to enhance IT service desk outsourcing in the banking business, there are several ways to
continue improving the system. Firstly, the IT service desk system




should provide automatic resolver group assignment, because a manual assignment may be
mistaken when agents select the resolver or group to deal with the incident manually. When IT
service desk agents receive critical incidents that urgently require resolving, they often assign
them immediately to the relevant resolver group without using the knowledge management
system. The number of critical incident tickets is less than one percent, but they have a
significant impact on the whole bank's business processes. In addition, the specification of the
knowledge management system can be defined from the experimental design by the three
factors, which represent the time consumed when the agents perform tasks using the system.

5.3 Future Work
One remaining issue is that assigning a ticket to the most suitable resolver does not mean
that the incident ticket is closed completely, since some incidents may require more than one
resolver. For example, when an ATM breaks down and customers cannot withdraw their
money, the incident may be caused by several failures, such as applications, networks, and
electrical power supply, which concern many parties. Thus, we will improve the model
focusing on multi-resolver group assignments.
Another improvement of the IT service desk is to search the relevant knowledge
automatically by using text mining to transform search into knowledge discovery, in which
the process extracts key words and then proceeds to discover the relevant knowledge.
Although search engines can help find relevant documents, a newer technology goes beyond
simple document retrieval. Text mining makes it possible to discover new knowledge in the
form of trends, anomalies, relationships, and patterns that span multiple knowledge
collections. By extending the way text databases can be explored, text mining can contribute
valuable content analysis and decision support to the existing knowledge in the organization.
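As a minimal sketch of this extract-then-search step (the keyword list and knowledge entries below are illustrative placeholders, not the thesis knowledge base), the idea can be expressed as follows:

# Minimal sketch of the proposed search step: extract known keywords from an
# incident description and rank stored knowledge entries by keyword overlap.
KEYWORDS = {"atm", "printer", "wan", "passbook", "lotus", "win2000"}

knowledge_base = [
    {"id": "KB-001", "keywords": {"atm", "disconnected"}},
    {"id": "KB-002", "keywords": {"printer", "driver"}},
]

def extract_keywords(description):
    # Keep only the words that appear in the keyword dictionary.
    return {w for w in description.lower().split() if w in KEYWORDS}

def search(description):
    # Rank knowledge entries by how many extracted keywords they share.
    found = extract_keywords(description)
    ranked = sorted(knowledge_base,
                    key=lambda e: len(e["keywords"] & found), reverse=True)
    return found, [e["id"] for e in ranked]

print(search("ATM S1A1444 has been disconnected"))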


REFERENCES

1. Nonaka, I. and Takeuchi, H. The Knowledge-Creating Company. New York :
Oxford Press, 1995.
2. Allee, V. The Knowledge Evolution: Expanding Organizational Intelligence.
New York : Butterworth-Heinemann, 1997.
3. Alavi, M. and Leidner, D. E. Knowledge Management Systems: Emerging
Views and Practices From The Field. Proceedings of the 32nd Hawaii
International Conference on System, IEEE Computer Society (1999) : 239.
4. Davenport, T. H. and Prusak, L. Working Knowledge: How Organizations
Manage What They Know. Boston, Massachusetts : Harvard Business
School Publishing, 2000.
5. Grote, M. H. and Tube, F. A. When Outsourcing is not an Option: International
Relocation of Investment Bank Research - Or isn't it? Journal of
International Management. 1-13(2007) : 57-77.
6. Mahnke, V., Overby, M. and Vang, J. Strategic Outsourcing of IT Services:
Theoretical Stocktaking and Empirical Challenges. Industry and Innovation.
2-12(2005) : 205-253.
7. Behr, K., Castner, G. and Kim, G. The Value, Effectiveness, Efficiency, and
Security of IT Controls: An Empirical Analysis. University of Oregon, 2004.
8. Forte, D. Security Standardization in Incident Management: the ITIL Approach.
Network Security. 1 (2007, January) : 14-16.
9. Phomasakha, P. and Meesad, P. Knowledge Management System with Root
Cause Analysis for IT Service Desk in Banking Business. Proceedings of
the 2007 Electrical Engineering/Electronics, Computer, Telecommunications
and Information Technology (ECTI) International Conference, 2(2007), Mae
Fah Luang University, Chiang Rai, Thailand, (2007, May 9-12) : 1209-1212.
10. Cleveland, B. and Mayben, J. Call Center Management on Fast Forward: Succeeding
in Today's Dynamic Inbound Environment. Maryland : Call Center Press, 1997.
11. Anton, J. and Gusting, D. Call Center Benchmarking: How Good Is Good
Enough. Indiana : Purdue University Press, 2000.

12. Dawson, K. The Complete Guide to Starting, Running, and Improving Your Call
Center. CMP Books, New York : Focal Press, 1999.
13. Sandborn, S. Structuring the service desk. Information World. 23-52(2001) :
28
14. Zhang, J. and Faerman, S. R. Divergent Approaches and Converging Views :
Drawing Sensible Linkages between Knowledge Management and
Organizational Learning. Proceedings of the 36th Hawaii International
Conference on System Sciences, 2003.
15. Drucker, P. F. The Post-Capitalist Executive Managing in a Time of Great
Change. New York : Penguin, 1995.
16. Suzuki, Y. and Toyama, R. A Self-evaluation Method of SECI Process in
Knowledge Management. IEEE International Engineering Management
Conference. 2(2004) : 491-494.
17. Chen, F. and Burstein, F. A Dynamic Model of Knowledge Management for
Higher Education Development. Proceedings of the 7th International
Conference on Information Technology Based Higher Education and
Training, 2006 : 173-180.
18. Mertins, K., Heisig, P. and Vorbeck, J. Knowledge Management: Best Practices
in Europe. Berlin : Springer-Verlag, 2001.
19. Meso, P. and Smith, R. A Resource-based View of Organizational Knowledge
Management Systems. Journal of Knowledge Management. 3-4(2000) :
224-234.
20. Satyadas, A. and Harigopal, U. Knowledge Management Tutorial: An Editorial
Overview. IEEE Transactions on Systems, Man, and Cybernetics-Part C :
Applications and Reviews. 31-4(2001) : 429-437.
21. Sveiby, K.E. The New Organizational Wealth. Managing and Measuring
Knowledge-Based Assets. San Francisco : Berrett Koehler Publisher, 1997.
22. Holsapple, C.W. and Joshi K.D. Organizational knowledge resources.
Decision Support Systems. 31(2001) : 39-54.
23. Taylor, M.J., Gresty, D. and Askwith, R. Knowledge for Network Support.
Information and Software Technology. 43(2001) : 469-475.

24. Marcella, R. and Middleton, I. The Role of the Help Desk in the Strategic
Management of Information Systems. OCLC Systems and Services. 12-4
(1996) : 419.
25. Gray, P.H. A Problem-solving Perspective on Knowledge Management
Processes. Decision Support Systems. 31(2001) : 87-102.
26. Frey, N., Matlus, R. and Maure, W. A Guide to Successful SLA Development
and Management. Gartner Group Research Strategic Analysis Report, 2000,
October.
27. Anderson, B. and Fagerhaug, T. Root Cause Analysis: Simplified Tools and
Techniques. Milwaukee : ASQ Quality Press, 2000.
28. Doggett, A. M. Selected Collaborative Problem-Solving Method for Industry.
Selected paper (2004). Humboldt State University, 2004.
29. Wilson, P. F., Dell, L. D. and Anderson, G. F. Root Cause Analysis : A Tool for
Total Quality Management. Milwaukee : ASQ Quality Press, 1993.
30. Gonzalez, L. M., Giachetti, R. E. and Ramirez, G. Knowledge Management-
centric Help Desk : Specification and Performance Evaluation, Elsevier,
Decision Support Systems. 40(2005) : 389-405.
31. Weidl, G., Madsen, A. L. and Israelson, S. Applications of Object-oriented
Bayesian Networks for Condition Monitoring, Root Cause Analysis and
Decision Support on Operation of Complex Continuous Processes.
Elsevier, Computer and Chemical Engineering. 9-29(2005, 15 August) :
1996-2009.
32. Aamodt, A. A Knowledge Intensive Approach to Problem Solving and Sustained
Learning. PhD. dissertation, University of Trondheim, Norwegian Institute
of Technology, May 1991.
33. Aamodt, A. and Plaza, E. Case-Based Reasoning: Foundational Issues,
Methodological Variations, and System Approaches. AI Communications.
7(1994) : 39-59.
34. Reisbeck, C. K. and Schank, R.C. Inside Case-Based Reasoning. Hillsdale,
New Jersey : Lawrence Erlbaum Associates, 1989.
35. Doyle, M., et al. CBR Net: Smart Technology over a Network. TCD
Technical Report, 1998, July.

36. Schank, R. C. Inside Case Based Reasoning. New Jersey : Erlbaum, 1989.
37. Watson, I. Applying Case-Based Reasoning : Techniques for Enterprise Systems.
San Mateo, California : Morgan Kaufmann, 1997.
38. Gentner, D. Are Scientific Analogies Metaphors? Problems and perspectives.
Brighton, UK : Harvester Press, 1982 : 106-132.
39. Carbonell, J. G. Derivational Analogy in PRODIGY : Automating Case
Acquisition, Storage, and Utilization. Boston : Kluwer Academic Publishers,
1993.
40. Kolodner, J. L. Case-Based Reasoning. San Mateo, California : Morgan
Kaufmann, 1993.
41. Althoff, K. -D., et al. A Review of Industrial Case-Based Reasoning Tools.
Oxford : AI Intelligence, 1995.
42. Office of Government Commerce (OGC). Service Support. ITIL Version 2
Library, UK : TSO (The Stationery Office) publisher, 2005.
43. Yang, D.-H., et al. Developing a decision model for business process
outsourcing. Elsevier, Computers and Operations Research, 34-12(2007) :
3769-3778.
44. Lacity, M., Willcocks, L. and Feeny, D. Sourcing Information Technology
Capability. A Decision-Making Framework. Information Management:
The Organizational Dimension, Oxford : Oxford University Press, 1996.
45. Hirschheim, R.A. and Lacity, M.C. The myths and realities of information
technology insourcing. Communications of the ACM. 2-43(2000) : 99-107.
46. Linder, J. C., Cole, M. I. and Jacobson, A. L. Business transformation through
outsourcing. Emerald Strategy and Leadership. 30-4 (2002) : 23-28.
47. Sun, Y. H., et al. A hybrid knowledge and model approach for reviewer
assignment. Elsevier, Expert Systems with Applications. 34-2(2008) :
817-824.
48. Fan, Z.-P., et al. Decision support for proposal grouping: A hybrid approach
using knowledge rule and genetic algorithm. Elsevier, Expert Systems with
Applications, 2007.

49. Li, J.-Q., Borenstein, D. and Mirchandani, P. B. A decision support system for
the single-depot vehicle rescheduling problem. Elsevier, Computers &
Operations Research. 34-4(2007) : 1008-1032.
50. Lewis, M. W., Lewis, K. R. and White, B. J. Guided design search in the
interval-bounded sailor assignment problem. Elsevier, Computers &
Operations Research. 33-6(2006) : 1664-1680.
51. Jiménez, A., Ríos-Insua, S. and Mateos, A. A decision support system for
multi-attribute utility evaluation based on imprecise assignments. Elsevier,
Decision Support Systems. 36- (2003) : 65-79.
52. Lazarov, A. and Shoval, P. A rule-based system for automatic assignment of
technicians to service faults. Elsevier, Decision Support Systems.
32(2002) : 343-360.
53. Zhao, Y. and Zhang, Y. Comparison of decision tree model of finding active
objects. Advances in Space Research, 2007.
54. Witten, I. and Frank, E. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations. 2nd ed. San Mateo, California :
Morgan Kaufmann, c2005.
55. Quinlan, J. R. Induction of Decision Trees, Readings in Machine Learning.
Morgan Kaufmann, 1990 : 81-106.
56. Mitchell, T. M. Machine Learning. McGraw-Hill, 1997.
57. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation
and model selection. Proceedings of the Fourteenth International Joint
Conference on Artificial Intelligence, 2-12(1995) : 1137-1143.
58. Breiman, L. Random Forests. Springer, Machine Learning, 45-1(2001) : 5-32.
59. Kelton, W. D., Sadowski, R. P. and Sturrock, D. T. Simulation with Arena. 3rd
ed. Series in Industrial Engineering and Management Science. Singapore :
McGraw- Hill, c2003.
60. Pyle, D. Data Preparation for Data Mining. San Mateo, California : Morgan
Kaufmann, 1999.
61. Miller, T.W. Data Text Mining: A business applications approach. Prentice Hall,
2005.

62. Riloff, E. Little Words Can Make a Big Difference for Text Classification.
Proceedings of the 18th Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval. 1995 : 130-136.
63. Liu, Y., et al. Handling of Imbalanced Data in Text Classification: Category-
Based Term Weights. Natural Language Processing and Text Mining,
London : Springer, (2007, March 6) : 171-192.
64. Feldman, R. and Sanger, J. The Text Mining Handbook : Advanced Approaches
in Analysing Unstructured Data. New York : Cambridge University Press,
2007.
65. Law, A.M. and Kelton, W.D. Simulation Modeling and Analysis. 3rd ed.
Singapore : McGraw-Hill Press, c2000.
66. Miller, K. and Bapat, V. Case Study : Simulation of the Call Center
Environment for Comparing Competing Call Routing Technologies for
Business Case ROI Projection. IEEE Winter Simulation Conference
Proceedings, Washington DC : IEEE Press, 1999 : 1694-1700.















APPENDIX A

A SAMPLE OF INCIDENT DATASET, SEVERAL RESULTS FOR ANALYSIS OF
TEXT MINING DISCOVERY METHODS, AND METHOD VALIDATION















A-1 A Sample of Incident Dataset
Figure A-1 shows a sample of incident data in a spreadsheet (Excel).

[Figure A-1 reproduces a spreadsheet extract of 34 sample incident records with the columns:
No., Incident Id., Open Date, Open Time, Resolve Date, Resolve Time, Incident Code,
Assigned Gr., Severity, System Component, Incident Descriptions, and Resolution Results.]


FIGURE A-1 A Sample of Incident Data

A-2 Pareto histogram of keywords extracted from the incident dataset
Figure A-2 shows a Pareto histogram of keywords extracted from the incident
dataset


[Figure A-2: Pareto chart of keyword frequencies extracted from the incident dataset. The left
axis shows keyword counts (0 to 4,000) and the right axis the cumulative percentage (0% to
100%); the most frequent keywords include Printer, ATM, Personal Comp., WAN, WIN 2000,
LotusNoteCitrix, LotusNotesClien, Update Passbook, WIN NT, KBANKNET, and Data
Warehouse.]

FIGURE A-2 A Pareto histogram of keywords extracted from the incident dataset



A-3 Evaluation Results of Id3 Decision Tree Method
The evaluation results of the ID3 decision tree method are based on the testing
documents of 4,909 records.
=== Run information ===
Scheme: weka.classifiers.trees.Id3
Relation: ID3- based Automatic Resolver Group Assignment
Instances: 4909
Attributes:
Anti-Virus
App-NonPC K-Cyber-Banking
ATM K-P-Gateway
Bank-Reference LI
Bar-Code LMS-Report-Mgn.
Bill-Payment LoanReview(Host
BL-Entry Lotus-Notes-DB
Br-App-Re LotusNoteCitrix
Branch LotusNotesClien
Branch-App. LotusNotesServe
Browser LPM
CA Magnetic-Strip
Call-Center MFA-MRA
CardLink MIS
Cash-Connect MS-Office-2OOO
CashAdmin.on-We MS-Office-97
CAT NAV-(PC)
CDM Notebook
CIPS OS/2
CIS PA
CMAS PeopleSoft
CTD-(E-Report) Personal-Comp.
CTR Print-Server
Current Printer
Data-Warehouse Push-Info.DelSy
DCS ROSS
DMS SAFE
e-Booth Saving-Account
EBPP Scanner
EDW Server
FCD Share-Server
FICS SQ
Fin.Accept.Cer. SSMM
FX-on-web Statement
Home-Banking Transact-BP
Host-on-Demand Transact-CC&CL
HQ Update-Passbook
IB Vlink
IBM-EOS WAN
Info-Centrix-CT WIN-2000
Internet-Bankin WIN-98
IVR WIN-NT
KBANKNET WIN-XP
K-BizNet Electrical-Supply

Assign-Group



Test mode: 10-fold cross-validation


=== Classifier model (full training set) ===
Id3
ATM = 0

| WAN = 0
| | Electrical-Supply = 0
| | | Update-Passbook = 0
| | | | Printer = 0
| | | | | Data-Warehouse = 0
| | | | | | LotusNoteCitrix = 0
| | | | | | | Personal-Comp. = 0
| | | | | | | | WIN-2000 = 0
| | | | | | | | | Branch = 0
| | | | | | | | | | CDM = 0
| | | | | | | | | | | Internet-Bankin = 0
| | | | | | | | | | | | LotusNotesClien = 0
| | | | | | | | | | | | | K-Cyber-Banking = 0
| | | | | | | | | | | | | | CTR = 0
| | | | | | | | | | | | | | | CardLink = 0
| | | | | | | | | | | | | | | | Home-Banking = 0
| | | | | | | | | | | | | | | | | IB = 0
| | | | | | | | | | | | | | | | | | WIN-NT = 0
| | | | | | | | | | | | | | | | | | | HQ = 0
| | | | | | | | | | | | | | | | | | | | Server = 0
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 0
| | | | | | | | | | | | | | | | | | | | | | Browser = 0
| | | | | | | | | | | | | | | | | | | | | | | CAT = 0
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 0
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2OOO = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ROSS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LPM = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIPS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EBPP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FX-on-web = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PeopleSoft = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Vlink = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bill-Payment = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | BL-Entry = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CMAS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cash-Connect = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DCS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | IVR = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LMS-Report-Mgn. = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIS = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CashAdmin.on-We = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Push-Info.DelSy = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Saving-Account = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MFA-MRA = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Magnetic-Strip = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | App-NonPC = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Lotus-Notes-DB = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Notebook = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-XP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Anti-Virus = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | OS/2 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scanner = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-97 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bank-Reference = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LotusNotesServe = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Host-on-Demand = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Statement = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LI = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CTD-(E-Report) = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Transact-BP = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fin.Accept.Cer. = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bar-Code = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NAV-(PC) = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-98 = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Br-App-Re = 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | e-Booth = 0: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | e-Booth = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Br-App-Re = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-98 = 1: NWS

| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NAV-(PC) = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bar-Code = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fin.Accept.Cer. = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Transact-BP = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CTD-(E-Report) = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LI = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Statement = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Host-on-Demand = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LotusNotesServe = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bank-Reference = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-97 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scanner = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | OS/2 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Anti-Virus = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | WIN-XP = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Notebook = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Lotus-Notes-DB = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | App-NonPC = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Magnetic-Strip = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MFA-MRA = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Saving-Account = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Push-Info.DelSy = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PA = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CashAdmin.on-We = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | MIS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LMS-Report-Mgn. = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | IVR = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | DCS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Cash-Connect = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CMAS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CA = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | BL-Entry = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Bill-Payment = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Vlink = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PeopleSoft = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FX-on-web = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EBPP = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CIPS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | LPM = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ROSS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2OOO = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | CAT = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | Browser = 1: NWS
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 1: NWS
| | | | | | | | | | | | | | | | | | | | Server = 1
| | | | | | | | | | | | | | | | | | | | | Print-Server = 0
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 0: NWS
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | | | Print-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | HQ = 1: NWS
| | | | | | | | | | | | | | | | | | WIN-NT = 1: NWS
| | | | | | | | | | | | | | | | | IB = 1: EOS
| | | | | | | | | | | | | | | | Home-Banking = 1: EOS
| | | | | | | | | | | | | | | CardLink = 1: VEN
| | | | | | | | | | | | | | CTR = 1: EOS
| | | | | | | | | | | | | K-Cyber-Banking = 1: EOS
| | | | | | | | | | | | LotusNotesClien = 1: NWS
| | | | | | | | | | | Internet-Bankin = 1: EOS
| | | | | | | | | | CDM = 1: OS-EC
| | | | | | | | | Branch = 1
| | | | | | | | | | Branch-App. = 0: NWS
| | | | | | | | | | Branch-App. = 1: IE-AMS
| | | | | | | | WIN-2000 = 1: NWS
| | | | | | | Personal-Comp. = 1: NWS
| | | | | | LotusNoteCitrix = 1: NWS
| | | | | Data-Warehouse = 1: IE-AMS
| | | | Printer = 1: NWS
| | | Update-Passbook = 1: VEN
| | Electrical-Supply = 1: OS-EC
| WAN = 1: VEN

ATM = 1: OS-EC
Time taken to build model: 1.57 seconds
=== Stratified cross-validation ===
=== Summary ===

Correctly Classified Instances 4567 93.0332 %
Incorrectly Classified Instances 342 6.9668 %
Kappa statistic 0.8668
K&B Relative Info Score 404071.9478 %
K&B Information Score 6120.7864 bits 1.2468 bits/instance
Class complexity | order 0 7425.008 bits 1.5125 bits/instance
Class complexity | scheme 11293.8523 bits 2.3006 bits/instance
Complexity improvement (Sf) -3868.8443 bits -0.7881 bits/instance
Mean absolute error 0.0456
Root mean squared error 0.1526
Relative absolute error 20.9496 %
Root relative squared error 46.2673 %
Total Number of Instances 4909

=== Detailed Accuracy By Class ===

TP Rate FP Rate Precision Recall F-Measure Class
0.324 0.003 0.759 0.324 0.454 EOS
0.866 0.003 0.88 0.866 0.873 IE-AMS
0.99 0.129 0.93 0.99 0.959 NWS
0.884 0.01 0.961 0.884 0.921 OS-EC
0.837 0.01 0.91 0.837 0.872 VEN

=== Confusion Matrix ===

a b c d e <-- classified as
44 3 89 0 0 | a = EOS
10 110 7 0 0 | b = IE-AMS
0 9 3074 0 21 | c = NWS
4 3 89 903 22 | d = OS-EC
0 0 48 37 436 | e = VEN

A-4 An Extended Part of ID3 Decision Tree Results
Figure A-3 shows an extended part of ID3 decision tree results.



FIGURE A-3 An Extended Part of ID3 Decision Tree

A-5 A Sample of ID3-Based Generating Rules
Figure A-4 shows a sample of ID3-based generating rules.

Class
KW1 KW2 KW3 KW4 KW5 KW6 KW7 KW8 KW9 KW10 KW11 KW12 --- Assign Groups
ATM WAN E-Supply Passbook Printer D-Warehouse LotusNoteCitrix P-Comp. Win2000 Branch Branch-App. CDM
1 0 0 0 0 0 0 0 0 0 0 0 --- OS-EC
0 1 0 0 0 0 0 0 0 0 0 0 --- VEN
0 0 1 0 0 0 0 0 0 0 0 0 --- OS-EC
0 0 0 1 0 0 0 0 0 0 0 0 --- VEN
0 0 0 0 1 0 0 0 0 0 0 0 --- NWS
0 0 0 0 0 1 0 0 0 0 0 0 --- IE-AMS
0 0 0 0 0 0 1 0 0 0 0 0 --- NWS
0 0 0 0 0 0 0 1 0 0 0 0 --- NWS
0 0 0 0 0 0 0 0 1 0 0 0 --- NWS
0 0 0 0 0 0 0 0 0 1 1 0 --- IE-AMS
0 0 0 0 0 0 0 0 0 1 0 0 --- NWS
0 0 0 0 0 0 0 0 0 0 0 1 --- OS-EC
--- --- --- --- --- --- --- --- --- --- --- --- --- ---
Attributes


FIGURE A-4 A Sample of ID3-Based Pattern Kept in Knowledge Database
The IF-THEN Rules could be presented as in the following:
1. IF keyword (KW) = ATM THEN Assigned Group is OS-EC ELSE Go to 2,
2. IF keyword (KW) = WAN THEN Assigned Group is VEN ELSE Go to 3,

10. IF keyword (KW) = Branch AND Branch-App THEN Assigned Group is
IE-AMS ELSE Go to 11,
11. IF keyword (KW) = Branch THEN Assigned Group is NWS ELSE Go to 12,
12. IF keyword (KW) = CDM THEN Assigned Group is OS-EC ELSE Go to 13,
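A small Python rendering of the listed rules (rules 3 to 9 are elided here, as in the text above; the function is illustrative, not the thesis implementation) is:

# Illustrative rendering of the IF-THEN assignment rules listed above.
# Each rule is (required keywords, assigned resolver group), evaluated in order.
RULES = [
    ({"ATM"}, "OS-EC"),                   # rule 1
    ({"WAN"}, "VEN"),                     # rule 2
    ({"Branch", "Branch-App"}, "IE-AMS"), # rule 10
    ({"Branch"}, "NWS"),                  # rule 11
    ({"CDM"}, "OS-EC"),                   # rule 12
]

def assign_group(keywords):
    for required, group in RULES:
        if required <= set(keywords):     # all keywords of the rule are present
            return group
    return None                           # fall through to the remaining rules

print(assign_group(["Branch", "Branch-App"]))   # -> IE-AMS
print(assign_group(["CDM"]))                    # -> OS-EC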


| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | O S / 2 = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A n t i - V i r u s = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | W I N - X P = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | N o t e b o o k = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | L o t u s - N o t e s - D B = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A p p - N o n P C = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | M a g n e t i c - S t r i p = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | M F A - M R A = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | S a v i n g - A c c o u n t = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | P u s h - I n f o . D e l S y = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | P A = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | C a s h A d m i n . o n - W e = 1 : N W S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | M I S = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | L M S - R e p o r t - M g n . = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | I V R = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | D C S = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | C a s h - C o n n e c t = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | C M A S = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | C I S = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | C A = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | B L - E n t r y = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | B i l l - P a y m e n t = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | V l i n k = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | P e o p l e S o f t = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | F X - o n - w e b = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | E B P P = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | C I P S = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | L P M = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | R O S S = 1 : I E - A M S
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | FICS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | EDW = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | SAFE = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | | FCD = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | | | MS-Office-2000 = 1: NWS
| | | | | | | | | | | | | | | | | | | | | | | | | | DMS = 1: IE-AMS
| | | | | | | | | | | | | | | | | | | | | | | | | K-P-Gateway = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | | SSMM = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | | CAT = 1: VEN
| | | | | | | | | | | | | | | | | | | | | | Browser = 1: NWS
| | | | | | | | | | | | | | | | | | | | | KBANKNET = 1: NWS
| | | | | | | | | | | | | | | | | | | | Server = 1
| | | | | | | | | | | | | | | | | | | | | Print-Server = 0
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 0: NWS
| | | | | | | | | | | | | | | | | | | | | | Share-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | | | Print-Server = 1: EOS
| | | | | | | | | | | | | | | | | | | HQ = 1: NWS
| | | | | | | | | | | | | | | | | | WIN-NT = 1: NWS
| | | | | | | | | | | | | | | | | IB = 1: EOS
| | | | | | | | | | | | | | | | Home-Banking = 1: EOS
| | | | | | | | | | | | | | | CardLink = 1: VEN
| | | | | | | | | | | | | | CTR = 1: EOS
| | | | | | | | | | | | | K-Cyber-Banking = 1: EOS
| | | | | | | | | | | | LotusNotesClien = 1: NWS
| | | | | | | | | | | Internet-Bankin = 1: EOS
| | | | | | | | | | CDM = 1: OS-EC
| | | | | | | | | Branch = 1
| | | | | | | | | | Branch-App. = 0: NWS
| | | | | | | | | | Branch-App. = 1: IE-AMS
| | | | | | | | WIN-2000 = 1: NWS
| | | | | | | Personal-Comp. = 1: NWS
| | | | | | LotusNoteCitrix = 1: NWS
| | | | | Data-Warehouse = 1: IE-AMS
| | | | Printer = 1: NWS
| | | Update-Passbook = 1: VEN
| | Electrical-Supply = 1: OS-EC
| WAN = 1: VEN
ATM = 1: OS-EC
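Read from the root (no leading bar) downward, the tree above behaves largely as an ordered decision list: the first component attribute flagged with 1 in an incident record determines the resolver group (OS-EC, VEN, NWS, EOS, or IE-AMS), and the all-zero path ends at IE-AMS. The following is a minimal illustrative Python sketch of applying rules of this shape; the rule table reproduces only a few of the leaves above, omits the nested Server and Branch sub-tests, and is not the thesis implementation.

# Minimal sketch (not the thesis implementation) of applying the single-attribute
# rules shown in the listing above. Because each level tests one binary
# attribute, the tree reduces to an ordered decision list: the first flagged
# component decides the resolver group. Only a few leaves are reproduced here,
# and the nested Server and Branch sub-tests are omitted.
RESOLVER_RULES = [              # (component attribute, resolver group)
    ("ATM", "OS-EC"),
    ("WAN", "VEN"),
    ("Electrical-Supply", "OS-EC"),
    ("Update-Passbook", "VEN"),
    ("Printer", "NWS"),
    ("Data-Warehouse", "IE-AMS"),
    ("LotusNoteCitrix", "NWS"),
    ("Personal-Comp.", "NWS"),
    ("WIN-2000", "NWS"),
]

def assign_resolver_group(flagged_components, default_group="IE-AMS"):
    """Return the resolver group of the first rule whose component is flagged.

    flagged_components: set of attribute names that are 1 for the incident.
    default_group: the all-zero path in the listing above ends at IE-AMS.
    """
    for component, group in RESOLVER_RULES:
        if component in flagged_components:
            return group
    return default_group

# Example: a printer incident goes to NWS; if WAN is also flagged, the earlier
# (shallower) WAN rule wins and the incident goes to VEN.
print(assign_resolver_group({"Printer"}))          # NWS
print(assign_resolver_group({"WAN", "Printer"}))   # VEN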













APPENDIX B

ITIL-BASED KMRCA IT SERVICE DESK PROCESS



B-1 ITIL-Based Incident Management Process
An incident is any event that deviates from the normal operation of a service and
that causes, or may cause, an interruption to, or a reduction in, the quality of that
service. The goal of Incident Management is to restore standard service operation
as quickly as possible. Incident analysis and resolution may reveal the cause of the
incident. If this is not the case, and if further investigation is justified in terms of
cost and effort, then the Problem Management process is invoked and a problem
record is raised. That process defines activities to investigate the problem, which is
defined as the unknown underlying cause of one or more incidents. The status of the
problem is changed to known error when both the root cause is known and a
workaround or a permanent resolution has been identified.
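As a minimal illustration of the status rule just described (an illustrative Python sketch, not part of the thesis system):

def problem_status(root_cause_known: bool, resolution_identified: bool) -> str:
    """A problem becomes a known error once its root cause is known and a
    workaround or permanent resolution has been identified."""
    return "known error" if root_cause_known and resolution_identified else "problem"

# Example: root cause found but no workaround yet -> still a problem.
print(problem_status(True, False))   # problem
print(problem_status(True, True))    # known error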
The scope of the Incident Management process includes:
(a) Opening an incident record
(b) Updating the incident record throughout the process to reflect its status
(c) Assigning the incident to an incident resolver
(d) Analyzing the incident and performing incident determination
(e) Implementing a workaround or resolution for the incident to perform
recovery of the service
(f) Monitoring incident (request) queues to ensure that all incidents are
resolved within committed service levels and reprioritizing or reassigning or escalating
as necessary.
Note that during the implementation of the workaround or resolution for the
incident, the Incident Management process is not directly responsible for
implementing the solution, but it monitors and records the progress and results
of the solution implementation.
(g) Updating the incident knowledge database to assist with future incident and
problem investigation and diagnosis
(h) Closing the incident record
(i) Calling the Handle and Control Problems operational process where the
root cause of the incident or problem has not been identified.
Figure B-1 shows the Incident Management Process Flow.


FIGURE B-1 IT Incident Management Process Flow

Narrative of Incident Management Process
The following Step 1 through Step 7 are performed by Banks help desk or
called FLS (first level support), and Step 8 through Step 31 are performed by IT
service desk outsourcing or called SLS (second level support), and the rest Steps are
performed by Resolver Groups or called TLS (third level support) as follows:
1. Open Incident Record Procedure
Refer to the Open Incident Record procedure to open an incident record for
the incident information.
1. Major Incident?
The Incident Policy defines a severity 1 incident as a major incident. Follow
the policy to determine if the incident is a major incident.
(a) If it is Yes, proceed to Handle Major Incident Procedure.
(b) If it is No, proceed to IT Outsourcing Scope?
2. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure to assign a major incident
owner to handle all required notifications and escalations.
3. IT Outsourcing Scope?
Determine whether the incident is an IT incident and whether its description
falls within the IT outsourcing scope, referring to the IT outsourcing contract.
(a) If it is Yes, proceed to Assign Incident to SLS Resolver.
(b) If it is No, proceed to Assign Incident to Bank Resolver.
4. Assign Incident to Bank Resolver
Assign a non-IT incident to Bank resolver.
Proceed to End.
5. Assign Incident to SLS Incident Resolver
Assign an IT Incident to SLS Resolver who is responsible for resolving IT
incidents of this type.
6. Update Incident Record with Current Status
Update the incident record to indicate that the incident has been assigned to
an SLS Resolver and is awaiting resolution until the incident is closed.





7. Review Incident Record For Completeness
Review the incident record to ensure that its contents are complete. The
incident record includes the following information (a minimal record sketch
follows the list):
(a) Incident ID
(b) When the incident opened (date and time)
(c) Identified incident severity (1, 2, 3, or 4)
(d) Incident status (open/ assign to/ resolving steps/ close)
(e) System, component, item failure
(f) Caller, Requester (name/ location/ contact no.)
(g) Incident descriptions
(h) SLS owner (who/ when )
(i) TLS owner (who/ when)
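A minimal sketch of such a record as a Python data structure is shown below; the field names paraphrase the list above and do not represent the actual KMRCA database schema.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class IncidentRecord:
    """Minimal sketch of the incident record fields listed above."""
    incident_id: str
    opened_at: datetime                         # (b) date and time the incident was opened
    severity: int                               # (c) 1, 2, 3, or 4
    status: str                                 # (d) open / assigned / resolving / closed
    failed_item: str                            # (e) system, component, or item that failed
    requester_name: str                         # (f) caller / requester
    requester_location: str
    requester_contact: str
    description: str                            # (g) incident descriptions
    sls_owner: Optional[str] = None             # (h) SLS owner (who)
    sls_assigned_at: Optional[datetime] = None  # (h) SLS owner (when)
    tls_owner: Optional[str] = None             # (i) TLS owner (who)
    tls_assigned_at: Optional[datetime] = None  # (i) TLS owner (when)
    resolving_steps: List[str] = field(default_factory=list)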
8. IT Outsourcing Scope ?
Check if the incident is in IT outsourcing scope.
(a) If it is Yes, proceed to Additional Information Needed.
(b) If it is No, proceed to Indicate Incident Type.
9. Indicate Incident Type
If the incident was initially misassigned because of a wrong scope and/or a
wrong resolver, indicate the request type of the incident and, if known, the
details of whom the incident should most appropriately be reassigned to, and
request reassignment.
10. Request for Reassignment
Request FLS to review the scope of the incident and reassign it for the
reasons provided.
11. Additional Information Needed?
Determine if additional information is needed to complete the incident
record.
(a) If it is Yes, proceed to Contact Appropriate Parties to get More
Information.
(b) If it is No, proceed to Validate Initial Severity.





12. Validate Initial Severity
The severity policy defines severity 1 as a critical incident, severity 2 as a
high incident, severity 3 as a normal incident, and severity 4 as a low
incident. Validate the initially assigned severity against this policy.
13. Contact Appropriate Parties to get More Information
Contact the most appropriate parties to get more information. Policy should
dictate how many attempts or how long the incident resolver should spend
trying to obtain additional information before this becomes an issue.
14. Required Information Obtained?
Check whether the required information was obtained from the contacted parties.
(a) If it is Yes, proceed to Update Incident Record with Any
Additional Information
(b) If it is No, proceed to Document Issue
15. Update Incident Record with Any Additional Information
Update the incident record with any additional information.
16. Document Issue
Document the issue when the required information is not received on time.
17. Perform Escalation
Handle escalations of issues associated with requests. SLS personnel may
escalate request handling at any time by notifying the next higher level of
the contact party that the issue was not resolved, and by documenting the
unsuccessful resolution.
18. Issue Resolved?
Check if the issue is resolved.
(a) If it is Yes, proceed to Update Incident Record with Any
Additional Information
(b) If it is No, proceed to Close Incident?
19. Major Incident?
Determine whether the updated incident is a major incident, based on the major
incident policy.
(a) If it is Yes, proceed to Handle Major Incident Procedure.
(b) If it is No, proceed to Perform Incident Analysis Procedure.



20. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure. A Major Incident owner needs
to be assigned who handles all required notifications and escalations until
the major incident is complete.
21. Perform Incident Analysis Procedure
Refer to the Incident Analysis procedure to gather all required information
about the incident and related incidents and to perform incident determination,
investigation and diagnosis activities.
22. TLS Required?
Determine whether a TLS resolver group is required to resolve the assigned
incident and, if so, which resolver group it should be assigned to. In
particular, compare the incident to the database of incident records to
determine whether this is a repeat occurrence of a previous incident. It
may be more effective if the same resolver handles all related incidents.
(a) If it is Yes, proceed to Assign/ Reassign incident to Appropriate
Incident Resolver Group.
(b) If it is No, proceed to Attempt to Resolve Incident.
23. Attempt to Resolve Incident
Attempt to resolve the incident with the SLS resolver's skills and availability.
24. Knowledge-Based Required?
Determine whether the Knowledge-Based function is required to resolve the
incident by searching for similar cases and retrieving the resolutions of
previous incidents from the knowledge database.
(a) If it is Yes, proceed to Search Required Information from
Knowledge-Based.
(b) If it is No, proceed to Perform incident Determination Procedure
25. Search Required Information from Knowledge-Based
Search the knowledge database for the information required to resolve the
incident (see the sketch below).
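A minimal sketch of such a knowledge search is shown below, assuming a simple keyword-overlap (Jaccard) score and an illustrative entry schema; the actual searching knowledge function of the KMRCA system may differ.

import re

def tokenize(text: str) -> set:
    """Lower-case word tokens for a crude similarity measure."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def search_knowledge_base(description: str, knowledge_base, top_n: int = 3):
    """Return the top_n past incidents whose descriptions overlap most.

    knowledge_base: iterable of dicts with 'description' and 'resolution' keys
    (an assumed, illustrative schema).
    """
    query = tokenize(description)
    scored = []
    for entry in knowledge_base:
        candidate = tokenize(entry["description"])
        union = query | candidate
        score = len(query & candidate) / len(union) if union else 0.0  # Jaccard overlap
        scored.append((score, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for score, entry in scored[:top_n] if score > 0]

# Example usage with two previously resolved incidents.
kb = [
    {"description": "Printer at branch not printing statements", "resolution": "Restart print spooler"},
    {"description": "ATM out of service after power failure", "resolution": "Dispatch vendor engineer"},
]
print(search_knowledge_base("Branch printer cannot print passbook", kb, top_n=1))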
26. Perform Incident Determination Procedure
Refer to the Perform Incident Determination procedure.






27. Close Incident?
For an actual incident, determine if the incident should be closed due to the
lack of information required to proceed with resolution of the incident.
(a) If it is Yes, proceed to Inform Requester that Incident will be Closed
(b) If it is No, proceed to Take Incident Out of SLA Criteria
28. Take Incident Out of SLA Criteria
If the incident should not be closed due to lack of information needed to
proceed with resolution of the incident, take the incident out of SLA criteria
so that it will not be included in SLA attainment reports.
Return to Contact Appropriate Parties to obtain the additional information
required to proceed with resolution of the incident.
29. Inform Requester that Incident will be Closed
If the incident should be closed due to the lack of information needed to
proceed with resolution of the incident, inform the Requester that the
incident will be closed.
30. Update Incident Record with its Close
Update the incident record to indicate that the required information could
not be obtained and that the incident will be closed.
Proceed to End.
31. Assign/ Reassign incident to Appropriate Incident Resolver Group
Determine if the result of Incident Analysis reassigned the incident to a
different Resolver Group.
(a) If it is Yes, return to Assign Incident to Incident Resolver to assign
the incident to a new Incident Resolver.
(b) If it is No, proceed to Actual Incident?
Note that the incident is assigned and/or reassigned to the most appropriate
TLS incident resolver based on skill level and availability within the TLS
Resolver Group (see the selection sketch below).
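A minimal sketch of selecting a resolver by skill level and availability is shown below; the data shapes and function name are illustrative only.

def select_tls_resolver(resolvers, required_skill):
    """Pick the available resolver with the highest level in the required skill.

    resolvers: list of dicts like
        {"name": ..., "available": bool, "skills": {"skill name": level}}
    Returns the chosen resolver dict, or None if nobody is available.
    """
    candidates = [r for r in resolvers
                  if r["available"] and required_skill in r["skills"]]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r["skills"][required_skill])

team = [
    {"name": "Resolver A", "available": True,  "skills": {"ATM": 2, "WAN": 3}},
    {"name": "Resolver B", "available": True,  "skills": {"ATM": 4}},
    {"name": "Resolver C", "available": False, "skills": {"ATM": 5}},
]
print(select_tls_resolver(team, "ATM")["name"])   # Resolver B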
32. Review for Corrective Assignment
Review the assigned incident to confirm that the resolver group is correct.
33. Correct Assignment?
Determine if the review indicates that the assigned incident is correct
assignment.
(a) If it is Yes, proceed to Perform Incident Analysis Procedure to
analyse the incident.
(b) If it is No, proceed to Indicate Request Type and Reassignment Details


34. Indicate Request Type and Reassignment Details
If the assignment is incorrect, indicate the request type and provide
reassignment details, such as who is appropriate to resolve the incident.
35. Request SLS for Reassignment
Request reassignment; SLS will review and reassign.
36. Perform Incident Analysis Procedure
Refer to the Incident Analysis procedure to gather all required information
about the incident and related incidents and to perform incident
determination, investigation and diagnosis activities.
37. Knowledge-based Required?
Determine if the Knowledge-based is required to get the required
information.
(a) If it is Yes, proceed to Search Required Information from
Knowledge-Based
(b) If it is No, proceed to Attempt to Resolve Incident
38. Search Required Information from Knowledge-Based
Search the required information from the Knowledge database.
39. Attempt to Resolve Incident
Attempt to resolve the incident based on skills and availability.
40. Close Incident?
Determine whether to close the incident when incident processing has been completed.
(a) If it is Yes, proceed to Close Incident Procedure
(b) If it is No, proceed to Recovery Required?
41. Recovery Required?
If the incident is an actual incident, determine if recovery from the incident
is required prior to implementation of a permanent solution.
(a) If it is Yes, proceed to Perform Incident Recovery.
(b) If it is No, proceed to Handle and Control Problems.
42. Perform Incident Recovery
If recovery of the incident is required prior to permanent resolution of the
incident, proceed to the Perform Incident Recovery as the following.
(a) Review the Recovery Plan with affected parties
(b) Check whether the required recovery is within entitlement
(c) Check whether a service request is required

(d) Determine whether a request for change is needed
(e) Update incident record to indicate recovery result either successful or
unsuccessful
43. Was Incident Recovered?
Determine if the Perform Incident Recovery was successful in recovering
from the incident.
(a) If it is Yes, proceed to Incident Permanently Resolve or Agree to
Workaround Applied?
(b) If it is No, proceed to Close Incident Record Procedure.
44. Incident Permanently Resolved or Agreed Workaround Applied?
Determine if the Perform Incident Recovery provided a permanent
resolution for the incident. That is, is the recovery action or bypass
acceptable as a permanent solution?
(a) If it is Yes, proceed to Add Resolution to Knowledge-Based.
(b) If it is No, proceed to Problem Management Process
Refer to the Problem Management process to develop a permanent
solution for the problem.
Note that a problem is the unknown underlying cause of one or more
incidents. The status of the problem is transformed to known error when
both the root cause is known and a temporary workaround or a permanent
resolution has been identified.
Proceed to End.
45. Add Resolution to Knowledge-Based
Add the resolutions to the knowledge database to assist with future incident
and problem investigation and diagnosis.
46. RCA Required?
Follow the policy to determine if an RCA is required for the recovered incident
for which the recovery action is acceptable as a permanent resolution.
(a) If it is Yes, proceed to Handle and Control Problems (RCA).
(b) If it is No, proceed to Close Incident Record.
47. Close Incident Record Procedure
When processing of the incident has completed either successfully or
unsuccessfully, proceed according to the Close Incident Record procedure
to close the associated incident record.
48. End
End of Incident Management Process


Figure B-2 shows Open Incident Record Procedure


FIGURE B-2 Open Incident Record Flow

Narrative of Open Incident Record Procedure
1. Incident Record Already Open?
Check if an incident record has already been opened for the incident.
(a) If it is Yes, proceed to Review Open Incident Policy
(b) If it is No, proceed to Return
2. Review Open Incident Policy
Review the Open Incident policy, in particular the details for items such as:
(a) Who is authorized to open incident records?
(b) What information is required when opening an incident?
3. Open an incident record for the incident.
Open an incident record for the incident with required information.
The required information to be included in an incident record is:
(a) Incident ID
(b) Date and Time when open incident Record

(c) Incident description
(d) Outage details, in particular the failing component/resource and the
date/time the incident occurred
(e) Incident severity based on business impact
(f) Incident requester (requesters name, location and contact no.)
(g) Incident status (open/ assign resolver/ necessary resolving steps/ close)
4. Gather Required Information
Gather required information based on policy to complete the incident record.
5. Entitle?
Follow the policy to determine if the Requester is entitled to raise this incident.
(a) If it is Yes, proceed to Assign Severity to Incident
(b) If it is No, proceed to Document Entitlement Failure Detail
6. Document Entitlement Failure Detail
If the Requester was not entitled to raise this incident, document the details
of the entitlement failure in preparation for calling the Handle Service
Entitlement Failure.
7. Handle Service Entitlement Failure
Handle Service Entitlement Failure resolves entitlement failures for
requested services and updates request records to reflect the disposition of
those failures. The incident shall be checked against the service contracts,
in particular the IT outsourcing contract. An alternative entitlement may be
proposed with authorized approval.
8. Continue?
Determine if the decision was made in the Handle Service Entitlement
Failure to continue with the incident.
(a) If it is Yes, proceed to Assign Severity to Incident
(b) If it is No, proceed to Return
9. Assign Severity to Incident
Assign severity based on severity definition and its policy to the incident.
Proceed to Return.
10. Return
Return to the Incident Management Process

Figure B-3 shows Handle Major Incident Procedure



FIGURE B-3 Handle Major Incident Flow

Narrative of Handle Major Incident Procedure
1. Gather Information for Major incident
If the incident is associated with a major incident, collect all related
information regarding the incident such as:
(a) Services/ applications/ resources affected
(b) Affected service owners
(c) Estimated duration of any associated outages
2. Major Incident Criteria Met?
Determine whether the major incident criteria have been met, based on major
incident severity 1, which has the greatest business impact in terms of the
availability of a specific service, application, or network.

(a) If it is Yes, proceed to Assign major incident Owner
(b) If it is No, proceed to Inform Requester that Incident Not Major
Incident with Reasons
3. Inform Requester that Incident Not Major Incident with Reasons
Inform the Requester that the incident is not a major incident, with the reason
why the incident was not assigned severity 1.
4. Assign Major Incident Owner
Assign a major incident owner who handles all required notifications and
escalations until the resolution is complete.
5. Coordinate Recovery for Major incident.
Coordinate relevant resources for major incident recovery and effectively
manage the recovery activities to minimize the duration of the incident.
6. Major Incident Notification
Perform the major incident notification as the following:
(a) Analyze the incident in detail, take whatever actions are necessary to
confirm whether or not the associated service is actually down or is
severely degraded.
(b) If the service is actually down, urgently notify all affected parties of
the service outage (the management team and service recovery teams) by short
message (SMS) and/or email, with ongoing status updates as required.
(c) If the service is not actually down or severely degraded, notify the
appropriate service providers so that they may handle the incident.
7. Perform Problem Management Process
Perform the Problem Management process to permanently resolve the incident.
8. Major Incident Review Required?
Determine whether the criteria for conducting a major incident review have
been met, based on incident severity 1 and its business impact, in particular
on the availability of a specific service, application, or network.
(a) If it is Yes, proceed to Perform Major Incident Review
(b) If it is No, proceed to Notify All Parties

9. Perform Major Incident Review
Assemble appropriate parties in preparation to conduct a review of an
incident.
10. Notify All Parties
Inform all participants either that a major incident review is not needed or
that the criteria for conducting an incident review have not been met.
Proceed to Return.
11. Return
Return to the Incident Management process

Figure B-4 shows Perform Incident Analysis Procedure



FIGURE B-4 Perform Incident Analysis Flow

Narrative of Perform Incident Analysis Procedure
1. Collect Incident Symptom and Configuration Item Impact Info
Collect all available data about the incident, its symptoms, severity and
associated configuration data based on its component.



2. Identify Any Related Occurrence
Identify any related occurrences of the incident and analyze them against
similar previous cases.
3. Need To Reproduce Incident?
Determine if there is a need to reproduce the incident to obtain additional
information to understand the exact environment in which the incident
occurred.
(a) If it is Yes, proceed to Reproduce Proper Incident
(b) If it is No, proceed to Analyse Available Incident Data
4. Reproduce Proper Incident
If there is a need to reproduce the incident to gather additional insight about
the incident, attempt to reproduce the incident.
5. Incident Reproducible?
Determine if the incident is reproducible.
(a) If it is Yes, proceed to Update Incident Record with Additional
Details
(b) If it is No, proceed to Analyse Available Incident Data
6. Update Incident Record with Additional Details
Update the incident record with additional details.
7. Analyse Available Incident Data
Analyze all available incident data to validate that the incident was assigned
to the correct resolver group.
8. Correct Assignment?
Determine if the incident was assigned to the correct resolver group based on
the review of the incident record and all incident data.
(a) If it is Yes, proceed to Perform Incident Determination Procedure
(b) If it is No, proceed to Indicate Request Type
9. Indicate Request Type
If the incident record was incorrectly assigned, indicate request type and
document the reassignment details in preparation for calling the reassign
request


10. Request for Reassignment
Request reassignment to reassign the incident to the correct resolver group,
and return to Assign/Reassign Incident to Appropriate Incident Resolver to
assign the incident to a new incident resolver.
11. Perform Incident Determination Procedure
If the incident was assigned to the correct resolver, proceed to perform
Incident Determination procedure to continue with incident analysis and
development of a Recovery Plan.
12. Return
Return to the Incident Management Process

Figure B-5 shows Incident Determination Procedure



FIGURE B-5 Incident Determination Flow


Narrative of Incident Determination Procedure
1. Initiate Incident Determination
Analyze all available incident data and initiate normal incident
determination activities. All single points of failure should be identified.
2. Actual Incident?
Determine if the reported incident is indeed an actual incident.
(a) If it is Yes, proceed to Determine Incident Impact
(b) If it is No, proceed to Action Required?
3. Action Required?
Determine if any action is required.
(a) If it is Yes, proceed to Perform Appropriate Action
(b) If it is No, proceed to Update Incident Record to Indicate that
Incident is Not an Actual Incident
4. Update Incident Record to Indicate that Incident is Not an Actual Incident
Update the incident record to indicate that the incident is not an actual incident.
Proceed to Return
5. Perform Appropriate Action
Perform the appropriate action for the non-actual incident and check whether
notification is required.
6. Notification Required?
Determine if the notification is required.
(a) If it is Yes, proceed to Notify Appropriate Parties to Perform Action
(b) If it is No, proceed to Update Incident Record with Current Status
7. Notify Appropriate parties to perform Action
Notify appropriate parties to perform action for non-actual incident.
8. Determine Incident Impact
Determine the impact of the incident, in particular on crucial services,
components, applications, and networks.
9. Determine to Adjust Severity
Determine whether to adjust the assigned severity. A severity adjustment,
either up or down, will be notified to the FLS for negotiation.


10. Major Incident?
Based on the Major Incident policy, determine if the incident is a major incident.
(a) If it is Yes, proceed to Handle Major Incident Procedure
(b) If it is No, proceed to Recovery required
11. Handle Major Incident Procedure
Refer to the Handle Major Incident procedure to assign a major incident
owner to the incident and to handle all required notifications and escalations.
12. Recovery Required?
Determine if there is any recovery required to the incident.
(a) If it is Yes, proceed to Perform Backup and Recovery
(b) If it is No, proceed to Update Incident Record with Current Status
13. Perform Backup and Recovery
Perform recovery according to Backup and Recovery procedure
14. Update Incident Record with Current Status
Update incident record with the current status.
15. Return
Return to Incident Management Process


Figure B-6 shows Close Incident Record Procedure



FIGURE B-6 Close Incident Record Flow

Narrative of Close Incident Record Procedure
1. Review Close Incident Policy
Review the Close Incident policy for the account. The policy shall define:
(a) Who can close incident records
(b) Required closure concurrence, if any
(c) Required notifications, if any
2. Closure Concurrence Required?
Follow the policy to determine if concurrence to close the incident is
required.
(a) If it is Yes, proceed to Obtain Closure Concurrence from Appropriate
Parties.
(b) If it is No, proceed to Close Incident Record.


3. Obtain Closure Concurrence from Appropriate Parties
If concurrence to close the incident is required, follow the Close Incident
policy to obtain concurrence from the appropriate parties.
4. Concurrence Obtained?
Determine if concurrence to close the incident was obtained from all
appropriate parties.
(a) If it is Yes, proceed to Close Incident Record.
(b) If it is No, proceed to Document Closure Issue.
5. Close Incident Record
Close the incident record, ensuring that the incident record contains all the
required information, including the closing status, code and recovery and
resolution dates and times.
6. Notification Required?
Follow the Notification policy to determine if notification is required that the
incident has been closed.
(a) If it is Yes, proceed to Notify Appropriate Parties.
(b) If it is No, proceed to Return.
7. Notify Appropriate Parties
If notification is required, follow the Notification policy to notify the
appropriate parties that the incident has been closed and its closing status.
The following personnel are to be notified that a severity 1 incident has been
closed:
(a) Incident Coordinator
(b) Requester/ User
(c) Designated customer incident liaison
8. Return
Proceed to Return.

B-7 ITIL-Based Problem Management Process
The scope of the Problem Management process includes:
(a) Reviewing problem and incident trend analysis
(b) Opening a problem record
(c) Performing RCA (root cause analysis)
(d) Assigning the problem to an appropriate problem resolver
(e) Developing a permanent resolution plan
(f) Implementing the permanent resolution plan
(g) Closing the problem record

Figure B-7 shows the Problem management process flow



FIGURE B-7 IT Problem Management Process Flow

Narrative of Problem Management Process:
There are two purposes of the Problem Management process. One is to perform
preventive action by analyzing problem and incident trends and determining an
action plan (the ongoing path). The other is to handle each problem as required
by the Incident Management process (the as-required-for-each-problem path).

The Ongoing path includes one procedure.
1. Review Problem and Incident Trend Analysis procedure
Refer to the Review Problem and Incident Trend Analysis procedure to analyse
negative trends in the incident and problem processes and to determine whether
an action plan is needed as preventive action.
Proceed to End
The as-required-for-each-problem path includes the following.
1. Open Problem Record Procedure
Refer to Open Problem Record procedure.
2. Request for RCA?
Determine if the problem was opened for a request to perform a Root Cause
Analysis for a negative process trend.
(a) If it is Yes, proceed to Perform Root Cause Analysis Procedure.
(b) If it is No, proceed to Assign to Problem Resolver Procedure.
3. Perform Root Cause Analysis Procedure
Refer to Perform Root Cause Analysis procedure
Proceed to End
4. Assign to Problem Resolver Procedure
Refer to Assign to Problem Resolver procedure
5. Develop Permanent Resolution Plan Procedure
Refer to Develop Permanent Resolution Plan procedure
6. Was Resolution Developed?
Determine if the resolution Plan was developed.
(a) If it is Yes, proceed to Implement Permanent Resolution Plan
Procedure
(b) If it is No, proceed to Close Problem Record Procedure
7. Implement Permanent Resolution Plan Procedure.
Refer to Implement Permanent Resolution Plan Procedure.
8. Was Resolution Successful?
Determine if the resolution was successful.
(a) If it is Yes, proceed to Close Problem Record procedure.
(b) If it is No, proceed to Proceed to Another Effective Resolution Plan

9. Proceed to Another Effective Resolution Plan
If the resolution plan was implemented unsuccessfully, document the issue and
proceed to another effective resolution plan.
Proceed to Develop Permanent Resolution Plan Procedure
10. Close Problem Record procedure
Refer to Close Problem Record procedure
11. End
End of Problem Management Process

Figure B-8 shows Review Problem and Incident Trend Analysis Procedure



FIGURE B-8 Review Problem and Incident Trend Analysis

Narrative of Review Problem and Incident Trend Analysis
1. Review problem and incident trend analysis
Review problem and incident trend analysis to proactively determine
potential problems that have not yet been identified by the occurrence of an
incident or recurring data that might indicate an unidentified problem.
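A minimal sketch of such a trend check is shown below, counting incidents per category per period and flagging categories whose counts keep rising; the threshold (three strictly increasing periods) and the input shape are illustrative assumptions, not the thesis method.

from collections import defaultdict

def rising_trends(incidents, periods=3):
    """Flag categories whose per-period incident counts are strictly increasing.

    incidents: iterable of (period_index, category) pairs, e.g. (week number, 'ATM').
    periods: how many most recent periods to compare.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for period, category in incidents:
        counts[category][period] += 1
    flagged = []
    for category, by_period in counts.items():
        recent = [by_period.get(p, 0) for p in sorted(by_period)][-periods:]
        if len(recent) == periods and all(a < b for a, b in zip(recent, recent[1:])):
            flagged.append(category)
    return flagged

# Example: ATM incidents rise over weeks 1-3, printer incidents do not.
data = [(1, "ATM"), (2, "ATM"), (2, "ATM"), (3, "ATM"), (3, "ATM"), (3, "ATM"),
        (1, "Printer"), (3, "Printer")]
print(rising_trends(data))   # ['ATM']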


2. Preventive Action Required?
Determine whether specific targeted actions need to be taken to investigate,
resolve and prevent a potential problem, based on the outcome of data
gathering and trend analysis.
(a) If it is Yes, proceed to Document Required for Preventive Action.
(b) If it is No, proceed to Review Action Plan in Regular Management
Meeting.
3. Document Required for Preventive Action.
Document the requirement for preventive action together with the trend
analysis output. Notify the affected services of emerging trends and possible
improvement areas identified by the analysis.
4. Review Action Plan in Regular Management Meeting
Review the action plan information with management at regular review
meetings to ensure that the information is understood and acted on.
5. Action Plan Required?
Does the review indicate that a further action plan is required to handle any
service issues?
(a) If it is Yes, proceed to Develop Action plan.
(b) If it is No, proceed to End.
6. Develop Action plan
Develop the required action plan.
7. Handle Action Plan Implementation for Completion
Handle the action plan implementation to monitor implementation and
completion of the action plan.
8. Return
Return to the Problem Management Process


Figure B-9 shows Open Problem Record Procedure



FIGURE B-9 Open Problem Record Flow

Narrative of Open Problem Record Procedure
1. Problem Record Already Open?
Check if a problem record has already been opened for the incident.
(a) If it is Yes, proceed to Review Open Problem Policy
(b) If it is No, proceed to Return
2. Update Problem Record that Is Already Open
Update the problem record to indicate that the problem is already open.
3. Review Open Problem Policy
Review the Open Problem policy, in particular the details for items such as:
(a) Who is authorized to open problem records?
(b) What information is required when opening a problem

4. Open Problem Record
Open a problem record for the problem with required information.
The information required to open a problem record is as follows:
(a) Incident details gathered and recorded in the incident record
(b) Associated incidents
5. Multiple Incidents?
Determine whether the problem is associated with multiple incidents.
6. Coordinate Incident to Problem Record
Coordinate the incident(s) with the problem record.
7. Gather Required Information
Gather required information based on policy to complete the problem record
8. Entitle?
Follow the policy to determine if the problem requester is entitled to raise
this problem.
9. Document Entitle failure Detail
If the Requester was not entitled to raise this problem, document the details
of the entitlement failure in preparation for handling service entitlement
failure.
10. Handle Service Entitlement Failure
Handle Service Entitlement Failure resolves entitlement failures for
requested services and updates request records to reflect the disposition of
those failures. The problem shall be checked against the service contracts,
in particular the IT outsourcing contract. An alternative entitlement may be
proposed with authorized approval.
11. Continue?
Determine if the decision was made in the handle service entitlement failure
to continue with the problem.
12. Match Severity to Incident
Match the problem severity to the problem based on the severity definition.
Proceed to Return.
13. Return
Return to the Problem Management Process

Figure B-10 shows Perform Root Cause Analysis Procedure



FIGURE B-10 Perform Root Cause Analysis Flow

Narrative of Perform Root Cause Analysis
1. Assign RCA Owner
Assign an ownership for the Root Cause Analysis. The owner is responsible
for managing the Root Cause Analysis through its completion.
2. Gather Problem Related RCA
Gather all available problem data related to RCA, including:
(a) The problem record
(b) Any details about associated service outage



Steps 3 through 5 and Steps 6 through 8 are performed in parallel.
3. Analyse Problem
Analyze the problem data. In particular, look for common:
(a) Symptoms, patterns of occurrence, user environments, etc.
(b) Exception events
4. Identify Contributing Factors
Based on the problem data analysis, identify any factors that contributed to
the problem.
5. Determine Probable Cause
Choose the most likely problem cause or causes from the contributing
factors.
Proceed to Analysis Complete?
6. Monitor RCA
Monitor the progress of the Root Cause Analysis to ensure that it is on
schedule.
7. Action Required?
Determine if any action is required to complete the Root Cause Analysis.
(a) If it is Yes, proceed to Take Appropriate Actions.
(b) If it is No, proceed to Analysis Complete?
8. Take Appropriate Actions
Take whatever actions are necessary to complete the Root Cause Analysis on
schedule.
Return to Monitor Root Cause Analysis to continue to monitor the progress
of the Root Cause Analysis.
9. Analysis Complete?
Determine if the Root Cause Analysis has been completed.
(a) If it is Yes, proceed to Document Final RCA Result
(b) If it is No, proceed to Prepare Interim RCA Result
10. Prepare Interim RCA Result
If the analysis is not yet complete, prepare an interim report that documents
the Root Cause Analysis findings to date.
Return in parallel to Analyze Problem and Monitor RCA to complete the
analysis.

11. Document Final RCA Result
If the analysis is complete, document the results of the Root Cause Analysis.
Include findings from the problem data analysis, explanations of
contributing factors, and an indication of the probable cause(s).
12. Review RCA with Appropriate Parties
Review the Root Cause Analysis results with the appropriate parties; for
example, the Problem Coordinator and all affected service owners.
13. Result Accepted?
Determine if the Root Cause Analysis results were accepted.
(a) If it is Yes, proceed to Root Cause Found?
(b) If it is No, return in parallel to Analyze Problem and Monitor RCA
to repeat the Root Cause Analysis.
14. Root Cause Found?
Determine if a root cause of a problem was found.
(a) If it is Yes, proceed to Update Final RCA Results to Knowledge Database
(b) If it is No, proceed to Update Problem Record with Current Status
15. Update Final RCA Results to Knowledge Database
Update the root cause analysis result to the knowledge database. Based on the
knowledge database update policy, the database may be updated to reflect the
RCA results for all problems and negative process trends.
16. Update Problem Record with Current Status
Update the problem record with the current status of the problem; either:
(a) Root cause of the problem identified
(b) No root cause found
Proceed to Return.
17. Notify RCA Result to Appropriate Parties
Follow the notification policy to notify the appropriate parties of the RCA
results, in particular the service accounts to which the RCA is applicable.
Proceed to Return.
18. Return
Return to either the Problem Management Process or the Develop Permanent
Resolution Plan procedure.

Figure B-11 shows Assign Problem to Appropriate Problem Resolver Procedure



FIGURE B-11 Assign Problem to Appropriate Problem Resolver Flow

Narrative of Assign Problem to Appropriate Problem Resolver
1. Review Problem Record
Review the problem record to determine whom it should be assigned to.
2. Correct Assignment?
Determine if the problem was initially assigned to the correct Resolver Group
when the problem was opened.
(a) If it is Yes, proceed to Assign Problem to Problem Resolver.
(b) If it is No, proceed to Indicate Request Type.
3. Indicate Request Type
If the problem was initially assigned to the wrong resolver, indicate the
request problem type and, if known, the details of whom the problem should
be reassigned to in preparation for calling the reassign request.
4. Request for Reassignment
Request for reassignment, assign the problem to the most appropriate resolver.
Proceed to Review Problem Record
5. Assign Problem to Problem Resolver
Assign problem to the problem resolver based on skill level and availability.


6. Update Problem Record with Current Status
Update the problem record to indicate that the problem has been assigned to
an appropriate problem resolver and is awaiting problem analysis and
development of a permanent resolution plan.
7. Return
Return to the Problem Management Process

Figure B-12 shows Developing Permanent Resolution Plan Procedure



FIGURE B-12 Developing Permanent Resolution Plan

Narrative of Developing Permanent Resolution Plan
1. Review Associated incident and Related Configuration Items (CIs)
Review all recorded available data about the incident(s), symptoms, severity
and associated configuration items based on component or application or
network categorization.
2. Identify Any Related Occurrences
Identify any related occurrences of the problem and analyze similar
problems, comparing the problem to the database of records to determine if
this is a repeat occurrence of a previous problem or known error.
3. RCA Required?
Determine if a Root Cause Analysis is required for the problem.
(a) If it is Yes, proceed to Perform root Cause Analysis Procedure
(b) If it is No, proceed to Investigate Possible Solution
4. Perform Root Cause Analysis Procedure
If a RCA is required, proceed to the Perform Root Cause Analysis procedure
to determine the most likely cause of the problem.
5. Investigate Possible Solutions
Investigate possible permanent solutions for the problem. It may search and
select potential resolution from the Knowledge Database.
6. Potential Resolution Identified?
Determine if any potential resolutions were identified.
(a) If it is Yes, proceed to Select Resolution.
(b) If it is No, proceed to Update Problem Record to be Closed without
any Resolution.
7. Update Problem Record to be Closed without Any Resolution
If no potential resolution was identified, update the problem record to
indicate that the problem will be closed due to the lack of a known error or
possible resolution.
8. Select Resolution
If potential resolutions were found, select what appears to be the best
permanent solution for the problem.


9. Finalize Resolution
Finalize the selected resolution.
10. Develop Resolution Plan and Test Resolution Plan
Develop the permanent resolution plan and test it before implementation.
11. Review Resolution Plan with Appropriate Parties
Review the resolution plan with the appropriate parties.
12. Issue Occurred?
Determine whether any issue occurred during the review of the resolution plan.
(a) If it is Yes, proceed to Document Issue
(b) If it is No, proceed to Update Problem Record with Current Status
13. Document Issue
Document the issue that occurred during the review.
14. Issue Resolved?
Determine whether the documented issue has been resolved.
(a) If it is Yes, proceed to Update Problem Record with Current Status
(b) If it is No, return to Review Resolution Plan with Appropriate Parties
15. Update Problem Record with Current Status
If the Permanent Resolution Plan is acceptable, update the problem record to
indicate that the solution is ready to be implemented to permanently resolve
the problem. Change the status of the problem to Known Error.
Proceed to Return.
16. Return
Return to the Problem Management Process







Figure B-13 shows Implement Permanent Resolution Plan Procedure



FIGURE B-13 Implement Permanent Resolution Plan Flow

Narrative of Implement Permanent Resolution Plan
1. Initiate Resolution Plan
Initiating the Permanent Resolution Plan involves two parallel procedures:
(a) Implementation: performed by external operational processes
(b) Coordination: performed by the Problem Resolver to monitor the overall
execution of the Permanent Resolution Plan and to record the
implementation results.
2. Monitor Resolution plan Implementation
Monitor the implementation of the Permanent Resolution Plan against the
target schedule.



3. Adjustment Required?
Determine if any adjustment to the Permanent Resolution Plan is needed to
ensure resolution of the problem in known error status within committed
service levels.
(a) If it is Yes, proceed to Adjust Resolution Plan
(b) If it is No, proceed to Implement Resolution Plan
4. Adjust Resolution Plan
If adjustments to the Permanent Resolution Plan are needed to resolve the
problem in known error status within committed service levels, escalate to the
implementers as required to apply corrective action and adjust the plan
accordingly.
5. Review Resolution Plan Adjustment with Appropriate Resolver
Coordinate the adjusted plan with all affected resolvers to review the
resolution plan adjustment.
6. Update Problem Record with Adjusted Resolution Plan Details
Update the problem record with details of the modified Permanent
Resolution Plan.
7. Implement Resolution Plan
Perform Implementation of Resolution Plan to continue with the resolution
of the problem in known error status.
8. Implement Complete?
Determine if implementation of the solution is complete.
(a) If it is Yes, proceed to Successful?
(b) If it is No, proceed to Update Problem Record with Implemented
Resolution Unsuccessful
9. Successful?
Determine whether the implemented resolution was successful.
(a) If it is Yes, proceed to Update Problem Record with Implemented
Resolution Successful
(b) If it is No, proceed to Update Problem Record with Implemented
Resolution Unsuccessful

10. Update Problem Record with Implemented Resolution Unsuccessful
If the problem was not resolved, update the problem record to indicate that
the Permanent Resolution Plan was not successful.
Note: The problem remains in known error status until it is permanently
fixed by a change.
11. Update Problem Record with Implemented Resolution Successful
If the problem was resolved successfully, update the problem record to
indicate that the problem in known error status has been resolved. Be sure to
enter the resolution date and time. The record should include brief details of the
resolution so that these are available to assist with future incident and
problem investigation and diagnosis.
12. Notify Appropriate Parties
Notify the Requester, the Problem Coordinator, affected service owners, and
the customer-designated problem liaison of the outcome of implementing the
Permanent Resolution Plan, following the notification policy.
Proceed to Return.
13. Return
Return to the Problem Management Process


Figure B-14 shows Close Problem Record Procedure



FIGURE B-14 Close Problem Record Flow

Narrative of Close Problem Procedure
1. Review Close Problem Policy

Review the Close Problem policy for the account. The policy shall define:
(a) Who can close problem records
(b) Required closure concurrence, if any
(c) Required notifications, if any
2. Closure Concurrence Required?
Follow the policy to determine if concurrence to close the problem is
required.
(a) If it is Yes, proceed to Obtain Closure Concurrence from
Appropriate Parties.
(b) If it is No, proceed to Close Problem Record.



3. Obtain Closure Concurrence from Appropriate Parties
If concurrence to close the problem is required, follow the Close Problem
policy to obtain concurrence from the appropriate parties.
4. Concurrence Obtained?
Determine if concurrence to close the problem was obtained from all
appropriate parties.
(a) If it is Yes, proceed to Close Problem Record.
(b) If it is No, proceed to Document Closure Issue.
5. Close Problem Record
Close the problem record. Ensure that the problem record contains all the
required information, including the closing status, code and recovery, and
resolution dates and times.
6. Notification Required?
Follow the Notification policy to determine if notification is required that
the problem has been closed.
(a) If it is Yes, proceed to Notify Appropriate Parties.
(b) If it is No, proceed to Return.
7. Notify Appropriate Parties
If notification is required, follow the Notification policy to notify the
appropriate parties that the problem has been closed and of its closing status:
(a) Problem Coordinator
(b) Requester/ User
(c) Designated customer problem liaison
8. Return
Return to the Problem Management Process
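
As a rough illustration of the information these procedures expect a problem record to carry (known error status, details of the Permanent Resolution Plan, the resolution outcome, recovery and resolution dates and times, the closing status code, and the notified parties), the record could be sketched as follows. The field names are hypothetical and are not prescribed by the procedures; this is an illustrative sketch only.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ProblemRecord:
    # Hypothetical field names; the procedures only describe the information itself.
    problem_id: str
    status: str = "known error"            # remains "known error" until fixed by a change
    resolution_plan: str = ""              # details of the (adjusted) Permanent Resolution Plan
    resolution_successful: Optional[bool] = None
    resolution_details: str = ""           # brief details to assist future investigation and diagnosis
    recovery_time: Optional[datetime] = None
    resolution_time: Optional[datetime] = None
    closing_status_code: Optional[str] = None
    notified_parties: List[str] = field(default_factory=list)

    def close(self, code: str, when: datetime) -> None:
        """Close the record once all required information is present (step 5 above)."""
        self.closing_status_code = code
        self.resolution_time = self.resolution_time or when
        self.status = "closed"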














APPENDIX C

SIMULATION MODELS AND SIMULATION RESULTS


C-1 Simulation Model of Typical IT Service Desk System
A simulation model of the typical IT service desk system is shown in Figure C-1.

(Figure C-1 diagram: IT Incident Call Arrivals → Assign IT Incident Ticket → Assign Severity branch (0.303% / 95.753% / 3.238% / Else) → Assign Severity 1–4 → Resolving Severity 1–4 → Ticket Severity 1–4 Resolved)


FIGURE C-1 Simulation Model for IT Service Desk System

The details of the simulation model are described by the following SIMAN code:


;
;
; Model statements for module: BasicProcess.Create 1 (IT Incident Call
Arrivals)
;

14$ CREATE, 1,MinutesToBaseTime(0.0),Entity
1:MinutesToBaseTime(WEIB( 3.64, 0.903 )):NEXT(15$);

15$ ASSIGN: IT Incident Call Arrivals.NumberOut=IT Incident
Call Arrivals.NumberOut + 1:NEXT(9$);


;
;
; Model statements for module: BasicProcess.Assign 1 (Assign IT
Incident Ticket)
;
9$ ASSIGN: Picture=Picture.Ball:NEXT(0$);





;
;
; Model statements for module: BasicProcess.Decide 1 (Assign Severity)
;
0$ BRANCH, 1:
With,(0.303)/100,10$,Yes:
With,(95.753)/100,11$,Yes:
With,(3.238)/100,12$,Yes:
Else,13$,Yes;

;
;
; Model statements for module: BasicProcess.Assign 5 (Assign Servirity
1)
;
13$ ASSIGN: Entity.Type=Severity 1:
Picture=Picture.Red Ball:
S1 resolving time=LOGN(2.37, 4.74):
S1 time arrival=TNOW:NEXT(1$);


;
;
; Model statements for module: BasicProcess.Process 1 (Resolving
Severity 1)
;
1$ ASSIGN: Resolving Severity 1.NumberIn=Resolving
Severity 1.NumberIn + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP+1;
23$ QUEUE, Resolving Severity 1.Queue;
22$ SEIZE, 1,VA:
Resource 1,1:NEXT(21$);

21$ DELAY: MinutesToBaseTime(S1 resolving time),,VA;
20$ RELEASE: Resource 1,1;
68$ ASSIGN: Resolving Severity 1.NumberOut=Resolving
Severity 1.NumberOut + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP-1:NEXT(8$);


;
;
; Model statements for module: BasicProcess.Dispose 4 (Ticket Severity
1 Resolved)
;
8$ ASSIGN: Ticket Severity 1 Resolved.NumberOut=Ticket
Severity 1 Resolved.NumberOut + 1;
71$ DISPOSE: Yes;


;
;
; Model statements for module: BasicProcess.Assign 2 (Assign Servirity
4)
;
10$ ASSIGN: Entity.Type=Severity 4:
Picture=Picture.Green Ball:
S4 time arrival=TNOW:
S4 resolving
time=144*BETA(0.248,1.27):NEXT(4$);




;
;
; Model statements for module: BasicProcess.Process 4 (Resolving
Severity 4)
;
4$ ASSIGN: Resolving Severity 4.NumberIn=Resolving
Severity 4.NumberIn + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP+1;
75$ QUEUE, Resolving Severity 4.Queue;
74$ SEIZE, 3,VA:
Resource 1,1:NEXT(73$);

73$ DELAY: MinutesToBaseTime(S4 resolving time),,VA;
72$ RELEASE: Resource 1,1;
120$ ASSIGN: Resolving Severity 4.NumberOut=Resolving
Severity 4.NumberOut + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP-1:NEXT(5$);


;
;
; Model statements for module: BasicProcess.Dispose 1 (Ticket Severity
4 Resolved)
;
5$ ASSIGN: Ticket Severity 4 Resolved.NumberOut=Ticket
Severity 4 Resolved.NumberOut + 1;
123$ DISPOSE: Yes;


;
;
; Model statements for module: BasicProcess.Assign 3 (Assign Servirity
3)
;
11$ ASSIGN: S3 resolving time T2=LOGN(7.87, 11.1):
S3 resolving time T1=WEIB(5.94, 0.67):
Entity.Type=Severity 3:
Picture=Picture.Blue Ball:
S3 time arrival=TNOW:NEXT(3$);


;
;
; Model statements for module: BasicProcess.Process 3 (Resolving
Severity 3)
;
3$ ASSIGN: Resolving Severity 3.NumberIn=Resolving
Severity 3.NumberIn + 1:
Resolving Severity 3.WIP=Resolving Severity
3.WIP+1;
127$ QUEUE, Resolving Severity 3.Queue;
126$ SEIZE, 2,VA:
Resource 1,1:NEXT(125$);

125$ DELAY: MinutesToBaseTime(S3 resolving time T2),,VA;
124$ RELEASE: Resource 1,1;
172$ ASSIGN: Resolving Severity 3.NumberOut=Resolving
Severity 3.NumberOut + 1:
Resolving Severity 3.WIP=Resolving Severity
3.WIP-1:NEXT(6$);




;
;
; Model statements for module: BasicProcess.Dispose 2 (Ticket Severity
3 Resolved)
;
6$ ASSIGN: Ticket Severity 3 Resolved.NumberOut=Ticket
Severity 3 Resolved.NumberOut + 1;
175$ DISPOSE: Yes;


;
;
; Model statements for module: BasicProcess.Assign 4 (Assign Servirity
2)
;
12$ ASSIGN: Picture=Picture.Yellow Ball:
Entity.Type=Severity 2:
S2 time arrival=TNOW:
S2 resolving time=LOGN(4.61, 9.4):NEXT(2$);


;
;
; Model statements for module: BasicProcess.Process 2 (Resolving
Severity 2)
;
2$ ASSIGN: Resolving Severity 2.NumberIn=Resolving
Severity 2.NumberIn + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP+1;
179$ QUEUE, Resolving Severity 2.Queue;
178$ SEIZE, 1,VA:
Resource 1,1:NEXT(177$);

177$ DELAY: MinutesToBaseTime(S2 resolving time),,VA;
176$ RELEASE: Resource 1,1;
224$ ASSIGN: Resolving Severity 2.NumberOut=Resolving
Severity 2.NumberOut + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP-1:NEXT(7$);


;
;
; Model statements for module: BasicProcess.Dispose 3 (Ticket Severity
2 Resolved)
;
7$ ASSIGN: Ticket Severity 2 Resolved.NumberOut=Ticket
Severity 2 Resolved.NumberOut + 1;
227$ DISPOSE: Yes;
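
For orientation, the input distributions of this model (Weibull inter-arrival times, the Assign Severity branch percentages, and the severity-specific resolving-time distributions) can also be sampled outside Arena. The following Python sketch is illustrative only: it assumes Arena's WEIB(scale, shape), LOGN(mean, standard deviation of the lognormal itself) and BETA(shape1, shape2) argument conventions, and it samples the distributions without modelling the queues or the shared Resource 1.

import numpy as np

rng = np.random.default_rng(42)

def lognormal(mean, sd, size=None):
    # Convert an Arena-style LOGN(mean, sd) of the lognormal itself to numpy's
    # underlying-normal parameters (an assumption about the parameterisation).
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size)

def resolving_time(severity):
    # Resolving time in minutes, matching the ASSIGN blocks of the listing above.
    if severity == 1:
        return lognormal(2.37, 4.74)
    if severity == 2:
        return lognormal(4.61, 9.4)
    if severity == 3:
        return lognormal(7.87, 11.1)            # S3 resolving time T2
    return 144 * rng.beta(0.248, 1.27)          # severity 4

def sample_calls(n_calls=3600):
    # Severity mix from the Assign Severity BRANCH block.
    p4, p3, p2 = 0.00303, 0.95753, 0.03238
    severities = rng.choice([4, 3, 2, 1], size=n_calls, p=[p4, p3, p2, 1 - p4 - p3 - p2])
    interarrivals = 3.64 * rng.weibull(0.903, n_calls)   # WEIB(3.64, 0.903) minutes
    times = np.array([resolving_time(s) for s in severities])
    return severities, interarrivals, times

sev, gaps, res = sample_calls()
for s in (1, 2, 3, 4):
    print(f"Severity {s}: n = {np.sum(sev == s)}, mean resolving time = {res[sev == s].mean():.2f} min")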



C-2 Simulation Model of KMRCA IT Service Desk System
A simulation model of the KMRCA IT service desk system is shown in Figure C-2.
(Figure C-2 diagram: IT Incident Call Arrivals → Assign IT Incident Ticket → Assign Severity branch (0.303% / 95.753% / 3.238% / Else) → Assign Severity 1–4 → Resolving Severity 1, 2 and 4, and Resolving Severity 3 by Factor A, Factor B and Factor C → Ticket Severity 1–4 Resolved)


FIGURE C-2 Simulation Model of KMRCA IT Service Desk System

The SIMAN code of the simulation model is as follows:
;
;
; Model statements for module: BasicProcess.Create 1 (IT Incident Call
Arrivals)
;

16$ CREATE, 1,MinutesToBaseTime(0.0),Entity
1:MinutesToBaseTime(WEIB( 3.16, 0.903)):NEXT(17$);

17$ ASSIGN: IT Incident Call Arrivals.NumberOut=IT Incident
Call Arrivals.NumberOut + 1:NEXT(9$);

;
;
; Model statements for module: BasicProcess.Assign 1 (Assign IT
Incident Ticket)
;
9$ ASSIGN: Picture=Picture.Ball:NEXT(0$);

;
; Model statements for module: BasicProcess.Decide 1 (Assign Severity)
;
0$ BRANCH, 1:
With,(0.303)/100,10$,Yes:
With,(95.753)/100,11$,Yes:
With,(3.238)/100,12$,Yes:
Else,13$,Yes;

;
; Model statements for module: BasicProcess.Assign 5 (Assign Servirity
1)
;
13$ ASSIGN: Entity.Type=Severity 1:
Picture=Picture.Red Ball:
S1 resolving time=LOGN(2.37, 4.74):
S1 time arrival=TNOW:NEXT(1$);

;
;
; Model statements for module: BasicProcess.Process 1 (Resolving
Severity 1)
;
1$ ASSIGN: Resolving Severity 1.NumberIn=Resolving
Severity 1.NumberIn + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP+1;
51$ STACK, 1:Save:NEXT(25$);
25$ QUEUE, Resolving Severity 1.Queue;
24$ SEIZE, 1,VA:
Resource 1,1:NEXT(23$);
23$ DELAY: S1 resolving time,,VA:NEXT(66$);
66$ ASSIGN: Resolving Severity 1.WaitTime=Resolving
Severity 1.WaitTime + Diff.WaitTime;
30$ TALLY: Resolving Severity
1.WaitTimePerEntity,Diff.WaitTime,1;
32$ TALLY: Resolving Severity
1.TotalTimePerEntity,Diff.StartTime,1;
56$ ASSIGN: Resolving Severity 1.VATime=Resolving Severity
1.VATime + Diff.VATime;
57$ TALLY: Resolving Severity
1.VATimePerEntity,Diff.VATime,1;
22$ RELEASE: Resource 1,1;
71$ STACK, 1:Destroy:NEXT(70$);
70$ ASSIGN: Resolving Severity 1.NumberOut=Resolving
Severity 1.NumberOut + 1:
Resolving Severity 1.WIP=Resolving Severity
1.WIP-1:NEXT(8$);
;
;
; Model statements for module: BasicProcess.Dispose 4 (Ticket Severity
1 Resolved)
;
8$ ASSIGN: Ticket Severity 1 Resolved.NumberOut=Ticket
Severity 1 Resolved.NumberOut + 1;
73$ DISPOSE: Yes;

;
;
; Model statements for module: BasicProcess.Assign 2 (Assign Servirity
4)
;
10$ ASSIGN: Entity.Type=Severity 4:
Picture=Picture.Green Ball:
S4 time arrival=TNOW:
S4 resolving
time=144*BETA(0.248,1.27):NEXT(4$);

;
;
; Model statements for module: BasicProcess.Process 4 (Resolving
Severity 4)
;

4$ ASSIGN: Resolving Severity 4.NumberIn=Resolving
Severity 4.NumberIn + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP+1;
103$ STACK, 1:Save:NEXT(77$);

77$ QUEUE, Resolving Severity 4.Queue;
76$ SEIZE, 3,VA:
Resource 1,1:NEXT(75$);

75$ DELAY: S4 resolving time,,VA:NEXT(118$);

118$ ASSIGN: Resolving Severity 4.WaitTime=Resolving
Severity 4.WaitTime + Diff.WaitTime;
82$ TALLY: Resolving Severity
4.WaitTimePerEntity,Diff.WaitTime,1;
84$ TALLY: Resolving Severity
4.TotalTimePerEntity,Diff.StartTime,1;
108$ ASSIGN: Resolving Severity 4.VATime=Resolving Severity
4.VATime + Diff.VATime;
109$ TALLY: Resolving Severity
4.VATimePerEntity,Diff.VATime,1;
74$ RELEASE: Resource 1,1;
123$ STACK, 1:Destroy:NEXT(122$);

122$ ASSIGN: Resolving Severity 4.NumberOut=Resolving
Severity 4.NumberOut + 1:
Resolving Severity 4.WIP=Resolving Severity
4.WIP-1:NEXT(5$);

;
; Model statements for module: BasicProcess.Dispose 1 (Ticket Severity
4 Resolved)
;
5$ ASSIGN: Ticket Severity 4 Resolved.NumberOut=Ticket
Severity 4 Resolved.NumberOut + 1;
125$ DISPOSE: Yes;

;
; Model statements for module: BasicProcess.Assign 3 (Assign Servirity
3)
;
11$ ASSIGN: S3 resolving time T2=TRIA(2,3,4.5):
S3 resolving time T3=2.4:
S3 resolving time T1=1.2:
Entity.Type=Severity 3:
Picture=Picture.Blue Ball:
S3 time arrival=TNOW:NEXT(3$);
;
; Model statements for module: BasicProcess.Process 3 (Resolving
Severity 3 by Factor A)
;
3$ ASSIGN: Resolving Severity 3 by Factor
A.NumberIn=Resolving Severity 3 by Factor A.NumberIn + 1:
Resolving Severity 3 by Factor A.WIP=Resolving
Severity 3 by Factor A.WIP+1;
155$ STACK, 1:Save:NEXT(129$);

129$ QUEUE, Resolving Severity 3 by Factor A.Queue;
128$ SEIZE, 2,VA:
Resource 1,1:NEXT(127$);
127$ DELAY: S3 resolving time T1,,VA:NEXT(170$);
170$ ASSIGN: Resolving Severity 3 by Factor
A.WaitTime=Resolving Severity 3 by Factor A.WaitTime + Diff.WaitTime;

134$ TALLY: Resolving Severity 3 by Factor
A.WaitTimePerEntity,Diff.WaitTime,1;
136$ TALLY: Resolving Severity 3 by Factor
A.TotalTimePerEntity,Diff.StartTime,1;
160$ ASSIGN: Resolving Severity 3 by Factor
A.VATime=Resolving Severity 3 by Factor A.VATime + Diff.VATime;
161$ TALLY: Resolving Severity 3 by Factor
A.VATimePerEntity,Diff.VATime,1;
126$ RELEASE: Resource 1,1;
175$ STACK, 1:Destroy:NEXT(174$);
174$ ASSIGN: Resolving Severity 3 by Factor
A.NumberOut=Resolving Severity 3 by Factor A.NumberOut + 1:
Resolving Severity 3 by Factor A.WIP=Resolving
Severity 3 by Factor A.WIP-1:NEXT(14$);
;
; Model statements for module: BasicProcess.Process 5 (Resolving
Severity 3 by Factor B)
;
14$ ASSIGN: Resolving Severity 3 by Factor
B.NumberIn=Resolving Severity 3 by Factor B.NumberIn + 1:
Resolving Severity 3 by Factor B.WIP=Resolving
Severity 3 by Factor B.WIP+1;
206$ STACK, 1:Save:NEXT(180$);
180$ QUEUE, Resolving Severity 3 by Factor B.Queue;
179$ SEIZE, 2,VA:
Resource 1,1:NEXT(178$);
178$ DELAY: S3 resolving time T2,,VA:NEXT(221$);
221$ ASSIGN: Resolving Severity 3 by Factor
B.WaitTime=Resolving Severity 3 by Factor B.WaitTime + Diff.WaitTime;
185$ TALLY: Resolving Severity 3 by Factor
B.WaitTimePerEntity,Diff.WaitTime,1;
187$ TALLY: Resolving Severity 3 by Factor
B.TotalTimePerEntity,Diff.StartTime,1;
211$ ASSIGN: Resolving Severity 3 by Factor
B.VATime=Resolving Severity 3 by Factor B.VATime + Diff.VATime;
212$ TALLY: Resolving Severity 3 by Factor
B.VATimePerEntity,Diff.VATime,1;
177$ RELEASE: Resource 1,1;
226$ STACK, 1:Destroy:NEXT(225$);
225$ ASSIGN: Resolving Severity 3 by Factor
B.NumberOut=Resolving Severity 3 by Factor B.NumberOut + 1:
Resolving Severity 3 by Factor B.WIP=Resolving
Severity 3 by Factor B.WIP-1:NEXT(15$);
;
;
; Model statements for module: BasicProcess.Process 6 (Resolving
Severity 3 by Factor C)
;
15$ ASSIGN: Resolving Severity 3 by Factor
C.NumberIn=Resolving Severity 3 by Factor C.NumberIn + 1:
Resolving Severity 3 by Factor C.WIP=Resolving
Severity 3 by Factor C.WIP+1;
257$ STACK, 1:Save:NEXT(231$);

231$ QUEUE, Resolving Severity 3 by Factor C.Queue;
230$ SEIZE, 2,VA:
Resource 1,1:NEXT(229$);

229$ DELAY: S3 resolving time T3,,VA:NEXT(272$);

272$ ASSIGN: Resolving Severity 3 by Factor
C.WaitTime=Resolving Severity 3 by Factor C.WaitTime + Diff.WaitTime;
236$ TALLY: Resolving Severity 3 by Factor
C.WaitTimePerEntity,Diff.WaitTime,1;

238$ TALLY: Resolving Severity 3 by Factor
C.TotalTimePerEntity,Diff.StartTime,1;
262$ ASSIGN: Resolving Severity 3 by Factor
C.VATime=Resolving Severity 3 by Factor C.VATime + Diff.VATime;
263$ TALLY: Resolving Severity 3 by Factor
C.VATimePerEntity,Diff.VATime,1;
228$ RELEASE: Resource 1,1;
277$ STACK, 1:Destroy:NEXT(276$);
276$ ASSIGN: Resolving Severity 3 by Factor
C.NumberOut=Resolving Severity 3 by Factor C.NumberOut + 1:
Resolving Severity 3 by Factor C.WIP=Resolving
Severity 3 by Factor C.WIP-1:NEXT(6$);
;
; Model statements for module: BasicProcess.Dispose 2 (Ticket Severity
3 Resolved)
;
6$ ASSIGN: Ticket Severity 3 Resolved.NumberOut=Ticket
Severity 3 Resolved.NumberOut + 1;
279$ DISPOSE: Yes;
;
; Model statements for module: BasicProcess.Assign 4 (Assign Servirity
2)
;
12$ ASSIGN: Picture=Picture.Yellow Ball:
Entity.Type=Severity 2:
S2 time arrival=TNOW:
S2 resolving time=LOGN(4.61, 9.4):NEXT(2$);
;
; Model statements for module: BasicProcess.Process 2 (Resolving
Severity 2)
;
2$ ASSIGN: Resolving Severity 2.NumberIn=Resolving
Severity 2.NumberIn + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP+1;
309$ STACK, 1:Save:NEXT(283$);
283$ QUEUE, Resolving Severity 2.Queue;
282$ SEIZE, 1,VA:
Resource 1,1:NEXT(281$);
281$ DELAY: S2 resolving time,,VA:NEXT(324$);
324$ ASSIGN: Resolving Severity 2.WaitTime=Resolving
Severity 2.WaitTime + Diff.WaitTime;
288$ TALLY: Resolving Severity
2.WaitTimePerEntity,Diff.WaitTime,1;
290$ TALLY: Resolving Severity
2.TotalTimePerEntity,Diff.StartTime,1;
314$ ASSIGN: Resolving Severity 2.VATime=Resolving Severity
2.VATime + Diff.VATime;
315$ TALLY: Resolving Severity
2.VATimePerEntity,Diff.VATime,1;
280$ RELEASE: Resource 1,1;
329$ STACK, 1:Destroy:NEXT(328$);
328$ ASSIGN: Resolving Severity 2.NumberOut=Resolving
Severity 2.NumberOut + 1:
Resolving Severity 2.WIP=Resolving Severity
2.WIP-1:NEXT(7$);

;
; Model statements for module: BasicProcess.Dispose 3 (Ticket Severity
2 Resolved)
;
7$ ASSIGN: Ticket Severity 2 Resolved.NumberOut=Ticket
Severity 2 Resolved.NumberOut + 1;
331$ DISPOSE: Yes;
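
The substantive change from the typical model is the severity 3 path: instead of a single LOGN(7.87, 11.1) resolving stage, severity 3 incidents pass through three stages (Factor A = 1.2 minutes, Factor B = TRIA(2, 3, 4.5) minutes, Factor C = 2.4 minutes), and the arrival scale parameter drops from 3.64 to 3.16. Ignoring queueing and the shared resource, a rough Monte Carlo check of the severity 3 service time (illustrative only, assuming Arena's LOGN arguments are the mean and standard deviation of the lognormal itself) can be written as:

import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Typical model: single LOGN(7.87, 11.1) stage, converted to numpy's parameters.
sigma2 = np.log(1 + (11.1 / 7.87) ** 2)
mu = np.log(7.87) - sigma2 / 2
typical = rng.lognormal(mu, np.sqrt(sigma2), n)

# KMRCA model: fixed Factor A and Factor C times plus a triangular Factor B time.
kmrca = 1.2 + rng.triangular(2.0, 3.0, 4.5, n) + 2.4

print(f"Typical severity 3 mean service time: {typical.mean():.2f} min")
print(f"KMRCA   severity 3 mean service time: {kmrca.mean():.2f} min")

The KMRCA mean works out to 1.2 + (2 + 3 + 4.5)/3 + 2.4 ≈ 6.77 minutes, consistent with the severity 3 values reported for the KMRCA system in Table C-17.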

C-3 Simulation Results for Design of Experiments
This appendix presents the simulation results used as the inputs to the design of
experiments (DOE): a 2³ full factorial run in standard order (8 runs), with 4
replications of each run. Tables C-1 through C-16 show, for the 1st to the 8th
standard orders respectively, the entity detail summary of Time (odd-numbered
tables, beginning with Table C-1) and the entity detail summary of Number of
Entities (even-numbered tables, beginning with Table C-2).
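
For reference, the eight treatment combinations of a 2³ full factorial in standard (Yates) order can be generated programmatically. The sketch below is illustrative only and uses generic factor labels A, B and C; each of the eight runs is then replicated four times as described above.

from itertools import product

levels = (-1, +1)
# In standard order, factor A varies fastest, then B, then C.
runs = [(a, b, c) for c, b, a in product(levels, levels, levels)]

for i, (a, b, c) in enumerate(runs, start=1):
    print(f"Std order {i}: A={a:+d}, B={b:+d}, C={c:+d}")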

TABLE C-1 Entity Detail Summary of Time by 1st Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        4.27       4.27       4.25       4.27
Severity 4       18.36      27.81      30.69      24.89
Total            29.12      40.27      41.80      36.42

TABLE C-2 Entity Detail Summary of Number of Entities by 1st Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      19       19
Severity 2       117      117     110      110     103      103     103      103
Severity 3     3,434    3,434   3,484    3,482   3,457    3,454   3,457    3,454
Severity 4         9        9      16       16       9        9       9        9
Total          3,585    3,585   3,630    3,628   3,588    3,585   3,564    3,558

Note : Nr. In is number of the input and Nr. Out is number of the output.

TABLE C-3 Entity Detail Summary of Time by 2nd Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        4.67       4.67       4.65       4.67
Severity 4       18.36      27.81      30.69      24.89
Total            29.52      40.67      42.20      36.82

TABLE C-4 Entity Detail Summary of Number of Entities by 2nd Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,434   3,484    3,480   3,457    3,454   3,410    3,404
Severity 4         9        9      16       16       9        9      16       16
Total          3,585    3,585   3,630    3,626   3,588    3,585   3,564    3,558

Note : Nr. In is number of the input and Nr. Out is number of the output.

TABLE C-5 Entity Detail Summary of Time by 3rd Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        5.57       5.57       5.55       5.57
Severity 4       18.36      27.81      30.69      24.89
Total            30.42      41.57      43.10      37.72

TABLE C-6 Entity Detail Summary of Number of Entities by 3rd Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,433   3,484    3,470   3,457    3,453   3,410    3,402
Severity 4         9        9      16       16       9        9      16       16
Total          3,585    3,584   3,630    3,616   3,588    3,584   3,564    3,556

Note : Nr. In is number of the input and Nr. Out is number of the output.


TABLE C-7 Entity Detail Summary of Time by 4th Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        5.97       5.97       5.95       5.97
Severity 4       18.36      27.81      30.69      24.89
Total            30.82      41.97      43.50      38.12

TABLE C-8 Entity Detail Summary of Number of Entities by 4th Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,433   3,484    3,469   3,457    3,453   3,410    3,402
Severity 4         9        9      16       16       9        9      16       16
Total          3,585    3,584   3,630    3,615   3,588    3,584   3,564    3,556

Note : Nr. In is number of the input and Nr. Out is number of the output.



TABLE C-9 Entity Detail Summary of Time by 5th Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        5.17       5.17       5.15       5.17
Severity 4       18.36      27.81      30.69      24.89
Total            30.02      41.17      42.70      37.32

TABLE C-10 Entity Detail Summary of Number of Entities by 5th Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,433   3,484    3,478   3,457    3,454   3,410    3,404
Severity 4         9        9      16       16       9        9      16       16
Total          3,585    3,584   3,630    3,624   3,588    3,585   3,564    3,558

Note : Nr. In is number of the input and Nr. Out is number of the output.


TABLE C-11 Entity Detail Summary of Time by 6th Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        5.57       5.57       5.55       5.57
Severity 4       18.36      27.81      30.69      24.89
Total            30.42      41.57      43.10      37.72

TABLE C-12 Entity Detail Summary of Number of Entities by 6th Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,433   3,484    3,474   3,457    3,453   3,410    3,402
Severity 4         9        9      16       16       9        9      16       16
Total          3,585    3,584   3,630    3,620   3,588    3,584   3,564    3,556

Note : Nr. In is number of the input and Nr. Out is number of the output.


TABLE C-13 Entity Detail Summary of Time by 7th Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        6.47       6.47       6.45       6.47
Severity 4       18.36      30.42      30.69      24.89
Total            31.32      45.08      44.00      38.62

TABLE C-14 Entity Detail Summary of Number of Entities by 7th Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,433   3,484    3,437   3,457    3,452   3,410    3,401
Severity 4         9        9      16       14       9        9      16       16
Total          3,585    3,584   3,630    3,581   3,588    3,583   3,564    3,555

Note : Nr. In is number of the input and Nr. Out is number of the output.



TABLE C-15 Entity Detail Summary of Time by 8th Std Order

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        6.77       6.78       6.75       6.77
Severity 4        0.49      45.85      20.77      21.54
Total            13.75      60.82      34.38      35.58

TABLE C-16 Entity Detail Summary of Number of Entities by 8th Std Order

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,390   3,484    3,355   3,457    3,388   3,410    3,386
Severity 4         9        1      16        2       9        3      16        5
Total          3,585    3,533   3,630    3,487   3,588    3,513   3,564    3,529

Note : Nr. In is number of the input and Nr. Out is number of the output.

C-4 The Results of the Design of Experiments (DOE)
The results of the experimental design for Throughput and for Time in resolving
incidents of severity 3 are shown in Figure C-3 and Figure C-4, respectively.


FIGURE C-3 DOE Results of Throughput




FIGURE C-4 DOE Results of Time in Resolving Incidents of Severity 3
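
As a worked illustration of how the DOE effects behind Figures C-3 and C-4 can be estimated, the sketch below computes main and interaction effects from the replication 1 severity 3 resolving times of the eight standard-order runs (4.27, 4.67, 5.57, 5.97, 5.17, 5.57, 6.47 and 6.77 minutes, taken from Tables C-1 through C-15). It assumes the conventional standard-order mapping of factor levels and uses a single replication for brevity; a full analysis would average over all four replications.

import numpy as np
from itertools import product

levels = (-1, 1)
design = np.array([(a, b, c) for c, b, a in product(levels, levels, levels)])  # standard order
A, B, C = design.T

# Replication 1 severity 3 resolving times for standard orders 1-8 (minutes).
y = np.array([4.27, 4.67, 5.57, 5.97, 5.17, 5.57, 6.47, 6.77])

def effect(signs):
    # Average response at the high level minus average response at the low level.
    return y[signs == 1].mean() - y[signs == -1].mean()

for name, s in [("A", A), ("B", B), ("C", C),
                ("AB", A * B), ("AC", A * C), ("BC", B * C), ("ABC", A * B * C)]:
    print(f"Effect {name}: {effect(s):+.3f}")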

C-5 Simulation Results for the Comparison Test
The simulation results for the comparison test were obtained from 4 replications.
Table C-17 to Table C-20 show the entity detail summaries of Time in resolving
incidents and of Number of Entities.

TABLE C-17 KMRCA IT Service Desk; Entity Detail Summary of Time

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        2.51       2.28       2.55       1.84
Severity 2        3.98       5.91       4.31       5.43
Severity 3        6.77       6.77       6.75       6.77
Severity 4        0.49      45.85      37.95      21.54
Total            13.74      60.82      51.55      35.57

TABLE C-18 KMRCA IT Service Desk; Entity Detail Summary of Number of Entities

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        25       25      20       20      19       19      29       29
Severity 2       117      117     110      110     103      103     109      109
Severity 3     3,434    3,397   3,484    3,360   3,457    3,375   3,410    3,388
Severity 4         9        1      16        2       9        4      16        5
Total          3,585    3,540   3,630    3,492   3,588    3,501   3,564    3,531

Note : Nr. In is number of the input and Nr. Out is number of the output.


TABLE C-19 Typical IT Service Desk; Entity Detail Summary of Time

                      Time in Resolving Incidents (minutes)
                 Rep 1      Rep 2      Rep 3      Rep 4
Severity 1        1.61       1.15       2.15       2.75
Severity 2        5.92       4.97       4.99       4.26
Severity 3        7.28       6.96       7.61       7.11
Severity 4       18.99      22.11      24.58      25.22
Total            33.79      35.19      39.33      39.34

TABLE C-20 Typical IT Service Desk; Entity Detail Summary of Number of Entities

                   Rep 1            Rep 2            Rep 3            Rep 4
              Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out  Nr. In  Nr. Out
Severity 1        27       27      20       20      21       21      21       21
Severity 2       100      100     105      104     117      111      88       88
Severity 3     3,017    2,994   2,898    2,889   3,085    2,979   2,986    2,973
Severity 4        10        9       7        7      10        5       9        9
Total          3,154    3,130   3,030    3,020   3,233    3,116   3,104    3,091

Note : Nr. In is number of the input and Nr. Out is number of the output.

C-6 Summary of Comparison Test Results
The statistical t-test results comparing the KMRCA IT service desk and the typical
IT service desk on the significant variables are shown in Table C-21.

TABLE C-21 Summary of Comparison Test Results
Replication KMRCA Typical S1-T S1-K S2-T S2-K S3-T S3-K S4-T S4-K
1 3,540 3,130 1.61 2.51 5.92 3.98 7.29 6.77 18.99 0.49
2 3,492 3,020 1.15 2.28 4.97 5.91 6.97 6.77 22.11 45.85
3 3,501 3,116 2.15 2.55 4.95 4.31 7.63 6.75 24.58 37.95
4 3,531 3,091 2.75 1.84 4.26 5.43 7.10 6.77 25.22 21.54

Note that S1-T, S1-K, S2-T, ..., S4-K denote the average time in resolving incidents
of Severity 1 through Severity 4, where the suffix -T refers to the typical IT service
desk and the suffix -K refers to the KMRCA IT service desk.
The t-test results below were generated by Minitab 15.
a) Throughput; Paired T-Test and CI: KMRCA, Typical

Paired T for KMRCA - Typical

N Mean StDev SE Mean
KMRCA 4 3516.0 23.1 11.6
Typical 4 3089.3 48.9 24.5
Difference 4 426.8 37.6 18.8


95% CI for mean difference: (366.9, 486.6)
T-Test of mean difference = 0 (vs not = 0): T-Value = 22.68 P-Value = 0.000


b) Time in resolving of Severity 1; Paired T-Test and CI: S1-T, S1-K

Paired T for S1-T - S1-K

N Mean StDev SE Mean
S1-T 4 1.915 0.691 0.345
S1-K 4 2.295 0.326 0.163
Difference 4 -0.380 0.912 0.456


95% CI for mean difference: (-1.832, 1.072)
T-Test of mean difference = 0 (vs not = 0): T-Value = -0.83 P-Value = 0.466


c) Time in resolving of Severity 2; Paired T-Test and CI: S2-T, S2-K

Paired T for S2-T - S2-K

N Mean StDev SE Mean
S2-T 4 5.025 0.682 0.341
S2-K 4 4.907 0.912 0.456
Difference 4 0.118 1.457 0.729


95% CI for mean difference: (-2.201, 2.436)
T-Test of mean difference = 0 (vs not = 0): T-Value = 0.16 P-Value = 0.882


d) Time in resolving of Severity 3; Paired T-Test and CI: S3-T, S3-K

Paired T for S3-T - S3-K

N Mean StDev SE Mean
S3-T 4 7.248 0.287 0.143
S3-K 4 6.765 0.010 0.005
Difference 4 0.483 0.296 0.148


95% CI for mean difference: (0.012, 0.953)
T-Test of mean difference = 0 (vs not = 0): T-Value = 3.26 P-Value = 0.047


e) Time in resolving of Severity 4; Paired T-Test and CI: S4-T, S4-K

Paired T for S4-T - S4-K

N Mean StDev SE Mean
S4-T 4 22.7 2.8 1.4
S4-K 4 26.5 20.1 10.0
Difference 4 -3.73 18.64 9.32


95% CI for mean difference: (-33.39, 25.93)
T-Test of mean difference = 0 (vs not = 0): T-Value = -0.40 P-Value = 0.716
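
The throughput comparison above can be reproduced outside Minitab. The following SciPy sketch (illustrative only) runs the same paired t-test on the throughput values of Table C-21 and recovers the reported mean difference of about 426.8, T-Value of 22.68 and 95% confidence interval (366.9, 486.6).

import numpy as np
from scipy import stats

kmrca = np.array([3540, 3492, 3501, 3531])     # KMRCA throughput, replications 1-4
typical = np.array([3130, 3020, 3116, 3091])   # Typical throughput, replications 1-4

diff = kmrca - typical
t_value, p_value = stats.ttest_rel(kmrca, typical)

# 95% confidence interval for the mean paired difference.
se = diff.std(ddof=1) / np.sqrt(len(diff))
ci = stats.t.interval(0.95, len(diff) - 1, loc=diff.mean(), scale=se)

print(f"Mean difference = {diff.mean():.1f}, T-Value = {t_value:.2f}, P-Value = {p_value:.4f}")
print(f"95% CI for mean difference: ({ci[0]:.1f}, {ci[1]:.1f})")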









BIOGRAPHY

Name : Mr. Padej Phomasakha Na Sakolnakorn
Thesis Title : Knowledge Management System Improvement towards
Service Desk of IT Outsourcing in Banking Business
Major Field : Information Technology

Biography
Padej worked as a senior process architect at IBM Solutions Delivery Company, a
strategic IT outsourcing company, working on site at KASIKORNBANK from
April 2004 to May 2007. As a process architect, his role was to implement several
ITIL-based processes for the outsourcing operation of KASIKORNBANK, in particular the
IT service desk function within the incident management process. Before joining IBM,
from October 1996 to March 2004, he worked as a quality assurance manager at
SIAMTELTECH Computer Company, an IT system integrator focusing on the banking
business, financial institutions, and telecommunications, with customers such as CAT and TOT.
He earned a Bachelor of Engineering degree in electronics and telecommunication
engineering from King Mongkut's Institute of Technology Ladkrabang (KMITL) in 1991
and a Master of Engineering degree in industrial engineering management from
King Mongkut's Institute of Technology North Bangkok (KMITNB) in 1996. He was
certified in ITIL Foundation in 2004. Furthermore, he holds a license for professional
practice as an associate electrical engineer (telecommunication and electronics), and
he is a member of the Council of Engineers (COE) and of the Engineering Institute of
Thailand under H.M. the King's Patronage (EIT).
His research interests include IT service management (ITSM) for improving
organizational IT outsourcing, simulation studies, knowledge management systems for
the IT service desk, text mining discovery algorithms and classification, and IT disaster
recovery planning (DRP).
Padej's home address is 23/123 Ladprao Road, Chankasem, Chatujak, Bangkok 10900,
Thailand, and his email is padejp@gmail.com.
