Compact_ 2012/0, Year 39
International Edition
www.compact.nl
The Compact journal has been appearing for almost 40 years. In the Dutch-speaking territories, Compact is the leading periodical in the fields of IT auditing and IT advisory services. To make articles published in this journal available to a broader public, a number of the most important articles in the areas of IT governance, performance and compliance have been translated into English and published in this book. The articles were written by authors who are leading in their respective fields and these authors have revised and updated the articles in question to accommodate the most recent developments. The articles in this book from 2008 address the areas of IT Strategy & Governance, ERP Advisory, IT Attestation, IT Project Advisory, IT Security Services, IRM in the External Audit, and Regulatory & Compliance Services.
Year 39, number 0 Compact is published by KPMG IT Advisory and Uitgeverij kleine Uil. This magazine is published 4 times a year. The views expressed in this magazine are not the official views held by KPMG IT Advisory.
Compact magazine is produced with the utmost care. It is possible, however, that the information contained within is not completely correct due to the passage of time and/or other causes. Neither KPMG, KPMG IT Advisory, nor the editors, nor Uitgeverij kleine Uil, accept any form of liability whatsoever for any direct or indirect consequences of the use of the information provided.
J.A.M. Donkers (editor in chief) B. Beugelaar M.A. Francken J.A.M. Hermans D. Hofland M.A.P. op het Veld L.H. Westenberg
Reproduction of articles
Reproduction and circulation of articles and other textual items is allowed only with the publisher's written consent.
ISSN 0920-1645
Editorial secretariat
Marloes Janssen Jacqueline Hartman Kai Hang Ho compact@kpmg.nl
Compact_ 2012 0
Subscriptions
Administration of subscriptions
Compact Aweg 4/1 9718 cs Groningen the Netherlands Fax: +31 50 318 20 26 E-mail: info@compact.nl www.compact.nl All (temporary) changes of address must be notified at least 8 weeks before the date of issue.
Hans Donkers
Photography
Photos in this issue are placed with permission Photographers: www.sxc.hu philipn (cover, p. 1, 2) www.sxc.hu yenhoon (p. 3) www.istockphoto Ints Vikmanis (p. 14) www.sxc.hu Nico1 (p. 22) www.sxc.hu zahal (p. 30) www.sxc.hu SailorJohn (p. 37) www.sxc.hu lizerixt (p. 43) Image bank KPMG (cover III)
H.J.M. Boersen
M.J. Butterhoff
Introduction
Data centers are the nerve centers of our economy: almost all automated data processing systems are housed in them. Government and large enterprises alike are particularly dependent on these data-processing factories. A data center comprises not only the building with its technical installations, but also the IT equipment inside that is used for processing, storing and transporting data. Data centers have a useful life of ten to twenty years, while IT equipment must be replaced about every five years. The investment for a midsize data center1 is at least one hundred million euros.
S. Peekel
R. de Wolf
1 Midsize data center means a data center with a floor area of 5,000 square meters or more that is air conditioned. Large data centers, usually for IT service providers, may have tens of thousands of square meters of air-conditioned floor space.
In contrast to the long lifetime of a data center, technological developments and business objectives evolve at a rapid pace. A data center strategy must therefore focus on future requirements and on the organization's capacity to adapt to new technologies. This article discusses recent technological and market developments that significantly influence the strategic choices our clients make about the future of their data centers. We also discuss the challenges encountered on the path to the consolidation and migration of existing data centers.
The sequence of the layers indicates that each layer is necessary for the layer above it and that, ideally, technology choices for each layer can be made independently of the other layers. The free market and open standards mean that several technological solutions offering the same functionality are available for each infrastructure layer. Consider, for example, the industry standards that define form factors for IT equipment and equipment racks, standard data transport protocols such as Ethernet and TCP/IP on different platforms, storage protocols like CIFS, NFS and iSCSI, and middleware solutions, databases and applications on the various vendor platforms.

A data center includes one or more buildings with technical installations for the power and cooling of a framework of network, storage and server equipment. These devices run hundreds to thousands of software applications, such as operating systems, databases, and customized or standard software applications. The data center is connected via fast (fiber) networks to other data centers, office locations or production facilities. With decentralized IT environments, the IT equipment intended for end users or production sites must be close at hand. Given their small size and decentralized nature, we do not refer to these spaces as data centers but as Main and Satellite Equipment Rooms (MERs, SERs). The technical installations and IT infrastructure in data centers depend primarily on a reliable supply of electricity, as well as on water for cooling and fuel for emergency power supplies.
Figure 1. The infrastructure layers of a data center: applications; office automation; connection services (WAN services, campus networks and wireless user networks); decentralized IT infrastructure; server and storage hardware and virtualization; data center network and access points; electrical, cooling and air conditioning.
Technological developments
This section discusses some recent technological developments that have a significant impact on the strategic choices that our clients make about the future of their data centers.
Virtualization
The virtualization of server hardware and operating systems has a huge impact on how data centers are designed and managed. Using virtualization, multiple physical servers can be consolidated onto one powerful physical server that runs multiple operating systems, or instances of the same operating system, as logical servers in parallel. The motivation for virtualization comes from research showing that average server utilization over time is only about twenty percent, and about 7.4 percent for web servers ([Barr07], [Meis09]). The crux of virtualization is therefore to greatly increase the utilization of IT equipment, and of servers in particular.
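The consolidation effect can be made concrete with a back-of-the-envelope sketch. The server count and consolidation ratios below are illustrative assumptions for the example, not measurements:

```python
import math

# Back-of-the-envelope sketch of the consolidation effect of
# virtualization; server count and ratios are illustrative assumptions.
physical_servers = 200

for ratio in (5, 10, 25):  # logical servers per physical host
    hosts = math.ceil(physical_servers / ratio)
    print(f"1:{ratio:>2} consolidation -> {hosts:>2} hosts, "
          f"{physical_servers / hosts:.0f}x fewer machines to manage")
```

Even at the modest end of the range, the number of machines to manage drops by the factor of five to twenty mentioned in the text.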
Figure 2. Virtualization makes it possible to consolidate logical servers on one physical platform.

Figure 2 illustrates how two physical servers can be consolidated into one physical server using virtualization techniques. Virtualization greatly reduces the required number of physical servers: depending on the nature of the applications, up to twenty-five logical servers can be hosted on one physical server. The use of virtualization can substantially lower data center operational costs, because a factor of five to twenty fewer physical servers means significantly less management effort. However, this requires significant investment and migration effort. The data center strategy must weigh the magnitude of the investment in virtualization technology and in the migration of existing servers to virtual servers.

Combined with server virtualization, SANs not only allow the quick replication of data to multiple locations, but also the simple replication of virtual servers from one location to another. The article "Business continuity using Storage Area Networks" in this issue of Compact looks at SANs in depth as an alternative to tape-based data backup systems. SANs and central storage equipment are among the most expensive components of the IT infrastructure. A data center strategy should therefore weigh the investments in data storage systems against the associated qualitative and quantitative advantages.

Cloud computing

Cloud computing refers to a delivery model that provides IT infrastructure and application management services via the Internet. Cloud computing is not so much a technological development in itself; it is made possible by a combination of technological developments, including the flexible availability of network bandwidth, virtualization and SANs. The main advantages of cloud computing are the shift from investments in infrastructure to operational costs for the rental of cloud services (from capex to opex), transparency in costs (pay per use), the consumption of IT infrastructure services according to real needs (elasticity), and the high efficiency and speed with which infrastructure services are delivered (rapid deployment through fully automated management processes and self-service portals). Cloud computing differs from traditional IT with respect to the following characteristics ([Herm10]):

- multi-tenancy (IT infrastructure is shared across multiple customers)
- rental services (the use of IT resources is separated from the ownership of IT assets)
- elasticity (capacity can be immediately scaled up and down as needed)
- external storage (data is usually stored externally, at the supplier)
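The capex-to-opex shift can be illustrated with a deliberately simplified cost comparison. Every figure below is a hypothetical assumption chosen for the sake of the example, not a benchmark:

```python
# Illustrative capex-vs-opex comparison for one workload over five years.
# All amounts and rates are hypothetical assumptions.
YEARS = 5

# Traditional model: buy hardware up front, pay fixed yearly costs.
capex_hardware = 100_000          # one-off server/storage purchase (EUR)
yearly_maintenance = 15_000       # power, support contracts, admin (EUR)
traditional_total = capex_hardware + YEARS * yearly_maintenance

# Cloud model: no up-front investment, pay per actual use (elasticity).
hourly_rate = 2.50                # EUR per instance-hour (assumed)
avg_hours_per_year = 8760 * 0.40  # paid only while capacity is needed

cloud_total = YEARS * avg_hours_per_year * hourly_rate

print(f"Traditional (capex + opex): EUR {traditional_total:,.0f}")
print(f"Cloud (pure opex):          EUR {cloud_total:,.0f}")
```

The point is not the absolute numbers but the structure: in the cloud model the cost scales with actual use, which is exactly the pay-per-use and elasticity argument made above.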
A cloud computing provider must have sufficient processing, storage and transport capacity available to absorb increasing customer demand as it occurs. In practice, the maximum upscaling is limited to a percentage of the total capacity of the cloud, which places an upper limit on elasticity. Figure 3 illustrates the various forms of cloud services. The main difference between the traditional model of in-house data centers and a private cloud is the flexibility that the private cloud allows. The private cloud makes use
2 RAID is an abbreviation of Redundant Array of Independent Disks: a methodology for physically storing data on hard drives where the data is divided across disks, stored on more than one disk, or both, so as to protect against data loss and boost data retrieval speed. Source: http://nl.wikipedia.org/wiki/Redundant_Array_of_Independent_Disks.
Architecture
Figure 3. The various forms of cloud services: services are delivered to customers (A, B, C) from an internal private cloud over the organization's own network, or from an external private or public cloud via the Internet and an Internet provider.
of standardized hardware platforms with high availability and capacity, virtualization, and flexible software licensing, where operational costs partly depend on the actual use of the IT infrastructure. The private cloud is not shared with other customers and the data remains on site. In addition, access to the private cloud does not have to be via the Internet; the organization's own network infrastructure can be used. According to cloud purists, one cannot speak of cloud computing in this case. The internal private cloud uses the same technologies and delivery models as the external private and public cloud, but without the risk of primary data storage being accessed by a third party. The cost of an internal private cloud may be higher than that of the other types. Nonetheless, for many organizations, the need to meet privacy and data protection directives outweighs the potential cost savings of the external private or public cloud. The data center strategy should provide direction on when and which IT applications will be deployed via cloud services; capacity for these applications then no longer needs to be reserved in your own data centers.
For disaster recovery, organizations have invested in their own fall-back facilities or rent them from an IT service provider. The cost of these fall-back facilities is relatively high, primarily because of their extremely low utilization. The technological developments described above offer cost-effective alternatives for a disaster recovery setup. A high degree of virtualization and a fast fiber optic network between two data center locations (twin data centers) are the main ingredients for guaranteeing a high level of availability and continuity. Virtualization allows an application to run in parallel without allocating the processing capacity at the backup site that would normally be needed to run it. In a twin data center, data is synchronized 24/7 and applications several times a day. In the event of a disaster, processing capacity must be rapidly ramped up and allocated to the affected application(s) at the backup site, and users redirected accordingly.

The twin data center concept is not new. Parallel Sysplex technology from IBM has been available for decades; it allows a mainframe to be set up as a cluster of two or more mainframes at sites miles apart, operating as a single logical mainframe that synchronizes both data and processing between the locations. A twin data center also allows Unix and Windows platforms to be implemented twice without incurring double costs.
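The recovery logic described above, ramping up capacity at the backup site and redirecting users, can be sketched as follows. The names, data structures and thresholds are illustrative assumptions, not a real product API:

```python
# Illustrative sketch of twin-data-center fail-over logic; all names
# and capacity figures are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    reserved_capacity: int   # virtual machines currently allocated
    max_capacity: int

def fail_over(failed: Site, backup: Site, workload_vms: int) -> bool:
    """Ramp up capacity at the backup site and redirect the workload.

    Because data is synchronized 24/7 and applications several times
    a day, only processing capacity needs to be allocated here.
    """
    if backup.reserved_capacity + workload_vms > backup.max_capacity:
        return False                 # backup site cannot absorb the load
    backup.reserved_capacity += workload_vms
    print(f"Redirecting {workload_vms} VMs from {failed.name} to {backup.name}")
    return True

primary = Site("DC-A", reserved_capacity=80, max_capacity=100)
backup = Site("DC-B", reserved_capacity=20, max_capacity=100)
assert fail_over(primary, backup, workload_vms=60)
```

Note how the backup site only reserves capacity at the moment of fail-over, which is precisely the cost advantage over a traditional standby facility.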
High-density devices
Virtualization allows the consolidation of a large number of physical servers onto a single powerful server, whose utilization is significantly higher than that of the separate physical servers (on average eighty percent for a virtualized cluster versus twenty percent for a single server). A highly virtualized data center therefore has significantly higher processing capacity per square meter. In recent years, hardware vendors have introduced increasingly large and powerful servers, such as the IBM Power 795, Oracle Sun M8000/M9000 and HP 9000 Superdome. Over the last twenty years there was a shift from mainframe data processing to more compact servers; there now seems to be a reverse trend toward so-called high-density devices.

A direct consequence is a higher energy requirement per square meter, not just to power these servers but also to cool them. Existing data centers cannot always provide the higher power and cooling capacity required, so the available space is not optimally utilized. In addition, such systems weigh so much that the bearing capacity of data center floors is not always sufficient, and it may be necessary to strengthen the raised computer floor. The challenge for data center operators is to balance the increasing physical concentration of IT equipment through virtualization against the available power, cooling and floor capacity. The paradox is that cost-effective virtualization techniques quickly push existing data centers to their limits, giving rise to additional costs ([Data]). A data center strategy must allow for the prospect of placing high-density devices in existing or new data centers.
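The power-density arithmetic behind this paradox can be sketched as follows. The wattages, rack counts and floor area are illustrative assumptions, not vendor figures:

```python
# Rough power-density arithmetic behind the density paradox; all
# wattages and rack counts are illustrative assumptions.
floor_area_m2 = 5000              # midsize data center (see footnote 1)

# Before virtualization: many lightly loaded, low-density racks.
racks_before = 250
kw_per_rack_before = 4            # assumed low-density rack

# After consolidation onto high-density hosts: fewer, hotter racks.
racks_after = 60
kw_per_rack_after = 20            # assumed high-density rack

def density(racks: int, kw_per_rack: float) -> float:
    """IT power draw in watts per square meter of floor space."""
    return racks * kw_per_rack / floor_area_m2 * 1000

print(f"before: {density(racks_before, kw_per_rack_before):.0f} W/m2")
print(f"after:  {density(racks_after, kw_per_rack_after):.0f} W/m2")
```

With these assumed figures the rack count drops by a factor of four, yet the total power and cooling demand rises, and it is concentrated on a much smaller footprint, which is exactly the imbalance an existing facility may not be able to supply.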
3 Information Technology Infrastructure Library, usually abbreviated to ITIL, was developed as a reference framework for setting up management processes within an IT organization. http://nl.wikipedia.org/wiki/Information_Technology_Infrastructure_Library.
4 CMDB: Configuration Management Database, a collection of data in which information relating to the Configuration Items (CIs) is recorded and administered. The CMDB is the fulcrum of the ITIL management processes.
Market developments

This section discusses the future of data centers as seen by several trendsetting vendors of IT services and solutions. IT service providers such as Atos Origin define their data center vision so as to better meet the needs of their customers. Atos Origin defines the following initiatives in its data center vision ([Atos]):

- reduction in costs and faster return on investment
- quicker response to (changing) business requirements (agility)
- availability: the requirement has grown to 24/7, forever
- security and continuity: increased awareness, partly due to terrorist threats
- compliance: satisfying industry and government mandated standards
- increased density: the ability to manage high-density systems with sharply increasing energy consumption and heat production
- increased energy efficiency: utilization of more energy-efficient IT hardware and cooling techniques

Cisco's data center vision ([Cisc]) specifies increased flexibility and operational efficiency and the breaking apart of traditional application silos. Cisco names one prerequisite: the improvement of risk management and compliance processes in data centers, to guarantee the integrity and security of data in virtual environments. Cisco outlines a development path for data centers with a highly heterogeneous IT infrastructure, going through successive stages of consolidation, standardization, automation of administration and self-service, leading to cloud computing.

IBM uses modularity to increase the stability and flexibility of data centers ([IBM10]) (pay as you grow). The aim is to keep both investment and operational costs to a minimum. Reducing energy consumption is also an important theme for IBM, because much of the investment and operational cost of a data center is energy related. IBM estimates that approximately sixty percent of the investment in a data center (particularly the technical installations for cooling and redundant power supplies) and fifty to seventy-five percent of its non-personnel operating costs (power consumption by data center and IT equipment) are energy related. According to IBM, the increasing energy demands of IT equipment require data center designs that anticipate a doubling or tripling of energy needs over the lifetime of a data center.

Just like Cisco, Hewlett Packard (HP) has identified a development path for data centers ([HPDa]): a shift from application-specific IT hardware to shared services based on virtual platforms and automated management, and from there to service-oriented data centers and cloud computing. In this context, HP promotes its Data Center Transformation (DCT) concept as an integrated set of projects for consolidation, virtualization and process automation within data centers.

The common thread in these market developments is a reduction in operational costs and increased flexibility and stability of data center services, achieved by reducing the complexity of the IT infrastructure and a strong commitment to virtualization and energy-efficient technologies. Cloud computing is seen as the logical next step in the consolidation and virtualization of data centers.
tion. Nothing is further from the truth. Organizations struggle with questions such as: How do we involve the process owners in making informed decisions? Do we understand our IT infrastructure well enough to carry this out in a planned and controlled manner? How do we limit the risk of disruption during the migration? How large must the new data center be to be ready for the future? Or should we just take the step to the cloud? What are the investment costs and the expected savings of a data center consolidation path? In brief, it is not easy to prove that the benefits of data center consolidation outweigh the costs and risks. The next sections briefly discuss the challenges associated with data center consolidation and the migration of IT applications between data centers.
A typical data center migration project consists of a thorough analysis of the environment to be migrated; thorough preparation, in which the IT infrastructure is broken into logical infrastructure components that will each be migrated as a whole; and subprojects for the migration of each of these components. Each migration project requires the development of migration plans and fall-back scenarios, the performance of automated tests, and the comprehensive testing of each scenario. In fact, comprehensive testing and dry runs of the migration plans significantly reduce the likelihood that a fall-back will be needed during the migration. Minute-by-minute plans must be drawn up, because all actions, such as deactivating and reactivating hardware and software components, must be performed in the correct sequence or simultaneously. The scale and complexity of these plans require support from automated tools resembling those used to manage real-time processes in a factory.
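The sequencing problem at the heart of such minute-by-minute plans is essentially a dependency ordering. A minimal sketch, with made-up component names, could derive the reactivation order from declared dependencies:

```python
# Minimal sketch of sequencing migration actions in dependency order;
# the component names and dependencies are made up for the example.
from graphlib import TopologicalSorter

# "X: {Y}" means Y must be up before X is reactivated.
dependencies = {
    "application": {"database", "middleware"},
    "middleware": {"database"},
    "database": {"storage"},
    "storage": {"network"},
    "network": set(),
}

plan = list(TopologicalSorter(dependencies).static_order())
print("reactivation order:", plan)
# Deactivation at the old site runs in the reverse order.
print("deactivation order:", list(reversed(plan)))
```

Real migration tooling adds timing, parallel tracks and fall-back branches on top of this ordering, but the dependency graph is the backbone that keeps actions in the correct sequence.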
The time available to complete a migration phase is limited and brief: high availability requirements force migrations to be carried out within a limited number of weekends per year. Migrating or relocating applications without jeopardizing data or production requires sophisticated fall-back scenarios. These add complexity to the migration plans and usually halve the time in which migrations can be carried out. The larger the scale of a migration, the greater the complexity: it increases with the number of underlying technical components and the number of vendors of hardware, applications and management services. This increases the risk of losing oversight and of making outright mistakes.
In the following sections, we look at mitigation measures within the migration method and organization that reduce the risks of data center migrations to a manageable level.
The main migration methods can be compared as follows:

- Physical relocation: hardware is moved from location A to location B. + Cost-efficient approach. - No fall-back scenario. - Risk of damage through cooling off, (dis)assembly and transport.
- Rebuild: equivalent hardware is built at location B, and applications and data are transferred. + Fall-back scenario: revert to the old environment.
- Virtualization (P2V): a virtualization platform is built at location B, and the data with virtualized applications is transferred. + Fall-back scenario: revert to the old environment. + Technological progress. - Virtualization involves huge effort. - Relatively costly approach.
- Virtual migration (V2V): an equivalent virtualization platform is built at location B, and virtual applications and data are transferred. + Fall-back scenario: revert to the old environment. + Simple migration.
The virtual migration (V2V) assumes a high degree of virtualization at location A, so that it is fairly simple to transfer data and applications to a similar virtualization platform at location B. This migration approach resembles the way a twin data center replicates applications and data across several sites. Its disadvantage is that not all applications are virtualized. In practice, a combination of these migration methods is used, depending on the nature of the platform to be rehoused.
Cost-benefit assessments
Choosing the right mix of migration methods requires a balance between migration costs and risks. Minimizing the migration risks could lead to an end state that uses the same technical standards as before the migration, which limits the cost and efficiency benefits attainable from technological advances. Ideally, the technical architecture of the environment after the migration aligns well with the technical
selection and installation projects. However, recent research shows that outsourcing using new technology does not necessarily reduce costs and deliver flexibility, in contrast to the construction or redevelopment of your own data center ([Koss10]).
Conclusions

Our experience shows that there is no magic formula that clearly points to modernization, redevelopment or outsourcing of data centers. The principles of a good data center strategy should be aligned with the business objectives, investment opportunities and risk appetite of the organization. The technological and market developments described in this article make long-term decisions necessary. The central theme is: do more with less. With less, in the sense of consolidating data centers and server farms through server virtualization, which also means that the same processing capacity requires less energy. Do more, in the sense of more processing capacity for the same money and new opportunities to accommodate disaster recovery in existing data centers. These innovations require large-scale migration within and between data centers, coupled with significant investment, costs and migration risks. To reduce these risks to an acceptable level, proper assessments must be made of the costs and risks incurred during the migration and during the operational phase after it. This article has drawn on experience to provide a few examples of data center strategies, namely the construction of a new data center, the redevelopment of an existing data center, and the outsourcing of data center activities.
References
[Atos] http://www.atosorigin.com/en-us/services/solutions/atos_tm_infrastructure_solutions/data_center_strategy/default.htm.
[Bala07] Balaouras and Schreck, Maximizing Data Center Investments for Disaster Recovery and Business Resiliency, Forrester Research, October 2007.
[Barr07] L.A. Barroso and U. Hölzle, The Case for Energy-Proportional Computing, Google, IEEE Computer Society, December 2007.
[Cisc] Cisco Cloud Computing: Data Center Strategy, Architecture and Solutions, http://www.cisco.com/web/strategy/docs/gov/CiscoCloudComputing_WP.pdf.
[Data] Data Center Optimization: Beware of the Power Density Paradox, http://www.transitionaldata.com/insights/TDS_DC_Optimization_Power_Density_Paradox_White_Paper.pdf.
[Fria08] Friar, Covello and Bingham, Goldman Sachs IT Spend Survey 2008, Goldman Sachs Global Investment Research.
[Goog10] Google Patents Tower of Containers, Data Center Knowledge, June 18th, 2010, http://www.datacenterknowledge.com/archives/2010/06/18/google-patents-tower-of-containers/.
[Heijm] http://www.digitaalbestuurcongres.nl/Uploads/Files/T05_20-_20Heijmans_20_28BZK_29_20-_20Consolidatie_20Datacenters.pdf.
[Herm10] J.A.M. Hermans, W.S. Chung and W.A. Guensberg, De overheid in de wolken? De plaats van cloud computing in de publieke sector (Government in the clouds? The place of cloud computing in the public sector), Compact 2010/4.
[HPDa] HP Data Center Transformation strategies and solutions: Go from managing unpredictability to making the most of it, http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-6781ENW.pdf.
[HPIT] http://en.wikipedia.org/wiki/HP_IT_Management_Software.
[IBM10] Modular data centers: providing operational dexterity for an increasingly complex world, IBM Global Technology Services, November 2010, ftp://public.dhe.ibm.com/common/ssi/ecm/en/gtw03022usen/GTW03022USEN.PDF.
[Kapl08] Kaplan, Forrest and Kindler, Revolutionizing Data Center Energy Efficiency, McKinsey & Company, July 2008, http://www.mckinsey.com/clientservice/bto/pointofview/pdf/Revolutionizing_Data_Center_Efficiency.pdf.
[Koss10] D. Kossmann, T. Kraska and S. Loesing, An Evaluation of Alternative Architectures for Transaction Processing in the Cloud, ETH Zurich, June 2010.
European insurer

A few years ago, when this major insurance company outsourced its IT infrastructure management activities to a number of providers, it was already known that its data centers were outdated. The insurer had experienced all sorts of technical problems, from leaky cooling systems to weekly power outages. The insurer's strategy was to accommodate the entire IT infrastructure in the provider's data centers in the Netherlands and Germany. The migration of such a complex IT infrastructure, however, required a detailed understanding of the relationships between the critical business chains, applications and underlying technical infrastructure. At the time of publication of this Compact, the insurer is completing the project that will empty its existing data centers and move them to its data center provider. It has chosen to virtualize the existing systems and to carry out the virtual relocation of the systems and associated data in a limited number of weekends. KPMG was brought into this project to set up the risk management process.
[Meis09] D. Meisner, B.T. Gold and T.F. Wenisch, PowerNap: Eliminating Server Idle Power, ASPLOS '09, Washington DC, USA, March 2009.
[Rijk] http://www.rijksoverheid.nl/bestanden/documenten-en-publicaties/kamerstukken/2011/02/14/kamerbrief-uitvoeringsprogramma-compacte-rijksdienst/1-brief-aan-tk-compacte-rijksdienst.pdf.
E. Sturrus
J.J.C. Steevens
W.A. Guensberg
Cloud computing is maturing past the hype stage and is considered by many organizations to be the successor to much of the traditional on-premise IT infrastructure. However, recent research among numerous organizations indicates that the security of cloud computing, and the lack of trust therein, are the biggest obstacles to adoption. Managing access rights to applications and data is increasingly important, especially as the number and complexity of laws and regulations grow. The control of access rights plays a special role in cloud computing, because the data is no longer stored on devices managed by the organizations that own it. This article investigates and outlines the challenges and opportunities arising from Identity and Access Management (IAM) in a cloud computing environment.
Introduction
In recent years, cloud computing has evolved from relatively simple web applications, like Hotmail and Gmail, into commercial propositions such as Salesforce.com and Microsoft Office 365. Research shows that most organizations currently see cloud computing as the IT model of the future; the security of cloud computing, and the lack of trust in existing cloud security levels, appear to be the greatest obstacles to adoption ([Chun10]). The growing amounts of data, users and roles within modern organizations, and the stricter rules and legislation concerning data storage in recent years, have made the management of access rights to applications and data increasingly important and difficult. The control of access rights plays a special role in cloud computing, because data stored in the cloud demands new, often different, security measures from the organizations that own the data. Organizations must change how identities and access rights are managed when moving to cloud computing. For example, many organizations have limited experience with the management and storage of identity data outside the organization. Robust Identity & Access Management (IAM) is required to minimize the security risks of cloud computing ([Gopa09]). This article describes the challenges and opportunities arising from Identity & Access Management in cloud computing environments.
The types of cloud services, with examples of each:
- SaaS (Software + Platform + Infrastructure): Salesforce.com, Microsoft Office 365, Gmail
- PaaS (Platform + Infrastructure): App Engine, Force.com, Azure
- IaaS (Infrastructure): Amazon EC2, Terremark, RackSpace
Cloud computing, from the perspective of the user, is the use of centralized computing resources via the Internet. It differs from traditional IT in the following characteristics:
- Multi-tenancy: unlike traditional IT, the IT resources in the cloud are shared across multiple users.
- Paid services: the user pays only for the use of cloud services and does not invest in additional hardware and software.
- Elasticity: the capacity can be increased or decreased at any time.
- Internet dependent: the primary network for cloud services is the Internet.
- On-demand services: unlike most traditional IT, cloud services can be utilized practically immediately.
Different types of cloud services are available. First and foremost is Software-as-a-Service (SaaS), where software is provided as a cloud service. There is also Platform-as-a-Service (PaaS), where a platform (operating system, application framework, etc.) is offered as a cloud service. Finally, there is Infrastructure-as-a-Service (IaaS), where an IT infrastructure or part thereof (storage, memory, processing power, network capacity, etc.) is offered as a cloud service.
The IAM architecture comprises the following components:

- User management: the activities related to managing end users within the user administration.
- Authentication management: the activities related to the management of data and the allocation (and deallocation) of resources needed to validate the identity of a person.
Compact_ 2012 0
Information security
Figure 2. IAM architecture: authorization model, authoritative sources, desired state, usage, auditing services and reporting services.
- Authorization management: the activities related to defining and managing the access rights that can be assigned to users.
- Access management: the actual identification, authentication and authorization of end users for utilizing the target system.
- Provisioning: the propagation of identities and authorization properties to IT systems.
- Monitoring and auditing: the activities required to achieve monitoring, auditing and reporting goals.
- Federation: the system of protocols, standards and technologies that makes it possible for identities to be transferable and interchangeable between different autonomous domains.
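The authorization management component (grouping access rights into roles that are granted to and revoked from users) can be illustrated with a minimal sketch. All class, role and permission names below are illustrative only and are not taken from any specific product:

```python
# Minimal role-based access control (RBAC) sketch: authorizations are
# grouped into roles, roles are granted to users, and an access check
# resolves a user's effective authorizations through the user's roles.

class RBAC:
    def __init__(self):
        self.role_auths = {}   # role -> set of authorizations
        self.user_roles = {}   # user -> set of roles

    def define_role(self, role, authorizations):
        self.role_auths[role] = set(authorizations)

    def grant_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def revoke_role(self, user, role):
        self.user_roles.get(user, set()).discard(role)

    def is_authorized(self, user, authorization):
        return any(authorization in self.role_auths.get(r, set())
                   for r in self.user_roles.get(user, set()))

rbac = RBAC()
rbac.define_role("sales", {"crm:read", "crm:write"})
rbac.grant_role("alice", "sales")
print(rbac.is_authorized("alice", "crm:write"))  # True
rbac.revoke_role("alice", "sales")
print(rbac.is_authorized("alice", "crm:write"))  # False
```

Revoking the role removes all the authorizations it carried in one step, which is exactly what makes role models easier to audit than direct end-user authorizations.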
The next section elaborates on the various challenges related to the components of the IAM architecture in a cloud computing environment.
IAM plays a major role in securing IT resources. IAM faces many challenges when cloud computing is used. IAM processes, such as adding a user, are managed by the cloud provider instead of the organization owning the data. It is difficult for the organization using the cloud service to verify whether a modification has been completed successfully within the administration of the cloud provider. Furthermore, it is harder to check whether the data stored by the cloud provider is only accessible to authorized users.
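The verification problem described above can be made concrete: a customer can periodically reconcile the accounts it expects the provider to hold against what the provider actually reports. The sketch below is illustrative; a real check would query the provider's administration interface rather than take account lists as parameters:

```python
# Sketch of a periodic reconciliation check: compare the accounts the
# customer expects to exist at the cloud provider with what the provider
# actually reports, and flag the differences for follow-up.

def reconcile(expected_accounts, provider_accounts):
    expected = set(expected_accounts)
    actual = set(provider_accounts)
    return {
        "missing_at_provider": sorted(expected - actual),   # change not applied
        "orphaned_at_provider": sorted(actual - expected),  # access not revoked
    }

drift = reconcile(
    expected_accounts=["alice", "bob"],
    provider_accounts=["alice", "carol"],  # carol's removal never propagated
)
print(drift)
# {'missing_at_provider': ['bob'], 'orphaned_at_provider': ['carol']}
```

The "orphaned" case is the security-relevant one: an account that still exists at the provider after the customer revoked it locally means data may be accessible to unauthorized users.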
User management
User management deals with the policies and activities within the scope of administering the entire lifecycle of users in the appropriate registers (initial registration, modification and deletion). For example, this could be the HR system for the employees of an organization. The HR system records the recruitment, promotion and dismissal of employees. In addition, user management covers the policies and activities related to granting authorizations to the users registered in the HR database. An organization that utilizes cloud services may face challenges in user management that are new compared to the traditional on-premise situation. Managing the user lifecycle is already a challenge in a traditional IT environment; it is even more so in a cloud environment. The organization cannot always maintain control over the user administration via its own HR system (or other centralized source), because the cloud provider usually also maintains a user administration system. What happens when users update their information via the cloud provider? How are the managers of the cloud services and their attributes kept up to date? Which laws and regulations (possibly outside the organization's own jurisdiction) apply to the storing of personal information? All these issues have to be addressed again in a cloud computing environment. The allocation of authorizations is also part of user management: the customer and cloud provider must agree on who is responsible for granting and revoking user rights.
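The lifecycle described above (the HR system as authoritative source driving the user administration) can be sketched as a small event handler. The event shapes and field names are assumptions for illustration, not part of any real HR system:

```python
# Sketch of HR-driven user lifecycle management (hire / transfer /
# termination): each HR event updates the user administration that in
# turn feeds the cloud provider's directory.

def apply_hr_event(user_store, event):
    kind, user = event["type"], event["user"]
    if kind == "hire":
        user_store[user] = {"department": event["department"], "active": True}
    elif kind == "transfer":
        user_store[user]["department"] = event["department"]
    elif kind == "termination":
        # Deactivate rather than delete, so the audit trail is preserved.
        user_store[user]["active"] = False
    return user_store

store = {}
for ev in [
    {"type": "hire", "user": "alice", "department": "sales"},
    {"type": "transfer", "user": "alice", "department": "finance"},
    {"type": "termination", "user": "alice"},
]:
    apply_hr_event(store, ev)
print(store)  # {'alice': {'department': 'finance', 'active': False}}
```

The open questions in the text (who owns updates made directly at the cloud provider, and under which jurisdiction the data is stored) are organizational and contractual; no amount of tooling resolves them by itself.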
Authorization management

Authorization management deals with the policies and activities in relation to defining and administering authorizations. This allows authorizations to be grouped into a single role (based on so-called authorization groups). After granting this role to a user, that user can carry out a particular task or sub-task on certain objects. When a manager welcomes a new team member, he has to grant the appropriate role to the new user. Once the association is made, the authorizations that belong to this role are available to the new user. As previously described, the granting of these predefined roles to users is carried out via user management. Likewise, for authorization management there are new challenges when the organization utilizes cloud services. The cloud provider and the customer must agree upon where the authorizations and/or roles are managed. The IAM system must be capable of exchanging (automated) messages with the means of authentication that the cloud provider uses. In many cases, the cloud provider and customer use conflicting role models, and the maturity of the role models differs. For example, the cloud provider may have switched over to centrally organized Role-Based Access Control (RBAC), while the customer still uses direct end-user authorizations administered in a decentralized manner. In accordance with user management principles, it is necessary to maintain a trusted relationship on authorization management that is supported by contractual agreements.

Authentication management

Authentication management includes the processes and procedures for administering the authentication of users. If particular data is very sensitive, stringent authentication may be required to access this data (for example, by using a smart card). Defining and recording these requirements within objects in the form of policies and guidelines is part of authentication management. Authentication management also deals with the issuing and revocation of authentication means (for example, usernames and passwords, and smart cards). The following challenges in authentication management are new compared to the traditional on-premise situation: the authentication means for different cloud providers may vary, and a cloud provider may only support mechanisms that do not match the (security) technical requirements of the customer. It can also be complicated to implement the level of authentication in a uniform way. In addition, synchronization of passwords can be a challenge, especially in environments where the user administration changes quickly or where users must change their own passwords. Finally, a working Single Sign-On (SSO) environment must be maintained for technical integration with the cloud provider. SSO is a collection of technologies that allows the user to authenticate once as a particular user for different services, which then gives access to the other services.
Access management
Access management deals with the (operational) processes that ensure that access to IT resources is only granted in conformance with the requirements of the information security policies and based on the access rights associated with the users. The domain of access management has the following new challenges compared to the traditional on-premise situation: agreements must be made between the cloud provider, third parties and the customer on how to appropriately organize access to the target systems. For example, the exchange of authorization data (user names, passwords, rights and roles) must be fast enough to grant or deny access instantly. The customer and the cloud provider can decide to establish a trusted relationship supported by certificates and/or a Public Key Infrastructure (PKI).
The utilization of cloud services creates new challenges for authorization management
Provisioning
IAM must ensure that after a role is granted to a user, the user is created in the relevant objects and is then granted the appropriate authorizations for the corresponding objects. Within IAM, this process is called provisioning. Provisioning deals with the manual and/or automatic propagation of user and authorization data to objects. In other words, provisioning consists of creating a user and assigning authorizations to the user objects. Manual provisioning means that a system manager creates a user with authorizations on request. Automatic provisioning means that the system processes these requests without any intervention by a system manager. When a role is revoked from a user, deprovisioning has to take place, which means that the authorizations are revoked from the user. Provisioning in a cloud environment has the following challenges: the propagation of accounts within the organization and within the cloud provider is challenging, since technologies and standards often differ for each cloud provider. As more cloud providers deliver services to an organization, it becomes exponentially more complex for the customer to implement provisioning. The creation and modification of accounts and rights on target systems is generally driven by business need. However, deletion often receives less attention, because it serves limited business need and it is believed that the security risk does not outweigh the additional effort required to follow through the deprovisioning process effectively. With respect to the contract with the cloud provider, customers often forget to give sufficient attention to the ending of the relationship. It is then unclear what happens to the data and user rights when the cloud provider no longer provides paid services to the customer.

Monitoring and auditing

Monitoring and auditing in a cloud environment bring their own challenges, in particular monitoring the use of IT resources and auditing compliance with the requirements of the applicable information security policies.
One reason for this is that the customer often does not have insight into what resources the cloud provider utilizes to manage and monitor the IT resources. A consequence of this lack of transparency is that it may be difficult for a customer to achieve full compliance. In particular, the use of accounts with high-level privileges is difficult to monitor.
Traditional model
If an organization covers part of its IT needs with cloud services, the components of the IAM framework must work together with the cloud provider. This may be achieved by linking the existing IAM with the cloud provider (see Figure 3). In this case, the organization manages identities and access rights locally and then propagates these to the various cloud providers. For each cloud provider, the authorized users must be added to the directory of that provider. There are several packages on the market that automate the processes of creation, modification and deletion by synchronizing the local directory with the cloud. However, the connector that enables the synchronization must be separately developed and maintained for each cloud provider, and a drawback is the added complexity in management when there are multiple cloud providers. Identification and authentication for cloud services are performed by the cloud provider; handing these processes over requires strong confidence in the provider and its policies. There are tools on the market that make it possible to link with local SSO applications, so that the user needs fewer identities to access services.
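The synchronization packages mentioned above essentially compute the difference between the local directory and each provider's directory, then propagate creations and deletions through a per-provider connector. A hedged sketch of that core loop, in which a plain Python set stands in for a real provider API:

```python
# Sketch of local-directory-to-cloud synchronization. Each cloud
# provider needs its own connector (a create/delete pair here),
# illustrating the per-provider maintenance burden described above.

def sync(local_users, provider_directory, create, delete):
    """Propagate creations and deletions so the provider's directory
    matches the local one."""
    local = set(local_users)
    remote = set(provider_directory)   # snapshot before mutating
    for user in sorted(local - remote):
        create(user)                   # present locally, missing remotely
    for user in sorted(remote - local):
        delete(user)                   # present remotely, removed locally

# One simulated provider directory; a real deployment would have one
# connector per cloud provider, each with its own protocol and quirks.
cloud_dir = {"bob", "carol"}
sync(
    local_users={"alice", "bob"},
    provider_directory=cloud_dir,
    create=cloud_dir.add,
    delete=cloud_dir.discard,
)
print(sorted(cloud_dir))  # ['alice', 'bob']
```

With N providers, the organization maintains N such connectors, which is exactly the management complexity the text flags as a drawback of this model.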
This option is already actively used by a large Dutch retailer that has linked the local IAM infrastructure to their cloud provider of email and calendar services.
Trusted-relationship model

Another option is to have the cloud provider support the IAM of the customer (see Figure 4). The customer manages the local identities and access rights. The users are stored locally in a directory, and an access request for a cloud service is authenticated locally. The cloud provider checks the authorizations and validates these using the directory of the customer. The cloud provider thus trusts the IAM of the customer, and on that basis the users can utilize the services. In most cases, duplication of accounts is therefore unnecessary (except for auditing purposes). If this option is used, the customer may continue to use the existing access methods to manage the user activities. A disadvantage of this option is that, when there is a large number of cloud providers, it is necessary to make agreements with each cloud provider about the confidentiality of the customer's local IAM. In addition, for many cloud providers it is impossible to maintain trustworthy and appropriate monitoring of the IAM of all customers.
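The trust mechanics of this model can be illustrated with a small sketch: the customer's local IAM issues a signed assertion ("this user authenticated, with this role"), and the cloud provider verifies the signature instead of keeping its own copy of the user administration. Real deployments use standards such as SAML or PKI-based trust; the shared-secret HMAC below only illustrates the principle, and all names are made up:

```python
# Illustrative federation sketch: the customer side issues a signed
# assertion, the cloud provider side verifies it against a secret
# agreed upon when the trusted relationship was established.
import hashlib
import hmac
import json

SHARED_SECRET = b"agreed-when-the-contract-was-signed"  # illustrative only

def issue_assertion(user, role):
    """Customer side: the local IAM vouches for an authenticated user."""
    payload = json.dumps({"user": user, "role": role}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_assertion(payload, sig):
    """Cloud provider side: accept the user only if the assertion is
    genuinely from the trusted customer IAM (signature matches)."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload, sig = issue_assertion("alice", "sales")
print(verify_assertion(payload, sig))                              # True
print(verify_assertion(payload.replace(b"sales", b"admin"), sig))  # False
```

A tampered assertion (here, an attempted role escalation) fails verification, which is why no account duplication at the provider is needed: the signature itself carries the trust.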
Figure 3. Linking the existing IAM with the cloud provider.

Figure 4. Federated cooperation between customer and provider.
Figure 5. The Identity Service Provider (IdSP) model.

All-in-the-cloud model

The last option is to outsource the entire IAM and utilize it as a cloud service (see Figure 6). In this case, the organization delegates all IAM systems and processes to a third party operating in the cloud itself. The link with all cloud providers is managed and controlled by this third party. Access to local IT resources is also conducted via the IAM service. Effectively, all management and control of IAM are outsourced to the cloud. High trust in the IAM service is required, and it is difficult for the customer to monitor the status of the processes for either local or cloud services. Currently, there are no fully mature offerings of this kind available on the market.

Figure 6. The all-in-the-cloud model.

Conclusion

By using (public) cloud services, the organization will need to revise its control measures to maintain the required level of security. While security risks may well decrease by transferring selected services to the cloud, risks are likely to increase in certain areas, such as IAM. To minimize the risk, it is necessary to set up the IAM framework properly. The implementation of the necessary changes to IAM in a cloud environment is critical to providing an adequate level of confidence and guaranteeing security. The fact that some of the IT resources are no longer contained in the organization itself raises several questions in the IAM domain. Even though liability remains with the organization utilizing the services, keeping control of the IAM processes is more difficult because these are often part of the cloud provider's domain. For user management, it is important that organizations verify whether changes to user data are taken over by the cloud provider. Organizations must comply with company, national and international laws and regulations with regard to personal information. When considering authentication, it is important that the authentication methods and requirements used match those of the cloud provider. Furthermore, the authorization models should align, so that the correct rights are granted to the authenticated users. The processing of both authorizations and authentications must be timely and accurate in order for the partnering organizations to have confidence in the actual use of cloud services. Finally, it is essential that the monitoring and auditing processes meet the requirements of the applicable security policies.

Several options are available for managing identity and access to cloud services. First, the IAM framework can be connected with the cloud provider: the customer itself manages users and their rights and propagates them to the cloud provider, a process that may be automated; identification and authentication occur in the cloud provider's domain. A second option is to allow the cloud provider to support the customer's IAM framework. The use of this trusted relationship makes it unnecessary to propagate users to all cloud providers, and identification and authentication occur locally. A third option is to use an IdSP: a third party that is trusted by both customers and cloud service providers and that validates the identity of users. The last option is to outsource the entire IAM stack and consume IAM as a cloud service altogether. Which option is the most suitable depends on the IAM requirements of the organization and on the type and number of cloud services consumed. The IAM framework should be properly established before cloud services are utilized, to minimize risk exposure. Furthermore, it is very important to align the IAM framework with the cloud landscape to allow effective cooperation with the cloud provider and adequate security safeguards.
References
[Blak09] B. Blakley, The Business of Identity Services, Midvale, Burton Group, 2009.
[Chun10] M. Chung and J. Hermans, From Hype to Future: KPMG's 2010 Cloud Computing Survey, KPMG Advisory, Amstelveen, 2010.
[Cser10] A. Cser, S. Balaouras and N.M. Hayes, Are You Ready For Cloud-Based IAM?, Cambridge, Forrester Research, 2010.
[Gopa09] A. Gopalakrishnan, Cloud Computing Identity Management, SETLabs Briefings 7 (7), p. 45-54, 2009.
[Herm05] J. Hermans and J. ter Hart, Identity & Access Management: operational excellence or in control?, Compact 2005/3, p. 47-53.
[Herm10] J. Hermans, M. Chung and W. Guensberg, De overheid in de wolken? (The government in the clouds), Compact 2010/4, p. 11-20.
[KPMG09] KPMG, IAM Methodology, KPMG International, 2009.
[NIST11] NIST, The NIST Definition of Cloud Computing, Information Technology Laboratory, Gaithersburg, National Institute of Standards and Technology, 2011.
M.B. Paques
A social engineering test has two goals:

- identify the risks to the organization being evaluated;
- make employees aware of these risks (training).
During the tests, attempts are made to manipulate employees so that unauthorized access to confidential information is obtained. These attempts vary from a simple phone call test in which employees are tricked into disclosing
passwords, or a so-called phishing attack (in which the attacker uses forged emails and/or websites), to a physical attack where a client's premises are entered undercover by a tester using counterfeit access badges (or sometimes disguised as a pizza delivery person or fireman) to gather confidential information from the inside. The findings are usually quite remarkable. To name just a few: unauthorized access has been gained to safes in banks, heavily secured government areas and large data centers. In several of these cases, the assignment also included a penetration test. In these combined tests, also known as red teaming (Figure 1), the team first has to gain unauthorized physical access to the building, then has to hack internal systems, and eventually has to leave with confidential information without being caught. The main difference between a penetration test, where you can attempt to access systems multiple times, and a social engineering test is that in the latter the tester usually has only one chance of success. There are no try-outs; it must be successful the very first time. The tester has to be prepared for unforeseen situations and must have a made-up story (the pretext) ready in case his presence is being questioned. If his story is not credible, there is a risk of being taken away in handcuffs. The employees of the organization for which the test is performed are generally not informed in advance about the test. Often, only a few executives are aware of the test, and even they do not know exactly when it will be carried out. Security staff are not meant to be put on alert and take extra precautions; this approach makes it possible to obtain a realistic impression of the risks. As a result of this approach, security personnel may take drastic measures if the tester is unmasked as an intruder (especially when he has a stack of confidential documents in his possession).
Figure 1. Red teaming is a test approach where different attack techniques are combined to simulate an actual attack: social engineering (e.g. phishing), penetration testing (e.g. database and host security testing, exploiting), vulnerability assessment (e.g. mapping, scanning), war dialing, war driving and stress testing.
The timing of an attack is also very important. Often, the help of an employee is required to get past a gate, fence, reception desk or other secured entrance, and the exact moment that a suitable employee is present may be a matter of seconds. With a good story, improvisation skills for unanticipated situations, the ability to make contact easily and sometimes nerves of steel, an attacker may be able to penetrate even the most secure environments. During an attack it is useful to know which people to approach and which to avoid. For example, secretaries often know a lot about what is happening in a company, and their knowledge can be of tremendous value. However, precisely because they know so much about what is happening in the company, a good, well-supported story is a prerequisite when you approach them. Complete improvisation may be like a game of Russian roulette and result in a premature and undesired end of the test. Case study 1 describes a test case in which the individuals approached were specifically selected to make the chance of success as high as possible.
Employee training
An important aspect of a social engineering test is to make the employees aware of the risks. Nevertheless, the attack scenarios should be selected in such a way that the impact experienced by employees is kept to the absolute minimum required. Therefore, we do not give the client any details (insofar as possible) about which employees played a role in the tests (for example, which employees provided their password). Details are anonymized as much as possible. The least sensible thing a client can do (and, of course, highly undesirable, but not inconceivable) is to take disciplinary action against these employees. The outcome of such an action is that employees who become victims of a real social engineering attack may not report it for fear of reprisals, and the organization does not become aware of the attack until it is too late and has to deal with the consequences. A good follow-up to a social engineering test is to present the results to all employees, so that the test becomes a learning experience and they are better prepared against a real attack. In practice, we find that most untrained employees are susceptible to a social engineering attack and that employees can be misled at every level of the organization.
Case study 1
A colleague and I carried out an advanced phishing attack on one of our clients. My colleague placed himself at the entrance of the client's office building and selectively asked employees who entered whether they wanted to take part in a survey about the upcoming Christmas activity. We focused our selection on the younger female employees to minimize the risk of accidentally speaking with managers or IT staff. (They would know whether such a survey existed and thus figure out quite quickly that an attack was underway.) Beforehand, we had examined the LinkedIn and Facebook profiles of key people in the organization so we could recognize and avoid these risky people. Participants would be included in a raffle for an iPod Touch. The employees who wanted to participate were given a sealed envelope containing a letter explaining the activity and a link to the forged web page with the survey that we had set up beforehand. After logging in with their credentials, the employees were presented with ten questions about their ideas for the perfect Christmas activity, which they could supplement with their own suggestions. After submitting their responses, they were thanked for their participation. Of course, we were not at all interested in the employees' party ideas, but only in their login details. I had taken position around the corner to keep an eye through the window on whether anything suspicious happened inside. If it became necessary, I could warn my colleague via our two-way radio and tell him that it was time to take to his heels. At the same time, I watched my smartphone, which provided real-time updates on the number of users who logged in on the web page. Within minutes, several users had already entered their passwords. After about 35 minutes, we both left the location in different directions; we estimated that this was the minimum amount of time it would take to be detected.
In the discussion with the client afterwards, we learned that only a few minutes after we left, two alarmed employees came outside to demand an explanation.
Psychological tricks
For each test the attack scenario is completely different, because it is tailored to the client's specific circumstances. Nonetheless, some fundamental psychological principles or tricks are regularly used:
- Making a personal connection: mentioning a common problem or interest is typical. Social media can be a valuable source of information. Indicating that you have worked for the same company or play the same sport builds trust. You can also say you have a friend or acquaintance in common. After the connection is made, it is harder for the victim to refuse a request.
- Time pressure: create a situation where the victim does not have enough time to make a proper decision, because circumstances are described in such a way that a quick decision must be made. The Windows operating system often shows the name of the last user that logged in (but not the password). Sitting at a user's (locked) PC, you can usually block that user's account by entering the wrong password five times. After blocking the account, you can call the help desk and say that you must give an important presentation within five minutes and need to get into your blocked account. Due to the time pressure, the help desk employee (after checking that the account
Figure 2. Security badge costing a few dollars that a social engineer can use to exude authority.
is actually blocked) may issue a temporary password and give it over the telephone. Now you have access to the system.
- Referring to a senior person in the organization (authority): this trick often works very effectively combined with the time pressure element. Indicate that the victim is hindering the actions of a high-ranking person in the organization and that the victim must immediately assist with the request. A variation of this is using clothing and accessories that exude authority (see also Figure 2). Wearing a suit and tie sometimes makes it much easier to get into a building without being questioned than wearing jeans and a T-shirt. I once entered a bank in a soaking construction worker's jacket, announcing that there was a leak on the floor above. I said something like: "I just want to take a quick look to see if any water is coming through the ceiling." The staff were happy that they had been warned in time and, without asking questions, allowed me access to the restricted areas of the building that should only be accessible to bank staff.
- Asking for help: for example, ask someone to print a file from a USB memory stick that is infected with malware (infecting the PC of the victim as soon as the file on the stick is accessed), or borrow an access badge because you left yours on your desk. A request made by a man (the tester) to a woman (the victim), and vice versa, is usually fulfilled more easily than when the gender is the same.
- Using recognizable items related to the organization that is being evaluated: employees may believe they are dealing with a co-worker because you have an access badge (possibly forged), a similar style of clothing, business cards, jargon, knowledge of work methods or names of information systems or colleagues (name dropping). All are less likely to prompt critical questions. If the name on the (fake) badge also has a LinkedIn or Facebook profile that refers to the company being evaluated, even the most suspicious people may be convinced that they are dealing with a co-worker. Another method is to request one employee to give information to another employee (for example, communicate with an internal department to have them forward a wrongly addressed email); using these internal reference points increases credibility. Another example is recording the hold music that companies play when callers are put on hold. You can call an employee, then say after a few minutes: "Wait a minute please, I have to get the other line." You then put the victim on hold and play the hold music that you previously recorded, making the victim unconsciously think: "Hey, that's our music, he must work for our company."
- Indicating that all colleagues of the victim have acted the same way, which makes the request seem completely normal: people are inclined to believe something is correct when others have made the same choice. A variation of this is the gradual escalation of requests (for information). If someone has already fulfilled a number of requests (for example, they looked up trivial information), it is then more difficult to refuse a request for confidential information.
- Creating the need to return a favor: giving people something creates an emotional obligation, where they feel they owe you something back. This makes it easier than usual to get someone to fulfill a request. When you have done something for someone (even when they did not ask for it), it becomes more difficult for that person to refuse a request.
- Creating the impression that the actual request is already a concession: when all that is needed is five minutes inside, it can be useful to request a tour of the premises. If this is refused, insist that it will only take five minutes to have a quick look around.
- Offering something that leads to a personal benefit: for example, send a phishing email with a code to receive a personal Christmas packet.
- Creating unexpected situations, so that employees (especially security guards) are no longer able to follow their usual routine: we once dressed up as Sinterklaas (a traditional winter holiday figure celebrated in the Netherlands) and his helper, and even penetrated a high-security data center in this manner (Figure 3). The data center was at a secluded location, surrounded by high fences with barbed wire, dozens of cameras and an earthen wall that hid the building from view. We had called security on the phone a week in advance, pretending to be from the HR department, and told them that we were calling about the Sinterklaas activities at the different locations. To get onto the premises, we first had to get through a checkpoint where a security guard behind bullet-proof glass consulted his colleagues inside the building when we showed up. Somewhat to our surprise, we were allowed to enter the premises, and the door was locked again behind us. When we arrived in the data center itself, we walked straight up to a glassed-in security area with five security guards. A quick peek in our heavy bag of pepernoten (traditional Sinterklaas cookies) would have sufficed to reveal the recording equipment of the spy camera (Figure 4) and unmask us. "Hello! Well, here we are then!", we called out, and instead of putting identification into the tray, we filled it with pepernoten. After bribing one of the guards with a chocolate letter, they allowed us access. We made a tour through the building and left again with no problems.
- Using distraction: such as bringing along that attractive female colleague with a short skirt and high heels.

Figure 3. The Sinterklaas and helper who managed to penetrate the data center.

Figure 4. A button camera that surreptitiously films security-sensitive actions such as password keystrokes.
As mentioned before, for each social engineering test specific attack scenarios are elaborated depending on the specific situation of the client. These scenarios often use one or more of the aforementioned techniques. In Case study 2, a personal connection was made with the victim, recognition was induced by referring to internal departments, a personal benefit was offered (not losing data) and a compromise was agreed upon (last paragraph). The fact that help had previously been given also created an obligation to return the favor.
Case study 2
In a test case where the goal was to gain unauthorized access to a system, I called an employee to report that there was probably a problem with her system as it was causing an enormous amount of traffic on the network. I said that it would eventually crash her system and in the worst case prevent access to existing data. When I asked whether her laptop was very slow lately, I did indeed receive an affirmative answer (of course). After some random tapping on my keyboard, I said that I had found the problem, emphasized how very difficult it was to solve, but that I was working on it. I hung up and called again after half an hour to indicate that the problem was solved. After she had thanked me emphatically, I hung up.
Two days later, I called again and said that, unfortunately, the problem turned out to still be present and it appeared that changes needed to be made to her laptop. I asked her whether she could bring her laptop to the local IT department (which I had already called earlier to determine how the process worked and to verify that there actually was a local service point), to give the impression that I actually worked within her company. The employee said she was very busy and that it was very bad timing. I said that we could make an exception and that I could try to solve the problem remotely. I explained that, for security reasons, we never asked users for their passwords over the phone, and I therefore asked her to temporarily change her password to welcome123 so that I could fix the problem remotely. Two minutes later I was able to log in to the laptop and had access to the confidential data that I wanted.
Figure 6. Audio bug with which one can listen in via cell phone calls.
Methods
Some common methods that are used in a social engineering attack are presented below. These methods partly rely on the previously described psychological tricks. The combination of methods constitutes the attack scenario.
Case study 3
It was just after eight o'clock in the morning when I parked my car a few hundred feet from the building of one of our clients. I had earlier determined that most employees came to work by car and parked behind the head office in the private parking lot. It seemed best to mimic this habit, because walking through the car park would probably draw attention to my presence. In my car mirror, I kept an eye out for employees driving up to the lot. After about ten minutes, a gray car appeared. Once the car passed me, I pulled out and followed closely behind. Unfortunately, the car drove past the building of today's target and I was forced to circle back to my starting position. The second time, I had more luck, and after the employee used his access badge to open the gate I could follow closely behind into the private car park behind the building. I waited until the employee had left his car and entered through the staff entrance at the rear of the building. I then walked to the smoking area near the entrance, grabbed a new pack of cigarettes out of my pocket and lit one. Fortunately, there were no cameras on this side of the building, so I could quietly wait until an unsuspecting employee joined this non-smoker, who was flaunting a cigarette for the occasion. A woman wanting a smoke appeared after a little while. We talked a little and walked back together through the door, opened with her employee badge, into the building. I was inside! I immediately decided to follow her up the stairwell, because it appeared that this client had placed card readers on the doors of each floor. I followed her to the fourth floor and entered the office; once again she politely opened the door for both of us. Luckily, there was a coffee machine, so I could stay there for a while and observe the floor without walking into a dead-end part of the building. A little further away, I could see some rooms set up for meetings.
I took my coffee with me to a meeting room, removed the cable from the VoIP phone and inserted it into my laptop. While my laptop booted up, I cast a glance at the stack of paper that I had grabbed from the bin near the printer while walking by. It included emails with a lot of addresses of employees in the To and CC fields. Perfect! These would be the victims in my next attack.
Phishing: an attack method using forged email messages or web pages that appear to be legitimate, such as those of the employer, but which in reality are controlled by the attacker. These messages and pages are often aimed at collecting employee data (for example, passwords).
Dumpster diving: searching for valuable information by looking through garbage bins, bins by copiers, or containers outside an organization's premises.
Pretexting: obtaining information under false pretenses (the pretext), for example by calling an employee and pretending to be a colleague.
Tailgating: slipping in behind an employee through a secured entry gate to gain physical access to a secured location.
Reverse social engineering: a method in which the victim is manipulated into asking the social engineer for help. The social engineer creates a problem for the victim, makes himself known as an expert who can solve it, and then waits for the victim to make a request. Trust is more likely because the victim takes the initiative.
Shoulder surfing: watching while someone enters a password or PIN code. You do not actually have to watch in person: in several tests we used miniature spy cameras, such as a button camera (Figure 4) that replaces one of the buttons on your jacket. After the entry of a password has been recorded, it can be played back later.
Placement of listening devices (bugs), wireless access points or key loggers: once access is gained to a building, it is often easy to place listening devices, and modern listening equipment is available at low cost. For instance, such a device can dial a previously programmed cell phone number when sound is detected, so that the attacker can listen in via the phone (Figure 6). Alternatively, a key logger can be installed (Figure 7). This device can be plugged in between the keyboard and the computer in a few seconds and will then record all keystrokes that are typed. Current versions of key loggers can automatically email the captured keystrokes to the attacker through a wireless network. Hiding an access point inside a building may also be useful (for example, behind a radiator). After it is connected to the network, the attacker can leave the building. On the outside, say in a car, the attacker then connects to the newly installed access point and
Information security
After my laptop booted, I performed a port scan on port 80 on nearby IP addresses to look for internal web pages. I also used my web browser to try a few obvious URLs such as intranet.clientname.com, intraweb.clientname.com, search.clientname.com, directory.clientname.com, and so on. It did not take me long to find an internal web page. I copied the page, adjusted some text, and after fifteen minutes I had put together an "employee of the month" voting page that looked exactly like the company web pages, including logos and colors. Then I started a web server on my laptop so that the newly created page could be accessed via the internal network. A second, limited port scan allowed me to identify an internal mail server that had mail relaying enabled (allowing anonymous email to be sent). At that moment, I had been in the building for at least twenty minutes and had not been questioned by anyone about what I was doing there. Then I focused again on the victims. First, I sent an email via the identified mail server, containing the content of an email that I had copied from my spam folder, to some of the addresses in the printed emails. I hoped that this email would trigger an out-of-office message from one of the employees. When I received just such an email, I copied the signature from it and changed the name and function to fictional ones. I now had a web page and an email message that looked exactly like those used in the organization. Then I created an email with a reminder for the invitation to vote for the employee of the month. The message indicated that a random selection of employees could nominate their colleagues for this award. This could be done via an internal web page reachable through the link at the bottom of the email. Naturally, logging in was required to prevent people from casting duplicate votes. The reminder indicated that those who had missed the first mail still had the chance to cast their vote up until 12:00 the same day.
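The limited port scan described here can be illustrated with a short sketch. This is a minimal example for authorized testing only, not the tooling actually used in the engagement; the function names and the idea of a host list are hypothetical.

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

def find_web_servers(hosts, port=80):
    """Return the hosts that accept connections on the given port, e.g. internal web servers."""
    return [host for host in hosts if port_open(host, port)]
```

Running find_web_servers over a range of nearby internal addresses is essentially what is described in the case study; dedicated scanners do the same thing faster and in parallel.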
I switched to a second window and calmly waited
until the password of the first enthusiastic employee appeared in the second window. This took exactly two minutes after sending out the reminder email. By logging in at the site, the employees, in addition to their password and username, also automatically left behind their IP address. This was all the information that I needed. I started Metasploit (a hacker toolkit), which allowed me to remotely log in to the PC of the first survey participant. Meanwhile, I had also found the user in the internal online telephone directory. Unfortunately, it turned out that the first employee worked in the finance department. At this stage, I was really looking for an IT administrator, because they often have privileges to access a large number of systems. I decided to dump the local password hashes on the user's system. Using the hash of the local administrator account, I tried to authenticate against the system of an arbitrary user on the network. This trick has worked at several client sites and was now also successful: since all (or at least a lot of) desktops were installed from the very same image, the passwords for the local accounts were also identical. At this point, I had been inside for about three quarters of an hour without anyone noticing, and I had already taken full control of two systems. Unfortunately, the password hash did not work on the domain controller, so I decided to keep logging into desktop systems until I found one where a user (or process) was running with the highest privileges (for example, an IT administrator). After twenty minutes, I found a system where an IT administrator was logged on. The freeware Metasploit tool has a built-in feature that allows you to take over the identity of a user, and with it all his privileges.
After I took over the identity of the IT administrator, I had domain administrator rights and full access to all Windows systems and the data present on the network, including all servers with the financial administration and the mailboxes of the board of directors. I made some screenshots and decided that it was time for a second cup of coffee.
then accesses the internal network with little chance of being detected and arrested.
Malware: malicious software that, for example, collects passwords and forwards them to the attacker's email address. Malware can be installed on systems by, for example, using an infected PDF file ([Paqu01]). The PDF file can be circulated in different ways, for example by leaving behind a USB memory stick containing files titled "2011 payroll" or "fraud investigations in 2011". Ideal places to leave these sticks are in the restrooms or by the coffee machine. When the victim opens the PDF, the malware runs in the background automatically.
In Case study 3, some of the above methods are used. This example shows, among other things, how information obtained in one attack can be used in another attack to obtain even more information.
Knowing about possible attack techniques and the weaknesses of the target builds real awareness
Case study 3 shows that it is not always important how many employees are tricked by social engineers. In this particular situation, it was enough for an outsider to deceive only two employees to compromise the entire IT environment.

Countermeasures

Awareness

The keyword in countering social engineering attacks is awareness: more specifically, what the targets know about possible attack techniques and their own weaknesses. In one of my assignments, in addition to the usual paper bins alongside printers, the client had also placed large enclosed bins for any paper containing confidential information. Nonetheless, the bin for ordinary waste paper yielded a huge stack of confidential documents (reports of security incidents, HR information, passwords, and so on). Why? It was probably too much trouble to push the piles of paper through the small slot in the bin for confidential paper, and it was just easier to throw it all away in one go. When clients hear at a presentation or training session how a trick works, people often say things like: "you have to be really naive to fall for that; it would never work on me." Our test results show otherwise. It is therefore useful to perform a test and confront employees with the results within their own organization. It usually shows that people are not as prepared for such an attack as they think they are. It is this that leads to real awareness. In addition to promoting awareness, a test is also quite useful in identifying risks.

Guidelines

Alongside awareness, it is essential to draw up guidelines and to continue to check compliance with them. Consider drawing up ten rules for information security. An example is as follows:
1. Never reveal your passwords to others (including IT employees).
2. Do not share internal information with outsiders.
3. Adhere to the clean desk and whiteboard policy.
4. Lock your computer when you leave your workstation.
5. Do not leave any information behind at the printer.
6. Use secure waste bins for confidential information.
7. Verify the identity of the caller when asked for confidential information (for example, in the case of a telephone request, ask the caller to call back on a specific number).
8. Never save confidential information locally or on a private PC or device (drive, USB stick).
9. Immediately alert the security officer about any suspicious activities.
10. Keep your access badge visible and ask colleagues to wear their badge. Any unknown person without a badge should be escorted out of the building and handed over to the reception and/or security.
To ensure that such rules are followed, it is necessary to monitor whether employees are actually complying. The outcome of the monitoring (both positive and negative) should be given as feedback to the relevant employees.

Conclusion

After reading this article, you may doubt whether the cases described ever happened and whether such incidents can succeed in real life. Unfortunately, the reality is that these and similar attacks occur every day, despite the various security measures. Security personnel, barbed wire fences, access cards, CCTV, alarm systems, and so on, are not enough: social engineers know how to penetrate into the heart of an organization. Performing a social engineering test can be a good way to identify risks in an organization and raise employee awareness.

References

[Hadg01] Christopher Hadnagy, Social Engineering: The Art of Human Hacking, 2010.
[Mitn01] Kevin D. Mitnick and William L. Simon, The Art of Deception, 2002.
[Paqu01] Matthieu Paques, Hacking with PDF files, http://www.compact.nl/artikelen/C-2009-4-Paques.htm.
[security.nl] Articles concerning social engineering attacks, http://www.security.nl/tag/social%20engineering.
Gerard Wijers, Rudolf Liefers and Oscar Halfhide The quality of the internal organization is becoming increasingly crucial to the existence and eventual success of an enterprise. At the same time, the organization must have the potential to own or develop the capacities, knowledge and dynamic competencies needed to become a distinctively outstanding organization and to maintain a sustainable competitive advantage. Modern organizations need sophisticated leadership and entrepreneurship to steer the organization and to maintain its course. A balanced, well-designed and flexible governance model, also called an operating model, is essential for such organizations. A governance model is a compass that not only supports the steering but also enhances the decision-making that is required for proper steering. This article portrays various possible forms of a Target Operating Model for an IT organization in an outsourcing situation, as seen from the demand-supply management perspective. A variety of designs are elaborated, based on different fundamental starting points.
Introduction
G.M. Wijers
R.J. Liefers
O. Halfhide
Organizations today operate in interesting, fast-paced, but also uncertain times. A few examples are globalization, shifts within and between markets, horizontal and vertical integration of operations in value chains, declining customer loyalty and the changing nature of competition. There are countless variables in the continuously changing and shifting playing field. Organizations are constantly working on increasing internal efficiency, improving external effectiveness and lowering costs. The concept of sourcing plays an important role in the optimization, rationalization and innovation of value chains within and between organizations. In this context, we examine services, processes and business functions and focus on how to achieve sustainable competitive advantages. In addition, we examine whether there is added value in taking a different perspective concerning quality, time and costs, or in reorganizing or outsourcing part(s) of the value chain. Examples are concentration versus deconcentration, centralization versus decentralization, and insourcing versus outsourcing. The risks of outsourcing are diverse, but usually relate to overspending or even unpredictability of costs, service degradation, and loss of critical knowledge and expertise. Governance plays an important role in the repositioning or rearranging of activities within the value chain.
Figure 2. Changing the own (retained) IT organization is not dealt with sufficiently during the transition. (Survey items, with responses from advisors and service providers on a 0-100% scale: ensuring adequate involvement of the client business units in the transition; establishing an effective project oversight committee and joint governance framework; focusing on the process transformation required for the parties to work together effectively; ensuring checks and balances are in place to validate go-live readiness; validating the contract scope of services.)

suboptimal results, even in the most favorable situation. Companies that outsource IT have exactly the same problem ([Beul10]). The annual vendor performance study carried out by KPMG EquaTerra showed that 63 percent of the outsourcing IT organizations in the Netherlands characterized the quality of management as weak or average ([KPMG11]). One aspect of the 2010 Pulse study by KPMG EquaTerra ([KPMG10]) focused on the question of how organizations perform their sourcing transition. This study revealed that the realigning and redesigning of the retained IT organization, needed for an adequate connection between business demand and IT supply, often seems an undervalued element of a sourcing transition. The role of the business itself, as well as well-applied demand management and supply management concepts, is crucial. When organizations choose to outsource, the internal management of the organization is often not adequately co-developed and adjusted. The internal governance structures often remain unchanged: traditional, fragmented and not very goal-oriented. In an outsourcing situation, this hinders effective cooperation between parties in the demand and supply chain and leads to a partial or complete failure to meet sourcing and company objectives. The question arises as to what a modern governance
model should look like: one that can be used flexibly and is easy to adapt to any given outsourcing situation.
Figure 1. The quality of governance for IT organizations in the Netherlands in 2011 (excellent 4%, good 33%, average 47%, weak 16%).
ment of such supplier organizations to be successful and become an added-value activity. Also, creating well-organized connections between business demand and IT supply is deemed a value-adding activity and hence deserves to be placed in the Target Operating Model (see Figure 4).
Strategic assumptions

The first building block, demand management, is focused on the formulation of needs (the "what"). Demand management is customer-facing; it ensures that the demand is well defined and that the supply conforms to the demand. The value strategy is Customer Intimacy.

The second building block, supply management, is focused on obtaining the right services for these requirements (the "how"). Supply management is supplier-facing (internal/external) and ensures that the required services are provided. The value strategy is Operational Excellence.

Figure 4. Sourcing strategy as a factor influencing the Target Operating Model of the IT value chain (diagram elements: customers and suppliers, process design, organizational structure).
The third building block, delivery, is focused on the actual delivery of the service(s). This building block may be internal (within the organization) or external (a supplier). The delivery building block deals with the development of IT solutions (project oriented) and the management of the solutions (management oriented). Management includes IT infrastructure, application management and database management. Figure 5 illustrates this structure.
[Figure 5: business, demand management, supply management]
Case study
Figure 6. Demand and supply management units located in the IT value chain. (Diagram elements: CIO Office; information management per business unit; Application Development & Maintenance; Business Application Services with supply management and an application service provider; Infrastructure Services with supply management, an infrastructure service provider and external delivery.)

Across the chain of business demand management, supply management, and internal and/or external delivery, a clear demarcation of responsibilities becomes visible, such that everyone knows which part they should play in the demand-supply value chain.
In Figure 6, we show an example of the Target Operating Model for an IT organization that has a number of delivery units delivering business-unit-specific IT services.
The design is determined by, among other things: the products and services delivered by the organization, the complexity and specificity of the application landscape, and the chosen sourcing strategy.
The demand management (business information management) is organized within each business unit. The supply management is mainly grouped into realizing business applications on the one hand and providing infrastructure on the other.
o The business application teams are application-oriented and identifiably aligned with the business: each group focuses on an application landscape that supports either an enterprise domain or a business domain.
o The business application teams focus especially on supply management (management, specifications and testing) and manage the application suppliers.
There are three infrastructure service teams: the organization's own service desk, supply management for hosting and networking, and supply management for workplaces and telephony. The actual management of this infrastructure is outsourced.
The above building blocks can be used to derive strategic design possibilities for a future Operating Model. In our experience, the following options are possible:
1. demand management per business domain and/or generic use
2. supply management aligned to service/technology domain and/or generic use
3. internal delivery combined with supply management
4. combined demand management and supply management
5. strategic management processes per business domain and/or generic use (enterprise level)
Each option is briefly described below.
1. Demand management per business domain and/or generic use Demand management focuses on the customer. Organizations that have very dissimilar business units, each with their own strategy, will benefit from demand management that directly supports each business domain. Organizations that are focused on global and straightforward processes and products will benefit from a central/generic demand management organization. See Figure 7.
2. Supply management aligned to service/technology domain and/or generic use Supply management can govern many dissimilar services. The nature of these services can be so different that we need to make a distinction between the possible types of supply management. In practice, it is categorized into the following three IT service delivery domains: 1) development, 2) application, and 3) infrastructure management. See Figure 8.
3. Internal delivery combined with supply management When the provision of IT services is outsourced, it becomes necessary to identify the components of supply management. For internal delivery, it is somewhat less clear-cut. It is quite possible to combine the responsibilities for supply management and IT delivery. See Figure 9.
4. Combine demand management and supply management For specific business domains and application services, it can in some cases be advantageous to combine demand and supply management. For example, it is convenient in business domains that use their own specific application suite. See Figure 10.
5. Strategic management processes per business domain and/or generic use (enterprise level) Fulfilling this option is the result of identifying three levels of management which can lead to further refining of the Target Operating Model for IT organizations:
The strategic processes determine the path of the enterprise in the medium and long term and also define the scope. Here it is necessary to consider strategy and policy, compliance, portfolio management, architecture and the annual budgeting processes.
The tactical processes concern the acquiring, maintaining and allocating of assets (money, people, means of production and support services) so that business objectives can be met. This may include project portfolio management, financial management, contract management and so on.
The operational processes actually make use of the business assets for realizing the services, where one part can be performed by the organization itself and the other part by the supplier.
Strategy and policy, compliance, portfolio management, architecture and the annual budgeting cycle can be set up for each business domain or for the whole company (and sometimes at both levels). This choice is largely determined by the prevailing business governance. See Figure 11.
Conclusion

A well-designed Target Operating Model is essential for successful outsourcing. Organizations that rely heavily on IT with respect to their service or product cannot be effective when the IT value chain is not set up effectively. This is especially true with respect to governing demand and supply in IT outsourcing situations. When internal governance is not in order, it endangers the creation of added value, the achievement of the outsourcing objectives and the service performance itself. If the value chain and Target Operating Model are not clear, it is senseless to have discussions with suppliers about the effectiveness of the collaboration. Thus, it is extremely important to have an effective Target Operating Model for IT. This TOM must include a clear, results-oriented governance structure with corresponding processes, responsibilities, roles, jobs, competencies and an appropriate organizational sizing. The precise design of the TOM is determined, among other things, by the business strategy and the sourcing strategy. A good TOM ensures that the demand is driven by the business in consultation with the suppliers and safeguards the collaboration at all levels of the IT value chain. In a successful outsourcing engagement, the design of the demand-supply management structure is already under way when the outsourcing selection process starts. So when outsourcing must be managed (more) effectively, an important precondition is that sufficient attention is given to applying demand-supply concepts.

References

[Beul10] Erik Beulen et al., Managing IT Outsourcing, Routledge, 2010.
[KPMG10] KPMG EquaTerra Pulse study, Q3 2010.
[KPMG11] Dutch Strategic Outsourcing Study 2011, KPMG EquaTerra, 2011.
[Wije10] Gerard Wijers, Oscar Halfhide and Erik Cazemier, De regieorganisatie op maat (bespoke management), Outsource Magazine, June 2010.
A.G. Plugge
Introduction
In the past 15 years, globalization, deregulation, and consolidation have played a significant role in how companies develop a business strategy. IT strategy derived from the business strategy invariably raises the question: what activities should we perform ourselves and what should we outsource? Examples of these IT activities include IT infrastructure, business applications, communication networks, and so on. In recent decades, the number of companies choosing to outsource all or part of their IT
environment has increased significantly across the globe. In 2010, the IT outsourcing market was worth US$ 270 billion, with annual growth of between 7 and 10 percent ([IDC10]). Enterprises that outsource IT activities expect their service providers to deliver high-quality IT services that satisfy the agreed service level agreements. Factors that affect the quality of service include relationship building, contract management ([Beul11]), insight into hidden costs ([Bart01]) and change management ([Plug09]). The multidisciplinary nature of IT outsourcing leads to increased complexity for service providers, and this impacts the realization of consistent performance. Both scientific and market research ([Feen05]) have shown that many service providers are deficient in or incapable of providing consistent performance for their customers during the period of the outsourcing contract. By consistent performance we mean that the IT services delivered satisfy the agreed service level agreement. This article describes a work method for IT service providers that is based on adaptivity. First, the necessary background is covered to throw some light on providing consistent performance. This is followed by an elaboration of the adaptivity concept. Next, the work method to achieve consistent performance is explained, together with the corresponding measuring tools.
Background
IT service provider performance that does not meet the agreed requirements of the outsourcer often has a direct impact on the primary business processes. In practice, deficiencies in consistent performance give rise to onerous (financial) discussions between the outsourcer and the service provider that put extreme pressure on the relationship. There is good reason for the sharp increase in the number of outsourcing mediation cases over the last five years. Lowered service provider performance also leads to lowered customer satisfaction and an attendant lowering of the recommendation rating, that is, the degree to which an outsourcer recommends a service provider to other enterprises. Inconsistent performance is strongly related to the sourcing expertise (capabilities) of a provider and the manner in which that expertise is organized. The sourcing capabilities can be seen as the relationship between knowledge, experience, processes and procedures that support the development and delivery of IT services. This involves capacities that are both tangible (hardware, software) and intangible (attitude, behavior). Interestingly, enterprises that are
Relationship
Planning & Contracting Organizational design Governance Customer development
motivated to outsource make the assumption that providers actually have sufficient sourcing capabilities. When sourcing capabilities are further elaborated, it elicits the well known IT processes with relation to information management, service management and change management. A sourcing capability model is a convenient way of forming an impression of what sourcing capabilities are important ([Feen05]). The model (see Figure 1) describes a dozen capabilities divided into three areas of competency: Relationships, Delivery and Transformation. Providers must have a sufficiency in these sourcing capabilities to be capable of delivering quality IT services. The sourcing capabilities partially make use of IT processes. Thus, a relationship arises that is supported by the internal information services within the organization of the outsourcer. This affects the domain of IT auditing with regard to the specific monitoring of IT risks and management of IT processes. In addition, the question arises as to how these sourcing capabilities are organized within the organization. Is it clear where these capabilities are available in the organization and are these easy to gain access to? When sourcing capabilities must be made available internationally, it increases the complexity of organizing them. Moreover, dimensions that play an important role are decision making, hierarchy, communication, horizontal integration (specific or generic knowledge) and the degree of formalization. The developments on the customer side also appear to have an impact on the sourcing capabilities and organizational structure of providers ([Plug09]). Examples of changing customer circumstances include the changes in the sourcing strategy of the customer (from single sourcing to multivendor sourcing), the need for innovation and the need for flexible provision of manpower and resources. 
These business needs call for constant monitoring of providers and assessment of the impact on their own capabilities and organizational structure. Organizing IT services brings various orientations together including organizational structure, IT processes, competencies, HR, laws and regulations, and, of course, information technology. In a word, the delivery of IT services is multidisciplinary. Remarkably, many providers manage the changes only in specific knowledge areas, but not all areas as a whole. In fact, the different disciplines are interdependent and this complexity means that these can no longer be managed separately. This increasing complexity obligates service providers to pursue an interdisciplinary approach within the said orientations. Changes on the side of the outsourcer may mean that existing sourcing capabilities and organizational structures need to be adjusted. This demands a high degree of adaptability from the board and senior management of the providers. Given that many providers base the delivery of IT services on the value discipline model called operational excellence, adapting to the changes in the customers circumstances leads to internal conflicts. Offering tailor-made solutions
Leadership Business management Program management Behavior management Sourcing Process re-engineering Technology exploitation
Delivery
Domain expertise
Transformation
38
is always completely at odds with delivering IT services at the lowest possible cost. The key to resolving the conflict can be found in achieving a balance between sourcing capabilities and the manner in which these are organized. This balance will lead to the realization of consistent performance.
Adaptivity
The delivery of consistent IT performance requires the ability to adapt. Two factors play an important role here. The first is the willingness of providers to adapt themselves. It is not a given that this will occur automatically. Other influences within the organization can affect the willingness to adapt. Examples include a re-evaluation of the business strategy, shrinkage of market share, or loss of revenue in a specific market segment. After all, adaptation costs time and money. This also requires that management work to effect the changes and to ensure these are realized. In addition to willingness to change, the second significant factor in achieving consistent performance is the ability to actually implement these adaptations. An organization must have the right people and resources (processes, systems and tools) to be able to implement changes. The combination of willingness and ability to cope with change determines the degree of adaptivity.
Work method

A specific work method, the Provider Performance Approach, was developed to assist providers in the change process required to achieve consistent performance ([Plug11]). The provider audience can be divided into two subgroups: external service providers and shared service center organizations within an enterprise. Both situations involve the delivery of IT services to end users (customers). The work method (see Figure 2) is a phased design consisting of four phases.

[Figure 2. The Provider Performance Approach: a cyclical work method with four phases (1 Monitor, 2 Evaluate, 3 Improve, 4 Implement) in which mapped customer-side changes feed a transition and transformation plan]

The first phase focuses on monitoring and discussing customer developments in the relationship with the outsourcer. During the contract period of an outsourcing agreement, developments occur on the customer side that may affect the provider organization; examples are the decision to become active in other markets and adjustments to the portfolio. The regular monitoring and discussion of customer developments may seem trivial. In practice, however, many providers focus more on operational activities, such as resolving incidents and implementing changes in IT infrastructure and applications. This attitude detracts from the task of mapping developments that will occur in the medium and long term, which often leads to problems being recognized too late and to further delays in implementing necessary changes. This gives rise to additional costs in the long run when catching up on those changes.

During the second phase, the identified developments are assessed for their impact on sourcing capabilities and organizational structure. An impact analysis shows which specific sourcing capabilities must be boosted or built from scratch. Additionally, an assessment is made of whether the organizational structure oriented toward delivering IT services to the customer must be adjusted. The analysis also includes a substantive review of the agreed-upon service level agreements to determine whether, and if so which, changes should be made. The outcome of the analysis is then presented to the board or senior management. This step makes it possible to assess the impact of customer-side changes on the provider's own organization, so that decisions can be made within a much broader context. The third phase focuses on developing improvement initiatives. The impact analysis forms the basis for developing focused initiatives that strengthen the sourcing capabilities and safeguard any changes to the organizational structure. Discussing these improvement initiatives with the customer positively influences their perception of the provider; managing expectations helps restore the relationship between customer and provider and their trust in each other. The fourth and final phase is the implementation of the proposed improvement initiatives. Experience shows that people get caught up in day-to-day issues that regularly prevent improvement initiatives from being implemented. Supervision of the actual implementation of the changes is therefore very important, and setting up programs or projects is an effective way of ensuring improvements are realized. In particular, this is the responsibility of senior management within the provider organization. The adaptive ability of a provider begins with the willingness of management to deliver consistent IT performance. After improvements are implemented, the monitoring of changes begins again, which accounts for the cyclical design of the methodology.
Measuring tools
Each phase of the developed methodology is translated into appropriate methods. This provides a measuring tool that traverses the entire cycle of the methodology. In this manner, we work step by step toward the realization of consistent IT performance. The work method and associated measuring tool are used by different service providers, both national and international, and the material developed for this purpose is available in both Dutch and English. The methods for each phase, and experiences with them, are explained in turn.
Phase 1: Monitor
To gain insight into changing customer circumstances, the developments in each market segment are investigated. The reason is that developments in a specific market segment, say the retail sector, can have a particular effect on the sourcing capabilities and organizational structure of the provider, and thus also on their performance. The most significant developments in a market segment are added to a checklist, which is used for each customer. The accompanying questionnaire is completed by provider employees who are actively involved in IT outsourcing contracts. A distinction is made between three groups: relationships (sales), transformation and delivery. The reason is that employees in these different groups often have a different perspective on the themes below.

Customer developments (X̄ = average score, N = number of responses)

K01  The provider regularly monitors changing customer circumstances                           3,0   39
K02  The provider explicitly identifies customer changes                                       3,4   33
K03  Customer developments are assessed for the impact within the organization                 2,6   22
K04  Customer developments affect the sourcing capabilities                                    2,8   24
K05  Customer developments affect the organizational structure                                 3,2   28
K06  Customer developments affect performance                                                  2,3   32
K07  The business strategy is based on customer intimacy                                       3,1   34
K08  The customer sourcing strategy impacts sourcing capabilities                              3,5   38
K09  The customer requirement for innovative solutions impacts sourcing capabilities           3,0   33
K10  The customer requirement for flexible deployment of staff impacts sourcing capabilities   3,3   34
Phase 2: Evaluate
During the evaluation phase, the information collected in the previous phase is analyzed. The identified customer developments and the results of the questionnaire are evaluated with respect to their impact on the organization (impact analysis). This is both a qualitative and a quantitative evaluation. The results can be compared by selecting different target groups (sales, transition and delivery). Experience shows that this comparison provides surprising insights into how different groups look at current IT performance. Bottlenecks are identified based on this first analysis, and in-depth interviews are used to discover the reasons for these bottlenecks. The in-depth interviews should be conducted with participants working within the previously mentioned groups. Supplementary interviews are held with representatives of customers who receive services from the provider. This allows the outcome of the provider's impact analysis to be tested against the customer's perception of the IT services delivered.
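The group-wise comparison of questionnaire scores described above can be sketched in a few lines. This is a minimal illustration only: the question code, scores and group labels below are invented for the example and are not taken from the actual questionnaire data.

```python
from statistics import mean

# Hypothetical questionnaire responses as (group, question, score on a 1-5 scale).
# The groups follow the article's split: relationships (sales), transformation, delivery.
responses = [
    ("sales", "K01", 4), ("sales", "K01", 3), ("sales", "K06", 2),
    ("transformation", "K01", 3), ("transformation", "K06", 3),
    ("delivery", "K01", 2), ("delivery", "K06", 1), ("delivery", "K06", 2),
]

def scores_by_group(responses, question):
    """Average score per target group for one questionnaire item."""
    groups = {}
    for group, q, score in responses:
        if q == question:
            groups.setdefault(group, []).append(score)
    return {g: round(mean(scores), 1) for g, scores in groups.items()}

# Diverging averages between groups flag a bottleneck to probe in the interviews.
print(scores_by_group(responses, "K06"))
```

A large gap between, say, the sales view and the delivery view of the same item is exactly the kind of "surprising insight" the impact analysis is meant to surface.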
Phase 3: Improve

In the third phase, the bottlenecks identified in the impact analysis are translated into a number of improvement initiatives. This requires interaction with the most important stakeholders, including senior management, application experts and technical managers. To support this process, a workshop is developed in collaboration with the responsible stakeholders to discuss the outcome of the analysis. The bottlenecks are then reworked, over a few iterations, into improvement initiatives. For each improvement initiative, it is determined which specific activities must occur and who is responsible for them. Examples of improvement initiatives include the (re)design of IT governance processes, the (re)design of organizational structures, the strengthening of specific sourcing capabilities, and the development of an adaptation process.

Phase 4: Implement

In the last phase, the established improvement initiatives are actually implemented in the organization. An appropriate activity is sought for each type of improvement initiative, e.g. workshops, training and coaching. In particular, attention is given to the soft side of change. Experience shows that change invokes resistance, and this is certainly true for changes in sourcing capabilities and organizational structures. The delivery of IT services based on an outsourcing contract is pre-eminently a people business. It is crucial to include employees who are affected by the changes; indeed, employees are the key to realizing change. After a certain period, the outcome of the implementation is checked against the proposed improvement initiatives and, if necessary, adjustments are made. This allows improvements to be embedded within the organization. A case study follows that shows the work method and tools in action.

Case study

As part of the first step of the work method, a questionnaire was disseminated among employees of the provider actively involved in outsourcing contracts. Subsequently, in-depth interviews were held with employees actively involved in a specific customer relationship. The customer was an international insurance company with headquarters in the Netherlands and operating globally. In addition, the customer relationship was investigated over a five-year period with respect to the delivered performance. The analysis (Step 2) revealed four key events that had affected the performance of the provider. These events were related to: the transition phase, the transformation to an eService organization (online insurance), the safeguarding of IT continuity, and the need for more flexibility regarding the use of resources (FTE). The analysis showed that, on the provider side, two significant causes played a role in these events. The first was a lack of adequate sourcing capabilities (knowledge, skills, support processes) to deliver IT services adequately. The second was that the organizational structure was not aligned with the organization of the customer. During the third step of the work method, the events and bottlenecks were translated into a number of solutions. During the transition phase (first event), it appeared that the provider did not have the necessary sourcing knowledge and experience available to carry out or complete actions
in the appropriate manner. The transfer of people and resources (assets), the translation of the contract into workable procedures, and the redesign of the IT landscape required senior program managers and project team members with extensive knowledge and experience. These proved to be lacking in practice. The senior managers who replaced some team members brought more structure to the approach, and this led to increased performance. In the transformation toward an eService organization (second event), there was a need to start developing applications for projects quickly. The dilemma here was that the customer could not take advantage of developments (new eServices) quickly enough because the provider did not have sufficient IT resources. This resulted in long lead times and difficult discussions between customer and provider. To solve this problem, the provider developed a resource and capacity forecasting tool based on certain attributes (initial work activities, type of application, required skills) to estimate how many resources were needed. By incorporating experiences from completed application projects into the tool, its predictive capability could be increased substantially. This made it possible to enter changes during a project, such as enhancement work, directly into the tool and have it automatically adjust the planning and resource usage, resulting in a significant reduction in turnaround time when developing applications. A phenomenon that many providers, especially in India, have to deal with is frequent employee turnover (third event). The downside of low-cost development of IT services is that workers can rapidly develop their knowledge and experience and then change employer. This puts pressure on continuity in the delivery of IT services to the customer. The problem was solved by deploying so-called shadow resources.
By staffing up an extra 30% above the existing workforce in the onsite and onshore teams, it is possible to deal with the turnover. The extra employees fulfill tasks that broadly cover the customer-oriented activities, which provides better safeguards in terms of continuity. The customer's need for more flexibility (fourth event) was translated into a change in the functional organization. A model was developed that ensures the physical presence of provider employees at both the customer site and the onshore location in the Netherlands. Thirty provider employees are now permanently located at the customer site; this group is mainly involved with defining functionality (specifications) for applications. In addition, about 40 employees are present at the onshore location in the Netherlands, focused on translating new requirements for IT services into solutions and managing colleagues working in offshore locations. The ability to rapidly scale up resources with specific knowledge and experience was an important requirement.
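The resource and capacity forecasting described under the second event can be sketched roughly as follows. This is only an illustration under assumed numbers: the baseline effort figures, attribute names and smoothing factor are invented for the example, and the article does not describe the actual tool at this level of detail.

```python
# Sketch of a resource-forecasting tool: estimate required FTEs from project
# attributes, then refine the estimate with actuals from completed projects.

# Assumed baseline effort (FTEs) per attribute value -- illustrative only.
BASE_FTE = {"type": {"web": 3.0, "batch": 2.0}, "skills": {"java": 1.0, "cobol": 1.5}}

class ResourceForecaster:
    def __init__(self):
        self.correction = 1.0  # learned ratio of actual to estimated effort

    def estimate(self, project_type, skill):
        """Initial FTE estimate from project attributes, scaled by experience."""
        base = BASE_FTE["type"][project_type] + BASE_FTE["skills"][skill]
        return round(base * self.correction, 1)

    def record_actual(self, estimated, actual):
        """Feed back a completed project to improve future predictions."""
        # Simple exponential smoothing of the actual/estimate ratio.
        self.correction = 0.7 * self.correction + 0.3 * (actual / estimated)

f = ResourceForecaster()
first = f.estimate("web", "java")             # initial estimate with no history
f.record_actual(estimated=first, actual=5.0)  # the project needed more effort
second = f.estimate("web", "java")            # the estimate drifts upward
```

Feeding each completed project back into the model is what the article describes as incorporating experiences into the tool to increase its predictive capability; the smoothing scheme here is merely one plausible way to do that.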
The work method, as applied during these events, made a demonstrable contribution to improving provider performance. Another outcome was that customer satisfaction with the provider's performance increased significantly. Finally, it is worth mentioning that the proactive attitude of the provider resulted in gaining market share (additional assignments and activities) at the expense of a competitor. This shows that the adaptive ability of this provider played a crucial role in the customer awarding it additional projects.
Conclusion
Although much has been written about the importance of the adaptive ability of providers, experience in sourcing shows that this ability is regularly a fiction. Service providers must develop beyond their current practices so that they gain the ability to adapt to changing customer circumstances. This will put them in a much better position to actually safeguard the agreed-upon performance levels for IT services. Adaptivity particularly requires active management involvement. Given that IT outsourcing is multidisciplinary, managers of service providers must take an interdisciplinary perspective and act accordingly when adjusting sourcing capabilities and organizational structure. The work method and measuring tools developed for the Provider Performance Approach have provided demonstrable results for both national and international service providers. Deliberately monitoring and evaluating changing customer circumstances, and subsequently adjusting sourcing capabilities and organizational structure, increases the adaptive ability of service providers. Adaptivity is no longer a fiction but a fact!
References
[Bart01] J. Barthélemy, The hidden costs of IT outsourcing, Sloan Management Review, 42 (3), 2001, p. 60-69.
[Beul11] E. Beulen, P. Ribbers and J. Roos, Managing IT Outsourcing: Governance in Global Partnerships, 2nd edition, Routledge, London, 2011.
[Feen05] D. Feeny, M.C. Lacity and L.P. Willcocks, Taking the measure of outsourcing providers, Sloan Management Review, 46 (3), 2005, p. 41-48.
[IDC10] IDC's second quarterly revision for 2010, Western European IT services, IDC, 2010.
[Plug09] A.G. Plugge and M.F.W.H.A. Janssen, Managing change in IT outsourcing arrangements: an offshore service provider perspective on adaptability, Strategic Outsourcing, 2, 2009, p. 257-274.
[Plug11] A.G. Plugge, Managing change in IT outsourcing arrangements: Towards a dynamic fit model, dissertation, Delft University of Technology, Delft, 2011.
A closer look at transformations
Guido Dieperink and Jeroen Tegelaar Companies, now more than ever, are under continuous pressure to demonstrate improved performance. Government regulations, price pressure, lack of trust, critical shareholders and strategic objectives are increasingly triggering the need for structural changes in daily business operations. The transformation is born! This article is about transformation, its shapes and forms. We look at transformation through the eyes of midsize and large corporations, and analyze the journey these companies must go through to achieve the objectives of the transformation. If enterprises succeed in prevailing over the many bumps and setbacks on this journey, the transformation will lead to a new and stronger position in the market place in which they operate. In this article, we provide a definition of transformation and describe the various types that exist. We also outline which dimensions of the organization are impacted by the transformational change. Finally, we support our viewpoint with practical examples.
Transformation: a definition
Ask ten people to describe a business transformation and you will get ten different answers. Most describe it in terms of business-related change, but what it exactly encompasses is not clearly defined. It is not strange, then, that the term is so commonly misused, even though the need for transformation is greater now than it has been for years. Many examples of true business transformations can be given. In all cases, however, the relationship between change and transformation should be delineated. Let's first identify what cannot be classified as a transformation. A change arising bottom-up, initiated because the daily business is inefficient or needs to be improved, is not a transformation but rather an optimization. An implementation of an IT system is, in itself, not a business transformation because it only affects part of the organization and only yields a limited strategic advantage. In our opinion, a transformation should always be initiated upon a strategic (burning) platform. This means that a transformation will only be partially successful if implemented bottom-up or when relevant for only a part of the organization. We view the following definition as most suitable:
A large-scale business transformation is characterized by an intervention from senior management, driven by situational factors, technological or internal changes that impact all dimensions of the organization, with the long-term goal of increasing the performance of the entire company.
Dimensions of an organization
The most striking transformations often stretch over several decades, as was the case with IBM. The company originally focused on clocks and typewriters in the early twentieth century. As the years passed, it responded so well to the rise of the computer that it is still considered a leader in the domain of business machines. Nokia is another good example: formerly known for paper, rubber and cables, it gradually transformed itself into a mobile communications giant. These organizations are continuously adapting, driven by a keen vision and a clear business strategy, to (re-)create their future through structural transformation. If transformation can be characterized, as outlined above, as a (continuous) intervention by senior management, it is always strategically significant, because the performance of the company will progress to a higher level. Such interventions may be imposed by external causes such as a government mandate. A good example is the separation of a large international financial institution in Europe into a separate bank and an insurer: a self-initiated but EU-mandated strategic change through which the company repositioned itself in the market. This brings us to the question of what change actually is and, when serious change is involved, whether or not it qualifies as a transformation. In our opinion, a change can be characterized as a structural modification that cannot be reversed without cost. A change becomes a transformation when all dimensions of the business are impacted and the change has a significant strategic objective (see Figure 1).

[Figure 1. Change versus transformation: a change qualifies as a transformation as its strategic value and complexity increase and all dimensions of the organization are impacted]

A transformation is further characterized by a high level of ambition and a substantial gap that must be
bridged between the current and future business state. It represents a fundamental discontinuity in the current business operations. Although we are aware of some exaggeration, a business transformation can be seen as the mother of all projects. Companies that are involved in a business transformation consider this to have the highest priority and it is their main focus besides the normal going concern activities. The company has one key strategic theme to focus on and that is the transformation. In addition to the elements mentioned in the transformation definition and the characteristics above, there are some significant factors that determine how large and complex the transformation will be:
- Geographic: The more countries and time zones involved, the greater the complexity of the transformation in terms of requirements, communication and time differences.
- Scope: The number of business units and employees involved also influences the complexity.
- Stakeholders: The number of stakeholders, each bringing a particular interest to the table.
- Third parties: The number of external parties involved in the transformation, such as shareholders, product suppliers and consultants.
- Knowledge: The number of disciplines that must be mobilized to realize the change, and the experience needed for the change to take form.
- Culture: Differences in corporate culture(s) that affect the transformation and lead to additional challenges during implementation in terms of core values, cooperation and behavior.
- Duration: The duration of the transformation and the necessity to continuously reinvigorate those involved to stay committed.
- Technology: The number of technologies and innovations being implemented, which often causes unexpected problems and setbacks.

Complex transformation journeys are usually divided into phases or plateaus with distinct milestones or stages to achieve. This has the advantage that the complexity is divided into smaller, more manageable and logically related parts. The organization also sees clear results and benefits at the end of each plateau and can learn from these steps. Usually, the planning of plateaus starts with the relatively easier chunks of work and ends with the more complicated challenges.

Types of transformations

Every business transformation is unique, but has a generic aspect that is similar in most transformations. There are also specific aspects, mostly related to the domain of the transformation. We have identified four types of business transformations, each triggered by a different starting point.
1. The integration/separation transformation is addressed when a company merges with another company or is separated from another company, triggering a variety of implications for the company and its employees. The CEO is the key sponsor of this type of transformation.
2. An operations transformation involves a fundamental change of all core processes of the company and the resources deployed within them. For example, it might be an organizational transition from a supply-oriented to a customer-focused organization. The COO is the key sponsor.
3. A finance transformation involves substantial changes in the structure of the financial processes (from the source through to reporting), the organization, and the systems within the enterprise. The CFO acts as the main sponsor. The fundamental restructuring of financial processes has an impact on all business units supplying information, and thus reaches far beyond just the finance department.
4. An IT-enabled transformation is triggered by major investments in technology. Usually, a technologically driven transformation impacts different business processes and thus goes far beyond just the IT department. The sponsor is usually the CIO but, depending on the impact, may also be the CEO or CFO.
It should be clear that these types of transformations are very different from each other. Nevertheless, all of them can be described in a generic context in which corresponding organizational dimensions are affected by the transformation. Let us first examine the generic context and then move on to an example for each type of transformation.
[Figure 2. The four types of transformation around the company and their typical sponsors: integration/separation (CEO), operations transformation (COO), finance transformation (CFO) and IT-enabled transformation (CIO)]
[Figure 3. The dimensions of an organization: strategy, organization, people, business processes, applications, infrastructure, information, products and data]
Overcoming these hurdles is one of the key success factors. The complexity of the transformation lies in the interdependencies between the business dimensions, as shown in Figure 3. Each of these components will change individually and in combination with the other dimensions. As mentioned earlier, the complexity lies in the interdependencies and in the relationship with the going concern. In other words, implementing the change-world alongside the run-world is a challenging effort that requires a lot of management attention. Clear, consistent and frequent communication is essential and should also be considered a responsibility of management. In particular, management must communicate the why and how of the transformation, and must be able to clearly explain the strategy, the future situation and the roadmap. Regardless of the type of transformation and the domain, the generic part of the transformation involves changes across each of these dimensions. Figure 4 lists topics that can be relevant for each of them. The transformation is aimed at achieving the target operational situation for each of the dimensions in a coherent way, leading to higher levels of performance.
other companies. The executive board decided on the separation of the banking and insurance activities and began a transformation that had a huge impact on all its operations throughout the world including the operational, legal, technological, contractual and HR domains. The complexity of this separation can be illustrated by considering the scope of 60 countries and 150 business units and the time constraints under which this transformation had to take place (about two years). A multidisciplinary team was formed to accomplish this complex operation with the help of many stakeholders around the globe and guided by external advisors.
[Figure 4. Relevant change topics per dimension]
- Strategy: the new strategy sets the tone for the change journey and provides the framework within which it will be achieved.
- Organization: translating the strategy into a new business model results in a Target Operating Model (TOM); formation of the organization; core values.
- People: the impact on personnel in terms of required knowledge and skills; culture and competencies; customer focus; cooperation.
- Information: external and internal information requirements are defined in the context of the new strategy, laws and legislation; improved management reporting; KPI reporting and management.
- Business processes: structural change in the design of business processes aimed at improving efficiency, better quality and greater customer focus; lean process change; Straight Through Processing (STP); operational excellence.
- Products: the product range is revised, old products are made obsolete, and new ones are introduced.
- Applications: reduction in the complexity of the application landscape, replacement of legacy systems, and internal and external integration are commonly occurring themes in large-scale transformations; application rationalization; from custom-built to package-based solutions.
- Data: data storage, processing and quality are key issues that require attention during a transformation; standardization of data; data management; data quality and data cleaning.
- Infrastructure: replacement and rationalization of infrastructure and technologies follow from the IT strategy and direction; virtualization; rationalization.
Assessing transformation readiness is key. Failure to consider it thoroughly drastically reduces the chances of success. Experience shows that during the transition there is limited opportunity to put missing prerequisites in place; in most cases this leads, at the very least, to significant delays.
Compact_ 2012 0
Transformations
Conclusion
Not every change is a transformation. A transformation is characterized by its scope, complexity and strategic value, but above all by the fact that it impacts the entire company. Only then is a change also a transformation! Every transformation takes place in a context where the elements that are changing are similar, yet every transformation stands on its own. Fortunately, transformations can be classified into four types: the integration/separation transformation, the operational transformation, the financial transformation and, finally, the IT-enabled transformation.

Make explicit what the transformation is trying to achieve, and do so in such a way that everyone has a sufficiently clear picture of the future situation. Before starting a transformation, it is important to understand the complexity it entails and its impact on the various business units and components of the organization. Ensure that the prerequisites are properly addressed, so that the transformation has a higher chance of success. If the conclusion is that the prerequisites are insufficiently in place, putting them in place should become the highest priority before the transformation begins.

Transformations that are done right can bring major benefits to the organization: repositioning it in the marketplace, securing a solid future and market share, and sometimes even projecting the company to market leader and an example for others to follow.
- Is it clear who the key stakeholders are, and are they involved?
- Is the governance set up properly?
- Is it clear to everyone how the transformation will take place and what their role is?
- Are the expectations sufficiently clear in terms of results, finances and completion time?
- Does the organization have sufficient capacity to change, and does it have the relevant experience? To what extent is the organization mature and familiar enough with large-scale, complex change programs; in other words, does it have the right skills to complete the tasks? For an organization that is not sufficiently mature, it is advisable to seek external assistance on both the business side and the IT side.
- Is a mature architecture discipline being used to guide the change process and to manage the complexity and risks?
- Has a readiness assessment been conducted? Most transformations that fail only provide an after-the-fact evaluation from which lessons can be learned. Such evaluations are commendable, but it makes more sense to perform a readiness assessment beforehand to determine how ready the organization is to start the transformation.
G.H. Dieperink is a director with IT Advisory at KPMG and has extensive experience in leading complex transformations in the financial sector. As program manager, he has guided several major Dutch and international companies in transforming their businesses. He conducts program readiness scans for new programs and health-checks of existing programs mainly in the financial sector. J.A.C. Tegelaar is a senior manager at KPMG IT Advisory and has been involved in various roles in the implementation of IT enabled transformations in different sectors. He was a Risk and Data Security manager sharing responsibility for managing data-related risks for a bank. He is also a program manager at the forefront of digitization projects and change management.
IT a meaningful factor in evolving health care sector
C.G.R.M. Aldenhoven
is a director at KPMG IT Advisory.
aldenhoven.stan@kpmg.nl
J.C. de Boer
IT was first used in health care as a means to improve the management of the organization. The objective was to gain insight into production agreements, operating results, occupancy rates, sickness absence, cost trends, waiting-list data, and so on. IT used to play a less central role in the health care process itself. International research carried out by KPMG shows that the most successful and sustainable changes in health care arose from examining the health care process from the perspective of the patient. This perspective should also take center stage in the strategy of the health provider, and the organization should use it as a basis for formulating a vision for information services.

Health care institutions are information-processing organizations where it is essential that information services run smoothly. It is the task of the administrators to integrate the vision for information services fully into the overall strategy of the institution. IT is a determining factor in all domains, from health care innovation and collaboration with other health care providers to e-health and new construction. Providers would therefore do well to devote time to IT in board meetings and put it on the weekly agenda. Without a strategy in place, investing in IT means "doing things better" rather than "doing better things". In that case, IT is little more than the selection of vendor products, when it could be a resource that is strategically deployed to achieve objectives in terms of suitability and quality.
E-health will lead to dramatic changes in health care. A number of serious roadblocks must be overcome, however, before its benefits can be reaped on a large scale. Current legislation is unclear about physicians' liability when they are involved in e-health activities, and the current funding of health care constitutes a barrier to the development and use of e-health, which is inadequately stimulated at this time. Only cautious initiatives are being taken around the globe. At the local level, patients may have the option of using the Internet to consult with a health professional or make an appointment; at the national level, health care providers and patient organizations collaborate to stimulate the growth of communities for fellow sufferers. These are all good examples, but overall these developments are still taking too long. Recent research by KPMG in the Netherlands shows that online "health convenience services" have a direct positive effect on patient self-management. These services include online registration with a health care institution, entering case history, making appointments, ordering (repeat) prescriptions, and consulting with a health care provider. Government and relevant parties in the health care domain would be wise to focus initially on these seemingly simple e-health applications.
EHR as imperative

The Electronic Health care Record (EHR) can make all the difference for the information services of a health care institution. It establishes the degree of accessibility and interchangeability of medical information and the transparency for a hospital. The EHR is a crucial IT system in the development of the information strategy. Health care institutions will do well to differentiate between the EHR system and the Health care Information System (HIS) in the IT systems landscape. Currently, IT vendors consider these systems to be so interwoven that they are sold as a single package, presented as if the client will get the best of both worlds. Unfortunately, nothing could be further from the truth. There is indeed some overlap, but the EHR and HIS are substantially different systems.

The HIS must be considered together with the Enterprise Resource Planning (ERP) system; both are logistical support systems. The HIS focuses especially on the health care domain, while the ERP system focuses on general business activities. The HIS is the IT system that predominantly supports the health care logistics and administrative process: the efficient planning and logistics of (health care) resources, the registration of patient information, the processing of Diagnosis-Related Groups (DRGs), and the planning of admissions, surgeries and appointments. The ERP system supports the logistical and administrative back-office functions (finance and control, human resource management, purchasing and warehousing). These workflow-supporting systems focus primarily on operational excellence: they aim especially at gains in efficiency and less at gains in the quality of health care.

The EHR supports the creation of the digital medical file for the patient (including clinical documentation and medication data). It is a portal for the exchange of information between parties in the health care chain: patient, referring physician, pharmacy, and so on. An EHR allows an institution to focus predominantly on customer intimacy and less on efficiency. The EHR does little or nothing to reduce caregiver workload: more information must be recorded to provide for transparency and quality of care. Practice shows that the need for information in a digital world always increases, because data recorded digitally is much easier to exchange than data on paper.
Uniformity of language
IT a meaningful factor in evolving health care sector
Much more than is currently the case, HIS, EHR, ERP and departmental IT systems must become a smoothly running whole that can be browsed through. This increases transparency and patient safety. In the future, if an event such as a medical complication occurs, all relevant information will have to be available, from the patient's screening through to the maintenance history of the infusion pump used. This information can serve not only to record activities but also for accountability purposes, for example to trace the origin of a medical complication: the cause might be that the physical examination was not thorough enough, or that the infusion pump used was not connected by qualified staff or not properly maintained, and so on. Most institutions already have all of this information, but it is not in a standardized format or linked together in any way. Indeed, there is still no adequate technological solution for this matter: Service Oriented Architecture (SOA), a sort of multiple-socket software framework into which different systems can be plugged, is still under development. When integration of systems and devices is no longer an issue, institutions
will need protocols, standardization and uniformity far more than they do now. Standardization will be required beyond the internal operations of institutions, however. An unambiguous framework of concepts and definitions is indispensable for communication among all partners in the health care chain, just as it is for research and training. Uniformity of language is a prerequisite for the effective use of IT in health care and for the subsequent conversion from paper to digital.
Privacy discussion
Research by KPMG shows that many people are worried about privacy and security on the Internet and the risks linked to its use. The concern that confidentiality of data on the Internet is not guaranteed is growing. Conversely, the willingness to exchange confidential data on the Internet increases if there is some added value involved, and this is the case in the health care sector. The security of electronic patient data will always be an issue and can never be guaranteed 100%. The same is of course true for paper files, although data in the digital domain is more easily accessible than data on paper. It is important that technology be used optimally to protect the information, and the patient must be able to authorize who can access their information. But that is easier said than done.
C.G.R.M. Aldenhoven is a director at KPMG and is jointly responsible for Health IT within the KPMG practice. He manages the international IT consultancy practice of KPMG in health care and also advises clients on strategic and complex issues that impact both IT and the primary health care processes.
J.C. de Boer is a partner at KPMG international and is responsible for Health IT. In the Netherlands, he advises a large number of hospitals, health insurers and governments in the areas of strategic IT issues.
International players
There are few IT systems currently on the shelf that are adequately equipped to meet the innovation needs of health care. A comforting thought is that the needs of most health professionals still do not vary much from the functionality already offered, and vendors are indeed increasingly capitalizing on the changing needs. Institutions that are taking steps forward in IT must be careful to keep the door open for future developments. The health care market still operates from a "replace" perspective and hardly at all from a "change" perspective, which means there is more "following" than "innovating". When selecting vendors, health care institutions are well advised to evaluate whether the vendors also demonstrate commitment to innovation and attendant best practices.

The market for EHR systems is becoming more international. Why should we in the Netherlands know better and do things differently than in the U.S.? The HIS market is dominated by national and international players who know how to incorporate localization into their systems and who focus on timely compliance with changes in national legislation. The national or local health care market alone is too small: selecting IT solutions that only work locally rather than internationally can lead to isolation and the inability to capitalize on international developments and innovations. The future belongs to systems that are based on an international perspective on health care, that have an international market, and that unfailingly deal with national and international changes in a timely manner. This development makes extensive standardization inevitable.
Funding problem
Health care institutions will make considerable investments in IT in the coming years. Most health care providers are facing shrinking budgets, which means that institutions must seek innovative funding options. It is "hot" to invest in the health care sector, so there are opportunities. Solutions are conceivable where IT is no longer purchased but leased. There are already hospitals where a single vendor brings in all medical equipment and ensures that all of it is functional and up-to-date. It is always possible for such a hospital to keep pace with innovation at reasonable management costs. A future trend is that ownership of IT, just as with medical equipment, is not an imperative for a health care institution. It is the actual utilization of the equipment that sets the health provider apart. Considered from this perspective, it is clear that institutions will arise where the only assets are the employees themselves. It is possible that property, workplaces, medical equipment, and IT will all be leased for a fixed amount per month based on yield. In any case, a large part will be outsourced to other parties. A consequence will be that other parties will be more involved in the day-to-day operations.
This article on www.compact.nl is an adaptation of Chapter 2 of the Dutch book ICT in de zorg. Probaat middel, maar lees voor gebruik de bijsluiter! (Health IT. Good medicine, but read the instructions before use!). This book explores visions of IT developments in health care. It covers nine strategic themes, including the Electronic Health care Records (EHR), information security, Health 2.0, project realization and IT investment. The international version of this book will be published in the spring of 2012.