
Service-oriented architecture (SOA) is a software design and software architecture design pattern based on distinct pieces of software providing application functionality as services to other applications. This is known as service-orientation. It is independent of any vendor, product or technology.[1]

A service is a self-contained unit of functionality, such as retrieving an online bank statement.[2] Services can be combined by other software applications to provide the complete functionality of a large software application.[3] SOA makes it easy for computers connected over a network to cooperate. Every computer can run an arbitrary number of services, and each service is built in a way that ensures that it can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself.
Overview
Services are unassociated, loosely coupled units of functionality that are self-contained. Each
service implements at least one action, such as submitting an online application for an account,
retrieving an online bank statement or modifying an online booking or airline ticket order. Within
an SOA, services use defined protocols that describe how services pass and parse messages
using description metadata, which describes in sufficient detail not only the characteristics of
these services, but also the data that drives them. Programmers have made extensive use of
XML in SOA to structure data that they wrap in a nearly exhaustive description-container.
Analogously, the Web Services Description Language (WSDL) typically describes the services
themselves, while SOAP (originally Simple Object Access Protocol) describes the
communications protocols. SOA depends on data and services that are described by metadata
that should meet the following two criteria:
1. The metadata should be provided in a form that software systems can use to configure
dynamically by discovery and incorporation of defined services, and also to maintain
coherence and integrity. For example, metadata could be used by other applications, like
a catalogue, to perform auto discovery of services without modifying the functional
contract of a service.
2. The metadata should be provided in a form that system designers can understand and
manage with a reasonable expenditure of cost and effort.
The purpose of SOA is to allow users to combine fairly large chunks of functionality to
form ad hoc applications built almost entirely from existing software services. The larger the
chunks, the fewer the interfaces required to implement any given set of functionality; however,
very large chunks of functionality may not prove sufficiently granular for easy reuse. Each
interface brings with it some amount of processing overhead, so there is a performance
consideration in choosing the granularity of services.
SOA as an architecture relies on service-orientation as its fundamental design principle. If a service presents a simple interface that abstracts away its underlying complexity, then users can access independent services without knowledge of the service's platform implementation.[4]
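As an illustration of such a self-describing service, here is a minimal sketch in Java, assuming a JAX-WS runtime (bundled with Java SE 8); the service name, operation, and URL are invented for the example. The runtime generates the WSDL contract from the annotations, so consumers can discover the operation and its message formats.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A self-contained unit of functionality exposed as a service.
@WebService
public class BankStatementService {

    // One service operation: retrieving an online bank statement.
    @WebMethod
    public String getStatement(String accountId) {
        return "Statement for account " + accountId;
    }

    public static void main(String[] args) {
        // Publishing the endpoint makes the generated WSDL contract
        // available for discovery at http://localhost:8080/bank?wsdl
        Endpoint.publish("http://localhost:8080/bank", new BankStatementService());
    }
}
```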

SOA Framework
In a SOA-based solution, a holistic approach is taken to enable business objectives while building an enterprise-quality system. SOA architecture is viewed as five horizontal layers:
1. Consumer Interface Layer - These are the GUIs for end users, or apps accessing app/service interfaces.
2. Business Process Layer - These are choreographed services representing business use-cases in terms of applications.
3. Services - Services are consolidated for the whole enterprise in a service inventory.
4. Service Components - The components used to build the services, such as functional and technical libraries and technological interfaces.
5. Operational Systems - This layer contains the data models, enterprise data repository, technological platforms, etc.
Besides these five horizontal layers, SOA architecture has four vertical layers, each of which is applied to and supported by each horizontal layer:
1. Integration Layer - Each horizontal layer contributes to integration, starting with platform integration (protocol support), then data integration, service integration and application integration, leading to enterprise application integration supporting B2B and B2C effectively.
2. Quality of Service - Security, availability, performance, etc. constitute the quality of service, which is configured based on the required SLAs and OLAs. Each horizontal layer supports the required quality of service.
3. Informational - Each horizontal layer supports the provision of business information.
4. Governance - IT strategy is governed at each horizontal layer to achieve the required operating and capability model.
In the SOA approach, the vertical layers are regarded by some as architectural concepts that make SOA more sophisticated than older object-oriented or procedural design approaches.

Design concept
SOA is based on the concept of a service. Depending on the service design approach taken,
each SOA service is designed to perform one or more activities by implementing one or more
service operations. As a result, each service is built as a discrete piece of code. This makes it
possible to reuse the code in different ways throughout the application by changing only the way
an individual service interoperates with other services that make up the application, versus
making code changes to the service itself. SOA design principles are used during
software development and integration.
SOA generally provides a way for consumers of services, such as web-based applications, to be
aware of available SOA-based services. For example, several disparate departments within a
company may develop and deploy SOA services in different implementation languages; their
respective clients will benefit from a well-defined interface to access them.
SOA defines how to integrate widely disparate applications for a Web-based environment and
uses multiple implementation platforms. Rather than defining an API, SOA defines the interface
in terms of protocols and functionality. An endpoint is the entry point for such a SOA
implementation.
Service-orientation requires loose coupling of services with operating systems and other technologies that underlie applications. SOA separates functions into distinct units, or services,[5] which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services.[6]

For some, SOA can be seen as part of a continuum from older concepts of distributed computing[5][7] and modular programming, through SOA, and on to current practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).[8]

Principles
There are no industry standards relating to the exact composition of a service-oriented architecture, although many industry sources have published their own principles. Some of the published principles[9][10][11] include the following:
Standardized service contract: Services adhere to a communications agreement, as defined
collectively by one or more service-description documents.
Service loose coupling: Services maintain a relationship that minimizes dependencies and
only requires that they maintain an awareness of each other.
Service abstraction: Beyond descriptions in the service contract, services hide logic from the
outside world.
Service reusability: Logic is divided into services with the intention of promoting reuse.
Service autonomy: Services have control over the logic they encapsulate, from a Design-
time and a Run-time perspective.
Service statelessness: Services minimize resource consumption by deferring the management of state information where necessary.
Service discoverability: Services are supplemented with communicative metadata by which
they can be effectively discovered and interpreted.
Service composability: Services are effective composition participants, regardless of the size
and complexity of the composition.
Service granularity: A design consideration to provide optimal scope and the right granular level of business functionality in a service operation.
Service normalization: Services are decomposed and/or consolidated to a level of normal form to minimize redundancy. In some cases, services are denormalized for specific purposes, such as performance optimization, access, and aggregation.[12]

Service optimization: All else equal, high-quality services are generally preferable to low-
quality ones.
Service relevance: Functionality is presented at a granularity recognized by the user as a
meaningful service.
Service encapsulation: Many services are consolidated for use under the SOA. Often such
services were not planned to be under SOA.
Service location transparency: This refers to the ability of a service consumer to invoke a service regardless of its actual location in the network. This also recognizes the discoverability property (one of the core principles of SOA) and the right of a consumer to access the service. Often, the idea of service virtualization also relates to location transparency. This is where the consumer simply calls a logical service while a suitable SOA-enabling runtime infrastructure component, commonly a service bus, maps this logical service call to a physical service.
Service architecture
This is the physical design of an individual service that encompasses all the resources used by a service. This would normally include databases, software components, legacy systems, identity stores,[13] XML schemas and any backing stores, e.g. shared directories. It is also beneficial to include any service agents[14] employed by the service, as any change in these service agents would affect the message processing capabilities of the service.
The standardized service contract design principle keeps service contracts independent from
their implementation. The service contract needs to be documented to formalize the required
processing resources by the individual service capabilities. Although it is beneficial to document
details about the service architecture, the service abstraction design principle dictates that any
internal details about the service are invisible to its consumers so that they do not develop any
unstated couplings. The service architecture serves as a point of reference for evolving the
service or gauging the impact of any change in the service.
Service composition architecture
One of the core characteristics of services developed using the service-orientation design
paradigm is that they are composition-centric. Services with this characteristic can potentially
address novel requirements by recomposing the same services in different configurations.
Service composition architecture is itself a composition of the individual architectures of the
participating services. In the light of the Service Abstraction principle, this type of architecture
only documents the service contract and any published service-level agreement (SLA); internal
details of each service are not included.
If a service composition is a part of another (parent) composition, the parent composition can
also be referenced in the child service composition. The design of service composition also
includes any alternate paths, such as error conditions, which may introduce new services into the
current service composition.
Service inventory architecture
A service inventory is composed of services that automate business processes. It is important to
account for the combined processing requirements of all services within the service inventory.
Documenting the requirements of services, independently from the business processes that they
automate, helps identify processing bottlenecks. The service inventory architecture is
documented from the service inventory blueprint, so that service candidates[15] can be redesigned before their implementation.
Service-oriented enterprise architecture
This umbrella architecture incorporates service, composition, and inventory architectures, plus
any enterprise-wide technological resources accessed by these architectures e.g. an ERP
system. This can be further supplemented by including enterprise-wide standards that apply to
the aforementioned architecture types. Any segments of the enterprise that are not service-oriented can also be documented in order to consider transformation requirements if a service needs to communicate with the business processes automated by such segments. SOA's main goal is to deliver agility to business.
Web services approach
Web services can implement a service-oriented architecture. They make functional building-
blocks accessible over standard Internet protocols independent of platforms and programming
languages. These services can represent either new applications or just wrappers around
existing legacy systems to make them network-enabled.
Each SOA building block can play one or both of two roles:
1. Service provider: The service provider creates a web service and possibly publishes its
interface and access information to the service registry. Each provider must decide
which services to expose, how to make trade-offs between security and easy availability,
how to price the services, or (if no charges apply) how/whether to exploit them for other
value. The provider also has to decide what category the service should be listed in for a
given broker service and what sort of trading partner agreements are required to use the
service. The service broker (registry) registers what services are available within it, and lists all the potential service
recipients. The implementer of the broker then decides the scope of the broker. Public
brokers are available through the Internet, while private brokers are only accessible to a
limited audience, for example, users of a company intranet. Furthermore, the amount of
the offered information has to be decided. Some brokers specialize in many listings.
Others offer high levels of trust in the listed services. Some cover a broad landscape of
services and others focus within an industry. Some brokers catalog other brokers.
Depending on the business model, brokers can attempt to maximize look-up requests,
number of listings or accuracy of the listings. The Universal Description Discovery and
Integration (UDDI) specification defines a way to publish and discover information about
Web services. Other service broker technologies include (for
example) ebXML (Electronic Business using eXtensible Markup Language) and those
based on the ISO/IEC 11179 Metadata Registry (MDR) standard.
2. Service consumer: The service consumer or web service client locates entries in the broker registry using various find operations and then binds to the service provider in order to invoke one of its web services. Consumers find whichever service they need in the broker registry, bind to the respective service, and then use it. They can access multiple services if the provider exposes multiple services.
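A minimal consumer-side sketch of this find-and-bind step, again assuming a JAX-WS runtime; in a full SOA the WSDL location would come from a registry lookup (e.g. UDDI) rather than being hard-coded, and the URL and qualified names here are illustrative.

```java
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class StatementClient {
    public static void main(String[] args) throws Exception {
        // Locate the service contract; a registry-based SOA would
        // discover this URL instead of hard-coding it.
        URL wsdl = new URL("http://localhost:8080/bank?wsdl");
        QName serviceName =
                new QName("http://example.com/bank", "BankStatementService");

        // Bind: obtain a proxy (port) that forwards calls to the provider
        // over SOAP. BankStatement is the service endpoint interface,
        // e.g. generated from the WSDL by the wsimport tool.
        Service service = Service.create(wsdl, serviceName);
        BankStatement port = service.getPort(BankStatement.class);
        System.out.println(port.getStatement("12345"));
    }
}
```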
Web service protocols
See also: List of web service protocols
Implementers commonly build SOAs using web services standards (for example, SOAP) that have gained broad industry acceptance after recommendation of Version 1.2 from the W3C[16] (World Wide Web Consortium) in 2003. These standards (also referred to as web service specifications) also provide greater interoperability and some protection from lock-in to proprietary vendor software. One can, however, implement SOA using any service-based technology, such as Jini, CORBA or REST.
Other SOA concepts
Architectures can operate independently of specific technologies and can therefore be
implemented using a wide range of technologies, including:
SOAP, RPC
REST
DCOM
CORBA
Web services
DDS
Java RMI
WCF (Microsoft's implementation of web services now forms a part of WCF)
Apache Thrift
SORCER
Implementations can use one or more of these protocols and, for example, might use a file-
system mechanism to communicate data conforming to a defined interface specification between
processes conforming to the SOA concept. The key is independent services with defined
interfaces that can be called to perform their tasks in a standard way, without a service having
foreknowledge of the calling application, and without the application having or needing
knowledge of how the service actually performs its tasks.
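For instance, a consumer of a REST-style service needs only the defined interface (a URL and a media type), not any knowledge of the implementation behind it. Below is a minimal sketch using Java's built-in HTTP client (Java 11+); the endpoint URL is hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestConsumer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The consumer calls the defined interface in a standard way,
        // with no foreknowledge of how the service performs its task.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/accounts/12345/statement"))
                .header("Accept", "application/json")
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```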

[Figure: Elements of SOA, by Dirk Krafzig, Karl Banke, and Dirk Slama[17]]
[Figure: SOA meta-model, The Linthicum Group, 2007]
[Figure: Service-Oriented Modeling Framework (SOMF) Version 2.0]
SOA enables the development of applications that are built by combining loosely coupled and interoperable services.[18]

These services inter-operate based on a formal definition (or contract, e.g., WSDL) that is
independent of the underlying platform and programming language. The interface definition hides
the implementation of the language-specific service. SOA-based systems can therefore function
independently of development technologies and platforms (such as Java, .NET, etc.). Services
written in C# running on .NET platforms and services written in Java running on Java
EE platforms, for example, can both be consumed by a common composite application (or
client). Applications running on either platform can also consume services running on the other
as web services that facilitate reuse. Managed environments can also wrap COBOL legacy
systems and present them as software services. This has extended the useful life of many core
legacy systems indefinitely, no matter what language they originally used.
SOA can support integration and consolidation activities within complex enterprise systems, but
SOA does not specify or provide a methodology or framework for documenting capabilities or
services.
We can distinguish the Service Object-Oriented Architecture (SOOA), where service providers are network (call/response) objects accepting remote invocations, from the Service Protocol Oriented Architecture (SPOA), where a communication (read/write) protocol is fixed and known beforehand by the provider and requestor. Based on that protocol and a service description obtained from the service registry, the requestor can bind to the service provider by creating its own proxy used for remote communication over the fixed protocol. If a service provider registers its service description by name, the requestors have to know the name of the service beforehand. In SOOA, a proxy (an object implementing the same service interfaces as its service provider) is registered with the registries and is always ready for use by requestors. Thus, in SOOA, the service provider owns and publishes the proxy as the active surrogate object with a codebase annotation, e.g., URLs to the code defining proxy behavior (Jini ERI). In SPOA, by contrast, a passive service description is registered (e.g., an XML document in WSDL for Web services, or an interface description in IDL for CORBA); the requestor then has to generate the proxy (a stub forwarding calls to a provider) based on the service description and the fixed communication protocol (e.g., SOAP in Web services, IIOP in CORBA). This is referred to as a bind operation. The proxy binding operation is not required in SOOA, since the requestor holds the active surrogate object obtained via the registry. The surrogate object is already bound to the provider that registered it, with its appropriate network configuration and its code annotations. Web services, OGSA, RMI, and CORBA services cannot change the communication protocol between requestors and providers, while the SOOA approach is protocol neutral.[19]
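A toy sketch of the SOOA proxy idea using Java's dynamic proxies: the requestor holds an active surrogate that implements the service interface. The transport is omitted (a real surrogate would carry its own remote-communication code), so the in-process "invocation" and all names are illustrative.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// The service interface shared by provider and requestor.
interface Echo {
    String echo(String message);
}

public class SurrogateDemo {
    public static void main(String[] args) {
        // In SOOA the provider would publish this surrogate to a registry;
        // here it is constructed in-process. The handler stands in for the
        // proxy's own communication logic over whatever protocol it embeds.
        InvocationHandler handler = (proxy, method, arguments) ->
                "echo: " + arguments[0];

        Echo surrogate = (Echo) Proxy.newProxyInstance(
                Echo.class.getClassLoader(), new Class<?>[] {Echo.class}, handler);

        // The requestor uses the surrogate directly; no bind step is needed.
        System.out.println(surrogate.echo("hello"));
    }
}
```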

High-level languages such as BPEL and specifications such as WS-CDL and WS-Coordination extend the service concept by providing a method of defining and supporting orchestration of fine-grained services into more coarse-grained business services, which architects can in turn incorporate into workflows and business processes implemented in composite applications or portals.
Service-oriented modeling[5] is a SOA framework that identifies the various disciplines that guide SOA practitioners to conceptualize, analyze, design, and architect their service-oriented assets. The Service-oriented modeling framework (SOMF) offers a modeling language and a work structure or "map" depicting the various components that contribute to a successful service-oriented modeling approach. It illustrates the major elements that identify the "what to do" aspects of a service development scheme. The model enables practitioners to craft a project plan and to identify the milestones of a service-oriented initiative. SOMF also provides a common modeling notation to address alignment between business and IT organizations.
Definitions
The OASIS group[20] and the Open Group[21] have both created formal definitions. OASIS defines SOA as the following:
A paradigm for organizing and utilizing distributed capabilities that may be under the control of
different ownership domains. It provides a uniform means to offer, discover, interact with and use
capabilities to produce desired effects consistent with measurable preconditions and
expectations.
Organizational benefits
Some enterprise architects believe that SOA can help businesses respond more quickly and more cost-effectively to changing market conditions.[22] This style of architecture promotes reuse at the macro (service) level rather than the micro (class) level. It can also simplify interconnection to, and usage of, existing IT (legacy) assets.
With SOA, the idea is that an organization can look at a problem holistically. A business has more overall control. Theoretically there would not be a mass of developers using whatever tool sets might please them; rather, they would code to a standard set within the business. They can also develop enterprise-wide SOA that encapsulates a business-oriented infrastructure. SOA has also been illustrated as a highway system providing efficiency for car drivers: if everyone had a car but there were no highways anywhere, any attempt to get anywhere quickly or efficiently would be limited and disorganized. IBM Vice President of Web Services Michael Liebow says that SOA "builds highways".[23]

In some respects, SOA could be regarded as an architectural evolution rather than a revolution. It captures many of the best practices of previous software architectures. In communications systems, for example, little development of solutions that use truly static bindings to talk to other equipment in the network has taken place. By formally embracing a SOA approach, such systems can position themselves to stress the importance of well-defined, highly interoperable interfaces.[24]

Some have questioned whether SOA simply revives concepts like modular programming (1970s), event-oriented design (1980s), or interface/component-based design (1990s).
SOA promotes the goal of separating users (consumers) from the service implementations. Services can therefore be run on various distributed platforms and be accessed across networks. This can also maximize reuse of services.
A service comprises a stand-alone unit of functionality available only via a formally defined
interface. Services can be some kind of "nano-enterprises" that are easy to produce and
improve. Also services can be "mega-corporations" constructed as the coordinated work of
subordinate services.
Services generally adhere to the following principles of service-orientation:[25]

Abstraction
Autonomy
Composability
Discoverability
Formal contract
Loose coupling
Reusability
Statelessness[26]

A mature rollout of SOA effectively defines the API of an organization.
Reasons for treating the implementation of services as separate projects from larger projects
include:
1. Separation promotes the concept to the business that services can be delivered quickly and independently from the larger and slower-moving projects common in the organization. The business starts understanding systems and simplified user interfaces calling on services. This advocates agility; that is to say, it fosters business innovations and speeds up time-to-market.[27]

2. Separation promotes the decoupling of services from consuming projects. This
encourages good design insofar as the service is designed without knowing who its
consumers are.
3. Documentation and test artifacts of the service are not embedded within the detail of the
larger project. This is important when the service needs to be reused later.
An indirect benefit of SOA involves dramatically simplified testing. Services are autonomous,
stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the
implementation.
If an organization possesses appropriately defined test data, then a corresponding stub is built
that reacts to the test data when a service is being built. A full set of regression tests, scripts,
data, and responses is also captured for the service. The service can be tested as a 'black box'
using existing stubs corresponding to the services it calls. Test environments can be constructed
where the primitive and out-of-scope services are stubs, while the remainder of the mesh is test
deployments of full services. As each interface is fully documented with its own full set of
regression test documentation, it becomes simple to identify problems in test services. Testing
evolves to merely validate that the test service operates according to its documentation, and
finds gaps in documentation and test cases of all services within the environment. Managing the
data state of idempotent services is the only complexity.
Examples may prove useful to aid in documenting a service to the level where it becomes useful. The documentation of some APIs within the Java Community Process provides good examples. As these are exhaustive, staff would typically use only important subsets. The 'ossjsa.pdf' file within JSR-89 exemplifies such a file.[28]
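A minimal sketch of the stub idea described above, in plain Java with invented service names: the service under test is exercised as a black box while the out-of-scope service it calls is replaced by a stub that reacts to known test data.

```java
// Interface of a downstream service that the service under test calls.
interface CurrencyService {
    double rate(String from, String to);
}

// The service under test, composed from the downstream service.
class PricingService {
    private final CurrencyService currency;

    PricingService(CurrencyService currency) {
        this.currency = currency;
    }

    double priceIn(String targetCurrency, double amountUsd) {
        return amountUsd * currency.rate("USD", targetCurrency);
    }
}

public class PricingServiceTest {
    public static void main(String[] args) {
        // Stub: answers the test data with a canned response instead of
        // invoking the real, out-of-scope currency service.
        CurrencyService stub = (from, to) -> 0.5;

        PricingService service = new PricingService(stub);
        double result = service.priceIn("EUR", 10.0);

        // Black-box check against the documented contract.
        if (result != 5.0) {
            throw new AssertionError("expected 5.0 but got " + result);
        }
        System.out.println("PricingService behaves per its documentation");
    }
}
```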

Challenges
One obvious and common challenge involves managing services metadata. SOA-based environments can include many services that exchange messages to perform tasks. Depending on the design, a single application may generate millions of messages. Managing and providing information on how services interact can become complex. This becomes even more complicated when these services are delivered by different organizations within the company, or even by different companies (partners, suppliers, etc.). This creates significant trust issues across teams; hence SOA governance comes into the picture.
Another challenge involves the lack of testing in the SOA space. There are no sophisticated tools that provide testability of all headless services (including message and database services along with web services) in a typical architecture. Lack of horizontal trust requires that both producers and consumers test services on a continuous basis. Since SOA's main goal is to deliver agility to businesses, it is important to invest in a testing framework (built or bought) that provides the visibility required to find the culprit in the architecture. Business agility requires SOA services to be controlled by the business goals and directives as defined in the Business Motivation Model (BMM).[29]

Another challenge relates to providing appropriate levels of security. Security models built into an application may no longer suffice when an application exposes its capabilities as services that can be used by other applications. That is, application-managed security is not the right model for securing services. A number of new technologies and standards have started to emerge that provide more appropriate models for security in SOA.
Finally, the impact of changing a service that touches multiple business domains will require a higher level of change-management governance.
As practitioners of SOA and the WS-* specifications expand, update and refine their output, they encounter a shortage of skilled people to work on SOA-based systems, including the integration of services and the construction of services infrastructure.
Interoperability is an important aspect of SOA implementations. The WS-I organization has developed the Basic Profile (BP) and Basic Security Profile (BSP) to enforce compatibility,[30] and has designed testing tools to help assess whether web services conform to WS-I profile guidelines. Additionally, another charter has been established to work on the Reliable Secure Profile.
Significant vendor hype surrounds SOA, which can create exaggerated expectations. Product stacks continue to evolve as early adopters test the development and runtime products with real-world problems. SOA does not guarantee reduced IT costs, improved systems agility or shorter time to market. Successful SOA implementations may realize some or all of these benefits depending on the quality and relevance of the system architecture and design.[31][32]

Internal IT delivery organizations routinely initiate SOA efforts, and some do a poor job of introducing SOA concepts to a business, with the result that SOA remains misunderstood within that business. The adoption of SOA then starts to meet IT delivery needs instead of those of the business, resulting in an organization with, for example, superlative laptop provisioning services instead of one that can quickly respond to market opportunities. Business leadership also frequently becomes convinced that the organization is executing well on SOA.
One of the most important benefits of SOA is its ease of reuse. Therefore accountability and funding models must ultimately evolve within the organization. A business unit needs to be encouraged to create services that other units will use. Conversely, units must be encouraged to reuse services. This requires a few new governance components:
Each business unit creating services must have an appropriate support structure in place to deliver on its service-level obligations, and to support enhancing existing services strictly for the benefit of others. This is typically quite foreign to business leaders.
Each business unit consuming services accepts the apparent risk of reusing services outside its own control, with the attendant external project dependencies, etc.
An innovative funding model is needed as an incentive to drive these behaviors. Business units normally pay the IT organization to assist during projects and then to operate the environment. Corporate incentives should discount these costs to service providers and create internal revenue streams from consuming business units to the service provider. These streams should be less than the costs of a consumer simply building it the old-fashioned way. This is where SOA deployments can benefit from the SaaS monetization architecture.[33]

Criticisms
Some criticisms of SOA depend on conflating SOA with Web services.[34] For example, some critics claim SOA results in the addition of XML layers, introducing XML parsing and composition. In the absence of native or binary forms of remote procedure call (RPC), applications could run more slowly and require more processing power, increasing costs. Most implementations do incur these overheads, but SOA can be implemented using technologies (for example, Java Business Integration (JBI), Windows Communication Foundation (WCF) and Data Distribution Service (DDS)) that do not depend on remote procedure calls or translation through XML. At the same time, emerging open-source XML parsing technologies (such as VTD-XML) and various XML-compatible binary formats promise to significantly improve SOA performance.[35][36][37]

Stateful services require both the consumer and the provider to share the same consumer-specific context, which is either included in or referenced by messages exchanged between the provider and the consumer. This constraint has the drawback that it could reduce the overall scalability of the service provider if the provider needs to retain the shared context for each consumer. It also increases the coupling between a service provider and a consumer and makes switching service providers more difficult.[38] Ultimately, some critics feel that SOA services are still too constrained by the applications they represent.[39]

Another concern relates to the ongoing evolution of WS-* standards and products (e.g., transaction, security); SOA can thus introduce new risks unless properly managed and estimated with additional budget and contingency for proof-of-concept work. There has even been an attempt to parody the complexity and sometimes-oversold benefits of SOA, in the form of a 'SOA Facts' site that mimics the 'Chuck Norris Facts' meme.
Some critics regard SOA as merely an obvious evolution of currently well-deployed architectures (open interfaces, etc.).
IT system designs sometimes overlook the desirability of modifying systems readily. Many systems, including SOA-based systems, hard-code the operations, goods and services of the organization, thus restricting their online service and business agility in the global marketplace.

The next step in the design process covers the definition of a service delivery platform (SDP) and
its implementation. In the SDP design phase one defines the business information models,
identity management, products, content, devices, and the end-user service characteristics, as
well as how agile the system is so that it can deal with the evolution of the business and its
customers.
SOA Manifesto
In October 2009, at the 2nd International SOA Symposium, a mixed group of 17 independent SOA practitioners and vendors, the "SOA Manifesto Working Group", announced the publication of the SOA Manifesto.[40] The SOA Manifesto is a set of objectives and guiding principles that aim to provide a clear understanding and vision of SOA and service-orientation. Its purpose is to rescue the SOA concept from excessive use of the term by the vendor community and from "a seemingly endless proliferation of misinformation and confusion".
The manifesto provides a broad definition of SOA, the values it represents for the signatories and
some guiding principles. The manifesto prioritizes:
Business value over technical strategy
Strategic goals over project-specific benefits
Intrinsic interoperability over custom integration
Shared services over specific-purpose implementations
Flexibility over optimization
Evolutionary refinement over pursuit of initial perfection
As of September 2010, the SOA Manifesto had been signed by more than 700 signatories and had been translated into nine languages.
Extensions
SOA, Web 2.0, services over the messenger, and mashups
Web 2.0, a perceived "second generation" of web activity, primarily features the ability of visitors to contribute information for collaboration and sharing. Web 2.0 applications often use RESTful web APIs and commonly feature AJAX-based user interfaces, utilizing web syndication, blogs, and wikis. While there are no set standards for Web 2.0, it is characterized by building on the existing Web server architecture and using services. Web 2.0 can therefore be regarded as displaying some SOA characteristics.[41][42][43]

Some commentators also regard mashups as Web 2.0 applications. The term "business mashups" describes web applications that combine content from more than one source into an integrated user experience that shares many of the characteristics of service-oriented business applications (SOBAs). SOBAs are applications composed of services in a declarative manner. There is ongoing debate about "the collision of Web 2.0, mashups, and SOA," with some stating that Web 2.0 applications are a realization of SOA composite and business applications.[44]

Web 2.0
Tim O'Reilly coined the term "Web 2.0" to describe a perceived, quickly growing set of web-based applications.[45] A topic that has experienced extensive coverage involves the relationship between Web 2.0 and service-oriented architectures (SOAs).
SOA is the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. The notions of complexity-hiding and reuse, as well as the concept of loosely coupling services, have inspired researchers to elaborate on similarities between the two philosophies, SOA and Web 2.0, and their respective applications. Some argue Web 2.0 and SOA have significantly different elements and thus cannot be regarded as parallel philosophies, whereas others consider the two concepts complementary and regard Web 2.0 as the global SOA.[42]

The philosophies of Web 2.0 and SOA serve different user needs and thus expose differences with respect to the design and also the technologies used in real-world applications. However, as of 2008, use-cases demonstrated the potential of combining technologies and principles of both Web 2.0 and SOA.[42]

In an "Internet of Services", all people, machines, and goods will have access via the network
infrastructure of tomorrow.
[citation needed]
The Internet will thus offer services for all areas of life and
business, such as virtual insurance, online banking and music, and so on. Those services will
require a complex services infrastructure including service-delivery platforms bringing together
demand and supply. Building blocks for the Internet of Services include SOA, Web 2.0 and
semantics on the technology side; as well as novel business models, and approaches to
systematic and community-based innovation.
[46]





Cloud computing is internet-based computing in which large groups of remote servers are networked to allow sharing of data-processing tasks, centralized data storage, and online access to computer services or resources. Clouds can be classified as public, private or hybrid.[1]

Overview
Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network.[1] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
Cloud computing, or in simpler shorthand just "the cloud", also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated per demand. This can work well for allocating resources to users: for example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America's business hours with a different application (e.g., a web server). This approach maximizes the use of computing power and also reduces environmental impact, since less power, air conditioning, rack space, etc. are required for a given variety of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.
The term "moving to cloud" also refers to an organization moving away from a
traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to
theOPEX model (use a shared cloud infrastructure and pay as one uses it).
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs and focus on projects that differentiate their businesses instead of on infrastructure.[2] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[2][3][4] Cloud providers typically use a "pay as you go" model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.[5]

The present availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, have led to a growth in cloud computing.[6][7][8] Cloud vendors are experiencing growth rates of 50% per annum.[9]

History
Origin of the term

The origin of the term cloud computing is unclear. The expression "cloud" is commonly used in science to describe a large agglomeration of objects that visually appear from a distance as a cloud, and describes any set of things whose details are not inspected further in a given context. By analogy to this usage, the word cloud was used as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. With this simplification, the implication is that the specifics of how the end points of a network are connected are not relevant for the purposes of understanding the diagram. The cloud symbol was used to represent the Internet as early as 1994,[10][11] in which servers were then shown connected to, but external to, the cloud. References to cloud computing in its modern sense can be found as early as 1996, with the earliest known mention found in a Compaq internal document.[12]

The popularization of the term can be traced to 2006, when Amazon.com introduced the Elastic Compute Cloud.[13]

The 1950s
The underlying concept of cloud computing dates to the 1950s, when large-scale mainframe
computers were seen as the future of computing, and became available in academia and
corporations, accessible via thin clients/terminal computers, often referred to as "static
terminals", because they were used for communications but had no internal processing
capacities. To make more efficient use of costly mainframes, a practice evolved that allowed
multiple users to share both the physical access to the computer from multiple terminals as well
as the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater
return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing.[14] During the mid-1970s, time-sharing was popularly known as RJE (Remote Job Entry); this nomenclature was mostly associated with large vendors such as IBM and DEC.
The 1990s
In the 1990s, telecommunications companies, who previously offered primarily dedicated point-
to-point data circuits, began offering virtual private network (VPN) services with comparable
quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use,
they could use overall network bandwidth more effectively. They began to use the cloud symbol
to denote the demarcation point between what the provider was responsible for and what users
were responsible for. Cloud computing extends this boundary to cover all servers as well as the network infrastructure.[15]

As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing, experimenting with algorithms to provide optimal use of the infrastructure, platform and applications while prioritizing CPU time and efficiency for the end users.[16]

Since 2000
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In the same period, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[17] In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment.[18] By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them"[19] and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."[20]

In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project intended to help organizations offer cloud-computing services running on standard hardware. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform.[21]

On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.[22] Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.
On June 7, 2012, Oracle announced the Oracle Cloud.[23] While aspects of the Oracle Cloud are still in development, this cloud offering is poised to be the first to provide users with access to an integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and Infrastructure (IaaS) layers.[24][25][26]

Similar concepts
Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without needing deep knowledge about or expertise with each of them. The cloud aims to cut costs, and helps users focus on their core business instead of being impeded by IT obstacles.[27]

The main enabling technology for cloud computing is virtualization. Virtualization software allows a physical computing device to be electronically separated into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system-level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human error.[27]

Users routinely face difficult business problems. Cloud computing adopts concepts from service-oriented architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.
Cloud computing also leverages concepts from utility computing in order to provide metrics for
the services used. Such metrics are at the core of the public cloud pay-per-use models. In
addition, measured services are an essential part of the feedback loop in autonomic computing,
allowing services to scale on-demand and to perform automatic failure recovery.
Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data- and compute-intensive parallel applications at much more affordable prices compared to traditional parallel computing techniques.[28]

Grid computing: "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
Mainframe computer: Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.
Utility computing: The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[29][30]
Peer-to-peer: A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).
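A toy sketch of the metering idea behind utility computing, with rates and usage figures invented purely for illustration:

```java
public class UsageMeter {
    // Illustrative rates: $0.05 per CPU-hour, $0.02 per GB-month.
    private static final double CPU_HOUR_RATE = 0.05;
    private static final double GB_MONTH_RATE = 0.02;

    private double cpuHours;
    private double gbMonths;

    public void recordCompute(double hours)    { cpuHours += hours; }
    public void recordStorage(double gbMonths) { this.gbMonths += gbMonths; }

    // A metered service bills only for what was actually consumed,
    // like a traditional public utility such as electricity.
    public double amountDue() {
        return cpuHours * CPU_HOUR_RATE + gbMonths * GB_MONTH_RATE;
    }

    public static void main(String[] args) {
        UsageMeter meter = new UsageMeter();
        meter.recordCompute(120);  // 120 CPU-hours this billing period
        meter.recordStorage(50);   // 50 GB stored for one month
        System.out.printf("Amount due: $%.2f%n", meter.amountDue());
    }
}
```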
Characteristics
Cloud computing exhibits the following key characteristics:
Agility improves with users' ability to re-provision technological infrastructure resources.
Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
Cost reductions are claimed by cloud providers. A public-cloud delivery model converts capital expenditure to operational expenditure.[31] This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer in-house IT skills are required for implementation.[32] The e-FISCAL project's state-of-the-art repository[33] contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
Device and location independence[34] enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.[32]

Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
peak-load capacity increases (users need not engineer for highest possible load levels)
utilisation and efficiency improvements for systems that are often only 10-20% utilised.[35][36]
Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.[32][37][38]

Productivity may be increased when multiple users can work on the same data simultaneously, rather than waiting for it to be saved and emailed. Time may be saved as information does not need to be re-entered when fields are matched, nor do users need to install application software upgrades on their computers.[39]
Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.[40]
Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time[41][42] (note that VM startup time varies by VM type, location, OS and cloud provider[41]), without users having to engineer for peak loads.[43][44][45]
Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels.[46] Security is often as good as or better than in other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle.[47] However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
The National Institute of Standards and Technology's definition of cloud computing identifies "five
essential characteristics":
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.
Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging
a metering capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled,
and reported, providing transparency for both the provider and consumer of the utilized service.
National Institute of Standards and Technology[1]

Service models
Cloud computing providers offer their services according to several fundamental models:[1][48]
Infrastructure as a service (IaaS)
See also: Category:Cloud infrastructure and Software-defined application delivery
In the most basic cloud-service model, and according to the IETF (Internet Engineering Task Force), providers of IaaS offer computers, physical or (more often) virtual machines, and other resources. (A hypervisor, such as Xen, Oracle VirtualBox, KVM, VMware ESX/ESXi, or Hyper-V, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[49] IaaS-cloud providers supply these resources on demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.[50][51][52]

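To make the self-service, pay-for-what-you-use character of IaaS concrete, here is a minimal sketch of provisioning and releasing a virtual machine through a provider API, using the AWS SDK for Python (boto3); the region, image ID, and instance type are illustrative placeholders, and configured credentials are assumed:

    # A sketch of IaaS-style on-demand provisioning (assumes AWS
    # credentials are configured and boto3 is installed).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask the provider's pool for one virtual machine.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine-image ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Provisioned:", instance_id)

    # Releasing the resource ends utility-style billing for it.
    ec2.terminate_instances(InstanceIds=[instance_id])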
Platform as a service (PaaS)
Main article: Platform as a service
See also: Category:Cloud platforms
In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, such as Microsoft Azure and Google App Engine, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. An architecture aiming to facilitate real-time applications in cloud environments has also been proposed along these lines.[53]
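To illustrate the division of labor in PaaS, the following minimal sketch assumes a platform such as the Google App Engine Python runtime: the developer supplies only application code, while the platform supplies the operating system, web server, and automatic scaling:

    # main.py: a minimal PaaS-hosted application. Note what is absent:
    # no server provisioning, no load-balancer setup; the platform
    # handles both and scales instances with request volume.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Served by a platform-managed, auto-scaled instance."

    # For local testing only; in production the platform runs the app
    # behind its own HTTP front end.
    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=8080)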
Software as a service (SaaS)
Main article: Software as a service
In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis; SaaS providers generally price applications using a subscription fee.
In the SaaS model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run time to meet changing work demand.[54] Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
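A minimal sketch of the multitenancy idea, with all names invented for illustration: one shared application instance partitions data by a tenant key, so a single machine can safely serve several customer organizations:

    # One application instance, several tenants, data isolated per tenant.
    from dataclasses import dataclass, field

    @dataclass
    class TenantStore:
        rows: dict = field(default_factory=dict)  # tenant_id -> {key: value}

        def put(self, tenant_id: str, key: str, value: str) -> None:
            self.rows.setdefault(tenant_id, {})[key] = value

        def get(self, tenant_id: str, key: str) -> str:
            # A tenant can only ever reach its own partition.
            return self.rows[tenant_id][key]

    store = TenantStore()
    store.put("acme", "plan", "enterprise")
    store.put("globex", "plan", "starter")
    print(store.get("acme", "plan"))  # -> enterprise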
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[55] so price is scalable and adjustable if users are added or removed at any point.[56]
Proponents claim SaaS allows a business the potential to reduce IT operational costs by
outsourcing hardware and software maintenance and support to the cloud provider. This enables
the business to reallocate IT operations costs away from hardware/software spending and
personnel expenses, towards meeting other goals. In addition, with applications hosted centrally,
updates can be released without the need for users to install new software. One drawback of
SaaS is that the users' data are stored on the cloud provider's server. As a result, there could be
unauthorized access to the data. For this reason, users are increasingly adopting intelligent third-
party key management systems to help secure their data.
Unified Communications as a Service (UCaaS)
Main article: Unified communications
In the UCaaS model, multi-platform communications over the network are packaged by the service provider. The services can span different devices, such as computers and mobile devices, and may include IP telephony, unified messaging, video conferencing, mobile extension,[57] and more.
Cloud clients
See also: Category:Cloud clients
Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices (cloud clients) rely on cloud computing for all or a majority of their applications, so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these web user interfaces can achieve a similar, or even better, look and feel than native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin-client computing) are delivered via a screen-sharing technology.
Deployment models
[Figure: Cloud computing types]
Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally.[1] Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.[58] Self-run data centers[59] are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management,[60] essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".[61][62]
Public cloud
A cloud is called a "public cloud" when the services are rendered over a network that is open for public use. Public cloud services may be free or offered on a pay-per-usage model.[63] Technically there may be little or no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure at their data centers, and access is generally via the Internet. AWS and Microsoft also offer direct-connect services, called "AWS Direct Connect" and "Azure ExpressRoute" respectively; such connections require customers to purchase or lease a private connection to a peering point offered by the cloud provider.[32]
Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources.[1]
Gartner, Inc. defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers.[64] A hybrid cloud service crosses isolation and provider boundaries so that it can't simply be put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service.[65] This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services.
Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that cannot be met by the private cloud.[66] This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.[1] Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed.[67] Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demand.[68]
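A minimal sketch of that bursting decision, with the capacity figure and routing logic purely illustrative: steady load stays on the private infrastructure, and only the excess spills, pay-per-use, to a public cloud:

    # Illustrative cloud-bursting routing; the threshold is hypothetical.
    PRIVATE_CAPACITY = 100  # requests/sec the in-house cluster can absorb

    def route_load(current_load: int) -> dict:
        burst = max(0, current_load - PRIVATE_CAPACITY)
        return {
            "private_share": min(current_load, PRIVATE_CAPACITY),
            "public_burst": burst,  # billed only while burst > 0
        }

    print(route_load(80))   # fits in-house: {'private_share': 80, 'public_burst': 0}
    print(route_load(140))  # spike: {'private_share': 100, 'public_burst': 40}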
Others
Community cloud
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party, and either hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing is realized.[1]
Distributed cloud
Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to a single network or hub service. Examples include distributed computing platforms such as BOINC and Folding@home. An interesting attempt in this direction is Cloud@Home, which aims to implement a cloud computing provisioning model on top of voluntarily shared resources.[69]
Intercloud
Main article: Intercloud
The Intercloud[70] is an interconnected global "cloud of clouds"[71][72] and an extension of the Internet's "network of networks" on which it is based. The focus is on direct interoperability between public cloud service providers, more so than between providers and consumers (as is the case for hybrid cloud and multicloud).[73][74][75]
Multicloud
Main article: Multicloud
Multicloud is the use of multiple cloud computing services in a single heterogeneous architecture, in order to reduce reliance on any single vendor, increase flexibility through choice, mitigate disasters, etc. It differs from hybrid cloud in that it refers to multiple cloud services rather than multiple deployment modes (public, private, legacy).[76][77]
Architecture
[Figure: Cloud computing sample architecture]
Cloud architecture,[78] the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
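As a minimal sketch of such loose coupling, the following uses Python's standard library queue as a stand-in for a cloud messaging service; producer and consumer never call each other directly, so either side can be scaled or replaced independently:

    # Components exchange messages instead of making direct calls.
    import queue
    import threading

    bus = queue.Queue()

    def frontend():
        # Publishes work without knowing which component will process it.
        bus.put({"task": "resize-image", "object": "photo-42.png"})

    def worker():
        msg = bus.get()  # consumed whenever the worker has capacity
        print("processing:", msg["task"], msg["object"])
        bus.task_done()

    threading.Thread(target=worker).start()
    frontend()
    bus.join()  # wait until the queued message has been handled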
Cloud engineering
Cloud engineering is the application of engineering disciplines to cloud computing. It brings a
systematic approach to the high-level concerns of commercialization, standardization, and
governance in conceiving, developing, operating and maintaining cloud computing systems. It is
a multidisciplinary method encompassing contributions from diverse areas such
as systems, software, web, performance, information, security, platform, risk,
and quality engineering.
Security, privacy and trust
Across all forms of deployment, architecture and service models, the basic concept of cloud computing remains the abstraction of computation over the hardware and resources used. Considering the various groups of stakeholders, a three-tier setup can be identified: (i) users of virtual services, (ii) tenants who provide services, and (iii) providers who provide the infrastructure. The key to gaining trust in cloud computing is assuring the user or tenant that security is applied and maintained in all components contributing to their virtual service. Assuring security properties of the overall system to the service user at the top level, while taking into account the abstraction of computation over hardware between all the levels, makes assurance of security properties for a virtual service a very complex task. As privacy must not be compromised by revealing too much information between layers, some inter-level/stakeholder interfaces are required. A list of existing approaches can be found in the literature.[79]
The future
According to Gartner's hype cycle, cloud computing has reached a maturity that leads it into a productive phase. This means that most of the main issues with cloud computing have been addressed to a degree that makes clouds interesting for full commercial exploitation. This does not mean, however, that all the problems listed above have actually been solved, only that the corresponding risks can be tolerated to a certain degree. Cloud computing is therefore still as much a research topic as it is a market offering.[80]
Software as a service (SaaS; pronounced /sæs/ or /sɑːs/[1]) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted.[2][3] It is sometimes referred to as "on-demand software".[4] SaaS is typically accessed by users using a thin client via a web browser. SaaS has become a common delivery model for many business applications, including office and messaging software, DBMS software, management software, CAD software, development software, gamification, virtualization,[4] accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, human resource management (HRM), content management (CM) and service desk management.[5] SaaS has been incorporated into the strategy of all leading enterprise software companies.[6] One of the biggest selling points for these companies is the potential to reduce IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.[7]
According to a Gartner Group estimate,[8] SaaS sales in 2010 reached $10 billion and were projected to increase to $12.1bn in 2011, up 20.7% from 2010. Gartner Group estimates that SaaS revenue will be more than double its 2010 numbers by 2015, reaching a projected $21.3bn. Customer relationship management (CRM) continues to be the largest market for SaaS. SaaS revenue within the CRM market was forecast to reach $3.8bn in 2011, up from $3.2bn in 2010.[9]
The term "software as a service" (SaaS) is considered to be part of the nomenclature of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), backend as a service (BaaS), and information technology management as a service (ITMaaS).[10]
Contents
1 History
2 Distribution
3 Pricing
4 Architecture
5 Characteristics
o 5.1 Configuration and customization
o 5.2 Accelerated feature delivery
o 5.3 Open integration protocols
o 5.4 Collaborative (and "social") functionality
6 Adoption drivers
7 Adoption challenges
8 Emerging trends
9 Data escrow
10 Criticism
11 See also
12 References
History
Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.
The expansion of the Internet during the 1990s brought about a new class of centralized computing, called application service providers (ASPs). ASPs provided businesses with the service of hosting and managing specialized business applications, with the goal of reducing costs through central administration and through the solution provider's specialization in a particular business application. Two of the world's pioneering and largest ASPs were USI, which was headquartered in the Washington, D.C. area, and Futurelink Corporation, headquartered in Orange County, California.
Software as a service essentially extends the idea of the ASP model. The term Software as a
Service (SaaS), however, is commonly used in more specific settings:
Whereas most initial ASPs focused on managing and hosting third-party independent
software vendors' software, as of 2012 SaaS vendors typically develop and manage their
own software.
Whereas many initial ASPs offered more traditional client-server applications, which require
installation of software on users' personal computers, SaaS solutions of today rely
predominantly on the Web and only require an internet browser to use.
Whereas the software architecture used by most initial ASPs mandated maintaining a
separate instance of the application for each business, as of 2012 SaaS solutions normally
utilize a multi-tenant architecture, in which the application serves multiple businesses and
users, and partitions its data accordingly.
The SaaS acronym allegedly first appeared in an article called "Strategic Backgrounder: Software As A Service", internally published in February 2001 by the Software & Information Industry Association's (SIIA) eBusiness Division.[11]
DBaaS (database as a service) has emerged as a sub-variety of SaaS.[12]
Distribution
The cloud (or SaaS) model has no physical need for indirect distribution, since it is not distributed physically and is deployed almost instantaneously. The first wave of SaaS companies built their own economic model without including partner remuneration in their pricing structure (except when there were certain existing affiliations). It has not been easy for traditional software publishers to enter the SaaS model, firstly because the SaaS model does not bring them the same income structure, and secondly because continuing to work with a distribution network was decreasing their profit margins and was damaging to the competitiveness of their product pricing. Today a landscape is taking shape with SaaS and managed-service players who combine the indirect sales model with their own existing business model, and those who seek to redefine their role within the 3.0 IT economy.[13]
Pricing
Unlike traditional software, which is conventionally sold as a perpetual license with an up-front cost (and an optional ongoing support fee), SaaS providers generally price applications using a subscription fee, most commonly a monthly or an annual fee.[14] Consequently, the initial setup cost for SaaS is typically lower than for equivalent enterprise software. SaaS vendors typically price their applications based on some usage parameters, such as the number of users using the application. However, because in a SaaS environment customers' data reside with the SaaS vendor, opportunities also exist to charge per transaction, event, or other unit of value.[15]
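A worked example of subscription pricing on a usage parameter, with figures that are purely hypothetical: under per-user, per-month pricing the bill tracks the number of provisioned seats directly.

    # Illustrative per-seat subscription arithmetic (not real prices).
    MONTHLY_FEE_PER_USER = 12.00

    def monthly_bill(active_users: int) -> float:
        return active_users * MONTHLY_FEE_PER_USER

    print(monthly_bill(25))  # 25 seats -> 300.0 per month
    print(monthly_bill(30))  # adding 5 seats -> 360.0 per month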
The relatively low cost of user provisioning (i.e., setting up a new customer) in a multi-tenant environment enables some SaaS vendors to offer applications using the freemium model.[15] In this model, a free service is made available with limited functionality or scope, and fees are charged for enhanced functionality or larger scope.[15] Some other SaaS applications are completely free to users, with revenue being derived from alternate sources such as advertising.
A key driver of SaaS growth is SaaS vendors' ability to provide a price that is competitive with on-premises software. This is consistent with the traditional rationale for outsourcing IT systems, which involves applying economies of scale to application operation, i.e., an outside service provider may be able to offer better, cheaper, more reliable applications.
Architecture
The vast majority of SaaS solutions are based on a multi-tenant architecture. With this model, a single version of the application, with a single configuration (hardware, network, operating system), is used for all customers ("tenants"). To support scalability, the application is installed on multiple machines (called horizontal scaling). In some cases, a second version of the application is set up to offer a select group of customers access to pre-release versions of the application (e.g., a beta version) for testing purposes. This is contrasted with traditional software, where multiple physical copies of the software, each potentially of a different version, with a potentially different configuration, and often customized, are installed across various customer sites.
While an exception rather than the norm, some SaaS solutions do not use multi-tenancy, or use other mechanisms, such as virtualization, to cost-effectively manage a large number of customers in place of multi-tenancy.[16] Whether multi-tenancy is a necessary component of software as a service is a topic of controversy.[17]
Characteristics
While not all software-as-a-service applications share all traits, the characteristics below are
common among many SaaS applications:
Configuration and customization
SaaS applications similarly support what is traditionally known as application customization. In other words, like traditional enterprise software, a single customer can alter the set of configuration options (a.k.a. parameters) that affect its functionality and look and feel. Each customer may have its own settings (or: parameter values) for the configuration options. The application can be customized to the degree it was designed for, based on a set of predefined configuration options.
For example, to support customers' common need to change an application's look and feel so that the application appears to carry the customer's brand (or, if so desired, to be co-branded), many SaaS applications let customers provide (through a self-service interface or by working with application-provider staff) a custom logo and sometimes a set of custom colors. The customer cannot, however, change the page layout unless such an option was designed for.
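A minimal sketch of this constraint, with option names invented for illustration: each tenant may set values only for the options the vendor designed in, and anything outside that set is rejected:

    # Only vendor-designed options are configurable per tenant.
    DESIGNED_OPTIONS = {"logo_url", "primary_color", "locale"}

    tenant_settings = {}  # tenant -> {option: value}

    def configure(tenant, option, value):
        if option not in DESIGNED_OPTIONS:
            # e.g. "page_layout" fails: not designed to be customizable
            raise ValueError("option not supported: " + option)
        tenant_settings.setdefault(tenant, {})[option] = value

    configure("acme", "primary_color", "#004488")
    configure("acme", "logo_url", "https://example.com/acme.png")
    print(tenant_settings["acme"])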
Accelerated feature delivery
SaaS applications are often updated more frequently than traditional software,[18] in many cases on a weekly or monthly basis. This is enabled by several factors:
The application is hosted centrally, so an update is decided and executed by the provider, not by customers.
The application has only a single configuration, making development testing faster.
The application vendor has access to all customer data, expediting design and regression testing.
The solution provider has access to user behavior within the application (usually via web analytics), making it easier to identify areas worthy of improvement.
Accelerated feature delivery is further enabled by agile software development methodologies.[19] Such methodologies, which evolved in the mid-1990s, provide a set of software development tools and practices to support frequent software releases.
Open integration protocols
Since SaaS applications cannot access a company's internal systems (databases or internal
services), they predominantly offer integration protocols and application programming
interfaces (APIs) that operate over a wide area network. Typically, these are protocols based
on HTTP, REST, SOAP and JSON.
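A minimal sketch of such an integration over HTTP with JSON, using the Python requests library; the endpoint, resource, and token are hypothetical:

    # Calling a SaaS application's REST API across the wide area network.
    import requests

    BASE_URL = "https://api.example-saas.com/v1"   # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

    resp = requests.get(BASE_URL + "/customers", headers=HEADERS, timeout=10)
    resp.raise_for_status()       # surface HTTP errors explicitly
    for customer in resp.json():  # JSON payload decoded to Python objects
        print(customer["id"], customer.get("name"))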
The ubiquity of SaaS applications and other Internet services and the standardization of their API
technology has spawned development of mashups, which are lightweight applications that
combine data, presentation and functionality from multiple services, creating a compound
service. Mashups further differentiate SaaS applications from on-premises software as the latter
cannot be easily integrated outside a company's firewall.
Collaborative (and "social") functionality
Inspired by the success of online social networks and other so-called web 2.0 functionality, many SaaS applications offer features that let users collaborate and share information.
For example, many project management applications delivered in the SaaS model offer, in addition to traditional project planning functionality, collaboration features letting users comment on tasks and plans and share documents within and outside an organization. Several other SaaS applications let users vote on and offer new feature ideas.
While some collaboration-related functionality is also integrated into on-premises software, (implicit or explicit) collaboration between users or different customers is only possible with centrally hosted software.
Adoption drivers
Several important changes to the software market and technology landscape have facilitated acceptance and growth of SaaS solutions:
The growing use of web-based user interfaces by applications, along with the proliferation of associated practices (e.g., web design), continuously decreased the need for traditional client-server applications. Consequently, traditional software vendors' investment in software based on fat clients has become a disadvantage (mandating ongoing support), opening the door for new software vendors offering a user experience perceived as more "modern".
The standardization of web page technologies (HTML, JavaScript, CSS), the increasing popularity of web development as a practice, and the introduction and ubiquity of web application frameworks like Ruby on Rails or languages like PHP gradually reduced the cost of developing new SaaS solutions, and enabled new solution providers to come up with competitive solutions, challenging traditional vendors.
The increasing penetration of broadband Internet access enabled remote, centrally hosted applications to offer speed comparable to on-premises software.
The standardization of the HTTPS protocol as part of the web stack provided universally available lightweight security that is sufficient for most everyday applications.
The introduction and wide acceptance of lightweight integration protocols such as REST and SOAP enabled affordable integration between SaaS applications (residing in the cloud) and internal applications over wide area networks, and with other SaaS applications.
Adoption challenges
Some limitations slow down the acceptance of SaaS and prohibit it from being used in some cases:
Since data are being stored on the vendor's servers, data security becomes an issue.[20]
SaaS applications are hosted in the cloud, far away from the application users. This introduces latency into the environment; so, for example, the SaaS model is not suitable for applications that demand response times in the milliseconds.
Multi-tenant architectures, which drive cost efficiency for SaaS solution providers, limit customization of applications for large clients, inhibiting such applications from being used in scenarios (applicable mostly to large enterprises) for which such customization is necessary.
Some business applications require access to or integration with a customer's current data. When such data are large in volume or sensitive (e.g., end users' personal information), integrating them with remotely hosted software can be costly or risky, or can conflict with data governance regulations.
Constitutional search/seizure warrant laws do not protect all forms of SaaS dynamically stored data. The end result is that a link is added to the chain of security where access to the data, and, by extension, misuse of these data, are limited only by the assumed honesty of third parties or government agencies able to access the data on their own recognizance.[21][22][23][24]
Switching SaaS vendors may involve the slow and difficult task of transferring very large data files over the Internet.
Organizations that adopt SaaS may find they are forced into adopting new versions, which might result in unforeseen training costs or an increase in the probability that a user might make an error.
Relying on an Internet connection means that data are transferred to and from a SaaS firm at Internet speeds, rather than the potentially higher speeds of a firm's internal network.[25]
The standard model also has limitations:
Compatibility with hardware, other software, and operating systems.[26]
Licensing and compliance problems (unauthorized copies of the software program putting the organization at risk of fines or litigation).
Maintenance, support, and patch revision processes.
Emerging trends
As a result of widespread fragmentation in the SaaS provider space, there is an emerging trend towards the development of SaaS integration platforms (SIPs).[27] These SIPs allow subscribers to access multiple SaaS applications through a common platform. They also offer new application developers an opportunity to quickly develop and deploy new applications. This trend is being referred to as the "third wave" in software adoption, where SaaS moves beyond standalone applications to become a comprehensive platform. Zoho and Sutisoft are two companies that offer comprehensive SIPs today. Several other industry players, including Salesforce, Microsoft, and Oracle, are aggressively developing similar integration platforms.
Data escrow
Software as a service data escrow is the process of keeping a copy of critical software-as-a-service application data with an independent third party. Similar to source code escrow, where critical software source code is stored with an independent third party, SaaS data escrow applies the same logic to the data within a SaaS application. It allows companies to protect and insure all the data that reside within SaaS applications, protecting against data loss.[28]
There are many and varied reasons for considering SaaS data escrow, including concerns about vendor bankruptcy, unplanned service outages, and potential data loss or corruption. Many businesses are also keen to ensure that they are complying with their own data governance standards, or want improved reporting and business analytics against their SaaS data. Research conducted by Clearpace Software Ltd. into the growth of SaaS showed that 85 percent of the participants wanted to take a copy of their SaaS data. A third of these participants wanted a copy on a daily basis.[29]
Criticism
One notable criticism of SaaS comes from Richard Stallman of the Free Software Foundation, who refers to it as Service as a Software Substitute (SaaSS).[30] He considers the use of SaaSS to be a violation of the principles of free software.[31] According to Stallman:
With SaaSS, the users do not have a copy of the executable file: it is on the server, where the users can't see or touch it. Thus it is impossible for them to ascertain what it really does, and impossible to change it. SaaSS inherently gives the server operator the power to change the software in use, or the users' data being operated on.
This criticism does not apply to all SaaS products. In 2010, Forbes contributor Dan Woods described Drupal Gardens, a free web hosting platform based on the open source Drupal content management system, as a "new open source model for SaaS". He added, "Open source provides the escape hatch. In Drupal Gardens, users will be able to press a button and get a source code version of the Drupal code that runs their site along with the data from the database. Then, you can take that code, put it up at one of the hosting companies, and you can do anything that you would like to do."[32]
Andrew Hoppin, a former Chief Information Officer for the New York State Senate, refers to this combination of SaaS and open source software as OpenSaaS and points to WordPress as another successful example of an OpenSaaS software delivery model that gives customers "the best of both worlds, and more options. The fact that it is open source means that they can start building their websites by self-hosting WordPress and customizing their website to their heart's content. Concurrently, the fact that WordPress is SaaS means that they don't have to manage the website at all -- they can simply pay WordPress.com to host it."[33]