Cloud computing is a marketing term for technologies that provide computation, software, data access, and storage services that do not require
end-user knowledge of the physical location and configuration of the system that delivers
the services. A parallel to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the
component devices or infrastructure required to provide the service. Cloud computing providers deliver applications via the internet, which are
accessed from web browsers and desktop and mobile apps, while the business software and data are stored on servers at a remote location. At the
foundation of cloud computing is the broader concept of infrastructure convergence (or Converged Infrastructure) and shared services. This type of
data center environment allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and
enables IT to more rapidly adjust IT resources (such as servers, storage, and networking) to meet fluctuating and unpredictable business demand.
Cloud Computing vendors combine virtualization (one computer hosting several virtual servers), automated provisioning (servers have software
installed automatically), and Internet connectivity technologies to provide the service. These are not new technologies but a new name applied to a
collection of older (albeit updated) technologies that are packaged, sold and delivered in a new way.
FUNDAMENTALS
*LAYERS OF CLOUD COMPUTING*
Client
A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery and that is in essence
useless without it. Examples include some computers, phones and other devices, operating systems, and browsers.
Application
Cloud application services or "Software as a Service (SaaS)" deliver software as a service over the Internet, eliminating the need to install and run
the application on the customer's own computers and simplifying maintenance and support.
Platform
Cloud platform services, also known as platform as a service (PaaS), deliver a computing platform and/or solution stack as a service, often
consuming cloud infrastructure and sustaining cloud applications. It facilitates deployment of applications without the cost and complexity of buying
and managing the underlying hardware and software layers. Cloud computing is becoming a major change in our industry, and one of the most
important parts of this change is the shift to cloud platforms. Platforms let developers write applications that run in the cloud and consume
services provided by the cloud. Various names are used for such platforms, including "on-demand platform"; whatever the label, they all have
great potential for development. When development teams create applications for the cloud, they build on such a platform rather than
assembling the full stack themselves.
Infrastructure
Cloud infrastructure services, also known as "infrastructure as a service" (IaaS), deliver computer infrastructure (typically a platform
virtualisation environment) as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data-center space
or network equipment, clients instead buy those resources as a fully outsourced service. Suppliers typically bill such services on a utility computing
basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
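The utility-billing model just described can be sketched in a few lines of Python. The hourly and per-gigabyte rates below are invented for illustration and match no real provider's price list:

```python
# Sketch of utility-style IaaS billing: cost tracks consumption.
# Both rates are assumptions, not any vendor's actual prices.

HOURLY_RATE_PER_VCPU = 0.05   # dollars per vCPU-hour (assumed)
RATE_PER_GB_STORED = 0.10     # dollars per GB-month (assumed)

def monthly_bill(vcpu_hours: float, gb_stored: float) -> float:
    """The bill reflects the level of activity: no usage, no charge."""
    return vcpu_hours * HOURLY_RATE_PER_VCPU + gb_stored * RATE_PER_GB_STORED

# A quiet month costs less than a busy one, with no up-front purchase.
quiet = monthly_bill(vcpu_hours=100, gb_stored=50)    # 100*0.05 + 50*0.10 = 10.0
busy = monthly_bill(vcpu_hours=2000, gb_stored=500)   # 2000*0.05 + 500*0.10 = 150.0
```

The point of the model is the shape of the cost curve, not the numbers: spend scales with activity instead of with purchased capacity.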
Server
The servers layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services,
including multi-core processors, cloud-specific operating systems and combined offerings.
*THE ECONOMICS*
Economies of scale and skill drive Cloud Computing economics. As with rented Real Estate, the costs of ownership are pooled and spread among all
tenants of the multi-tenant Cloud Computing solution. Consequently, acquisition costs are low, but tenants never own the technology asset and
might face challenges if they need to move or end the service for any reason. Something that is often overlooked when evaluating Cloud
Computing costs is the continued need to provide LAN services that are robust enough to support the Cloud solution. These costs are not always
small. For example, if you have 6 or more workstation computers, you will probably need to continue to maintain a server in a domain controller role
(to ensure name resolution), at least one switch (to connect all of the computers to each other and the router), one or more networked printers, and
the router for the Internet connection.
The virtual machine monitor (VMM) provides the means for simultaneous use of cloud facilities (see Figure 1). VMM is a program on a host
system that lets one computer support multiple, identical execution environments. From the user's point of view, the system is a self-contained
computer which is isolated from other users. In reality, every user is being served by the same machine. A virtual machine is one operating system
(OS) that is being managed by an underlying control program allowing it to appear to be multiple operating systems. In cloud computing, VMM
allows users to monitor and thus manage aspects of the process such as data access, data storage, encryption, addressing, topology, and workload
movement.

Figure 1. How the Virtual Machine Monitor works
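As a rough illustration of the idea above (one machine serving every user, with each guest believing it owns the computer), here is a toy Python model of a VMM allocating host memory to isolated guests. It is a conceptual sketch, not how a real hypervisor is implemented:

```python
# Toy model of a virtual machine monitor: one host, several isolated
# guests that each appear to be a self-contained computer.

class Guest:
    """A virtual machine: a fixed allocation plus private state."""
    def __init__(self, name: str, allocated_mb: int):
        self.name = name
        self.allocated_mb = allocated_mb
        self.memory = {}   # private state; other guests cannot see it

class HostVMM:
    """One physical host multiplexed among several guests."""
    def __init__(self, total_mb: int):
        self.total_mb = total_mb
        self.guests = []

    def launch(self, name: str, mb: int) -> Guest:
        used = sum(g.allocated_mb for g in self.guests)
        if used + mb > self.total_mb:
            raise MemoryError("host capacity exhausted")
        guest = Guest(name, mb)
        self.guests.append(guest)
        return guest

vmm = HostVMM(total_mb=4096)
a = vmm.launch("vm-a", 1024)
b = vmm.launch("vm-b", 1024)
a.memory["key"] = "private"   # isolation: vm-b never observes vm-a's state
```

In reality the VMM virtualises CPU, memory and devices rather than handing out a dictionary, but the two properties the text stresses, shared hardware and per-user isolation, are what the sketch captures.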
These are the layers the cloud provides:
The infrastructure layer is the foundation of the cloud. It consists of the physical assets: servers, network devices, storage disks,
etc. Infrastructure as a Service (IaaS) has providers such as the IBM Cloud. Using IaaS you don't actually control the underlying
infrastructure, but you do have control of the operating systems, storage and deployed applications, and, to a limited degree, control over
select networking components. Print On Demand (POD) services are an example of organizations that can benefit from IaaS. The POD
model is based on the selling of customizable products. PODs allow individuals to open shops and sell designs on products. Shopkeepers
can upload as many or as few designs as they can create. Many upload thousands. With cloud storage capabilities, a POD can provide
unlimited storage space.
The middle layer is the platform. It provides the application infrastructure. Platform as a Service (PaaS) provides access to operating
systems and associated services. It provides a way to deploy applications to the cloud using programming languages and tools supported
by the provider. You do not have to manage or control the underlying infrastructure, but you do have control over the deployed
applications and, to some degree, over application-hosting environment configurations. PaaS providers include Google App Engine and
Microsoft Azure (Amazon's Elastic Compute Cloud, by contrast, is an IaaS offering). The small entrepreneurial software house is an ideal
candidate for PaaS: with a ready-made platform, world-class products can be created without the overhead of in-house production.
The top layer is the application layer, the layer most people visualize as the cloud. Applications run here and are provided on demand
to users. Software as a Service (SaaS) has providers such as Google Pack. Google Pack includes Internet accessible
applications, tools such as Calendar, Gmail, Google Talk, Docs, and many more. Figure 2 shows these layers.
Cloud formations
There are three types of cloud formations: private (on premise), public, and hybrid.
Public clouds are available to the general public or a large industry group and are owned and provisioned by an organization selling cloud
services. A public cloud is what is thought of as the cloud in the usual sense; that is, resources dynamically provisioned over the Internet
using web applications from an off-site third-party provider that supplies shared resources and bills on a utility computing basis.
Private clouds exist within your company's firewall and are managed by your organization. They are cloud services you create and control
within your enterprise. Private clouds offer many of the same benefits as public clouds, the major distinction being that your
organization is in charge of setting up and maintaining the cloud.
Hybrid clouds are a combination of the public and the private cloud using services that are in both the public and private space.
Management responsibilities are divided between the public cloud provider and the business itself. Using a hybrid cloud, organizations can
determine the objectives and requirements of the services to be created and obtain them based on the most suitable alternative.
HISTORY
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone
network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.
Cloud computing is a natural evolution of the widespread adoption of virtualisation, service-oriented architecture, autonomic, and utility
computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology
infrastructure "in the cloud" that supports them. The underlying concept of cloud computing dates back to the 1960s, when John McCarthy
opined that "computation may someday be organised as a public utility." Almost all the modern-day characteristics of cloud computing (elastic
provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public,
private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer
Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the
author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. The
actual term "cloud" borrows from telephony in that telecommunications companies, who until the 1990s offered primarily dedicated
point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much
lower cost. By switching traffic to balance utilisation as they saw fit, they were able to utilise their overall network bandwidth more
effectively. The cloud symbol was used to denote the demarcation point between that which was the responsibility of the provider and
that which was the responsibility of the user. Cloud computing extends this boundary to cover servers as well as the network
infrastructure. After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernising
their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room
for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby
small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development
effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula,
enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds,
and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications)
to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By
mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and
those who sell them" and observed that "[o]rganisations are switching from company-owned hardware and software assets to per-use service-based
models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other
areas.
In addition, most organisations' data centres are often oversized when they are built and typically run at a utilisation rate of less
than 60%. It is also reckoned that over the four-year life of a server, the combined facility, capital and operational expense will be up
to four times greater than the cost of the server itself. One of the key advantages offered by cloud computing is that you can pay on a
consumption basis, e.g. per hour, per gigabyte, etc. This has a huge impact on the economics. When a true comparison is done, using a
fully costed model, the decision weighs more favourably towards cloud computing. And when the other advantages are taken into
account, cloud computing can really stack up as a viable option.
Research firm IDC summed it up thus: "The cloud model offers a much cheaper way for businesses to acquire and use IT. In an
economic downturn the appeal of that cost advantage will be greatly magnified."
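The fully costed comparison can be sketched with the figures quoted above: a four-year on-premise total cost of up to four times the server's purchase price, versus consumption-based billing per hour. The server price, hourly rate and utilisation level below are illustrative assumptions:

```python
# Back-of-envelope TCO comparison over a four-year server life.
# Only the 4x multiplier comes from the text; every other number
# is an assumption for illustration.

SERVER_PRICE = 5000.0            # assumed purchase price of one server
ON_PREM_TCO = 4 * SERVER_PRICE   # facility + capital + operational expense

CLOUD_RATE = 0.40                # assumed dollars per instance-hour
HOURS_4_YEARS = 4 * 365 * 24     # 35040 hours

def cloud_cost(utilisation: float) -> float:
    """Consumption-based billing: pay only for hours actually used."""
    return CLOUD_RATE * HOURS_4_YEARS * utilisation

print(ON_PREM_TCO)        # 20000.0
print(cloud_cost(0.20))   # roughly 2800 at 20% utilisation
```

At the low utilisation rates typical of in-house data centres, the consumption-based figure comes out well below the fully costed on-premise one; at sustained high utilisation the gap narrows, which is why the fully costed comparison matters.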
3. Business Agility: One of the understated advantages of cloud computing is that it enables an organisation to be more agile. The speed at which
new computing capacity can be requisitioned is a vital element of cloud computing. Adding additional storage, network bandwidth, memory,
computing power, etc. can be done rapidly and often instantaneously. Most cloud providers employ infrastructure software that can easily add, move,
or change an application with very little, if any, intervention by cloud provider personnel. This dynamic, elastic nature of cloud computing is what
gives it a big advantage over an in-house data centre. Many internal IT departments have to work through procurement processes just to add
additional capacity. Once the procurement has been authorised it can still take weeks to acquire and rack new equipment. In many cases the demand
for IT services is outstripping the ability of the IT department to manage using traditional practices. Cloud computing allows organisations to react
more quickly to market conditions and to scale up and down as needed. New applications can be quickly released with lower up-front costs. The
flexibility offered by cloud computing enables innovative ideas to be rapidly tried and tested without the need to divert existing IT staff from their
daily routine. Increasingly, people won't want to spend capital on these new ideas. They'll want to pay for them operationally. They may represent a
new market, a new technique, a new set of standards, or a new set of technologies. If you're starting a new line of business, you can launch with a
robust, state-of-the-art infrastructure without tying up limited capital.
For development projects, organisations can provision multiple production-scale systems on demand in the cloud, saving time and
expense over traditional testing scenarios and enabling faster handoff from development to operations. And when the project is
finished they can be turned off again with nothing else to pay.
4. Built-in Disaster Recovery & Back-up Sites: With cloud computing, the burden of managing technology is placed on the technology provider. It
is their responsibility to provide built-in data protection, fault tolerance, self-healing and disaster recovery. Typical disaster recovery costs are
estimated at twice the cost of the infrastructure. With a cloud-based model, true disaster recovery is estimated to cost little more than the cost of
the infrastructure itself, a significant saving. Additionally, because cloud service providers replicate their data, even the loss of one or two data
centres will not result in lost data. Cloud computing provides a high level of redundancy at a price point traditional managed solutions cannot
match. Now every business can put a plan in place to ensure they are able to continue their business in the face of radical environment changes.
Even the cloud computing cynics are agreed on this.
5. Device & Location Independence: Cloud computing is already enabling greater device independence, greater portability, and
greater opportunities for interconnection and collaboration. With applications and data located in the cloud it becomes much easier to
enable users to access systems regardless of their location or what device they are using. Teleworkers can be quickly brought online,
remote offices can be quickly connected, temporary teams can be easily set up on site, mobile access can be easily enabled. With the
growing use of smartphones, netbooks and other hand-held devices there is also an increasing need for data access on the go. The
success of devices such as the iPhone and its App store is also opening up a whole new world of mobile applications. Connecting
these types of applications to data stores will be significantly easier through the cloud. Location-based applications will reach their
potential through cloud computing. Many smartphones are now location-aware (using inbuilt GPS facilities) and we will increasingly
see applications that take advantage of this capability.
Cloud computing will facilitate innovation in many areas. Much of it will be driven by the ease with which different devices can
connect to cloud-based applications.
6. BONUS REASON: It's Greener. As mentioned above, most internal data centres are oversized and don't run at anything like full capacity. Most
servers run significantly below capacity (real-world estimates of server utilisation in data centres range from 5% to 20%) yet they still consume close
to the same amount of power and require the same amount of cooling as a machine at full capacity (granted that virtualisation is changing this in some
cases). A typical data centre consumes up to 100 times more power than an equivalent-sized office building. The carbon footprint of a typical data
centre is therefore a significant concern for many organisations. In a cloud computing environment resources are shared across applications (and
even customers) resulting in greater use of the resources for a similar energy cost. For corporations spread over different time zones the computing
power lying idle at one geographic location (during off-work hours) could be harnessed at a location in a different time zone. This reduces not only
the power consumption but also the amount of physical hardware required.
With cloud computing virtual offices can be quickly set up. Employees can easily work from home. Travelling salespeople can have all their data
available in any location without needing to visit the office. These are just some of the other examples of how the carbon footprint can be reduced.
2. Data Location & Privacy: Data in a cloud computing environment has to exist on physical servers somewhere in the world, and the physical
location of those servers is important under many nations' laws. This is especially important for companies that do business across national
boundaries, as different privacy and data management laws apply in different countries.
For example, the European Union places strict limits on what data can be stored on its citizens and for how long. Many banking regulators also
require customers' financial data to stay in their home country. Many compliance regulations require that data not be intermixed with other data, such
as on shared servers or databases.
In another example, Canada is concerned about its public sector projects being hosted on U.S.-based servers because, under the U.S. Patriot Act,
the data could be accessed by the U.S. government. Some of the larger cloud providers (e.g. Microsoft, Google) have recognised this issue and are
starting to allow customers to specify the location of their data.
Another data issue to address is: What happens to your data in a legal entanglement?
What if you miss paying a bill, or decide not to pay a bill for various reasons, like dissatisfaction with the service? Do you lose access to your data?
This is something that can be addressed at the contract stage to ensure that the right safeguards are put in place to prevent a provider from
withholding access to your data.
3. Internet Dependency, Performance & Latency: A concern for many organisations is that cloud computing relies on the availability, quality and
performance of their internet connection. Dependency on an internet connection raises some key questions about resilience, bandwidth and latency.
4. Availability & Service Levels: One of the most common concerns regarding cloud computing is the potential for down-time if the system isn't
available for use. This is a critical issue for line-of-business apps, since every minute of downtime is a minute that some important business function
can't be performed. Every minute of down-time can not only affect revenue but can also cause reputation damage. These concerns are further
exacerbated by the recent highly public outages at some of the major cloud providers such as Google, Salesforce.com and Amazon.
As a counter to these concerns, cloud-computing advocates are quick to point out that few enterprise IT infrastructures are as good as those run by
major cloud providers.
Whilst highly public outages get lots of press coverage and help feed the views of the cloud computing cynics, one needs to compare this against
in-house outages which rarely get publicised. Just how many times per year are internal systems down and/or unavailable? How does this compare to
a typical cloud computing scenario?
Many companies thinking of adopting cloud computing will look to the service-level agreements (SLAs) to give them some comfort about
availability. Surprisingly, some cloud providers don't even offer SLAs, and many others offer inadequate SLAs (in terms of guaranteed uptime).
Cloud providers will need to get serious about offering credible SLAs if the growth of cloud computing is not to stall. Increased competition will
help and will push the early entrants to provide greater assurances to their customers.
Just because a provider says that they can deliver a particular service over the Web better than an internal IT organisation can doesn't make it
necessarily true. And even if they can, how do you know they are doing that consistently and, if they aren't, what compensation is due back to the
customer?
Cynics will say service level agreements are not worth the paper they are written on. They will point out that an SLA doesn't necessarily assist in
obtaining high quality uptime, but provides the basis for conflict negotiation when things don't go well. A bit like a pre-nuptial agreement.
5. Current Enterprise Applications Can't Be Migrated Easily: Moving an existing application to a cloud platform is not as easy as it might first
appear. Different cloud providers impose different application architectures, which are invariably dissimilar to the architectures of enterprise
applications. So for a complex multi-tier application that depends on internal databases and that serves thousands of users with ever-changing
access rights, it's not going to be an easy switch-over to a cloud platform.
In reality most organisations that adopt cloud computing will end up doing it with new applications. Existing applications will probably continue to
run on-premise for some time. That's not to say that these applications can't be converted, but the costs of conversion will often outweigh the
benefits.
Amazon Web Services offers the most flexibility with regard to migrating applications because it provisions an "empty" image that you can put
anything into. However, once deployed, applications cannot be easily moved off the platform because of its idiosyncratic storage framework.
With some other cloud platforms (e.g. Microsoft Azure) it may be possible to take existing applications and, with minimal effort, modify them to run
in the cloud. Much will depend on the existing architecture of the application though. If it was developed using underlying web services then it is
likely that it can be modified relatively easily to run in the cloud.
Whilst the lack of a convenient migration path for existing applications might hinder cloud computing adoption, in the longer term it is not going to
be a permanent barrier. As more vendors build applications for cloud deployment, the uptake of cloud computing will grow.
CLOUD TYPES
Cloud computing is typically classified in two ways:
1. Location of the cloud computing
2. Type of services offered
Location of the cloud
Based on location, cloud computing is typically classified in the following ways:
Public Clouds: In a public cloud the computing infrastructure is hosted by the cloud vendor at the vendor's premises. The customer has no visibility
into, or control over, where the computing infrastructure is hosted, and the infrastructure is shared among multiple organizations.
These are the clouds which are open for use by the general public; they exist beyond the firewall of an organization and are fully hosted and
managed by vendors like Google, Amazon, Microsoft, etc. They strictly follow the pay-as-you-go model, which helps start-ups to start small and go
big without investing much in IT infrastructure. Here a user does not have control over the management of the resources; everything is managed by
the third party, and it is their responsibility to apply software updates, security patches, etc.
Though public clouds are quite effective and ease an organization's effort since everything is already there, they do face some criticism, especially
on security-related issues.
Private Clouds: The computing infrastructure is dedicated to a particular organization and not shared with other organizations. Some experts
consider that private clouds are not real examples of cloud computing. Private clouds are more expensive and more secure when compared to public
clouds.
Private clouds are of two types: On-premise private clouds and externally hosted private clouds. Externally hosted private clouds are also exclusively
used by one organization, but are hosted by a third party specializing in cloud infrastructure. Externally hosted private clouds are cheaper than
On-premise private clouds.
These are the types of clouds which exist within the boundaries (firewall) of an organization. A private cloud is totally managed by the enterprise
and has all the features of a public cloud, with the major difference that the organization has to take care of the underlying IT infrastructure. They
are more secure as they are internal to an organization, and resources can be shuffled according to business needs. They are best suited for
applications that require tight security, follow stringent policies or are meant for regulatory purposes. It is not very easy for an organization to go
with a private cloud due to its complexity and management overhead, so they are often used by enterprises who have made huge investments in
their IT infrastructure and have the manpower and abilities to manage it.
Hybrid Clouds: Organizations may host critical applications on private clouds and applications with relatively less security concerns on the public
cloud. The usage of both private and public clouds together is called a hybrid cloud. A related term is cloud bursting: organizations use their own
computing infrastructure for normal usage but access the cloud for high/peak load requirements. This ensures that a sudden increase in computing
requirements is handled gracefully.
They consist of external and internal providers, viz. a mix of public and private clouds. Secure and critical apps are managed by the organization
and the not-so-critical apps by the third-party vendor. They have a unique identity, bound by standard technology, thus enabling data and
application portability. They are used in situations like cloud bursting. In most countries, we are going to see a lot of investment in hybrid
clouds in the next decade, for the simple reason that a lot of companies are skeptical about cloud security and prefer that critical data
be managed by themselves and non-critical data by the external provider.
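The cloud-bursting pattern mentioned above, own infrastructure for normal load and the public cloud only for peaks, can be sketched as a simple routing decision. The capacity figure and the two backend labels are assumptions for illustration:

```python
# Sketch of the cloud-bursting decision: serve normal load in-house,
# spill only the excess over to a public cloud at peak times.

ON_PREM_CAPACITY = 100   # requests/sec the private infrastructure handles (assumed)

def route(load_rps: int) -> dict:
    """Split incoming load between private infrastructure and the public cloud."""
    on_prem = min(load_rps, ON_PREM_CAPACITY)
    burst = max(0, load_rps - ON_PREM_CAPACITY)
    return {"private": on_prem, "public_cloud": burst}

route(80)    # normal day: {"private": 80, "public_cloud": 0}
route(250)   # peak load:  {"private": 100, "public_cloud": 150}
```

Real implementations put this decision in a load balancer or autoscaler rather than application code, but the economics are the ones the text describes: the private capacity is sized for the normal case, and only the peak is paid for on a utility basis.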
From an end-user perspective, public clouds will be more interesting: we all use public cloud services like Microsoft Office Web Apps,
Windows Live Mesh 2011, Google Docs, etc., whereas enterprises will have an interest in private and hybrid clouds. I would suggest they
check Microsoft Exchange Online, SharePoint Online, etc. for this.
Community cloud: involves sharing of computing infrastructure among organizations of the same community. For example, all
government organizations within the state of California may share computing infrastructure on the cloud to manage data related to
citizens residing in California.
Classification based upon service provided
Based upon the services offered, clouds are classified in the following ways:
A lot of terms have become part of the cloud computing lexicon, none more popular than SaaS, PaaS and IaaS. Here's an attempt to remove the
layers of complexity and present them in a language even technophobes can understand.
SaaS: Software as a Service (SaaS) is software that is deployed over the Internet, available to the end user as and when wanted. Hence, it's also
known as software on demand. Payment can be per usage, on a subscription model, or even free if advertisement is part of the equation.
While SaaS offers several advantages like accessibility from any location, rapid scalability and bundled maintenance, there may be certain security
concerns, especially for users who desire high security and control, as that domain is in the hands of the provider. In fact, that is one of the
arguments forwarded by open-source proponent Richard Stallman against SaaS. (See: Who Doesn't Like Cloud Computing?)
SaaS may be considered the oldest and most mature type of cloud computing. Examples include Salesforce.com's sales management applications,
NetSuite, Google's Gmail and Cornerstone OnDemand.
PaaS: Platform as a Service (PaaS) is a combination of a development platform and a solution stack, delivered as a service on demand. It provides
infrastructure on which software developers can build new applications or extend existing ones without the cost and complexity of buying and
managing the underlying hardware and software and provisioning hosting capabilities. In other words, it provides the supporting infrastructure to
enable the end user to develop his own solutions.
In addition to firms' IT departments, who use PaaS to customize their own solutions, its users include independent software vendors (ISVs) who
develop specialized applications for specific purposes. While earlier application development required hardware, an operating system, a
database, middleware, Web servers, and other software, with the PaaS model only the knowledge to integrate them is required. The rest is taken
care of by the PaaS provider.
Sometimes, PaaS is used to extend the capabilities of applications developed as SaaS. Examples of PaaS include Salesforce.com's Force.com,
Google's App Engine, and Microsoft's Azure.
IaaS: Infrastructure as a Service (IaaS) delivers computer infrastructure (typically a platform virtualization environment) as a service. This
includes servers, software, data-center space and network equipment, available in a single bundle and billed as per usage in a utility computing
model.
IaaS is generally used by organizations that have the in-house expertise to manage their IT requirements but don't have the infrastructure. They
hire the required infrastructure from IaaS providers and load up their libraries, applications, and data, after which they configure them themselves.
A popular use of IaaS is in hosting websites, where the in-house infrastructure is not burdened with this task but left free to manage the business.
Amazon's Elastic Compute Cloud (EC2) is a major example of IaaS. Rackspace's Mosso and GoGrid's ServePath are other IaaS offerings.
One important thing to note here: there is considerable overlap between SaaS, PaaS and IaaS, and with the rapid changes in the field, definitions are
in flux. In fact, the same service may be categorized as any of the three depending on who is making the categorization: a developer, a system
administrator or a manager.
Cloud computing exhibits several key characteristics:
Empowerment of end-users of computing resources by putting the provisioning of those resources in their own control, as opposed to the
control of a centralized IT service
Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way the
user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.
Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure.[15] This is
purported to lower barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or
infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are
required for in-house implementation.[16]
Device and location independence enable users to access systems using a web browser regardless of their location or what device they are
using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect
from anywhere.
Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
Centralisation of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
Peak-load capacity increases (users need not engineer for highest possible load-levels)
Utilisation and efficiency improvements for systems that are often only 10-20% utilised.
Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business
continuity and disaster recovery.
Scalability and Elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without
users having to engineer for peak loads.
Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
Security could improve due to centralisation of data, increased security-focused resources, etc., but concerns can persist about loss of control
over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part
because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of
security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being
shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part
motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer.
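The REST-based APIs mentioned above let machines manipulate cloud resources the way a browser manipulates pages: each resource gets a URL, and HTTP verbs (GET, POST, DELETE) act on it. A small sketch of building such a request with Python's standard library; the endpoint, resource path and token are hypothetical, not any real provider's API:

```python
import urllib.request

# Hypothetical endpoint; each real provider defines its own URL scheme.
BASE = "https://api.example-cloud.com/v1"

def build_request(verb, resource, token):
    """Build (but do not send) a REST request for a cloud resource."""
    req = urllib.request.Request(f"{BASE}/{resource}", method=verb)
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

# e.g. terminating a (hypothetical) virtual server by DELETEing its URL:
req = build_request("DELETE", "servers/i-12345", token="secret")
print(req.get_method(), req.full_url)
```

The point of the style is that the same verb-plus-URL vocabulary covers every resource type, so automation tools need no per-service protocol.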
Introduction: Cloud Architectures are designs of software applications that use Internet-accessible, on-demand services. Applications built on Cloud
Architectures use the underlying computing infrastructure only when it is needed (for example, to process a user request), draw the
necessary resources on demand (such as compute servers or storage), perform a specific job, then relinquish the unneeded resources and often dispose
of themselves after the job is done. While in operation, the application scales up or down elastically based on resource needs.
In the first section, we describe an example of an application that is currently in production using the on-demand infrastructure provided by Amazon
Web Services. This application allows a developer to do pattern-matching across millions of web documents. The application brings up hundreds of
virtual servers on-demand, runs a parallel computation on them using an open source distributed processing framework called Hadoop, then shuts
down all the virtual servers, releasing all its resources back to the cloud, all with low programming effort and at a very reasonable cost for the caller.
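The pattern-matching job described above follows the MapReduce shape that Hadoop distributes across servers: a map phase turns each document into matched records, and a reduce phase groups those records into the final answer. A toy, single-machine sketch of that shape (the documents and pattern are invented for illustration):

```python
import re
from collections import defaultdict

# Toy, single-machine sketch of the MapReduce shape Hadoop runs across
# hundreds of servers: map documents to (match, url) pairs, then reduce
# the pairs by matched string.

def map_phase(docs, pattern):
    """Emit (match, url) for every regex match in every document."""
    for url, text in docs.items():
        for m in re.findall(pattern, text):
            yield m, url

def reduce_phase(pairs):
    """Group matched strings back to the documents containing them."""
    grouped = defaultdict(set)
    for match, url in pairs:
        grouped[match].add(url)
    return dict(grouped)

docs = {
    "http://a.example": "order id AB-1 shipped, order id AB-2 pending",
    "http://b.example": "order id AB-2 cancelled",
}
result = reduce_phase(map_phase(docs, r"AB-\d+"))
print(result)  # AB-2 appears in both documents, AB-1 in one
```

On Hadoop the same two functions run unchanged; the framework splits the document set across machines and merges the reducers' outputs.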
In the second section, we discuss some best practices for using each Amazon Web Service - Amazon S3, Amazon SQS, Amazon SimpleDB and
Amazon EC2 - to build an industrial-strength scalable application. (Figure: a simple cloud architecture.)
Why Cloud Architectures?: Cloud Architectures address key difficulties surrounding large-scale data processing. In traditional data processing, it is
difficult, first, to get as many machines as an application needs. Second, it is difficult to get the machines when one needs them. Third, it is difficult to
distribute and co-ordinate a large-scale job on different machines, run processes on them, and provision another machine to recover if one machine
fails. Fourth, it is difficult to auto-scale up and down based on dynamic workloads. Fifth, it is difficult to get rid of all those machines when the job is
done. Cloud Architectures solve such difficulties. Applications built on Cloud Architectures run in-the-cloud, where the physical location of the
infrastructure is determined by the provider. They take advantage of simple APIs of Internet-accessible services that scale on-demand, that are
industrial-strength, where the complex reliability and scalability logic of the underlying services remains implemented and hidden inside-the-cloud.
The usage of resources in Cloud Architectures is as needed, sometimes ephemeral or seasonal, thereby providing the highest utilization and optimum
bang for the buck.
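The coordination difficulties above (distributing a job, surviving worker failure, releasing machines when done) are commonly solved with a shared work queue. A local sketch using threads as stand-ins for virtual servers; in a real cloud deployment the queue would be a service such as Amazon SQS and each worker a separate instance:

```python
import queue
import threading

# Queue-based coordination: workers are "provisioned" for the job, drain
# a shared queue, and "terminate" when no work remains. Threads stand in
# for virtual servers; the job itself (squaring numbers) is a toy.

def run_job(tasks, worker_count=4):
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return  # queue drained: this worker shuts down
            with lock:
                results.append(t * t)  # one unit of work
            work.task_done()

    workers = [threading.Thread(target=worker) for _ in range(worker_count)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sorted(results)

print(run_job(range(5)))  # [0, 1, 4, 9, 16]
```

Because workers only pull from the queue, a failed worker loses at most its current task, and adding workers speeds the job up without any re-coordination.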
Business Benefits of Cloud Architectures: There are some clear business benefits to building applications using Cloud Architectures. A few of
these are listed here:
Almost zero upfront infrastructure investment: If you have to build a large-scale system it may cost a fortune to invest in real estate, hardware
(racks, machines, routers, backup power supplies), hardware management (power management, cooling), and operations personnel. Because
of the upfront costs, it would typically need several rounds of management approvals before the project could even get started. Now, with
utility-style computing, there is no fixed cost or startup cost.
Just-in-time Infrastructure: In the past, if you got famous and your systems or your infrastructure did not scale, you became a victim of your
own success. Conversely, if you invested heavily and did not get famous, you became a victim of your failure. By deploying applications
in-the-cloud with dynamic capacity management, software architects do not have to worry about pre-procuring capacity for large-scale
systems. The solutions are low risk because you scale only as you grow. Cloud Architectures can relinquish infrastructure as quickly as it was
acquired in the first place (in minutes).
More efficient resource utilization: System administrators usually worry about procuring hardware (when they run out of capacity) and about
infrastructure utilization (when they have excess and idle capacity). With Cloud Architectures they can manage resources more effectively
and efficiently by having applications request and relinquish only the resources they need, on demand.
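The request/relinquish decision can be as simple as comparing measured utilisation against thresholds. A sketch of such a policy; the thresholds, doubling rule and fleet limits here are illustrative assumptions, not any provider's actual autoscaling behaviour:

```python
# Illustrative on-demand scaling policy: request capacity when busy,
# relinquish it when idle. All thresholds and limits are assumptions.

def desired_servers(current, utilisation, low=0.3, high=0.7,
                    minimum=1, maximum=100):
    """Scale out above `high` utilisation, scale in below `low`."""
    if utilisation > high:
        current *= 2                          # request more capacity
    elif utilisation < low:
        current = max(minimum, current // 2)  # relinquish idle capacity
    return min(maximum, max(minimum, current))

print(desired_servers(4, 0.9))  # 8 : overloaded, double the fleet
print(desired_servers(4, 0.1))  # 2 : mostly idle, halve the fleet
print(desired_servers(4, 0.5))  # 4 : within band, no change
```

An application running this loop periodically never holds the excess, idle capacity that worries system administrators in the fixed-hardware case.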
Usage-based costing: Utility-style pricing allows billing the customer only for the infrastructure that has been used. The customer is not
liable for the entire infrastructure that may be in place. This is a subtle difference between desktop applications and web applications. A
desktop application or a traditional client-server application runs on the customer's own infrastructure (PC or server), whereas in a Cloud
Architectures application, the customer uses a third-party infrastructure and gets billed only for the fraction of it that was used.
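Utility-style billing reduces to metering usage and multiplying by per-unit rates. A few lines make the model concrete; the resource names and rates below are hypothetical, not any provider's actual price list:

```python
# Utility-style billing: pay only for metered usage.
# Rates are hypothetical, chosen purely for illustration.

RATES = {
    "server_hours": 0.10,       # $ per instance-hour
    "storage_gb_months": 0.05,  # $ per GB-month stored
    "data_transfer_gb": 0.12,   # $ per GB transferred
}

def monthly_bill(usage):
    """Sum metered usage times the per-unit rate for each resource."""
    return round(sum(RATES[k] * v for k, v in usage.items()), 2)

bill = monthly_bill({"server_hours": 720,
                     "storage_gb_months": 50,
                     "data_transfer_gb": 30})
print(bill)  # 72.00 + 2.50 + 3.60 = 78.1
```

Note there is no fixed term in the sum: a month with zero usage bills zero, which is exactly the contrast with owning the infrastructure outright.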
Potential for shrinking the processing time: Parallelization is one of the great ways to speed up processing. If one compute-intensive or
data-intensive job that can be run in parallel takes 500 hours to process on one machine, then with Cloud Architectures it would be possible to
spawn and launch 500 instances and process the same job in 1 hour. Having an elastic infrastructure available provides the application with
the ability to exploit parallelization in a cost-effective manner, reducing the total processing time.
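The 500-hours-on-one-machine versus 1-hour-on-500-machines claim is the ideal speedup T/N for a perfectly parallel job. A toy illustration of both the arithmetic and the pattern, splitting a job into independent chunks and running them concurrently (thread counts and chunk sizes are arbitrary choices for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def ideal_time(total_hours, machines):
    """Runtime of a perfectly parallel job spread over N machines."""
    return total_hours / machines

def process_chunk(chunk):
    """Stand-in for one machine's share of the work."""
    return sum(chunk)

# Split one job into independent chunks and run them concurrently,
# then combine the partial results, as a parallel cloud job would.
data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
with ThreadPoolExecutor(max_workers=10) as pool:
    partials = list(pool.map(process_chunk, chunks))

print(ideal_time(500, 500))  # 1.0 hour: the ideal case from the text
print(sum(partials))         # 499500, same answer as the serial job
```

Real jobs fall short of the ideal because of coordination overhead and any serial portion of the work, but elastic infrastructure makes even the imperfect speedup affordable, since 500 machines for 1 hour cost roughly the same as 1 machine for 500 hours under usage-based pricing.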