
CLOUD COMPUTING

INTRODUCTION TO CLOUD COMPUTING


*WHAT IS CLOUD COMPUTING*
Cloud computing is a comprehensive solution that delivers IT as a service. It is an Internet-based computing solution where shared resources are
provided like electricity distributed on the electrical grid. Computers in the cloud are configured to work together and the various applications use
the collective computing power as if they are running on a single system.
The flexibility of cloud computing is a function of the allocation of resources on demand. This facilitates the use of the system's
cumulative resources, negating the need to assign specific hardware to a task. Before cloud computing, websites and server-based
applications were executed on a specific system. With the advent of cloud computing, resources are used as an aggregated virtual
computer. This amalgamated configuration provides an environment where applications execute independently without regard for any
particular configuration.
We see Cloud Computing as a computing model, not a technology. In this model customers plug into the cloud to access IT resources which are
priced and provided on-demand. Essentially, IT resources are rented and shared among multiple tenants much as office space, apartments, or
storage spaces are used by tenants. Delivered over an Internet connection, the cloud replaces the company data center or server providing the same
service. Thus, Cloud Computing is simply IT services sold and delivered over the Internet.
Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided
to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).
Cloud computing is a computing paradigm, where a large pool of systems are connected in private or public networks, to provide dynamically
scalable infrastructure for application, data and file storage. With the advent of this technology, the cost of computation, application hosting, content
storage and delivery is reduced significantly.
The idea of cloud computing is based on a very fundamental principle: the reusability of IT capabilities. The difference that cloud
computing brings compared to traditional concepts of grid computing, distributed computing, utility computing, or autonomic
computing is that it broadens horizons across organizational boundaries.
Forrester defines cloud computing as:
A pool of abstracted, highly scalable, and managed compute infrastructure capable of hosting end-customer applications and billed by
consumption.

Cloud computing is a marketing term for technologies that provide computation, software, data access, and storage services that do not require
end-user knowledge of the physical location and configuration of the system that delivers the services. A parallel to this concept can be drawn with the electricity grid, wherein end-users consume power without needing to understand the
component devices or infrastructure required to provide the service. Cloud computing providers deliver applications via the Internet, which are
accessed from web browsers and desktop and mobile apps, while the business software and data are stored on servers at a remote location. At the
foundation of cloud computing is the broader concept of infrastructure convergence (or Converged Infrastructure) and shared services. This type of
data center environment allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and
enables IT to more rapidly adjust IT resources (such as servers, storage, and networking) to meet fluctuating and unpredictable business demand.
Cloud Computing vendors combine virtualization (one computer hosting several virtual servers), automated provisioning (servers have software
installed automatically), and Internet connectivity technologies to provide the service. These are not new technologies but a new name applied to a
collection of older (albeit updated) technologies that are packaged, sold and delivered in a new way.
FUNDAMENTALS
*LAYERS OF CLOUD COMPUTING*
Client
A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery and that is in essence
useless without it. Examples include some computers, phones and other devices, operating systems, and browsers.
Application
Cloud application services or "Software as a Service (SaaS)" deliver software as a service over the Internet, eliminating the need to install and run
the application on the customer's own computers and simplifying maintenance and support.
Platform
Cloud platform services, also known as platform as a service (PaaS), deliver a computing platform and/or solution stack as a service, often
consuming cloud infrastructure and sustaining cloud applications. It facilitates deployment of applications without the cost and complexity of buying
and managing the underlying hardware and software layers. Cloud computing represents a major change in our industry, and one of the most important parts of this change is the rise of cloud platforms. Platforms let developers write applications that run in the cloud, or that use services provided by the cloud, or both. Different names are used for this kind of platform, such as the on-demand platform or "Cloud 9"; whatever it is called, the potential for development is considerable. When development teams create applications for the cloud, they must build on some cloud platform.
Infrastructure
Cloud infrastructure services, also known as "infrastructure as a service" (IaaS), deliver computer infrastructure, typically a platform virtualisation environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data-center space
or network equipment, clients instead buy those resources as a fully outsourced service. Suppliers typically bill such services on a utility computing
basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
Server
The servers layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services,
including multi-core processors, cloud-specific operating systems and combined offerings.

*THE ECONOMICS*
Economies of scale and skill drive Cloud Computing economics. As with rented Real Estate, the costs of ownership are pooled and spread among all
tenants of the multi-tenant Cloud Computing solution. Consequently, acquisition costs are low, but tenants never own the technology asset and might face challenges if they need to move or end the service for any reason. Something that is often overlooked when evaluating Cloud
Computing costs is the continued need to provide LAN services that are robust enough to support the Cloud solution. These costs are not always
small. For example, if you have 6 or more workstation computers, you will probably need to continue to maintain a server in a domain controller role
(to ensure name resolution), at least one switch (to connect all of the computers to each other and the router), one or more networked printers, and
the router for the Internet connection.

*WHAT DO I NEED TO USE CLOUD COMPUTING?*


All that is really needed to acquire and use Cloud Computing solutions is a credit card (or other payment method) and a LAN with an Internet
connection robust enough to support the Cloud-delivered service. These two requirements are deceptively simple. From a technical point of view the biggest challenge for businesses, particularly SMBs, may be the need for an appropriately robust LAN infrastructure and Internet connection. Typically, Internet access is provided by a single commercial ISP through a single port on a router. A characteristic of this type of installation is that all of the computers connecting through the LAN share the Internet bandwidth equally. This can quickly become an issue.
For example: Verizon FiOS Internet 15/2 (down/up) service might have a measured speed of 14420/1867 Kbps. This would seem to be plenty of speed. However, suppose a business had 5 computers using a Cloud solution and sending data to the cloud for processing. The upload bandwidth available to each computer would be about 373 Kbps (1867/5). That is roughly 46 KB (about 46,000 8-bit characters) per second to the cloud application, before allowing for any communication protocol or application overhead. The cloud solution might not work, or responses might be so slow as to be unacceptable. It isn't the download speed that becomes the limit, but the upload speed. Refer to Page 5 for a Bandwidth Chart.
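To make the arithmetic above explicit, here is a small worked sketch in Python using the figures from the example (the measured 1867 Kbps upload and 5 computers are taken from the text; everything else is straightforward unit conversion):

```python
# Worked version of the shared-bandwidth example above.
# Assumed inputs: a measured upload speed of 1867 Kbps shared by 5 computers.

measured_up_kbps = 1867      # measured upload speed, kilobits per second
computers = 5                # workstations sharing the Internet connection

per_computer_kbps = measured_up_kbps / computers        # ~373 Kbps each
per_computer_bytes = per_computer_kbps * 1000 / 8       # kilobits -> bytes per second

print(f"Upload per computer: {per_computer_kbps:.0f} Kbps "
      f"(about {per_computer_bytes / 1000:.0f} KB, i.e. ~{per_computer_bytes:,.0f} "
      f"8-bit characters, per second before protocol overhead)")
```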
The on-demand nature of Cloud Computing presents a dilemma: The on-demand model includes a self-service interface that allows users to
self-provision services (for example storage). This empowers users but can make services too easy to acquire and consume.
Consider the faculty member at the University of Massachusetts who quietly (without anyone's knowledge) used a cloud service to back up 20 GB of data each night over the Internet, bringing the school LAN to its knees. How management controls Cloud Computing is unique to each organization
and is an IT Governance issue.

*WHY THE RUSH TO THE CLOUD?*


There are valid and significant business and IT reasons for the cloud computing paradigm shift. The fundamentals of outsourcing as a solution apply.
Reduced cost: Cloud computing can reduce both capital expense (CapEx) and operating expense (OpEx) costs because resources are only
acquired when needed and are only paid for when used.
Refined usage of personnel: Using cloud computing frees valuable personnel allowing them to focus on delivering value rather than
maintaining hardware and software.
Robust scalability: Cloud computing allows for immediate scaling, either up or down, at any time without long-term commitment.
*CLOUD COMPUTING BUILDING BLOCKS*
The cloud computing model is comprised of a front end and a back end. These two elements are connected through a network, in most cases the
Internet. The front end is the vehicle by which the user interacts with the system; the back end is the cloud itself. The front end is composed of a
client computer, or the computer network of an enterprise, and the
applications used to access the cloud. The back end provides the applications, computers, servers, and data storage that creates the cloud of services.
Layers: Computing as a commodity
The cloud concept is built on layers, each providing a distinct level of functionality. This stratification of the cloud's components has provided a
means for the layers of cloud computing to become a commodity just like electricity, telephone service, or natural gas. The commodity that cloud
computing sells is computing power at a lower cost and expense to the user. Cloud computing is poised to become the next mega-utility service.

The virtual machine monitor (VMM) provides the means for simultaneous use of cloud facilities (see Figure 1). VMM is a program on a host
system that lets one computer support multiple, identical execution environments. From the user's point of view, the system is a self-contained
computer which is isolated from other users. In reality, every user is being served by the same machine. A virtual machine is one operating system
(OS) that is being managed by an underlying control program allowing it to appear to be multiple operating systems. In cloud computing, VMM
allows users to monitor and thus manage aspects of the process such as data access, data storage, encryption, addressing, topology, and workload
movement.

Figure 1. How the Virtual Machine Monitor works
These are the layers the cloud provides:
The infrastructure layer is the foundation of the cloud. It consists of the physical assets: servers, network devices, storage disks, etc. Infrastructure as a Service (IaaS) has providers such as the IBM Cloud. Using IaaS you don't actually control the underlying infrastructure, but you do have control of the operating systems, storage, deployed applications, and, to a limited degree, control over
select networking components. Print On Demand (POD) services are an example of organizations that can benefit from IaaS. The POD
model is based on the selling of customizable products. PODs allow individuals to open shops and sell designs on products. Shopkeepers
can upload as many or as few designs as they can create. Many upload thousands. With cloud storage capabilities, a POD can provide
unlimited storage space.
The middle layer is the platform. It provides the application infrastructure. Platform as a Service (PaaS) provides access to operating
systems and associated services. It provides a way to deploy applications to the cloud using programming languages and tools supported
by the provider. You do not have to manage or control the underlying infrastructure, but you do have control over the deployed
applications and, to some degree over application hosting environment configurations. PaaS has providers such as Amazon's Elastic
Compute Cloud (EC2). The small entrepreneurial software house is an ideal candidate for PaaS. With such a ready-made platform, world-class products can be created without the overhead of in-house production.
The top layer is the application layer, the layer most people visualize as the cloud. Applications run here and are provided on demand
to users. Software as a Service (SaaS) has providers such as Google Pack. Google Pack includes Internet accessible
applications, tools such as Calendar, Gmail, Google Talk, Docs, and many more. Figure 2 shows these layers.

Figure 2. Cloud computing layers embedded in the "as a Service" components
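To summarise the three layers in one place, the following illustrative sketch maps each layer to the example providers named above and to the parts of the stack the customer still controls; the responsibility split is a general rule of thumb rather than any single provider's definition:

```python
# Illustrative mapping of the "as a Service" layers to the examples named in the
# text and to what the customer typically still controls. Rule of thumb only.

CLOUD_LAYERS = {
    "IaaS": {
        "examples": ["IBM Cloud"],
        "customer_controls": ["operating system", "storage", "deployed applications",
                              "limited networking choices"],
    },
    "PaaS": {
        "examples": ["Amazon EC2 (as cited in the text)", "Google App Engine"],
        "customer_controls": ["deployed applications", "some hosting configuration"],
    },
    "SaaS": {
        "examples": ["Google Pack (Gmail, Calendar, Docs)"],
        "customer_controls": ["application settings and data only"],
    },
}

for layer, info in CLOUD_LAYERS.items():
    print(f"{layer}: e.g. {', '.join(info['examples'])}; "
          f"customer controls: {', '.join(info['customer_controls'])}")
```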

Cloud formations
There are three types of cloud formations: private (on premise), public, and hybrid.
Public clouds are available to the general public or a large industry group and are owned and provisioned by an organization selling cloud
services. A public cloud is what is thought of as the cloud in the usual sense; that is, resources dynamically provisioned over the Internet
using web applications from an off-site third-party provider that supplies shared resources and bills on a utility computing basis.
Private clouds exist within your company's firewall and are managed by your organization. They are cloud services you create and control
within your enterprise. Private clouds offer many of the same benefits as public clouds, the major distinction being that your
organization is in charge of setting up and maintaining the cloud.
Hybrid clouds are a combination of the public and the private cloud using services that are in both the public and private space.
Management responsibilities are divided between the public cloud provider and the business itself. Using a hybrid cloud, organizations can
determine the objectives and requirements of the services to be created and obtain them based on the most suitable alternative.

IT roles in the cloud


Let us consider the probability that management and administration will require greater automation, requiring a change in the tasks of personnel
responsible for scripting due to the growth in code production. You see, IT may be consolidating, with a need for less hardware and software
implementation, but it is also creating new formations. The shift in IT is toward the knowledge worker. In the new paradigm, the technical human
assets will have greater responsibilities for enhancing and upgrading general business processes.
The developer
The growing use of mobile devices, the popularity of social networking, and other aspects of the evolution of commercial IT processes and systems,
will guarantee work for the developer community; however, some of the traditional roles of development personnel will be shifted away from the
enterprise's developers due to the systemic and systematic processes of the cloud configuration model.
A recent survey by IBM, "New developerWorks survey shows dominance of cloud computing and mobile application development" (see Resources), demonstrated that the demand for mobile technology will grow exponentially. This development, along with the rapid acceptance of cloud
computing across the globe, will necessitate a radical increase of developers with an understanding of this area. To meet the growing needs of mobile
connectivity, more developers will be required who understand how cloud computing works.
Cloud computing provides an almost unlimited capacity, eliminating scalability concerns. Cloud computing gives developers access to software and
hardware assets that most small and mid-sized enterprises could not afford. Developers, using Internet-driven cloud computing and the assets that are
a consequence of this configuration, will have access to resources that most could have only dreamed of in the recent past.
The administrator
Administrators are the guardians and legislators of an IT system. They are responsible for the control of user access to the network. This means
sitting on top of the creation of user passwords and the formulation of rules and procedures for such fundamental functionality as general access to
the system assets. The advent of cloud computing will necessitate adjustments to this process since the administrator in such an environment is no
longer merely concerned about internal matters, but also the external relationship of his enterprise and the cloud computing concern, as well as the
actions of other tenants in a public cloud.
This alters the role of the firewall constructs put in place by the administration and the nature of the general security procedures of the enterprise. It
does not negate the need for the guardian of the system. With cloud computing comes even greater responsibility, not less. Under cloud computing,
the administrator must not only secure data and systems internal to the organization, they must also monitor and manage the cloud to ensure the
safety of their system and data everywhere.
The architect
The function of the architect is the effective modeling of the given system's functionality in the real IT world. The basic responsibility of the architect is development of the architectural framework of the agency's cloud computing model. The architecture of cloud computing essentially comprises the abstraction of the three-layer constructs, IaaS, PaaS, and SaaS, in such a way that the particular enterprise deploying the cloud
computing approach meets its stated goals and objectives. The abstraction of the functionality of the layers is developed so the decision-makers and
the foot soldiers can use the abstraction to plan, execute, and evaluate the efficacy of the IT system's procedures and processes.
The role of the architect in the age of cloud computing is to conceive and model a functional interaction of the cloud's layers. The architect must use
the abstraction as a means to ensure that IT is playing its proper role in the attainment of organizational objectives.

To cloud or not to cloud: Risk assessment


The main concerns voiced by those moving to the cloud are security and privacy. The companies supplying cloud computing services know this and
understand that without reliable security, their businesses will collapse. So security and privacy are high priorities for all cloud computing entities.
Governance: How will industry standards be monitored?
Governance is the primary responsibility of the owner of a private cloud and the shared responsibility of the service provider and service consumer
in the public cloud. However, given elements such as transnational terrorism, denial of service, viruses, worms and the like (which do or could have aspects beyond the control of either the private cloud owner or the public cloud service provider and service consumer), there is a need for some kind of broader collaboration, particularly on the global, regional, and national levels. Of course, this collaboration has to be instituted in a manner
that will not dilute or otherwise harm the control of the owner of the process or subscribers in the case of the public cloud.
Bandwidth requirements
If you are going to adopt the cloud framework, bandwidth and the potential bandwidth bottleneck must be evaluated in your strategy. In the CIO.com
article: The Skinny Straw: Cloud Computing's Bottleneck and How to Address It, the following statement is made:
Virtualization implementers found that the key bottleneck to virtual machine density is memory capacity; now there's a whole new slew of servers
coming out with much larger memory footprints, removing memory as a system bottleneck. Cloud computing negates that bottleneck by removing the
issue of machine density from the equation; sorting that out becomes the responsibility of the cloud provider, freeing the cloud user from worrying
about it.
For cloud computing, bandwidth to and from the cloud provider is a bottleneck.
So what is the best current solution for the bandwidth issue? In today's market the best answer is the blade server. A blade server is a server that has
been optimized to minimize the use of physical space and energy. One of the huge advantages of the blade server for cloud computing use is
bandwidth speed improvement. For example, the IBM BladeCenter is designed to run high-performance computing workloads both quickly and efficiently. Just as the memory issue had to be overcome to effectively alleviate the bottleneck of high virtual machine density, the
bottleneck of cloud computing bandwidth must also be overcome, so look to the capabilities of your provider to determine if the bandwidth
bottleneck will be a major performance issue.
Financial impact
Because a sizable proportion of the cost in IT operations comes from administrative and management functions, the implicit automation of some of
these functions will per se cut costs in a cloud computing environment. Automation can reduce the error factor and the cost of the redundancy of
manual repetition significantly.
There are other contributors to financial problems such as the cost of maintaining physical facilities, electrical power usage, cooling systems, and of
course administration and management factors. As you can see, bandwidth is not alone, by any means.

Mitigate the risk


Consider these possible risks:
Adverse impact of mishandling of data.
Unwarranted service charges.
Financial or legal problems of vendor.
Vendor operational problems or shutdowns.
Data recovery and confidentiality problems.
General security concerns.
Systems attacks by external forces.
With the use of systems in the cloud, there are ever-present risks around data security, connectivity, and malicious actions interfering with the computing
processes. However, with a carefully thought out plan and methodology of selecting the service provider, and an astute perspective on general risk
management, most companies can safely leverage this technology.

HISTORY
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone
network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.
Cloud computing is a natural evolution of the widespread adoption of virtualisation, service-oriented architecture, autonomic, and utility
computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology
infrastructure "in the cloud" that supports them.The underlying concept of cloud network bandwidth more effectively. The cloud
symbol was used to denote the demarcation point between that which computing dates back to the 1960s, when John McCarthy opined
that "computation may someday be organised as a public utility." Almost all the modern-day characteristics of cloud computing (elastic
provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public,
private, government, and community forms, were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer
Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s when scientist Herb Grosch (the
author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. The
actual term "cloud" borrows from telephony in that telecommunications companies, who until the 1990s offered primarily dedicated
point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service but at a much
lower cost. By switching traffic to balance utilisation as they saw fit, they were able to utilise their overall was the responsibility of
the provider and that which was the responsibility of the user. Cloud computing extends this boundary to cover servers as well as the
network infrastructure. After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernising
their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room
for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby
small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon initiated a new product development
effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula,
enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds,
and for the federation of clouds. In the same year, efforts were focused on providing QoS guarantees (as required by real-time interactive applications)
to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By
mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and
those who sell them" and observed that "[o]rganisations are switching from company-owned hardware and software assets to per-use service-based
models" so that the "projected shift to cloud computing ... will result in dramatic growth in IT products in some areas and significant reductions in other
areas."

ADVANTAGES AND DISADVANTAGES OF CLOUD COMPUTING


There is a huge amount of hype surrounding cloud computing, but despite this more and more C-level executives and IT decision makers agree that it
is a real technology option. It has moved from futuristic technology to a commercially viable alternative to running applications in-house.
Vendor organisations such as Amazon, Google, Microsoft and Salesforce.com have invested many millions in setting up cloud
computing platforms that they can offer out to 3rd parties. They clearly see a big future for cloud computing.

5 Reasons to Consider Adopting Cloud Computing


1. Scalability: As mentioned above, scalability is a key aspect of cloud computing. The ability of the platform to expand and contract automatically based on capacity needs (sometimes referred to as elasticity), and the charging model associated with this, are key elements that distinguish cloud computing from other forms of hosting. Cloud computing provides resources on-demand for many of the typical scaling points that an organisation needs, including servers, storage and networking. The on-demand nature of cloud computing means that as your demand grows (or contracts) you can more easily match your capacity (and costs) to your demand. There is no need to over-provision for the peaks. At the software level cloud computing allows developers and IT operations to develop, deploy and run applications that can easily grow capacity, work fast and never (or at least rarely) fail, all without any concern as to the nature and location of the underlying infrastructure.
One shouldn't forget the advantage cloud computing can offer newer or smaller players. With easy access to a cost-effective, flexible technology platform, small competitors can punch well above their weight in terms of application capacity and scalability and can quickly turn into significant adversaries.
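The elasticity described above is, at bottom, a control loop that compares observed utilisation with provisioned capacity. The following sketch is a generic illustration only; the thresholds and the notion of "adding a server" stand in for whatever provisioning API a real provider exposes:

```python
# Minimal autoscaling control loop (illustrative only).
# The 70%/30% thresholds are arbitrary assumptions; in practice the
# provisioning call would be the cloud provider's own scaling API.

def desired_servers(current: int, utilisation: float,
                    scale_out_at: float = 0.70, scale_in_at: float = 0.30) -> int:
    """Return how many servers to run given current average utilisation."""
    if utilisation > scale_out_at:
        return current + 1            # demand is growing: add capacity
    if utilisation < scale_in_at and current > 1:
        return current - 1            # demand has fallen: release capacity (and cost)
    return current                    # within the comfortable band: do nothing

# Example: a day of fluctuating utilisation, starting from 2 servers.
servers = 2
for load in [0.45, 0.75, 0.82, 0.65, 0.28, 0.22, 0.50]:
    servers = desired_servers(servers, load)
    print(f"utilisation {load:.0%} -> run {servers} server(s)")
```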
2. Cost Saving: There is still some debate about whether there are real cost savings with cloud computing. McKinsey recently published a report claiming that there was no cost saving to be had and that, on the contrary, it could work out more expensive. This report has since been debunked by others claiming that it only focussed on one (failed) project and that it didn't accurately reflect the true cost of running systems internally. On the other hand, a report by Forrester emphasises the fact that use of cloud computing matches cash flow to system benefits more appropriately than the traditional model.
In the old way of doing things, a large investment is made early in the project prior to system build-out, and well before the business benefits (presumably financial in some shape or form) are realised. This model is even more troubling given the risk factors associated with IT systems: they are notorious for failing to deliver their promised benefits, and a large percentage of projects end up scrapped due to poor user acceptance. With cloud computing you move from a capital investment to an operational expense. Whilst cost won't be the only driver in the adoption of cloud computing, it is often seen as the key factor. Clearly, if a decision to adopt cloud computing (or not) is to be based primarily on the potential cost savings, then the true cost of operating an application internally needs to be understood, and this is something that most organisations are not good at.
Many organisations home in on the cost of provisioning a server internally (including software licences) and end up comparing that with the cost of a
cloud-based solution. This inevitably will lead to the conclusion that cloud computing is more expensive.
The problem with this type of costing is that it omits a whole range of costs that are often not assigned to the internal servers, such as:
a. technical personnel necessary to keep a data centre up and running
b. extra personnel necessary to manage server procurement
c. utility bills and capital expense investments for power and cooling
d. internal technical people to do assessments and trials of different hardware offerings
e. procurement people to do the negotiating for hardware purchase
f. internal and external costs for data centre designers, facilities management, etc
g. contract and account people to keep track of all the various licenses, leases, etc.

In addition, most organisations' data centres are often oversized when they are built and typically will run at a utilisation rate of less than 60%. It is also reckoned that over the four-year life of a server, the combined facility, capital and operational expense will be up to four times greater than the cost of the server itself. One of the key advantages offered by cloud computing is that you can pay on a consumption basis, e.g. per hour, per gigabyte, etc. This has a huge impact on the economics. When a true comparison is done, using a fully costed model, the decision weighs more favourably towards cloud computing. And when the other advantages are taken into account, cloud computing can really stack up as a viable option.

Research firm IDC summed it up thus: "The cloud model offers a much cheaper way for businesses to acquire and use IT. In an economic downturn the appeal of that cost advantage will be greatly magnified."
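To see why a fully costed comparison can flip the conclusion, here is a small worked sketch. The figures (server price, the four-times-over-four-years multiplier quoted above, the assumed on-demand hourly rate, and the hours of use) are illustrative assumptions, not quotes from any provider:

```python
# Rough four-year cost comparison: owned server vs. pay-per-use instance.
# All numbers are illustrative assumptions for the sake of the arithmetic.

server_price = 5_000            # assumed purchase price of one server (USD)
overhead_multiplier = 4         # text: facility + capital + ops can be ~4x the server cost
years = 4

on_demand_rate = 0.20           # assumed on-demand price per instance-hour (USD)
hours_needed_per_day = 10       # assumed: the instance is only needed 10 hours/day

in_house_total = server_price * overhead_multiplier
cloud_total = on_demand_rate * hours_needed_per_day * 365 * years

print(f"In-house fully loaded cost over {years} years: ${in_house_total:,.0f}")
print(f"Pay-per-use cost over {years} years:          ${cloud_total:,.0f}")
```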
3. Business Agility: One of the understated advantages of cloud computing is that it enables an organisation to be more agile. The speed at which new computing capacity can be requisitioned is a vital element of cloud computing. Adding additional storage, network bandwidth, memory, computing power, etc. can be done rapidly and often instantaneously. Most cloud providers employ infrastructure software that can easily add, move, or change an application with very little, if any, intervention by cloud provider personnel. This dynamic, elastic nature of cloud computing is what gives it a big advantage over an in-house data centre. Many internal IT departments have to work through procurement processes just to add additional capacity. Once the procurement has been authorised it can still take weeks to acquire and rack new equipment. In many cases the demand for IT services is outstripping the ability of the IT department to manage using traditional practices. Cloud computing allows organisations to react more quickly to market conditions and to scale up and down as needed. New applications can be quickly released with lower up-front costs. The flexibility offered by cloud computing enables innovative ideas to be rapidly tried and tested without the need to divert existing IT staff from their daily routine. Increasingly, people won't want to spend capital on these new ideas. They'll want to pay for them operationally. They may represent a new market, a new technique, a new set of standards, or a new set of technologies. If you're starting a new line of business, you can launch with a robust, state-of-the-art infrastructure without tying up limited capital.
For development projects, organisations can provision multiple production-scale systems on demand in the cloud saving time and
expense over traditional testing scenarios and enabling faster handoff from development to operations. And when the project is
finished they can be turned off again with nothing else to pay.
4. Built-in Disaster Recovery & Back-up Sites: With cloud computing, the burden of managing technology is placed on the technology provider. It is their responsibility to provide built-in data protection, fault tolerance, self-healing and disaster recovery. Typical disaster recovery costs are estimated at twice the cost of the infrastructure. With a cloud-based model, true disaster recovery is estimated to cost little more than the base infrastructure cost, a significant saving. Additionally, because cloud service providers replicate their data, even the loss of one or two data centres will not result in lost data. Cloud computing provides a high level of redundancy at a price point traditional managed solutions cannot match. Now every business can put a plan in place to ensure they are able to continue their business in the face of radical environmental change. Even the cloud computing cynics are agreed on this.
5. Device & Location Independence: Cloud computing is already enabling greater device independence, greater portability, and greater opportunities for interconnection and collaboration. With applications and data located in the cloud it becomes much easier to
enable users to access systems regardless of their location or what device they are using. Teleworkers can be quickly brought online,
remote offices can be quickly connected, temporary teams can be easily set up on site, mobile access can be easily enabled. With the
growing use of smartphones, netbooks and other hand-held devices there is also an increasing need for data access on the go. The
success of devices such as the iPhone and its App store is also opening up a whole new world of mobile applications. Connecting
these types of applications to data stores will be significantly easier through the cloud. Location-based applications will reach their
potential through cloud computing. Many smartphones are now location-aware (using inbuilt GPS facilities) and we will increasingly
see applications that take advantage of this capability.
Cloud computing will facilitate innovation in many areas. Much of it will be driven by the ease with which different devices can
connect to cloud-based applications.
6. BONUS REASON: It's Greener: As mentioned above, most internal data centres are oversized and don't run at anything like full capacity. Most
servers run significantly below capacity (real world estimates of server utilisation in data centres range from 5% to 20%) yet they still consume close
to the same amount of power and require the same amount of cooling as a full capacity machine (granted that Virtualisation is changing this in some
cases). A typical data centre consumes up to 100 times more power than an equivalent sized office building. The carbon footprint of a typical data
centre is therefore a significant concern for many organisations. In a cloud computing environment resources are shared across applications (and
even customers) resulting in greater use of the resources for a similar energy cost. For corporations spread over different time zones the computing
power lying idle at one geographic location (during off-work hours) could be harnessed at a location in a different time zone. This reduces not only
the power consumption but also the amount of physical hardware required.
With cloud computing virtual offices can be quickly set up. Employees can easily work from home. Travelling salespeople can have all their data
available in any location without needing to visit the office. These are just some of the other examples of how the carbon footprint can be reduced.

5 Reasons to Consider Avoiding Cloud Computing


1. Security: In nearly every survey done about cloud computing the top reason given for not adopting it is a concern over security. Putting your
business-critical data in the hands of an external provider still sends shivers down the spines of most CIOs. Only by giving up some control over the
data can companies get the cost economies that are available. CIOs, along with other C-level executives, must decide if that trade-off is worthwhile.
In deciding on the trade-off some of the questions to consider are:
- What happens if the data stored or processed on a cloud machine gets compromised?
- Will we know?
- If we do not know, how will we notify our constituents, especially when data breach notification laws are in place?
- How will we know to improve our security?
There is a school of thought that says that holding your data in the cloud is not much more insecure than having it on internal servers connected to
the Internet. The recent case in the UK of a hacker who hacked his way into the US Government network shows that supposedly secure networks are
just as likely to be breached.
Companies need to be realistic about the level of security they achieve inside their own business, and how that might compare to a cloud provider.
It's well known that more than 70% of intellectual property breaches are a result of attacks made inside the organisation.
Clearly security will be raised as a concern around cloud computing for many years to come. There is still some work to be done before more
formalised standards are in place. Organisations like the Cloud Security Alliance are at the forefront of addressing these issues.
In the same way that some banks took longer than others to offer internet banking facilities so it will be with cloud computing. Some organisations
will evaluate the risks and adopt cloud computing quickly. Other more conservative organisations will hang back and watch developments.

2. Data Location & Privacy: Data in a cloud computing environment has to exist on physical servers somewhere in the world, and the physical location of those servers is important under many nations' laws. This is especially important for companies that do business across national
boundaries, as different privacy and data management laws apply in different countries.
For example, the European Union places strict limits on what data can be stored on its citizens and for how long. Many banking regulators also
require customers' financial data to stay in their home country. Many compliance regulations require that data not be intermixed with other data, such
as on shared servers or databases.
In another example, Canada is concerned about its public sector projects being hosted on U.S.-based servers because, under the U.S. Patriot Act, the data could be accessed by the U.S. government. Some of the larger cloud providers (e.g. Microsoft, Google) have recognised this issue and are starting to
allow customers to specify the location of their data.
Another data issue to address is: What happens to your data in a legal entanglement?
What if you miss paying a bill, or decide not to pay a bill for various reasons, like dissatisfaction with the service? Do you lose access to your data?
This is something that can be addressed at the contract stage to ensure that the right safeguards are put in place to prevent a provider from
withholding access to your data.

3. Internet Dependency, Performance & Latency: A concern for many organisations is that cloud computing relies on the availability, quality and
performance of their internet connection.
Dependency on an internet connection raises some key questions:

- What happens if we lose our internet connection?


- How long can we run our business?
Moving an existing in-house application to the cloud will almost certainly have some trade-offs in terms of performance. Most existing enterprise
applications won't have been designed with the cloud in mind.
Organisations considering investing in cloud computing will certainly have to factor in costs for improving the network infrastructure required to run
applications in the cloud.
On the plus side, bandwidth continues to increase and approaches such as dynamic caching, compression, pre-fetching and other related
web-acceleration technologies can result in major performance improvements for end users, often exceeding 50%.
At the software level, applications will have to be architected for the cloud in order to achieve maximum performance. We're well down the road with thin-client, browser-based applications, but many of these would still need re-designing in order to benefit from a cloud environment.
In reality the vast majority of applications don't require the levels of nanosecond performance that might be impacted by putting them in the cloud.
In the real world, scalability is likely to be a more important issue across more applications.
Latency (the time taken, or delay, for a packet of data to get from one designated point to another) will undoubtedly be an issue for certain
applications. Trading applications that require near-zero latency will probably be run in-house for many years to come. One of the key problems with
putting these applications in the cloud is that latency on the Internet is highly variable and unpredictable.
There are many cloud computing commentators who claim that the cloud will never be able to support these types of applications. However, vendors
such as Juniper and IBM are already demonstrating extremely low-latency capabilities in the cloud, so it's a case of watch this space.
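Because Internet latency is variable and unpredictable, it is worth measuring rather than assuming. The small sketch below times repeated TCP connection set-ups to an endpoint; the host and port are placeholders to be replaced with your own provider's address:

```python
# Crude latency probe: time TCP connection setup to a cloud endpoint several
# times and report the spread. The host/port are placeholders.

import socket
import statistics
import time

HOST, PORT = "example.com", 443     # placeholder endpoint: replace with your provider
SAMPLES = 10

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass                        # connection established; close immediately
    rtts.append((time.perf_counter() - start) * 1000)   # milliseconds
    time.sleep(0.5)

print(f"min {min(rtts):.1f} ms, median {statistics.median(rtts):.1f} ms, "
      f"max {max(rtts):.1f} ms over {SAMPLES} samples")
```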

4. Availability & Service Levels: One of the most common concerns regarding cloud computing is the potential for down-time if the system isn't
available for use. This is a critical issue for line-of-business apps, since every minute of downtime is a minute that some important business function
can't be performed. Every minute of down-time can not only affect revenue but can also cause reputation damage. These concerns are further
exacerbated by the recent highly public outages at some of the major cloud providers such as Google, Salesforce.com and Amazon.
As a counter to these concerns, cloud-computing advocates are quick to point out that few enterprise IT infrastructures are as good as those run by
major cloud providers.
Whilst highly public outages get lots of press coverage and help feed the views of the cloud computing cynics, one needs to compare this against
in-house outages which rarely get publicised. Just how many times per year are internal systems down and/or unavailable? How does this compare to
a typical cloud computing scenario?
Many companies thinking of adopting cloud computing will look to the service-level agreements (SLAs) to give them some comfort about
availability. Surprisingly, some cloud providers don't even offer SLAs, and many others offer inadequate SLAs (in terms of guaranteed uptime).
Cloud providers will need to get serious about offering credible SLAs if the growth of cloud computing is not to stall. Increased competition will
help and will push the early entrants to provide greater assurances to their customers.
Just because a provider says that they can deliver a particular service over the Web better than an internal IT organisation can doesn't make it
necessarily true. And even if they can, how do you know they are doing that consistently and, if they aren't, what compensation is due back to the
customer?
Cynics will say service level agreements are not worth the paper they are written on. They will point out that an SLA doesn't necessarily assist in
obtaining high quality uptime, but provides the basis for conflict negotiation when things don't go well. A bit like a pre-nuptial agreement.
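When reading the "guaranteed uptime" figure in an SLA, it helps to translate the percentage into the downtime it still permits. A small worked sketch:

```python
# Convert an SLA uptime percentage into the downtime it still permits.

MINUTES_PER_MONTH = 30 * 24 * 60
MINUTES_PER_YEAR = 365 * 24 * 60

for uptime in (99.0, 99.9, 99.95, 99.99):
    down_fraction = 1 - uptime / 100
    print(f"{uptime}% uptime allows about "
          f"{down_fraction * MINUTES_PER_MONTH:.0f} min/month "
          f"({down_fraction * MINUTES_PER_YEAR / 60:.1f} hours/year) of downtime")
```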

5. Current Enterprise Applications Can't Be Migrated Easily: Moving an existing application to a cloud platform is not as easy as it might first appear. Different cloud providers impose different application architectures, which are invariably dissimilar to the architectures of existing enterprise applications. So for a complex multi-tier application that depends on internal databases and that serves thousands of users with ever-changing access rights, it's not going to be an easy switch-over to a cloud platform.

In reality most organisations that adopt cloud computing will end up doing it with new applications. Existing applications will probably continue to
run on-premise for some time. That's not to say that these applications can't be converted, but the costs of conversion will often outweigh the
benefits.
Amazon Web Services offers the most flexibility with regard to migrating applications because it provisions an "empty" image that you can put
anything into. However, applications cannot always be easily moved due to AWS's idiosyncratic storage framework.
With some other cloud platforms (e.g. Microsoft Azure) it may be possible to take existing applications and, with minimal effort, modify them to run
in the cloud. Much will depend on the existing architecture of the application though. If it was developed using underlying web services then it is
likely that it can be modified relatively easily to run in the cloud.
Whilst the lack of a convenient migration path for existing applications might hinder cloud computing adoption, in the longer term it is not going to
be a permanent barrier. As more vendors build applications for cloud deployment so will the uptake of cloud computing grow.

CLOUD TYPES
Normal clouds are classified into the following categories:
1. High level clouds
2. Mid level clouds
3. Low level clouds
4. Vertically developed clouds
5. Other type clouds
Cloud computing is typically classified in two ways:
1. Location of the cloud computing
2. Type of services offered
Location of the cloud
Based on the location of the infrastructure, clouds are typically classified in the following ways:
Public Clouds: In a public cloud the computing infrastructure is hosted by the cloud vendor at the vendor's premises. The customer has no visibility or control over where the computing infrastructure is hosted. The computing infrastructure is shared among multiple organizations.
These are the clouds which are open for use by the general public and which exist beyond the firewall of an organization, fully hosted and managed by vendors like Google, Amazon, Microsoft, etc. They strictly follow a pay-as-you-go model, which helps start-ups to start small and go big without investing much in IT infrastructure. Here a user does not have control over the management of the resources. Everything is managed by the third party, and it is their responsibility to apply software updates, security patches, etc.
Though public clouds are quite effective and ease an organization's effort since everything is already there, they do face some criticism, especially on security-related issues.

Private Clouds: The computing infrastructure is dedicated to a particular organization and not shared with other organizations. Some experts
consider that private clouds are not real examples of cloud computing. Private clouds are more expensive and more secure when compared to public
clouds.
Private clouds are of two types: On-premise private clouds and externally hosted private clouds. Externally hosted private clouds are also exclusively
used by one organization, but are hosted by a third party specializing in cloud infrastructure. Externally hosted private clouds are cheaper than
On-premise private clouds.
These are the types of clouds which exist within the boundaries (firewall) of an organization. A private cloud is totally managed by the enterprise and has all the features of a public cloud, with the major difference that the organization has to take care of the underlying IT infrastructure. They are more secure as they are internal to an organization, and the organization can shuffle resources according to its business needs. They are best suited for applications which require tight security, follow stringent policies, or are meant for regulatory purposes. It is not very easy for an organization to go with a private cloud due to its complexity and management overhead, so they are most often used by enterprises who have made huge investments in their IT infrastructure and have the manpower and ability to manage it.

Hybrid Clouds: Organizations may host critical applications on private clouds and applications with relatively fewer security concerns on the public cloud. The usage of both private and public clouds together is called a hybrid cloud. A related term is Cloud Bursting. In cloud bursting, an organization uses its own computing infrastructure for normal usage but accesses the cloud for high/peak load requirements (a minimal dispatch sketch follows this list of cloud types). This ensures that a sudden increase in computing requirements is handled gracefully.
Hybrid clouds consist of external and internal providers, i.e. a mix of public and private clouds. Secure and critical apps are managed by the organization, and the not-so-critical or less-secure apps by the third-party vendor. They have a unique identity, bound by standard technology, thus enabling data and application portability. They are used in situations like cloud bursting. In most countries we are going to see a lot of investment in hybrid clouds in the next decade, for the simple reason that a lot of companies are skeptical about cloud security and prefer that the critical data be managed by themselves and the non-critical data by the external provider.
From an end-user perspective, public clouds will be the more interesting: we all use public cloud services like Microsoft Office Web Apps, Windows Live Mesh 2011, Google Docs, etc., whereas enterprises will have more interest in private and hybrid clouds. I would suggest they check Microsoft Exchange Online, SharePoint Online, etc. for this.

Community cloud: This involves sharing of computing infrastructure between organizations of the same community. For example, all
Government organizations within the state of California may share computing infrastructure on the cloud to manage data related to
citizens residing in California.
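As flagged under hybrid clouds above, cloud bursting reduces to a placement decision: keep work in-house while on-premise capacity lasts, then overflow to the public cloud. The sketch below is a generic illustration; the capacity figure and the two dispatch functions are assumptions, not any vendor's API:

```python
# Illustrative cloud-bursting dispatcher. run_on_premise()/run_in_public_cloud()
# stand in for whatever job-submission mechanism an organisation actually uses.

ON_PREMISE_CAPACITY = 100        # assumed: concurrent jobs the internal infrastructure can hold

def run_on_premise(job):         # placeholder for the internal scheduler
    print(f"{job}: scheduled on the private infrastructure")

def run_in_public_cloud(job):    # placeholder for the public-cloud API call
    print(f"{job}: burst out to the public cloud")

def dispatch(job, jobs_currently_running: int) -> None:
    """Send the job in-house while there is room, otherwise burst to the cloud."""
    if jobs_currently_running < ON_PREMISE_CAPACITY:
        run_on_premise(job)
    else:
        run_in_public_cloud(job)

# Example: work beyond the on-premise capacity overflows to the public cloud.
dispatch("nightly-report", jobs_currently_running=42)
dispatch("peak-load-batch", jobs_currently_running=100)
```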
Classification based upon service provided
Based upon the services offered, clouds are classified in the following ways:
A lot of terms have become part of the cloud computing lexicon, none more popular than SaaS, PaaS and IaaS. Here's an attempt to remove the
layers of complexity and present them in a language even technophobes can understand.

SaaS: Software as a Service (SaaS) is software that is deployed over the Internet, available to the end user as and when wanted. Hence, it's also
known as software on demand. Payment can either be as per usage, on a subscription model or even free if advertisement is part of the equation.
While SaaS offers several advantages like accessibility from any location, rapid scalability and bundled maintenance, there may be certain security
concerns, especially for users who desire high security and control, as that domain is in the hands of the provider. In fact, that is one of the arguments
forwarded by open-source proponent Richard Stallman against SaaS. (See: Who Doesn't Like Cloud Computing?)
SaaS may be considered the oldest and most mature type of cloud computing. Examples include Salesforce.com sales management applications,
NetSuite, Google's Gmail and Cornerstone OnDemand.

PaaS: Platform as a Service (PaaS) is a combination of a development platform and a solution stack, delivered as a service on demand. It provides
infrastructure on which software developers can build new applications or extend existing ones without the cost and complexity of buying and
managing the underlying hardware and software and provisioning hosting capabilities. In other words, it provides the supporting infrastructure to
enable the end user to develop his or her own solutions.
In addition to firms' IT departments, who use PaaS to customize their own solutions, its users include independent software vendors (ISVs) as well,
those who develop specialized applications for specific purposes. While earlier application development required hardware, an operating system, a
database, middleware, Web servers, and other software, with the PaaS model only the knowledge to integrate them is required. The rest is taken care
of by the PaaS provider.
Sometimes, PaaS is used to extend the capabilities of applications developed as SaaS. Examples of PaaS include Salesforce.com's Force.com, Google's App Engine, and Microsoft's Azure.

IaaS: Infrastructure as a Service (IaaS) delivers computer infrastructure, typically a platform virtualization environment, as a service. This includes servers, software, data-center space and network equipment, available in a single bundle and billed as per usage in a utility computing model.
IaaS is generally used by organizations that have the in-house expertise to manage their IT requirements but don't have the infrastructure. They hire the required infrastructure from IaaS providers, load up their libraries, applications, and data, and then configure them themselves. A popular use of IaaS is in hosting websites, where the in-house infrastructure is not burdened with this task but left free to manage the business. Amazon's Elastic Compute Cloud (EC2) is a major example of IaaS. Rackspace's Mosso and GoGrid's ServePath are other IaaS offerings.
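To make the "hire the infrastructure, then load it up and configure it yourself" workflow concrete, here is a minimal sketch using the boto3 library against Amazon EC2. The AMI ID and key name are placeholders, and a real call also requires AWS credentials to be configured:

```python
# Minimal IaaS provisioning sketch with boto3 (Amazon EC2).
# The ImageId, KeyName and region below are placeholders, not real values.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder machine image chosen by the customer
    InstanceType="t2.micro",     # the "size" of infrastructure being rented
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",        # placeholder SSH key for configuring the server yourself
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested one instance: {instance_id}")
# From here the organisation installs its own libraries, applications and data,
# which is the IaaS division of responsibility described above.
```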
One important thing to note here: there is considerable overlap between SaaS, PaaS and IaaS, and with the rapid changes in the field, definitions are in flux. In fact, the same service may be categorized into any of the three depending on who is making the categorization: a developer, a system administrator, or a manager.

CHARACTERISTICS OF CLOUD COMPUTING


Cloud computing exhibits the following key characteristics:

Empowerment of end-users of computing resources by putting the provisioning of those resources in their own control, as opposed to the
control of a centralized IT service (for example)

Agility improves with users' ability to re-provision technological infrastructure resources.

Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way the
user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs (a minimal call sketch follows this list).

Cost is claimed to be reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure.[15] This is
purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or
infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are
required for implementation (in-house).[16]

Device and location independence enable users to access systems using a web browser regardless of their location or what device they are
using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect
from anywhere.

Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:

Centralisation of infrastructure in locations with lower costs (such as real estate, electricity, etc.)

Peak-load capacity increases (users need not engineer for highest possible load-levels)
Utilisation and efficiency improvements for systems that are often only 10-20% utilised.

Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business
continuity and disaster recovery.

Scalability and Elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without
users having to engineer for peak loads.

Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.

Security could improve due to centralisation of data, increased security-focused resources, etc., but concerns can persist about loss of control
over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part
because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of
security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being
shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part
motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.

Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer.
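As referenced in the API characteristic above, cloud services are typically driven through REST calls over HTTP. The following sketch uses the Python requests library against a placeholder endpoint; the URL, token and payload are illustrative, not any specific provider's API:

```python
# Generic REST-style provisioning call (illustrative endpoint and fields only).

import requests

API_BASE = "https://cloud.example.com/v1"          # placeholder provider endpoint
headers = {"Authorization": "Bearer <api-token>"}  # placeholder credential

# Ask the (hypothetical) service to provision 10 GB of storage.
resp = requests.post(f"{API_BASE}/volumes",
                     json={"size_gb": 10, "name": "demo-volume"},
                     headers=headers,
                     timeout=30)
resp.raise_for_status()
print("Provisioned:", resp.json())
```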

UNDERSTANDING CLOUD ARCHITECTURE

Introduction: Cloud Architectures are designs of software applications that use Internet-accessible, on-demand services. Applications built on Cloud Architectures are such that the underlying computing infrastructure is used only when it is needed (for example, to process a user request); they draw the necessary resources on demand (like compute servers or storage), perform a specific job, then relinquish the unneeded resources and often dispose of themselves after the job is done. While in operation the application scales up or down elastically based on resource needs.
In the first section, we describe an example of an application that is currently in production using the on-demand infrastructure provided by Amazon
Web Services. This application allows a developer to do pattern-matching across millions of web documents. The application brings up hundreds of
virtual servers on-demand, runs a parallel computation on them using an open source distributed processing framework called Hadoop, then shuts
down all the virtual servers, releasing all its resources back to the cloud, all with low programming effort and at a very reasonable cost for the caller.
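The pattern-matching application described above is essentially a distributed grep. As a hedged illustration of the kind of code each of those on-demand virtual servers runs, here is a tiny Hadoop Streaming style mapper; the regular expression is an arbitrary example, and launching and tearing down the EC2 cluster is handled separately:

```python
#!/usr/bin/env python3
# Hadoop Streaming mapper for a distributed "grep": each mapper instance in the
# cloud reads its share of web documents on stdin and emits matching lines.
# The pattern below is an arbitrary example, not from the original application.

import re
import sys

PATTERN = re.compile(r"cloud\s+computing", re.IGNORECASE)

for line in sys.stdin:
    if PATTERN.search(line):
        # key<TAB>value output, as Hadoop Streaming expects
        sys.stdout.write(f"match\t{line}")
```

On a real cluster, a script like this would be supplied to the Hadoop Streaming jar as the mapper, with each of the hundreds of on-demand instances processing its own share of the input.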
In the second section, we discuss some best practices for using each Amazon Web Service - Amazon S3, Amazon SQS, Amazon SimpleDB and
Amazon EC2 - to build an industrial-strength scalable application.

Why Cloud Architectures?: Cloud Architectures address key difficulties surrounding large-scale data processing. First, in traditional data processing it is difficult to get as many machines as an application needs. Second, it is difficult to get the machines when one needs them. Third, it is difficult to
distribute and co-ordinate a large-scale job on different machines, run processes on them, and provision another machine to recover if one machine
fails. Fourth, it is difficult to auto-scale up and down based on dynamic workloads. Fifth, it is difficult to get rid of all those machines when the job is
done. Cloud Architectures solve such difficulties. Applications built on Cloud Architectures run in-the-cloud where the physical location of the
infrastructure is determined by the provider. They take advantage of simple APIs of Internet-accessible services that scale on-demand, that are
industrial-strength, where the complex reliability and scalability logic of the underlying services remains implemented and hidden inside-the-cloud.
The usage of resources in Cloud Architectures is as needed, sometimes ephemeral or seasonal, thereby providing the highest utilization and optimum
bang for the buck.

Business Benefits of Cloud Architectures: There are some clear business benefits to building applications using Cloud Architectures. A few of
these are listed here:

Almost zero upfront infrastructure investment: If you have to build a large-scale system it may cost a fortune to invest in real estate, hardware
(racks, machines, routers, backup power supplies), hardware management (power management, cooling), and operations personnel. Because
of the upfront costs, it would typically need several rounds of management approvals before the project could even get started. Now, with
utility-style computing, there is no fixed cost or startup cost.
Just-in-time Infrastructure: In the past, if you got famous and your systems or your infrastructure did not scale you became a victim of your
own success. Conversely, if you invested heavily and did not get famous, you became a victim of your failure. By deploying applications
in-the-cloud with dynamic capacity management software architects do not have to worry about pre-procuring capacity for large-scale
systems. The solutions are low risk because you scale only as you grow. Cloud Architectures can relinquish infrastructure as quickly as you
acquired it in the first place (in minutes).
More efficient resource utilization: System administrators usually worry about hardware procurement (when they run out of capacity) and better infrastructure utilization (when they have excess and idle capacity). With Cloud Architectures they can manage resources more effectively and efficiently by having the applications request and relinquish only the resources they need, on demand.
Usage-based costing: Utility-style pricing allows billing the customer only for the infrastructure that has been used. The customer is not
liable for the entire infrastructure that may be in place. This is a subtle difference between desktop applications and web applications. A
desktop application or a traditional client-server application runs on customer's own infrastructure (PC or server), whereas in a Cloud
Architectures application, the customer uses a third party infrastructure and gets billed only for the fraction of it that was used.
Potential for shrinking the processing time: Parallelization is one of the great ways to speed up processing. If a compute-intensive or data-intensive job that can be run in parallel takes 500 hours to process on one machine, with Cloud Architectures it would be possible to spawn and launch 500 instances and process the same job in 1 hour (as sketched below). Having an elastic infrastructure available provides the application with the ability to exploit parallelization in a cost-effective manner, reducing the total processing time.
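A tiny worked version of the 500-hour example: under usage-based pricing, 500 machines for 1 hour cost the same as 1 machine for 500 hours, so wall-clock time collapses while cost stays flat. The hourly rate below is an assumption:

```python
# Cost of a perfectly parallel 500-hour job under usage-based pricing.
# The hourly rate is an assumption; the point is that total machine-hours
# (and therefore cost) stay the same while wall-clock time collapses.

total_machine_hours = 500
hourly_rate = 0.10                     # assumed price per instance-hour (USD)

for instances in (1, 50, 500):
    wall_clock_hours = total_machine_hours / instances
    cost = total_machine_hours * hourly_rate
    print(f"{instances:>3} instance(s): {wall_clock_hours:>5.1f} h wall-clock, "
          f"${cost:.2f} total")
```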
