
Cloud Computing

Books
Distributed and Cloud Computing: From Parallel Processing to the Internet of Things by Kai Hwang et al.
Cloud Computing Bible by Barrie Sosinsky
Cloud Security and Privacy by Tim Mather et al.
UNIT I
History of Centralized and Distributed Computing
Overview of distributed computing, cluster computing, and grid computing
Technologies for network-based systems
System models for distributed and cloud computing
Software environments for distributed systems and clouds
History of Centralized and Distributed
Systems
Computer systems: from centralized to network-based systems
From centralized to distributed computing
Instead of using a centralized computer, parallel and distributed computing systems use multiple computers to solve large-scale problems over the Internet.
These systems are data-intensive and network-centric.
Evolution of Computing
Mechanical systems
Electronic systems
Mainframes
Minicomputers
Personal computers
Parallel systems
Distributed computing
World Wide Web

Distributed Computing
Networked systems
Client-server computing
Distributed systems
Distributed object management
Evolution of the WWW
World Wide Web / Mosaic -> HTTP, HTML, XML -> E-commerce -> Services computing -> Semantic Web -> Social media
What is the Internet?
A network of networks, joining many government, university, and private computers together and providing an infrastructure for the use of e-mail, bulletin boards, file archives, hypertext documents, databases, and other computational resources.
The vast collection of computer networks which form and act as a single huge network for the transport of data and messages across distances ranging from within the same office to anywhere in the world.
Computer Systems: Single -> Global
Single systems (multi-core): PC/Workstation, SMP/NUMA, Vector, Mainframe
Distributed systems (multiple systems): Clusters, Clouds, Client-Server, Grids, Peer-to-Peer
Control and management: centralised or decentralised
Data Deluge Enabling New Challenges
Parallel and Distributed Computing
Evolutionary Changes in parallel and distributed computing
High performance computing
High throughput computing
They appear as
Computational clusters
Service-oriented architectures
Computational grids
Peer-to-peer networks
Internet clouds
Internet of Things
These systems are distinguished by their
Hardware architecture
OS platforms
Processing algorithms
Communication protocols
Service models applied
From Desktop/HPC/Grids to
Internet Clouds
HPC has been moving from centralized supercomputers to geographically distributed desktops, clusters, and grids, and now to clouds, over the last 30 years.

R&D efforts on HPC, clusters, grids, P2P, and virtual machines have laid the foundation of cloud computing, which has been greatly advocated since 2007.

Computing infrastructure is increasingly located in areas with lower costs for hardware, software, datasets, space, and power, moving from desktop computing to datacenter-based clouds.
Internet Computing
To meet the requirements of billions of users, supercomputer sites and large data centers must provide high-performance computing services to many users concurrently.
High-throughput systems must be built with parallel and distributed computing technologies.
Data centers must be upgraded with fast servers, storage systems, and high-bandwidth networks.
The purpose is to advance network-based computing and web services with emerging new technologies.
The general computing trend is to leverage shared web resources and massive amounts of data over the Internet.
On the HPC side
Supercomputers (MPPs) are gradually being replaced by clusters of cooperating computers.
The emphasis is on raw speed, with performance ranging from Gflops to Pflops (a rough scale illustration follows below).
The development of market-oriented high-end computing is driving a change from the HPC paradigm to the HTC paradigm.
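As a rough illustration of the speed scale mentioned above (a standard unit conversion, not stated in the slides), 1 Gflops is 10^9 floating-point operations per second and 1 Pflops is 10^15:

```python
# Scale illustration (standard unit conversion, not taken from the slides).
GFLOPS = 10**9    # one billion floating-point operations per second
PFLOPS = 10**15   # one quadrillion floating-point operations per second

print(f"A 1 Pflops machine is {PFLOPS // GFLOPS:,}x faster than a 1 Gflops machine")
# -> A 1 Pflops machine is 1,000,000x faster than a 1 Gflops machine
```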
On the HTC side
P2P systems are built over many client machines.
The performance goal shifts to high throughput, measured as the number of tasks completed per unit of time.
HTC technology needs to
improve batch processing speed
address the acute problems of cost, energy savings, and reliability.
Evolutionary Trend: Three New Computing Paradigms
With the introduction of SOA, Web 2.0 services become available.
Advances in virtualization make it possible to see the growth of Internet clouds.
The maturity of radio-frequency identification (RFID), global positioning system (GPS), and sensor technologies has triggered the development of the Internet of Things (IoT).
Future HPC and HTC
Will demand multicore or many-core processors that can handle large numbers of computing threads per core
Will emphasize parallelism and distributed computing
Must satisfy a huge demand for computing power in terms of:
Throughput
Efficiency (speed, programming, and energy factors, i.e. throughput per watt of energy consumed; a small illustrative calculation follows this list)
Scalability
Reliability
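A minimal sketch of the throughput-per-watt idea from the efficiency item above; the two clusters and all their numbers are hypothetical, invented only to show how the metric compares systems:

```python
# Hypothetical comparison of two systems by throughput per watt (jobs/s/W).
# Neither cluster nor any figure below comes from the slides.

def throughput_per_watt(jobs_completed: int, seconds: float, avg_watts: float) -> float:
    """Jobs per second divided by average power draw."""
    return (jobs_completed / seconds) / avg_watts

cluster_a = throughput_per_watt(jobs_completed=120_000, seconds=3_600, avg_watts=4_000)
cluster_b = throughput_per_watt(jobs_completed=90_000, seconds=3_600, avg_watts=2_500)

print(f"Cluster A: {cluster_a:.5f} jobs/s per watt")   # 0.00833
print(f"Cluster B: {cluster_b:.5f} jobs/s per watt")   # 0.01000
# Cluster B finishes fewer jobs in total but delivers more throughput per
# watt, so it is the more energy-efficient system under this metric.
```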
Applications of HPC and HTC
Science and engineering
Scientific simulations, genomic analysis, etc.
Earthquake prediction, global warming, weather forecasting, etc.
Business, education, services industry, and health care
Telecommunication, content delivery, e-commerce, etc.
Banking, stock exchanges, transaction processing, etc.
Air traffic control, electric power grids, distance education, etc.
Health care, hospital automation, telemedicine, etc.
Internet and web services and government applications
Internet search, data centers, decision-making systems, etc.
Traffic monitoring, worm containment, cyber security, etc.
Digital government, online tax return processing, social networking, etc.
Mission-critical applications
Military command and control, intelligent systems, crisis management, etc.
Design Objectives to Meet Future HPC and HTC
Efficiency measures
HPC: the utilization rate of resources in an execution model, achieved by exploiting massive parallelism
HTC: efficiency is more closely related to job throughput, data access, storage, and power efficiency
(A sketch contrasting the two views follows after this list.)
Dependability measures the reliability and self-management from the chip to the system and application levels.
The purpose is to provide high-throughput service with quality-of-service (QoS) assurance even under failure conditions.
Adaptation in the programming model measures the ability to support billions of job requests over massive data sets and virtualized cloud resources under various workload and service models.
Flexibility in application deployment measures the ability of distributed systems to run both HPC (science and engineering) and HTC (business) applications.
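A minimal sketch contrasting the two efficiency views listed above: utilization rate for HPC and job throughput for HTC. The function names and all numbers are hypothetical, chosen only to make the contrast concrete:

```python
# HPC efficiency as utilization rate (sustained / peak performance) versus
# HTC efficiency as job throughput. All figures are hypothetical examples.

def hpc_utilization(sustained_gflops: float, peak_gflops: float) -> float:
    """Fraction of the machine's peak floating-point rate actually achieved."""
    return sustained_gflops / peak_gflops

def htc_throughput(jobs_completed: int, hours: float) -> float:
    """Jobs completed per hour, an HTC-style efficiency measure."""
    return jobs_completed / hours

print(f"HPC utilization: {hpc_utilization(65_000, 100_000):.0%}")        # 65% of peak
print(f"HTC throughput:  {htc_throughput(240_000, 24):,.0f} jobs/hour")  # 10,000 jobs/hour
```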
