
Parallel Computing: An Introduction

Bing Bing Zhou School of Information Technologies University of Sydney bing.zhou@sydney.edu.au

The Goals
! This course presents the foundational concepts of
high performance computing
advanced computer architectures
the basics of parallel programming
algorithm design and analysis
! The main theme: A New Way of Thinking

Contents
! Issues in High Performance Computing
! Parallel Architectures
! Parallel Algorithm Design
! Performance Analysis
! MPI and Pthread

The Course
! Lectures:
Mon: 14:30-16:05, C12-N402
Wed: 19:00-21:30, C12-N403
Fri: 16:25-18:00, C12-N403
! Assignments: two small MPI programming projects
To pass this subject, you must get at least the pass mark for both assignments.
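For orientation only (the actual project specifications are not given in these slides), every MPI program in C shares a skeleton like the following sketch:

/* Minimal MPI skeleton in C: each process reports its rank.
   Compile with an MPI wrapper compiler (e.g. mpicc) and launch
   with a launcher such as mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes    */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                          /* shut down MPI                */
    return 0;
}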

References
! Introduction to Parallel Computing by Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, Addison Wesley, 2003
! Introduction to Parallel Processing: Algorithms and Architectures by Behrooz Parhami, Plenum Press, 1999
! Parallel Programming in C with MPI and OpenMP by Michael J. Quinn, McGraw-Hill, 2003

High Performance Computing


! High performance computing (HPC) is the main motivation for using parallel supercomputers, computer clusters and other advanced parallel/distributed computing systems:
High speed
High throughput

Obtaining Performance
! Increasing the speed of sequential microprocessors:
Increasing clock frequency
Implicit parallelism (ILP)
! Parallelizing (explicitly) the process of computing!
! Specialized computers for a certain class of problems.

Parallel Computing
! Parallel Computing (or Processing) refers to a large class of methods (algorithms, architectures, software tools, etc.) that attempt to increase computing speed by performing more than one computation concurrently.

Why Parallel Computing?


! The promise of parallelism has fascinated researchers for at least three decades.
! In the past, parallel computing efforts have shown promise and gathered investment, but in the end, uniprocessor computing always prevailed.
! We argue general-purpose computing is taking an irreversible step toward parallel architectures:
Technology push
Application driven

Moore's Law
! Each new chip contained roughly twice as much capacity as its predecessor.
! Each new chip was released within 18-24 months of the previous chip.
[Figure: Moore's Law trend. Source: Intel Corp]

Processor Performance
! Processor performance improved by 52% per year between 1986 and 2002.
! Since 2002, performance has improved less than 20% per year.

Technological Limitations
! Frequency wall: Increasing frequencies and deeper pipelines have reached diminishing returns on performance.
! Power wall: The chip will melt if it runs any faster (at a higher clock rate).
! ILP wall: There are diminishing returns on finding more ILP (instruction-level parallelism).
! Memory wall: Load and store are slow, but multiply is fast. Modern microprocessors can take many more clock cycles to access Dynamic Random Access Memory (DRAM) than to perform floating-point multiplies.


Multi-Core Technology
! Intel Announcement (12/2006):
"Intel Corporation researchers have developed the world's first programmable processor that delivers supercomputer-like performance (1+ TFlops, 1+ Tb/s) from a single, 80-core chip not much larger than the size of a fingernail, while using less electricity (~65W) than most of today's home appliances."


The New Wave


! The rate of technological progress for networking is an astounding 10-fold increase every 4 years (77.8% yearly compound rate).
! The emergence of network-centric computing (as opposed to processor-centric):
distributed high performance/throughput computing

Parallel vs Distributed Computing


! Parallel computing splits a single application up into tasks that are executed at the same time; it is more like a top-down approach.
! Distributed computing considers a single application which is executed as a whole but at different locations; it is more like a bottom-up approach.


Parallel vs Distributed Computing


! Parallel computing is about decomposition:
how we can perform a single application concurrently,
how we can divide a computation into smaller parts which may potentially be executed in parallel.
! Distributed computing is about composition:
what happens if many distributed processes interact with each other,
whether a global function can be achieved although there is no global time or state.

Parallel vs Distributed Computing


! Parallel computing considers how to reach a maximum degree of concurrency:
Scientific computing
! Distributed computing considers reliability and availability:
Information/resource sharing


Parallel vs Distributed Computing


! The differences are now blurred, especially after the introduction of Grid computing and Cloud computing.
! The two related fields have many things in common:
Multiple processors
Networks connecting the processors
Multiple computing activities and processes
Input/output data distributed among processors

The Network is the Computer


[Diagram: machines and a supercomputer connected by LANs (Local Area Networks) and WANs (Wide Area Networks).]
The Network is the Computer!



The Network is the Computer


[The same diagram, with the caption:]
"When the network is as fast as the computer's internal links, the machine disintegrates across the net into a set of special purpose appliances."
The Network is the Computer!


Cluster Computing
! A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer.
! The components of a cluster are commonly, but not always, connected to each other through fast local area networks.
! Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

The Internet


Computational Grids


Grid Computing
! Grid computing is the combination of computer resources from multiple administrative domains applied to a common task, usually a scientific, technical or business problem that requires a great number of computer processing cycles or the processing of large amounts of data.
! It is a form of distributed computing whereby a "super, virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
! This technology has been applied to computationally intensive scientific, mathematical, and academic problems, and is used in commercial enterprises for data-intensive applications.

Cloud Computing
! Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction (NIST's definition).
! Cloud computing describes a new supplement, consumption and delivery model for IT services based on the Internet; it typically involves the provision of dynamically scalable and often virtualized resources (storage, platform, infrastructure, and software) as a service over the Internet.

Computing Milieu
! When computing cost improves, the opportunities for computers multiply.
! Science
storm forecasting and climate prediction
understanding biochemical processes of living organisms
! Engineering
computational fluid dynamics and airplane design
earthquake and structural modelling
molecular nanotechnology
! Business
computational finance
data mining

Course Theme: A New Way of Thinking


! In sequential computing, operations are performed one at a time, making it straightforward to reason about the correctness and performance characteristics of a program.
! In parallel computing, many operations take place at once, complicating our reasoning about correctness and performance.


Course Theme: A New Way of Thinking


! A sequential algorithm is evaluated by its runtime.
! The asymptotic runtime of a sequential program is identical on any serial platform.
! The parallel runtime of a program depends on
the input size,
the number of processors,
the communication parameters of the machine.
! A parallel algorithm must therefore be analyzed in the context of the underlying platform (a simple model of this dependence is sketched below).
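As a first, deliberately simplified illustration of this dependence (a sketch only, not the exact cost model developed later in the course), the parallel runtime and speedup on p processors can be written as

$$ T_P(n,p) \;\approx\; \frac{T_S(n)}{p} + T_{\mathrm{comm}}(n,p), \qquad S(n,p) \;=\; \frac{T_S(n)}{T_P(n,p)} \;\le\; p $$

where $T_S(n)$ is the runtime of the best sequential algorithm on an input of size $n$ and $T_{\mathrm{comm}}(n,p)$ captures the machine's communication parameters (e.g. message startup latency and per-word transfer time).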



Parallel Programming
[Diagram: the Work is Partitioned into pieces w1, w2, w3, each processed by a worker; the workers' partial results r1, r2, r3 are then Combined into the final Result.]
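A minimal sketch of this partition/worker/combine pattern in C with MPI. The workload (summing N numbers) and the constants used below are illustrative assumptions, not part of the course material:

/* Partition-compute-combine sketch: the root splits an array evenly,
   every process sums its slice, and MPI_Reduce combines the partial
   results back at the root. Assumes N is divisible by the process count. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(int argc, char *argv[]) {
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int chunk = N / p;                         /* size of each worker's share  */
    double *work = NULL;
    double *slice = malloc(chunk * sizeof(double));

    if (rank == 0) {                           /* the root holds the whole job */
        work = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) work[i] = 1.0;
    }

    /* Partition: hand one slice (w1, w2, ...) to every worker */
    MPI_Scatter(work, chunk, MPI_DOUBLE, slice, chunk, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    double partial = 0.0, result = 0.0;
    for (int i = 0; i < chunk; i++)            /* each worker produces its r_i */
        partial += slice[i];

    /* Combine: reduce the partial results into the final Result at the root */
    MPI_Reduce(&partial, &result, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("result = %f\n", result);

    free(slice);
    free(work);
    MPI_Finalize();
    return 0;
}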

Parallel Programming

[Diagram: a TASK (JOB) is DECOMPOSEd into Tasks, the Tasks are ASSIGNed to Processes, and the Processes are MAPped onto the Distributed Environment.]
Finding an optimal assignment/mapping is, in general, one of the NP-complete (Non-deterministic Polynomial time complete) problems.



Parallel Programming
! Problems to consider (some are illustrated in the sketch below):
How do we assign tasks to processes?
What if we have more tasks than processes?
What if processes need to share partial results?
How do we aggregate partial results?
How do we know all the processes have finished?
What if processes die?
What is the performance of a parallel program?
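Several of these questions already show up in a small shared-memory program. A hedged sketch using Pthreads (the workload and thread count below are invented for illustration): sharing partial results is handled by a mutex, and pthread_join tells the main thread that every worker has finished.

/* Pthreads sketch: each thread computes a partial sum and adds it to a
   shared total under a mutex. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static double total = 0.0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;                     /* this thread's id, 0..NTHREADS-1 */
    double partial = 0.0;
    for (int i = 0; i < 1000; i++)           /* this thread's share of the work */
        partial += (double)(id + 1);
    pthread_mutex_lock(&lock);               /* share the partial result safely */
    total += partial;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);            /* all workers have finished here  */
    printf("total = %f\n", total);
    return 0;
}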

Main Topics
! Parallel/distributed computing architectures
! Parallel algorithm design
! Analytical modelling of parallel programs
! Examples


References
! Jack Dongarra (U. Tenn.), CS 594 slides, http://www.cs.utk.edu/~dongarra/WEB-PAGES/cs594-2010.htm

