
Amdahl's Law

Amdahl's Law is a law governing the speedup achieved by using parallel processors on a problem, versus using only one serial processor. Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.

Amdahl's law is a model for the relationship between the expected speedup of
parallelized implementations of an algorithm relative to the serial algorithm,
under the assumption that the problem size remains the same when parallelized.

For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimal execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20x, as the diagram illustrates.
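The 20-hour example can be checked numerically. The sketch below assumes a simple model in which total run time is the serial hour plus the 19 parallelizable hours divided evenly across processors; the function name is illustrative, not from the original text.

```python
def execution_time(serial_hours, parallel_hours, n_processors):
    """Run time when the parallelizable portion is split evenly across processors."""
    return serial_hours + parallel_hours / n_processors

total = 20.0    # hours on a single processor
serial = 1.0    # the hour that cannot be parallelized
parallel = total - serial

for n in (1, 10, 100, 10_000):
    t = execution_time(serial, parallel, n)
    print(f"{n:>6} processors: {t:8.4f} h, speedup {total / t:.3f}x")
```

However many processors are used, the time never drops below the serial hour, so the speedup climbs toward 20x but never reaches it.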

Diagram

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if 95% of the program can be parallelized, the theoretical maximum speedup using parallel computing would be 20x as shown in the diagram, no matter how many processors are used.


The execution time of a program is the time it takes the program to execute, and it can be measured in any unit of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:

       T(1)
S = ----------
       T(j)

Where T(j) is the time it takes to execute the program when using j processors.
Efficiency is the speedup, divided by the number of processors used.
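These two definitions translate directly into code. The helper below is a minimal sketch; the timing values in the usage example are hypothetical, chosen only to illustrate the arithmetic.

```python
def speedup(t1, tj):
    """S = T(1) / T(j): serial time divided by time with j processors."""
    return t1 / tj

def efficiency(t1, tj, j):
    """Efficiency: the speedup divided by the number of processors used."""
    return speedup(t1, tj) / j

# Hypothetical timings: T(1) = 100 s serial, T(4) = 30 s on 4 processors.
s = speedup(100.0, 30.0)        # 100 / 30, roughly 3.33x
e = efficiency(100.0, 30.0, 4)  # 100 / 120, roughly 0.83
```

An efficiency below 1.0 reflects the overhead and serial fraction: four processors here deliver only about 83% of the ideal 4x speedup.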

For example, if there are N workers working on a project, we may assume that they could do the job in 1/N of the time of one worker working alone. Now, if we assume the strictly serial part of the program is performed in B*T(1) time, then the strictly parallel part is performed in ((1-B)*T(1)) / N time, giving a total parallel time of T(N) = B*T(1) + ((1-B)*T(1)) / N. Substituting this into S = T(1) / T(N) and simplifying, we get the formula for speedup as:

               N
S = ---------------------
       (B*N) + (1-B)

This formula is known as Amdahl's Law.
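The formula can be sketched as a one-line function. The name and the sample serial fraction below are illustrative; B = 0.05 corresponds to the earlier example where 5% of the work cannot be parallelized.

```python
def amdahl_speedup(n, b):
    """Amdahl's Law: S = N / (B*N + (1-B)), for serial fraction B and N workers."""
    return n / (b * n + (1 - b))

# With B = 0.05 (5% strictly serial):
amdahl_speedup(1, 0.05)      # one worker: no speedup
amdahl_speedup(100, 0.05)    # 100 workers: still well short of 100x
```

As N grows without bound, the (1-B) term vanishes relative to B*N and S approaches 1/B, which for B = 0.05 is the 20x ceiling discussed above.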
