
Computer representation of numbers

Stability and Conditioning


Computer Arithmetic
Stefano Berrone
Sandra Pieraccini
Politecnico di Torino, Dipartimento di Scienze Matematiche
stefano.berrone@polito.it
sandra.pieraccini@polito.it
http://calvino.polito.it/~sberrone,
http://calvino.polito.it/~pieraccini
Programmazione e Calcolo Scientifico
Last update: September 27, 2013
Positional system

Let $\beta \in \mathbb{N}^+$, $\beta \geq 2$, be a base. Any real number $x$ can be uniquely written as a (possibly) infinite sequence of digits $0 \leq x_k < \beta$ as

$$x = (-1)^s \, [x_n x_{n-1} \ldots x_1 x_0 \,.\, x_{-1} x_{-2} \ldots x_{-m}]_\beta = (-1)^s \sum_{k=-m}^{n} x_k \, \beta^k, \qquad x_n \neq 0.$$

In the positional system any digit $x_k$ has a weight dependent on its position in the sequence representing the number.

$$36.789 = 3 \cdot 10^{1} + 6 \cdot 10^{0} + 7 \cdot 10^{-1} + 8 \cdot 10^{-2} + 9 \cdot 10^{-3}$$
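As a small aside (not part of the original slides), the positional expansion can be evaluated exactly with rational arithmetic; the function name and the (sign, digits, n) encoding below are illustrative choices:

    # Minimal sketch: evaluate x = (-1)^s * sum_k x_k * beta^k from a digit list.
    from fractions import Fraction

    def positional_value(sign, digits, beta, n):
        """digits = [x_n, x_{n-1}, ..., x_{-m}], from position n down to -m."""
        value = Fraction(0)
        for offset, d in enumerate(digits):
            assert 0 <= d < beta
            value += d * Fraction(beta) ** (n - offset)   # weight beta^k of this digit
        return (-1) ** sign * value

    # 36.789 = 3*10^1 + 6*10^0 + 7*10^-1 + 8*10^-2 + 9*10^-3
    print(float(positional_value(0, [3, 6, 7, 8, 9], beta=10, n=1)))   # 36.789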
Normalized scientific notation

The (normalized) scientific notation system is based on this idea, but all the digits are shifted after the point (decimal point if $\beta = 10$, binary point if $\beta = 2$); the weight they lose or gain with this shift (the same for all the digits) is collected in a multiplicative factor. Normalization requires $x_n \neq 0$.

$$x = (-1)^s \, 0.[x_n \ldots x_1 x_0 x_{-1} \ldots x_{-m}] \, \beta^{n+1} = (-1)^s \, \beta^{n+1} \sum_{k=-m}^{n} x_k \, \beta^{k-n-1}.$$

$$123.4567 \;\rightarrow\; \underbrace{0.1234567}_{\text{mantissa}} \cdot 10^{3} \qquad (3 = \text{exponent, or characteristic})$$

$$0.0078 = 0.0078 \cdot 10^{0} = 0.078 \cdot 10^{-1} = 0.78 \cdot 10^{-2} \;\rightarrow\; 0.78 \cdot 10^{-2}$$

$$12.078 = 12.078 \cdot 10^{0} = 1.2078 \cdot 10^{1} = 0.12078 \cdot 10^{2} \;\rightarrow\; 0.12078 \cdot 10^{2}$$
Floating point numbers (computer numbers)

Let us define as floating-point numbers (machine numbers), with a mantissa of $t$ digits, a base $\beta$ and a range $(L, U)$ for the (unbiased) exponent, the following set of real numbers:

$$F(\beta, t, L, U) = \{0\} \cup \left\{ x \in \mathbb{R} : x = (-1)^s \, \beta^{e} \sum_{i=1}^{t} d_i \, \beta^{-i} \right\}$$

with

$t, \beta \in \mathbb{N}^+$ with $\beta \geq 2$,
$0 \leq d_i \leq \beta - 1$, $i = 1, \ldots, t$,
$d_1 \neq 0$ (normalized representation),
$L \leq e \leq U$, $U$ usually positive, $L$ usually negative.

The integer $e$ is called exponent, whereas $m = \sum_{i=1}^{t} d_i \, \beta^{-i}$ is called mantissa.
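A minimal sketch (not from the slides) that enumerates the positive normalized elements of a toy system $F(\beta, t, L, U)$; all names are illustrative:

    # Enumerate the positive normalized numbers of a toy set F(beta, t, L, U).
    from itertools import product

    def machine_numbers(beta, t, L, U):
        numbers = set()
        for e in range(L, U + 1):
            for digits in product(range(beta), repeat=t):
                if digits[0] == 0:        # enforce d_1 != 0 (normalized mantissas)
                    continue
                m = sum(d * beta ** -(i + 1) for i, d in enumerate(digits))  # mantissa
                numbers.add(m * beta ** e)                                   # x = m * beta^e
        return sorted(numbers)

    F = machine_numbers(beta=2, t=3, L=-1, U=2)
    print(len(F), min(F), max(F))   # 16 numbers, from 0.25 = 2^-2 to 3.5 = (1 - 2^-3) * 2^2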
$$\beta^{-1} = m_{\min} \leq m \leq m_{\max} = 1 - \beta^{-t}.$$

The two extreme mantissas correspond to the digit strings (positions $1, 2, \ldots, t$):

$m_{\min} = 0.1\,0\,0 \ldots 0 = \beta^{-1}$ (first digit 1, all the others 0),
$m_{\max} = 0.(\beta-1)(\beta-1)\ldots(\beta-1) = 1 - \beta^{-t}$ (all digits equal to $\beta - 1$).
Let us prove that $m_{\max} = 1 - \beta^{-t}$.

Resorting to the sum of the first $t$ terms of the geometric sequence,

$$\sum_{j=0}^{t-1} \beta^{j} = \frac{\beta^{t} - 1}{\beta - 1},$$

we get

$$m_{\max} = \sum_{k=1}^{t} (\beta - 1)\,\beta^{-k} = \frac{\beta - 1}{\beta^{t}} \sum_{k=1}^{t} \beta^{t-k} = \frac{\beta - 1}{\beta^{t}} \sum_{j=0}^{t-1} \beta^{j} = \frac{\beta - 1}{\beta^{t}} \cdot \frac{\beta^{t} - 1}{\beta - 1} = \frac{\beta^{t} - 1}{\beta^{t}} = 1 - \beta^{-t}.$$

Equivalently, $m_{\max} + \beta^{-t} = 1$: adding one unit in the last ($t$-th) digit position to the mantissa $0.(\beta-1)(\beta-1)\ldots(\beta-1) = 1 - \beta^{-t}$ gives exactly $1.00\ldots0 = 1$.
For each real number $x \in F(\beta, t, L, U)$, $x \neq 0$, we have

$$x_{\min} = \beta^{L-1} \leq |x| \leq \beta^{U} \left( 1 - \beta^{-t} \right) = x_{\max}.$$

With normalization it is not possible to represent any number with absolute value less than $x_{\min}$, except zero (all mantissa digits equal to 0, notwithstanding normalization).

The international standard IEEE 754 (IEEE Standard for Binary Floating-Point Arithmetic, 754-1985 and its revision 754-2008) allows violating normalization when the exponent is equal to the minimum possible value $L$, in order to represent numbers with absolute value less than $x_{\min}$: this is the denormalized representation. In this case we have mantissas in the range

$$\beta^{-t} \leq m \leq \beta^{-1} - \beta^{-t}.$$
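As an illustration with the standard library (an addition of ours, not in the slides), the smallest normalized and denormalized IEEE-754 doubles can be inspected directly; math.nextafter requires Python 3.9 or later:

    import math, sys

    x_min_norm = sys.float_info.min            # 2**-1022, smallest normalized double
    x_min_denorm = math.nextafter(0.0, 1.0)    # 2**-1074, smallest denormalized double
    print(x_min_norm == 2.0 ** -1022)          # True
    print(x_min_denorm == 2.0 ** -1074)        # True
    print(x_min_norm * 2.0 ** -52)             # equals x_min_denorm: nonzero thanks to denormals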
Approximate a real number with its machine representation

In a computer with finite-size registers we cannot represent all the real numbers, not even all those contained in a fixed range. This implies that each real number $x = \operatorname{sign}(x)\, m \, \beta^{e}$ whose exponent $e$ fits in the range $[L, U]$ will be approximated by a (possibly) different real number $\tilde{x} \in F(\beta, t, L, U)$. With this process the number $\tilde{x}$ is representative of an equivalence class of numbers (infinite and uncountable), all approximated by $\tilde{x}$ in the computer arithmetic.
The standard IEEE 754-2008 defines five rounding rules. Two round to a nearest value; the others are called directed roundings.

Roundings to nearest

1. Round to nearest, ties to even: rounds to the nearest value; if the number falls midway it is rounded to the nearest value with an even (zero) least significant bit; default for binary floating-point, recommended default for decimal f.p.
2. Round to nearest, ties away from zero: rounds to the nearest value; if the number falls midway it is rounded to the nearest value above for positive numbers or below for negative numbers; this is intended as an option for decimal f.p.

Directed roundings

1. Round toward 0: directed rounding towards zero (also known as truncation or chopping).
2. Round toward plus Infinity: directed rounding towards positive infinity (also known as rounding up or ceiling).
3. Round toward minus Infinity: directed rounding towards negative infinity (also known as rounding down or floor).
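These rounding directions have direct counterparts in Python's decimal module, which can be used to experiment with them on a small decimal system (a sketch of ours, with 3 significant digits):

    from decimal import Decimal, Context, ROUND_HALF_EVEN, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR

    x = Decimal("0.12345")
    for name, mode in [("ties to even", ROUND_HALF_EVEN),
                       ("toward 0",     ROUND_DOWN),
                       ("toward +inf",  ROUND_CEILING),
                       ("toward -inf",  ROUND_FLOOR)]:
        ctx = Context(prec=3, rounding=mode)   # 3 significant digits
        print(name, ctx.plus(x))               # ctx.plus(x) rounds x to the context precision

    # Ties-to-even at work: 0.125 with 2 digits goes to 0.12 (even last digit), not 0.13.
    print(Context(prec=2, rounding=ROUND_HALF_EVEN).plus(Decimal("0.125")))   # 0.12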
Absolute error, relative error

Let $x \in \mathbb{R}$ be a real number and let $\tilde{x} \in F(\beta, t, L, U)$ be its approximation:

$$x = (-1)^s \, m \, \beta^{e}, \qquad \tilde{x} = (-1)^s \, \tilde{m} \, \beta^{e}.$$

Let us define:

Absolute error: $E_a \doteq |x - \tilde{x}|$

Relative error: $E_r \doteq \dfrac{|x - \tilde{x}|}{|x|}, \quad x \neq 0$
Rounding/Truncation Errors

Let us consider only Round to nearest, ties to even and Round toward 0, which we denote by rounding and truncation, respectively.

Remark

Mantissas $m$ of machine numbers are equally spaced by $\beta^{-t}$; in fact, two consecutive machine mantissas are

$$m_{\text{low}} = 0.d_1 d_2 d_3 \ldots d_{t-1} d_t$$
$$m_{\text{up}} = 0.d_1 d_2 d_3 \ldots d_{t-1} (d_t + 1) = 0.d_1 d_2 d_3 \ldots d_{t-1} d_t + \beta^{-t}$$

and each mantissa $m \in (m_{\text{low}}, m_{\text{up}})$ will be approximated either with $m_{\text{low}}$ or with $m_{\text{up}}$.
Rounding/Truncation Errors

Truncation: mantissas $m \in [\tilde{m}, \tilde{m} + \beta^{-t})$ are approximated by $\tilde{m}$. The absolute error is

$$0 \leq m - \tilde{m} < \beta^{-t}.$$

Rounding: mantissas $m \in (\tilde{m} - \tfrac{1}{2}\beta^{-t}, \tilde{m} + \tfrac{1}{2}\beta^{-t})$ are approximated by $\tilde{m}$. The absolute error is

$$|m - \tilde{m}| < \tfrac{1}{2}\beta^{-t},$$

with $m - \tilde{m}$ sometimes positive and sometimes negative. When $m = \tilde{m} + \tfrac{1}{2}\beta^{-t}$ we approximate it with $\tilde{m}$ if the last digit of $\tilde{m}$ is even, else $m$ is approximated by $\tilde{m} + \beta^{-t}$. In both cases the absolute error is $|m - \tilde{m}| = \tfrac{1}{2}\beta^{-t}$.

Summing up, we have

$$|m - \tilde{m}| \leq \tfrac{1}{2}\beta^{-t}.$$
Comparing truncation and rounding, it is clear that rounding is preferable, although more expensive. Rounding introduces smaller errors, prevents statistical bias, and treats positive and negative values symmetrically, which helps the numerical stability of computations.

$$|m - \tilde{m}| < \beta^{-t}, \quad \text{truncation},$$
$$|m - \tilde{m}| \leq \tfrac{1}{2}\beta^{-t}, \quad \text{rounding}.$$
The absolute error of the approximation of $x$ by $\tilde{x}$ is:

$$|x - \tilde{x}| = |m - \tilde{m}| \, \beta^{e} < \beta^{e-t}, \quad \text{truncation},$$
$$|x - \tilde{x}| = |m - \tilde{m}| \, \beta^{e} \leq \tfrac{1}{2}\beta^{e-t}, \quad \text{rounding}.$$

Since for normalized representations $m \geq 0.1000\ldots = \beta^{-1}$, we have

$$|x| = m \, \beta^{e} \geq \beta^{-1} \beta^{e}$$

and the relative errors

$$\frac{|x - \tilde{x}|}{|x|} \leq \frac{|x - \tilde{x}|}{\beta^{e-1}} < u \doteq \beta^{1-t}, \quad \text{truncation},$$
$$\frac{|x - \tilde{x}|}{|x|} \leq \frac{|x - \tilde{x}|}{\beta^{e-1}} \leq u \doteq \tfrac{1}{2}\beta^{1-t}, \quad \text{rounding}.$$
Let us denote by $\mathrm{eps} \doteq \beta^{1-t}$ the machine epsilon.

Let us denote by $u$ the machine precision, or roundoff unit. It is a characteristic quantity of each floating-point arithmetic and represents the best relative accuracy achievable on that computer with that data type.

Practically, all numbers whose relative difference is less than the roundoff unit have to be considered equivalent for the computation, i.e., it is meaningless to strive to reach relative approximations better than $u$.
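A classical way to estimate the machine epsilon of the working precision (here Python floats, i.e. IEEE doubles) is to halve a candidate until adding it to 1 no longer changes the result; a quick sketch:

    import sys

    eps = 1.0
    while 1.0 + eps / 2 > 1.0:
        eps /= 2
    print(eps)                              # 2.220446049250313e-16 = 2**-52
    print(eps == sys.float_info.epsilon)    # True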
Standard IEEE

The IEEE Standard 754 for Floating-Point Arithmetic specifies:

1. 32 bits for single precision: 1 + 8 + 23 (+ hidden bit)
2. 64 bits for double precision: 1 + 11 + 52 (+ hidden bit)
3. 128 bits for quadruple precision: 1 + 15 + 112 (+ hidden bit)
Let $\beta = 2$. In binary normalized floating-point systems the first digit to store in the space reserved for the mantissa digits is always 1; for this reason we can avoid storing it and use the space to store one more bit. The result is that eps and $u$ are as if the number of digits of the mantissa were $t + 1$. This gained bit is called hidden bit.

1. Single precision: $L = -126$, $U = \text{bias} = 127$,
   $\mathrm{eps} = 2^{1-(23+1)} = 2^{-23} \approx 10^{-7}$;
2. Double precision: $L = -1022$, $U = \text{bias} = 1023$,
   $\mathrm{eps} = 2^{1-(52+1)} = 2^{-52} \approx 10^{-16}$;
   $x_{\min,\text{denorm}} = 2^{-1074} = 2^{-1022-52}$, $x_{\min,\text{norm}} = 2^{-1022}$,
   $x_{\max,\text{norm}} = (1 + (1 - 2^{-52})) \cdot 2^{1023}$;
3. Quadruple precision: $L = -16382$, $U = \text{bias} = 16383$,
   $\mathrm{eps} = 2^{1-(112+1)} = 2^{-112} \approx 10^{-34}$.
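The double-precision values quoted above can be cross-checked against what the interpreter reports (standard library only; added here for illustration):

    import sys

    print(sys.float_info.mant_dig)                                      # 53 = 52 stored bits + hidden bit
    print(sys.float_info.epsilon == 2.0 ** -52)                         # True
    print(sys.float_info.min == 2.0 ** -1022)                           # True: x_min,norm
    print(sys.float_info.max == (1 + (1 - 2.0 ** -52)) * 2.0 ** 1023)   # True: x_max,norm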
Mantissa representation with hidden bit

Let $t$ be the number of physical bits used to represent the mantissa:

$$1 = m_{\min} \leq m \leq m_{\max} = 1 + (1 - 2^{-t}).$$

The extreme mantissas correspond to the stored bit strings (positions $1, 2, \ldots, t$):

$m_{\min} = 1.00\ldots0 = 1$ (all stored bits 0),
$m_{\max} = 1.11\ldots1 = 1 + (1 - 2^{-t})$ (all stored bits 1).

The error bounds become

$$\frac{|x - \tilde{x}|}{|x|} \leq \frac{|x - \tilde{x}|}{2^{e}} < \frac{2^{-t}\,2^{e}}{2^{e}} = u \doteq 2^{-t}, \quad \text{truncation},$$
$$\frac{|x - \tilde{x}|}{|x|} \leq \frac{|x - \tilde{x}|}{2^{e}} \leq \frac{\tfrac{1}{2}\,2^{-t}\,2^{e}}{2^{e}} = u \doteq \tfrac{1}{2}\,2^{-t}, \quad \text{rounding}.$$
Overflow and Underflow

1. Overflow: occurs when a floating-point operation produces a result that is greater in magnitude than the largest machine number, $|x| > x_{\max,\text{norm}}$, i.e., $e > U$;
2. Underflow: occurs when the true result of a floating-point operation is closer to zero than the smallest representable (denormalized) number, $|x| < x_{\min,\text{denorm}}$, i.e., $e < L$.
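Both situations are easy to trigger with IEEE doubles (a small demonstration of ours):

    x = 1.0e308
    print(x * 10)             # inf: overflow, the exponent would exceed U
    print(2.0 ** -1074)       # 5e-324: smallest denormalized double, still nonzero
    print(2.0 ** -1074 / 2)   # 0.0: underflow, closer to zero than x_min,denorm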
Machine floating-point operations

Now let us define on the set of the machine numbers $F(\beta, t, L, U)$ an arithmetic which is as close as possible to the arithmetic on $\mathbb{R}$ we are used to.

Definition

Let us call machine operation one of the elementary algebraic operations performed by the computer, such that the final result is the same we would obtain performing the corresponding exact arithmetic operation and then approximating the result to a machine number.

Let us denote by $fl(x)$ the approximation (rounding or truncation), called floating, performed on $x \in \mathbb{R}$, and by $\oplus, \ominus, \otimes, \oslash$ the machine operations corresponding to $+, -, \times, /$.

Example

$$a \oplus b := fl(fl(a) + fl(b))$$
Let $\varepsilon$ be the relative error due to the floating of $x$:

$$\varepsilon = \frac{fl(x) - x}{x} \quad \Longleftrightarrow \quad fl(x) = x(1 + \varepsilon).$$

The definition of $u$ and the definition of the machine operations imply $|\varepsilon| \leq u$, $\varepsilon$ being only due to the floating of the exact result of the machine arithmetic operation:

$$fl(x) = x(1 + \varepsilon), \qquad |\varepsilon| \leq u.$$
For the machine operations we have:

$$a \oplus b = fl(fl(a) + fl(b)) = (fl(a) + fl(b))(1 + \varepsilon_1), \quad |\varepsilon_1| \leq u$$
$$a \ominus b = fl(fl(a) - fl(b)) = (fl(a) - fl(b))(1 + \varepsilon_2), \quad |\varepsilon_2| \leq u$$
$$a \otimes b = fl(fl(a) \times fl(b)) = (fl(a) \times fl(b))(1 + \varepsilon_3), \quad |\varepsilon_3| \leq u$$
$$a \oslash b = fl(fl(a) / fl(b)) = (fl(a) / fl(b))(1 + \varepsilon_4), \quad |\varepsilon_4| \leq u$$

Remark

This implies that, regardless of any error present in the operands (i.e., if we assume that the operands are already machine numbers), the relative error due to the machine arithmetic operation never exceeds the machine precision $u$.
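This bound can be checked empirically by comparing a machine product with the exact product computed in rational arithmetic; a sketch, assuming round-to-nearest so that $u = \mathrm{eps}/2$:

    import random, sys
    from fractions import Fraction

    u = Fraction(sys.float_info.epsilon) / 2        # roundoff unit of binary64 with rounding
    for _ in range(10_000):
        a, b = random.random(), random.random()     # operands are already machine numbers
        exact = Fraction(a) * Fraction(b)           # exact product in Q
        computed = Fraction(a * b)                  # machine product a (x) b, converted exactly
        assert abs(computed - exact) <= u * abs(exact)
    print("single float products stayed within relative error u")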
Remark

Not all the mathematical properties of the arithmetic operations are satisfied by the machine operations. This fact can sometimes lead to unexpected behaviours.

The following properties are preserved by the machine operations:

$$a \oplus b = b \oplus a, \qquad a \otimes b = b \otimes a.$$

The following properties are lost in finite arithmetic:

$$a \oplus (b \oplus c) \neq (a \oplus b) \oplus c,$$
$$a \otimes (b \otimes c) \neq (a \otimes b) \otimes c,$$
$$a \otimes (b \oplus c) \neq (a \otimes b) \oplus (a \otimes c),$$
$$(a \oslash b) \otimes b \neq a,$$
$$(a \otimes b) \oslash b \neq a,$$
$$(a \oplus b) \ominus c \neq (a \ominus c) \oplus b.$$
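For instance, in double precision (a small demonstration added here):

    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False: addition is not associative
    print((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3))     # 0.6000000000000001 0.6
    print((0.1 * 3) / 3 == 0.1)                      # False: (a (x) b) (/) b != a in general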
Remark

In finite arithmetic unexpected situations may happen, such as the following:

$$a \oplus b = fl(a)$$

when

$$0 < |fl(b)| \ll |fl(a)|.$$
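For example, with doubles ($u \approx 1.1 \cdot 10^{-16}$) a unit added to $10^{16}$ is completely absorbed:

    a, b = 1.0e16, 1.0
    print(a + b == a)     # True: b is smaller than one ulp of a and is absorbed
    print((a + b) - a)    # 0.0 instead of 1.0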
In general, different expressions that are equivalent (i.e., with the same result) in exact (or infinite) arithmetic lead to different results in machine (or finite) arithmetic.

Definition

We will call equivalent two expressions that in finite arithmetic provide results whose relative distance is at most of the order of the machine precision $u$ for that data type (single, double, ... precision) and approximation (rounding, truncation, ...).
Numerical cancellation

Numerical cancellation is known as the most dangerous effect of finite arithmetic.

This phenomenon may happen when we perform a subtraction between two numbers very close to each other.

Definition

We call numerical cancellation the loss of significant digits (i.e., a large decrease in accuracy) that happens when we subtract machine numbers (operands) very close to each other, i.e., when the result is much smaller (in absolute value) than both operands.

The origin of this phenomenon is not an error introduced by the machine subtraction: it is a phenomenon of huge amplification of the errors in the operands, performed by the (finite and infinite) subtraction.
Example (Cancellation 1)

Let us consider the following normalized floating-point numbers

$$x_1 = 0.19101972 \cdot 10^{3}, \qquad x_2 = 0.19101708 \cdot 10^{3}$$

and let us perform the operation $x_1 - x_2$ on a machine implementing an arithmetic with $\beta = 10$, $t = 6$ and working with truncation. We have $u = 10^{1-t} = 10^{-5}$ and

$$fl(x_1) = 0.191019 \cdot 10^{3}, \qquad fl(x_2) = 0.191017 \cdot 10^{3}.$$

Up to now we have only floated the data, so the introduced error is

$$|fl(x_1) - x_1| = 0.720000 \cdot 10^{-3}, \qquad \frac{|fl(x_1) - x_1|}{|x_1|} = \frac{0.720000 \cdot 10^{-3}}{0.19101972 \cdot 10^{3}} \approx 0.37692 \cdot 10^{-5} < u$$
Example (continued)

$$\frac{|fl(x_2) - x_2|}{|x_2|} = \frac{0.800000 \cdot 10^{-4}}{0.19101708 \cdot 10^{3}} \approx 0.41881 \cdot 10^{-6} < u$$

Let us compute the machine difference

$$fl(x_1) \ominus fl(x_2) = 0.000002 \cdot 10^{3} = 0.200000 \cdot 10^{-2}.$$

The exact result should be $x_1 - x_2 = 0.264000 \cdot 10^{-2}$ and the relative error is

$$\frac{|(fl(x_1) \ominus fl(x_2)) - (x_1 - x_2)|}{|x_1 - x_2|} \approx 0.2424 \gg u.$$

Why do we get such a large error? Do we respect the rule

$$a \ominus b = fl(fl(a) - fl(b)) = (fl(a) - fl(b))(1 + \varepsilon_2), \quad |\varepsilon_2| \leq u$$

or not?
The result we get does conform to the previous rule, but that rule tells us that the relative error of the difference performed on the machine approximations $fl(a), fl(b)$ of $a, b$ is of the order of the machine precision, not that the relative error of the difference between $a$ and $b$ is of the order of $u$. What happened?

In the approximation (truncation) of the mantissas of $x_1$ and $x_2$ we have thrown away the digits following the sixth ($t = 6$); the truncation error is still less than $u$.

The difference performed on $fl(a), fl(b)$ promoted the truncation error, because the most significant digits have been removed by the operation. The truncation error affects roughly the sixth digit of the data, but it shifted up to the first digit of the result of the operation.

$$fl(x_1) = 0.191019 \cdot 10^{3}, \qquad fl(x_2) = 0.191017 \cdot 10^{3}$$
$$fl(x_1) \ominus fl(x_2) = 0.000002 \cdot 10^{3} = 0.200000 \cdot 10^{-2}.$$
Example (continued)

What about using rounding instead of truncation?

$$fl(x_1) = 0.191020 \cdot 10^{3}, \qquad fl(x_2) = 0.191017 \cdot 10^{3}$$
$$fl(x_1) \ominus fl(x_2) = 0.000003 \cdot 10^{3} = 0.300000 \cdot 10^{-2}$$
$$\frac{|(fl(x_1) \ominus fl(x_2)) - (x_1 - x_2)|}{|x_1 - x_2|} \approx 0.1363$$
In this example the machine subtraction does not introduce any error: the result of the subtraction between $fl(x_1)$ and $fl(x_2)$ is a machine number. The problem lies in the rounding/truncation errors, which are amplified by the (finite/infinite) subtraction.
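The whole example can be reproduced with Python's decimal module by setting a 6-digit context; ROUND_DOWN plays the role of truncation and ROUND_HALF_EVEN of rounding (a sketch of ours):

    from decimal import Decimal, Context, ROUND_DOWN, ROUND_HALF_EVEN

    x1, x2 = Decimal("191.01972"), Decimal("191.01708")
    exact = x1 - x2                                        # 0.00264

    for name, mode in [("truncation", ROUND_DOWN), ("rounding", ROUND_HALF_EVEN)]:
        ctx = Context(prec=6, rounding=mode)               # beta = 10, t = 6
        fx1, fx2 = ctx.plus(x1), ctx.plus(x2)              # fl(x1), fl(x2)
        diff = ctx.subtract(fx1, fx2)                      # fl(x1) (-) fl(x2)
        print(name, fx1, fx2, diff, abs(diff - exact) / abs(exact))
    # truncation: 191.019 191.017 0.002  relative error 0.2424...
    # rounding:   191.020 191.017 0.003  relative error 0.1363...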
Is it possible to avoid/prevent cancellation?

Sometimes different expressions for the same computation are possible, which are not equivalent from the finite-arithmetic point of view.

Example (Cancellation 2)

$$f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}, \qquad h \text{ small}$$

Let $f(x) = \sin(x)$:

$$\underbrace{\frac{\sin(x_0 + h) - \sin(x_0)}{h}}_{\text{Algorithm I}} = \underbrace{\frac{2}{h} \cos\!\left(\frac{2x_0 + h}{2}\right) \sin\!\left(\frac{h}{2}\right)}_{\text{Algorithm II (prosthaphaeresis formulas)}}$$

$$\cos\alpha \, \sin\beta = \frac{\sin(\alpha + \beta) - \sin(\alpha - \beta)}{2}$$
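A quick numerical comparison of the two algorithms (an illustration of ours; the reference value is the true derivative cos(x0), so for larger h the discretization error of the forward difference dominates both):

    import math

    x0 = 1.0
    exact = math.cos(x0)                     # true derivative of sin at x0
    for h in (1e-4, 1e-8, 1e-12):
        alg1 = (math.sin(x0 + h) - math.sin(x0)) / h                      # Algorithm I
        alg2 = (2.0 / h) * math.cos((2 * x0 + h) / 2) * math.sin(h / 2)   # Algorithm II
        print(f"h={h:.0e}  err I = {abs(alg1 - exact):.1e}  err II = {abs(alg2 - exact):.1e}")
    # For h = 1e-12 Algorithm I loses most significant digits to cancellation,
    # while Algorithm II keeps an error of the order of the discretization error h/2.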
Example (Cancellation: statistics, variance computation)

$$\operatorname{var}(X) = \underbrace{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}_{\text{Algorithm I}} = \underbrace{\frac{1}{n} \sum_{i=1}^{n} x_i^2 \;-\; \bar{x}^2}_{\text{Algorithm II}}$$
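Algorithm II, the one-pass formula, subtracts two nearly equal large quantities when the mean is large compared with the spread, and can lose all significant digits; a quick sketch of ours:

    import random, statistics

    n = 100_000
    data = [1.0e8 + random.random() for _ in range(n)]      # large mean, small spread

    mean = sum(data) / n
    var_I = sum((x - mean) ** 2 for x in data) / n          # Algorithm I (two passes)
    var_II = sum(x * x for x in data) / n - mean ** 2       # Algorithm II (one pass)

    print(var_I)                       # close to 1/12 = 0.0833..., the variance of U(0, 1)
    print(var_II)                      # badly wrong, possibly even negative
    print(statistics.pvariance(data))  # reference value from the standard library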
Stability of an algorithm and conditioning of a problem

Essentially, here we investigate how errors on the data of a problem/algorithm spread over the result.

Definition

A problem is said to be well posed if it has a unique solution depending continuously on the data.
If a problem is not well posed, it is said to be ill posed.

In the following let us assume we deal only with well posed problems.
Let us consider the numerical solution of a well posed problem. In the investigation of the propagation of errors, two aspects have to be considered:

1. Conditioning of the problem
2. Stability of the algorithm chosen for the numerical solution
Let us consider a generic problem with the following structure: given the datum $d$, find the solution $x$ such that

$$x = f(d) \qquad (1)$$

Let us consider:

$\delta d$ a perturbation of $d$: $d + \delta d$;
$\tilde{x} = f(d + \delta d)$ the exact solution of the problem corresponding to the perturbed datum $d + \delta d$;
$\hat{x}$ the result of the numerical algorithm used to solve the perturbed problem with datum $d + \delta d$.

Note: usually $\hat{x} \neq \tilde{x}$.
Conditioning

Conditioning refers to how the problem reacts to the unavoidable perturbations of the data.

Let $\delta x = \tilde{x} - x$.

What is interesting for the evaluation of the conditioning of the considered problem is to compare $\delta x$ with $\delta d$, independently of the algorithm we are planning to use for solving the problem. This behaviour depends entirely on the nature of the problem.
Definition (Qualitative, not quantitative)
A problem is said to be well conditioned if small perturbations on
the data induce small errors on the results.
A problem is said to be ill conditioned if small perturbations on
the data induce large errors on the results.
Definition (Condition number)

Let us assume that there exists a constant $K = K(d)$ such that a relation of this type holds true:

$$\frac{\|\delta x\|}{\|x\|} \leq K \frac{\|\delta d\|}{\|d\|} \qquad \text{or} \qquad \frac{\|\delta x\|}{\|x\|} \approx K \frac{\|\delta d\|}{\|d\|};$$

the factor $K(d)$ is named condition number of the problem.

Definition

A problem is said to be well conditioned if $K(d)$ is small ($\approx 1$), whereas it is said to be ill conditioned if $K(d)$ is large ($\gg 1$).
Example (Conditioning of the sum of two real numbers)

Given two real numbers $d_1, d_2 \in \mathbb{R}$, let us consider the problem of finding their sum

$$x = d_1 + d_2.$$

Is this problem well conditioned?

Let $x$ be the exact solution, $\tilde{d}_1 = d_1 + \delta d_1$ and $\tilde{d}_2 = d_2 + \delta d_2$. Then

$$x + \delta x = d_1 + \delta d_1 + d_2 + \delta d_2,$$

which implies

$$\delta x = \delta d_1 + \delta d_2.$$
$$\frac{|\tilde{x} - x|}{|x|} = \frac{|\delta x|}{|x|} = \frac{|\delta d_1 + \delta d_2|}{|d_1 + d_2|} \leq \frac{|\delta d_1|}{|d_1 + d_2|} + \frac{|\delta d_2|}{|d_1 + d_2|} = \frac{|d_1|}{|d_1 + d_2|} \, \frac{|\delta d_1|}{|d_1|} + \frac{|d_2|}{|d_1 + d_2|} \, \frac{|\delta d_2|}{|d_2|}$$

The constants

$$K_{d_1} = \frac{|d_1|}{|d_1 + d_2|}, \qquad K_{d_2} = \frac{|d_2|}{|d_1 + d_2|}$$

are the amplifying coefficients of the relative perturbations $\dfrac{|\delta d_1|}{|d_1|}$, $\dfrac{|\delta d_2|}{|d_2|}$ for this problem.

Let us take as condition number $K = \max(K_{d_1}, K_{d_2})$.

How can we classify the problem?
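A tiny numerical illustration (ours) of these amplification factors on nearly cancelling data:

    d1, d2 = 1.23456789, -1.23456780      # d1 + d2 is small compared with |d1|, |d2|
    K1 = abs(d1) / abs(d1 + d2)
    K2 = abs(d2) / abs(d1 + d2)
    print(K1, K2)                          # both about 1.4e7: the sum is ill conditioned

    # A relative perturbation of 1e-8 on d1 alone changes the result by roughly 14%.
    x = d1 + d2
    x_pert = d1 * (1 + 1e-8) + d2
    print(abs(x_pert - x) / abs(x))        # about 0.14 = K1 * 1e-8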
In general the answer to this question is not unique, in the sense that it depends on the data. If $d_1 + d_2 \approx 0$, we have $K_{d_1}, K_{d_2} \to \infty$, so the problem is ill conditioned.

When is $d_1 + d_2$ small? When $d_1$ and $d_2$ are very close in modulus but opposite in sign...
This is the situation originating the cancellation phenomenon.

Cancellation is nothing but the ill conditioning of the algebraic sum when $d_1 + d_2$ is small with respect to each of the arguments.
Stability: how the algorithm propagates and accumulates errors (while performing computations)

If we consider a numerical algorithm as a sequence of simple problems, its stability depends on how the approximations of the data and the partial errors produced at each step grow or die out.

Definition (qualitative)

An algorithm is said to be numerically stable if the sequence of machine operations does not amplify the relative rounding/truncation errors introduced at each step beyond the machine precision.

All the partial results, as well as the final result, produced by a stable numerical algorithm have a relative error not exceeding the machine precision $u$. If

$$\frac{\|\hat{x} - \tilde{x}\|}{\|\tilde{x}\|} \approx u$$

the algorithm is said to be stable.
Given a (well posed) problem we can have stable algorithms or unstable algorithms, as previously seen when discussing cancellation.

Example Cancellation 2, Algorithm I: unstable, because the final error cannot be bounded by $u$.
Example Cancellation 2, Algorithm II: stable, because the final error can be bounded by $u$.