Correspondence to: Yimin Wei, School of Mathematical Sciences, Fudan University, Shanghai 200433, People's Republic of China.
E-mail: ymwei@fudan.edu.cn
Copyright 2011 John Wiley & Sons, Ltd.
206 Z.-C. LI, H.-T. HUANG AND Y. WEI
dealt with carefully. Moreover, the stability analysis shows that the condition number is a better
criterion for numerical PDE than the 2-norm of the solution vector.
Consider the over-determined system of linear algebraic equations [1, 2]
$$Ax=b, \qquad (1)$$
where $A\in\mathbb{R}^{m\times n}$ ($m\ge n$), $x\in\mathbb{R}^{n}$ and $b\in\mathbb{R}^{m}$. Its perturbed system is
$$A(x+\Delta x)=b+\Delta b, \qquad (2)$$
or, more generally,
$$(A+\Delta A)(x+\Delta x)=b+\Delta b, \qquad (3)$$
where the perturbations $\Delta A\in\mathbb{R}^{m\times n}$, $\Delta x\in\mathbb{R}^{n}$ and $\Delta b\in\mathbb{R}^{m}$. To measure the sensitivity of the solution to the perturbations in the data, traditionally we use the 2-norm condition number defined by [1]
$$\mathrm{Cond}(A)=\frac{\sigma_{\max}}{\sigma_{\min}}=\|A\|\,\|A^{\dagger}\|, \qquad (4)$$
where $\sigma_{\max}$ and $\sigma_{\min}$ are the maximal and the minimal singular values of the matrix $A$, respectively, and $A^{\dagger}$ is the Moore–Penrose inverse of $A$. For Equation (2), there exists the classical upper bound
$$\frac{\|\Delta x\|}{\|x\|}\le \mathrm{Cond}(A)\,\frac{\|\Delta b\|}{\|b\|}, \qquad (5)$$
where $\|\cdot\|$ is the spectral norm (i.e. the 2-norm). In addition to the 2-norm condition number, the mixed and componentwise condition numbers are developed in [3–5].
Recently, in [6–8], the effective condition number defined by
$$\mathrm{Cond\_eff}(A)=\frac{\|b\|}{\sigma_{\min}\|x\|}=\|A^{\dagger}\|\,\frac{\|b\|}{\|x\|} \qquad (6)$$
is proposed, and a sharp bound for (2),
$$\frac{\|\Delta x\|}{\|x\|}\le \mathrm{Cond\_eff}(A)\,\frac{\|\Delta b\|}{\|b\|}, \qquad (7)$$
is also given.
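Both condition numbers can be computed directly from the SVD. The following NumPy sketch (the $2\times 2$ matrix and right-hand side are invented for illustration) shows a system where Cond_eff(A) is far below Cond(A): when $b$ points along the weakest singular direction, $\|x\|=\|b\|/\sigma_{\min}$ and (6) collapses to 1.

```python
import numpy as np

def cond_and_cond_eff(A, b):
    """Classical condition number (4) and effective condition number (6)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least squares solution of Ax = b
    s = np.linalg.svd(A, compute_uv=False)     # singular values, descending
    cond = s[0] / s[-1]                        # sigma_max / sigma_min
    cond_eff = np.linalg.norm(b) / (s[-1] * np.linalg.norm(x))
    return cond, cond_eff

# b aligned with the weakest singular direction: x = (0, 1e8) is huge,
# so Cond_eff(A) = ||b||/(sigma_min ||x||) = 1 while Cond(A) = 1e8
A = np.diag([1.0, 1e-8])
b = np.array([0.0, 1.0])
cond, cond_eff = cond_and_cond_eff(A, b)
```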
The Cond_eff(A) can be much smaller than the traditional Cond(A). Comparing Cond_eff(A) with Cond(A), it is $\sigma_{\min}=1/\|A^{\dagger}\|$, not $\sigma_{\max}=\|A\|$, that is intrinsic to stability. In practice, some numerical methods, such as the spectral methods, the method of fundamental solutions (MFS), the radial basis function method, etc., are often very ill-conditioned, i.e. $\sigma_{\min}$ is very close to zero (see [9]). In this case, both Cond(A) and Cond_eff(A) are large. To reduce the severe instability, two techniques can be employed:
(1) the TSVD and
(2) the TR.
Both TSVD and TR play an important role in filtering [10], and are successful in noise reduction in least squares (LS) problems. Among the related important references, TSVD is discussed by Hansen and co-workers [11–13] and Chan and Hansen [14], and used by Chen et al. [15, 16]. TR was first proposed by Tikhonov [17] in 1963, introduced in Tikhonov and Arsenin [18], and analyzed in [19–27]. Recently, TSVD and TR are studied in [14, 12, 28], and by Fierro et al. [29] for the regularization by truncated total LS.
From (4) and (6), we have
$$\mathrm{Cond}(A)=c_0\,\mathrm{Cond\_eff}(A)\,\|x\|, \qquad (8)$$
Copyright 2011 John Wiley & Sons, Ltd. Numer. Linear Algebra Appl. 2011; 18:205221
DOI: 10.1002/nla
ILL-CONDITIONING OF THE TSVD 207
where the ratio $c_0=\|A\|/\|b\|$. When $c_0\asymp O(1)$, where $a\asymp b$ or $a\asymp O(b)$ denotes that there exist two positive constants $C_1$ and $C_2$ such that $C_1 b\le a\le C_2 b$, Equation (8) leads to
$$\mathrm{Cond}(A)\asymp \mathrm{Cond\_eff}(A)\,\|x\|. \qquad (9)$$
Suppose that the data error $\Delta b$ includes both the rounding and the truncation (i.e. the discretization) errors [8]. Equation (7) indicates that Cond_eff is an enlargement factor of the solution errors from the data errors. A large $\|x\|$ indicates the occurrence of subtraction cancellation, which is a source of instability [30]. Interestingly, from (9), Cond describes the whole stability, whereas $\|x\|$ describes only part of the stability. Hence, for the stability analysis of regularization, the condition number Cond is more important than $\|x\|$. This is a distinct feature of this paper compared with the existing literature.
For numerical PDE, small errors are most desirable. Under a fixed machine precision, a large enlargement factor Cond may cost many working digits, leaving only the remaining working digits for the accuracy of the numerical solutions. Hence, the accuracy is retained as long as enough working digits are left over from stability. Multiple precision can be used, but it costs more CPU time and computer memory. Therefore, to reach the same accuracy, we should employ longer precision only for more ill-conditioned problems. Accuracy is of the most concern in numerical PDE, and it is often required to be very high. This is another distinct feature of this paper compared with the existing literature, such as in image processing, where the errors are only required to be within about 1%.
We will explore these intrinsic properties of the regularization algorithms and cope with them in numerical PDE. One important feature of our paper is that, when the regularization is applied, the stability is improved significantly while the accuracy is reduced only moderately. Hence, a suitable balance between stability and accuracy must be struck. In this paper, for numerical PDE, we explore condition numbers and error bounds for two kinds of regularization: (1) the TSVD and (2) the TR. The application of this paper, in particular to seek the optimal regularization parameter, will be discussed in our future work.
This paper is organized as follows. In Section 2, the TSVD and the TR algorithms are described. In Section 3, the bounds for both the condition number and the effective condition number are derived for the TSVD and the TR. In Section 4, a brief error analysis is carried out and error bounds are derived. In the last section, numerical tests for the discrete Laplace equation, solved by the MFS, are reported.
2. ALGORITHMS OF REGULARIZATION
In this paper, we assume that $\mathrm{Rank}(A)=n$. The singular value decomposition of $A$ is expressed by
$$A=U\Sigma V^{T}, \qquad (10)$$
where $U\in\mathbb{R}^{m\times m}$ and $V\in\mathbb{R}^{n\times n}$ are orthogonal matrices, and $\Sigma\in\mathbb{R}^{m\times n}$ is a diagonal matrix with positive singular values
$$\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_n>0, \qquad (11)$$
where we denote simply $\sigma_1=\sigma_{\max}=\|A\|$ and $\sigma_n=\sigma_{\min}=1/\|A^{\dagger}\|$. The least squares solution of (1) is given by
$$x_0=\sum_{i=1}^{n}\frac{\beta_i}{\sigma_i}\,v_i, \qquad (12)$$
where $\beta_i=u_i^{T}b$. When $\sigma_n$ is very close to zero, the solution $x_0$ in (12) may be large and even huge if $\beta_n\ne 0$. Also, when $v_n$ is highly oscillating, the solution $x_0$ is highly oscillating as well. One way to overcome this difficulty is to discard the part of (12) involving the very small $\sigma_i$, say $i=k+1,k+2,\ldots,n$. We then obtain the TSVD [12]
$$x_k=\sum_{i=1}^{k}\frac{\beta_i}{\sigma_i}\,v_i, \qquad k<n. \qquad (13)$$
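Formula (13) translates directly into NumPy; the following sketch (the small test system is invented) checks that with $k=n$ the TSVD reproduces the least squares solution $x_0$ of (12):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """TSVD solution (13): drop the terms with the n-k smallest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s sorted descending
    beta = U.T @ b                                     # beta_i = u_i^T b
    return Vt[:k].T @ (beta[:k] / s[:k])               # sum_{i<=k} (beta_i/sigma_i) v_i

A = np.array([[1.0, 0.0],
              [0.0, 1e-6],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])
x0 = tsvd_solve(A, b, 2)                    # k = n: full SVD sum
x_ls = np.linalg.lstsq(A, b, rcond=None)[0] # least squares solution for comparison
```

Since the partial sums in (13) only add nonnegative squared terms, $\|x_k\|$ is nondecreasing in $k$, which the test below also checks.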
The other approach for dealing with a very small $\sigma_n$ is TR. Consider the following minimization problem with a parameter $\alpha$, known as the regularization parameter:
$$\min_{x\in\mathbb{R}^{n}}\left\{\|Ax-b\|_2^2+\alpha^2\|Lx\|_2^2\right\}, \qquad (14)$$
where the matrix $L\in\mathbb{R}^{p\times n}$ ($p\le n$). If $L$ is the identity matrix $I\in\mathbb{R}^{n\times n}$, then Equation (14) leads to
$$\min_{x\in\mathbb{R}^{n}}\left\{\|Ax-b\|_2^2+\alpha^2\|x\|_2^2\right\}, \qquad (15)$$
which is the standard form of TR. In fact, we have from Equation (12)
$$\|x_0\|^2=\sum_{i=1}^{n}\left(\frac{\beta_i}{\sigma_i}\right)^2.$$
When $\beta_n\ne 0$ and $\sigma_n=\sigma_{\min}\to 0$, we have $\|x_0\|\to\infty$. Equation (15) is used to control $\|x\|$ from becoming large, and thus to reduce the severe instability. The solution of (15) can be represented by
$$x_\alpha=(A^{T}A+\alpha^{2}I)^{-1}A^{T}b. \qquad (16)$$
Hence, better stability can be achieved by preventing the very large values of $\|x_\alpha\|$ that arise from $\sigma_{\min}$ (tending to 0). From the SVD (10), we have
$$x_\alpha=\sum_{i=1}^{n}\frac{\sigma_i\beta_i}{\sigma_i^2+\alpha^2}\,v_i. \qquad (17)$$
In this paper, we assume that the parameter $\alpha$ satisfies
$$\sigma_{\min}\le\alpha\le\sigma_{\max}, \qquad \sigma_{\min}\ll\sigma_{\max}. \qquad (18)$$
Equation (17) is called the TR solution. Both Equations (13) and (17) can overcome the instability caused by the small $\sigma_{\min}$.
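The two forms (16) and (17) of the TR solution are algebraically identical, which is easy to confirm numerically; in the following sketch the small test system is invented:

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """TR solution via the SVD filter form (17)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    return Vt.T @ (s * beta / (s**2 + alpha**2))

def tikhonov_normal(A, b, alpha):
    """TR solution via the regularized normal equations (16)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

A = np.array([[1.0, 0.0],
              [0.0, 1e-4],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])
x_a = tikhonov_svd(A, b, 1e-3)
```

For severely ill-conditioned matrices the SVD form (17) is preferable in practice, since forming $A^{T}A$ squares the condition number.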
3. NEW ESTIMATES OF THE CONDITION NUMBER AND THE EFFECTIVE CONDITION NUMBER
For the TSVD of (13), the condition number and the effective condition number for the matrix $A$ are given by
$$\mathrm{Cond}_k(A)=\frac{\sigma_{\max}}{\sigma_k} \qquad (19)$$
and
$$\mathrm{Cond\_eff}_k(A)=\frac{\|b\|}{\sigma_k\|x_k\|}. \qquad (20)$$
When $k=n$, $\mathrm{Cond}_n(A)=\mathrm{Cond}(A)$ and $\mathrm{Cond\_eff}_n(A)=\mathrm{Cond\_eff}(A)$. Evidently, $\mathrm{Cond\_eff}_k(A)$ in (20) is smaller or much smaller than $\mathrm{Cond}_k(A)$ in (19).
Now we consider the TR in (17). Denote the singular values of the matrix of TR by
$$\bar{\sigma}_i=\sigma_i+\frac{\alpha^2}{\sigma_i}. \qquad (21)$$
Then Equation (17) can be rewritten as
$$x_\alpha=\sum_{i=1}^{n}\frac{\beta_i}{\bar{\sigma}_i}\,v_i. \qquad (22)$$
The condition number and the effective condition number for TR are then defined by
$$\mathrm{Cond}_\alpha(A)=\frac{\max_i\bar{\sigma}_i}{\min_i\bar{\sigma}_i} \qquad (23)$$
and
$$\mathrm{Cond\_eff}_\alpha(A)=\frac{\|b\|}{(\min_i\bar{\sigma}_i)\,\|x_\alpha\|}. \qquad (24)$$
Based on the definitions in (19), (20), (23) and (24), we obtain
$$\mathrm{Cond\_eff}_k(A)\le \mathrm{Cond}_k(A), \qquad (25)$$
$$\mathrm{Cond\_eff}_\alpha(A)\le \mathrm{Cond}_\alpha(A). \qquad (26)$$
Hence, the effective condition number is smaller than the condition number.
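These definitions are easy to check numerically. In the following sketch, the diagonal test matrix is invented and $\alpha=10^{-3}$ satisfies (18); the effective condition numbers indeed do not exceed their classical counterparts:

```python
import numpy as np

A = np.diag([1.0, 1e-2, 1e-4])    # sigma_max = 1, sigma_min = 1e-4
b = np.ones(3)
alpha = 1e-3                      # sigma_min <= alpha <= sigma_max, as in (18)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b
k = 2

# TSVD quantities (19)-(20)
x_k = Vt[:k].T @ (beta[:k] / s[:k])
cond_k = s[0] / s[k-1]
cond_eff_k = np.linalg.norm(b) / (s[k-1] * np.linalg.norm(x_k))

# TR quantities (21)-(24), with sigma_bar_i = sigma_i + alpha^2/sigma_i
s_bar = s + alpha**2 / s
x_a = Vt.T @ (beta / s_bar)
cond_a = s_bar.max() / s_bar.min()
cond_eff_a = np.linalg.norm(b) / (s_bar.min() * np.linalg.norm(x_a))
# by the AM-GM inequality, each sigma_bar_i >= 2*alpha
```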
Next, we have the following lemma.
Lemma 3.1
If (18) holds, then $\bar{\sigma}_{\min}=\min_i\bar{\sigma}_i\ge 2\alpha$.
Proof
Define the function
$$f(y)=y+\frac{\alpha^2}{y}, \qquad y\in[\sigma_{\min},\sigma_{\max}]. \qquad (27)$$
The singular values $\bar{\sigma}_i$ in (21) are then equal to $f(\sigma_i)$. By the basic inequality,
$$f(y)=y+\frac{\alpha^2}{y}\ge 2\sqrt{y\cdot\frac{\alpha^2}{y}}=2\alpha,$$
where the minimum in (21) is attained if $\alpha=\sigma_i$ for some $i$.
Lemma 3.2
Let (18) hold. Then
$$\max_i\bar{\sigma}_i=\sigma_{\max}+\frac{\alpha^2}{\sigma_{\max}} \quad \text{if } \alpha\le\sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (28)$$
and
$$\max_i\bar{\sigma}_i=\sigma_{\min}+\frac{\alpha^2}{\sigma_{\min}} \quad \text{if } \alpha\ge\sqrt{\sigma_{\min}\sigma_{\max}}. \qquad (29)$$
Proof
From $f(y)$ in (27), we may seek the extreme values of the continuous function $f(y)$ for $y\in[\sigma_{\min},\sigma_{\max}]$, where $\sigma_1=\sigma_{\max}$ and $\sigma_n=\sigma_{\min}$. The stationary point is given by
$$0=f'(y)=1-\frac{\alpha^2}{y^2}, \qquad (30)$$
which implies $y=\alpha\in[\sigma_{\min},\sigma_{\max}]$ by (18).
Since the stationary point is a minimum, the largest value must occur at one of the two boundary points, i.e.
$$\max_{y\in[\sigma_{\min},\sigma_{\max}]}f(y)=\max\{f(\sigma_{\min}),\,f(\sigma_{\max})\}. \qquad (31)$$
To show (28), it is sufficient to prove
$$f(\sigma_{\max})\ge f(\sigma_{\min}), \qquad (32)$$
i.e.
$$\sigma_{\max}+\frac{\alpha^2}{\sigma_{\max}}\ge\sigma_{\min}+\frac{\alpha^2}{\sigma_{\min}}, \qquad (33)$$
which is equivalent to
$$\sigma_{\max}-\sigma_{\min}\ge\alpha^2\left(\frac{1}{\sigma_{\min}}-\frac{1}{\sigma_{\max}}\right)=\alpha^2\,\frac{\sigma_{\max}-\sigma_{\min}}{\sigma_{\min}\sigma_{\max}}. \qquad (34)$$
Equation (34) holds if and only if
$$\alpha\le\sqrt{\sigma_{\min}\sigma_{\max}}. \qquad (35)$$
This is the first desired result (28). Relation (29) can be proven in a similar manner.
From Lemma 3.2, we obtain the following theorem.
Theorem 3.1
If (18) holds, then the effective condition number and the condition number for TR satisfy
$$\mathrm{Cond\_eff}_\alpha(A)\le\frac{\|b\|}{2\alpha\,\|x_\alpha\|}, \qquad (36)$$
$$\mathrm{Cond}_\alpha(A)\le\frac{\sigma_{\max}^2+\alpha^2}{2\alpha\,\sigma_{\max}} \quad \text{if } \alpha\le\sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (37)$$
$$\mathrm{Cond}_\alpha(A)\le\frac{\sigma_{\min}^2+\alpha^2}{2\alpha\,\sigma_{\min}} \quad \text{if } \alpha\ge\sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (38)$$
where the equality in (36)–(38) is attained if $\alpha=\sigma_i$ for some $i$.
Corollary 3.1
Let (18) hold for $\bar{\sigma}_i$ in (21). Then the following bound holds:
$$\frac{\|\Delta x_\alpha\|}{\|x_\alpha\|}\le \mathrm{Cond}_\alpha(A)\,\frac{\|\Delta b\|}{\|b_0\|}, \qquad (39)$$
where $\Delta b$ is a perturbation of the vector $b$, $b_0$ is the projection of the vector $b$ onto the range space of $A$, and the condition number is bounded, as in [12], by
$$\mathrm{Cond}_\alpha(A)\le\frac{\sigma_{\max}}{\alpha} \quad \text{if } \alpha\le\sqrt{\sigma_{\min}\sigma_{\max}}, \qquad (40)$$
$$\mathrm{Cond}_\alpha(A)\le\frac{\alpha}{\sigma_{\min}} \quad \text{if } \alpha\ge\sqrt{\sigma_{\min}\sigma_{\max}}. \qquad (41)$$
Proof
When $\alpha\le\sqrt{\sigma_{\min}\sigma_{\max}}$, we have from Theorem 3.1 and (18)
$$\mathrm{Cond}_\alpha(A)\le\frac{\sigma_{\max}^2+\alpha^2}{2\alpha\,\sigma_{\max}}\le\frac{\sigma_{\max}^2+\sigma_{\max}^2}{2\alpha\,\sigma_{\max}}=\frac{\sigma_{\max}}{\alpha}, \qquad (42)$$
since $\alpha\le\sigma_{\max}$. This is the first desired result (40).
Similarly, when $\alpha\ge\sqrt{\sigma_{\min}\sigma_{\max}}$, also from Theorem 3.1 and (18) we can deduce
$$\mathrm{Cond}_\alpha(A)\le\frac{\sigma_{\min}^2+\alpha^2}{2\alpha\,\sigma_{\min}}=\frac{\alpha}{2\sigma_{\min}}\left(\frac{\sigma_{\min}^2}{\alpha^2}+1\right)\le\frac{\alpha}{\sigma_{\min}}, \qquad (43)$$
since $\sigma_{\min}\le\alpha$. This is the second desired result (41).
Corollary 3.1 shows that $\mathrm{Cond}_\alpha(A)<\mathrm{Cond}(A)$ when $\sigma_{\min}<\alpha<\sigma_{\max}$.
A relation between the condition numbers for the TSVD and the TR is presented in the following corollary.
Corollary 3.2
Suppose that (18) holds. When $\alpha=\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\mathrm{Cond}_\alpha(A)=\frac{\sigma_{\max}^2+\sigma_k^2}{2\sigma_k\sigma_{\max}}=\frac{1}{2}\left[\mathrm{Cond}_k(A)+\frac{1}{\mathrm{Cond}_k(A)}\right], \qquad (44)$$
and when $\alpha=\sigma_k\ge\sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\mathrm{Cond}_\alpha(A)=\frac{\sigma_{\min}^2+\sigma_k^2}{2\sigma_k\sigma_{\min}}=\frac{1}{2}\left[\frac{\mathrm{Cond}_k(A)}{\mathrm{Cond}(A)}+\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_k(A)}\right]. \qquad (45)$$
Corollary 3.2 shows that when $\alpha=\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}$, $\mathrm{Cond}_\alpha(A)$ is close to $\frac{1}{2}\mathrm{Cond}_k(A)$.
Corollary 3.3
Let (18) hold. Then the following bounds hold:
$$\|x_k\|\le\frac{\|b\|}{\sigma_k}, \qquad (46)$$
$$\|x_\alpha\|\le\frac{\|b\|}{2\alpha}. \qquad (47)$$
Proof
Since $\sigma_i\ge\sigma_k$ for $i\le k$, from (13) we have
$$\|x_k\|^2=\sum_{i=1}^{k}\frac{\beta_i^2}{\sigma_i^2}\le\frac{1}{\sigma_k^2}\sum_{i=1}^{k}\beta_i^2\le\frac{1}{\sigma_k^2}\sum_{i=1}^{n}\beta_i^2\le\frac{1}{\sigma_k^2}\sum_{i=1}^{m}\beta_i^2=\frac{\|b\|^2}{\sigma_k^2}, \qquad (48)$$
where
$$\|b\|=\sqrt{\sum_{i=1}^{m}\beta_i^2}. \qquad (49)$$
This gives the first desired result (46).
Now, from Lemma 3.1 and Equation (22) we obtain
$$\|x_\alpha\|^2=\sum_{i=1}^{n}\left(\frac{\beta_i}{\bar{\sigma}_i}\right)^2\le\frac{1}{(\min_i\bar{\sigma}_i)^2}\sum_{i=1}^{n}\beta_i^2\le\frac{1}{(2\alpha)^2}\sum_{i=1}^{n}\beta_i^2\le\frac{1}{(2\alpha)^2}\,\|b\|^2. \qquad (50)$$
This deduces the second desired result (47).
Corollary 3.3 implies that for $\|b\|=O(1)$,
$$\|x_k\|=O\!\left(\frac{1}{\sigma_k}\right), \qquad \|x_\alpha\|=O\!\left(\frac{1}{\alpha}\right). \qquad (51)$$
When $\sigma_k\gg\sigma_{\min}$ and $\alpha\gg\sigma_{\min}$, the norms of $x_k$ and $x_\alpha$ are much smaller than $\|x_0\|$.
Theorem 3.2
Let (18) hold. For the TSVD,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_k(A)}=\frac{\sigma_k}{\sigma_{\min}}. \qquad (52)$$
For the TR, when $\alpha=\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_\alpha(A)}=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{1+\dfrac{1}{\mathrm{Cond}(A)}\,\dfrac{\alpha^2}{\sigma_{\min}\sigma_{\max}}}\right], \qquad (53)$$
and when $\alpha=\sigma_k\ge\sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_\alpha(A)}=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{\dfrac{1}{\mathrm{Cond}(A)}+\dfrac{\alpha^2}{\sigma_{\min}\sigma_{\max}}}\right]. \qquad (54)$$
Proof
We have, from $\sigma_k\ge\sigma_{\min}$,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_k(A)}=\frac{\sigma_{\max}/\sigma_{\min}}{\sigma_{\max}/\sigma_k}=\frac{\sigma_k}{\sigma_{\min}}\ge 1. \qquad (55)$$
This is the first result (52).
Next we consider $\mathrm{Cond}(A)/\mathrm{Cond}_\alpha(A)$. When $\alpha=\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}$, we have the equality from Theorem 3.1,
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_\alpha(A)}=\frac{\sigma_{\max}/\sigma_{\min}}{(\sigma_{\max}^2+\alpha^2)/(2\alpha\sigma_{\max})}=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{1+\left(\dfrac{\alpha}{\sigma_{\max}}\right)^{2}}\right]=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{1+\dfrac{\sigma_{\min}}{\sigma_{\max}}\,\dfrac{\alpha^2}{\sigma_{\min}\sigma_{\max}}}\right]=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{1+\dfrac{1}{\mathrm{Cond}(A)}\,\dfrac{\alpha^2}{\sigma_{\min}\sigma_{\max}}}\right]. \qquad (56)$$
Similarly, when $\alpha=\sigma_k\ge\sqrt{\sigma_{\min}\sigma_{\max}}$, from Theorem 3.1 we have
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_\alpha(A)}=\frac{\sigma_{\max}/\sigma_{\min}}{(\sigma_{\min}^2+\alpha^2)/(2\alpha\sigma_{\min})}=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{\dfrac{\sigma_{\min}}{\sigma_{\max}}+\dfrac{\alpha^2}{\sigma_{\min}\sigma_{\max}}}\right]=\frac{\alpha}{\sigma_{\min}}\left[\frac{2}{\dfrac{1}{\mathrm{Cond}(A)}+\dfrac{\alpha^2}{\sigma_{\min}\sigma_{\max}}}\right]. \qquad (57)$$
Remark 3.1
Let (18) hold. We assume that
$$\alpha=\sigma_k=\sqrt{\sigma_{\min}\sigma_{\max}}, \qquad \sigma_{\min}\ll\sigma_{\max}. \qquad (58)$$
Then the following bounds hold:
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_k(A)}\gg 1 \qquad (59)$$
and
$$\frac{\mathrm{Cond}(A)}{\mathrm{Cond}_\alpha(A)}\gg 2. \qquad (60)$$
Another relation between the condition numbers for the TSVD and the TR is derived in the following theorem.
Theorem 3.3
Let (18) hold and $\alpha=\sigma_k$. Then there holds the approximation, when $\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}$,
$$\mathrm{Cond}_\alpha(A)\approx\frac{1}{2}\,\mathrm{Cond}_k(A), \qquad (61)$$
and when $\sigma_k\ge\sqrt{\sigma_{\min}\sigma_{\max}}$, the following bounds hold:
$$\mathrm{Cond}_\alpha(A)\le\mathrm{Cond}_k(A) \quad \text{if } \sigma_k\le\sqrt{2\sigma_{\min}\sigma_{\max}-\sigma_{\min}^2}, \qquad (62)$$
$$\mathrm{Cond}_\alpha(A)\ge\mathrm{Cond}_k(A) \quad \text{if } \sigma_k\ge\sqrt{2\sigma_{\min}\sigma_{\max}-\sigma_{\min}^2}. \qquad (63)$$
Proof
When $\alpha=\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}$, we derive from (19) and (37), with the equality,
$$\mathrm{Cond}_\alpha(A)=\frac{\sigma_{\max}^2+\sigma_k^2}{2\sigma_k\sigma_{\max}}=\frac{\sigma_{\max}}{2\sigma_k}\left(1+\frac{\sigma_k^2}{\sigma_{\max}^2}\right)\approx\frac{1}{2}\,\mathrm{Cond}_k(A), \qquad (64)$$
by noting that $\sigma_k\le\sqrt{\sigma_{\min}\sigma_{\max}}\ll\sigma_{\max}$.
Next, when $\alpha=\sigma_k\ge\sqrt{\sigma_{\min}\sigma_{\max}}$, we have from (19) and (38), with the equality, that
$$\mathrm{Cond}_k(A)-\mathrm{Cond}_\alpha(A)=\frac{\sigma_{\max}}{\sigma_k}-\frac{\sigma_{\min}^2+\sigma_k^2}{2\sigma_k\sigma_{\min}}=\frac{1}{2\sigma_k\sigma_{\min}}\left(2\sigma_{\max}\sigma_{\min}-\sigma_{\min}^2-\sigma_k^2\right). \qquad (65)$$
When
$$\sigma_k\le\sqrt{2\sigma_{\max}\sigma_{\min}-\sigma_{\min}^2},$$
Equation (65) leads to $\mathrm{Cond}_k(A)-\mathrm{Cond}_\alpha(A)\ge 0$, i.e. $\mathrm{Cond}_\alpha(A)\le\mathrm{Cond}_k(A)$. On the other hand, when
$$\sigma_k\ge\sqrt{2\sigma_{\max}\sigma_{\min}-\sigma_{\min}^2},$$
Equation (65) leads to $\mathrm{Cond}_k(A)-\mathrm{Cond}_\alpha(A)\le 0$, i.e. $\mathrm{Cond}_\alpha(A)\ge\mathrm{Cond}_k(A)$.
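The approximation (61) is easy to observe numerically. In the following sketch, the singular values are invented so that $\sigma_k=10^{-6}\le\sqrt{\sigma_{\min}\sigma_{\max}}=10^{-5}$:

```python
import numpy as np

s = np.array([1.0, 1e-6, 1e-10])   # sigma_max = 1, sigma_min = 1e-10
k = 2
alpha = s[k-1]                     # alpha = sigma_k <= sqrt(sigma_min*sigma_max)

cond_k = s[0] / s[k-1]             # (19): sigma_max/sigma_k = 1e6
s_bar = s + alpha**2 / s           # (21)
cond_a = s_bar.max() / s_bar.min() # (23)
ratio = cond_a / (0.5 * cond_k)    # should be close to 1 by (61)
```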
4. BRIEF ERROR ANALYSIS
The aim of this paper is to apply the regularization to numerical PDE, where the minimal singular value of the discrete matrices is close to zero. The PDE problems discussed here are assumed to be well-posed; however, this does not hold for the corresponding numerical solution methods, such as the MFS, where the computation of the solution requires high accuracy. When solving systems of linear equations, to control the accuracy, one usually estimates the difference between the original and the computed regularized solutions. When solving PDEs numerically, the latter difference also involves the discretization error (cf. e.g. [8]). In this paper we use 'error' as a general measure of the accuracy of the numerically computed discrete solution of the original problem.
First, we consider the relative error (or the discrepancy)
$$\Delta_k=\frac{\|x_0-x_k\|}{\|x_0\|} \qquad (66)$$
for the TSVD. We have
$$x_0-x_k=\sum_{i=k+1}^{n}\frac{\beta_i}{\sigma_i}\,v_i. \qquad (67)$$
Lemma 4.1
Let $k<n$ and
$$\Delta_k=\frac{\|x_0-x_k\|}{\|x_0\|}\le\varepsilon. \qquad (68)$$
A necessary condition for relation (68) is
$$\frac{|\beta_i|}{\sigma_i}\le\varepsilon\,\|x_0\|, \qquad i\ge k+1. \qquad (69)$$
Proof
We have from Equation (67) that
$$\|x_0-x_k\|=\left[\sum_{i=k+1}^{n}\left(\frac{\beta_i}{\sigma_i}\right)^2\right]^{1/2}. \qquad (70)$$
Hence, from (68) we obtain
$$\frac{|\beta_i|}{\sigma_i}\le\|x_0-x_k\|\le\varepsilon\,\|x_0\|, \qquad i=k+1,k+2,\ldots,n. \qquad (71)$$
When $\beta_i=u_i^{T}b$ ($i>k$) is not small, the necessary condition (69) fails for a small $\varepsilon$, and the relative error of $x_k$ by the TSVD may not be small, either.
Next, we consider the relative error for the TR,
$$\Delta_\alpha=\frac{\|x_0-x_\alpha\|}{\|x_0\|}. \qquad (72)$$
For the TR, the error analysis is more complicated. We have the following theorem.
Theorem 4.1
Let (18) hold. Then the following bounds hold for the TR:
$$\frac{\alpha^2}{\sigma_{\max}^2+\alpha^2}\le\frac{\|x_0-x_\alpha\|}{\|x_0\|}\le\frac{\alpha^2}{\sigma_{\min}^2+\alpha^2}. \qquad (73)$$
Proof
From Equation (17) we have
$$x_0-x_\alpha=\sum_{i=1}^{n}\left(1-\frac{\sigma_i^2}{\sigma_i^2+\alpha^2}\right)\frac{\beta_i}{\sigma_i}\,v_i=\sum_{i=1}^{n}\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)\frac{\beta_i}{\sigma_i}\,v_i. \qquad (74)$$
Hence we derive the upper bound in (73),
$$\|x_0-x_\alpha\|^2=\sum_{i=1}^{n}\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)^{2}\left(\frac{\beta_i}{\sigma_i}\right)^{2}\le\max_i\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)^{2}\sum_{i=1}^{n}\left(\frac{\beta_i}{\sigma_i}\right)^{2}=\left(\frac{\alpha^2}{\sigma_{\min}^2+\alpha^2}\right)^{2}\|x_0\|^2. \qquad (75)$$
Next, from (74),
$$\|x_0-x_\alpha\|^2\ge\min_i\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)^{2}\sum_{i=1}^{n}\left(\frac{\beta_i}{\sigma_i}\right)^{2}=\left(\frac{\alpha^2}{\sigma_{\max}^2+\alpha^2}\right)^{2}\|x_0\|^2, \qquad (76)$$
which gives the lower bound in (73).
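A quick numerical check of (73), with invented diagonal data satisfying (18):

```python
import numpy as np

s = np.array([1.0, 1e-2, 1e-4])    # singular values of A = diag(s)
beta = np.ones(3)                  # beta_i = u_i^T b for b = (1,1,1)
alpha = 1e-3                       # sigma_min <= alpha <= sigma_max

x0 = beta / s                      # (12)
xa = s * beta / (s**2 + alpha**2)  # (17)
rel = np.linalg.norm(x0 - xa) / np.linalg.norm(x0)

lower = alpha**2 / (s[0]**2 + alpha**2)    # left side of (73)
upper = alpha**2 / (s[-1]**2 + alpha**2)   # right side of (73)
```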
Corollary 4.1
Let
$$\alpha=\sqrt{\sigma_{\min}\sigma_{\max}}, \qquad \sigma_{\min}\ll\sigma_{\max}. \qquad (77)$$
Then the following bounds hold:
$$\frac{1}{\mathrm{Cond}(A)}\lesssim\frac{\|x_0-x_\alpha\|}{\|x_0\|}<1, \qquad (78)$$
where $\mathrm{Cond}(A)=\sigma_{\max}/\sigma_{\min}$.
Proof
From the assumption (77), we have
$$\frac{\alpha^2}{\sigma_{\min}^2+\alpha^2}=\frac{\sigma_{\max}}{\sigma_{\min}+\sigma_{\max}}<1 \qquad (79)$$
and
$$\frac{\alpha^2}{\sigma_{\max}^2+\alpha^2}=\frac{\sigma_{\min}}{\sigma_{\min}+\sigma_{\max}}=\frac{1}{\dfrac{\sigma_{\max}}{\sigma_{\min}}\left(1+\dfrac{\sigma_{\min}}{\sigma_{\max}}\right)}\approx\frac{1}{\mathrm{Cond}(A)}.$$
The desired result (78) then follows from Theorem 4.1.
Let $b=\sum_{i=1}^{m}\beta_i u_i$, where $\beta_i=u_i^{T}b$. Denote
$$b_0=Ax_0=\sum_{i=1}^{n}\beta_i u_i, \qquad \tilde{b}=\sum_{i=n+1}^{m}\beta_i u_i, \qquad (80)$$
where $m\ge n$ is defined in Equation (10). Then we have
$$\|b\|=\sqrt{\sum_{i=1}^{m}\beta_i^2}, \qquad \|b_0\|=\sqrt{\sum_{i=1}^{n}\beta_i^2}, \qquad \|\tilde{b}\|=\sqrt{\sum_{i=n+1}^{m}\beta_i^2}, \qquad (81)$$
and
$$\|b\|^2=\|b_0\|^2+\|\tilde{b}\|^2. \qquad (82)$$
We get the following theorem.
Theorem 4.2
When (18) holds, the following bounds hold:
$$\frac{\alpha^2}{\sigma_{\max}^2+\alpha^2}\le\frac{\|b_0-Ax_\alpha\|}{\|b_0\|}\le\frac{\alpha^2}{\sigma_{\min}^2+\alpha^2}. \qquad (83)$$
Proof
We have
$$\|Ax_\alpha-b_0\|^2=\left\|U^{T}AV\,V^{T}x_\alpha-U^{T}b_0\right\|^2=\left\|\sum_{i=1}^{n}\left(\frac{\sigma_i^2}{\sigma_i^2+\alpha^2}-1\right)\beta_i u_i\right\|^2=\left\|\sum_{i=1}^{n}\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)\beta_i u_i\right\|^2=\sum_{i=1}^{n}\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)^{2}\beta_i^2. \qquad (84)$$
Then
$$\left(\frac{\alpha^2}{\sigma_{\max}^2+\alpha^2}\right)^{2}\sum_{i=1}^{n}\beta_i^2\le\sum_{i=1}^{n}\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)^{2}\beta_i^2\le\left(\frac{\alpha^2}{\sigma_{\min}^2+\alpha^2}\right)^{2}\sum_{i=1}^{n}\beta_i^2. \qquad (85)$$
The desired result (83) follows from (84) and (81).
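The residual bounds (83) can be checked with the same invented toy data in the SVD basis, taking $b=b_0$ so that $\tilde{b}=0$:

```python
import numpy as np

s = np.array([1.0, 1e-2, 1e-4])      # singular values of A = diag(s)
beta = np.ones(3)                    # b = b_0 here, so beta = U^T b_0
alpha = 1e-3                         # satisfies (18)

xa = s * beta / (s**2 + alpha**2)    # TR solution (17) in the SVD basis
# relative residual ||b_0 - A x_alpha||/||b_0||, computed componentwise via (84)
resid = np.linalg.norm(beta - s * xa) / np.linalg.norm(beta)

lower = alpha**2 / (s[0]**2 + alpha**2)
upper = alpha**2 / (s[-1]**2 + alpha**2)
```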
From Theorems 4.1 and 4.2, although the errors are bounded by the factor $\alpha^2/(\sigma_{\min}^2+\alpha^2)\,(<1)$, they may not be small, which is undesirable for numerical PDE. For problems with noisy data, the exact solutions may not exist, or are meaningless even if they exist. The useful solutions, as in the case of image processing, may allow a certain range of errors, which may not be very small, though. More analysis is given by Hansen [12].
Remark 4.1
From (80) and (84), we have
$$\|Ax_\alpha-b\|^2=\|\tilde{b}\|^2+\|b_0-Ax_\alpha\|^2, \qquad (86)$$
where
$$\|b_0-Ax_\alpha\|^2=\sum_{i=1}^{n}\left(\frac{\alpha^2}{\sigma_i^2+\alpha^2}\right)^{2}\beta_i^2. \qquad (87)$$
Then we have
$$\|Ax_\alpha-b\|^2\to\|\tilde{b}\|^2+\sum_{i=1}^{n}\beta_i^2=\|\tilde{b}\|^2+\|b_0\|^2=\|b\|^2 \quad \text{as } \alpha\to\infty.$$
Since a large $\alpha$ cannot reduce the errors, we should choose $\alpha\le\sigma_{\max}$. Moreover, from (21), when $\alpha<\sigma_{\min}$, the minimal singular value satisfies $\bar{\sigma}_{\min}=O(\sigma_{\min})$; hence we choose $\alpha\ge\sigma_{\min}$. For both accuracy and stability of the TR solutions, we conclude that the assumption (18) for $\alpha$ is reasonable.
5. NUMERICAL TESTS
The instability of the MFS is very severe. In this section, we investigate the discrete Laplace operator by the MFS, carrying out numerical experiments to verify our analysis.
Figure 1. A rectangular domain.
5.1. The MFS
Consider the Dirichlet problem of the Laplace operator
$$\Delta u=0 \ \text{in } S, \qquad u=g \ \text{on } \partial S, \qquad (88)$$
where $S=\{(x,y)\,|\,-1<x<1,\ 0<y<1\}$. We choose the smooth solution
$$u=\sin(k\pi x)\sinh(k\pi y), \qquad k=1 \text{ or } 2. \qquad (89)$$
The Dirichlet boundary conditions are given explicitly by (see Figure 1)
$$u|_{\overline{AB}\cup\overline{CD}\cup\overline{AD}}=0, \qquad u|_{\overline{BC}}=g=\sin(k\pi x)\sinh(k\pi). \qquad (90)$$
Let $G(0,\frac{1}{2})$ be the center of the polar coordinates $(r,\theta)$; then $r_{\max}=\max_{\bar{S}}r=\overline{GB}=\frac{\sqrt{5}}{2}$. We use the MFS in Li [31] and the stability analysis in [9]. Choose the source points $Q_i=\{(r,\theta)\,|\,r=R,\ \theta=ih\}$ uniformly on the circle, where $R>r_{\max}$ and $h=2\pi/N$. The fundamental solutions are given by
$$\phi_i(P)=\ln|\overline{PQ_i}|, \qquad i=1,2,\ldots,N, \qquad (91)$$
where $P\in S\cup\partial S$. We choose their linear combination
$$u_N=\sum_{i=1}^{N}c_i\phi_i(P) \qquad (92)$$
as the approximate solution of (88), where the $c_i$ are the coefficients to be determined. Since the functions (92) are harmonic, we may establish the collocation equations directly by satisfying the Dirichlet boundary conditions (90). Hence we have
$$u_N(P_j)=\sum_{i=1}^{N}c_i\phi_i(P_j)=0, \qquad P_j\in\overline{AB}\cup\overline{CD}\cup\overline{AD}, \qquad (93)$$
$$u_N(P_j)=\sum_{i=1}^{N}c_i\phi_i(P_j)=g(P_j), \qquad P_j\in\overline{BC}. \qquad (94)$$
For simplicity, we choose uniform collocation points $P_j$. Such an approach can be described as the LS method: find $u_N\in V_L$ such that [32]
$$I(u_N)=\min_{v\in V_L}I(v), \qquad (95)$$
where $V_L$ is the set of the approximate solutions (92), and
$$I(v)=\int_{\Gamma}(v-g)^2\,\mathrm{d}\ell. \qquad (96)$$
In (96), $\Gamma=\partial S$, $g=0$ on $\overline{AB}\cup\overline{CD}\cup\overline{AD}$, $g=\sin(k\pi x)\sinh(k\pi)$ on $\overline{BC}$, and the approximation of $\int_{\Gamma}$ is given by the central rule. We may also establish the collocation equations by the Gaussian rule,
$$\sqrt{w_j}\sum_{i=1}^{N}c_i\phi_i(P_j)=0, \qquad P_j\in\overline{AB}\cup\overline{CD}\cup\overline{AD}, \qquad (97)$$
$$\sqrt{w_j}\sum_{i=1}^{N}c_i\phi_i(P_j)=\sqrt{w_j}\,g(P_j), \qquad P_j\in\overline{BC}, \qquad (98)$$
where $w_j$ and $P_j$ are the weights and integration nodes, respectively. Let $M$ denote the number of uniform collocation nodes along $\overline{AB}$. Hence the total number of collocation equations is $6M$; see Figure 1. Choose $6M>N$ and $R=2>\frac{\sqrt{5}}{2}$. Then Equations (97) and (98), as well as (93) and (94), can be represented by the over-determined linear system (1).
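The assembly of the over-determined system (93)–(94) can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the sizes N = 24, M = 12 are invented and much smaller than in the tables, the side labels follow my reading of Figure 1, and the relative boundary residual is reported instead of the error norm of (99).

```python
import numpy as np

N, M, k, R = 24, 12, 1, 2.0

t = 2 * np.pi * np.arange(N) / N                           # theta_i = i*h, h = 2*pi/N
Q = np.column_stack([R * np.cos(t), 0.5 + R * np.sin(t)])  # sources on circle around G(0, 1/2)

xs = np.linspace(-1.0, 1.0, 2 * M, endpoint=False)
ys = np.linspace(0.0, 1.0, M, endpoint=False)
P = np.vstack([
    np.column_stack([xs, np.zeros_like(xs)]),   # bottom side, u = 0
    np.column_stack([xs, np.ones_like(xs)]),    # top side BC, u = g
    np.column_stack([-np.ones_like(ys), ys]),   # left side, u = 0
    np.column_stack([np.ones_like(ys), ys]),    # right side, u = 0
])                                              # 6M collocation points in all

g = np.sin(k * np.pi * P[:, 0]) * np.sinh(k * np.pi * P[:, 1])        # trace of (89)
Amat = np.log(np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2))  # phi_i(P_j), eq. (91)
c = np.linalg.lstsq(Amat, g, rcond=None)[0]     # LS solution of the system (1)
res = np.linalg.norm(Amat @ c - g) / np.linalg.norm(g)
```

The least squares solver here already truncates tiny singular values at machine precision, which is why it copes with the severe ill-conditioning discussed next.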
The errors and condition numbers are given in Table I, where
$$\|\varepsilon\|_B=\left\{\int_{\Gamma}(u_N-g)^2\,\mathrm{d}\ell\right\}^{1/2} \qquad (99)$$
denotes the real errors of the numerical solutions; it is more important than the discrepancy $\|x_0-x_\alpha\|/\|x_0\|$ for numerical PDE. We will focus on the error $\|\varepsilon\|_B$, which is also different from the existing literature. From Table I, we can see
$$\|\varepsilon\|_B=O(0.57^{N}), \qquad \sigma_{\max}=O(1), \qquad \sigma_{\min}=O(0.5^{N}),$$
$$\mathrm{Cond}(A)=O(2.04^{N}), \qquad \mathrm{Cond\_eff}(A)=O(1.47^{N}), \qquad \|x\|=O(1.48^{N}).$$
5.2. The regularization
Since $\sigma_{\min}$ is close to zero, the coefficients $c_i$ are very large and the ill-conditioning is very severe. To reduce the ill-conditioning, we use the TSVD and the TR. First, choose the truncation index $k$ in the TSVD with $N=71$ and $M=50$; the errors and condition numbers are listed in Table II. In Table II, when $k=71$, the solution is the same as that of Table I with $N=71$ and $M=50$. Evidently, when $k=57$, the norm of the solution $x_k$ with coefficients $c_i$ is reduced from $O(10^{8})$ to $O(10^{2})$, and $\mathrm{Cond}_k(A)$ from $O(10^{24})$ to $O(10^{17})$. When $k=57$, the errors $\|\varepsilon\|_B=O(10^{-15})$ increase only by a factor of 10, whereas the effective condition number $\mathrm{Cond\_eff}_k(A)=O(10^{14})$ decreases by a factor of 200.
Next, we choose $\alpha=\sigma_m$ in the TR; the errors and condition numbers are listed in Table III. We list the data of Table III for $\alpha=\sigma_{57}$:
$$\|\varepsilon\|_B=0.172(-14), \qquad \mathrm{Cond}_\alpha(A)=0.319(17), \qquad \|x_\alpha\|=178. \qquad (100)$$
In (100), the solution norm $\|x_\alpha\|$ is small, while $\mathrm{Cond}_\alpha(A)=0.319(17)$ is still huge.
Table I. The error norms and condition numbers by the MFS for the Laplace operator with R=2. (Here 0.272(-5) denotes 0.272 x 10^{-5}.)

N             28          42          56          71
M             20          30          40          50
||e||_B       0.272(-5)   0.122(-8)   0.673(-12)  0.171(-15)
||x||         0.143(4)    0.139(7)    0.267(7)    0.331(9)
||b||         2.579       2.108       1.826       1.633
sigma_max     1.331       1.331       1.331       1.340
sigma_min     0.414(-10)  0.360(-15)  0.241(-20)  0.487(-25)
Cond(A)       0.321(11)   0.370(16)   0.552(21)   0.275(26)
Cond_eff(A)   0.435(8)    0.421(10)   0.283(15)   0.101(18)
Table II. Using the TSVD by the MFS for the Laplace operator with N=71 and M=50.

k                     71          69          65          59          57          55          51          50
||e||_B               0.171(-15)  0.181(-15)  0.204(-15)  0.949(-15)  0.171(-14)  0.259(-14)  0.922(-12)  0.111(-11)
||x_0 - x_k||         0           0.331(9)    0.331(9)    0.331(9)    0.331(9)    0.331(9)    0.331(9)    0.331(9)
||x_0 - x_k||/||x_0|| 0           0.999       0.999       1           1           1           1           1
||x_k||               0.331(9)    0.653(7)    0.785(6)    0.213(4)    179         177         177         177
sigma_max             1.34        1.34        1.34        1.34        1.34        1.34        1.34        1.34
sigma_k               0.487(-25)  0.102(-22)  0.197(-21)  0.614(-18)  0.209(-16)  0.198(-13)  0.876(-12)  0.353(-11)
Cond_k(A)             0.275(26)   0.131(24)   0.679(22)   0.218(19)   0.639(17)   0.678(14)   0.153(13)   0.379(12)
Cond_eff_k(A)         0.101(18)   0.245(17)   0.105(17)   0.125(16)   0.434(15)   0.468(12)   0.105(11)   0.262(10)
Table III. Using the TR by the MFS for the Laplace operator with N=71, M=50 and alpha = sigma_m.

m (alpha = sigma_m)   71          69          65          59          57          55          51          50
||e||_B               0.171(-15)  0.183(-15)  0.206(-15)  0.114(-14)  0.172(-14)  0.123(-13)  0.683(-12)  0.234(-11)
||x_alpha||/||x_0||   0.504       0.113(-1)   0.217(-2)   0.331(-5)   0.536(-6)   0.534(-6)   0.534(-6)   0.534(-6)
sigma_max             1.34        1.34        1.34        1.34        1.34        1.34        1.34        1.34
sigma_m               0.489(-25)  0.102(-22)  0.197(-21)  0.614(-18)  0.209(-16)  0.198(-14)  0.876(-12)  0.353(-11)
Moreover, there holds
$$\mathrm{Cond}_\alpha(A)\approx\frac{1}{2}\,\mathrm{Cond}_k(A), \qquad (101)$$
completely consistent with (61) in Theorem 3.3. The $\mathrm{Cond}_k(A)$ and Cond