Abstract
We study the application of a successive discretization algorithm to the
Tikhonov regularization of the inverse problem of electrical impedance
tomography. Instead of exact data, we now consider perturbations in the
measured data, and we analyze global convergence to the solution of the
continuous problem while working with its discrete approximations,
considering both a priori and Morozov's a posteriori choices of the
regularization parameter. The approach can be seen as a regularization
method for the continuous inverse problem itself, acting through a
sequential regularization of the discretized versions.
1 Introduction
In [7] we proposed an algorithm for the well-known inverse problem of electrical
impedance tomography with exact data and analyzed its convergence properties.
The direct problem consists of the potential equation $\operatorname{div}(\varepsilon\nabla u)=0$
in the unit disk $\Omega$, with a Neumann boundary condition, describing the behavior
of the electrostatic potential $u(x,y)$ in a medium with conductivity $\varepsilon(x,y)$.
We suppose that at each time a current $\psi_i$ is applied to the boundary $\Gamma$
of the disk (Neumann data) and that it is possible to measure the corresponding
potential $\varphi_i$ (Dirichlet data), but now with some errors. The inverse problem
is to find $\varepsilon(x,y)$, given a finite number of measured Cauchy pairs $(\varphi_i,\psi_i)$,
$i=1,\dots,N$, using Tikhonov's regularization.
The traditional approaches to solve this nonlinear inverse problem try to
regularize one discretized problem, obtained by finite difference or finite element
methods, and solve it by diverse optimization algorithms, such as Newton or
Gauss-Newton methods or more sophisticated SQP methods, sometimes
Universidad Católica de la Santísima Concepción, Chile
† Universidad de la Frontera, Temuco, Chile
with the inclusion of error estimations. In an incomplete list we can mention
[3], [4], [8], [12], [13], [17], [15], and more references can be found in the
survey [27].
Our work employs similar tools, but aims to solve the continuous problem
through successive discrete problems, using the ideas given in [7]. Incidentally,
the results explain the fact that the solution of a well-discretized problem is a
good approximation of the regularized continuous solution, and they also give
us some tools to judge the discretization. Ideas close to ours were recently
developed in [16], [18], but with different assumptions and results.
In [7] the continuous least squares problem:
$$J(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_i(\varepsilon)-\varphi_i\right|^{2}dS\;\to\;\min_{\varepsilon\in E_{ad}},\qquad(1)$$
was considered, where the $F_i$ are the operators which associate each parameter
function $\varepsilon$ with its corresponding potential $u_i|_{\Gamma}$, the solution of the Neumann
problem restricted to the boundary, and $E_{ad}$ is a convex closed set.
Analogously, the discretized version of the problem was also considered:
$$J_h(\varepsilon_h)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_{ih}(\varepsilon_h)-\varphi_{ih}\right|^{2}dS\;\to\;\min_{\varepsilon_h\in\mathbb{R}^{NT_h}},\qquad(2)$$
where $F_{ih}$ associates each parameter vector $\varepsilon_h$ with its corresponding discrete
solution $u_{ih}|_{\Gamma}$, restricted to the boundary. Here the functions $u_{ih},\varphi_{ih}$ are the
approximations of $u_i,\varphi_i$ obtained by applying the finite element method to
the Neumann problem, choosing a regular mesh with $NT_h$ triangles, and $\varepsilon_h$
is the piecewise approximation of $\varepsilon$, considered constant on each triangle.
The algorithm in [7] finds an approximation of the continuous solution $\varepsilon^{*}$
of (1) by applying a least squares optimization method to successive discretized
problems (2), with an increasing number of triangles $NT_h$, while controlling the
mesh size by checking Wolfe's global convergence conditions for the continuous
problem (see also [11], [21]).
Fundamental conditions for the algorithm's success are the existence of a continuous
solution $\varepsilon^{*}$ and the convergence of the gradients in the $L^{1}$ norm (see Lemma 5.1
in [7]):
$$\nabla J_h(\varepsilon_h)\underset{h\to 0}{\longrightarrow}\nabla J(\varepsilon^{*}),\quad\text{if }\varepsilon_h\to\varepsilon^{*}.$$
2 Regularization of the continuous problem
We shall analyze the application of the algorithm proposed in [7] to the regularized
problem. To this end we consider the following Tikhonov functional:
$$J^{\alpha,\delta}(\varepsilon)=J^{\delta}(\varepsilon)+\frac{\alpha}{2}\left\|\varepsilon-\varepsilon_{0}\right\|_{L^{2}(\Omega)}^{2}\;\to\;\min,\qquad(3)$$
where $\Omega$ is the open unit ball, $\Gamma$ its boundary, $\varepsilon_{0}\in L^{2}(\Omega)$ represents
a priori information about the unknown conductivity $\varepsilon\in L^{\infty}(\Omega)$, and $\alpha>0$
is the regularization parameter. The functional $J^{\delta}(\varepsilon)$ is associated with the
variational formulation of the potential equation problem:
$$\operatorname{div}(\varepsilon\nabla u)=0\ \text{in }\Omega,\qquad \varepsilon\frac{\partial u}{\partial n}=\psi\ \text{on }\Gamma,$$
and since the $\psi_i$ are currents applied to the boundary, they are considered
free of errors. To simplify notation, we shall identify the class $u_i$ with any
representative $u_i\in u_i$.
The gradient of $J^{\alpha,\delta}(\varepsilon)$, as a function in $L^{1}(\Omega)$, is given by:
$$\nabla J^{\alpha,\delta}(\varepsilon)=\nabla J^{\delta}(\varepsilon)+\alpha(\varepsilon-\varepsilon_{0})\in L^{1}(\Omega),\qquad(7)$$
with:
$$\nabla J^{\delta}(\varepsilon)=\sum_{i=1}^{N}\nabla u_i\cdot\nabla p_i,\qquad(8)$$
which has been calculated in [7], where $u_i$ is a solution of (5) and $p_i\in V$ satisfies
the variational adjoint equation:
$$\int_{\Omega}\varepsilon\nabla p_i\cdot\nabla v\,dx=\int_{\Gamma}(\varphi_i^{\delta}-u_i)v\,dS,\quad\forall v\in V,\ 1\le i\le N.\qquad(9)$$
We formally define:
$$F_i:L^{2}(\Omega)\to L^{2}(\Gamma),\quad 1\le i\le N,\quad\text{as }F_i(\varepsilon)=u_i|_{\Gamma},\qquad(10)$$
i.e. they are the operators which associate each parameter function $\varepsilon$ with its
corresponding potential $u_i$, the solution of (5), restricted to the boundary. With
more precision, the $F_i$ shall be defined on the convex closed set $E_{ad}$.
In addition, we also need to work with $L^{2}$ Hilbert spaces in order to apply the
results of convergence for nonlinear regularization problems.
The formulation of the continuous inverse problem with exact data $\varphi_i$ is:
$$F_i(\varepsilon)=\varphi_i,\quad 1\le i\le N,\quad\varepsilon\in E_{ad},\qquad(12)$$
and because the $F_i$ are nonlinear, problem (12) does not necessarily have a solution
and is ill-posed in the sense of Hadamard. Instead, a least squares problem is
always considered:
$$J(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_i(\varepsilon)-\varphi_i\right|^{2}dS\;\to\;\min,\quad\varepsilon\in E_{ad},\qquad(13)$$
and again, by nonlinearity, problem (13) does not necessarily have a unique solution.
With inexact data, the regularized continuous problems can be formulated
as follows:
$$P^{\alpha,\delta}:\quad J^{\alpha,\delta}(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_i(\varepsilon)-\varphi_i^{\delta}\right|^{2}dS+\frac{\alpha}{2}\left\|\varepsilon-\varepsilon_{0}\right\|_{L^{2}(\Omega)}^{2}\;\to\;\min,\quad\varepsilon\in E_{ad}.\qquad(14)$$
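As a toy illustration of the structure of (14), the following sketch minimizes a Tikhonov functional for a simple nonlinear forward map on $\mathbb{R}^{n}$ by gradient descent. The map `F`, the data, and all constants are hypothetical stand-ins, not the paper's EIT operators:

```python
import numpy as np

# Toy illustration of the Tikhonov functional (14). The EIT forward operators
# F_i are replaced by one simple nonlinear map F on R^n, so F, eps_true,
# phi_delta and all constants below are hypothetical stand-ins.
def F(eps):
    return eps + 0.1 * eps**2              # assumed nonlinear forward map

rng = np.random.default_rng(0)
n = 50
eps_true = np.linspace(1.0, 2.0, n)        # "conductivity" to recover
delta = 1e-2                               # noise level
phi_delta = F(eps_true) + delta * rng.standard_normal(n)

def grad_J(eps, alpha, eps0):
    # gradient of 0.5*||F(eps)-phi_delta||^2 + 0.5*alpha*||eps-eps0||^2,
    # by the chain rule with F'(eps) = 1 + 0.2*eps (diagonal)
    return (1.0 + 0.2 * eps) * (F(eps) - phi_delta) + alpha * (eps - eps0)

eps0 = np.ones(n)                          # a priori guess eps_0
eps = eps0.copy()
alpha = 1e-2
for _ in range(2000):                      # plain gradient descent on (14)
    eps -= 0.5 * grad_J(eps, alpha, eps0)

print(np.max(np.abs(eps - eps_true)))      # small reconstruction error
```

The penalty term keeps the iteration stable under the data noise, at the price of a small bias toward `eps0`, which is exactly the trade-off that the choice of $\alpha$ controls below.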
Theorem 1: The functions $F_i(\varepsilon)$, $1\le i\le N$, are continuous with respect to the
$L^{2}$ norms and are weakly closed, i.e. if $\varepsilon_n\rightharpoonup\varepsilon$, $y_n\rightharpoonup y$, $y_n=F_i(\varepsilon_n)$, then
$F_i(\varepsilon)=y$.
Moreover, if $\alpha=\alpha(\delta)$ is chosen in such a way that:
$$\alpha(\delta)\to 0\quad\text{and}\quad\frac{\delta^{2}}{\alpha(\delta)}\to 0\quad\text{when }\delta\to 0,\qquad(15)$$
then problem (14) has a (not necessarily unique) solution, and every set of solutions
$\varepsilon^{\alpha,\delta}$ has a convergent sequence $\{\varepsilon^{\alpha_k,\delta_k}\}$ when $\delta_k\to 0$, with $\alpha_k:=\alpha(\delta_k)$.
The limit of any convergent sequence of the set $\varepsilon^{\alpha,\delta}$ is a solution of problem
(13); furthermore, if (13) has a unique solution $\varepsilon^{*}$, then $\lim_{\delta\to 0}\varepsilon^{\alpha(\delta),\delta}=\varepsilon^{*}$.
Proof: The result is a consequence of Theorem 10.3 in [10]. Norm
continuity of the operators $F_i$ is known (see [9]), and it remains to show the weak
closedness of the $F_i$. We use the generic notation $F=F_i$ and prove that $F$ is in
fact weakly continuous.
First, note that if $\varepsilon_1,\varepsilon_2\in E_{ad}$ then $|\varepsilon_1-\varepsilon_2|\le M-K$. Let $\{\varepsilon_n\}\subset E_{ad}$
be such that:
$$\varepsilon_n\rightharpoonup\varepsilon\iff\int_{\Omega}\varepsilon_n v\,dx\to\int_{\Omega}\varepsilon v\,dx,\quad\forall v\in L^{2}(\Omega);\qquad(16)$$
we must prove that:
$$\left\|F(\varepsilon_n)-F(\varepsilon)\right\|_{L^{2}(\Gamma)}\underset{n\to\infty}{\longrightarrow}0.\qquad(19)$$
With the norm $\|v\|_{V}=\|\nabla v\|_{L^{2}(\Omega)}$, we apply the trace theorem [20].
From (18) it follows that:
$$\int_{\Omega}(\varepsilon_n\nabla u_n-\varepsilon\nabla u)\cdot\nabla v\,dx=0,\quad\forall v\in V.$$
$E_{ad}$ is a convex closed set, hence weakly closed, and $\varepsilon_n,\varepsilon\in E_{ad}$. Hence $G_n$
is continuous for all $n$, as a consequence of the inequalities:
$$|G_n(v)|\le\int_{\Omega}|\varepsilon_n-\varepsilon|\,|\nabla u\cdot\nabla v|\,dx\le(M-K)\|\nabla u\|_{L^{2}}\|\nabla v\|_{L^{2}},\quad\forall v\in V,\ \forall n\in\mathbb{N},$$
and:
$$\left\|F(\varepsilon_n)-F(\varepsilon)\right\|_{L^{2}(\Gamma)}=\left\|u_n|_{\Gamma}-u|_{\Gamma}\right\|_{L^{2}(\Gamma)}\le C(\Omega)\left\|G_n\right\|_{V'},\qquad(24)$$
but from (22) it follows that $\nabla g_n=(\varepsilon-\varepsilon_n)\nabla u$ a.e., and therefore:
$$\left\|G_n\right\|_{V'}^{2}=\left\|g_n\right\|_{V}^{2}=\langle g_n,g_n\rangle_{V}=\int_{\Omega}|\nabla g_n|^{2}dx=\int_{\Omega}|\varepsilon_n-\varepsilon|^{2}|\nabla u|^{2}dx.\qquad(25)$$
Defining:
$$\Omega^{+}=\bigcup_{n=1}^{\infty}\{x\in\Omega:\varepsilon_n-\varepsilon\ge 0\},\qquad\Omega^{-}=\bigcup_{n=1}^{\infty}\{x\in\Omega:\varepsilon_n-\varepsilon<0\},$$
$$\Longrightarrow\int_{\Omega}|\varepsilon_n-\varepsilon|^{2}|\nabla u|^{2}dx\le(M-K)\left\{\int_{\Omega}(\varepsilon_n-\varepsilon)|\nabla u|^{2}\chi_{\Omega^{+}}dx+\int_{\Omega}(\varepsilon-\varepsilon_n)|\nabla u|^{2}\chi_{\Omega^{-}}dx\right\},\qquad(26)$$
where $\chi_{\Omega^{\pm}}$ denotes the indicator functions of the measurable sets $\Omega^{\pm}$.
Since $L^{2}(\Omega)$ is a dense subset of $L^{1}(\Omega)$, we have:
$$\varepsilon_n\rightharpoonup\varepsilon\ \text{in }L^{2}(\Omega)\Longrightarrow\varepsilon_n\rightharpoonup\varepsilon\ \text{in }L^{1}(\Omega).$$
Indeed, given $v\in L^{1}(\Omega)$ and $\eta>0$, choose $\tilde v\in L^{2}(\Omega)$ such that
$(M-K)\int_{\Omega}|v-\tilde v|\,dx<\eta/2$, and $n_0$ such that for $n\ge n_0$:
$$\left|\int_{\Omega}(\varepsilon_n-\varepsilon)\tilde v\,dx\right|<\frac{\eta}{2},$$
and then:
$$\left|\int_{\Omega}(\varepsilon_n-\varepsilon)v\,dx\right|\le\int_{\Omega}|\varepsilon_n-\varepsilon|\,|v-\tilde v|\,dx+\left|\int_{\Omega}(\varepsilon_n-\varepsilon)\tilde v\,dx\right|\le(M-K)\int_{\Omega}|v-\tilde v|\,dx+\left|\int_{\Omega}(\varepsilon_n-\varepsilon)\tilde v\,dx\right|<\frac{\eta}{2}+\frac{\eta}{2}=\eta.$$
Now, taking into account that the functions $|\nabla u|^{2}\chi_{\Omega^{\pm}}\in L^{1}(\Omega)$, we conclude:
$$\int_{\Omega}(\varepsilon_n-\varepsilon)|\nabla u|^{2}\chi_{\Omega^{+}}dx+\int_{\Omega}(\varepsilon-\varepsilon_n)|\nabla u|^{2}\chi_{\Omega^{-}}dx\to 0,\quad\text{if }n\to\infty,$$
and from (25),(26):
$$\left\|G_n\right\|_{V'}^{2}=\int_{\Omega}|\varepsilon_n-\varepsilon|^{2}|\nabla u|^{2}dx\to 0,\quad\text{if }n\to\infty.$$
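The a priori parameter choice (15) can be sanity-checked numerically: the rule $\alpha(\delta)=c\,\delta^{p}$ satisfies both requirements of (15) for any $0<p<2$. The values $c=1$, $p=1$ below are arbitrary illustrative choices, not taken from the paper:

```python
# Sanity check of the a priori parameter choice (15): alpha(delta) -> 0 and
# delta^2/alpha(delta) -> 0 as delta -> 0. The rule alpha(delta) = c*delta**p
# works for any 0 < p < 2; c and p are arbitrary illustrative values.
def alpha(delta, c=1.0, p=1.0):
    return c * delta**p

deltas = [10.0**(-k) for k in range(1, 8)]
alphas = [alpha(d) for d in deltas]
ratios = [d**2 / alpha(d) for d in deltas]

print(alphas)   # decreases to 0
print(ratios)   # delta^2/alpha(delta) = delta, also decreases to 0
```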
Consider a regular triangulation:
$$\tau_h=\left\{T_1,T_2,\dots,T_{NT_h}\right\},\quad T_i\cap T_j=\varnothing,\ i\neq j,\quad\bigcup_{j=1}^{NT_h}\overline{T}_j=\overline{\Omega},$$
corresponding to the application of the finite element method for the numerical
solution of the Neumann problem (5); $\alpha_h$ is the regularization parameter and
$\|\cdot\|_{h}$ is the norm in $\mathbb{R}^{NT_h}$ defined by:
$$\left\|\varepsilon_h-\varepsilon_{0h}\right\|_{h}^{2}=(\varepsilon_h-\varepsilon_{0h})^{t}D_h(\varepsilon_h-\varepsilon_{0h}),$$
where $D_h$ is the matrix induced by the triangulation, and $\varepsilon_h,\varepsilon_{0h}\in\mathbb{R}^{NT_h}$ are
the vectors representing the piecewise constant discretizations, with respect to
the triangular mesh, of the functions $\varepsilon$ and $\varepsilon_0$.
We denote by $\varepsilon_h(\cdot)$ the canonical extension of the vector $\varepsilon_h$ to a function of
$L^{\infty}(\Omega)$, defined a.e. by:
$$\varepsilon_h(x,y)=(\varepsilon_h)_k,\quad\forall(x,y)\in T_k,\ \forall T_k\in\tau_h,\ k=1,2,\dots,NT_h,$$
and analogously for $\varepsilon_{0h}$. We denote by $\mathbb{R}^{NT_h}(\cdot)$ the set of all functions which are
canonical extensions of vectors in $\mathbb{R}^{NT_h}$, for each fixed triangulation $\tau_h$.
We also assume that $\varepsilon_0\in L^{2}(\Omega)$ is selected in such a way that:
$$\left\|\varepsilon_0-\varepsilon_{0h}(\cdot)\right\|_{L^{2}(\Omega)}\to 0,\quad\text{when }h\to 0.$$
The discrete potentials are the solutions of the discrete variational Neumann problems
(cf. (36)):
$$\int_{\Omega}\varepsilon_h(\cdot)\nabla u_{ih}\cdot\nabla v_h\,dx=\int_{\Gamma}\psi_i v_h\,dS,\quad\forall v_h\in V_h,\qquad u_{ih}\in V_h,\ 1\le i\le N.\qquad(29)$$
As is usual in the finite element approach, equation (29) is equivalent to the
solution of a system of linear equations $Az=b$; its solution vector $z$ contains the
coefficients of a linear combination of a basis $\{\eta_1,\dots,\eta_{N_h}\}$, $N_h$ being the
number of nodes in the triangulation $\tau_h$, which approximates the continuous
solution through the equality:
$$u(x)\approx u_{ih}(x)=\sum_{j=1}^{N_h}z_j\eta_j(x),\quad x\in\Omega.\qquad(30)$$
The gradient of $J_h^{\alpha_h,\delta}$ is given by:
$$\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot)=\nabla J_h^{\delta}(\varepsilon_h)(\cdot)+\alpha_h\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right),\qquad(34)$$
where the expression for the first term in (34) was obtained in [7]:
$$\nabla J_h^{\delta}(\varepsilon_h)(\cdot)=\sum_{i=1}^{N}\nabla u_{ih}\cdot\nabla p_{ih}.\qquad(35)$$
Here $u_{ih},p_{ih}\in V_h$ are the respective solutions of the discrete variational
equations:
$$\int_{\Omega}\varepsilon_h(\cdot)\nabla u_{ih}\cdot\nabla v_h\,dx=\int_{\Gamma}\psi_i v_h\,dS,\qquad
\int_{\Omega}\varepsilon_h(\cdot)\nabla p_{ih}\cdot\nabla v_h\,dx=\int_{\Gamma}(\varphi_{ih}-u_{ih})v_h\,dS,\quad\forall v_h\in V_h.\qquad(36)$$
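The adjoint structure of (34)-(36) can be checked on a small discrete model: for $A(\varepsilon_h)z=b$ with $J=\frac{1}{2}\|z-\varphi\|^{2}$, the gradient component is $p^{t}\,(\partial A/\partial(\varepsilon_h)_k)\,z$, with the adjoint $p$ solving $A^{t}p=\varphi-z$, mirroring the products $\nabla u_{ih}\cdot\nabla p_{ih}$ in (35). The matrices below are synthetic stand-ins, not the paper's FEM matrices; the finite-difference comparison only verifies the formula:

```python
import numpy as np

# Discrete analogue of the adjoint gradient (35)-(36) for A(eps) z = b,
# J(eps) = 0.5*||z - phi||^2:  dJ/deps_k = p^T (dA/deps_k) z, where the
# adjoint p solves A(eps)^T p = phi - z (data (phi - z), as in (36)).
def stiffness(eps):
    # tridiagonal "stiffness": A = sum_k eps_k * E_k with local blocks E_k
    n = len(eps)
    A = np.zeros((n + 1, n + 1))
    for k in range(n):
        A[k:k+2, k:k+2] += eps[k] * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A + np.eye(n + 1)            # zero-order term keeps A nonsingular

def grad_J(eps, b, phi):
    A = stiffness(eps)
    z = np.linalg.solve(A, b)           # state equation
    p = np.linalg.solve(A.T, phi - z)   # adjoint equation
    E = np.array([[1.0, -1.0], [-1.0, 1.0]])
    # dA/deps_k acts only on the k-th local block, cf. grad(u).grad(p) in (35)
    return np.array([p[k:k+2] @ E @ z[k:k+2] for k in range(len(eps))])

rng = np.random.default_rng(1)
eps = 1.0 + rng.random(8); b = rng.random(9); phi = rng.random(9)
g = grad_J(eps, b, phi)

def J(eps):
    z = np.linalg.solve(stiffness(eps), b)
    return 0.5 * np.sum((z - phi)**2)

t = 1e-6                                # central finite-difference check of one component
fd = (J(eps + t * np.eye(8)[3]) - J(eps - t * np.eye(8)[3])) / (2 * t)
print(g[3], fd)                         # the two values agree
```

One state solve and one adjoint solve give the full gradient, regardless of the number of parameters, which is why the adjoint formulation is preferred over component-wise finite differences.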
4 Convergence of Gradients
Theorem 2: Suppose $\{\varepsilon_h\}_{h>0}\subset\mathbb{R}^{NT_h}(\cdot)$ converges to $\varepsilon$ in $L^{\infty}(\Omega)$ when $h\to 0$,
and the parameters $\alpha,\alpha_h$ are chosen such that:
Then we have:
and $\varepsilon^{\alpha,\delta}$ satisfies the first order optimality conditions for the regularized
continuous problem (14).
Proof: From (7),(34) and the triangle inequality we have:
$$\left\|\alpha(\varepsilon-\varepsilon_0)-\alpha_h\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right)\right\|_{L^{1}(\Omega)}\le|\alpha-\alpha_h|\left[\|\varepsilon\|_{L^{2}(\Omega)}+\|\varepsilon_0\|_{L^{2}(\Omega)}\right]+|\alpha_h|\left[\|\varepsilon-\varepsilon_h(\cdot)\|_{L^{2}(\Omega)}+\|\varepsilon_0-\varepsilon_{0h}(\cdot)\|_{L^{2}(\Omega)}\right].\qquad(42)$$
By (39) we obtain, for sequences $h_k,\delta_k\to 0$, that:
$$\frac{\delta_k^{2}}{\alpha_{h_k}}\ge\theta,\ \forall k\Longrightarrow\frac{\delta_k^{2}}{\alpha(\delta_k)}>\frac{\theta}{2},\ \forall k\ge k_0,$$
and this contradicts (15) for $\alpha(\delta)$.
Theorem 3: Let $\{\varepsilon_h^{\alpha_h,\delta}\}$ be a set of solutions of the regularized discrete
problems (27), where $h\in(0,\bar{h})$. Suppose $\alpha,\alpha_h(\delta)$ are chosen satisfying (39),(40),
and $\varepsilon_h^{\alpha_h(\delta),\delta}\underset{\delta\to 0}{\longrightarrow}\varepsilon$ in $L^{\infty}(\Omega)$; then:
$$\left\|\nabla J_h^{\alpha_h,\delta}\left(\varepsilon_h^{\alpha_h,\delta}\right)-\nabla J(\varepsilon)\right\|_{L^{1}(\Omega)}\to 0,\quad\text{if }\delta\to 0.$$
Moreover we have:
and therefore every accumulation point $\bar{\varepsilon}$ of $\{\varepsilon_h^{\alpha_h,\delta}\}$, for $\delta\to 0$, satisfies
the optimality conditions for the continuous problem (13), and $\bar{\varepsilon}=\varepsilon^{*}$ in the case
of a unique solution $\varepsilon^{*}$.
Proof: By the triangle inequality we have:
$$\left\|\nabla J_h^{\alpha_h,\delta}\left(\varepsilon_h^{\alpha_h,\delta}\right)-\nabla J(\varepsilon)\right\|_{L^{1}(\Omega)}\le\left\|\nabla J_h^{\alpha_h,\delta}\left(\varepsilon_h^{\alpha_h,\delta}\right)-\nabla J^{\alpha,\delta}(\varepsilon)\right\|_{L^{1}(\Omega)}+\left\|\nabla J^{\alpha,\delta}(\varepsilon)-\nabla J(\varepsilon)\right\|_{L^{1}(\Omega)},$$
and:
$$\left\|\nabla J_h^{\alpha_h,\delta}\left(\varepsilon_h^{\alpha_h,\delta}\right)-\nabla J^{\alpha,\delta}(\varepsilon)\right\|_{L^{1}(\Omega)}\underset{h\to 0}{\longrightarrow}0\quad\text{for each }\delta>0;$$
$$\nabla J^{\alpha(\delta),\delta}(\varepsilon)=\sum_{i=1}^{N}\nabla u_i\cdot\nabla p_i^{\delta}+\alpha(\delta)(\varepsilon-\varepsilon_0),\qquad(44)$$
$$\int_{\Omega}\varepsilon\nabla p_i^{\delta}\cdot\nabla v\,dx=\int_{\Gamma}(\varphi_i^{\delta}-u_i)v\,dS,\quad\forall v\in V,\qquad(46)$$
$$\int_{\Omega}\varepsilon\nabla p_i\cdot\nabla v\,dx=\int_{\Gamma}(\varphi_i-u_i)v\,dS,\quad\forall v\in V,\qquad(47)$$
where $\varepsilon\in L^{\infty}(\Omega)$, $\varepsilon\ge K>0$ a.e., and $\varphi_i,\psi_i\in H^{\frac{1}{2}}(\Gamma)$.
Since the bilinear form associated with the last three variational problems
(45),(46) and (47) is the same continuous and coercive bilinear form:
$$a(w,v)=\int_{\Omega}\varepsilon\nabla w\cdot\nabla v\,dx,$$
rJ ( ); (") rJ(") L ( )
1
XN N
X
rui rpi rui rpi + ( ) (" "0 ) L1 ( )
;
i=1 i=1 L1 ( )
N
X
rui r(pi pi ) L1 ( )
+ ( ) " "0 L1 ( )
;
i=1
C ( )
kui kL2 ( ) k i kH 1 ; (49)
2 ( )
C ( )
r(pi pi ) L2 ( )
'i 'i 1 :
H 2 ( )
But we have:
therefore:
14
C ( )
rpi rpi L2 ( )
: (50)
The solutions $\varepsilon_h^{\alpha_h,\delta}$ satisfy the first order optimality conditions for problem
(27), and since $E_{ad}$ is a closed convex set, we have:
$$\nabla J_h^{\alpha_h,\delta}\left(\varepsilon_h^{\alpha_h,\delta}\right)\left[\nu-\varepsilon_h^{\alpha_h,\delta}\right]\ge 0,\quad\forall\nu\in E_{ad};$$
passing to the limit, $\bar{\varepsilon}$ satisfies the first order optimality conditions for problem (13).
Remark: As a consequence of Theorem 3, the algorithm given in [7] can
be applied to the regularized continuous problem (14). If $\alpha,\alpha_h(\delta)$ are chosen
to satisfy (15),(39),(40), letting $h\to 0$, the algorithm in [7], applied to
problem (14), can be interpreted as a regularization method for the continuous
problem (13), working through the regularization of successive discrete problems (27).
Remark: Note that the a priori selection of the parameters $\alpha,\alpha_h$ remains unsolved,
and the application of Theorem 10.3 in [10] demands the use of an unknown
vector $w$. An a posteriori selection of the regularizing parameters seems
preferable. Therefore, in the next section we examine the application of the
commonly used Morozov discrepancy method.
Furthermore, if $DF(\cdot)^{*}$ denotes the adjoint of the derivative of the operator $F$, and
assuming $\varepsilon\to DF(\varepsilon)^{*}$ is weakly/strongly continuous, i.e.
$$\varepsilon_n\rightharpoonup\varepsilon\ \text{in }L^{2}(\Omega)\Longrightarrow DF(\varepsilon_n)^{*}\eta\to DF(\varepsilon)^{*}\eta\ \text{in }L^{2}(\Omega),\quad\forall\eta\in L^{2}(\Gamma),\qquad(53)$$
then, in [22] (Theorem 2.7), it is shown that the set $\{\varepsilon^{\alpha_k,\delta}\}$ of optimal solutions
of a sequence of problems $P^{\alpha_k,\delta}$ shall have at least one convergent
subsequence in $L^{2}(\Omega)$.
Theorem 4: The operator $F$, defined in (10), satisfies property (53), if
$$\text{iii)}\ \int_{\Omega}\varepsilon_n\nabla u_n\cdot\nabla v\,dx=\int_{\Gamma}\psi v\,dS,\qquad\text{iv)}\ \int_{\Omega}\varepsilon\nabla u\cdot\nabla v\,dx=\int_{\Gamma}\psi v\,dS,$$
for all $v\in V$.
From i) and iii), repeating similar steps as in the proof of Theorem 1 after
(20), it follows that $z=(u-u_n)\in V$ is the unique solution of the variational
problem:
$$\int_{\Omega}\varepsilon_n\nabla z\cdot\nabla v\,dx=\int_{\Omega}(\varepsilon_n-\varepsilon)\nabla u\cdot\nabla v\,dx,\quad\forall v\in V,$$
with the same linear functional $G_n$ of (22). With the same arguments given
from (23) onwards, it can be shown that:
$$\nabla u_n\to\nabla u\ \text{in }L^{2}(\Omega),\quad n\to\infty,\qquad(55)$$
$$\nabla p_n\to\nabla p\ \text{in }L^{2}(\Omega),\quad n\to\infty,\qquad(56)$$
and, by Lemma 5.1 in [9], if $\varepsilon_n,\varepsilon\in C^{k,1}(\overline{\Omega})$ for some $k\in\mathbb{Z}$, $k>0$, then
$\nabla u_n,\nabla u\in L^{\infty}(\Omega)$ and the following inequalities hold:
Together with:
iii) $x^{\dagger}$ satisfies the regularity conditions [10]:
$$\exists\,w\in Y\ \text{such that}\ (x^{\dagger}-x_0)=F'(x^{\dagger})^{*}w,\qquad(60)$$
$$\text{with}\quad L\|w\|<\frac{1}{2}.\qquad(61)$$
Then, if the regularization parameter is selected by Morozov's criterion:
$$\left\|y^{\delta}-F(x^{\alpha,\delta})\right\|\le c_1\delta,\qquad(62)$$
the following estimate holds:
$$\left\|x^{\alpha,\delta}-x^{\dagger}\right\|\le\sqrt{\frac{2(1+c_1)\|w\|}{1-2L\|w\|}}\,\sqrt{\delta}.\qquad(63)$$
Proof:
The proof is based on that given in Theorem 10.4 of [10], combined with the
one given in Theorem 2.9 of [22]. In fact, condition (58) gives:
$$F(x^{\alpha,\delta})=F(x^{\dagger})+DF(x^{\dagger})(x^{\alpha,\delta}-x^{\dagger})+r^{\alpha,\delta},$$
where $\left\|r^{\alpha,\delta}\right\|\le L\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2}$. Now, following (10.10) in [10] and using (60) and
(61), we obtain:
$$\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|^{2}+\alpha\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2}\le\delta^{2}+2\alpha\delta\|w\|+2\alpha\|w\|\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|+2\alpha L\|w\|\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2},$$
then:
$$\alpha(1-2L\|w\|)\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2}\le\delta^{2}-\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|^{2}+2\alpha\|w\|\left(\delta+\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|\right),$$
and, since the discrepancy criterion gives $\delta\le\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|\le c_1\delta$, estimate (63)
follows.
5.0.1 Algorithm
Our first algorithm for problem $P^{\alpha,\delta}$ (see [7]) is a globally convergent
multidirectional descent algorithm, which obtains at each iteration an approximate
solution of the continuous problem, computed by solving the discretized problem
$P_h^{\alpha_h,\delta}$, until Wolfe's global convergence conditions are fulfilled for the
continuous problem.
The algorithm to determine the parameter $\alpha$ satisfying Morozov's principle
is developed by minimizing $J^{\alpha,\delta}$ for a set of regularizing parameters until the
desired inequality holds. Numbers $c_1>1$, $\alpha_0>0$, $0<q<1$, $\alpha_j=q^{j}\alpha_0$ are
chosen, and solutions $\varepsilon^{\alpha_j,\delta}$ are computed until:
$$\left\|\varphi^{\delta}-F(\varepsilon^{\alpha_j,\delta})\right\|\le c_1\delta.\qquad(64)$$
Both algorithms are combined, where the approximate solution $\varepsilon_h^{\alpha_j,\delta}\approx\varepsilon^{\alpha_j,\delta}$
of the continuous problem, computed by minimizing $J_h^{\alpha_j,\delta}$, is used to check
the Morozov inequality (64).
ALGORITHM 2:
Choose $c_1>1$, $0<q_0<1$, $\alpha_u>0$, $j=0$.
0) Set $\alpha_j=q_j\alpha_u$;
1) Compute $\varepsilon_h^{\alpha_j,\delta}$ using the algorithm in [7]:
1.1) Set $h_0=h_{init}$, $\varepsilon_0(\cdot)=\varepsilon_{h_0}(\cdot)\in L^{\infty}(\Omega)$, $l=0$, $k=0$.
1.2) If $\left\|\nabla J(\varepsilon_l(\cdot))\right\|_{L^{1}(\Omega)}\le\eta_0$ (a given tolerance) stop: the function
$\varepsilon_l(\cdot)=\varepsilon_h^{\alpha_j,\delta}$ is an approximation of a local minimum $\varepsilon^{\alpha_j,\delta}$ of the
continuous problem $P^{\alpha_j,\delta}$; go to step 2).
Otherwise, go to step 1.3.
1.3) Set $k\to k+1$, $\theta_k=\theta_{k-1}/2$, $h_k=h_{k-1}/2$.
1.4) Define $\varepsilon_{h_k}=\varepsilon_l$ at the new triangulation $\tau_{h_k}$.
1.5) Verify whether
$$\left\|\nabla J^{\alpha_j,\delta}(\varepsilon_{h_k}(\cdot))-\nabla J_{h_k}^{\alpha_j,\delta}(\varepsilon_{h_k})(\cdot)\right\|_{L^{1}(\Omega)}\le\theta_k\left\|\nabla J^{\alpha_j,\delta}(\varepsilon_{h_k}(\cdot))\right\|_{L^{1}(\Omega)}.$$
2) If $\delta\le\left\|\varphi^{\delta}-F(\varepsilon_h^{\alpha_j,\delta})\right\|\le c_1\delta$,
then $\varepsilon^{\alpha_j,\delta}=\varepsilon_h^{\alpha_j,\delta}$ is an approximation of the continuous problem which
satisfies Morozov's inequality (64) for the accepted regularization parameter
$\alpha_j$, stop;
else if $\left\|\varphi^{\delta}-F(\varepsilon_h^{\alpha_j,\delta})\right\|>c_1\delta$ then $q_{j+1}=q_0\,q_j$;
else $q_{j+1}=q_j+\frac{1-q_j}{2}$, $\alpha_j=\alpha_u$;
3) $\alpha_{j+1}=q_{j+1}\alpha_j$, $j=j+1$; go to step 1).
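The outer Morozov loop above can be sketched for a toy *linear* forward map $F(\varepsilon)=M\varepsilon$, for which each inner problem has a closed-form Tikhonov solution; the inner solver from [7] is replaced by this closed form, and $M$, the noise, and the constants $c_1,q,\alpha_u$ are illustrative assumptions. Only the geometric reduction of $\alpha_j$ and the Morozov test (64) are kept:

```python
import numpy as np

# Sketch of the Morozov loop of ALGORITHM 2 for a linear toy map F(eps) = M @ eps.
# Each inner problem P^{alpha_j,delta} is solved in closed form:
#   (M^T M + alpha I) eps = M^T phi_delta + alpha eps0,
# replacing the inner solver of [7]; alpha_j shrinks geometrically as in step 0).
rng = np.random.default_rng(2)
n = 30
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))
eps_true = np.linspace(1.0, 2.0, n)
noise = 1e-3 * rng.standard_normal(n)
delta = np.linalg.norm(noise)              # noise level ||phi_delta - phi||
phi_delta = M @ eps_true + noise
eps0 = np.ones(n)                          # a priori guess

def solve_inner(alpha):
    return np.linalg.solve(M.T @ M + alpha * np.eye(n),
                           M.T @ phi_delta + alpha * eps0)

c1, q, alpha_u = 2.0, 0.5, 1.0             # c1 > 1, 0 < q < 1
alpha_j = alpha_u
for j in range(200):
    eps_j = solve_inner(alpha_j)
    discrepancy = np.linalg.norm(phi_delta - M @ eps_j)
    if discrepancy <= c1 * delta:          # Morozov inequality (64): accept alpha_j
        break
    alpha_j *= q                           # shrink the regularization parameter

print(j, alpha_j, discrepancy)
```

For Tikhonov regularization the discrepancy is nondecreasing in $\alpha$ and tends to the noise floor as $\alpha\to 0$, so for this invertible toy map the loop always terminates.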
6 Conclusions
The algorithm given in [7] can be applied to the regularization of the continuous
problem (14), provided the regularization parameters $\alpha,\alpha_h$ can be chosen.
Moreover, if the a priori regularization parameters $\alpha,\alpha_h$ of both problems are
adequately chosen, then the regularized discrete solutions $\varepsilon_h^{\alpha_h,\delta}$ converge to the
solution $\varepsilon^{\alpha,\delta}$ of the regularized continuous problem. Theorem 3 shows that the
algorithm in [7] can be seen as a regularization method by itself. On the other
hand, the application of Morozov's principle gives a more practical approach, and
the new combined Algorithm 2 can be used.
In future work, we will analyze the applicability and implementation for our
problem of more general Tikhonov regularization schemes, such as those studied
in [2].
References
[1] Adler, A., R. Gaburro, W. Lionheart (2011). Electrical Impedance Tomography.
Chapter 14 in Handbook of Mathematical Methods in Imaging, Ed.
Otmar Scherzer, Springer Science+Business Media.
[2] Anzengruber, S.W., R. Ramlau (2010). Morozov's discrepancy principle
for Tikhonov-type functionals with non-linear operators. Inverse Problems,
26, 025001.
[3] Bakushinsky, A.B., M. Kokurin (2005). Iterative Methods for Approximate
Solution of Inverse Problems. Springer Verlag.
[4] Borcea, L. (2001). A nonlinear multigrid for imaging electrical conductivity
and permittivity at low frequency. Inverse Problems, 17:329-359.
[5] Borcea, L. (2002). Electrical impedance tomography. Topical Review,
Inverse Problems, 18:R99-R136.
[6] Butler, J.E., R.T. Bonnecaze (2000). Inverse method for imaging a free
surface using electrical impedance tomography. Chemical Engineering Science,
55:1193-1204.
[7] Carrillo, M., J.A. Gómez (2015). A globally convergent algorithm for a
PDE constrained optimization problem arising in electrical impedance
tomography. Numer. Funct. Anal. Optim., 36:748-776.
[8] Dalmasso, R. (2004). An inverse problem for an elliptic equation. Publ.
RIMS, Kyoto Univ., 40:91-123.
[21] Nocedal, J., S.J. Wright (1999). Numerical Optimization. Springer Series
in Operations Research, Springer Verlag.
[22] Ramlau, R. (2001). Morozov's discrepancy principle for Tikhonov
regularization of nonlinear operators. Zentrum für Technomathematik, Report
01-08, University of Bremen.