
On a discretization algorithm for regularization of the inverse problem in electrical impedance tomography

Mauricio Carrillo*  and  Juan A. Gómez†

Abstract
We study the application of a successive discretization algorithm to the Tikhonov regularization of the inverse problem of electrical impedance tomography. Instead of exact data, we now consider perturbations of the measured data and analyze global convergence to the solution of the continuous problem while working with its discrete approximations, considering both an a priori and Morozov's a posteriori choice of the regularization parameter. The approach can be seen as a regularization method for the continuous inverse problem itself, by way of a sequential regularization of its discretized versions.

Key words: inverse problem, Tikhonov regularization, least squares formulation, successive finite element discretization

1 Introduction
In [7] we proposed an algorithm for the well-known inverse problem of electrical impedance tomography with exact data, and analyzed its convergence properties. The direct problem consists of the potential equation div(\varepsilon \nabla u) = 0 in the unit circle \Omega, with a Neumann boundary condition, describing the behavior of the electrostatic potential u(x,y) in a medium with conductivity \varepsilon(x,y).
We suppose that at each time a current \psi_i is applied to the boundary \Gamma of the circle (Neumann data), and that it is possible to measure the corresponding potential \varphi_i (Dirichlet data), but now with some errors. The inverse problem is to find \varepsilon(x,y), given a finite number of Cauchy pair measurements (\varphi_i, \psi_i), i = 1,\dots,N, using Tikhonov regularization.
The traditional approaches to this nonlinear inverse problem regularize a single discretized problem, obtained by finite difference or finite element methods, and solve it by various optimization algorithms, such as Newton or Gauss-Newton methods or more sophisticated SQP methods, sometimes
* Universidad Católica de la Santísima Concepción, Chile.
† Universidad de la Frontera, Temuco, Chile.

with the inclusion of error estimations. In an incomplete list we can mention ([3], [4], [8], [12], [13], [15], [17]), and more references can be found in the survey [27].
Our work employs similar tools, but aims to solve the continuous problem through successive discrete problems, using the ideas given in [7]. Incidentally, the results explain the fact that the solution of a finely discretized problem is a good approximation of the regularized continuous solution, and they also give us some tools to judge the discretization. Ideas close to ours were recently developed in ([16], [18]), but with different assumptions and results.
In [7] the continuous least squares problem

    J(\varepsilon) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |F_i(\varepsilon) - \varphi_i|^2 \, dS \;\to\; \min_{\varepsilon \in E_{ad}}    (1)

was considered, where the F_i are the operators which associate each parameter function \varepsilon with its corresponding potential u_i|_{\Gamma}, the solution of the Neumann problem restricted to the boundary \Gamma, and E_{ad} is a closed convex set.
Analogously, the discretized version of the problem was also considered:

    J_h(\varepsilon_h) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |F_{ih}(\varepsilon_h) - \varphi_{ih}|^2 \, dS \;\to\; \min_{\varepsilon_h \in \mathbb{R}^{NT_h}},    (2)

where F_{ih} associates each parameter vector \varepsilon_h with its corresponding discrete solution u_{ih}|_{\Gamma}, restricted to the boundary. Here the functions u_{ih}, \varphi_{ih} are the approximations of u_i, \varphi_i obtained by applying the finite element method to the Neumann problem, choosing a regular mesh with NT_h triangles, and \varepsilon_h is the piecewise approximation of \varepsilon, considered constant on each triangle.
The algorithm in [7] finds an approximation of the continuous solution \varepsilon^* of (1) by applying a least squares optimization method to successive discretized problems (2), with an increasing number of triangles NT_h, while controlling the mesh size by checking Wolfe's global convergence conditions for the continuous problem (see also [11], [21]).
Fundamental conditions for the algorithm's success are the existence of a continuous solution \varepsilon^* and the convergence of the gradients in the L^1 norm (see Lemma 5.1 in [7]):

    \nabla J_h(\varepsilon_h) \;\to\; \nabla J(\varepsilon^*) \ \text{as } h \to 0, \quad \text{if } \varepsilon_h \to \varepsilon^*.

We shall extend this result in Section 4 to the least squares problems generated by the Tikhonov regularization method, and an extended formulation of the algorithm shall be given in Section 5 to deal with the selection of the regularization parameter. In the remaining sections, we formulate the regularized continuous problem in Section 2, including the analysis of the weak closedness of the operators F_i; in Section 3 we formulate the regularized discrete problems via the finite element method, and in Section 6 we comment on some conclusions and future work.

2 Regularization of the continuous problem
We shall analyze the application of the algorithm proposed in [7] to the regularized problem. To this end we consider the following Tikhonov functional:

    J^{\alpha,\delta}(\varepsilon) = J^{\delta}(\varepsilon) + \frac{\alpha}{2} \|\varepsilon - \varepsilon_0\|^2_{L^2(\Omega)} \;\to\; \min,    (3)

where \Omega is the open unit ball, \Gamma its boundary, \varepsilon \in L^\infty(\Omega), \varepsilon_0 \in L^2(\Omega) represents a priori information about the unknown conductivity \varepsilon^* \in L^\infty(\Omega), and \alpha > 0 is the regularization parameter. The functional J^{\delta}(\varepsilon) is associated with the variational formulation of the potential equation problem:

    div(\varepsilon \nabla u) = 0 \ \text{in } \Omega, \qquad \varepsilon \frac{\partial u}{\partial n} = \psi \ \text{on } \Gamma,

and is given by:

    J^{\delta}(\varepsilon) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |u_i - \varphi_i^{\delta}|^2 \, dS,    (4)

where u_i is the solution of the i-th variational equation:

    \int_{\Omega} \varepsilon \nabla u_i \cdot \nabla v \, dx = \int_{\Gamma} \psi_i v \, dS, \quad \forall v \in V, \qquad u_i \in V, \ 1 \le i \le N.    (5)

Here \varphi_i^{\delta} \in H^{1/2}(\Gamma), 1 \le i \le N, are the measured boundary data of the potential, which satisfy \|\varphi_i^{\delta} - \bar{\varphi}_i\|_{H^{1/2}(\Gamma)} \le \delta, where the \bar{\varphi}_i are supposed to be the exact data and \delta > 0 is the maximum error of the measurements. In what follows we shall assume that \varphi_i^{\delta}, \bar{\varphi}_i, \psi_i are affine functions on \Gamma.
V is the quotient space V = H^1(\Omega)/\mathbb{R} with norm \|u\|_V = \|\nabla v\|_{L^2(\Omega)}, \forall v \in u, where u denotes the equivalence class of u \in H^1(\Omega). The functions \psi_i \in H^{-1/2}(\Gamma) also satisfy the compatibility condition:

    \int_{\Gamma} \psi_i \, dS = 0, \quad 1 \le i \le N,    (6)

and since the \psi_i are currents applied to the boundary, they are considered to be free of errors. To simplify notation, we shall identify the class u_i with any representative u_i \in u_i.
The gradient of J^{\alpha,\delta}(\varepsilon), as a function in L^1(\Omega), is given by:

    \nabla J^{\alpha,\delta}(\varepsilon) = \nabla J^{\delta}(\varepsilon) + \alpha(\varepsilon - \varepsilon_0) \in L^1(\Omega),    (7)

with:

    \nabla J^{\delta}(\varepsilon) = -\sum_{i=1}^{N} \nabla u_i \cdot \nabla p_i,    (8)

which was calculated in [7], where u_i is a solution of (5) and p_i \in V satisfies the adjoint variational equation:

    \int_{\Omega} \varepsilon \nabla p_i \cdot \nabla v \, dx = \int_{\Gamma} (\varphi_i^{\delta} - u_i) v \, dS, \quad \forall v \in V, \ 1 \le i \le N.    (9)

We formally define:

    F_i : L^2(\Omega) \to L^2(\Gamma), \quad 1 \le i \le N, \quad \text{as } F_i(\varepsilon) = u_i|_{\Gamma},    (10)

i.e. they are the operators which associate each parameter function \varepsilon with its corresponding potential u_i, the solution of (5), restricted to the boundary. More precisely, F_i shall be defined on the set:

    E_{ad} = \{\varepsilon \in L^2(\Omega) : 0 < K \le \varepsilon \le M \ \text{a.e.}\},

for some constant K, in order to ensure existence and uniqueness for the Neumann problem, and with M large enough. It is well known that, for any \varepsilon \in E_{ad}, each i-th problem of (5)-(6) has a unique solution u_i in V, and by application of the Lax-Milgram lemma [24] the following inequality holds:

    \|u_i\|_V = \|\nabla u_i\|_{L^2(\Omega)} \le C \|\psi_i\|_{L^2(\Gamma)}, \quad \forall u_i \in u_i.    (11)

In addition, we also need to work with L^2 Hilbert spaces in order to apply the convergence results for nonlinear regularization problems.
The formulation of the continuous inverse problem with exact data \bar{\varphi}_i is:

    Find \varepsilon \in E_{ad} such that F_i(\varepsilon) = \bar{\varphi}_i, \quad 1 \le i \le N,    (12)

and because the F_i are nonlinear, problem (12) does not necessarily have a solution and is ill-conditioned in the sense of Hadamard. Instead, a least squares problem is always considered:

    J(\varepsilon) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |F_i(\varepsilon) - \bar{\varphi}_i|^2 \, dS \;\to\; \min, \quad \varepsilon \in E_{ad},    (13)

and, again by nonlinearity, problem (13) does not necessarily have a unique solution.
With inexact data, the regularized continuous problems can be formulated as follows:

    P^{\alpha,\delta}: \quad J^{\alpha,\delta}(\varepsilon) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |F_i(\varepsilon) - \varphi_i^{\delta}|^2 \, dS + \frac{\alpha}{2} \|\varepsilon - \varepsilon_0\|^2_{L^2(\Omega)} \;\to\; \min, \quad \varepsilon \in E_{ad}.    (14)

Theorem 1: The functions F_i(\varepsilon), 1 \le i \le N, are continuous with respect to the L^2 norms and are weakly closed, i.e. if \varepsilon_n \rightharpoonup \varepsilon, y_n \rightharpoonup y, y_n = F_i(\varepsilon_n), then F_i(\varepsilon) = y.

Moreover, if \alpha = \alpha(\delta) is chosen in such a way that:

    \alpha(\delta) \to 0 \quad \text{and} \quad \frac{\delta^2}{\alpha(\delta)} \to 0 \quad \text{when } \delta \to 0,    (15)

then problem (14) has a (not necessarily unique) solution, and every set of solutions \{\varepsilon^{\alpha,\delta}\} has a convergent sequence \{\varepsilon^{\alpha_k,\delta_k}\} when \delta_k \to 0, with \alpha_k := \alpha(\delta_k). The limit of any convergent sequence of the set \{\varepsilon^{\alpha,\delta}\} is a solution of problem (13) and furthermore, if (13) has a unique solution \varepsilon^*, then \lim_{\delta \to 0} \varepsilon^{\alpha(\delta),\delta} = \varepsilon^*.
Proof: The result is a consequence of Theorem 10.3 in [10]. Norm continuity of the operators F_i is known (see [9]), and it remains to show the weak closedness of the F_i. We use the generic notation F = F_i and prove that F is in fact weakly continuous.
First, note that \varepsilon_1, \varepsilon_2 \in E_{ad} \implies |\varepsilon_1 - \varepsilon_2| \le M - K. Let \{\varepsilon_n\} \subset E_{ad} be such that:

    \varepsilon_n \rightharpoonup \varepsilon \iff \int_{\Omega} \varepsilon_n v \, dx \to \int_{\Omega} \varepsilon v \, dx, \quad \forall v \in L^2(\Omega);    (16)

we must prove that:

    F(\varepsilon_n) \rightharpoonup F(\varepsilon) \iff \int_{\Gamma} u_n|_{\Gamma} \, w \, dS \to \int_{\Gamma} u|_{\Gamma} \, w \, dS, \quad \forall w \in L^2(\Gamma),    (17)

where u_n, u \in V are the solutions of the variational problems:

    \int_{\Omega} \varepsilon \nabla u \cdot \nabla v \, dx = \int_{\Gamma} \psi v \, ds, \qquad \int_{\Omega} \varepsilon_n \nabla u_n \cdot \nabla v \, dx = \int_{\Gamma} \psi v \, ds, \quad \forall v \in V.    (18)

In fact, we shall prove norm convergence in L^2(\Gamma):

    \|F(\varepsilon_n) - F(\varepsilon)\|_{L^2(\Gamma)} \to 0 \quad \text{as } n \to \infty,    (19)

which means that F is strongly continuous.


Recalling that V = H^1(\Omega)/\mathbb{R} is a Hilbert space with scalar product

    \langle w, v \rangle_V = \int_{\Omega} \nabla w \cdot \nabla v \, dx

and norm \|v\|_V = \|\nabla v\|_{L^2(\Omega)}, we apply the trace theorem [20] to obtain:

    \|F(\varepsilon_n) - F(\varepsilon)\|_{L^2(\Gamma)} = \|u_n|_{\Gamma} - u|_{\Gamma}\|_{L^2(\Gamma)} \le C(\Gamma) \|\nabla u_n - \nabla u\|_{L^2(\Omega)}.    (20)


From (18) it follows that:

    \int_{\Omega} (\varepsilon_n \nabla u_n - \varepsilon \nabla u) \cdot \nabla v \, dx = 0,

and (u_n - u) is the solution of the variational problem:

    \int_{\Omega} \varepsilon_n \nabla (u_n - u) \cdot \nabla v \, dx = \int_{\Omega} (\varepsilon - \varepsilon_n) \nabla u \cdot \nabla v \, dx, \quad \forall v \in V, \ \forall n \in \mathbb{N},    (21)

with the linear functional G_n : V \to \mathbb{R} defined by:

    G_n(v) = \int_{\Omega} (\varepsilon - \varepsilon_n) \nabla u \cdot \nabla v \, dx.    (22)

E_{ad} is a closed convex set, hence weakly closed, and \varepsilon_n, \varepsilon \in E_{ad}. Therefore G_n is continuous for all n, as a consequence of the inequalities:

    |G_n(v)| \le \int_{\Omega} |\varepsilon_n - \varepsilon| \, |\nabla u \cdot \nabla v| \, dx \le (M - K) \|\nabla u\|_{L^2} \|\nabla v\|_{L^2}, \quad \forall v \in V, \ \forall n \in \mathbb{N}.

For each n \in \mathbb{N}, the bilinear form

    a_n(w, v) = \int_{\Omega} \varepsilon_n \nabla w \cdot \nabla v \, dx

is coercive with a coercivity constant \gamma > 0 that can be chosen independent of n, since \varepsilon_n is uniformly bounded from below. Then we can apply the Lax-Milgram theorem [24] to (21), obtaining:

    \|\nabla (u_n - u)\|_{L^2(\Omega)} \le \frac{1}{\gamma} \|G_n\|_{V'},    (23)

and from (20) the inequality:

    \|F(\varepsilon_n) - F(\varepsilon)\|_{L^2(\Gamma)} = \|u_n|_{\Gamma} - u|_{\Gamma}\|_{L^2(\Gamma)} \le \frac{C(\Gamma)}{\gamma} \|G_n\|_{V'}.    (24)

By Riesz's representation theorem, there exists g_n \in V such that:

    G_n(v) = \langle g_n, v \rangle_V = \int_{\Omega} \nabla g_n \cdot \nabla v \, dx, \quad \text{with } \|G_n\|_{V'} = \|g_n\|_V,

but from (22) it follows that \nabla g_n = (\varepsilon - \varepsilon_n) \nabla u a.e., and therefore:

    \|G_n\|^2_{V'} = \|g_n\|^2_V = \langle g_n, g_n \rangle_V = \int_{\Omega} |\nabla g_n|^2 \, dx = \int_{\Omega} |\varepsilon_n - \varepsilon|^2 |\nabla u|^2 \, dx.    (25)

Defining:

    \Omega^+ = \bigcup_{n=1}^{\infty} \{x \in \Omega : \varepsilon_n - \varepsilon \ge 0\}, \qquad \Omega^- = \bigcup_{n=1}^{\infty} \{x \in \Omega : \varepsilon_n - \varepsilon < 0\},

we have the estimate:

    \int_{\Omega} |\varepsilon_n - \varepsilon|^2 |\nabla u|^2 \, dx \le \int_{\Omega^+} (\varepsilon_n - \varepsilon)(\varepsilon_n - \varepsilon) |\nabla u|^2 \, dx + \int_{\Omega^-} (\varepsilon - \varepsilon_n)(\varepsilon - \varepsilon_n) |\nabla u|^2 \, dx
    \le (M - K) \left\{ \int_{\Omega^+} (\varepsilon_n - \varepsilon) |\nabla u|^2 \, dx + \int_{\Omega^-} (\varepsilon - \varepsilon_n) |\nabla u|^2 \, dx \right\}

    \implies \int_{\Omega} |\varepsilon_n - \varepsilon|^2 |\nabla u|^2 \, dx \le (M - K) \left\{ \int_{\Omega} (\varepsilon_n - \varepsilon) |\nabla u|^2 \chi_{\Omega^+} \, dx + \int_{\Omega} (\varepsilon - \varepsilon_n) |\nabla u|^2 \chi_{\Omega^-} \, dx \right\},    (26)

where \chi_{\Omega^{\pm}} denote the indicator functions of the measurable sets \Omega^{\pm}.
Since L^2(\Omega) is a dense subset of L^1(\Omega), we have:

    \varepsilon_n \rightharpoonup \varepsilon \ \text{in } L^2(\Omega) \implies \int_{\Omega} (\varepsilon_n - \varepsilon) v \, dx \to 0, \quad \forall v \in L^1(\Omega).

In fact, if v \in L^1(\Omega) is arbitrary, for any \eta > 0 there exists v_\eta \in L^2(\Omega) such that:

    \|v - v_\eta\|_{L^1(\Omega)} < \frac{\eta}{2(M - K)}.

For N = N(v, \eta) large enough, we have for n > N:

    \left| \int_{\Omega} (\varepsilon_n - \varepsilon) v_\eta \, dx \right| < \frac{\eta}{2},

and then:

    \left| \int_{\Omega} (\varepsilon_n - \varepsilon) v \, dx \right| \le \int_{\Omega} |\varepsilon_n - \varepsilon| \, |v - v_\eta| \, dx + \left| \int_{\Omega} (\varepsilon_n - \varepsilon) v_\eta \, dx \right|
    \le (M - K) \int_{\Omega} |v - v_\eta| \, dx + \left| \int_{\Omega} (\varepsilon_n - \varepsilon) v_\eta \, dx \right| < \frac{\eta}{2} + \frac{\eta}{2} = \eta.

Now, taking into account that the functions |\nabla u|^2 \chi_{\Omega^{\pm}} \in L^1(\Omega), we conclude:

    \int_{\Omega} (\varepsilon_n - \varepsilon) |\nabla u|^2 \chi_{\Omega^+} \, dx + \int_{\Omega} (\varepsilon - \varepsilon_n) |\nabla u|^2 \chi_{\Omega^-} \, dx \to 0, \quad \text{as } n \to \infty,

and from (25), (26):

    \|G_n\|^2_{V'} = \int_{\Omega} |\varepsilon_n - \varepsilon|^2 |\nabla u|^2 \, dx \to 0, \quad \text{as } n \to \infty.

Finally, from (24) we have:

    \|F(\varepsilon_n) - F(\varepsilon)\|_{L^2(\Gamma)} = \|u_n|_{\Gamma} - u|_{\Gamma}\|_{L^2(\Gamma)} \to 0, \quad \text{as } n \to \infty;

therefore F is strongly continuous and hence weakly closed.


To be precise, the convergence in Theorem 1 is minimal-norm convergence, i.e. it is convergence to an \varepsilon^* satisfying

    \|\varepsilon^* - \varepsilon_0\| = \min \{\|\varepsilon - \varepsilon_0\| : \varepsilon \ \text{is a solution of (13)}\}.

We do not examine the uniqueness of problem (13), and in what follows we assume that (13) has a unique solution \varepsilon^*. Therefore, if \{\varepsilon^{\alpha(\delta),\delta}\}_{\delta > 0} is a set of optimal solutions of problem (14) with \alpha(\delta) satisfying (15), and if \varepsilon^{\alpha(\delta),\delta} \to \varepsilon^* in L^2(\Omega) as \delta \to 0, then \varepsilon^* is the solution of the least squares problem with exact data.
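As a simple illustration of the parameter choice rule (15) (the example and the constant c are ours, not taken from [10]), any a priori choice proportional to the noise level works:

```latex
% A priori choice \alpha(\delta) = c\,\delta, with a constant c > 0:
\alpha(\delta) = c\,\delta \;\xrightarrow[\delta\to 0]{}\; 0,
\qquad
\frac{\delta^{2}}{\alpha(\delta)} = \frac{\delta}{c} \;\xrightarrow[\delta\to 0]{}\; 0 .
% More generally, \alpha(\delta) = c\,\delta^{p} satisfies (15) for any 0 < p < 2.
```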

3 Regularization of the discretized problem
We consider the discretized versions of the regularized continuous problems (3) or (14), given by:

    J_h^{\alpha_h,\delta}(\varepsilon_h) = J_h^{\delta}(\varepsilon_h) + \frac{\alpha_h}{2} \|\varepsilon_h - \varepsilon_{0h}\|^2_{\Pi_h} \;\to\; \min,    (27)

where h > 0 is the norm of a regular mesh defined by a triangulation \tau_h of \Omega with NT_h triangles:

    \tau_h = \{T_1, T_2, \dots, T_{NT_h}\}, \quad \mathrm{int}\,T_i \cap \mathrm{int}\,T_j = \emptyset, \ i \ne j, \quad \bigcup_{j=1}^{NT_h} T_j = \overline{\Omega},

corresponding to the application of the finite element method for the numerical solution of the Neumann problem (5). Here \alpha_h is the regularization parameter, \|\cdot\|_{\Pi_h} is the norm in \mathbb{R}^{NT_h} defined by:

    \|\varepsilon_h - \varepsilon_{0h}\|^2_{\Pi_h} = (\varepsilon_h - \varepsilon_{0h})^t \, \Pi_h \, (\varepsilon_h - \varepsilon_{0h}), \quad \text{where } \Pi_h = \mathrm{diag}(|T_1|, |T_2|, \dots, |T_{NT_h}|),

and \varepsilon_h, \varepsilon_{0h} \in \mathbb{R}^{NT_h} are the vectors representing the piecewise constant discretization, with respect to the triangular mesh, of the functions \varepsilon and \varepsilon_0.

We denote by \varepsilon_h(\cdot) the canonical extension of the vector \varepsilon_h to a function of L^\infty(\Omega), defined a.e. by:

    \varepsilon_h(x, y) = (\varepsilon_h)_k, \quad \forall (x, y) \in T_k, \ \forall T_k \in \tau_h, \ k = 1, 2, \dots, NT_h,

and analogously for \varepsilon_{0h}. We denote by \mathbb{R}^{NT_h}(\cdot) the set of all functions which are canonical extensions of vectors in \mathbb{R}^{NT_h}, for each fixed triangulation \tau_h.
We also assume that \varepsilon_0 \in L^2(\Omega) is selected in such a way that:

    \|\varepsilon_0 - \varepsilon_{0h}(\cdot)\|_{L^2(\Omega)} \to 0 \quad \text{when } h \to 0.
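Numerically, the weighted norm used in (27) is nothing but the L^2(\Omega) norm of the canonical extensions, computed triangle by triangle, since both extensions are constant on each T_k. A minimal sketch (the toy mesh data and function names are ours):

```python
# Sketch (ours): the weighted inner product <Pi_h u, v>, with
# Pi_h = diag(|T_1|, ..., |T_NTh|), equals the integral over Omega of the
# product of the canonical piecewise-constant extensions u_h(.), v_h(.).

def weighted_inner(u, v, areas):
    """<Pi_h u, v> = sum_k |T_k| u_k v_k."""
    return sum(a * x * y for x, y, a in zip(u, v, areas))

def integral_of_extensions(u, v, areas):
    """Integral of u_h(.) v_h(.) over Omega: on each T_k the integrand is
    the constant u_k v_k, so the integral is sum_k u_k v_k |T_k|."""
    return sum(x * y * a for x, y, a in zip(u, v, areas))

areas = [0.2, 0.3, 0.1, 0.4]   # |T_k| for a toy 4-triangle mesh (ours)
diff  = [0.0, 1.0, 0.5, -0.5]  # components of eps_h - eps0_h
eta   = [1.0, -1.0, 2.0, 0.5]  # an arbitrary direction eta_h

assert abs(weighted_inner(diff, eta, areas)
           - integral_of_extensions(diff, eta, areas)) < 1e-12
```

Taking eta_h = eps_h - eps0_h recovers the weighted norm in (27) as the squared L^2(\Omega) norm of the extension of the difference.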

The functional J_h^{\delta}(\varepsilon_h) is given by:

    J_h^{\delta}(\varepsilon_h) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |u_{ih} - \varphi_{ih}^{\delta}|^2 \, dS, \quad \varepsilon_h \in \mathbb{R}^{NT_h},    (28)

where u_{ih} is the solution of the discrete variational equation:

    \int_{\Omega} \varepsilon_h(\cdot) \nabla u_{ih} \cdot \nabla v_h \, dx = \int_{\Gamma} \psi_i v_h \, dS, \quad \forall v_h \in V_h, \qquad u_{ih} \in V_h, \ 1 \le i \le N.    (29)
As usual in the finite element approach, equation (29) is equivalent to the solution of a system of linear equations Az = b; its solution vector z contains the coefficients of a linear combination of a basis \Phi = \{\phi_1, \dots, \phi_{N_h}\}, N_h being the number of nodes in the triangulation \tau_h, which approximates the continuous solution through the equality:

    u(x) \approx u_{ih}(x) = \sum_{j=1}^{N_h} z_j \phi_j(x), \quad x \in \Omega, \qquad z_j = u_{ih}(x_j), \ x_j \ \text{a node of } \tau_h, \ j = 1, \dots, N_h.    (30)

V_h is the subspace of V defined by:

    V_h = \{v_h \in C(\overline{\Omega}) \cap V : v_h|_T \in P_1(T), \ \forall T \in \tau_h\},    (31)

and it is considered as the space of solutions of the discretized problem, where P_1(T) is the set of polynomials of degree less than or equal to one defined on T.
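For a conductivity that is constant on each triangle, the contribution of one triangle T to the matrix A can be computed in closed form from the gradients of the P1 shape functions. A minimal sketch of this standard computation (ours, not the authors' code; all names are illustrative):

```python
# Sketch (ours, standard P1 FEM): local stiffness matrix of one triangle T
# for the bilinear form  a_T(w, v) = eps_T * integral_T grad(w).grad(v) dx,
# with a constant conductivity eps_T and linear (P1) shape functions.

def local_stiffness(p1, p2, p3, eps_T):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # signed doubled area of the triangle
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    area = abs(det) / 2.0
    # grad of the barycentric shape function l_i is (b_i, c_i) / det
    b = [y2 - y3, y3 - y1, y1 - y2]
    c = [x3 - x2, x1 - x3, x2 - x1]
    return [[eps_T * (b[i] * b[j] + c[i] * c[j]) / (4.0 * area)
             for j in range(3)] for i in range(3)]

K = local_stiffness((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), eps_T=2.0)
# constant functions lie in the kernel: every row of K sums to zero, which
# is why the Neumann problem is posed on the quotient space V = H^1/R
for row in K:
    assert abs(sum(row)) < 1e-12
```

The global matrix A is then assembled by summing these local blocks over all triangles of \tau_h.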
The functions \varphi_i^{\delta}, \psi_i in (12), (5) are supposed to be affine functions on \Gamma. As a consequence:

    \varphi_{ih}^{\delta}(x) = \sum_{i=1}^{N_{\Gamma h}} \varphi_i^{\delta}(x_i) \phi_i(x),

where N_{\Gamma h} is the number of nodes x_i on \Gamma and the \phi_i are the basis functions corresponding to those nodes. As we did for \varepsilon_h \in \mathbb{R}^{NT_h}, we can identify the \varphi_{ih}^{\delta} with vectors in \mathbb{R}^{N_{\Gamma h}}, and likewise for \bar{\varphi}_{ih}:

    \varphi_{ih}^{\delta} = (\varphi_i^{\delta}(x_i))_{1 \le i \le N_{\Gamma h}}.

The gradient of J_h^{\alpha_h,\delta} is given by:

    \nabla J_h^{\alpha_h,\delta}(\varepsilon_h) = \nabla J_h^{\delta}(\varepsilon_h) + \alpha_h \Pi_h (\varepsilon_h - \varepsilon_{0h}) \in \mathbb{R}^{NT_h},    (32)

and it is easily seen that we have the equalities:

    \langle \Pi_h (\varepsilon_h - \varepsilon_{0h}), \eta_h \rangle_{\mathbb{R}^{NT_h}} = \sum_{k=1}^{NT_h} (\varepsilon_{hk} - \varepsilon_{0hk}) |T_k| \eta_{hk} = \int_{\Omega} (\varepsilon_h(\cdot) - \varepsilon_{0h}(\cdot)) \eta_h(\cdot) \, dx,    (33)

for all \eta_h \in \mathbb{R}^{NT_h}.


Using (32) and (33), the canonical extension \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot) of the gradient vector \nabla J_h^{\alpha_h,\delta}(\varepsilon_h) can be seen as a linear functional over L^\infty(\Omega), which we denote \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\cdot], through the formula:

    \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\eta] = \int_{\Omega} \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(x) \, \eta(x) \, dx, \quad \forall \eta \in L^\infty(\Omega).

We shall identify this linear functional with the L^1(\Omega) function:

    \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot) = \nabla J_h^{\delta}(\varepsilon_h)(\cdot) + \alpha_h (\varepsilon_h(\cdot) - \varepsilon_{0h}(\cdot)),    (34)

where the expression for the first term in (34) was obtained in [7]:

    \nabla J_h^{\delta}(\varepsilon_h)[\cdot] = -\sum_{i=1}^{N} \nabla u_{ih} \cdot \nabla p_{ih}.    (35)

Here u_{ih}, p_{ih} \in V_h are the respective solutions of the discrete variational equations:

    \int_{\Omega} \varepsilon_h(\cdot) \nabla u_{ih} \cdot \nabla v_h \, dx = \int_{\Gamma} \psi_i v_h \, ds,
    \int_{\Omega} \varepsilon_h(\cdot) \nabla p_{ih} \cdot \nabla v_h \, dx = \int_{\Gamma} (\varphi_{ih}^{\delta} - u_{ih}) v_h \, ds, \quad \forall v_h \in V_h,    (36)

for all i = 1, 2, \dots, N.
As in the continuous case, and for each fixed mesh \tau_h, we define the operators F_{ih} : \mathbb{R}^{NT_h} \to \mathbb{R}^{N_{\Gamma h}} as F_{ih}(\varepsilon_h) = u_{ih}|_{\Gamma}, which associate a parameter vector \varepsilon_h with its corresponding discrete solution, satisfying (30), restricted to \Gamma. Then the discrete problem with exact data can be formulated as follows:

    Find \varepsilon_h \in E_{ad} such that F_{ih}(\varepsilon_h) = \bar{\varphi}_{ih}, \quad 1 \le i \le N,    (37)
and its regularized discrete problem as:

    J_h^{\alpha_h,\delta}(\varepsilon_h) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |F_{ih}(\varepsilon_h) - \varphi_{ih}^{\delta}|^2 \, dS + \frac{\alpha_h}{2} \|\varepsilon_h - \varepsilon_{0h}\|^2_{\Pi_h} \;\to\; \min, \quad \varepsilon_h \in E_{ad}.    (38)

4 Convergence of Gradients
Theorem 2: Suppose \{\varepsilon_h\}_{h>0} \subset \mathbb{R}^{NT_h}(\cdot) converges to \varepsilon in L^\infty(\Omega) as h \to 0, and the parameters \alpha, \alpha_h are chosen such that:

    \alpha(\delta) = \lim_{h \to 0} \alpha_h(\delta), \quad \forall \delta > 0.    (39)

Then we have:

    \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\cdot] \;\to\; \nabla J^{\alpha,\delta}(\varepsilon) \ \text{in } L^1(\Omega) \ \text{as } h \to 0, \quad \forall \delta > 0.

Moreover, if \alpha_h(\delta) is a regularization parameter of the discrete problem (27) satisfying (15), and

    \lim_{(h,\delta) \to (0,0)} \frac{\delta^2}{\alpha_h(\delta)} = 0,    (40)

then \alpha(\delta) also satisfies (15), and it is a regularization parameter for the continuous problem (14). Conversely, if \alpha(\delta) is a regularization parameter for (14) satisfying (15), then \alpha_h(\delta) is a regularization parameter for (27), provided that (39) is fulfilled uniformly for \delta \in (0, \delta_0), with \delta_0 > 0.
Remark: Note that if \{\varepsilon_h^{\alpha_h,\delta}\}_{h>0} is a set of optimal solutions of the regularized discrete problem (38) which converges, as h \to 0, to \varepsilon^{\alpha,\delta} in L^\infty(\Omega), then, using the convexity of E_{ad}, we have:

    \nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\eta_h - \varepsilon_h^{\alpha_h,\delta}] \ge 0 \implies \nabla J^{\alpha,\delta}(\varepsilon)[\eta - \varepsilon^{\alpha,\delta}] \ge 0, \quad \forall \eta \in E_{ad},

and \varepsilon^{\alpha,\delta} satisfies the first order optimality conditions for the regularized continuous problem (14).
Proof: From (7), (34) and the triangle inequality we have:

    \|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\cdot] - \nabla J^{\alpha,\delta}(\varepsilon)\|_{L^1(\Omega)} \le \|\nabla J_h^{\delta}(\varepsilon_h) - \nabla J^{\delta}(\varepsilon)\|_{L^1(\Omega)} + \|\alpha(\varepsilon - \varepsilon_0) - \alpha_h(\varepsilon_h(\cdot) - \varepsilon_{0h})\|_{L^1(\Omega)}.    (41)

Using Lemma 5.1 of [7], for fixed \delta > 0, we obtain:

    \|\nabla J_h^{\delta}(\varepsilon_h) - \nabla J^{\delta}(\varepsilon)\|_{L^1(\Omega)} \to 0 \quad \text{as } h \to 0.

In addition, by the triangle and Hölder inequalities we have:

    \|\alpha(\varepsilon - \varepsilon_0) - \alpha_h(\varepsilon_h(\cdot) - \varepsilon_{0h}(\cdot))\|_{L^1}
    \le \|\alpha \varepsilon - \alpha_h \varepsilon_h(\cdot)\|_{L^1} + \|\alpha \varepsilon_0 - \alpha_h \varepsilon_{0h}(\cdot)\|_{L^1}
    \le \|(\alpha - \alpha_h)\varepsilon\|_{L^1} + \|\alpha_h(\varepsilon - \varepsilon_h(\cdot))\|_{L^1} + \|(\alpha - \alpha_h)\varepsilon_0\|_{L^1} + \|\alpha_h(\varepsilon_0 - \varepsilon_{0h}(\cdot))\|_{L^1}
    \le |\alpha - \alpha_h| \, \|\varepsilon\|_{L^2} + \alpha_h \|\varepsilon - \varepsilon_h(\cdot)\|_{L^2} + |\alpha - \alpha_h| \, \|\varepsilon_0\|_{L^2} + \alpha_h \|\varepsilon_0 - \varepsilon_{0h}(\cdot)\|_{L^2},

and then:

    \|\alpha(\varepsilon - \varepsilon_0) - \alpha_h(\varepsilon_h(\cdot) - \varepsilon_{0h}(\cdot))\|_{L^1(\Omega)} \le |\alpha - \alpha_h| \left[ \|\varepsilon\|_{L^2(\Omega)} + \|\varepsilon_0\|_{L^2(\Omega)} \right] + \alpha_h \left[ \|\varepsilon - \varepsilon_h(\cdot)\|_{L^2(\Omega)} + \|\varepsilon_0 - \varepsilon_{0h}(\cdot)\|_{L^2(\Omega)} \right].    (42)

By (39) we obtain:

    \|\alpha(\varepsilon - \varepsilon_0) - \alpha_h(\varepsilon_h(\cdot) - \varepsilon_{0h}(\cdot))\|_{L^1} \to 0 \quad \text{as } h \to 0,

and hence the convergence of the gradients.


Furthermore, it is easy to see that we can apply Theorem 1 to the finite dimensional discretized problem (27), using the Tikhonov method (38), because each F_{ih}(\varepsilon_h) is continuous on \mathbb{R}^{NT_h} and therefore weakly closed. The selection of \alpha, \alpha_h(\delta) satisfying (15) and (40) gives a regularization parameter for each (discrete and continuous) problem.
In fact, if \alpha_h(\delta) satisfies (15), (40), we have:

    \lim_{\delta \to 0} \alpha(\delta) = \lim_{\delta \to 0} \lim_{h \to 0} \alpha_h(\delta) = \lim_{h \to 0} \lim_{\delta \to 0} \alpha_h(\delta) = 0,
    \lim_{\delta \to 0} \frac{\delta^2}{\alpha(\delta)} = \lim_{\delta \to 0} \lim_{h \to 0} \frac{\delta^2}{\alpha_h(\delta)} = \lim_{h \to 0} \lim_{\delta \to 0} \frac{\delta^2}{\alpha_h(\delta)} = 0;

then the assumptions of Theorem 1 are fulfilled and \alpha(\delta) is a regularization parameter for (14).
On the other hand, if \alpha(\delta) satisfies (15) and there exists \delta_0 > 0 such that (39) is satisfied uniformly for \delta \in (0, \delta_0), then necessarily \alpha_h(\delta) satisfies (15) for h small enough. If this were not so, there would exist \beta > 0 and sequences h_k, \delta_k \to 0 such that \alpha_{h_k}(\delta_k) \ge \beta for all k. By assumption, there exists h_0 > 0 such that:

    h \in (0, h_0) \implies |\alpha_h(\delta) - \alpha(\delta)| < \frac{\beta}{2}, \quad \text{for all } \delta \in (0, \delta_0);

then h_k \in (0, h_0) and \delta_k \in (0, \delta_0) for all k \ge k_0, for some k_0 \in \mathbb{N}, and therefore:

    \alpha(\delta_k) > \alpha_{h_k}(\delta_k) - \frac{\beta}{2}, \quad \text{for all } k \ge k_0.

But this implies:

    \alpha(\delta_k) \ge \frac{\beta}{2}, \quad \text{for all } k \ge k_0,

contradicting (15) for \alpha(\delta).
2
Analogously, it can be shown that \lim_{\delta \to 0} \delta^2/\alpha_h(\delta) = 0 for h small enough. Otherwise we obtain, as before:

    \frac{\delta_k^2}{\alpha(\delta_k)} > \frac{\delta_k^2}{\alpha_{h_k}(\delta_k) + \beta/2} > \frac{\delta_k^2}{2\,\alpha_{h_k}(\delta_k)}, \quad \forall k \ge k_0,

for sequences h_k, \delta_k \to 0; then:

    \frac{\delta_k^2}{\alpha_{h_k}(\delta_k)} \ge \beta, \ \forall k \implies \frac{\delta_k^2}{\alpha(\delta_k)} > \frac{\beta}{2}, \ \forall k \ge k_0,

and this contradicts (15) for \alpha(\delta). \square
Theorem 3: Let \{\varepsilon_h^{\alpha_h,\delta}\} be a set of solutions of the regularized discrete problems (27), where h \in (0, \delta). Suppose \alpha, \alpha_h(\delta) are chosen satisfying (39), (40), and \varepsilon_h^{\alpha_h(\delta),\delta} \to \bar{\varepsilon} in L^\infty(\Omega) as \delta \to 0. Then:

    \|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)} \to 0, \quad \text{as } \delta \to 0.

Moreover we have:

    \nabla J(\bar{\varepsilon})[\eta - \bar{\varepsilon}] \ge 0, \quad \forall \eta \in E_{ad},

and therefore every accumulation point \bar{\varepsilon} of \{\varepsilon_h^{\alpha_h,\delta}\}, for \delta \to 0, satisfies the optimality conditions for the continuous problem (13), and \bar{\varepsilon} = \varepsilon^* in the case of a unique solution \varepsilon^*.
Proof: By the triangle inequality we have:

    \|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)} \le \|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta}) - \nabla J^{\alpha,\delta}(\bar{\varepsilon})\|_{L^1(\Omega)} + \|\nabla J^{\alpha,\delta}(\bar{\varepsilon}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)}.

By Theorem 2 we have, for the first term:

    \|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta}) - \nabla J^{\alpha,\delta}(\bar{\varepsilon})\|_{L^1(\Omega)} \to 0 \ \text{as } h \to 0, \quad \text{for each } \delta > 0,

and the result follows if the second term tends to 0 as \delta \to 0.


In fact, we know the gradient formulas for these continuous functionals:

    \nabla J(\bar{\varepsilon}) = -\sum_{i=1}^{N} \nabla \bar{u}_i \cdot \nabla \bar{p}_i,    (43)

    \nabla J^{\alpha(\delta),\delta}(\bar{\varepsilon}) = -\sum_{i=1}^{N} \nabla \bar{u}_i \cdot \nabla p_i^{\delta} + \alpha(\delta)(\bar{\varepsilon} - \varepsilon_0),    (44)

where \bar{u}_i, \bar{p}_i, p_i^{\delta} \in V are, respectively, the solutions of the following variational problems, for 1 \le i \le N:

    \int_{\Omega} \bar{\varepsilon} \nabla \bar{u}_i \cdot \nabla v \, dx = \int_{\Gamma} \psi_i v \, dS, \quad \forall v \in V,    (45)

    \int_{\Omega} \bar{\varepsilon} \nabla \bar{p}_i \cdot \nabla v \, dx = \int_{\Gamma} (\bar{\varphi}_i - \bar{u}_i) v \, dS, \quad \forall v \in V,    (46)

    \int_{\Omega} \bar{\varepsilon} \nabla p_i^{\delta} \cdot \nabla v \, dx = \int_{\Gamma} (\varphi_i^{\delta} - \bar{u}_i) v \, dS, \quad \forall v \in V,    (47)

where \bar{\varepsilon} \in L^\infty(\Omega), \bar{\varepsilon} \ge K > 0 a.e., \varphi_i^{\delta} \in H^{1/2}(\Gamma) and \psi_i \in H^{-1/2}(\Gamma).
Since the bilinear form associated with the three variational problems (45), (46) and (47) is the same continuous and coercive bilinear form

    a(w, v) = \int_{\Omega} \bar{\varepsilon} \nabla w \cdot \nabla v \, dx,

using (43), (44) we have:

    \|\nabla J^{\alpha(\delta),\delta}(\bar{\varepsilon}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)} \le \left\| \sum_{i=1}^{N} \nabla \bar{u}_i \cdot \nabla \bar{p}_i - \sum_{i=1}^{N} \nabla \bar{u}_i \cdot \nabla p_i^{\delta} \right\|_{L^1(\Omega)} + \alpha(\delta) \|\bar{\varepsilon} - \varepsilon_0\|_{L^1(\Omega)}
    \le \sum_{i=1}^{N} \|\nabla \bar{u}_i \cdot \nabla (p_i^{\delta} - \bar{p}_i)\|_{L^1(\Omega)} + \alpha(\delta) \|\bar{\varepsilon} - \varepsilon_0\|_{L^1(\Omega)},

and with the Hölder inequality:

    \|\nabla J^{\alpha(\delta),\delta}(\bar{\varepsilon}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)} \le \sum_{i=1}^{N} \|\nabla \bar{u}_i\|_{L^2(\Omega)} \|\nabla (p_i^{\delta} - \bar{p}_i)\|_{L^2(\Omega)} + \alpha(\delta) \|\bar{\varepsilon} - \varepsilon_0\|_{L^1(\Omega)}.    (48)
Using the Lax-Milgram theorem in (45), we can write:

    \|\nabla \bar{u}_i\|_{L^2(\Omega)} \le \frac{C(\Gamma)}{\gamma} \|\psi_i\|_{H^{-1/2}(\Gamma)},    (49)

where C(\Gamma) > 0 is the constant associated with the trace operator and \gamma > 0 is the coercivity constant of a(w, v). In addition, from (46), (47) we obtain that (p_i^{\delta} - \bar{p}_i) is the solution of the variational problem:

    \int_{\Omega} \bar{\varepsilon} \nabla (p_i^{\delta} - \bar{p}_i) \cdot \nabla v \, dx = \int_{\Gamma} (\varphi_i^{\delta} - \bar{\varphi}_i) v \, ds, \quad \forall v \in V,

and using Lax-Milgram again:

    \|\nabla (p_i^{\delta} - \bar{p}_i)\|_{L^2(\Omega)} \le \frac{C(\Gamma)}{\gamma} \|\varphi_i^{\delta} - \bar{\varphi}_i\|_{H^{-1/2}(\Gamma)}.

But we have:

    \|\varphi_i^{\delta} - \bar{\varphi}_i\|_{H^{-1/2}(\Gamma)} \le \|\varphi_i^{\delta} - \bar{\varphi}_i\|_{H^{1/2}(\Gamma)} \le \delta,

therefore:

    \|\nabla p_i^{\delta} - \nabla \bar{p}_i\|_{L^2(\Omega)} \le \frac{C(\Gamma)}{\gamma} \delta.    (50)

From (48), (49) and (50) we obtain:

    \|\nabla J^{\alpha(\delta),\delta}(\bar{\varepsilon}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)} \le \frac{C^2(\Gamma)}{\gamma^2} \, \delta \left[ \sum_{i=1}^{N} \|\psi_i\|_{H^{-1/2}(\Gamma)} \right] + \alpha(\delta) \|\bar{\varepsilon} - \varepsilon_0\|_{L^1(\Omega)},    (51)

and finally:

    \|\nabla J^{\alpha(\delta),\delta}(\bar{\varepsilon}) - \nabla J(\bar{\varepsilon})\|_{L^1(\Omega)} \to 0 \quad \text{as } \delta \to 0.

The solutions \varepsilon_h^{\alpha_h,\delta} satisfy the first order optimality conditions for problem (27), and since E_{ad} is a closed convex set, we have:

    \nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta})[\eta - \varepsilon_h^{\alpha_h,\delta}] \ge 0, \quad \forall \eta \in E_{ad}.

Then, taking the limit for \delta \to 0, we obtain:

    \nabla J(\bar{\varepsilon})[\eta - \bar{\varepsilon}] \ge 0, \quad \forall \eta \in E_{ad},

and \bar{\varepsilon} satisfies the first order optimality conditions for problem (13). \square
Remark: As a consequence of Theorem 3, the algorithm given in [7] can be applied to the regularized continuous problem (14). If \alpha, \alpha_h(\delta) are chosen to satisfy (15), (39), (40), and taking \alpha_h \equiv \alpha, the algorithm in [7], applied to problem (14), can be interpreted as a regularization method for the continuous problem (13), working through the regularization of successive discrete problems (27).
Remark: Note that the a priori selection of the parameters \alpha, \alpha_h remains unsolved, and the application of Theorem 10.3 in [10] demands the use of an unknown vector w. An a posteriori selection of the regularization parameters seems preferable. Therefore, in the next section we examine the application of the commonly used Morozov discrepancy method.

5 About the a posteriori selection of the regularization parameter
To simplify matters, in this section we shall consider F_i = F, i.e. either only one datum \varphi^{\delta} is given, or the vector notation F = (F_1, F_2, \dots, F_N) is used.
Following the approach given in [23], [22], for a given c_1 > 1 it is necessary to find \alpha = \alpha(\delta, \varphi^{\delta}) > 0 such that there exists a solution \varepsilon^{\alpha,\delta} of problem P^{\alpha,\delta} in (14) satisfying:

    \delta \le \|\varphi^{\delta} - F(\varepsilon^{\alpha,\delta})\|_{L^2(\Gamma)} \le c_1 \delta.    (52)

By Theorem 2.6 in [22], the existence of such an \alpha is a consequence of the strong continuity of the operator F in (10), which we have already proved in Theorem 1.

Furthermore, if DF(\cdot)^* denotes the adjoint derivative of the operator F, and assuming that \varepsilon \mapsto DF(\varepsilon)^* is weakly/strongly continuous, i.e.

    \varepsilon_n \rightharpoonup \varepsilon \ \text{in } L^2(\Omega) \implies DF(\varepsilon_n)^* \eta \to DF(\varepsilon)^* \eta \ \text{in } L^2(\Omega), \quad \forall \eta \in L^2(\Gamma),    (53)

then, in [22] (Theorem 2.7), it is shown that the set \{\varepsilon^{\alpha_k,\delta}\} of optimal solutions of a sequence of problems P^{\alpha_k,\delta} has at least one convergent subsequence in L^2(\Omega).
Theorem 4: The operator F defined in (10) satisfies property (53) if

    \varepsilon_n, \varepsilon \in C^{k,1}(\overline{\Omega}), \quad \text{for some } k \in \mathbb{Z}, \ k > 0.    (54)

Proof: We have (see Remark 3.1 in [9]):

    DF(\varepsilon)^* \eta = -\nabla u \cdot \nabla u^{\eta} \in L^2(\Omega) \quad \text{and} \quad DF(\varepsilon_n)^* \eta = -\nabla u_n \cdot \nabla u_n^{\eta} \in L^2(\Omega),

where u, u^{\eta}, u_n, u_n^{\eta} \in V satisfy the respective variational formulations:

    i) \int_{\Omega} \varepsilon \nabla u \cdot \nabla v \, dx = \int_{\Gamma} \psi v \, ds, \qquad ii) \int_{\Omega} \varepsilon \nabla u^{\eta} \cdot \nabla v \, dx = \int_{\Gamma} \eta v \, ds,

    iii) \int_{\Omega} \varepsilon_n \nabla u_n \cdot \nabla v \, dx = \int_{\Gamma} \psi v \, ds, \qquad iv) \int_{\Omega} \varepsilon_n \nabla u_n^{\eta} \cdot \nabla v \, dx = \int_{\Gamma} \eta v \, ds,

for all v \in V.
From i) and iii), repeating steps similar to those in the proof of Theorem 1 after (20), it follows that z = (u - u_n) \in V is the unique solution of the variational problem:

    \int_{\Omega} \varepsilon_n \nabla z \cdot \nabla v \, dx = \int_{\Omega} (\varepsilon_n - \varepsilon) \nabla u \cdot \nabla v \, dx, \quad \forall v \in V,

with the same type of linear functional G_n as in (22). With the same arguments given from (23) onwards, it can be shown that:

    \nabla u_n \to \nabla u \ \text{in } L^2(\Omega), \quad n \to \infty.    (55)

Analogously, with ii) and iv), the following convergence is obtained:

    \nabla u_n^{\eta} \to \nabla u^{\eta} \ \text{in } L^2(\Omega), \quad n \to \infty.    (56)

Now, using the triangle inequality in L^2(\Omega), we have:

    \|\nabla u_n \cdot \nabla u_n^{\eta} - \nabla u \cdot \nabla u^{\eta}\|_{L^2(\Omega)} \le \|\nabla u_n \cdot (\nabla u_n^{\eta} - \nabla u^{\eta})\|_{L^2(\Omega)} + \|\nabla u^{\eta} \cdot (\nabla u_n - \nabla u)\|_{L^2(\Omega)},    (57)

and by Lemma 5.1 in [9], if \varepsilon_n, \varepsilon \in C^{k,1}(\overline{\Omega}) for some k \in \mathbb{Z}, k > 0, then \nabla u_n, \nabla u^{\eta} \in L^\infty(\Omega) and the following inequalities hold:

    \|\nabla u_n\|_{L^\infty(\Omega)} \le \bar{K} \|\nabla u_n\|_{L^2(\Omega)} \quad \text{and} \quad \|\nabla u^{\eta}\|_{L^\infty(\Omega)} \le \bar{K} \|\nabla u^{\eta}\|_{L^2(\Omega)}.

Together with:

    \|\nabla u_n\|_{L^2(\Omega)} \le C \|\psi\|_{L^2(\Gamma)} \quad \text{and} \quad \|\nabla u^{\eta}\|_{L^2(\Omega)} \le C \|\eta\|_{L^2(\Gamma)},

and with (11), (55), (56), (57), the result follows. \square


Moreover, Theorem 2.8 in [22] gives the adequate convergence result for our inverse problem:
Theorem 5 ([22]): Consider the inverse problem (12) and the corresponding perturbed problem P^{\alpha,\delta} in (14). Let \varphi^{\delta_k} denote perturbed data such that \|\varphi^{\delta_k} - \bar{\varphi}\|_{L^2(\Gamma)} \le \delta_k, with \delta_k \to 0 as k \to \infty, and let \varepsilon^{\alpha_k,\delta_k} be a solution of problem P^{\alpha_k,\delta_k} with \alpha_k selected by Morozov's principle (52). Then \{\varepsilon^{\alpha_k,\delta_k}\} has a convergent subsequence. The limit of any convergent subsequence of \{\varepsilon^{\alpha_k,\delta_k}\} is a minimum norm solution of problem (12) and furthermore, if \varepsilon^{\dagger} is unique, then \varepsilon^{\alpha_k,\delta_k} \to \varepsilon^{\dagger} as k \to \infty.
Theorem 5 assures that we can approximate the solution of the inverse problem by selecting the parameter \alpha using Morozov's principle. Nevertheless, the convergence can be arbitrarily slow, and a result with an estimate of the convergence order is needed. Theorem 2.9 in [22] guarantees convergence of order \sqrt{\delta} under regularity assumptions on \varepsilon^{\dagger}, but with a Lipschitz continuity condition on the derivative operator F'.
Unfortunately, this last condition does not hold for the operator F in (10) (see [9]). Nevertheless, when analyzing the proof of Theorem 2.9, it can be seen that the Lipschitz continuity is used to establish the uniform boundedness of the derivative F'(\varepsilon^{\alpha_k,\delta}). This property was proven in ([9], Lemma 4.2) whenever \varepsilon^{\alpha_k,\delta} \in E_{ad} and under the assumption (54).
In order to obtain the estimate, we consider a more general condition for a Fréchet differentiable operator F : X \to Y, defined on Hilbert spaces X, Y and satisfying:

    \exists L > 0 : \ \|F(x + \Delta x) - F(x) - F'(x) \Delta x\| \le L \|\Delta x\|^2, \quad \text{for } x, \ x + \Delta x \in M,    (58)

where M is a fixed norm bounded set in X.


Theorem 2.9 in [22] can be reformulated as follows:
Theorem 6: Let X, Y be Hilbert spaces and F : D \subset X \to Y a strongly continuous and Fréchet differentiable operator, with D = \mathrm{dom}(F) a convex set. Let x^{\dagger} be a minimum norm solution of F(x) = y, and y^{\delta} \in Y satisfying \|y - y^{\delta}\| \le \delta. Define the Tikhonov functional as:

    F_{\alpha}(x) = \|F(x) - y^{\delta}\|^2 + \alpha \|x - x_0\|^2,

and let us also assume the following hypotheses:

i) x_0 \in X is such that:

    \exists c_1 > 1 : \ \|y^{\delta} - F(x_0)\| > c_1 \delta;    (59)

ii) the inequality (58) holds;

iii) x^{\dagger} satisfies the regularity conditions [10]:

    \exists w \in Y \ \text{such that} \ (x^{\dagger} - x_0) = F'(x^{\dagger})^* w,    (60)
    \text{with } L \|w\| < \frac{1}{2}.    (61)

Then, if the regularization parameter \alpha is selected by Morozov's criterion:

    \delta \le \|y^{\delta} - F(x^{\alpha,\delta})\| \le c_1 \delta,    (62)

where x^{\alpha,\delta} is a minimizer of the Tikhonov functional, the following estimate is fulfilled:

    \|x^{\alpha,\delta} - x^{\dagger}\| \le \left[ \frac{2(1 + c_1)\|w\|}{1 - 2L\|w\|} \right]^{1/2} \sqrt{\delta}.    (63)
Proof: The proof is based on that given in Theorem 10.4 of [10], combined with the one given in Theorem 2.9 of [22]. In fact, condition (58) gives:

    F(x^{\alpha,\delta}) = F(x^{\dagger}) + F'(x^{\dagger})(x^{\alpha,\delta} - x^{\dagger}) + r^{\alpha,\delta},

where \|r^{\alpha,\delta}\| \le L \|x^{\alpha,\delta} - x^{\dagger}\|^2. Now, following (10.10) in [10] and using (60) and (61), we obtain:

    \|F(x^{\alpha,\delta}) - y^{\delta}\|^2 + \alpha \|x^{\alpha,\delta} - x^{\dagger}\|^2
    \le \delta^2 + 2\alpha\delta\|w\| + 2\alpha\|w\| \, \|F(x^{\alpha,\delta}) - y^{\delta}\| + 2\alpha L \|w\| \, \|x^{\alpha,\delta} - x^{\dagger}\|^2;

then:

    \alpha (1 - 2L\|w\|) \|x^{\alpha,\delta} - x^{\dagger}\|^2 \le \delta^2 - \|F(x^{\alpha,\delta}) - y^{\delta}\|^2 + 2\alpha\|w\| \left( \delta + \|F(x^{\alpha,\delta}) - y^{\delta}\| \right),

and (63) is a consequence of (62). \square


Remark: We already know that our operator F is strongly continuous and Fréchet differentiable. Assumption (59) is natural since, otherwise, the vector x_0 can already be considered an approximation of the solution x^{\dagger}.
Remark: Dobson ([9], Lemma 4.2) shows that condition (58) holds for the operator F of our problem, provided \varepsilon^{\dagger} \in C^{k,1} for some k \in \mathbb{Z}, k > 0, i.e.

    \|F(\varepsilon^{\dagger} + \Delta\varepsilon) - F(\varepsilon^{\dagger}) - F'(\varepsilon^{\dagger}) \Delta\varepsilon\|_{L^2(\Gamma)} \le C \|\Delta\varepsilon\|^2_{L^2(\Omega)},

whenever \varepsilon^{\dagger} + \Delta\varepsilon \in E_{ad}.
Therefore, Theorem 6 guarantees convergence of order \sqrt{\delta} to the solution of the inverse problem (12), under the above mentioned regularity assumptions on x^{\dagger}.

5.0.1 Algorithm
Our first algorithm for problem P^{\alpha,\delta} (see [7]) is a globally convergent multidirectional descent algorithm which obtains, at each iteration, one approximate solution of the continuous problem, computed by solving the discretized problem P_h^{\alpha_h,\delta}, until Wolfe's global convergence conditions are fulfilled for the continuous problem.
The algorithm to determine the parameter \alpha satisfying Morozov's principle is developed by minimizing J^{\alpha,\delta} for a sequence of regularization parameters until the desired inequality holds. Numbers c_1 > 1, \alpha_0 > 0, 0 < q < 1, \alpha_j = q^j \alpha_0 are chosen, and solutions \varepsilon^{\alpha_j,\delta} are computed until:

    \delta \le \|\varphi^{\delta} - F(\varepsilon^{\alpha_j,\delta})\| \le c_1 \delta.    (64)

Both algorithms are combined, where the approximate solution \varepsilon_h^{\alpha_j,\delta} \approx \varepsilon^{\alpha_j,\delta} of the continuous problem, computed by minimizing J_h^{\alpha_j,\delta}, is used to check the Morozov inequality (64).
ALGORITHM 2:
Choose c_1 > 1; 0 < q_0 < 1; \alpha_u > 0; j = 0.
0) Set \alpha_j = q_j \alpha_u.
1) Compute \varepsilon_h^{\alpha_j,\delta} using the algorithm in [7]:
1.1) Set h_0 = h_{inic}; \varepsilon^0(\cdot) = \varepsilon_{h_0}(\cdot) \in L^\infty(\Omega); l = 0; k = 0.
1.2) If \|\nabla J(\varepsilon^l(\cdot))\|_{L^1(\Omega)} \le \theta_0, stop: the function \varepsilon^l(\cdot) = \varepsilon_h^{\alpha_j,\delta} is an approximation of a local minimum \varepsilon^{\alpha_j,\delta} of the continuous problem P^{\alpha_j,\delta}; go to step 2). Otherwise, go to step 1.3).
1.3) Set k \to k + 1; \theta_k = \theta_{k-1}/2; h_k = h_{k-1}/2.
1.4) Define \varepsilon_{h_k} = \varepsilon^l at the new triangulation \tau_{h_k}.
1.5) Verify whether

    \|\nabla J^{\alpha_j,\delta}(\varepsilon_{h_k}(\cdot)) - \nabla J_{h_k}^{\alpha_j,\delta}(\varepsilon_{h_k})(\cdot)\|_{L^1(\Omega)} \le \theta_k \|\nabla J^{\alpha_j,\delta}(\varepsilon_{h_k}(\cdot))\|_{L^1(\Omega)}.

If it holds, go to step 1.6); if it does not hold, take h_k = h_k/2 and go to step 1.4).
1.6) Choose a step \Delta\varepsilon_{h_k} \in \mathbb{R}^{NT_{h_k}} from \varepsilon_{h_k} and \lambda_k > 0 satisfying Wolfe's conditions for the discrete problem (27).
1.7) Define \hat{\varepsilon}_{h_k} = \varepsilon_{h_k} + \lambda_k \Delta\varepsilon_{h_k} and verify whether its canonical extension \hat{\varepsilon}_{h_k}(\cdot) satisfies Wolfe's conditions for the continuous problem P^{\alpha_j,\delta}.
If they hold, or if \nabla J_{h_k}^{\alpha_{h_k},\delta}(\hat{\varepsilon}_{h_k}) = 0, take \varepsilon^{l+1}(\cdot) = \hat{\varepsilon}_{h_k}(\cdot), l = l + 1; go to step 1.2).
If they do not hold and \nabla J_{h_k}^{\alpha_{h_k},\delta}(\hat{\varepsilon}_{h_k}) \ne 0, take \varepsilon_{h_k} = \hat{\varepsilon}_{h_k}; go to step 1.6).
2) If \delta \le \|\varphi^{\delta} - F(\varepsilon_h^{\alpha_j,\delta})\| \le c_1 \delta,
then \varepsilon^{\alpha_j,\delta} = \varepsilon_h^{\alpha_j,\delta} is an approximation of the continuous problem's solution which satisfies Morozov's inequality (64) for the accepted regularization parameter \alpha_j;
else if \|\varphi^{\delta} - F(\varepsilon_h^{\alpha_j,\delta})\| > c_1 \delta, then q_{j+1} = q_0 q_j;
else q_{j+1} = q_j + (1 - q_j)/2; \alpha_j = \alpha_u.
3) \alpha_{j+1} = q_{j+1} \alpha_j; j = j + 1; go to step 1).
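The outer loop of ALGORITHM 2 (steps 0, 2 and 3) can be sketched as follows. This is our own illustration: the inner solver of [7] (step 1) is abstracted as a stub `solve_tikhonov(alpha)`, assumed to return the discrepancy, and the toy discrepancy model is ours, used only to exercise the parameter update logic:

```python
# Sketch (ours) of the outer Morozov loop of ALGORITHM 2. The inner solver
# from [7] is abstracted as `solve_tikhonov(alpha)`, assumed to return the
# discrepancy ||phi_delta - F(eps_h^{alpha,delta})||.

def morozov_loop(solve_tikhonov, delta, c1=2.0, q0=0.9, alpha_u=1.0,
                 max_iter=200):
    """Search alpha until delta <= discrepancy <= c1*delta (inequality (64))."""
    q, alpha = q0, q0 * alpha_u          # step 0: alpha_0 = q_0 * alpha_u
    for _ in range(max_iter):
        disc = solve_tikhonov(alpha)     # step 1 (stubbed inner solve)
        if delta <= disc <= c1 * delta:  # step 2: Morozov inequality holds
            return alpha, disc
        if disc > c1 * delta:            # residual too large: shrink q
            q = q0 * q
        else:                            # residual below delta: raise q and
            q = q + (1.0 - q) / 2.0      # restart from alpha_u
            alpha = alpha_u
        alpha = q * alpha                # step 3: alpha_{j+1} = q_{j+1} alpha_j
    raise RuntimeError("Morozov loop did not terminate")

# toy model (ours): discrepancy increasing in alpha, as is typical
alpha, disc = morozov_loop(lambda a: 0.01 + a, delta=0.05)
assert 0.05 <= disc <= 2.0 * 0.05
```

With a mild shrink factor q_0 close to 1 the sequence \alpha_j steps through the admissible band finely; very small q_0 can overshoot the band, which is exactly what the restart branch (q_{j+1} = q_j + (1 - q_j)/2, \alpha_j = \alpha_u) is there to correct.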

6 Conclusions
The algorithm given in [7] can be applied to the regularization of the continuous problem (14), provided the regularization parameters \alpha, \alpha_h can be chosen. Moreover, if the a priori regularization parameters \alpha, \alpha_h of both problems are adequately chosen, then the regularized discrete solutions \varepsilon_h^{\alpha_h,\delta} converge to the solution \varepsilon^{\alpha,\delta} of the regularized continuous problem. Theorem 3 shows that the algorithm in [7] can be seen as a regularization method by itself. On the other hand, the application of Morozov's principle gives a more practical approach, and the new combined Algorithm 2 can be used.
In future work, we shall analyze the applicability and implementation, for our problem, of more general Tikhonov regularization schemes, such as those studied in [2].

References
[1] Adler, A., R. Gaburro, W. Lionheart (2011). Electrical Impedance Tomography. Chapter 14 in Handbook of Mathematical Methods in Imaging, Ed. Otmar Scherzer, Springer Science+Business Media.
[2] Anzengruber, S.W., R. Ramlau (2010). Morozov's discrepancy principle for Tikhonov-type functionals with non-linear operators, Inverse Problems, 26, 025001.
[3] Bakushinsky, A.B., M. Kokurin(2005). Iterative Methods for Approximate
solution of inverse problems. Springer Verlag.
[4] Borcea, L. (2001). A nonlinear multigrid for imaging electrical conductivity and permittivity at low frequency, Inverse Problems, 17:329-359.
[5] Borcea, L. (2002). Electrical impedance tomography,Topical Review. In-
verse Problems 18:R99-R136.
[6] Butler, J.E., R.T. Bonnecaze(2000). Inverse method for imaging a free sur-
face using electrical impedance tomography. Chemical engineering science,
55: 1193-1204.

[7] Carrillo, M., J.A. Gómez (2015). A globally convergent algorithm for a PDE constrained optimization problem arising in electrical impedance tomography. Num. Funct. Anal. Optim. 36: 748-776.
[8] Dalmasso, R. (2004). An inverse problem for an elliptic equation, Pub.
RIMS, Kyoto Univ.,40:91-123.

[9] Dobson, D.C.(1992). Convergence of a reconstruction method for the


inverse conductivity problem, SIAM J. Appl. Math,(52),No. 2, 442-458.
[10] Engl, H., M. Hanke, A. Neubauer(2000). Regularization of inverse prob-
lems, Kluwer Academic Publishers,Dordrecht,Boston,London.

[11] Gómez, J.A., M. Romero (1998). Global convergence of a multidirectional algorithm for unconstrained optimal control problems. Numer. Funct. Anal. Optim., 19:9-10.
[12] Gómez S., M. Ono, C. Gamio, A. Fraguela (2003). Reconstruction of capacitance tomography images of simulated two-phase flow regimes, Applied Numerical Mathematics 46:197-208.
[13] Herzog R., K. Kunisch (2010). Algorithms for PDE-constrained
optimization,Gamm-Mitteilungen, (33):163-176.
[14] Holder, D., Editor(2005). Electrical Impedance tomography. Institute of
Physics. Series in Medical Physics and Biomedical Engineering. United
Kingdom.
[15] Jin B., Khan T., P. Maass (2012). A reconstruction algorithm for electri-
cal impedance tomography based on sparsity regularization, Int. J. Nu-
mer.Meth.Engng, (89):337-353.
[16] Kaltenbacher, B., A. Kirchner, B. Vexler(2011). Adaptive discretizations
for the choice of a Tikhonov regularization parameter in nonlinear inverse
problems. Inverse Problems, 27:12, 125008.
[17] Kaltenbacher, B., A. Neubauer (2006). Convergence of projected iterative regularization methods for nonlinear problems with smooth solutions. Inverse Problems, 22:1105-1119.
[18] Kirchner, A.R. (2014). Adaptive regularization and discretization for nonlinear inverse problems with PDEs, Thesis Dr. rer. nat., Technische Universität München, Germany.

[19] A. Lechleiter, A. Rieder(2008). Newton regularization for impedance to-


mography: convergence by local injectivity. Inverse Problems, 24:6, 065009.
[20] Miyazaki, Y.(2008). New proofs of the trace theorem of Sobolev spaces.
Proc. Japan Acad., 84, Serie A, 112-116.

[21] Nocedal, J., S.J.Wright (1999). Numerical Optimization. Springer Series
in Operations Research. Springer Verlag.
[22] Ramlau, R. (2001). Morozov's discrepancy principle for Tikhonov regularization of nonlinear operators. Zentrum für Technomathematik, Report 01-08. University of Bremen.
[23] Ramlau, R. (2002). Morozov's discrepancy principle for Tikhonov regularization of nonlinear operators. Num. Funct. Anal. Optim. 23: 147-172.
[24] Raviart P.A., J.M. Thomas (1998). Introduction à l'analyse numérique des équations aux dérivées partielles. Mathématiques Appliquées pour la Maîtrise. Dunod.
[25] Santucho E.M.A., A. Orlando, M. Luege (2013). Identificación de cavidades mediante la tomografía de impedancia eléctrica. Mecánica Computacional, Vol XXXII, 1737-1749.
[26] Seidman T.I., C.R. Vogel (1989). Well-posedness and convergence of some regularization methods for nonlinear ill-posed problems. Inverse Problems, 5:227-238.
[27] Uhlmann, G. (2009). Electrical impedance tomography and Calderón's problem. Inverse Problems 25: 123011.
