
tomography

Mauricio Carrillo*, Juan A. Gómez†

Abstract

We study the application of a successive discretization algorithm to the Tikhonov regularization of the inverse problem of electrical impedance tomography. Instead of exact data, we now consider perturbations in the measured data, and we analyze global convergence to the solution of the continuous problem, working with its discrete approximations and considering both an a priori and Morozov's a posteriori choice of the regularization parameter. The approach can be seen as a regularization method for the continuous inverse problem itself, acting through a sequential regularization of the discretized versions.

Key words: … formulation, successive finite element discretization

1 Introduction

In [7] we proposed an algorithm for the well-known inverse problem of electrical impedance tomography with exact data, and analyzed its convergence properties. The direct problem consists of the potential equation $\operatorname{div}(\varepsilon\nabla u)=0$ in the unit disk $\Omega$, with a Neumann boundary condition, describing the behavior of the electrostatic potential $u(x,y)$ in a medium with conductivity $\varepsilon(x,y)$.

We suppose that at each time a current $\psi_i$ is applied to the boundary $\Gamma=\partial\Omega$ of the disk (Neumann data) and that it is possible to measure the corresponding potential $\varphi_i$ (Dirichlet data), but now with some errors. The inverse problem is to find $\varepsilon(x,y)$ given a finite number of measured Cauchy pairs $(\varphi_i,\psi_i)$, $i=1,\dots,N$, using Tikhonov regularization.

The traditional approaches to this nonlinear inverse problem regularize a single discretized problem, obtained by finite difference or finite element methods, and solve it by various optimization algorithms, such as Newton or Gauss-Newton methods or more sophisticated SQP methods, sometimes with the inclusion of error estimates. An incomplete list includes ([3], [4], [8], [12], [13], [17], [15]); more references can be found in the survey [27].

*Universidad Católica de la Santísima Concepción, Chile
†Universidad de la Frontera, Temuco, Chile

Our work employs similar tools, but aims to solve the continuous problem through successive discrete problems, using the ideas given in [7]. Incidentally, the results explain the fact that the solution of a well-discretized problem is a good approximation of the regularized continuous solution, and they also give us some tools to judge the discretization. Ideas close to ours were recently developed in ([16], [18]), but with different assumptions and results.

In [7] the continuous least squares problem

$$J(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_i(\varepsilon)-\varphi_i\right|^{2}dS\ \to\ \min_{\varepsilon\in E_{ad}},\qquad(1)$$

was considered, where the $F_i$ are the operators which associate each parameter function $\varepsilon$ with its corresponding potential $u_i|_{\Gamma}$, the solution of the Neumann problem restricted to the boundary, and $E_{ad}$ is a closed convex set.

Analogously, the discretized version of the problem was also considered:

$$J_h(\varepsilon_h)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_{ih}(\varepsilon_h)-\varphi_{ih}\right|^{2}dS\ \to\ \min_{\varepsilon_h\in\mathbb{R}^{NT_h}},\qquad(2)$$

where $F_{ih}$ associates each parameter vector $\varepsilon_h$ with its corresponding discrete solution $u_{ih}|_{\Gamma}$, restricted to the boundary. Here the functions $u_{ih},\varphi_{ih}$ are the approximations of $u_i,\varphi_i$ obtained by applying the finite element method to the Neumann problem, choosing a regular mesh with $NT_h$ triangles, and $\varepsilon_h$ is the piecewise approximation of $\varepsilon$, considered constant on each triangle.

The algorithm in [7] finds an approximation of the continuous solution $\varepsilon^{*}$ of (1) by applying a least squares optimization method to the successive discretized problems (2), with an increasing number of triangles $NT_h$, but controlling the mesh size by checking Wolfe's global convergence conditions for the continuous problem (see also [11], [21]).

Fundamental conditions for the algorithm's success are the existence of a continuous solution $\varepsilon^{*}$ and the convergence of the gradients in the $L^1$ norm (see Lemma 5.1 in [7]):

$$\nabla J_h(\varepsilon_h)\ \underset{h\to 0}{\longrightarrow}\ \nabla J(\varepsilon^{*}),\quad\text{if }\varepsilon_h\to\varepsilon^{*}.$$
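The coarse-to-fine strategy behind the algorithm can be illustrated on a toy problem. The sketch below is an assumption-laden one-dimensional analogue, not the EIT solver: the forward map is a well-conditioned smoothing matrix, and the "mesh" is a partition of $[0,1]$ on which the parameter is piecewise constant; the least squares problem is solved on successively refined parameter spaces.

```python
import numpy as np

# Toy analogue of the successive-discretization idea (assumptions: a linear,
# well-conditioned forward map A instead of the PDE, and a 1D interval instead
# of the triangulated disk).

n = 64                                    # fine-grid resolution
x = (np.arange(n) + 0.5) / n
eps_true = 1.0 + 0.5 * np.sin(2 * np.pi * x)

# Circulant 3-point smoothing: eigenvalues 0.6 + 0.4*cos(theta) >= 0.2, so A is invertible.
A = 0.6 * np.eye(n) + 0.2 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
phi = A @ eps_true                        # exact synthetic data

def prolongation(n_coarse):
    """Map n_coarse piecewise-constant values onto the fine grid."""
    P = np.zeros((n, n_coarse))
    r = n // n_coarse
    for k in range(n_coarse):
        P[k * r:(k + 1) * r, k] = 1.0
    return P

errors = []
for n_coarse in (4, 8, 16, 32, 64):      # successive refinement of the parameter mesh
    P = prolongation(n_coarse)
    c, *_ = np.linalg.lstsq(A @ P, phi, rcond=None)   # least squares on the coarse space
    errors.append(np.linalg.norm(P @ c - eps_true) / np.sqrt(n))

print(errors)  # the error shrinks as the parameter mesh is refined
```

Because the piecewise-constant spaces are nested, the attainable misfit can only decrease under refinement, which mirrors the role of the successive discretizations (2).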

The Tikhonov regularization method and an extended formulation of the algorithm are given in section 5, to deal with the selection of the regularization parameter. In the remaining sections, we formulate the regularized continuous problem in section 2, including the analysis of the weak closedness of the operators $F_i$; in section 3 we formulate the regularized discrete problems via the finite element method; and in section 6 we comment on some conclusions and future work.

2 Regularization of the continuous problem

We shall analyze the application of the algorithm proposed in [7] to the regularized problem. To this end we consider the following Tikhonov functional:

$$J^{\alpha,\delta}(\varepsilon)=J^{\delta}(\varepsilon)+\frac{\alpha}{2}\left\|\varepsilon-\varepsilon_0\right\|_{L^2(\Omega)}^{2}\ \to\ \min\qquad(3)$$

where $\Omega$ is the open unit ball, $\Gamma$ its boundary, $\varepsilon_0\in L^2(\Omega)$ represents a priori information about the unknown conductivity $\varepsilon\in L^{\infty}(\Omega)$, and $\alpha>0$ is the regularization parameter. The functional $J^{\delta}(\varepsilon)$ is associated with the variational formulation of the potential equation problem:

$$\operatorname{div}(\varepsilon\nabla u)=0\ \text{ in }\Omega,\qquad\varepsilon\,\frac{\partial u}{\partial n}=\psi\ \text{ on }\Gamma,$$

$$J^{\delta}(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|u_i-\varphi_i^{\delta}\right|^{2}dS,\qquad(4)$$

$$\int_{\Omega}\varepsilon\nabla u_i\cdot\nabla v\,dx=\int_{\Gamma}\psi_i\,v\,dS,\quad\forall v\in V,\qquad u_i\in V,\ 1\le i\le N.\qquad(5)$$

Here $\varphi_i^{\delta}\in H^{1/2}(\Gamma)$, $1\le i\le N$, are the measured data of the potential function on the boundary, which satisfy $\|\varphi_i^{\delta}-\varphi_i\|_{H^{1/2}(\Gamma)}\le\delta$, where the $\varphi_i$ are the exact data and $\delta>0$ is the maximum error of the measurements. In what follows we shall assume that $\varphi_i^{\delta},\varphi_i,\psi_i$ are affine functions on $\Gamma$.

$V$ is the quotient space $V=H^1(\Omega)/\mathbb{R}$ with norm $\|\bar u\|_V=\|\nabla v\|_{L^2(\Omega)}$, $\forall v\in\bar u$, where $\bar u$ denotes the equivalence class of $u\in H^1(\Omega)$. The functions $\psi_i\in H^{-1/2}(\Gamma)$ also satisfy the compatibility condition:

$$\int_{\Gamma}\psi_i\,dS=0,\quad 1\le i\le N,\qquad(6)$$

and, since the $\psi_i$ are currents applied to the boundary, they are considered free of errors. To simplify notation, we shall identify the class $\bar u_i$ with any representative $u_i\in\bar u_i$.

The gradient of $J^{\alpha,\delta}(\varepsilon)$, as a function in $L^1(\Omega)$, is given by:

$$\nabla J^{\alpha,\delta}(\varepsilon)=\nabla J^{\delta}(\varepsilon)+\alpha(\varepsilon-\varepsilon_0)\in L^1(\Omega)\qquad(7)$$

with:

$$\nabla J^{\delta}(\varepsilon)=\sum_{i=1}^{N}\nabla u_i\cdot\nabla p_i,\qquad(8)$$

which has been calculated in [7], where $u_i$ is a solution of (5) and $p_i\in V$ satisfies the variational adjoint equation:

$$\int_{\Omega}\varepsilon\nabla p_i\cdot\nabla v\,dx=\int_{\Gamma}(\varphi_i^{\delta}-u_i)\,v\,dS,\quad\forall v\in V,\ 1\le i\le N.\qquad(9)$$

We formally define:

$$F_i:L^2(\Omega)\to L^2(\Gamma),\quad 1\le i\le N,\quad\text{as}\quad F_i(\varepsilon)=u_i|_{\Gamma},\qquad(10)$$

i.e. they are the operators which associate each parameter function $\varepsilon$ with its corresponding potential $u_i$, solution of (5), restricted to the boundary. More precisely, the $F_i$ shall be defined on the set:

$$E_{ad}=\left\{\varepsilon\in L^{\infty}(\Omega):K\le\varepsilon\le M\ \text{a.e. in }\Omega\right\},\qquad(11)$$

with $K>0$ a lower bound ensuring the ellipticity of the direct problem and with $M$ large enough. It is well known that, for any $\varepsilon\in E_{ad}$, each $i$-th problem (5)-(6) has a unique solution $u_i$ in $V$, and by application of the Lax-Milgram lemma [24] the following inequality holds:

$$\left\|u_i\right\|_{L^2(\Gamma)}\le\frac{C(\Gamma)}{K}\left\|\psi_i\right\|_{H^{-1/2}(\Gamma)}.$$
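The additive structure of the Tikhonov gradient (7) is easy to verify numerically. In the following sketch a linear map $A$ stands in for the PDE solution operator (an assumption made only to avoid a finite element solve; in the paper the data-misfit gradient comes from the adjoint states $p_i$ of (9)):

```python
import numpy as np

# Sanity check of the gradient structure grad J_alpha = grad J + alpha*(eps - eps0)
# on a toy quadratic misfit (linear forward map A is an assumption).

rng = np.random.default_rng(0)
m, alpha = 10, 0.3
A = rng.standard_normal((m, m))
phi = rng.standard_normal(m)
eps0 = rng.standard_normal(m)

def J(eps):
    r = A @ eps - phi
    return 0.5 * r @ r + 0.5 * alpha * np.sum((eps - eps0) ** 2)

def grad_J(eps):
    # data-misfit gradient plus the Tikhonov term alpha*(eps - eps0), as in (7)
    return A.T @ (A @ eps - phi) + alpha * (eps - eps0)

eps = rng.standard_normal(m)
g = grad_J(eps)

# central finite differences agree with the analytic gradient
h = 1e-6
fd = np.array([(J(eps + h * e) - J(eps - h * e)) / (2 * h) for e in np.eye(m)])
rel_err = np.linalg.norm(fd - g) / np.linalg.norm(g)
print(rel_err)  # small: finite-difference and analytic gradients agree
```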

In addition, we also need to work with $L^2$ Hilbert spaces in order to apply the convergence results for nonlinear regularization problems.

The formulation of the continuous inverse problem with exact data $\varphi_i$ is:

$$F_i(\varepsilon)=\varphi_i,\quad 1\le i\le N,\qquad(12)$$

and, because the $F_i$ are nonlinear, problem (12) does not necessarily have a solution and is ill-posed in the sense of Hadamard. Instead, a least squares problem is always considered:

$$J(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_i(\varepsilon)-\varphi_i\right|^{2}dS\ \to\ \min,\quad\varepsilon\in E_{ad},\qquad(13)$$

and again, by nonlinearity, problem (13) does not necessarily have a unique solution.

With inexact data, the regularized continuous problems can be formulated as follows:

$$P^{\alpha,\delta}:\quad J^{\alpha,\delta}(\varepsilon)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_i(\varepsilon)-\varphi_i^{\delta}\right|^{2}dS+\frac{\alpha}{2}\left\|\varepsilon-\varepsilon_0\right\|_{L^2(\Omega)}^{2}\ \to\ \min,\quad\varepsilon\in E_{ad}.\qquad(14)$$

Theorem 1: The functions $F_i(\varepsilon)$, $1\le i\le N$, are continuous with respect to the $L^2$ norms and are weakly closed, i.e. if $\varepsilon_n\overset{w}{\rightharpoonup}\varepsilon$, $y_n\overset{w}{\rightharpoonup}y$, $y_n=F_i(\varepsilon_n)$, then $F_i(\varepsilon)=y$.

Moreover, if $\alpha=\alpha(\delta)$ is chosen in such a way that:

$$\alpha(\delta)\to 0\quad\text{and}\quad\frac{\delta^{2}}{\alpha(\delta)}\to 0\quad\text{when }\delta\to 0,\qquad(15)$$

then problem (14) has a (not necessarily unique) solution, and every set of solutions $\{\varepsilon^{\alpha,\delta}\}$ has a convergent sequence $\{\varepsilon^{\alpha_k,\delta_k}\}$ when $\delta_k\to 0$ with $\alpha_k:=\alpha(\delta_k)$. The limit of any convergent sequence of the set $\{\varepsilon^{\alpha,\delta}\}$ is a solution of problem (13); furthermore, if (13) has a unique solution $\varepsilon^{*}$, then $\lim_{\delta\to 0}\varepsilon^{\alpha(\delta),\delta}=\varepsilon^{*}$.
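The parameter choice (15) can be illustrated on a toy linear problem, where the Tikhonov minimizer is available in closed form. The mildly ill-conditioned matrix $A$ and the choice $\alpha(\delta)=\delta$ below are assumptions for illustration only; with this choice, $\alpha\to 0$ and $\delta^2/\alpha=\delta\to 0$, and the minimizers approach the exact solution as $\delta\to 0$.

```python
import numpy as np

# Illustration of the parameter-choice rule (15): alpha(delta) = delta.
# A is an assumed mildly ill-conditioned symmetric matrix, not the EIT operator.

rng = np.random.default_rng(1)
n = 8
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -3, n)) @ U.T     # singular values from 1 down to 1e-3
eps_exact = rng.standard_normal(n)
phi = A @ eps_exact
noise = rng.standard_normal(n)
noise /= np.linalg.norm(noise)

errors = []
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    phi_delta = phi + delta * noise              # ||phi_delta - phi|| = delta
    alpha = delta                                 # choice satisfying (15)
    # Tikhonov minimizer of 0.5*||A e - phi_delta||^2 + 0.5*alpha*||e||^2
    e = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ phi_delta)
    errors.append(np.linalg.norm(e - eps_exact))

print(errors)  # the reconstruction error decreases as delta -> 0
```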

Proof: The result is a consequence of Theorem 10.3 in [10]. Norm continuity of the operators $F_i$ is known (see [9]), so it remains to show the weak closedness of the $F_i$. We use the generic notation $F=F_i$ and prove that $F$ is in fact weakly continuous.

First, note that $\varepsilon_1,\varepsilon_2\in E_{ad}\Longrightarrow|\varepsilon_1-\varepsilon_2|\le M-K$. Let $\{\varepsilon_n\}\subset E_{ad}$ be such that:

$$\varepsilon_n\overset{w}{\rightharpoonup}\varepsilon\iff\int_{\Omega}\varepsilon_n v\,dx\to\int_{\Omega}\varepsilon v\,dx,\quad\forall v\in L^2(\Omega).\qquad(16)$$

We must show that:

$$F(\varepsilon_n)\overset{w}{\rightharpoonup}F(\varepsilon)\iff\int_{\Gamma}u_n|_{\Gamma}\,w\,dS\to\int_{\Gamma}u|_{\Gamma}\,w\,dS,\quad\forall w\in L^2(\Gamma),\qquad(17)$$

where $u$ and $u_n$ are the respective solutions of:

$$\int_{\Omega}\varepsilon\nabla u\cdot\nabla v\,dx=\int_{\Gamma}\psi v\,dS,\qquad\int_{\Omega}\varepsilon_n\nabla u_n\cdot\nabla v\,dx=\int_{\Gamma}\psi v\,dS,\quad\forall v\in V.\qquad(18)$$

In fact, we shall prove the stronger property:

$$\left\|F(\varepsilon_n)-F(\varepsilon)\right\|_{L^2(\Gamma)}\ \underset{n\to\infty}{\longrightarrow}\ 0.\qquad(19)$$

Recalling that $V=H^1(\Omega)/\mathbb{R}$ is a Hilbert space with scalar product

$$\langle w,v\rangle_V=\int_{\Omega}\nabla w\cdot\nabla v\,dx$$

and norm $\|v\|_V=\|\nabla v\|_{L^2(\Omega)}$, we apply the trace theorem [20] to obtain:

$$\left\|u_n-u\right\|_{L^2(\Gamma)}\le C(\Gamma)\left\|\nabla u_n-\nabla u\right\|_{L^2(\Omega)}.\qquad(20)$$

From (18) it follows that:

$$\int_{\Omega}\left(\varepsilon_n\nabla u_n-\varepsilon\nabla u\right)\cdot\nabla v\,dx=0,$$

$$\int_{\Omega}\varepsilon_n\nabla(u_n-u)\cdot\nabla v\,dx=\int_{\Omega}(\varepsilon-\varepsilon_n)\nabla u\cdot\nabla v\,dx,\quad\forall v\in V,\ \forall n\in\mathbb{N}.\qquad(21)$$

Define the linear functional

$$G_n(v)=\int_{\Omega}(\varepsilon-\varepsilon_n)\nabla u\cdot\nabla v\,dx.\qquad(22)$$

$E_{ad}$ is a closed convex set, hence weakly closed, and $\varepsilon_n,\varepsilon\in E_{ad}$. Moreover, $G_n$ is continuous for all $n$, as a consequence of the inequalities:

$$|G_n(v)|\le\int_{\Omega}|\varepsilon_n-\varepsilon|\,|\nabla u\cdot\nabla v|\,dx\le(M-K)\left\|\nabla u\right\|_{L^2}\left\|\nabla v\right\|_{L^2},\quad\forall v\in V,\ \forall n\in\mathbb{N}.$$

The bilinear forms

$$a_n(w,v)=\int_{\Omega}\varepsilon_n\nabla w\cdot\nabla v\,dx$$

are continuous and coercive, uniformly in $n$, since $\varepsilon_n\ge K$ is uniformly bounded from below. Then we can apply the Lax-Milgram theorem [24] to (21), obtaining:

$$\left\|\nabla(u_n-u)\right\|_{L^2(\Omega)}\le\frac{1}{K}\left\|G_n\right\|_{V'},\qquad(23)$$

$$\left\|F(\varepsilon_n)-F(\varepsilon)\right\|_{L^2(\Gamma)}=\left\|u_n|_{\Gamma}-u|_{\Gamma}\right\|_{L^2(\Gamma)}\le\frac{C(\Gamma)}{K}\left\|G_n\right\|_{V'}.\qquad(24)$$

By the Riesz representation theorem there exists $g_n\in V$ such that

$$G_n(v)=\langle g_n,v\rangle_V=\int_{\Omega}\nabla g_n\cdot\nabla v\,dx,\quad\text{with }\left\|G_n\right\|_{V'}=\left\|g_n\right\|_{V},$$

and from (22) it follows that $\nabla g_n=(\varepsilon-\varepsilon_n)\nabla u$ a.e., therefore:

$$\left\|G_n\right\|_{V'}^{2}=\left\|g_n\right\|_{V}^{2}=\langle g_n,g_n\rangle_V=\int_{\Omega}|\nabla g_n|^{2}dx=\int_{\Omega}|\varepsilon_n-\varepsilon|^{2}|\nabla u|^{2}dx.\qquad(25)$$

Defining:

$$\Omega^{+}=\bigcup_{n=1}^{\infty}\left\{x\in\Omega:\varepsilon_n-\varepsilon\ge 0\right\},\qquad\Omega^{-}=\bigcup_{n=1}^{\infty}\left\{x\in\Omega:\varepsilon_n-\varepsilon<0\right\},$$

we can estimate:

$$\int_{\Omega}|\varepsilon_n-\varepsilon|^{2}|\nabla u|^{2}dx\le\int_{\Omega^{+}}(\varepsilon_n-\varepsilon)(\varepsilon_n-\varepsilon)|\nabla u|^{2}dx+\int_{\Omega^{-}}(\varepsilon-\varepsilon_n)(\varepsilon-\varepsilon_n)|\nabla u|^{2}dx$$

$$\le(M-K)\left\{\int_{\Omega}(\varepsilon_n-\varepsilon)\,\chi^{+}|\nabla u|^{2}dx+\int_{\Omega}(\varepsilon-\varepsilon_n)\,\chi^{-}|\nabla u|^{2}dx\right\},\qquad(26)$$

where $\chi^{\pm}$ denote the indicator functions of the measurable sets $\Omega^{\pm}$.

Since $L^2(\Omega)$ is a dense subset of $L^1(\Omega)$, we have:

$$\varepsilon_n\overset{w}{\rightharpoonup}\varepsilon\ \text{in }L^2(\Omega)\ \Longrightarrow\ \int_{\Omega}(\varepsilon_n-\varepsilon)\,v\,dx\to 0,\quad\forall v\in L^1(\Omega).$$

Indeed, given $v\in L^1(\Omega)$ and $\eta>0$, choose $\bar v\in L^2(\Omega)$ such that:

$$\left\|v-\bar v\right\|_{L^1(\Omega)}<\frac{\eta}{2(M-K)}.$$

For $N=N(\bar v,\eta)$ large enough, we have for $n>N$:

$$\left|\int_{\Omega}(\varepsilon_n-\varepsilon)\,\bar v\,dx\right|<\frac{\eta}{2},$$

and then:

$$\left|\int_{\Omega}(\varepsilon_n-\varepsilon)\,v\,dx\right|\le\int_{\Omega}|\varepsilon_n-\varepsilon|\,|v-\bar v|\,dx+\left|\int_{\Omega}(\varepsilon_n-\varepsilon)\,\bar v\,dx\right|\le(M-K)\int_{\Omega}|v-\bar v|\,dx+\frac{\eta}{2}<\frac{\eta}{2}+\frac{\eta}{2}=\eta.$$

Now, taking into account that the function $|\nabla u|^{2}\in L^1(\Omega)$, we conclude:

$$\int_{\Omega}(\varepsilon_n-\varepsilon)\,\chi^{+}|\nabla u|^{2}dx+\int_{\Omega}(\varepsilon-\varepsilon_n)\,\chi^{-}|\nabla u|^{2}dx\to 0,\quad\text{if }n\to\infty,$$

and from (25), (26):

$$\left\|G_n\right\|_{V'}^{2}=\int_{\Omega}|\varepsilon_n-\varepsilon|^{2}|\nabla u|^{2}dx\to 0,\quad\text{if }n\to\infty.$$

Hence, by (24), $\|F(\varepsilon_n)-F(\varepsilon)\|_{L^2(\Gamma)}\to 0$ if $n\to\infty$, which proves (19).

To be precise, the convergence in Theorem 1 is a minimal-norm convergence, i.e. it is convergence to a solution $\bar\varepsilon$ of (13) satisfying

$$\left\|\bar\varepsilon-\varepsilon_0\right\|_{L^2(\Omega)}=\min\left\{\left\|\varepsilon-\varepsilon_0\right\|_{L^2(\Omega)}:\varepsilon\ \text{is a solution of (13)}\right\}.$$

To simplify, we assume that (13) has a unique solution $\varepsilon^{*}$. Therefore, if $\{\varepsilon^{\alpha(\delta),\delta}\}_{\delta>0}$ is a set of optimal solutions of problem (14) with $\alpha(\delta)$ satisfying (15), and if $\varepsilon^{\alpha(\delta),\delta}\underset{\delta\to 0}{\longrightarrow}\varepsilon^{*}$ in $L^2(\Omega)$, then $\varepsilon^{*}$ is the solution of the least squares problem with exact data.

3 Regularization of the discrete problems

We consider the discretized versions of the regularized continuous problems (3) or (14), given by:

$$J_h^{\alpha_h,\delta}(\varepsilon_h)=J_h^{\delta}(\varepsilon_h)+\frac{\alpha_h}{2}\left\|\varepsilon_h-\varepsilon_{0h}\right\|_{\tau_h}^{2}\ \to\ \min,\qquad(27)$$

with a triangulation of $NT_h$ triangles:

$$\tau_h=\left\{T_1,T_2,\dots,T_{NT_h}\right\},\qquad\operatorname{int}(T_i)\cap\operatorname{int}(T_j)=\varnothing,\ i\ne j,\qquad\bigcup_{j=1}^{NT_h}T_j=\bar\Omega,$$

corresponding to the application of the finite element method for the numerical solution of the Neumann problem (5). Here $\alpha_h$ is the regularization parameter and $\|\cdot\|_{\tau_h}$ is the norm in $\mathbb{R}^{NT_h}$ defined by:

$$\left\|\varepsilon_h-\varepsilon_{0h}\right\|_{\tau_h}^{2}=\left(\varepsilon_h-\varepsilon_{0h}\right)^{t}D_h\left(\varepsilon_h-\varepsilon_{0h}\right),$$

where $D_h=\operatorname{diag}(|T_1|,\dots,|T_{NT_h}|)$ is the diagonal matrix of the triangle areas, and $\varepsilon_h,\varepsilon_{0h}\in\mathbb{R}^{NT_h}$ are the vectors representing the piecewise constant discretizations, with respect to the triangular mesh, of the functions $\varepsilon$ and $\varepsilon_0$.

We denote by $\varepsilon_h(\cdot)$ the canonical extension of the vector $\varepsilon_h$ to a function of $L^{\infty}(\Omega)$, defined a.e. by:

$$\varepsilon_h(x,y)=(\varepsilon_h)_k,\quad\forall(x,y)\in T_k,\ \forall T_k\in\tau_h,\ k=1,2,\dots,NT_h,$$

and analogously for $\varepsilon_{0h}$. We denote by $\mathbb{R}^{NT_h}(\cdot)$ the set of all functions which are canonical extensions of vectors in $\mathbb{R}^{NT_h}$, for each fixed triangulation $\tau_h$. We also assume that $\varepsilon_0\in L^2(\Omega)$ is selected in such a way that:

$$\left\|\varepsilon_0-\varepsilon_{0h}(\cdot)\right\|_{L^2(\Omega)}\to 0,\quad\text{when }h\to 0.$$
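The canonical extension is simply a triangle-wise lookup. A minimal sketch (the two-triangle mesh of the unit square is an assumption, used only to keep the example small):

```python
import numpy as np

# Canonical extension eps_h(.): a vector indexed by the triangles becomes a
# piecewise-constant function, eps_h(x) = (eps_h)_k for x in triangle T_k.

triangles = [  # each triangle as a 3x2 array of vertices (assumed toy mesh)
    np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]),
    np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]),
]

def in_triangle(p, tri, tol=1e-12):
    """Barycentric-coordinate test for p in tri."""
    a, b, c = tri
    M = np.column_stack((b - a, c - a))
    lam = np.linalg.solve(M, p - a)
    return lam[0] >= -tol and lam[1] >= -tol and lam.sum() <= 1 + tol

def canonical_extension(eps_vec, triangles):
    def eps_fun(p):
        for k, tri in enumerate(triangles):
            if in_triangle(np.asarray(p, dtype=float), tri):
                return eps_vec[k]
        raise ValueError("point outside the mesh")
    return eps_fun

eps_h = canonical_extension(np.array([2.0, 5.0]), triangles)
print(eps_h((0.7, 0.2)), eps_h((0.2, 0.7)))  # 2.0 5.0
```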

The discrete functional and state equations are:

$$J_h^{\delta}(\varepsilon_h)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|u_{ih}-\varphi_{ih}^{\delta}\right|^{2}dS,\quad\varepsilon_h\in\mathbb{R}^{NT_h},\qquad(28)$$

$$\int_{\Omega}\varepsilon_h(\cdot)\nabla u_{ih}\cdot\nabla v_h\,dx=\int_{\Gamma}\psi_i\,v_h\,dS,\quad\forall v_h\in V_h,\qquad u_{ih}\in V_h,\ 1\le i\le N.\qquad(29)$$

As usual in the finite element approach, equation (29) is equivalent to the solution of a system of linear equations $Az=b$; its solution vector $z$ contains the coefficients of a linear combination of a basis $\{\theta_1,\dots,\theta_{N_h}\}$, $N_h$ being the number of nodes in the triangulation $\tau_h$, which approximates the continuous solution through the equality:

$$u(x)\approx u_{ih}(x)=\sum_{j=1}^{N_h}z_j\,\theta_j(x),\quad x\in\bar\Omega.\qquad(30)$$

$V_h$ is the subspace of $V$ defined by:

$$V_h=\left\{v_h\in C(\bar\Omega)\cap V:\ v_h|_T\in P_1(T),\ \forall T\in\tau_h\right\}\qquad(31)$$

and it is considered as the space of solutions of the discretized problem, where $P_1(T)$ is the set of polynomials of degree at most one defined on $T$.
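The assembly of the system $Az=b$ can be sketched in one dimension, which keeps the element loop visible (the 1D mesh and P1 hat functions below are an assumed stand-in for the triangles of $\tau_h$, with one conductivity value per element, as in the piecewise-constant discretization of $\varepsilon$):

```python
import numpy as np

# 1D analogue of the stiffness assembly behind (29):
# A[i, j] = sum over elements of eps_e * integral of theta_i' theta_j' dx.
# With pure Neumann data the rows of A sum to zero (constants lie in the
# kernel), which is why the paper works in the quotient space V = H^1/R.

def assemble_stiffness(nodes, eps_h):
    n = len(nodes)
    A = np.zeros((n, n))
    for e in range(n - 1):                 # loop over elements [x_e, x_{e+1}]
        h = nodes[e + 1] - nodes[e]
        k_local = eps_h[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        A[e:e + 2, e:e + 2] += k_local     # scatter the 2x2 element matrix
    return A

nodes = np.linspace(0.0, 1.0, 6)
eps_h = np.array([1.0, 2.0, 1.5, 3.0, 1.0])   # one conductivity value per element
A = assemble_stiffness(nodes, eps_h)

print(np.allclose(A, A.T), np.allclose(A @ np.ones(len(nodes)), 0.0))  # True True
```

The zero row sums are the discrete counterpart of the compatibility condition (6): the pure Neumann system is singular on constants and is solved in the quotient space.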

The functions $\varphi_i^{\delta},\psi_i$ in (12), (5) are supposed to be affine functions on $\Gamma$. As a consequence:

$$\varphi_{ih}^{\delta}(x)=\sum_{k=1}^{N_{\Gamma h}}\varphi_i^{\delta}(x_k)\,\theta_k(x),$$

where $N_{\Gamma h}$ is the number of nodes $x_k$ on $\Gamma$ and the $\theta_k$ are the basis functions corresponding to those nodes. As we did for $\varepsilon_h\in\mathbb{R}^{NT_h}$, we can identify the $\varphi_{ih}^{\delta}$ with vectors in $\mathbb{R}^{N_{\Gamma h}}$, and likewise for $\varphi_{ih}$:

$$\varphi_{ih}^{\delta}=\left(\varphi_i^{\delta}(x_k)\right)_{1\le k\le N_{\Gamma h}}.$$

The gradient of $J_h^{\alpha_h,\delta}$ is given by:

$$\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)=\nabla J_h^{\delta}(\varepsilon_h)+\alpha_h D_h(\varepsilon_h-\varepsilon_{0h}),\qquad(32)$$

and it is easily seen that we have the equalities:

$$\left\langle D_h(\varepsilon_h-\varepsilon_{0h}),\eta_h\right\rangle_{\mathbb{R}^{NT_h}}=\sum_{k=1}^{NT_h}\left(\varepsilon_{hk}-\varepsilon_{0hk}\right)\eta_{hk}\,|T_k|=\int_{\Omega}\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right)\eta_h(\cdot)\,dx.\qquad(33)$$

Using (32) and (33), the canonical extension $\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot)$ of the gradient vector $\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)$ can be seen as a linear functional over $L^{\infty}(\Omega)$, which we denote $\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\cdot]$, through the formula:

$$\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)[\eta]=\int_{\Omega}\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(x)\,\eta(x)\,dx,\quad\forall\eta\in L^{\infty}(\Omega),$$

$$\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot)=\nabla J_h^{\delta}(\varepsilon_h)(\cdot)+\alpha_h\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right),\qquad(34)$$

where the expression for the first term in (34) was obtained in [7]:

$$\nabla J_h^{\delta}(\varepsilon_h)[\cdot]=\sum_{i=1}^{N}\nabla u_{ih}\cdot\nabla p_{ih}.\qquad(35)$$

Here $u_{ih},p_{ih}\in V_h$ are the respective solutions of the discrete variational equations:

$$\int_{\Omega}\varepsilon_h(\cdot)\nabla u_{ih}\cdot\nabla v_h\,dx=\int_{\Gamma}\psi_i\,v_h\,dS,\qquad\int_{\Omega}\varepsilon_h(\cdot)\nabla p_{ih}\cdot\nabla v_h\,dx=\int_{\Gamma}\left(\varphi_{ih}^{\delta}-u_{ih}\right)v_h\,dS,\quad\forall v_h\in V_h.\qquad(36)$$

As in the continuous case, and for each fixed mesh $\tau_h$, we define the operators $F_{ih}:\mathbb{R}^{NT_h}\to\mathbb{R}^{N_{\Gamma h}}$ as $F_{ih}(\varepsilon_h)=u_{ih}|_{\Gamma}$, which associate a parameter vector $\varepsilon_h$ with its corresponding discrete solution, satisfying (30), restricted to $\Gamma$. Then the discrete problem with exact data can be formulated as follows: find $\varepsilon_h\in E_{ad}$ such that:

$$F_{ih}(\varepsilon_h)=\varphi_{ih},\quad 1\le i\le N,\qquad(37)$$

and its regularized discrete problem as:

$$J_h^{\alpha_h,\delta}(\varepsilon_h)=\frac{1}{2}\sum_{i=1}^{N}\int_{\Gamma}\left|F_{ih}(\varepsilon_h)-\varphi_{ih}^{\delta}\right|^{2}dS+\frac{\alpha_h}{2}\left\|\varepsilon_h-\varepsilon_{0h}\right\|_{\tau_h}^{2}\ \to\ \min,\quad\varepsilon_h\in E_{ad}.\qquad(38)$$

4 Convergence of Gradients

Theorem 2: Suppose $\{\varepsilon_h\}_{h>0}\subset\mathbb{R}^{NT_h}(\cdot)$ converges to $\varepsilon$ in $L^{\infty}(\Omega)$ when $h\to 0$, and the parameters $\alpha,\alpha_h$ are chosen such that:

$$\alpha_h\ \underset{h\to 0}{\longrightarrow}\ \alpha.\qquad(39)$$

Then we have:

$$\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot)\ \underset{h\to 0}{\longrightarrow}\ \nabla J^{\alpha,\delta}(\varepsilon)\quad\text{in }L^1(\Omega),\ \forall\delta>0.$$

Corollary: If $\alpha_h(\delta)$ is a regularization parameter for the discrete problem (27) satisfying (15), and

$$\lim_{(h,\delta)\to(0,0)}\frac{\delta^{2}}{\alpha_h(\delta)}=0,\qquad(40)$$

then $\alpha(\delta)$ also satisfies (15), and it is a regularization parameter for the continuous problem (14). Conversely, if $\alpha(\delta)$ is a regularization parameter for (14) satisfying (15), then $\alpha_h(\delta)$ is a regularization parameter for (27), provided that (39) is fulfilled uniformly for $\delta\in(0,\delta_0)$, with $\delta_0>0$.

Remark: Note that if $\{\varepsilon_h^{\alpha_h,\delta}\}_{h>0}$ is a set of optimal solutions of the regularized discrete problem (38) which converges, as $h\to 0$, to $\varepsilon^{\alpha,\delta}$ in $L^{\infty}(\Omega)$, then, using the convexity of $E_{ad}$, we have:

$$\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta})\left[\eta-\varepsilon_h^{\alpha_h,\delta}\right]\ge 0\ \Longrightarrow\ \nabla J^{\alpha,\delta}(\varepsilon^{\alpha,\delta})\left[\eta-\varepsilon^{\alpha,\delta}\right]\ge 0,\quad\forall\eta\in E_{ad},$$

and $\varepsilon^{\alpha,\delta}$ satisfies the first order optimality conditions for the regularized continuous problem (14).

Proof: From (7), (34) and the triangle inequality we have:

$$\left\|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h)(\cdot)-\nabla J^{\alpha,\delta}(\varepsilon)\right\|_{L^1(\Omega)}\le\left\|\nabla J_h^{\delta}(\varepsilon_h)(\cdot)-\nabla J^{\delta}(\varepsilon)\right\|_{L^1(\Omega)}+\left\|\alpha(\varepsilon-\varepsilon_0)-\alpha_h\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right)\right\|_{L^1(\Omega)}.\qquad(41)$$

The first term tends to zero by Lemma 5.1 in [7]. For the second term:

$$\left\|\alpha\varepsilon-\alpha_h\varepsilon_h(\cdot)\right\|_{L^1}+\left\|\alpha\varepsilon_0-\alpha_h\varepsilon_{0h}(\cdot)\right\|_{L^1}\le\left\|(\alpha-\alpha_h)\varepsilon\right\|_{L^1}+\left\|\alpha_h(\varepsilon-\varepsilon_h(\cdot))\right\|_{L^1}+\left\|(\alpha-\alpha_h)\varepsilon_0\right\|_{L^1}+\left\|\alpha_h(\varepsilon_0-\varepsilon_{0h}(\cdot))\right\|_{L^1},$$

and then:

$$\left\|\alpha(\varepsilon-\varepsilon_0)-\alpha_h\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right)\right\|_{L^1(\Omega)}\le C\left[|\alpha-\alpha_h|\left(\left\|\varepsilon\right\|_{L^2(\Omega)}+\left\|\varepsilon_0\right\|_{L^2(\Omega)}\right)+\alpha_h\left(\left\|\varepsilon-\varepsilon_h(\cdot)\right\|_{L^2(\Omega)}+\left\|\varepsilon_0-\varepsilon_{0h}(\cdot)\right\|_{L^2(\Omega)}\right)\right],\qquad(42)$$

where $C=|\Omega|^{1/2}$ comes from the Cauchy-Schwarz inequality. By (39) we obtain:

$$\left\|\alpha(\varepsilon-\varepsilon_0)-\alpha_h\left(\varepsilon_h(\cdot)-\varepsilon_{0h}(\cdot)\right)\right\|_{L^1(\Omega)}\to 0,$$

which proves the convergence of the gradients.

Furthermore, it is easy to see that Theorem 1 can be applied to the finite dimensional discretized problem (27), using the Tikhonov method (38), because each $F_{ih}(\varepsilon_h)$ is continuous on $\mathbb{R}^{NT_h}$ and therefore weakly closed. The selection of $\alpha_h(\delta)$ satisfying (15) and (40) gives a regularization parameter for each (discrete and continuous) problem.

In fact, if $\alpha_h(\delta)$ satisfies (15), (40), we have:

$$\lim_{\delta\to 0}\frac{\delta^{2}}{\alpha(\delta)}=\lim_{\delta\to 0}\lim_{h\to 0}\frac{\delta^{2}}{\alpha_h(\delta)}=\lim_{h\to 0}\lim_{\delta\to 0}\frac{\delta^{2}}{\alpha_h(\delta)}=0,$$

where the exchange of limits is justified by (40), and therefore $\alpha(\delta)$ is a regularization parameter for (14).

On the other hand, if $\alpha(\delta)$ satisfies (15) and there exists $\delta_0>0$ such that (39) is satisfied uniformly for $\delta\in(0,\delta_0)$, then necessarily $\alpha_h(\delta)$ satisfies (15) for $h$ small enough. If this were not so, there would exist $\gamma>0$ and sequences $h_k,\delta_k\to 0$ such that $\alpha_{h_k}(\delta_k)\ge\gamma$ for all $k$. By the uniformity assumption, there exists $h_0>0$ such that:

$$\left|\alpha_h(\delta)-\alpha(\delta)\right|<\frac{\gamma}{2},\quad\forall h\in(0,h_0),\ \forall\delta\in(0,\delta_0);$$

then $h_k\in(0,h_0)$ and $\delta_k\in(0,\delta_0)$ for all $k\ge k_0$, for some $k_0\in\mathbb{N}$, and therefore:

$$\alpha(\delta_k)\ge\alpha_{h_k}(\delta_k)-\frac{\gamma}{2}\ge\frac{\gamma}{2},\quad\text{for all }k\ge k_0,$$

contradicting (15) for $\alpha(\delta)$.

Analogously, it can be shown that $\lim_{\delta\to 0}\delta^{2}/\alpha_h(\delta)=0$ for $h$ small enough. Otherwise, we obtain as before:

$$\frac{\delta_k^{2}}{\alpha(\delta_k)}\ge\frac{\delta_k^{2}}{\alpha_{h_k}(\delta_k)+\gamma/2}>\frac{\delta_k^{2}}{2\,\alpha_{h_k}(\delta_k)},\quad\forall k\ge k_0,$$

for sequences $h_k,\delta_k\to 0$; then:

$$\frac{\delta_k^{2}}{\alpha_{h_k}(\delta_k)}\ge\eta,\ \forall k\ \Longrightarrow\ \frac{\delta_k^{2}}{\alpha(\delta_k)}>\frac{\eta}{2},\ \forall k\ge k_0,$$

and this contradicts (15) for $\alpha(\delta)$.

Theorem 3: Let $\{\varepsilon_h^{\alpha_h,\delta}\}$ be a set of solutions of the regularized discrete problems (27), where $h\in(0,\delta)$. Suppose $\alpha(\delta),\alpha_h(\delta)$ are chosen satisfying (39), (40), and $\varepsilon_h^{\alpha_h(\delta),\delta}\underset{\delta\to 0}{\longrightarrow}\bar\varepsilon$ in $L^{\infty}(\Omega)$. Then:

$$\left\|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta})-\nabla J(\bar\varepsilon)\right\|_{L^1(\Omega)}\to 0,\quad\text{if }\delta\to 0.$$

Moreover, every accumulation point $\bar\varepsilon$ of $\{\varepsilon_h^{\alpha_h,\delta}\}$, for $\delta\to 0$, satisfies the optimality conditions for the continuous problem (13), and $\bar\varepsilon=\varepsilon^{*}$ in the case of a unique solution $\varepsilon^{*}$.

Proof: By the triangle inequality we have:

$$\left\|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta})-\nabla J(\bar\varepsilon)\right\|_{L^1(\Omega)}\le\left\|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta})-\nabla J^{\alpha,\delta}(\bar\varepsilon)\right\|_{L^1(\Omega)}+\left\|\nabla J^{\alpha,\delta}(\bar\varepsilon)-\nabla J(\bar\varepsilon)\right\|_{L^1(\Omega)}.$$

By Theorem 2, for the first term:

$$\left\|\nabla J_h^{\alpha_h,\delta}(\varepsilon_h^{\alpha_h,\delta})-\nabla J^{\alpha,\delta}(\bar\varepsilon)\right\|_{L^1(\Omega)}\ \underset{h\to 0}{\longrightarrow}\ 0\quad\text{for each }\delta>0.$$

It remains to estimate the second term. We know the gradient formulas for the continuous functionals:

$$\nabla J(\bar\varepsilon)=\sum_{i=1}^{N}\nabla\bar u_i\cdot\nabla\bar p_i,\qquad(43)$$

$$\nabla J^{\alpha(\delta),\delta}(\bar\varepsilon)=\sum_{i=1}^{N}\nabla\bar u_i\cdot\nabla p_i^{\delta}+\alpha(\delta)\left(\bar\varepsilon-\varepsilon_0\right),\qquad(44)$$

where $\bar u_i$, $\bar p_i$ and $p_i^{\delta}$ are the solutions of the variational problems, for $1\le i\le N$:

$$\int_{\Omega}\bar\varepsilon\nabla\bar u_i\cdot\nabla v\,dx=\int_{\Gamma}\psi_i\,v\,dS,\quad\forall v\in V,\qquad(45)$$

$$\int_{\Omega}\bar\varepsilon\nabla\bar p_i\cdot\nabla v\,dx=\int_{\Gamma}\left(\varphi_i-\bar u_i\right)v\,dS,\quad\forall v\in V,\qquad(46)$$

$$\int_{\Omega}\bar\varepsilon\nabla p_i^{\delta}\cdot\nabla v\,dx=\int_{\Gamma}\left(\varphi_i^{\delta}-\bar u_i\right)v\,dS,\quad\forall v\in V,\qquad(47)$$

where $\bar\varepsilon\in L^{\infty}(\Omega)$, $\bar\varepsilon\ge K>0$ a.e., $\varphi_i^{\delta}\in H^{1/2}(\Gamma)$ and $\psi_i\in H^{-1/2}(\Gamma)$.

Since the bilinear form associated with the three variational problems (45), (46) and (47) is the same continuous and coercive bilinear form

$$a(w,v)=\int_{\Omega}\bar\varepsilon\nabla w\cdot\nabla v\,dx,$$

rJ ( ); (") rJ(") L ( )

1

XN N

X

rui rpi rui rpi + ( ) (" "0 ) L1 ( )

;

i=1 i=1 L1 ( )

N

X

rui r(pi pi ) L1 ( )

+ ( ) " "0 L1 ( )

;

i=1

N

X

( );

rJ (") rJ(") krui kL2 ( ) r(pi pi ) L2 ( )

+ ( ) " "0 L1 ( )

:

L1 ( )

i=1

(48)

Using the Lax-Milgram theorem in (45) we can write:

$$\left\|\nabla\bar u_i\right\|_{L^2(\Omega)}\le\frac{C(\Gamma)}{K}\left\|\psi_i\right\|_{H^{-1/2}(\Gamma)},\qquad(49)$$

$K$ being the coercivity constant of $a(w,v)$. In addition, from (46), (47) we obtain that $(p_i^{\delta}-\bar p_i)$ is the solution of the variational problem:

$$\int_{\Omega}\bar\varepsilon\nabla(p_i^{\delta}-\bar p_i)\cdot\nabla v\,dx=\int_{\Gamma}\left(\varphi_i^{\delta}-\varphi_i\right)v\,dS,\quad\forall v\in V,$$

hence:

$$\left\|\nabla(p_i^{\delta}-\bar p_i)\right\|_{L^2(\Omega)}\le\frac{C(\Gamma)}{K}\left\|\varphi_i^{\delta}-\varphi_i\right\|_{H^{-1/2}(\Gamma)}.$$

But we have:

$$\left\|\varphi_i^{\delta}-\varphi_i\right\|_{H^{-1/2}(\Gamma)}\le\left\|\varphi_i^{\delta}-\varphi_i\right\|_{H^{1/2}(\Gamma)}\le\delta,$$

therefore:

$$\left\|\nabla p_i^{\delta}-\nabla\bar p_i\right\|_{L^2(\Omega)}\le\frac{C(\Gamma)}{K}\,\delta.\qquad(50)$$

" N

#

C 2( ) X

rJ ( ); (") rJ(") 2 k i kH 1 + ( ) " "0 L1 ( )

L1 ( ) 2 ( )

i=1

(51)

and nally:

( );

rJ (") rJ(") !0:

L1 ( ) !0

The solutions "h h ; satisfy rst order optimality conditions for problem

(27), and since Ead is a closed convex set, we have:

h i

rJh h ; ("h h ; ) "h h ; 0; 8 2 Ead ;

and " satisfy rst order optimality conditions for problem (13).

Remark: As a consequence of Theorem 3, the algorithm given in [7] can be applied to the regularized continuous problem (14). If $\alpha(\delta),\alpha_h(\delta)$ are chosen to satisfy (15), (39), (40), and taking $h\in(0,\delta)$, the algorithm in [7], applied to problem (14), can be interpreted as a regularization method for the continuous problem (13), working through the regularization of successive discrete problems (27).

Remark: Note that the a priori selection of the parameters $\alpha,\alpha_h$ remains unsolved, and the application of Theorem 10.3 in [10] demands the use of an unknown vector $w$. An a posteriori selection of the regularizing parameters seems preferable. Therefore, in the next section we examine the application of the commonly used Morozov discrepancy method.

5 A posteriori selection of the regularization parameter

To simplify matters, in this section we shall consider $F_i=F$, i.e. only one datum $\varphi_i=\varphi$ is given, or the vector notation $F=(F_1,F_2,\dots,F_N)$ is used.

Following the approach given in [23], [22], for a given $c_1>1$ it is necessary to find $\alpha=\alpha(\delta,\varphi^{\delta})>0$ such that there exists a solution $\varepsilon^{\alpha,\delta}$ of problem $P^{\alpha,\delta}$ in (14) satisfying:

$$\left\|\varphi^{\delta}-F(\varepsilon^{\alpha,\delta})\right\|_{L^2(\Gamma)}\le c_1\delta.\qquad(52)$$

By Theorem 2.6 in [22], the existence of such an $\alpha$ is a consequence of the strong continuity of the operator $F$ in (10), which we have already proved in Theorem 1.

Furthermore, if $DF(\cdot)^{*}$ denotes the adjoint of the derivative of the operator $F$, and assuming that $\varepsilon\mapsto DF(\varepsilon)^{*}$ is weakly/strongly continuous, i.e.

$$\varepsilon_n\overset{w}{\rightharpoonup}\varepsilon\ \text{in }L^2(\Omega)\ \Longrightarrow\ DF(\varepsilon_n)^{*}\eta\to DF(\varepsilon)^{*}\eta\ \text{in }L^2(\Omega),\quad\forall\eta\in L^2(\Gamma),\qquad(53)$$

then, in [22] (Theorem 2.7), it is shown that the set $\{\varepsilon^{\alpha_k,\delta_k}\}$ of optimal solutions of a sequence of problems $P^{\alpha_k,\delta_k}$ has at least one convergent subsequence in $L^2(\Omega)$.

Theorem 4: The operator $F$, defined in (10), satisfies property (53), provided that

$$\varepsilon_n,\varepsilon\in E_{ad}\cap C^{k,1}(\bar\Omega)\quad\text{for some integer }k>0.\qquad(54)$$

Proof: Let $u$, $u^{\eta}$, $u_n$, $u_n^{\eta}$ be the respective solutions of the variational problems:

$$\text{i)}\ \int_{\Omega}\varepsilon\nabla u\cdot\nabla v\,dx=\int_{\Gamma}\psi v\,dS;\qquad\text{ii)}\ \int_{\Omega}\varepsilon\nabla u^{\eta}\cdot\nabla v\,dx=\int_{\Gamma}\eta v\,dS;$$

$$\text{iii)}\ \int_{\Omega}\varepsilon_n\nabla u_n\cdot\nabla v\,dx=\int_{\Gamma}\psi v\,dS;\qquad\text{iv)}\ \int_{\Omega}\varepsilon_n\nabla u_n^{\eta}\cdot\nabla v\,dx=\int_{\Gamma}\eta v\,dS;$$

for all $v\in V$.

From i) and iii), repeating steps similar to those in the proof of Theorem 1 after (20), it follows that $z=(u-u_n)\in V$ is the unique solution of the variational problem:

$$\int_{\Omega}\varepsilon_n\nabla z\cdot\nabla v\,dx=\int_{\Omega}(\varepsilon_n-\varepsilon)\nabla u\cdot\nabla v\,dx,\quad\forall v\in V,$$

with the same type of linear functional $G_n$ as in (22). With the same arguments given from (23) onwards, it can be shown that:

$$\nabla u_n\to\nabla u\ \text{in }L^2(\Omega),\quad n\to\infty,\qquad(55)$$

$$\nabla u_n^{\eta}\to\nabla u^{\eta}\ \text{in }L^2(\Omega),\quad n\to\infty.\qquad(56)$$

Since $DF(\varepsilon)^{*}\eta=-\nabla u\cdot\nabla u^{\eta}$ (see [9]), we can estimate:

$$\left\|DF(\varepsilon_n)^{*}\eta-DF(\varepsilon)^{*}\eta\right\|\le\left\|\nabla u_n\cdot(\nabla u_n^{\eta}-\nabla u^{\eta})\right\|_{L^2(\Omega)}+\left\|\nabla u^{\eta}\cdot(\nabla u_n-\nabla u)\right\|_{L^2(\Omega)},$$

and by Lemma 5.1 in [9], if $\varepsilon_n,\varepsilon\in C^{k,1}(\bar\Omega)$ for some $k\in\mathbb{Z}$, $k>0$, then $\nabla u_n,\nabla u^{\eta}\in L^{\infty}(\Omega)$, with bounds uniform in $n$, and the following inequalities hold:

$$\left\|\nabla u_n\cdot(\nabla u_n^{\eta}-\nabla u^{\eta})\right\|_{L^2(\Omega)}\le\left\|\nabla u_n\right\|_{L^{\infty}(\Omega)}\left\|\nabla u_n^{\eta}-\nabla u^{\eta}\right\|_{L^2(\Omega)},\qquad\left\|\nabla u^{\eta}\cdot(\nabla u_n-\nabla u)\right\|_{L^2(\Omega)}\le\left\|\nabla u^{\eta}\right\|_{L^{\infty}(\Omega)}\left\|\nabla u_n-\nabla u\right\|_{L^2(\Omega)}.\qquad(57)$$

Together with (55) and (56), this gives $DF(\varepsilon_n)^{*}\eta\to DF(\varepsilon)^{*}\eta$ in $L^2(\Omega)$.

Moreover, Theorem 2.8 in [22] gives the adequate convergence result for our inverse problem:

Theorem 5 ([22]): Consider the inverse problem (12) and the corresponding perturbed problem $P^{\alpha,\delta}$ in (14). If $\varphi^{\delta_k}$ denotes perturbed data such that $\|\varphi^{\delta_k}-\varphi\|_{L^2(\Gamma)}\le\delta_k$, with $\delta_k\to 0$ for $k\to\infty$, and if $\varepsilon^{\alpha_k,\delta_k}$ is a solution of problem $P^{\alpha_k,\delta_k}$ with $\alpha_k$ selected by Morozov's principle (52), then $\{\varepsilon^{\alpha_k,\delta_k}\}$ has a convergent subsequence. The limit of any convergent subsequence of $\{\varepsilon^{\alpha_k,\delta_k}\}$ is a minimum norm solution of problem (12); furthermore, if $\varepsilon^{\dagger}$ is unique, then $\varepsilon^{\alpha_k,\delta_k}\to\varepsilon^{\dagger}$ for $k\to\infty$.

Theorem 5 ensures that we can approximate the solution of the inverse problem with the parameter $\alpha$ selected by Morozov's principle. Nevertheless, the convergence can be arbitrarily slow, and a result with an estimate of the convergence order is needed. Theorem 2.9 in [22] guarantees convergence of order $\sqrt{\delta}$ under regularity assumptions for $\varepsilon^{\dagger}$, but with a Lipschitz continuity condition for the derivative operator $F'$.

Unfortunately, this last condition does not hold for the operator $F$ in (10) (see [9]). Nevertheless, when analyzing the proof of Theorem 2.9, it can be seen that the Lipschitz continuity is used only to establish the uniform boundedness of the derivative $F'(\varepsilon^{\alpha_k,\delta})$. This property was proven in ([9], Lemma 4.2) whenever $\varepsilon^{\alpha_k,\delta}\in E_{ad}$ and under the assumption (54).

In order to obtain the estimate, we consider a more general condition for a Fréchet differentiable operator $F:X\to Y$, defined on Hilbert spaces $X,Y$, and satisfying:

$$\exists L>0:\quad\left\|F(x+\Delta x)-F(x)-F'(x)\Delta x\right\|\le L\left\|\Delta x\right\|^{2},\quad\text{for }x,\ x+\Delta x\in M.\qquad(58)$$

Theorem 2.9 in [22] can then be reformulated as follows:

Theorem 6: Let $X,Y$ be Hilbert spaces and $F:D\subset X\to Y$ a strongly continuous and Fréchet differentiable operator with $D=\operatorname{dom}(F)$ a convex set. Let $x^{\dagger}$ be a minimum norm solution of $F(x)=y$, and $y^{\delta}\in Y$ satisfying $\|y-y^{\delta}\|\le\delta$. Define the Tikhonov functional as:

$$F_{\alpha}(x)=\left\|F(x)-y^{\delta}\right\|^{2}+\alpha\left\|x-x_0\right\|^{2},$$

and suppose that:

i) $x_0\in X$ is such that:

$$\left\|y^{\delta}-F(x_0)\right\|>c_1\delta;\qquad(59)$$

ii) $F$ satisfies condition (58);

iii) $x^{\dagger}$ satisfies the regularity conditions [10]:

$$\exists\,w\in Y\ \text{such that}\ (x^{\dagger}-x_0)=F'(x^{\dagger})^{*}w,\qquad(60)$$

$$\text{with}\quad L\left\|w\right\|<\frac{1}{2}.\qquad(61)$$

Then, if the regularization parameter is selected by Morozov's criterion:

$$\left\|y^{\delta}-F(x^{\alpha,\delta})\right\|\le c_1\delta,\qquad(62)$$

the following estimate is fulfilled:

$$\left\|x^{\alpha,\delta}-x^{\dagger}\right\|\le\left(\frac{2(1+c_1)\left\|w\right\|}{1-2L\left\|w\right\|}\right)^{1/2}\sqrt{\delta}.\qquad(63)$$

Proof: The proof is based on the one given in Theorem 10.4 of [10], combined with that of Theorem 2.9 of [22]. In fact, condition (58) gives:

$$F(x^{\alpha,\delta})=F(x^{\dagger})+F'(x^{\dagger})(x^{\alpha,\delta}-x^{\dagger})+r^{\alpha,\delta},$$

where $\|r^{\alpha,\delta}\|\le L\|x^{\alpha,\delta}-x^{\dagger}\|^{2}$. Now, following (10.10) in [10] and using (60) and (61), we obtain:

$$\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|^{2}+\alpha\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2}\le\delta^{2}+2\alpha\delta\left\|w\right\|+2\alpha\left\|w\right\|\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|+2\alpha L\left\|w\right\|\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2},$$

then:

$$\alpha\left(1-2L\left\|w\right\|\right)\left\|x^{\alpha,\delta}-x^{\dagger}\right\|^{2}\le\delta^{2}-\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|^{2}+2\alpha\left\|w\right\|\left(\delta+\left\|F(x^{\alpha,\delta})-y^{\delta}\right\|\right),$$

and, using (62) together with $\delta\le\|F(x^{\alpha,\delta})-y^{\delta}\|$, guaranteed by the discrepancy choice, we obtain (63).

Remark: We already know that our operator $F$ is strongly continuous and Fréchet differentiable. Assumption (59) is natural since, otherwise, the vector $x_0$ can already be considered an approximation of the solution $x^{\dagger}$.

Remark: Dobson ([9], Lemma 4.2) shows that condition (58) holds for the operator $F$ of our problem, provided $\varepsilon^{\dagger}\in C^{k,1}$ for some $k\in\mathbb{Z}$, $k>0$, i.e.

$$\left\|F(\varepsilon^{\dagger}+\Delta\varepsilon)-F(\varepsilon^{\dagger})-F'(\varepsilon^{\dagger})\Delta\varepsilon\right\|_{L^2(\Gamma)}\le C\left\|\Delta\varepsilon\right\|_{L^2(\Omega)}^{2}.$$

Therefore, Theorem 6 guarantees an order of convergence $\sqrt{\delta}$ to the solution of the inverse problem (12), under the above mentioned regularity assumptions for $x^{\dagger}$.

5.0.1 Algorithm

Our first algorithm for problem $P^{\alpha,\delta}$ (see [7]) is a globally convergent multidirectional descent algorithm, which obtains at each iteration one approximate solution of the continuous problem, computed by solving the discretized problem $P_h^{\alpha_h,\delta}$ until Wolfe's global convergence conditions are fulfilled for the continuous problem.

The algorithm to determine the parameter $\alpha$ satisfying Morozov's principle is developed by minimizing $J^{\alpha,\delta}$ for a set of regularizing parameters until the desired inequality holds. Numbers $c_1>1$, $\alpha_0>0$, $0<q<1$, $\alpha_j=q^{j}\alpha_0$ are chosen, and solutions $\varepsilon^{\alpha_j,\delta}$ are computed until:

$$\left\|\varphi^{\delta}-F(\varepsilon^{\alpha_j,\delta})\right\|\le c_1\delta.\qquad(64)$$

Both algorithms are combined: the approximate solution $\varepsilon_h^{\alpha_j,\delta}\approx\varepsilon^{\alpha_j,\delta}$ of the continuous problem, computed by minimizing $J_h^{\alpha_j,\delta}$, is used to check the Morozov inequality (64).

ALGORITHM 2:

Choose $c_1>1$, $0<q_0<1$, $\alpha_u>0$; $j=0$.

0) Set $\alpha_j=q_j\,\alpha_u$.

1) Compute $\varepsilon_h^{\alpha_j,\delta}$ using the algorithm in [7]:

1.1) Set $h_0=h_{\mathrm{inic}}$, $\varepsilon^{0}(\cdot)=\varepsilon_{h_0}(\cdot)\in L^{\infty}(\Omega)$, $l=0$, $k=0$.

1.2) If $\|\nabla J^{\alpha_j,\delta}(\varepsilon^{l}(\cdot))\|_{L^1(\Omega)}\le\tau_0$ (a given stopping tolerance), stop: the function $\varepsilon^{l}(\cdot)=\varepsilon_h^{\alpha_j,\delta}$ is an approximation of a local minimum $\varepsilon^{\alpha_j,\delta}$ of the continuous problem $P^{\alpha_j,\delta}$; go to step 2). Otherwise, go to step 1.3.

1.3) Set $k\to k+1$, $\tau_k=\tau_{k-1}/2$, $h_k=h_{k-1}/2$.

1.4) Define $\varepsilon_{h_k}=\varepsilon^{l}$ on the new triangulation $\tau_{h_k}$.

1.5) Verify whether

$$\left\|\nabla J^{\alpha_j,\delta}(\varepsilon_{h_k}(\cdot))-\nabla J_{h_k}^{\alpha_j,\delta}(\varepsilon_{h_k})(\cdot)\right\|_{L^1(\Omega)}\le\tau_k\left\|\nabla J^{\alpha_j,\delta}(\varepsilon_{h_k}(\cdot))\right\|_{L^1(\Omega)}.$$

If it does not hold, take $h_k=h_k/2$ and go to step 1.4.

1.6) Choose a step $\Delta\varepsilon_{h_k}\in\mathbb{R}^{NT_{h_k}}$ from $\varepsilon_{h_k}$ and $\lambda_k>0$ satisfying the Wolfe conditions for the discrete problem (27).

1.7) Define $\hat\varepsilon_{h_k}=\varepsilon_{h_k}+\lambda_k\Delta\varepsilon_{h_k}$ and verify whether its canonical extension $\hat\varepsilon_{h_k}(\cdot)$ satisfies the Wolfe conditions for the continuous problem $P^{\alpha_j,\delta}$.

If they hold, or if $\nabla J_{h_k}^{\alpha_h,\delta}(\hat\varepsilon_{h_k})=0$, take $\varepsilon^{l+1}(\cdot)=\hat\varepsilon_{h_k}(\cdot)$, $l=l+1$; go to step 1.2.

If they do not hold and $\nabla J_{h_k}^{\alpha_h,\delta}(\hat\varepsilon_{h_k})\ne 0$, take $\varepsilon_{h_k}=\hat\varepsilon_{h_k}$; go to step 1.6.

2) If $\|\varphi^{\delta}-F(\varepsilon_h^{\alpha_j,\delta})\|\le c_1\delta$,

then $\varepsilon^{\alpha_j,\delta}=\varepsilon_h^{\alpha_j,\delta}$ is an approximation of the continuous problem which satisfies the Morozov inequality (64) for the accepted regularizing parameter $\alpha_j$, and the algorithm stops;

else if $\|\varphi^{\delta}-F(\varepsilon_h^{\alpha_j,\delta})\|>c_1\delta$, then $q_{j+1}=q_0\,q_j$;

else $q_{j+1}=q_j+(1-q_j)/2$, $\alpha_j=\alpha_u$.

3) $\alpha_{j+1}=q_{j+1}\,\alpha_j$, $j=j+1$; go to step 1).
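The outer Morozov loop of ALGORITHM 2 can be sketched on a toy linear problem, where step 1) collapses to a direct Tikhonov solve (an assumption; the adaptive update of $q_j$ in step 2 is also omitted, keeping only the geometric reduction $\alpha_{j+1}=q\,\alpha_j$):

```python
import numpy as np

# Skeleton of the Morozov parameter search: shrink alpha geometrically until the
# discrepancy ||phi_delta - F(eps)|| <= c1*delta, as in (64). The matrix A is an
# assumed invertible stand-in for the forward operator.

rng = np.random.default_rng(2)
n, delta, c1, q = 8, 1e-2, 1.5, 0.5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -2, n)) @ U.T
eps_exact = rng.standard_normal(n)
noise = rng.standard_normal(n)
noise /= np.linalg.norm(noise)
phi_delta = A @ eps_exact + delta * noise      # data with noise level delta

alpha = 1.0
while True:
    # inner step: minimizer of 0.5*||A e - phi_delta||^2 + 0.5*alpha*||e||^2
    eps = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ phi_delta)
    discrepancy = np.linalg.norm(A @ eps - phi_delta)
    if discrepancy <= c1 * delta:              # Morozov inequality (64)
        break
    alpha *= q                                 # alpha_{j+1} = q * alpha_j

print(alpha, discrepancy <= c1 * delta)
```

The loop terminates because, as $\alpha\to 0$, the Tikhonov solution approaches the unregularized solution and the discrepancy tends to zero; the accepted $\alpha$ is the first one satisfying the discrepancy inequality.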

6 Conclusions

The algorithm given in [7] can be applied to the regularization of the continuous problem (14), provided the regularization parameters $\alpha,\alpha_h$ can be chosen. Moreover, if the a priori regularization parameters $\alpha,\alpha_h$ of both problems are adequately chosen, then the regularized discrete solutions $\varepsilon_h^{\alpha_h,\delta}$ converge to the solution $\varepsilon^{\alpha,\delta}$ of the regularized continuous problem. Theorem 3 shows that the algorithm in [7] can be seen as a regularization method by itself. On the other hand, the application of Morozov's principle gives a more practical approach, and the new combined Algorithm 2 can be used.

In future work, we will analyze the applicability to our problem, and the implementation, of more general Tikhonov regularization schemes, such as those studied in [2].

References

[1] Adler, A., R. Gaburro, W. Lionheart(2011). Electrical Impedance Tomog-

raphy. Chapter 14 in Handbook of mathematical methods in imaging, Ed.

Otmar Scherzer, Springer Science Business Media.

[2] Anzengruber, S.W., R. Ramlau(2010). Morozovs discrepancy principle

for Tikhonov-type functionals with non-linear operators,Inverse Prob-

lems,26,025001.

[3] Bakushinsky, A.B., M. Kokurin(2005). Iterative Methods for Approximate

solution of inverse problems. Springer Verlag.

[4] Borcea, L. (2001). A nonlinear multigrid for imaging electrical conductivity

and permitivity at low frequency, Inverse Problems ,17:329-359.

[5] Borcea, L. (2002). Electrical impedance tomography,Topical Review. In-

verse Problems 18:R99-R136.

[6] Butler, J.E., R.T. Bonnecaze(2000). Inverse method for imaging a free sur-

face using electrical impedance tomography. Chemical engineering science,

55: 1193-1204.

20

[7] Carrillo, M., J.A. Gmez (2015). A globally convergent algorithm for a

PDE constrained optimization problem arising in electrical impedance to-

mography. Num. Funct. Anal. Optim. 36: 748776.

[8] Dalmasso, R. (2004). An inverse problem for an elliptic equation, Pub.

RIMS, Kyoto Univ.,40:91-123.

inverse conductivity problem, SIAM J. Appl. Math,(52),No. 2, 442-458.

[10] Engl, H., M. Hanke, A. Neubauer(2000). Regularization of inverse prob-

lems, Kluwer Academic Publishers,Dordrecht,Boston,London.

algorithm for unconstrained optimal control problems. Numer. Funct. Anal.

Optim., 19:9-10.

[12] Gmez S., M. Ono, C. Gamio, A. Fraguela (2003). Reconstruction of ca-

pacitance tomography images of simulated two-phase ow regimes, Applied

Numerical Mathematics 46:197-208.

[13] Herzog R., K. Kunisch (2010). Algorithms for PDE-constrained

optimization,Gamm-Mitteilungen, (33):163-176.

[14] Holder, D., Editor(2005). Electrical Impedance tomography. Institute of

Physics. Series in Medical Physics and Biomedical Engineering. United

Kingdom.

[15] Jin B., Khan T., P. Maass (2012). A reconstruction algorithm for electri-

cal impedance tomography based on sparsity regularization, Int. J. Nu-

mer.Meth.Engng, (89):337-353.

[16] Kaltenbacher, B., A. Kirchner, B. Vexler(2011). Adaptive discretizations

for the choice of a Tikhonov regularization parameter in nonlinear inverse

problems. Inverse Problems, 27:12, 125008.

[17] Kaltenbacher, B., A. Neubauer(2006). Convergence of projected iterative

regularization methods for nonlinear problems with smooth solutions. In-

verse Problems, 22:11051119.

[18] Kirchner, A.R.(2014). Adaptive regularization and discretization for non-

linear inverse problems with PDEs,Thesis Dr. rer. nat.,Technische Univer-

sitt Mnchen, Germany.

mography: convergence by local injectivity. Inverse Problems, 24:6, 065009.

[20] Miyazaki, Y.(2008). New proofs of the trace theorem of Sobolev spaces.

Proc. Japan Acad., 84, Serie A, 112-116.

21

[21] Nocedal, J., S.J.Wright (1999). Numerical Optimization. Springer Series

in Operations Research. Springer Verlag.

[22] Ramlau, R. (2001) Morozovs discrepancy principle for Tikhonov regular-

ization of nonlinear operators. Zentrum fr Technomathematik, Report 01-

08. University of Bremen.

ization of nonlinear operators. Num. Funct. Anal. Optim. 2 3: 147172.

[24] Raviart P.A., J.M. Thomas (1998). Introduction Lanalyse numrique

des quations aux drivees partielles. Mathemtiques Apliques pour la

Maitrise. Dunod.

[25] Santucho E.M.A., A. Orlando, M. Luege (2013). Identificación de cavidades mediante la tomografía de impedancia eléctrica. Mecánica Computacional, Vol. XXXII, 1737-1749.

[26] Seidman T.I., C.R. Vogel (1989). Well-posedness and convergence of some regularization methods for nonlinear ill-posed problems. Inverse Problems, 5:227-238.

[27] Uhlmann, G. (2009). Electrical impedance tomography and Calderón's problem. Inverse Problems, 25:123011.

