
CHEN 8201 Linear Analysis - Lecture Notes

Part III - Solving $L u = 0$ and $\frac{\partial u}{\partial t} = L u$

Draft Copy 25 November 2014

1 Linear Function Spaces and Operators

1.1 Overview

We now change course and consider systems which can change in space and time or multiple space
dimensions, such as the transient PFR, heat and mass transfer, and distributions of things such
as polymer chain length. As an example, we look at an arbitrary differential volume element and
consider diffusion, convection, and reaction in space, as well as the energy balance.


$$\frac{\partial C}{\partial t} = -\left(\frac{\partial (u_x C)}{\partial x} + \frac{\partial (u_y C)}{\partial y} + \frac{\partial (u_z C)}{\partial z}\right) + D\left(\frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 C}{\partial y^2} + \frac{\partial^2 C}{\partial z^2}\right) + r(C) \tag{1.1}$$

$$\frac{\partial T}{\partial t} = -\left(\frac{\partial (u_x T)}{\partial x} + \frac{\partial (u_y T)}{\partial y} + \frac{\partial (u_z T)}{\partial z}\right) + \frac{k}{\rho c_p}\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2}\right) + f(T) \tag{1.2}$$
We recall that many special cases exist here that make the problem easier to solve: steady-state 1-d, which gives us an ODE boundary value problem; steady-state 2-d with no convection and no reaction, which gives us a PDE boundary value problem; and transient 1-d with no convection, which gives us a PDE initial-boundary value problem. The remainder of the class will discuss how to attack these different scenarios. When these problems are linear, the common form of the problem is $\frac{\partial u}{\partial t} = L u - f$ for a transient problem, or $L u = f$ for a steady problem. In this case, $L$ is a differential operator. Also, although $u$ is written like a variable, it is actually a function of $x$ that satisfies the equation.

1.2 Linear Function Spaces

We begin by defining a collection of functions with properties analogous to those of a linear inner product vector space. We then consider operators on this function space with properties analogous to a perfect matrix. We can characterize functions by their differentiability and integrability, as follows:

$C(a, b)$ functions: Functions defined on the interval $(a, b)$ and continuous over this interval.

$C^1(a, b)$ functions: Functions that are continuous and continuously differentiable over $(a, b)$.

$C^n(a, b)$ functions: Functions that are continuous and $n$-times continuously differentiable over $(a, b)$.

$R(a, b)$ functions: Functions whose Riemann integral from $a$ to $b$ exists. The function must be either continuous or piecewise continuous.

$L_1(a, b)$ functions: Functions whose Lebesgue integral from $a$ to $b$ exists, allowing the function to have a countably infinite set of discontinuities.

$L_2(a, b)$ functions: Functions which are square integrable over $a$ to $b$ in the Lebesgue sense.
We can define additivity and scalar multiplication over the function space. Thus, (F + G)(t) =
F (t) + G(t) and (cF )(t) = c(F (t)). The notions of length and orientation are more complex, but
we can still define an inner product as follows:
$$\langle f(t), g(t) \rangle = \int_a^b f^*(t)\, g(t)\, dt \tag{1.3}$$

This definition allows us to also consider the norm of a function by:


$$\|f(t)\|^2 = \langle f(t), f(t) \rangle \tag{1.4}$$

Example 1.1: Find the norm of $f(x) = \sin x$ over the interval $(0, 2\pi)$.
We use equations (1.3) and (1.4) to solve:

$$\|f(x)\|^2 = \int_0^{2\pi} \sin^2 x\, dx = \pi, \qquad \|f(x)\| = \sqrt{\pi} \tag{1.5}$$

Example 1.2: Show that $f(x) = \sin x$ and $g(x) = \cos x$ are orthogonal over the interval $(0, 2\pi)$.
We recall that two vectors are orthogonal if their inner product is zero. We get the same notion for functions. Using equation (1.3):

$$\langle f(x), g(x) \rangle = \int_0^{2\pi} \sin x \cos x\, dx = 0 \tag{1.6}$$

Thus, the functions are orthogonal.
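As a quick numerical sanity check of Examples 1.1 and 1.2 (a sketch added here, not part of the original notes), we can evaluate the same integrals with scipy quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Example 1.1: ||sin x||^2 = int_0^{2pi} sin^2 x dx = pi
norm_sq, _ = quad(lambda x: np.sin(x) ** 2, 0, 2 * np.pi)
print(norm_sq, np.sqrt(norm_sq))   # ~3.14159 and ~1.77245 = sqrt(pi)

# Example 1.2: <sin, cos> = int_0^{2pi} sin x cos x dx = 0
inner, _ = quad(lambda x: np.sin(x) * np.cos(x), 0, 2 * np.pi)
print(inner)                       # ~0, to quadrature tolerance
```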


We see that, for our norm integral to exist, $L_2$ is the naturally occurring function space for us.
Example 1.3: Show that the functions $f(t) = 1$ and

$$g(t) = \begin{cases} 1, & t \text{ irrational} \\ 0, & t \text{ rational} \end{cases}$$

are equal.
We can say two functions are equal if $\|f - g\| = 0$. Thus, we are examining the integral:

$$\int (f(t) - g(t))^2\, dt \tag{1.7}$$

We see that the function $(f(t) - g(t))^2$ is zero except on a countably infinite set of points. Thus, we can ignore these discontinuities when we integrate in the Lebesgue sense and conclude that the functions are equal.

1.3 Hilbert Spaces

A difference between function spaces and vector spaces is that, unlike in finite-dimensional vector spaces, we need to consider the convergence of sequences and the completeness of the space. For vectors in $E^n$, if a sequence $x_1, \ldots, x_m$ converges in a Cauchy sense ($\lim_{m,n \to \infty} \|x_m - x_n\| = 0$), then the sequence converges to a vector in $E^n$. However, this is not always true in the case of function spaces. Consider the sequence:

$$f_n(t) = \begin{cases} 0, & 0 \le t < 1/2 - 1/2n \\ n(t - 1/2) + 1/2, & 1/2 - 1/2n \le t \le 1/2 + 1/2n \\ 1, & 1/2 + 1/2n < t \le 1 \end{cases} \tag{1.8}$$

We see in the following figure that each of these functions is continuous on the interval $(0, 1)$. However, this is not the case in the infinite limit, as the limit function jumps from 0 to 1 at $t = 1/2$. We can also show that the sequence converges in a Cauchy sense.
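A short numerical sketch (my addition) makes the Cauchy claim concrete: the $L_2$ distance $\|f_n - f_{2n}\|$ computed by quadrature shrinks as $n$ grows, even though the pointwise limit is discontinuous.

```python
import numpy as np
from scipy.integrate import quad

def f(n, t):
    # the ramp sequence of equation (1.8)
    if t < 0.5 - 1 / (2 * n):
        return 0.0
    if t <= 0.5 + 1 / (2 * n):
        return n * (t - 0.5) + 0.5
    return 1.0

def dist(n, m):
    # L2 distance ||f_n - f_m|| over (0, 1)
    val, _ = quad(lambda t: (f(n, t) - f(m, t)) ** 2, 0, 1, limit=200)
    return np.sqrt(val)

for n in (4, 16, 64, 256):
    print(n, dist(n, 2 * n))   # tends to 0: the sequence is Cauchy
```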

Figure 1.1. The sequence of functions defined in equation (1.8).


Thus, for completeness of the set, we need to add to $C(0, 1)$ all limit functions of its Cauchy sequences. When we do this, the resulting space is $L_2(0, 1)$, which is complete. When this is the case, we can call our space a Hilbert space. A Hilbert space most closely resembles $E^n$, since both are complete linear inner product spaces.

1.4 Basis Functions

Completeness of the inner product space guarantees that we will be able to find a basis set of functions. We define a basis set as a set of linearly independent functions such that any function in $H$ can be written as a linear combination of the basis functions. Some examples of this are $f_i(t) = t^i$ and $f_i(t) = \sin(it)$. The first set of functions is not orthogonal but the second set is; both sets are linearly independent. To determine whether a set of functions $f_1, \ldots, f_n$ is linearly independent, we can use Gram-Schmidt: if the functions are linearly independent, the Gram-Schmidt procedure will give $n$ orthogonal functions, but if not, we will generate some zero functions. In contrast to $E^n$, not all linearly independent sets of functions are basis sets in the infinite-dimensional case. Some common basis sets are the Fourier sets, Legendre polynomials, and Laguerre polynomials. We are interested in basis sets that are orthonormal, such that if $f = \sum_n \alpha_n f_n$, we can find $\alpha_n = \langle f_n, f \rangle$. Let's consider the set of functions $\phi_0 = (2\pi)^{-1/2}$, $\phi_n = \pi^{-1/2} \cos(nx)$, and $\psi_n = \pi^{-1/2} \sin(nx)$ on $(-\pi, \pi)$. This is an orthonormal set of functions, which we confirm by taking inner products:
$$\langle \phi_0, \phi_0 \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} dx = 1 \tag{1.9}$$
$$\langle \phi_n, \phi_m \rangle = \frac{1}{\pi} \int_{-\pi}^{\pi} \cos(nx)\cos(mx)\, dx = \delta_{mn} \tag{1.10}$$
$$\langle \psi_n, \psi_m \rangle = \frac{1}{\pi} \int_{-\pi}^{\pi} \sin(nx)\sin(mx)\, dx = \delta_{mn} \tag{1.11}$$
$$\langle \psi_n, \phi_m \rangle = \frac{1}{\pi} \int_{-\pi}^{\pi} \sin(nx)\cos(mx)\, dx = 0 \tag{1.12}$$
This confirms that our basis set is orthonormal. We now consider the set of polynomials $f_n = t^n$ on $(-1, 1)$. We note that this set is non-orthogonal, as, for example, $\langle f_0, f_2 \rangle = 2/3$. However, using Gram-Schmidt, we can get a new set of functions $g_n = (1, t, t^2 - 1/3, \ldots)$. These are the Legendre polynomials.
Another set of orthonormal functions, defined on the domain $(0, \infty)$, are the Laguerre functions, defined by:

$$f_n(x) = \exp(-x/2)\, L_n(x) \tag{1.13}$$
$$L_n(x) = \frac{\exp x}{n!} \frac{d^n}{dx^n}\left(x^n \exp(-x)\right) \tag{1.14}$$
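As a sketch of the Gram-Schmidt claim above (my addition, using simple trapezoid quadrature on $(-1, 1)$): orthogonalizing the monomials $1, t, t^2$ reproduces the pattern $g_n = (1, t, t^2 - 1/3, \ldots)$.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 10001)
w = np.full_like(t, t[1] - t[0])   # trapezoid quadrature weights on (-1, 1)
w[0] /= 2
w[-1] /= 2

def inner(f, g):
    return np.sum(w * f * g)       # <f, g> = int_{-1}^{1} f g dt

basis = []
for k in range(3):                 # monomials 1, t, t^2
    f = t ** k
    for g in basis:                # subtract projections onto earlier functions
        f = f - inner(g, f) / inner(g, g) * g
    basis.append(f)

# g_2 should match t^2 - 1/3, the unnormalized Legendre polynomial
print(np.max(np.abs(basis[2] - (t ** 2 - 1.0 / 3.0))))   # ~0
```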

1.5 Operators in Hilbert Space

We define $L$ as a linear operator in $H$ if for $x, y \in H$ we have $Lx, Ly \in H$ and $L(ax + by) = aLx + bLy \in H$. We note that in infinite-dimensional space this is not guaranteed to hold for all $x$ even if it holds for some $x$. This makes it necessary to define a domain $D$ of the operator, where $D : \{x \in H \mid Lx \in H\}$. We consider the following example:
Example 1.4: Consider the following Hilbert space:

$$E^\infty : \langle x, y \rangle = x^* \cdot y, \quad \|x\| < \infty$$

We then consider the operator:

$$A = [a_{ij}] = \sqrt{i}\, \delta_{ij}$$

We consider the following vectors to see if they are in the domain of $A$. We let $x_i = 1/i$ and $z_i = 1/i^2$. We consider $(Ax)_i = 1/i^{1/2}$ and $(Az)_i = 1/i^{3/2}$. We see that $\|Ax\| = \infty$, but $\|Az\| < \infty$. It is evident that $Az$ is in $H$ as we defined it, but $Ax$ is not. Thus, $z$ is in the domain of $A$ but $x$ is not.
Some important operators in Hilbert spaces are integral and differential operators. The integral operator is typically defined as follows:

$$K : K\, v = \int_a^b k(t, s)\, v(s)\, ds$$

We call $k(t, s)$ the kernel of the operator, and it is analogous to a matrix. If $K$ is an integral operator in $L_2(a, b)$ with $a, b$ finite and the kernel is continuous, then $f(t) = K\, v$ is also continuous and in $L_2$. Thus, the domain of the operator is simply our Hilbert space, $H$.
Some integral operators include the Laplace transform and the Fourier transform. The simplest integral operator has the form $k(t, s) = g(t)\, h^*(s)$. This operator form is called a dyad, and when plugged into the operator definition, we get:

$$\int_a^b g(t)\, h^*(s)\, v(s)\, ds = g(t) \langle h, v \rangle \tag{1.15}$$

In general, a dyadic operator has the form $K = \sum_i g_i h_i^*$.
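A brief sketch (my addition, with hypothetical choices of $g$, $h$, and $v$) of how a dyad collapses the kernel integral into a single inner product, per equation (1.15):

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
g = lambda t: np.exp(t)            # hypothetical g(t)
h = lambda s: s                    # hypothetical h(s); real, so h* = h
v = lambda s: np.sin(np.pi * s)

# (K v)(t) = g(t) <h, v>, equation (1.15)
hv, _ = quad(lambda s: h(s) * v(s), a, b)

# compare with direct evaluation of int_a^b k(t0, s) v(s) ds at one point t0
t0 = 0.3
direct, _ = quad(lambda s: g(t0) * h(s) * v(s), a, b)
print(direct, g(t0) * hv)          # the two agree
```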

We now change our focus to differential operators. We define our Hilbert space as $L_2(0, 1)$. We define a differential operator as follows:

$$L : L\, u = \frac{d^2 u}{dt^2}, \quad u(0) = \gamma_1, \quad u(1) = \gamma_2$$

We note that for differential operators, the boundary values are a necessary component of the definition of the operator. The domain of the operator is all functions in $L_2(0, 1)$ whose second derivative exists and that satisfy the given boundary conditions.

1.6 Perfect Operators

Analogous to what we did with matrices, we can define the eigenproblem for operators in Hilbert
space:
$$L\, u = \lambda u \tag{1.16}$$

Consider the operator:

$$L : L\, u = -\frac{d^2 u}{dt^2}, \quad u(0) = u(1) = 0$$

Solving the differential equation:

$$-\frac{d^2 u}{dt^2} = \lambda u \tag{1.17}$$

which has solutions of the form $A \sin(\sqrt{\lambda}\, t) + B \cos(\sqrt{\lambda}\, t)$. When we apply the boundary conditions, we see that $B = 0$. After applying $u(1) = 0$, we get the condition that either $A = 0$, which gives us a trivial solution, or $\sin(\sqrt{\lambda}) = 0$. Thus, we can conclude that $\lambda = n^2 \pi^2$ and $u = \sin(n\pi t)$, $n = 1, 2, \ldots$. If we then let $A = \sqrt{2}$, our eigenfunctions are orthonormal. We conclude that the above operator is perfect, as its eigenfunctions form a complete basis set of orthonormal functions.
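We can verify the spectrum $\lambda = n^2\pi^2$ numerically; the sketch below (my addition) discretizes $-d^2/dt^2$ on $(0, 1)$ with zero boundary values by finite differences and compares the smallest eigenvalues to $n^2\pi^2$.

```python
import numpy as np

N = 500                            # interior grid points on (0, 1)
h = 1.0 / (N + 1)

# tridiagonal finite-difference matrix for -d^2/dt^2 with u(0) = u(1) = 0
A = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

evals = np.sort(np.linalg.eigvalsh(A))[:4]
exact = np.array([(n * np.pi) ** 2 for n in range(1, 5)])
print(evals)    # ~ 9.87, 39.48, 88.82, 157.9
print(exact)    # n^2 pi^2 for n = 1..4
```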

Example 1.5: Consider the following dyadic operator in $L_2(-1, 1)$:

$$K = p_0 p_0^* + p_1 p_1^*$$

where $p_0 = 1$ and $p_1 = t$, the zeroth and first order Legendre polynomials. We need to solve the eigenequation $K\, u = \lambda u$. Let's postulate that our eigenfunctions are also the Legendre polynomials:

$$v = p_0 = 1: \qquad \lambda = \int_{-1}^{1} dt + t \int_{-1}^{1} t\, dt \tag{1.18}$$
$$\lambda = 2 \tag{1.20}$$

$$v = p_1 = t: \qquad \lambda t = \int_{-1}^{1} t\, dt + t \int_{-1}^{1} t^2\, dt \tag{1.21}$$
$$\lambda = 2/3 \tag{1.23}$$

We see that higher order Legendre polynomials evaluate to zero in the eigenequation. Thus, we get that $v_i = p_i$ with $\lambda_0 = 2$, $\lambda_1 = 2/3$, and $\lambda_i = 0$, $i = 2, 3, \ldots$. We see that our eigenfunctions form a complete basis set, thus our operator is perfect.
Example 1.6: Now let's consider a slightly different integral operator:

$$K = p_1 p_0^* + p_1 p_1^*$$

If we again postulate that the Legendre polynomials are our eigenfunctions, we plug in $p_0$ and see:

$$\lambda = t \int_{-1}^{1} dt + t \int_{-1}^{1} t\, dt \tag{1.24}$$
$$= 2t \tag{1.25}$$

We see that our left-hand and right-hand sides don't match up, since $\lambda$ is a constant. Thus, $p_0$ is not an eigenfunction of the system, and the operator is not perfect.
Let's now consider an operator $L$ in $H$ with orthonormal eigenfunctions $(u_1, u_2, \ldots)$ that qualify as a complete basis set. Then, for all $f \in H$, we can expand the function as follows:

$$f = \sum_{n=1}^{\infty} a_n u_n, \qquad a_n = \langle u_n, f \rangle \tag{1.26}$$

Thus,

$$f = \left(\sum_{n=1}^{\infty} u_n u_n^*\right) f \tag{1.27}$$

From this equation, we can clearly see that the resolution of the identity operator is $\sum u_n u_n^*$, which is just a dyadic operator. Similarly, we can make an argument to find the resolution of any operator:

$$L f = L \left(\sum_{n=1}^{\infty} u_n u_n^* f\right) \tag{1.28}$$
$$= \sum_{n=1}^{\infty} L\, u_n u_n^* f \tag{1.29}$$
$$= \sum_{n=1}^{\infty} \lambda_n u_n u_n^* f \tag{1.30}$$
$$L = \sum_{n=1}^{\infty} \lambda_n u_n u_n^* \tag{1.31}$$

If the basis is not orthonormal, much like in the finite-dimensional case, we use $v_n$ to denote a reciprocal basis set where $\langle v_n, u_m \rangle = \delta_{m,n}$. In this case, our operator resolution is:

$$L = \sum_{n=1}^{\infty} \lambda_n u_n v_n^* \tag{1.32}$$

This is another profound result, analogous to what we found in finite-dimensional vector space,
that allows us to represent an arbitrary operator as a sum of dyadic integral operators via the
eigenvalues and eigenfunctions. In general, this resolution is easy to find for integral operators but
can be much tougher for differential operators.
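A finite-dimensional sketch of equation (1.31) (my addition): for a symmetric matrix, summing $\lambda_n u_n u_n^T$ over its eigenpairs reconstructs the operator exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
L = M + M.T                        # symmetric: the matrix analogue of self-adjoint

lam, U = np.linalg.eigh(L)         # orthonormal eigenvectors in the columns of U

# resolution L = sum_n lambda_n u_n u_n^T, equation (1.31)
L_resolved = sum(lam[n] * np.outer(U[:, n], U[:, n]) for n in range(5))
print(np.allclose(L, L_resolved))  # True
```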
We now define some special operators, some of which are analogous to the matrix case:

A bounded operator satisfies the condition $\|L\| = \sup_{x \neq 0} \frac{\|Lx\|}{\|x\|} < \infty$. We note that differential operators are typically not bounded, but integral operators have a better chance of being bounded.

The adjoint operator satisfies the condition $\langle v, L\, u \rangle = \langle L^* v, u \rangle$.

An operator is self-adjoint if $L^* = L$.

An operator is normal if $L L^* = L^* L$.

An operator is compact if it can be approximated uniformly by a sequence of finite dyadic operators.

An operator is perfect if it is both normal and compact.

For compact integral operators, we can write a solvability result similar to the Fredholm alternative theorem: $L\, u = f$ has a solution if and only if $\langle v_i, f \rangle = 0$, where $v_i$ is any solution to the equation $L^* v = 0$.

2 Second Order Differential Operators

2.1 Overview

We now consider operators of specific interest in chemical engineering applications, the second order
differential operators. We consider the differential operator:
$$L\, u = a_2(x) \frac{d^2 u}{dx^2} + a_1(x) \frac{du}{dx} + a_0(x) u \tag{2.1}$$

defined over the interval $x \in (a, b)$. We recall that to define our differential operator, we also need boundary conditions:

$$B_1(u) = \beta_{11} u(a) + \beta_{12} u'(a) + \beta_{13} u(b) + \beta_{14} u'(b) = \gamma_1 \tag{2.2}$$
$$B_2(u) = \beta_{21} u(a) + \beta_{22} u'(a) + \beta_{23} u(b) + \beta_{24} u'(b) = \gamma_2 \tag{2.3}$$

We require that these conditions be linearly independent, i.e., that $\mathrm{rank}(\beta) = 2$. Typically, $B_1$ will specify conditions at just $a$ while $B_2$ will specify conditions at just $b$. We define the domain of $L$ as:

$$D_L = \{u \in H \mid L u \in H,\ B_i(u) = \gamma_i,\ i = 1, 2\} \tag{2.4}$$
(2.4)

2.2 Self-Adjoint Operators

We will find it useful to define the adjoint operator, which means that we need to find $L^*$, $B_i^*$, and $D_{L^*}$. Strictly speaking, we cannot use our previously found result for bounded operators, but we assume that it doesn't matter and use the result anyway. Thus, to find the adjoint operator, we solve the equation $\langle v, L\, u \rangle = \langle L^* v, u \rangle$. A linear operator is symmetric if the domain of $L$ is contained in the domain of $L^*$, $D_L \subseteq D_{L^*}$, and $L = L^*$; we also call this formally self-adjoint. An operator is self-adjoint if additionally the domains are equal, $D_L = D_{L^*}$.

We now define a formal approach towards finding the adjoint operator. We define $L^*$ such that $\langle v, L\, u \rangle - \langle L^* v, u \rangle$ depends only on the boundary terms, and then choose the adjoint boundary conditions such that the difference is made zero. Consider the following function:

$$\phi(x) = \int_a^x v^*(y)\, L\, u(y)\, dy \tag{2.5}$$

which has a similar structure to $\langle v, L\, u \rangle$. Plugging in our definition of $L\, u$, we get:

$$\phi(x) = \int_a^x v^*(y) a_0(y) u(y)\, dy + \int_a^x v^*(y) a_1(y)\, du + \int_a^x v^*(y) a_2(y)\, d\!\left(\frac{du}{dy}\right) \tag{2.6}$$

We recall the formula for integration by parts, $\int v\, du = uv - \int u\, dv$, and use this to work on the second and third terms:

$$\int_a^x v^*(y) a_1(y)\, du = v^* a_1 u \Big|_a^x - \int_a^x u(y) \frac{d(v^* a_1)}{dy}\, dy \tag{2.7}$$

And, for the third term, we integrate by parts twice to obtain:

$$\int_a^x v^*(y) a_2(y)\, d\!\left(\frac{du}{dy}\right) = v^* a_2 \frac{du}{dy} \Big|_a^x - u \frac{d(v^* a_2)}{dy} \Big|_a^x + \int_a^x u(y) \frac{d^2(v^* a_2)}{dy^2}\, dy \tag{2.8}$$

Let's now postulate a form of the adjoint operator:

$$L^* v = a_0(x) v(x) - \frac{d(a_1 v)}{dx} + \frac{d^2(a_2 v)}{dx^2} \tag{2.9}$$

We now further postulate another pseudo-inner-product function:

$$\phi^*(x) = \int_a^x (L^* v)^*(y)\, u(y)\, dy \tag{2.10}$$
$$= \int_a^x a_0 v^* u\, dy - \int_a^x \frac{d(a_1 v^*)}{dy} u\, dy + \int_a^x \frac{d^2(a_2 v^*)}{dy^2} u\, dy \tag{2.11}$$

which is the same integral form as what we found from the original operator. Thus, when we take the difference of the inner products, we get:

$$\phi(b) - \phi^*(b) = v^* a_1 u \Big|_a^b + v^* a_2 \frac{du}{dy} \Big|_a^b - u \frac{d(v^* a_2)}{dy} \Big|_a^b \tag{2.12}$$

Now, we need to choose $B_i^*$ such that the above expression is zero. In general, this is not unique, but we can make it unique by choosing the largest subspace.
Example 2.1: Consider the second order differential operator:

$$L\, u = \frac{d^2 u}{dx^2}$$

We recall from our previous definition that in this case $a_2 = 1$, $a_1 = a_0 = 0$. Thus:

$$L^* v = \frac{d^2 v}{dx^2} \tag{2.13}$$

and our operator is formally self-adjoint. We now look at our conditions on the boundary functionals:

$$v^*(a) u'(a) - v^*(b) u'(b) + v^{*\prime}(b) u(b) - v^{*\prime}(a) u(a) = 0 \tag{2.14}$$

We consider multiple cases of potential boundary conditions. For $u(0) = u(1) = 0$, our condition becomes:

$$v^*(0) u'(0) - v^*(1) u'(1) = 0 \tag{2.15}$$

We have not constrained the derivatives of $u$, which implies that $v^*(0) = v^*(1) = 0$, and our operator is self-adjoint. For case 2, consider $u(0) = u(1)$ and $u'(1) = 0$. This gives the following condition on the boundary functionals:

$$u(0)\left(v^{*\prime}(1) - v^{*\prime}(0)\right) + v^*(0) u'(0) = 0 \tag{2.16}$$

Since the remaining $u$ terms are arbitrary, we require that $v^{*\prime}(1) = v^{*\prime}(0)$ and $v^*(0) = 0$. Thus, our boundary conditions are different and our operator is not self-adjoint. However, it is still formally self-adjoint since the expression of the operator is the same.

2.3 Properties of Self-Adjoint Operators

We have now learned how to find the adjoint of an operator and check for self-adjointness. However, it is not yet evident why this property is important. It turns out that self-adjoint operators have many attractive properties. These properties assume homogeneous boundary conditions, $\gamma_i = 0$.

There exists an inverse operator $G$ such that for a general inhomogeneous problem, $L u = f \Rightarrow u = G f$, where $G$ is a compact integral operator whose kernel is called a Green's function, and can be found.

The eigenvalues of $L$ are real, and its eigenfunctions form a complete orthonormal set. Thus, we can resolve a function of the operator as:

$$f(L) = \sum_i f(\lambda_i)\, \phi_i \phi_i^* \tag{2.17}$$

And we can also conclude, for an initial-boundary value problem $\frac{\partial u}{\partial t} = L u$, $u(t = 0) = f(x)$:

$$u = \exp(L t) f = \sum_i \exp(\lambda_i t) \langle \phi_i, f \rangle\, \phi_i \tag{2.18}$$

Thus, a self-adjoint operator is guaranteed to have a solution to both the steady and transient problem.
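Equation (2.18) is the spectral recipe for $\exp(Lt)$; a small matrix sketch (my addition) shows the same construction agreeing with a direct matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

L = np.array([[-2.0, 1.0],
              [1.0, -2.0]])        # symmetric, hence "self-adjoint"
f = np.array([1.0, 0.0])           # initial condition
t = 0.5

lam, U = np.linalg.eigh(L)
# u = exp(L t) f = sum_i exp(lambda_i t) <phi_i, f> phi_i, per equation (2.18)
u = sum(np.exp(lam[i] * t) * (U[:, i] @ f) * U[:, i] for i in range(2))

print(u, expm(L * t) @ f)          # the two constructions agree
```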

2.4 Sturm-Liouville Operators

Let's note that the functions $a_i$ are typically real, so we can drop the conjugate notation. Our adjoint operator is then:

$$L^* v = a_0(x) v(x) - \frac{d(a_1 v)}{dx} + \frac{d^2(a_2 v)}{dx^2} \tag{2.19}$$

Thus, we have formal self-adjointness when the following holds:

$$a_2(x) \frac{d^2 u}{dx^2} + a_1(x) \frac{du}{dx} + a_0(x) u = a_0(x) u(x) - \frac{d(a_1 u)}{dx} + \frac{d^2(a_2 u)}{dx^2} \tag{2.20}$$
$$\frac{da_2}{dx} = a_1 \tag{2.21}$$

which we get by applying the product rule to the right-hand side. For formal self-adjointness, our operator $L$ then has the form:

$$L\, u = \frac{d}{dx}\left(a_2(x) \frac{du}{dx}\right) + a_0 u(x) \tag{2.22}$$

When we have an operator in this form, we only need to look at the boundary functional conditions.
When we have an operator in this form, we only need to look at the boundary functional conditions.
We can define a more general class of differential operators, the Sturm-Liouville operator, which
has the form:


du
q(x)
1 d
p(x)
+
u(x)
(2.23)
s(x) dx
dx
s(x)
Such that s, p, p0 , q are real and continuous in the domain and s, p are strictly positive in the domain.
We define this operator in the Hilbert space L2 (a, b, s(x)), such that our inner product is defined
as:
Z
b

< v, u >=

s(x)v (x)u(x)dx

(2.24)

These operators are always formally self adjoint. Our boundary functional condition is then:
(p(x)(v u0 (x) v 0 u(x)))ba = 0
Which implies that the operator is self-adjoint if and only if:




13 14
11 12




p(a)
= p(b)
23 24
21 22

(2.25)

(2.26)

Any operator with the form:

$$L\, u = a_2(x) \frac{d^2 u}{dx^2} + a_1(x) \frac{du}{dx} + a_0(x) u \tag{2.27}$$

where the $a_i$ are real and continuous and $a_2$ is strictly positive, can be transformed into Sturm-Liouville form (and is as such regular), as follows:

$$p(x) = \exp\left(\int_c^x \frac{a_1(\xi)}{a_2(\xi)}\, d\xi\right) \tag{2.28}$$
$$s(x) = \frac{p(x)}{a_2(x)} \tag{2.29}$$
$$q(x) = a_0(x)\, s(x) \tag{2.30}$$

where $c$ is an arbitrary point in our interval $(a, b)$.
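A symbolic sketch of the transformation (my addition, with a hypothetical Euler-type operator $L u = x^2 u'' + x u' + u$): equations (2.28)-(2.30) yield the Sturm-Liouville data $p$, $s$, $q$.

```python
import sympy as sp

x, xi = sp.symbols("x xi", positive=True)

# hypothetical coefficients: L u = x^2 u'' + x u' + u
a2, a1, a0 = xi**2, xi, sp.Integer(1)

c = 1  # arbitrary reference point in (a, b)
p = sp.exp(sp.integrate(a1 / a2, (xi, c, x)))   # equation (2.28)
s = p / a2.subs(xi, x)                          # equation (2.29)
q = a0 * s                                      # equation (2.30)

# p = x, s = 1/x, q = 1/x, so L u = (1/s) d/dx(p du/dx) + (q/s) u
print(sp.simplify(p), sp.simplify(s), sp.simplify(q))
```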

3 Analytical Solution Methods

3.1 Solution of Boundary Value Problems

Let's now consider the general boundary value problem:

$$L\, u = f(x), \qquad B_i u = \gamma_i \tag{3.1}$$

where, in general, both the differential equation and the boundary conditions may be non-homogeneous. We note, however, that we can use the principle of superposition, solving the following two easier problems:

$$L\, \bar{u} = 0, \quad B_i \bar{u} = \gamma_i \tag{3.2}$$
$$L\, \tilde{u} = f(x), \quad B_i \tilde{u} = 0 \tag{3.3}$$

The principle of superposition tells us that the total solution will be $u = \bar{u} + \tilde{u}$. We note that if we have the equation $L\, u = 0$, $B_i u = 0$, we only get the trivial solution $u = 0$. This is the closest analogy we have in function space to the case of a matrix $A$ being non-singular and having $A x = 0$.

Let's focus first on solving the homogeneous differential equation shown in equation (3.2), $L\, u = 0$. This problem is of particular interest as it often occurs in steady-state reaction-diffusion problems and also arises in solving the eigenproblem, if we define a new operator $L'$ such that $L' u = L u - \lambda u$. For an arbitrary $p$th order operator, we can let $u_1, \ldots, u_p$ be a set of fundamental linearly independent solutions to the problem $L\, u = 0$ with $p$ linearly independent initial conditions, called a fundamental system. Then, we can say that:

$$u = \sum_{j=1}^{p} \alpha_j u_j \tag{3.4}$$

is a solution to our problem, where the $\alpha_j$ are chosen to satisfy $B_i u = \gamma_i$.


Example 3.1: Consider the operator:

$$\frac{d^2 u}{dx^2} = 0$$

We need to generate an independent set of initial conditions. We specify $u(0) = 1$, $u'(0) = 0$ to get $u_1(x) = 1$. We specify a different independent set, $u(0) = 0$, $u'(0) = 1$, to get $u_2(x) = x$. Thus, our general solution is $\alpha_1 + \alpha_2 x$. We would further specify the values based on the boundary conditions.

Example 3.2: Consider the operator:

$$\frac{d^2 u}{dx^2} + \lambda u = 0$$

Let's again generate a set of initial conditions that are easy for us to work with. If we let $u(0) = 1$, $u'(0) = 0$, we get $u_1(x) = \cos(\sqrt{\lambda}\, x)$. If we further let $u(0) = 0$, $u'(0) = \sqrt{\lambda}$, we get $u_2(x) = \sin(\sqrt{\lambda}\, x)$. Thus, our general solution is the same as we have seen before, $u = \alpha_1 \cos(\sqrt{\lambda}\, x) + \alpha_2 \sin(\sqrt{\lambda}\, x)$.
Let's now consider the solution to problem (3.3), the inhomogeneous differential equation with homogeneous boundary conditions. For this system, there is a solvability result analogous to the Fredholm alternative theorem in vector space. The system $L\, u = f$, $B_i u = 0$ has a solution if and only if $\langle v, f \rangle = 0$, where $v$ is any solution to the homogeneous adjoint problem, $L^* v = 0$. If the adjoint problem has only the trivial solution, then the solution to $L\, u = f$ is unique. Otherwise, we get the solution:

$$u = u_p + \sum_{j=1}^{p} \alpha_j u_j \tag{3.5}$$

where the $u_j$ are the fundamental system of $L$, i.e., the solutions to $L\, u = 0$, with the $\alpha_j$ chosen to match the boundary conditions. Similar conditions, but a bit more complex, exist for the fully inhomogeneous problem. However, this solution will still have the same 2-part structure:

$$u(x) = u_I(x) + \sum_{j=1}^{p} \alpha_j u_j(x) \tag{3.6}$$

where $u_I$ is the solution to the inhomogeneous differential equation with homogeneous boundary conditions, and the $u_j$ are the fundamental solutions to the homogeneous differential equation, with the $\alpha_j$ chosen to satisfy the boundary conditions.
Example 3.3: Consider the inhomogeneous problem:

$$-\frac{d^2 u}{dx^2} = f(x)$$

We recall equation (3.6), which gives us the form of our solution. We already solved for the fundamental solutions in Example 3.1. We solve the inhomogeneous problem by direct integration, giving us:

$$u_I(x) = -\int_0^x (x - y) f(y)\, dy \tag{3.7}$$

Thus, our solution is:

$$u(x) = \alpha_1 + \alpha_2 x - \int_0^x (x - y) f(y)\, dy \tag{3.8}$$

where $\alpha_1$ and $\alpha_2$ are chosen to meet our boundary conditions. Let's consider two different sets of boundary conditions. First, let's look at Dirichlet boundary conditions, $u(0) = \gamma_1$ and $u(1) = \gamma_2$. Plugging in, we get:

$$\alpha_1 = \gamma_1 \tag{3.9}$$
$$\alpha_1 + \alpha_2 - \int_0^1 (1 - y) f(y)\, dy = \gamma_2 \tag{3.10}$$

This tells us that for any forcing function, a solution to the problem exists and is unique. If we look at the case of Neumann boundary conditions, $u'(0) = \gamma_1$, $u'(1) = \gamma_2$, we get:

$$\alpha_2 = \gamma_1, \qquad \alpha_2 - \int_0^1 f(y)\, dy = \gamma_2 \tag{3.11}$$

Thus, we have a solvability condition for these boundary conditions: our problem is solvable if and only if $\int_0^1 f(y)\, dy = \gamma_1 - \gamma_2$.
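A numerical check of the Neumann solvability condition (a sketch I am adding, with a hypothetical forcing $f$): only when $\int_0^1 f\, dy = \gamma_1 - \gamma_2$ can both flux conditions be met.

```python
import numpy as np
from scipy.integrate import quad

f = lambda y: np.sin(2 * np.pi * y)        # hypothetical forcing; integral is 0
total, _ = quad(f, 0, 1)

gamma1, gamma2 = 1.0, 1.0                  # u'(0) = u'(1) = 1

# from u(x) = a1 + a2*x - int_0^x (x - y) f(y) dy: u'(0) = a2, u'(1) = a2 - total
alpha2 = gamma1
print(np.isclose(total, gamma1 - gamma2))  # True -> the problem is solvable
print(alpha2 - total, gamma2)              # u'(1) indeed matches gamma2
```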
We then summarize a general result for second order operators:

$$L\, u = a_2 \frac{d^2 u}{dx^2} + a_1 \frac{du}{dx} + a_0 u, \qquad B_i u = \gamma_i \tag{3.12}$$

This problem has the general solution:

$$u(x) = \alpha_1 u_1(x) + \alpha_2 u_2(x) - \int_a^x \frac{u_1(x) u_2(y) - u_2(x) u_1(y)}{a_2(y)\, w(y)}\, f(y)\, dy \tag{3.13}$$

where $w(x)$ is the Wronskian function, defined as:

$$w(x) = u_1(x) u_2'(x) - u_2(x) u_1'(x) \tag{3.14}$$

and $u_1$ and $u_2$ are the fundamental solutions for $L$.

3.2 A Second Look at the Eigenproblem

We can now use the mathematical framework from the last section to solve the eigenproblem. Let's consider the second order operator $L\, u = -\frac{d^2 u}{dx^2}$ and solve the eigenproblem $L\, u = \lambda u$. We recall the fundamental solutions to this system, $\sin(\sqrt{\lambda}\, x)$ and $\cos(\sqrt{\lambda}\, x)$. We look at the boundary conditions and see that the system:

$$0 = \alpha_1 B_1 u_1 + \alpha_2 B_1 u_2 \tag{3.15}$$
$$0 = \alpha_1 B_2 u_1 + \alpha_2 B_2 u_2 \tag{3.16}$$

will have a non-trivial solution if and only if:

$$\begin{vmatrix} B_1 u_1 & B_1 u_2 \\ B_2 u_1 & B_2 u_2 \end{vmatrix} = 0 \tag{3.17}$$

Let's consider the self-adjoint operator with $u(0) = u(\pi) = 0$. Using equation (3.17), we get:

$$\begin{vmatrix} 0 & 1 \\ \sin(\sqrt{\lambda}\, \pi) & \cos(\sqrt{\lambda}\, \pi) \end{vmatrix} = 0 \tag{3.18}$$

which tells us that $\lambda = n^2$, where $n$ is an integer. However, when $\lambda = 0$, $u$ reduces to the constant $\alpha_2$, which matches the boundary conditions if and only if $\alpha_2 = 0$; this is a trivial solution and as such is not an eigenfunction. For the remaining eigenvalues, the normalized eigenfunctions have the form $\phi_j = \sqrt{2/\pi} \sin(j x)$. This is a complete set of orthonormal eigenfunctions.

3.3 Initial-Boundary Value Problems

Let's further extend our solution framework to solving time-dependent problems. This is best exhibited by an example. Consider an insulated steel bar of length $l$ with an initial profile $u(t = 0) = u_0(x)$ and no-flux boundary conditions, $u'(0) = u'(l) = 0$. It can be shown that this operator is self-adjoint. Let's now consider solving the IBVP:

$$\frac{\partial u}{\partial t} = -D L\, u, \qquad L\, u = -\frac{d^2 u}{dx^2} \tag{3.19}$$

i.e., the diffusion equation $\partial u / \partial t = D\, \partial^2 u / \partial x^2$. Let's solve the eigenequation $L\, u = \lambda u$. Using equation (3.17) with sine and cosine as our basis functions, we see that:

$$\begin{vmatrix} 0 & \sqrt{\lambda} \\ -\sqrt{\lambda} \sin(\sqrt{\lambda}\, l) & \sqrt{\lambda} \cos(\sqrt{\lambda}\, l) \end{vmatrix} = 0 \tag{3.20}$$

Thus, we see that $0 = \lambda \sin(\sqrt{\lambda}\, l)$. As such, we see that $\lambda = n^2 \pi^2 / l^2$, with $n = 0, 1, 2, \ldots$. After normalization, we get $\phi_0 = \sqrt{1/l}$ and $\phi_n = \sqrt{2/l} \cos(n\pi x / l)$. Looking at the general IBVP, we recall that the solution to $\frac{\partial u}{\partial t} = -D L\, u$ is:

$$u(x, t) = \exp(-t D L)\, u_0(x) \tag{3.21}$$

We recall that we can use our spectral resolution result to evaluate the exponential of the operator, such that:

$$u(x, t) = \sum_{n=0}^{\infty} \exp(-t D \lambda_n)\, \phi_n \phi_n^* u_0 \tag{3.22}$$

$$u(x, t) = \frac{1}{l} \int_0^l u_0(x)\, dx + \frac{2}{l} \sum_{n=1}^{\infty} \exp(-t D n^2 \pi^2 / l^2) \cos(n\pi x / l) \int_0^l u_0(y) \cos(n\pi y / l)\, dy \tag{3.23}$$

We note a couple of things about this system. First, at steady state, all of the exponential terms go to zero and the system ends at the average value of its initial condition. Second, even far away from steady state, the contribution of high-frequency modes is low, since $n^2$ appears in the exponential term. Thus, as $n$ increases, its contribution to the solution decreases very quickly. Systems with this property are called dissipative, and the property naturally occurs in diffusion processes.
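A sketch of the spectral solution (3.23) in code (my addition, with a hypothetical initial profile $u_0(x) = x(l - x)$): truncating the cosine series shows both the decay toward the mean of $u_0$ and how quickly the high-frequency modes die.

```python
import numpy as np
from scipy.integrate import quad

l, D = 1.0, 0.1
u0 = lambda x: x * (l - x)                 # hypothetical initial profile

def u(x, t, nmax=50):
    # truncated spectral solution, equation (3.23)
    mean, _ = quad(u0, 0, l)
    total = mean / l                       # n = 0 (steady-state) mode
    for n in range(1, nmax + 1):
        cn, _ = quad(lambda y: u0(y) * np.cos(n * np.pi * y / l), 0, l)
        total += (2.0 / l) * np.exp(-D * n**2 * np.pi**2 * t / l**2) \
                 * np.cos(n * np.pi * x / l) * cn
    return total

for t in (0.0, 0.1, 1.0, 10.0):
    print(t, u(0.25, t))                   # approaches the mean of u0, 1/6
```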

