Math. Nachr. 278, No. 12–13, 1490–1508 (2005) / DOI 10.1002/mana.200410317
Elliptic equations and products of positive definite matrices

Charles H. Conley¹, Patrizia Pucci², and James Serrin³

¹ Department of Mathematics, College of Arts and Sciences, University of North Texas, Denton TX, 76203-1430, USA
² Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, Via Vanvitelli 1, 06123 Perugia, Italy
³ Department of Mathematics, University of Minnesota, Minneapolis, Minnesota, USA
Received 15 November 2004, revised 10 February 2005, accepted 15 March 2005
Published online 8 September 2005
Key words Quasilinear singular elliptic inequalities, strong maximum principle, products of Hermitian matrices
MSC (2000) Primary: 35J15, 15A18; Secondary: 35J70, 15A23, 15A57
Dedicated to the memory of Professor F. V. Atkinson
We present necessary and sufficient conditions under which the symmetrized product of two $n \times n$ positive definite Hermitian matrices is still a positive definite matrix (Part I, Sections 2 and 3). These results are then applied to prove the validity of the strong maximum principle, as well as of the compact support principle, for nonnegative $C^1$ distribution solutions of general quasilinear inequalities, possibly not elliptic at points where the gradient variable is either zero or large (Part III, Sections 9 and 10).
In Part II (Sections 4–8) we consider the general problem of finding bounds for the least and greatest eigenvalues of the product of two (not necessarily definite) Hermitian matrices. In particular, we refine earlier results of Strang for this problem.
© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
1 Introduction

In this note we present sufficient and (essentially) necessary conditions under which the symmetrized product of two $n \times n$ positive definite complex valued Hermitian matrices is still positive definite (see Theorem 2.1). These results are then applied to prove the validity of a strong maximum principle for nonnegative $C^1$ distribution solutions of quasilinear differential inequalities of the general form
$$D_i\bigl\{a_{ij}(x,u)\,A(|Du|)\,D_j u\bigr\} - B(x,u,Du) \le 0\,,\qquad u \ge 0 \ \text{ in } \Omega\,, \tag{1.1}$$
where $[a_{ij}(x,u)]$, $i, j = 1, \dots, n$, is a continuously differentiable, real symmetric coefficient matrix defined in $\Omega \times \mathbb{R}^+_0$, which is positive definite when $u = 0$, that is
$$a_{ij}(x,0)\,\xi_i \xi_j > 0 \qquad \text{for all } x \in \Omega\,,\ \xi \in \mathbb{R}^n \setminus \{0\}\,. \tag{1.2}$$
We suppose furthermore that the principal operator $A = A(\rho)$ and the main nonlinearity $B = B(x,u,\xi)$ satisfy the following conditions:
(A1) $A \in C^1(\mathbb{R}^+)$, $\mathbb{R}^+ = (0,\infty)$;
(A2) $\Phi'(\rho) > 0$ for $\rho > 0$, and $\Phi(\rho) \to 0$ as $\rho \to 0$, where $\Phi(\rho) := \rho\,A(\rho)$;
(B1) $B \in C(\Omega \times \mathbb{R}^+_0 \times \mathbb{R}^n)$ and
$$B(x,u,\xi) \le \kappa\,\Phi(|\xi|) + f(u)$$
for $x \in \Omega$, $u \ge 0$, and all $\xi \in \mathbb{R}^n$ with $|\xi| \le 1$, where $\kappa > 0$, and the nonlinearity $f$ is continuous in $\mathbb{R}^+_0$ and nondecreasing on some interval $(0,\delta)$, $\delta > 0$, with $f(0) = 0$.
In fact, the strong maximum principle for nonnegative $C^1$ distribution solutions of the general quasilinear differential inequality (1.1) was already stated in [7, Theorem 8.1], but with inadequate conditions on the matrix $[a_{ij}]$. This is because the assertion on [7, p. 42], that the product matrix $[a_{ik} b_{kj}]$ is positive definite, fails to hold for arbitrary positive definite matrices $[a_{ij}]$. That is, the (symmetrized) product of two positive definite matrices need not itself be positive definite.

e-mail: conley@unt.edu, Phone: +00 (940) 565 3326, Fax: +00 (940) 565 4805
e-mail: pucci@dipmat.unipg.it, Phone: +39 075 585 5038, Fax: +39 075 585 5024
Corresponding author: e-mail: serrin@math.umn.edu, Phone: +00 (612) 624 9530, Fax: +00 (612) 626 2017
In this paper we overcome this gap in the proof by adding a suitable further assumption on the matrix $[a_{ij}]$ (in terms of the main operator $A$) under which the strong maximum principle is valid: see (9.2) and (9.3) and Theorem 8.1′. It should be noted, in fact, that Theorem 8.1′ applies even in cases when the differential inequality (1.1) is not elliptic.
In the remark at the close of Section 9 we also discuss several important subcases when the conditions (9.2) and (9.3) automatically apply, in which case [7, Theorem 8.1] is valid as stated.
Section 10 treats the compact support principle, complementary to the strong maximum principle, see Theorem 8.5′.
Part I of the paper is devoted to questions concerning the product of positive definite matrices. In particular, in Section 2 we give in Theorem 2.1 a sufficient condition on the eigenvalues of two positive definite Hermitian matrices $A$ and $B$ so that the symmetrized product of $A$ and $B$ will also be positive definite. This result, which answers a question raised by Olga Taussky (Todd) some years ago, is due independently to Nicholson [5] and Strang [11], but we give a simpler and entirely elementary proof based on an idea of Tao. In Section 3 we treat a converse result to Theorem 2.1.
These results are the basis for the new maximum principle Theorem 8.1′. Several further applications of the results of Part I are noted in the bibliography of [1] and [2].
In Part II we turn to the related question of lower and upper bounds for the eigenvalues of the symmetric product of (not necessarily definite) Hermitian matrices, refining earlier results of Strang [11], see Section 4. The results of Part II are not needed for the elliptic results of the paper which are described above, and which are treated in Part III, Sections 9 and 10.
Part I

2 Positive definite matrices

Here we present our first main algebraic result. For any square matrix $P$, let $\widehat{P}$ be its Hermitian symmetrization:
$$\widehat{P} = \frac{1}{2}\left(P + \bar P^{\,T}\right).$$
Note that if $A$ and $B$ are Hermitian, then $\widehat{AB}$ is their symmetric product:
$$\widehat{AB} = \frac{1}{2}\,(AB + BA)\,.$$
Theorem 2.1 Let $A$, $B$ be positive definite Hermitian complex $n \times n$ matrices. Then $\widehat{AB}$ is positive definite Hermitian if
$$\left(\sqrt{\frac{a_n}{a_1}} - 1\right)\left(\sqrt{\frac{b_n}{b_1}} - 1\right) < 2\,, \tag{2.1}$$
where $0 < a_1 \le \dots \le a_n$ and $0 < b_1 \le \dots \le b_n$ are the eigenvalues of $A$ and $B$, given in increasing order.
This result, with an equivalent though less transparent condition than (2.1), has been given by Nicholson [5]; see also Strang [11], Alikakos and Bates [1], [2], and Gustafson [3]. Our proof is shorter and more elementary than Nicholson's, in that it is based on the elegant observation of Terence Tao that, in the real case, $\widehat{AB}$ is positive definite if the sum of the maximal angles by which $A$, $B$ rotate an arbitrary vector $y \in \mathbb{R}^n$ is less than $\pi/2$ (see (2.2) and (2.3) below).
Theorem 3.1 provides a converse to Theorem 2.1, showing that condition (2.1) is best possible in the sense that the conclusion fails for any right-hand side greater than or equal to 2.
The first three paragraphs of the following proof can be omitted in the important case when the matrices $A$ and $B$ are real symmetric.
Proof of Theorem 2.1. That $\widehat{AB}$ is Hermitian is obvious. We begin by reducing to the case that $A$ and $B$ are real. Given any complex matrix $P$, define
$$\Phi(P) = \begin{pmatrix} \operatorname{Re} P & -\operatorname{Im} P \\ \operatorname{Im} P & \operatorname{Re} P \end{pmatrix}.$$
It is easy to verify that $\Phi$ is a real linear homomorphism from the complex $n \times n$ matrices to the real $2n \times 2n$ matrices, and that $P$ is Hermitian if and only if $\Phi(P)$ is symmetric. Moreover, it is a standard exercise to prove that if the eigenvalues of $P$ are $\lambda_1, \dots, \lambda_n$, then those of $\Phi(P)$ are $\lambda_1, \bar\lambda_1, \dots, \lambda_n, \bar\lambda_n$. It follows that $P$ is positive definite Hermitian if and only if $\Phi(P)$ is positive definite symmetric.
Suppose that the theorem holds for real matrices. Then given positive definite Hermitian complex matrices $A$ and $B$ satisfying condition (2.1), the preceding discussion shows that $\Phi(A)$ and $\Phi(B)$ also satisfy condition (2.1). Therefore $\widehat{\Phi(A)\Phi(B)}$ is positive definite symmetric. But it is easy to check that this last matrix is equal to $\Phi\bigl(\widehat{AB}\bigr)$. Therefore $\widehat{AB}$ is positive definite Hermitian, proving the theorem for complex matrices.
Thus we may henceforth restrict our attention to real $2n \times 2n$ symmetric matrices, with doubled eigenvalues. In fact it is then clear that it is enough to consider real $n \times n$ symmetric matrices, so we assume this for the remainder of the proof.
Given nonzero vectors $x$, $y$ in $\mathbb{R}^n$, write $\operatorname{Angle}(x,y)$ for the angle between them (assumed to be in $[0,\pi)$). For any nonzero real symmetric matrix $P$, define
$$\theta_P = \sup_{x \in \mathbb{R}^n \setminus \ker P} \operatorname{Angle}(x, Px)\,. \tag{2.2}$$
Clearly $P$ is positive definite if and only if $\theta_P < \pi/2$.
Note that $x \cdot \widehat{AB}\,x = Ax \cdot Bx$. Therefore we must prove that Eq. (2.1) implies
$$\sup_{x \in \mathbb{R}^n \setminus \{0\}} \operatorname{Angle}(Ax, Bx) < \frac{\pi}{2}\,.$$
Now by elementary geodesic theory on the sphere $S^{n-1}$ we get
$$\operatorname{Angle}(Ax, Bx) \le \operatorname{Angle}(Ax, x) + \operatorname{Angle}(x, Bx)\,,$$
as the two angles on the right-hand side are each less than $\pi/2$. Therefore it will do to prove that Eq. (2.1) implies $\theta_A + \theta_B < \pi/2$, or equivalently,
$$\cos(\theta_A + \theta_B) = \cos\theta_A \cos\theta_B - \sin\theta_A \sin\theta_B > 0\,. \tag{2.3}$$
We now need the following lemma.
Lemma 2.2 Let $A$ be a positive definite real symmetric $n \times n$ matrix with eigenvalues $0 < a_1 \le a_2 \le \dots \le a_n$. Then
$$\cos\theta_A = \frac{2\sqrt{a_1 a_n}}{a_1 + a_n}\,,\qquad \sin\theta_A = \frac{a_n - a_1}{a_n + a_1}\,. \tag{2.4}$$
Proof. It is sufficient to prove the formula for $\cos\theta_A$. Since $\theta_A$ and the eigenvalues of $A$ are invariant under orthogonal transformations of $A$, we may assume that $A$ is the positive diagonal matrix $A = \operatorname{diag}(a_1, \dots, a_n)$. We must compute
$$\cos\theta_A = \inf_{x \in \mathbb{R}^n \setminus \{0\}} \frac{x \cdot Ax}{|x|\,|Ax|} = \inf_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\sum_1^n a_i x_i^2}{\left(\sum_1^n x_i^2\right)^{1/2} \left(\sum_1^n a_i^2 x_i^2\right)^{1/2}} \in [0,1]\,. \tag{2.5}$$
To establish the minimum, write $z_i = \sqrt{a_i}\,x_i$ so that
$$\inf_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\sum_1^n a_i x_i^2}{\left(\sum_1^n x_i^2\right)^{1/2} \left(\sum_1^n a_i^2 x_i^2\right)^{1/2}} = \inf_{z \in \mathbb{R}^n \setminus \{0\}} \frac{\sum_1^n z_i^2}{\left(\sum_1^n a_i^{-1} z_i^2\right)^{1/2} \left(\sum_1^n a_i z_i^2\right)^{1/2}}\,,$$
which in turn by the Kantorovich inequality (see, e.g., [4, p. 444]; a short, simple proof is due to Ptak [10]) is greater than or equal, with equality attained, to $\dfrac{2\sqrt{a_1 a_n}}{a_1 + a_n}$.
Remark 2.3 Lemma 2.2 provides an interesting geometric interpretation of Kantorovich's inequality.
Completion of the proof of Theorem 2.1. In view of (2.3) and (2.4), it remains to show that (2.1) implies $\cos(\theta_A + \theta_B) > 0$. To see this, write $k = a_n/a_1$, $\ell = b_n/b_1$ and
$$I = \left(\sqrt{k} - 1\right)\left(\sqrt{\ell} - 1\right),\qquad J = \left(\sqrt{k} + 1\right)\left(\sqrt{\ell} + 1\right),$$
so that (2.1) reads $I < 2$. Then
$$I - 2 = \left(\sqrt{k\ell} - 1\right) - \left(\sqrt{k} + \sqrt{\ell}\right),\qquad J - 2 = \left(\sqrt{k\ell} - 1\right) + \left(\sqrt{k} + \sqrt{\ell}\right),$$
and so from (2.4) one gets the following identity, implying Theorem 2.1,
$$\cos(\theta_A + \theta_B) = -\,\frac{(I-2)(J-2)}{(k+1)(\ell+1)}\,. \tag{2.6}$$
A second proof of Theorem 2.1 can be obtained from the results of Section 6, see Remark 6.2 at the end of that section.
Remark 2.4 When (2.1) holds, not only is $\widehat{AB}$ positive definite, but also its eigenvalues are bounded away from 0 in terms of $2 - I$. To be more explicit, recall that $x \cdot \widehat{AB}\,x = Ax \cdot Bx$ is greater than or equal to $|Ax|\,|Bx|\cos(\theta_A + \theta_B)$. Therefore (2.6) gives
$$x \cdot \widehat{AB}\,x \ge -\,a_1 b_1 |x|^2\,\frac{(I-2)(J-2)}{(k+1)(\ell+1)} \ge \frac{2\,a_1 b_1}{(k+1)(\ell+1)}\,(2-I)\,|x|^2\,,$$
from which the assertion is immediately obtained.
Remark 2.5 Condition (2.1) is equivalent to $\cos(\theta_A + \theta_B) > 0$, which in turn is equivalent to $\cos^2\theta_A + \cos^2\theta_B > 1$. By Lemma 2.2 this is equivalent to
$$\frac{a_1 a_n}{(a_1 + a_n)^2} + \frac{b_1 b_n}{(b_1 + b_n)^2} > \frac{1}{4}\,.$$
For $2 \times 2$ matrices, this last condition reads
$$\frac{\det A}{\operatorname{tr}^2 A} + \frac{\det B}{\operatorname{tr}^2 B} > \frac{1}{4}\,.$$
For $n \times n$ matrices, it is possible to give a corresponding sufficient (although not necessary) condition for $\widehat{AB}$ to be positive definite, in terms only of the trace and determinant of $A$ and $B$. To accomplish this, first note that (as we show below)
$$n^n\,\frac{\det A}{\operatorname{tr}^n A} \le \frac{4\,a_1 a_n}{(a_1 + a_n)^2}\,. \tag{2.7}$$
Hence $\widehat{AB}$ is positive definite if
$$\frac{\det A}{\operatorname{tr}^n A} + \frac{\det B}{\operatorname{tr}^n B} > \frac{1}{n^n}\,.$$
The inequality (2.7) is obtained by induction on $k$ from the following result:
Let $r_1, r_2, \dots$ be any sequence of positive numbers, not necessarily increasing. Then for any positive integer $k$,
$$(k+1)^{k+1}\,\frac{r_1 r_2 \cdots r_{k+1}}{(r_1 + \dots + r_{k+1})^{k+1}} \le k^k\,\frac{r_1 r_2 \cdots r_k}{(r_1 + \dots + r_k)^k}\,.$$
Proof. This is easily checked to be equivalent to the relation
$$f(k) \le f\!\left(\frac{r_1 + \dots + r_k}{r_{k+1}}\right)$$
where the function $f$ is given by
$$f(x) = x^{-k/(k+1)}\,(x+1)\,.$$
But $f$ is minimal at $x = k$, so we are done.
3 A converse of Theorem 2.1

Given any invertible matrix $T$, we use the standard notation $B^T$ for $TBT^{-1}$. In the event that $T$ is unitary (i.e., $\bar T^{\,T} T = I$), we say that $B^T$ is a rotation of $B$. We often denote such rotations by $B_{\rm rot}$.
Theorem 3.1 Let $A$, $B$ be positive definite Hermitian complex valued $n \times n$ matrices. Let $I$ denote the left-hand side in (2.1) and suppose $I > 2$. Then there exists a rotation $B_{\rm rot} = B^T$ of $B$ such that $\widehat{AB_{\rm rot}}$ is indefinite.
Proof. Note that $B^T$ is positive definite Hermitian, so $x \cdot \bigl(\widehat{AB^T}\bigr)x = Ax \cdot B^T x$ is positive for any eigenvector $x$ of $A$. Therefore in order to prove that $\widehat{AB^T}$ is indefinite, it is enough to find an $x$ such that $Ax \cdot B^T x$ is negative.
By (2.6), $I > 2$ is equivalent to $\theta_A + \theta_B > \pi/2$. Choose $x$ and $y$ such that $\operatorname{Angle}(x, Ax) = \theta_A$ and $\operatorname{Angle}(y, By) = \theta_B$. It is an easy exercise to verify that there is a unitary matrix $T$ such that $Ty = x$ and $TBy$ lies in the plane of $x$ and $Ax$, on the opposite side of $x$ from $Ax$. Then $\operatorname{Angle}\bigl(Ax, B^T x\bigr) = \operatorname{Angle}(Ax, TBy) = \theta_A + \theta_B$. Since this is obtuse, we are done.
Theorem 3.2 Let $A$ be a positive definite real symmetric matrix, and let
$$C = C_\nu = I + c\,\nu \otimes \nu\,,\qquad 1 + c > 0\,,$$
where $\nu \in \mathbb{R}^n$, $|\nu| = 1$, and $\otimes$ denotes the dyadic product. Then $\widehat{AC_\nu}$ is positive definite symmetric for all unit vectors $\nu \in \mathbb{R}^n$ if either $c = 0$ or $c \ne 0$ and
$$\sqrt{\frac{a_n}{a_1}} < \frac{2 + c + 2\sqrt{1+c}}{|c|}\,. \tag{3.1}$$
On the other hand, if $c \ne 0$ and
$$\sqrt{\frac{a_n}{a_1}} > \frac{2 + c + 2\sqrt{1+c}}{|c|}\,, \tag{3.2}$$
then there exists $\nu = \bar\nu \in \mathbb{R}^n$, $|\bar\nu| = 1$, such that $\widehat{AC_{\bar\nu}}$ is indefinite.
Proof. Let
$$B = C_{(0,\dots,0,1)} = \operatorname{diag}(1, \dots, 1, 1+c)\,,\qquad 1 + c > 0\,.$$
Since $C_\nu$, with $\nu \in \mathbb{R}^n$ and $|\nu| = 1$, is a rotation of $B$, it follows that the eigenvalues of $C_\nu$ are exactly $1, \dots, 1, 1+c$. In the present case, for the matrices $A$ and $C_\nu$, by (2.1) we have obviously
$$I = \left(\sqrt{\frac{a_n}{a_1}} - 1\right)\left(\sqrt{1+c} - 1\right),\qquad c \ge 0\,,$$
or
$$I = \left(\sqrt{\frac{a_n}{a_1}} - 1\right)\left(\sqrt{\frac{1}{1+c}} - 1\right),\qquad c < 0\,.$$
Hence $I < 2$ if and only if $c = 0$ or (3.1) holds, and $I > 2$ if and only if $c \ne 0$ and (3.2) is verified.
That is, if $c = 0$ or if (3.1) holds, then $\widehat{AC_\nu}$ is positive definite symmetric by Theorem 2.1; while if (3.2) is verified, then by virtue of Theorem 3.1 there exists $\nu = \bar\nu \in \mathbb{R}^n$, $|\bar\nu| = 1$, such that $\widehat{AC_{\bar\nu}}$ is indefinite.
Part II

4 Eigenvalues of products of Hermitian matrices

In 1962 Strang initiated the study of the least eigenvalue of the symmetrized product of two $n \times n$ Hermitian matrices, with $n \ge 2$, see [11]. A trivial answer for this problem can be found by simply multiplying the matrices and determining their eigenvalues. More to the point, and of distinctly greater practical value, is to find a lower bound for the eigenvalues of the product in terms of the eigenvalues of the two matrices themselves. Such a result is directly in line with Nicholson's theorem [5], the condition that the least eigenvalue be positive (Theorem 2.1).
Strang proved that the best possible lower bound can be found among five distinct functions $E_1, \dots, E_5$ of the eigenvalues of the original matrices $A$ and $B$, and even more that these functions depend only on the least and greatest eigenvalues of the two matrices!
On the other hand, except by implication from the details of his proof, Strang gives no way to decide which one of the five functions would actually apply in any specific case. It is this question which we answer here, in Theorem 5.9. With the approach thus developed we also are able to answer the related question of providing the least upper bound for the greatest eigenvalue of the product: see Sections 7 and 8. The results are displayed in the unexpected and pretty diagrams of Figures 1–3. The separate domains there are bounded by branches of the four hyperbolas $H_1, \dots, H_4$ described below, one of the two branches of each hyperbola occurring in the diagrams of Figures 1 and 2, and all branches in Figure 3.
Now let $A$ and $B$ be two given Hermitian matrices, which we may suppose have the eigenvalues $a_1 \le \dots \le a_n$ and $b_1 \le \dots \le b_n$, with $a_n, b_n > 0$. The interest (and the complication) in our analysis, as also in Strang's, derives from the fact that the eigenvalues of the symmetric product do not depend simply on the eigenvalues of $A$ and $B$, but also on the relative rotations of $A$ and $B$. In other words, if $B_{\rm rot}$ is a rotation of $B$, the eigenvalues of $\widehat{AB_{\rm rot}}$ need not be the same as those of $\widehat{AB}$.
5 A special case

We first treat the special $2 \times 2$ real case in which the eigenvalues of $A$ and $B$ are $1, k$ and $1, \ell$, respectively, where $k, \ell \in \mathbb{R}$. (This case already contains all the ingredients necessary to handle the general $n \times n$ case.) Write
$$\tilde A = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix},\qquad \tilde B = \begin{pmatrix} \ell & 0 \\ 0 & 1 \end{pmatrix},$$
and
$$K = \tilde A\,,\qquad L = \tilde B_{\rm rot} = \begin{pmatrix} 1 + (\ell-1)\cos^2\theta & (\ell-1)\sin\theta\cos\theta \\ (\ell-1)\sin\theta\cos\theta & 1 + (\ell-1)\sin^2\theta \end{pmatrix},$$
where $\theta \in [0, \pi/2]$ is the associated rotation angle. Moreover, for an appropriate rotation angle the eigenvalues of $\widehat{AB}$ are the same as those of $\widehat{KL}$.
Lemma 5.1 The least eigenvalue of $\widehat{KL}$ is
$$\lambda = \lambda(k,\ell;t) = \frac{1}{2}\left[k + \ell + (k-1)(\ell-1)\,t - S\right], \tag{5.1}$$
where $t = \cos^2\theta \in [0,1]$ and
$$S = \sqrt{(k^2-1)(\ell^2-1)\,t + (\ell-k)^2} \ \ge 0\,.$$
Proof. This follows from a direct though somewhat tedious calculation, using the standard formula
$$\lambda(k,\ell;t) = \frac{1}{2}\left[\operatorname{tr}\widehat{KL} - \sqrt{\bigl(\operatorname{tr}\widehat{KL}\bigr)^2 - 4\det\widehat{KL}}\,\right].$$
Our problem is now to determine the best possible lower bound for $\lambda(k,\ell;t)$, namely
$$\lambda_{\min}(k,\ell) = \min_{t \in [0,1]} \lambda(k,\ell;t)\,. \tag{5.2}$$
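The closed form (5.1) is easy to cross-check against a direct eigenvalue computation of $\widehat{KL}$. The sketch below (our illustration; the values of $k$, $\ell$, $\theta$ are arbitrary samples) does this for one parameter choice:

```python
import math

def lam_formula(k, l, t):
    """Formula (5.1) for the least eigenvalue of the symmetrized product KL."""
    S = math.sqrt((k*k - 1)*(l*l - 1)*t + (l - k)**2)
    return 0.5*(k + l + (k - 1)*(l - 1)*t - S)

def lam_direct(k, l, theta):
    """Least eigenvalue of (KL + LK)/2 computed from the matrices themselves."""
    c, s = math.cos(theta), math.sin(theta)
    K = [[k, 0.0], [0.0, 1.0]]
    L = [[1 + (l - 1)*c*c, (l - 1)*s*c],
         [(l - 1)*s*c, 1 + (l - 1)*s*s]]
    KL = [[sum(K[i][m]*L[m][j] for m in range(2)) for j in range(2)] for i in range(2)]
    LK = [[sum(L[i][m]*K[m][j] for m in range(2)) for j in range(2)] for i in range(2)]
    M = [[(KL[i][j] + LK[i][j])/2 for j in range(2)] for i in range(2)]
    tr = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return (tr - math.sqrt(tr*tr - 4*det))/2

k, l, theta = 3.0, 14.0, 0.6
print(lam_formula(k, l, math.cos(theta)**2))
print(lam_direct(k, l, theta))   # the two values agree
```

Both computations return the same number, confirming (5.1) for this sample.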
This minimization problem is, in principle, a straightforward application of differential calculus. In practice, however, it is difficult to organize and carry out in view of the large number of special endpoint cases (at least sixteen by one count), a difficulty which seems to have been felt also by Strang in the formulation of his main theorem.
We proceed with a series of lemmas.
Lemma 5.2 Assume $S > 0$. Then for all $t \in [0,1]$
$$\frac{\partial\lambda}{\partial t}(k,\ell;t) = \frac{(k-1)(\ell-1)}{4S}\left[2S - (k+1)(\ell+1)\right] \tag{5.3}$$
and
$$\frac{\partial^2\lambda}{\partial t^2}(k,\ell;t) = \frac{(k^2-1)^2(\ell^2-1)^2}{8S^3} \ \ge 0\,. \tag{5.4}$$
Proof. This is a direct consequence of (5.1).
Lemma 5.3
$$\lambda(k,\ell;0) = \min\{k,\ell\} \qquad\text{and}\qquad \lambda(k,\ell;1) = \min\{1,k\ell\}\,.$$
Proof. This is immediate by computation of $\widehat{KL}$ at the points $t = 0$ and $t = 1$.
Lemma 5.4 Except in the special cases $k = \ell = 1$ and $k = \ell = -1$, we have $S > 0$ for all $t \in (0,1)$.
Proof. There are three cases to consider:
(i) $(k^2-1)(\ell^2-1) = 0$, $k = \ell$. This can occur only when $k = \ell = \pm 1$.
(ii) $(k^2-1)(\ell^2-1) = 0$, $k \ne \ell$. Here obviously $S > 0$ by direct calculation.
(iii) $(k^2-1)(\ell^2-1) \ne 0$. In this case, suppose for contradiction that $S = 0$ at some value $t = \bar t \in (0,1)$. Then from the formula for $S$ one finds that necessarily $(k^2-1)(\ell^2-1) < 0$, and in turn $S$ is imaginary for $t > \bar t$, which is impossible.
Corollary 5.5 Except in the special cases $k = \ell = 1$ and $k = \ell = -1$, the function $\lambda(k,\ell;\cdot)$ is convex in $[0,1]$.
Lemma 5.6 If the minimum in (5.2) occurs at some $t \in (0,1)$, then except when $k = 1$ or $\ell = 1$ or $k = \ell = -1$ we have¹
$$\lambda_{\min}(k,\ell) = \widehat E_5(k,\ell) \equiv \frac{16\,k\ell - (k-1)^2(\ell-1)^2}{8(k+1)(\ell+1)}\,,\qquad (k+1)(\ell+1) > 0\,. \tag{5.5}$$
Proof. Clearly $S > 0$ by Lemma 5.4. If also $(k+1)(\ell+1) \le 0$ then $\partial\lambda/\partial t \ne 0$, so $\lambda$ has no interior minimum, contrary to hypothesis.
Next, at a minimum value $t = t_0 \in (0,1)$ we have $\partial\lambda/\partial t = 0$, which gives
$$S = S_0 = \frac{(k+1)(\ell+1)}{2} \qquad\text{and}\qquad t_0 = \frac{(k+1)^2(\ell+1)^2 - 4(\ell-k)^2}{4(k^2-1)(\ell^2-1)}\,.$$
Thus, in turn by (5.2) and (5.1),
$$\lambda_{\min}(k,\ell) = \lambda(k,\ell;t_0) = \frac{4(k+\ell)(k+1)(\ell+1) - (k+1)^2(\ell+1)^2 - 4(\ell-k)^2}{8(k+1)(\ell+1)} = \widehat E_5(k,\ell)\,,$$
as required. To avoid a tedious calculation here, it is simplest to note that the numerator function is a second degree polynomial $N(k,s)$ in the variable $s = \ell - 1$, and to use Taylor's theorem with
$$N(k,0) = 16k\,,\qquad N_s(k,0) = 16k\,,\qquad N_{ss}(k,0) = -2(k-1)^2\,.$$
¹ Here, in using the notation $\widehat E_5$, we follow Strang.
Strang's Theorem 1, see [11], follows almost immediately from Lemma 5.3. To obtain a more refined version of Strang's theorem it is necessary to carry out a careful treatment of the formula (5.1). We give two different methods, the first based on a direct study of (5.1), the second using ideas already contained in Strang's paper.
In the sequel the following four functions will be crucial:
$$\Psi_1(k,\ell) \equiv (k-1)(\ell-1) - 4\,,\qquad \Psi_2(k,\ell) \equiv (k-1)(\ell-1) + 4k\,,$$
$$\Psi_3(k,\ell) \equiv (k-1)(\ell-1) + 4\ell\,,\qquad \Psi_4(k,\ell) \equiv (k-1)(\ell-1) - 4k\ell\,.$$
Lemma 5.7
$$\frac{\partial\lambda}{\partial t}(k,\ell;0) = \frac{(k-1)(\ell-1)}{4(\ell-k)} \times \begin{cases} -\Psi_2(k,\ell)\,, & \text{if } \ell > k\,, \\ \ \ \,\Psi_3(k,\ell)\,, & \text{if } \ell < k\,; \end{cases} \tag{5.6}$$
$$\frac{\partial\lambda}{\partial t}(k,\ell;1) = \frac{(k-1)(\ell-1)}{4(k\ell-1)} \times \begin{cases} \ \ \,\Psi_1(k,\ell)\,, & \text{if } k\ell > 1\,, \\ -\Psi_4(k,\ell)\,, & \text{if } k\ell < 1\,. \end{cases} \tag{5.7}$$
Proof. Eq. (5.6) is obtained immediately from (5.3). To get (5.7) we see from (5.1) and Lemma 5.3 that, at $t = 1$,
$$\min\{1, k\ell\} = \frac{1}{2}\left[k + \ell + (k-1)(\ell-1) - S\right].$$
If $k\ell > 1$, this gives $S = k\ell - 1$; the first part of (5.7) then follows from (5.3). If $k\ell < 1$, this gives $S = 1 - k\ell$, and the second part of (5.7) again comes from (5.3).
Following Strang, we define the quantities
$$\widehat E_1 = 1\,,\quad \widehat E_2 = k\,,\quad \widehat E_3 = \ell\,,\quad \widehat E_4 = k\ell \qquad\text{and}\qquad E = \min\bigl\{\widehat E_1, \widehat E_2, \widehat E_3, \widehat E_4\bigr\}\,. \tag{5.8}$$
Moreover, we introduce the following four hyperbolas:
$$H_1 : \Psi_1(k,\ell) = 0\,,\qquad H_2 : \Psi_2(k,\ell) = 0\,,$$
$$H_3 : \Psi_3(k,\ell) = 0\,,\qquad H_4 : \Psi_4(k,\ell) = 0\,.$$
In preparation for the main Theorem 5.9 we first give an interesting set of identities for the function $\widehat E_5$.
Lemma 5.8 Let $(k+1)(\ell+1) > 0$. For each $i = 1, \dots, 4$ there holds
$$\widehat E_5(k,\ell) = \widehat E_i(k,\ell) - \frac{\Psi_i(k,\ell)^2}{8(k+1)(\ell+1)}\,.$$
Proof. Consider first the case $i = 1$. By direct calculation
$$8(k+1)(\ell+1)\,\widehat E_1(k,\ell) - \Psi_1(k,\ell)^2 = 8(k+1)(\ell+1) - \left[(k-1)(\ell-1) - 4\right]^2$$
$$= 8(k+1)(\ell+1) + 8(k-1)(\ell-1) - 16 - (k-1)^2(\ell-1)^2 = 8(k+1)(\ell+1)\,\widehat E_5(k,\ell)\,,$$
as required. The other cases can be treated similarly, e.g., for $i = 2$ we get as before
$$8k(k+1)(\ell+1) - \left[(k-1)(\ell-1) + 4k\right]^2 = 8(k+1)(\ell+1)\,\widehat E_5(k,\ell)\,,$$
and so forth.
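The four identities of Lemma 5.8 are easy to confirm numerically. The following sketch (our illustration; the sample point $(k,\ell) = (3,14)$ is borrowed from the example in Remark 7.2 below, and the helper names are ours) checks all four at once:

```python
def E5hat(k, l):
    """The function E5-hat of (5.5)."""
    return (16*k*l - (k - 1)**2 * (l - 1)**2) / (8*(k + 1)*(l + 1))

def psi(i, k, l):
    """The functions Psi_1, ..., Psi_4 defined before Lemma 5.7."""
    base = (k - 1)*(l - 1)
    return {1: base - 4, 2: base + 4*k, 3: base + 4*l, 4: base - 4*k*l}[i]

Ehat = {1: lambda k, l: 1.0, 2: lambda k, l: k, 3: lambda k, l: l, 4: lambda k, l: k*l}

k, l = 3.0, 14.0
for i in (1, 2, 3, 4):
    lhs = E5hat(k, l)
    rhs = Ehat[i](k, l) - psi(i, k, l)**2 / (8*(k + 1)*(l + 1))
    print(i, lhs, rhs)   # equal for each i
```

Note also that at a point of the hyperbola $H_i$ (for instance $(2,5) \in H_1$, where $\Psi_1 = 0$) the identity reduces to $\widehat E_5 = \widehat E_i$, in line with (5.9) below.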
From Lemma 5.8 it follows that at points $(k,\ell) \in H_i$ we have
$$\widehat E_5 = \widehat E_i \qquad\text{and}\qquad D\bigl(\widehat E_5 - \widehat E_i\bigr) = 0\,, \tag{5.9}$$
where $D$ denotes the gradient vector.
Theorem 5.9 A transition occurs between the values $\lambda_{\min} = \widehat E_i$ and $\lambda_{\min} = \widehat E_5$ when and only when one crosses the upper branch of the hyperbola $H_i$, $i = 1, 2, 4$, or the lower branch of the hyperbola $H_3$, with the exception of the three points $(0,1)$, $(1,0)$ and $(-1,-1)$ where the hyperbolas intersect.
In addition $\lambda_{\min}$ is continuous in $\mathbb{R}^2$.
Proof. We shall actually prove somewhat more. Denote by $\widetilde H_i$, $i = 1, 2, 4$, the upper branches of $H_1$, $H_2$ and $H_4$, and similarly by $\widetilde H_3$ the lower branch of $H_3$, as in the statement of the theorem. Divide the $(k,\ell)$-plane into eight (closed) regions $R$ by means of the following five lines and half lines:
$$k = 0\,,\ \ell \le 0\,;\qquad k \le 0\,,\ \ell = 0\,;\qquad k = 1\,;\qquad \ell = 1\,;\qquad k = \ell < 0\,.$$
In each one of the eight regions $R$ there is contained a unique maximal arc $\gamma = \gamma(R)$ of one of the branches $\widetilde H_1$, $\widetilde H_2$, $\widetilde H_3$ and $\widetilde H_4$. We assert that in each such region $R$ there holds
$$\lambda_{\min} = \begin{cases} E & \text{on one side of the arc } \gamma\,, \\ \widehat E_5 < E & \text{on the other side}\,, \end{cases}$$
and $\widehat E_5 = E$ exactly on the arc $\gamma$. (Thus $\lambda_{\min}$ is continuous in $R$.)
To prove this assertion we shall consider in detail only the regions
$$R_1 = \bigl\{(k,\ell) \in \mathbb{R}^2 : k \ge 1\,,\ \ell \ge 1\bigr\} \qquad\text{and}\qquad R_2 = \bigl\{(k,\ell) \in \mathbb{R}^2 : 0 \le k \le 1\,,\ \ell \ge 1\bigr\}\,,$$
the corresponding arcs being contained in the branches $\widetilde H_1$ and $\widetilde H_2$, respectively, as one sees from Figure 1.
In $R_1$, by Lemma 5.3 we have
$$\lambda(k,\ell;0) = \min\{k,\ell\} \ge 1 \qquad\text{and}\qquad \lambda(k,\ell;1) = 1 = \widehat E_1 = E\,,$$
so that also $\lambda(k,\ell;1) \le \lambda(k,\ell;0)$. Moreover in the interior of $R_1$ there holds $k\ell > 1$ and $(k-1)(\ell-1) > 0$. Hence by (5.7),
$$\frac{\partial\lambda}{\partial t}(k,\ell;1) < 0 \ \text{ if } (k,\ell) \text{ is below } \widetilde H_1\,,\qquad \frac{\partial\lambda}{\partial t}(k,\ell;1) > 0 \ \text{ if } (k,\ell) \text{ is above } \widetilde H_1\,.$$
Finally, the graph of $\lambda(k,\ell;\cdot)$ is convex by Corollary 5.5. From the above considerations, it now follows easily that
$$\lambda_{\min}(k,\ell) = \lambda(k,\ell;1) = E \ \text{ if } (k,\ell) \text{ is below } \widetilde H_1\,,\qquad \lambda_{\min}(k,\ell) = \lambda(k,\ell;t_0) = \widehat E_5 < E \ \text{ if } (k,\ell) \text{ is above } \widetilde H_1\,. \tag{5.10}$$
By Lemma 5.8 we have $\widehat E_5 = \widehat E_1$ exactly on $\widetilde H_1 \cap R_1$. This proves the assertion for the region $R_1$.
In $R_2$ we have
$$\lambda(k,\ell;0) = k = \widehat E_2 = E \qquad\text{and}\qquad \lambda(k,\ell;1) = \min\{1, k\ell\} \ge \min\{1, k\} = k\,,$$
so that now $\lambda(k,\ell;0) \le \lambda(k,\ell;1)$. Moreover in the interior of $R_2$ there holds $\ell > k$ and $(k-1)(\ell-1) < 0$. Hence by (5.6),
$$\frac{\partial\lambda}{\partial t}(k,\ell;0) > 0 \ \text{ if } (k,\ell) \text{ is below } \widetilde H_2\,,\qquad \frac{\partial\lambda}{\partial t}(k,\ell;0) < 0 \ \text{ if } (k,\ell) \text{ is above } \widetilde H_2\,.$$
From Corollary 5.5 and the above considerations, it again follows easily that
$$\lambda_{\min}(k,\ell) = \lambda(k,\ell;0) = k = E \ \text{ if } (k,\ell) \text{ is below } \widetilde H_2\,,\qquad \lambda_{\min}(k,\ell) = \lambda(k,\ell;t_0) = \widehat E_5 < E \ \text{ if } (k,\ell) \text{ is above } \widetilde H_2\,. \tag{5.11}$$
By Lemma 5.8 we have $\widehat E_5 = \widehat E_2$ exactly on $\widetilde H_2 \cap R_2$. This proves the assertion for the region $R_2$. The remaining six regions $R_3, \dots, R_8$ are treated in the same way.
It remains to be shown that $\lambda_{\min}$ is continuous in $\mathbb{R}^2$. In view of what has been shown so far, it is enough to prove that $\lambda_{\min}$ is continuous at the common boundary of any two adjacent regions $R$. For the common boundary $k = 1$ of the regions $R_1$ and $R_2$, we have by (5.10) and (5.11) that $\lambda_{\min} = \widehat E_2$ immediately to the left of $k = 1$ and $\lambda_{\min} = \widehat E_1$ immediately to the right. But $\widehat E_1 = \widehat E_2 = 1$ when $k = 1$, proving continuity at the common boundary of $R_1$ and $R_2$. The remaining common boundaries can obviously be treated in the same way.
Remark 5.10 It is worth noting also that by Lemma 5.8 the function $\lambda_{\min}$ is of class $C^1\bigl(\mathbb{R}^2\bigr)$ except where $k = 1$ or $\ell = 1$.
6 The general case $\widehat{AB}$

It remains to apply the special case of the matrices $K$, $L$, considered above, to obtain a lower bound for the least eigenvalue of $\widehat{AB}$.
We assume as in Section 4 that the eigenvalues of $A$ satisfy $a_1 \le \dots \le a_n$ and those of $B$ obey $b_1 \le \dots \le b_n$, where now we suppose, without loss of generality, that $a_n, b_n > 0$ but of course no longer that $a_1, b_1 > 0$. Let $\widehat\lambda_{\min}$ be the best lower bound for the least eigenvalue of $\widehat{AB_{\rm rot}}$, in terms of $a_1, \dots, a_n$ and $b_1, \dots, b_n$. Since it is enough to treat the $2 \times 2$ real case (see Strang [11]), we have
$$\widehat\lambda_{\min} = C\,\lambda_{\min}(k,\ell) = \lambda_{\min}(a_1, a_n, b_1, b_n)\,, \tag{6.1}$$
where $k$, $\ell$, $C$ are given in the following table:
$$\begin{array}{llll}
a_1, b_1 > 0: & k = a_n/a_1\,, & \ell = b_n/b_1\,, & C = a_1 b_1\,,\\[2pt]
a_1 \le 0\,,\ b_1 > 0: & k = a_1/a_n\,, & \ell = b_n/b_1\,, & C = a_n b_1\,,\\[2pt]
a_1 > 0\,,\ b_1 \le 0: & k = a_n/a_1\,, & \ell = b_1/b_n\,, & C = a_1 b_n\,,\\[2pt]
a_1 \le 0\,,\ b_1 \le 0: & k = a_1/a_n\,, & \ell = b_1/b_n\,, & C = a_n b_n\,.
\end{array}$$
The numbers $k$, $\ell$ are sometimes called the condition numbers of the matrices $A$, $B$. From the table we also get
$$C\,\widehat E_5(k,\ell) = \frac{16\,a_1 b_1 a_n b_n - (a_n - a_1)^2 (b_n - b_1)^2}{8(a_n + a_1)(b_n + b_1)} \equiv E_5(a_1, a_n, b_1, b_n) \tag{6.2}$$
(notation of Strang), while moreover the quantities $C\,\widehat E_i$, $i = 1, \dots, 4$, have one of the four values $a_1 b_1$, $a_1 b_n$, $a_n b_1$, $a_n b_n$.
We can now obtain the diagram of Figure 1, showing the values of $\lambda_{\min}$ over the whole $(k,\ell)$-plane. First note by Theorem 5.9 that there is a transition each time one crosses an upper branch of the hyperbola $H_i$, $i = 1, 2, 4$, or the lower branch of $H_3$. If we consider the transition across $H_1$, for example, we have seen previously that $\lambda_{\min} = \widehat E_1 = 1$ below $H_1$, while $\lambda_{\min} = \widehat E_5$ above $H_1$. In particular, one then finds that
$$\widehat\lambda_{\min} = C\,\lambda_{\min} = a_1 b_1 \ \text{ below } H_1\,,\qquad \widehat\lambda_{\min} = E_5(a_1, a_n, b_1, b_n) \ \text{ above } H_1\,.$$
All the transitions can be treated the same way, but (as follows from the proof of Theorem 5.9) it is easier simply to alternate between the values $E_5$ and $\min\{a_1 b_1, a_1 b_n, a_n b_1, a_n b_n\}$ each time one crosses the upper branch of a hyperbola $H_i$ (lower branch if $i = 3$).
The diagram of Figure 1 is the first main result of Part II of the paper, significantly refining earlier results of Strang. More particularly, in [11, Theorem 1] there is no attempt to locate the subset of points where the condition $\lambda_{\min} = E_5$ applies. Here it is shown that this region is precisely the shaded region bounded by branches of the four hyperbolas $H_1, \dots, H_4$, to us an unexpected and pretty result.
By Remark 5.10 and by (6.1) also $\widehat\lambda_{\min}$ is a continuously differentiable function of the variables $(a_1, \dots, a_n; b_1, \dots, b_n)$.
Remark 6.1 When $A$ is not positive definite, then $\widehat\lambda_{\min} \le 0$ for all matrices $B$. Equality can occur only when $A$ is positive semi-definite and $B = I$.
6.1 The rotation required for indefiniteness

Let $K$, $L$ both be positive definite $2 \times 2$ real matrices. If $\widehat E_5 < 0$, where $k = a_n/a_1 > 0$ and $\ell = b_n/b_1 > 0$, then $\widehat{KL}$ will be indefinite, either by Theorem 3.1 or by Figure 1. Our interest is in the rotation angles $\theta$ in (5.1) which produce $\lambda_{\min} < 0$. In fact, when $\lambda_{\min} < 0$ we have $k + \ell + (k-1)(\ell-1)t < S$ by (5.1), that is
$$\left[k + \ell + (k-1)(\ell-1)\,t\right]^2 < (k^2-1)(\ell^2-1)\,t + (\ell-k)^2\,.$$
This is a quadratic inequality for $t$ whose solutions
$$t_\pm = \frac{1}{2}\left[1 \pm \frac{\sqrt{-8(k+1)(\ell+1)\,\widehat E_5}}{(k-1)(\ell-1)}\right]$$
give in turn the decisive angles $\theta_\pm$; that is, when $\widehat E_5 < 0$,
$$\lambda < 0 \ \text{ exactly when }\ t_- < t < t_+ \tag{6.3}$$
(clearly $0 < t_- < t_+ < 1$). Of course (6.3) can be rewritten as
$$\theta_+ < \theta < \pi/2 - \theta_+\,,$$
where $\cos^2\theta_+ = t_+$, a sector centered on $\theta = \pi/4$. (Other sectors symmetrically related to (6.3) clearly also produce indefiniteness.)
As an immediate corollary it is obvious that at the onset of indefiniteness ($\widehat E_5 = 0$) one requires the rotation angle $\theta = \pi/4$.
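The window $(t_-, t_+)$ of (6.3) can be verified by direct evaluation of (5.1). The sketch below (our illustration; the point $(k,\ell) = (9,9)$ is an arbitrary sample with $\widehat E_5 < 0$) checks that $\lambda$ is negative strictly inside the window and positive outside, and that the window endpoints are zeros of $\lambda$:

```python
import math

def lam(k, l, t):
    """Formula (5.1)."""
    S = math.sqrt((k*k - 1)*(l*l - 1)*t + (l - k)**2)
    return 0.5*(k + l + (k - 1)*(l - 1)*t - S)

k, l = 9.0, 9.0
E5 = (16*k*l - (k - 1)**2*(l - 1)**2) / (8*(k + 1)*(l + 1))   # negative here (= -3.5)
root = math.sqrt(-8*(k + 1)*(l + 1)*E5) / ((k - 1)*(l - 1))
t_minus, t_plus = 0.5*(1 - root), 0.5*(1 + root)

inside = lam(k, l, 0.5*(t_minus + t_plus))    # midpoint of the window
outside = lam(k, l, 0.5*t_minus)              # below the window
print(t_minus, t_plus, inside, outside)
```

Note that $t_- + t_+ = 1$, which is the analytic counterpart of the sector being centered on $\theta = \pi/4$.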
Fig. 1 The eigenvalue function $\lambda_{\min}$
The diagram is to be read as follows: given $a_1, \dots, a_n$; $b_1, \dots, b_n$, one first computes $(k,\ell)$ from the table preceding (6.2). If this point occurs in one of the unshaded regions, the value of $\widehat\lambda_{\min}$ is shown in a corresponding box directly on the diagram. If it is in a shaded region, then $\widehat\lambda_{\min} = E_5$ (see (6.2)).
The hyperbolas $H_1$, $H_2$, $H_3$, $H_4$ are defined preceding Theorem 5.9 (see also (8.1)). Their vertical asymptotes are respectively $k = 1, 1, -3, -1/3$ and their horizontal asymptotes are similarly $\ell = 1, -3, 1, -1/3$.
The curve $I = 2$ is the borderline case in Nicholson's theorem (Theorem 2.1).
Remark 6.2 The region $\widehat E_5(k,\ell) > 0$ is of course given by (2.1). We can also obtain this result directly from $\widehat E_5 > 0$ by the factorization
$$8(k+1)(\ell+1)\,\widehat E_5 = \left[4\sqrt{k\ell} - (k-1)(\ell-1)\right]\left[4\sqrt{k\ell} + (k-1)(\ell-1)\right].$$
Hence in the set $\{(k,\ell) \in \mathbb{R}^2 : k > 1\,,\ \ell > 1\}$, $\widehat E_5 > 0$ is equivalent to $4\sqrt{k\ell} > (k-1)(\ell-1)$, which can be rewritten as
$$\left(\sqrt{k} + \sqrt{\ell}\right)^2 > \left(\sqrt{k\ell} - 1\right)^2,$$
or again
$$\left(\sqrt{k\ell} + \sqrt{k} + \sqrt{\ell} - 1\right)\left(\sqrt{k} + \sqrt{\ell} - \sqrt{k\ell} + 1\right) > 0\,.$$
But this immediately gives (2.1).
The same calculation also gives the remarkable identity
$$\widehat E_5(k,\ell) = -\,\frac{1}{8(k+1)(\ell+1)}\prod_{\varepsilon_k = \pm 1,\ \varepsilon_\ell = \pm 1}\left[\left(\sqrt{k} + \varepsilon_k\right)\left(\sqrt{\ell} + \varepsilon_\ell\right) - 2\sqrt{k\ell}\,\right]. \tag{6.4}$$
7 Maximum eigenvalues

The maximum eigenvalue problem is treated in almost exactly the same way. The following obvious changes occur:
$$\Lambda(k,\ell;t) = \frac{1}{2}\left[k + \ell + (k-1)(\ell-1)\,t + S\right],$$
$$\frac{\partial\Lambda}{\partial t}(k,\ell;t) = \frac{(k-1)(\ell-1)}{4S}\left[2S + (k+1)(\ell+1)\right],$$
$$\frac{\partial^2\Lambda}{\partial t^2}(k,\ell;t) = -\,\frac{(k^2-1)^2(\ell^2-1)^2}{8S^3} \ \le 0\,,$$
$$\frac{\partial\Lambda}{\partial t}(k,\ell;0) = \frac{(k-1)(\ell-1)}{4(\ell-k)} \times \begin{cases} -\Psi_2(k,\ell)\,, & \text{if } \ell < k\,, \\ \ \ \,\Psi_3(k,\ell)\,, & \text{if } \ell > k\,, \end{cases}$$
$$\frac{\partial\Lambda}{\partial t}(k,\ell;1) = \frac{(k-1)(\ell-1)}{4(k\ell-1)} \times \begin{cases} \ \ \,\Psi_1(k,\ell)\,, & \text{if } k\ell < 1\,, \\ -\Psi_4(k,\ell)\,, & \text{if } k\ell > 1\,. \end{cases}$$
The remaining lemmas of Section 5 are similarly changed, giving the main
Theorem 7.1 A transition occurs between values $\Lambda_{\max} = \widehat E_i$ and $\Lambda_{\max} = \widehat E_5$ when and only when one crosses the lower branch of the hyperbola $H_i$, $i = 1, 2, 4$, or the upper branch of the hyperbola $H_3$, with the exception of the point $(-1,-1)$ where the hyperbolas intersect. In addition $\Lambda_{\max}$ is continuous in $\mathbb{R}^2$.
A discussion similar to that given earlier for $\lambda_{\min}$ applies to the corresponding upper bound function $\Lambda_{\max}$, giving the diagram of Figure 2, this being the second main result of Part II.
Remark 7.2 The pair of matrices
A =
_
2 1
1 2
_
B =
_
14 0
0 1
_
(7.1)
provides an interesting example.
2
Here n = 2 and the eigenvalues of A are 1 and 3, and of B are 1 and 14, so that
a
1
= 1 , a
n
= 3 , b
1
= 1 and b
n
= 14.
Hence according to the table in Section 6,
k =
a
n
a
1
= 3 and =
b
n
b
1
= 14 .
By Theorem 2.1 the least eigenvalue of AB need not be positive, and in fact by direct computation is
0.008331. On the other hand, from Figure 1 we have (since the point (3, 14) lies in the shaded region)

min
= E
5
(1, 3, 1, 14) =

E
5
(3, 14) = 0.008333 .
2
A slightly more complicated example is given in [4, p. 464].
c 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
1502 Conley, Pucci, and Serrin: Elliptic equations
Thus the particular matrices (7.1) give the value of λ_min(3, 14) to within the error 2 × 10⁻⁶. At the same time, of course,
λ_max(3, 14) = 42
(the greatest eigenvalue of {AB} in fact is 30.008331).
It is interesting to note further that the rotation angle from the diagonal form of A to A in this example is just the borderline case θ = π/4.
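The numbers in this remark can be reproduced with a few lines of code. The sketch below takes {AB} = (AB + BA)/2 and the rational form of Ê₅ from Section 6 (the latter is our assumption); the 2 × 2 eigenvalues come from the quadratic formula.

```python
import math

# The matrices of (7.1): A = [[2, 1], [1, 2]], B = [[14, 0], [0, 1]].
A = [[2.0, 1.0], [1.0, 2.0]]
B = [[14.0, 0.0], [0.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][r]*Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

AB, BA = matmul(A, B), matmul(B, A)
S = [[0.5*(AB[i][j] + BA[i][j]) for j in range(2)] for i in range(2)]  # {AB}

# Eigenvalues of the symmetric 2x2 matrix S via the quadratic formula:
tr = S[0][0] + S[1][1]
det = S[0][0]*S[1][1] - S[0][1]*S[1][0]
disc = math.sqrt(tr*tr - 4.0*det)
lam_min, lam_max = 0.5*(tr - disc), 0.5*(tr + disc)

assert abs(lam_min - (-0.008331)) < 1e-5   # least eigenvalue of {AB}
assert abs(lam_max - 30.008331) < 1e-5     # greatest eigenvalue of {AB}

# Lower bound E5-hat(3, 14) = -1/120 ~ -0.008333 (assumed rational form):
k, l = 3.0, 14.0
E5 = (16*k*l - (k - 1)**2*(l - 1)**2) / (8*(k + 1)*(l + 1))
assert abs(E5 + 1.0/120.0) < 1e-12
assert E5 <= lam_min
```

Note that lam_min + lam_max = tr(AB) = 30, which explains the matching decimals in the two computed eigenvalues.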
Fig. 2 The eigenvalue function λ_max. The diagram is to be read in the same way as Figure 1.
8 Strang's approach
Here we present proofs of Theorems 5.9 (in the n × n case) and 7.1 based on Strang's method. Let A be an n × n Hermitian matrix with smallest and largest eigenvalues a₁ and aₙ. However, we do not specify which of a₁ and aₙ is larger. Similarly, let B be an n × n Hermitian matrix with extremal eigenvalues b₁ and bₙ.
Following Strang, we define
E₁ = a₁b₁ , E₂ = aₙb₁ , E₃ = a₁bₙ , E₄ = aₙbₙ ,
and E₅ by Eq. (6.2). As in Section 5 of this paper, we define
k = aₙ/a₁ , ℓ = bₙ/b₁ and Ê_i = E_i/(a₁b₁) , 1 ≤ i ≤ 5 .
This notation is consistent with (5.8) and (5.5). We emphasize that since we do not specify which of the a_i is larger, A only determines k up to one of two reciprocal values.
The paper [11] is concerned with the following quantities:

λ_min = inf { minimum eigenvalue of {AB_T} : T unitary } ,

λ_max = sup { maximum eigenvalue of {AB_T} : T unitary } .
Let us define R to be the region in the (k, ℓ)-plane where the following two inequalities hold:

| L(k) + L(k)⁻¹ − L(k)L(ℓ)⁻² | < 2  and  | L(ℓ) + L(ℓ)⁻¹ − L(ℓ)L(k)⁻² | < 2 ,
where L is the involutive linear fractional transformation z ↦ (z + 1)/(z − 1). Clearly R is symmetric across k = ℓ, and we will see during the proof of Theorem 8.2 that it is invariant under the transformations k ↦ k⁻¹, ℓ ↦ ℓ⁻¹. Therefore the matrices A and B determine whether or not (k, ℓ) is in R, even though they only determine k and ℓ up to reciprocals. Strang's main result can be formulated as follows:
Theorem 8.1 (Strang [11]) If (k, ℓ) ∈ R, then λ_min and λ_max are the least and greatest of E₁, . . . , E₅, respectively. If (k, ℓ) ∉ R, then λ_min and λ_max are the least and greatest of E₁, . . . , E₄, respectively.
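The membership test for R is easy to implement. The sketch below encodes the two inequalities in the form displayed above (with L(z) = (z + 1)/(z − 1)) and spot-checks the reciprocal invariance together with a few sample points; points too close to the boundary of R are skipped to avoid floating-point flips, and the specific sample points are our choices.

```python
import random

def L(z):
    # the involutive linear fractional transformation of Section 8
    return (z + 1.0) / (z - 1.0)

def in_R(k, l):
    # the two inequalities defining the region R (valid for k, l != +-1)
    a, b = L(k), L(l)
    return (abs(a + 1.0/a - a/(b*b)) < 2.0 and
            abs(b + 1.0/b - b/(a*a)) < 2.0)

assert in_R(0.0, 0.0)        # (0, 0) lies in R
assert not in_R(2.0, 2.0)    # (2, 2) does not
assert in_R(3.0, 14.0)       # the example of Remark 7.2

# invariance under k -> 1/k and l -> 1/l, away from the boundary:
random.seed(1)
checked = 0
for _ in range(500):
    k = random.choice((-1.0, 1.0)) * random.uniform(1.1, 10.0)
    l = random.choice((-1.0, 1.0)) * random.uniform(1.1, 10.0)
    a, b = L(k), L(l)
    m1 = 2.0 - abs(a + 1.0/a - a/(b*b))
    m2 = 2.0 - abs(b + 1.0/b - b/(a*a))
    if min(abs(m1), abs(m2)) < 1e-6:
        continue             # too close to the boundary of R
    assert in_R(k, l) == in_R(1.0/k, l) == in_R(k, 1.0/l) == in_R(1.0/k, 1.0/l)
    checked += 1
assert checked > 400
```

The invariance reflects the identity L(1/z) = −L(z) used in the proof of Theorem 8.2 below, since each defining expression changes only in sign under z ↦ 1/z.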
Our Theorems 5.9 and 7.1 extend this theorem by graphing R and determining explicitly at each point of R whether E₅ is λ_min, λ_max, or neither. Of course, this is elementary, but it requires a significant calculation and the results are appealing.
We now restate our two theorems as a single theorem and give an alternate proof. Write ∂R for the boundary of R, and recall that the zero-loci of the conics Φ_i are the hyperbolas H_i, 1 ≤ i ≤ 4.
Theorem 8.2 R is the union of the open shaded areas (both dark and light) in Figure 3. If a₁b₁ > 0, then E₅ = λ_max in the darkly shaded areas and E₅ = λ_min in the lightly shaded areas. This is reversed if a₁b₁ < 0.
To put it in words, (0, 0) is in R and ∂R = ∪₁⁴ H_i, so that a path either enters or leaves R every time it crosses either branch of any of the H_i. Moreover, E₅ is either λ_min or λ_max on all of R. If a₁b₁ > 0, then E₅ = λ_max for (k + 1)(ℓ + 1) > 0 and E₅ = λ_min for (k + 1)(ℓ + 1) < 0. This is reversed if a₁b₁ < 0.
Finally, λ_min and λ_max are of class C(R²), and they are C¹ except on (k − 1)(ℓ − 1) = 0. In particular, they are of class C¹ across ∂R.
Proof. By rescaling, we may assume without loss of generality that a₁ = b₁ = 1, so that aₙ = k, bₙ = ℓ, and E_i = Ê_i. Some algebra shows that the inequalities defining R are equivalent to

(1 + |L(k)|)(1 − |L(ℓ)|) < 1 , |1 − |L(k)|| |L(ℓ)| < |L(k)| ,

and the two inequalities obtained from these by exchanging k and ℓ. Since L(1/z) = −L(z), this makes it clear that R is invariant under both k ↦ k⁻¹ and ℓ ↦ ℓ⁻¹.
First we claim that ∂R ⊂ ∪₁⁴ H_i. For this, recall that Φ₁(k, ℓ) = (k − 1)(ℓ − 1) − 4, and check that

Φ₂(k, ℓ) = −kΦ₁(k⁻¹, ℓ) = (k − 1)(ℓ + 3) + 4 ,

Φ₃(k, ℓ) = −ℓΦ₁(k, ℓ⁻¹) = (k + 3)(ℓ − 1) + 4 ,

Φ₄(k, ℓ) = −kℓΦ₁(k⁻¹, ℓ⁻¹) = (1/3)[(3k + 1)(3ℓ + 1) − 4] .   (8.1)
Thus it is clear that ∪₁⁴ H_i is invariant under k ↦ k⁻¹ and ℓ ↦ ℓ⁻¹. This is also true of R and hence of ∂R, and so to prove the claim it suffices to prove that ∂R ∩ {(k, ℓ) : |k|, |ℓ| > 1} is contained in ∪₁⁴ H_i.
Now {(k, ℓ) : |k|, |ℓ| > 1} is equal to {(k, ℓ) : L(k), L(ℓ) > 0}. Therefore on this set our restatement of the inequalities defining R becomes

[1 + L(k)][1 − L(ℓ)] < 1 , |1 − L(k)| L(ℓ) < L(k) ,

and the two inequalities obtained by exchanging k and ℓ. It is easy to check that the four H_i are defined by the four equations (1 ± L(k))(1 ± L(ℓ)) = 1, taken with all four choices of sign, whence the claim follows.
The complement of ∪₁⁴ H_i in the (k, ℓ)-plane consists of 14 connected open components (see Figure 3). Since ∂R ⊂ ∪₁⁴ H_i, each of these regions is either in R or its complement, so it is only necessary to check one point from each region to decide. We can reduce to checking only (0, 0), (2, 2), and (0, 2) by noting that the orbits of the three components containing these points under the group generated by k ↔ ℓ, k ↦ k⁻¹, and ℓ ↦ ℓ⁻¹ intersect all 14 regions. The first sentence of the theorem follows.
For the second sentence, observe in (6.2) that E₅(a₁, aₙ, b₁, bₙ) is invariant under the exchanges a₁ ↔ aₙ and b₁ ↔ bₙ. Deduce from this that

Ê₅(k, ℓ) = k Ê₅(k⁻¹, ℓ) = ℓ Ê₅(k, ℓ⁻¹) = kℓ Ê₅(k⁻¹, ℓ⁻¹) .
Now by Lemma 5.8 we know that Ê₅(k, ℓ) = 1 − Φ₁²/Q, where we define Q = 8(k + 1)(ℓ + 1) for brevity. Combine this with the preceding identities to obtain

Ê₅ = 1 − Φ₁²/Q = k − Φ₂²/Q = ℓ − Φ₃²/Q = kℓ − Φ₄²/Q ,
as already noted in Lemma 5.8.
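The relations (8.1) and the four forms of Ê₅ can be verified numerically at random points. The sketch below takes Φ₁(k, ℓ) = (k − 1)(ℓ − 1) − 4 and Ê₅ = 1 − Φ₁²/Q with Q = 8(k + 1)(ℓ + 1), as in the text; everything else is mechanical checking.

```python
import random

def phi(i, k, l):
    # the conics of (8.1); Phi_1(k, l) = (k-1)(l-1) - 4
    if i == 1: return (k - 1)*(l - 1) - 4
    if i == 2: return (k - 1)*(l + 3) + 4
    if i == 3: return (k + 3)*(l - 1) + 4
    return ((3*k + 1)*(3*l + 1) - 4) / 3.0

def e5_hat(k, l):
    # first of the four forms: E5-hat = 1 - Phi_1^2 / Q
    Q = 8.0*(k + 1)*(l + 1)
    return 1.0 - phi(1, k, l)**2 / Q

random.seed(2)
for _ in range(200):
    k = random.uniform(-5, 5)
    l = random.uniform(-5, 5)
    if abs(k + 1) < 0.1 or abs(l + 1) < 0.1:
        continue                       # Q = 8(k+1)(l+1) near zero
    Q = 8.0*(k + 1)*(l + 1)
    E = [1.0, k, l, k*l]               # E1-hat, ..., E4-hat
    # the four forms: E5-hat = Ei-hat - Phi_i^2 / Q, i = 1, ..., 4
    for i in (1, 2, 3, 4):
        assert abs(e5_hat(k, l) - (E[i - 1] - phi(i, k, l)**2 / Q)) < 1e-8
    # relations (8.1): Phi_2 = -k Phi_1(1/k, l), etc. (k, l nonzero)
    if abs(k) > 0.1 and abs(l) > 0.1:
        assert abs(phi(2, k, l) + k*phi(1, 1/k, l)) < 1e-8
        assert abs(phi(3, k, l) + l*phi(1, k, 1/l)) < 1e-8
        assert abs(phi(4, k, l) + k*l*phi(1, 1/k, 1/l)) < 1e-8
```

In particular Ê₅ − Ê_i is a negative multiple of Φ_i² when Q > 0, which is exactly why Ê₅ = Ê_i precisely on H_i, as the proof goes on to use.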
These four forms of Ê₅ show that for i = 1, 2, 3, 4, we have Ê₅ = Ê_i precisely on H_i. Now it is not hard to determine which of Ê₁, . . . , Ê₄ is least and which is greatest at any point (k, ℓ). It turns out that near H_i, Ê_i is always either least or greatest, depending on the branch of H_i. Moreover, using the four forms of Ê₅ we easily see that in each shaded region, Ê₅ is either greater or less than the greatest or least of Ê₁, . . . , Ê₄, depending on the sign of Q. This completes the proof of the first two paragraphs.
The key to the proof of the third paragraph is again the four forms of Ê₅, which, coupled with the above discussion, immediately show that λ_min and λ_max are C¹ across the H_i. For the rest, draw a picture dividing the (k, ℓ)-plane into zones according to which of Ê₁, . . . , Ê₄ are least and greatest, and superimpose it on Figure 3.
Corollary 8.3 For a₁b₁ > 0, {AB_T} is positive definite for all unitary T if and only if (k, ℓ) satisfies Ê₅(k, ℓ) > 0, i.e., 16kℓ > (k − 1)²(ℓ − 1)², which defines the open region between the two dotted curves in Figure 3 (the upper right dotted curve corresponds to Theorem 2.1, and the other curve completes its orbit under k ↦ k⁻¹, ℓ ↦ ℓ⁻¹). If a₁b₁ ≤ 0, then there exists a unitary T such that {AB_T} is not positive definite.
Fig. 3 The region R is the union of the shaded areas. E₅ is λ_min in the lightly shaded areas and λ_max in the darkly shaded areas. The two dotted curves graph E₅ = 0; they bound the positive definite region (see Corollary 8.3). The dashed lines are the asymptotes of the hyperbolas: the asymptotes of H₁, H₂, H₃, and H₄ are the pairs (k = 1, ℓ = 1), (k = 1, ℓ = −3), (k = −3, ℓ = 1), and (k = −1/3, ℓ = −1/3), respectively.
Proof of Corollary 8.3. The second sentence is obvious. For the first, note that {AB_T} is positive definite for all T if and only if λ_min > 0, and apply Theorem 8.2 and Eq. (5.5). We remark that one can use (6.4) to see that the curves where Ê₅ = 0 are ℓ = (√k ∓ 1)²/(√k ± 1)², for k ≠ 1.
Part III
9 The strong maximum principle
Consider the differential inequality

D_j[ a_{ij}(x, u) A(|Du|) D_i u ] − B(x, u, Du) ≤ 0 , u ≥ 0 ,   (9.1)
in the domain Ω ⊂ Rⁿ, n ≥ 2, where [a_{ij}(x, u)], i, j = 1, . . . , n, is a continuously differentiable, symmetric coefficient matrix, verifying condition (1.2). We also assume the validity of (A1), (A2) and (B1), given in the introduction.
By the strong maximum principle for (9.1) we mean the statement that if u is a C¹ solution of (9.1) with u(x₀) = 0 for some x₀ ∈ Ω then u ≡ 0 in Ω.
Theorem 8.1′ (Strong maximum principle: modified version of the sufficiency condition of [7, Theorem 8.1]) Suppose that

lim_{ρ→0} ρA′(ρ)/A(ρ) = c > −1   (9.2)

and when c ≠ 0 assume also that the positive definite matrix [a_{ij}] in (1.2) satisfies

√s₀ < (2 + c + 2√(1 + c)) / |c| ,   (9.3)

where

s₀ = sup_{x∈Ω} Λ(x, 0)/λ(x, 0) (< ∞)   (9.4)

and

Λ(x, u) = max{eigenvalues of [a_{ij}] at the point (x, u)} ,
λ(x, u) = min{eigenvalues of [a_{ij}] at the point (x, u)} .

For the strong maximum principle to be valid for (9.1) it is sufficient that either f ≡ 0 in [0, δ), δ > 0, or that

∫_{0⁺} ds / H⁻¹(F(s)) = ∞   (9.5)

is satisfied, where F(u) = ∫₀ᵘ f(s) ds.
Proof. This is almost exactly the same as the proof already given for the sufficiency condition of [7, Theorem 8.1].³ In fact, the proof is word-for-word the same until the assertion at the end of the proof of the comparison [7, Lemma 8.2], that the matrix

{AB} , where A = [a_{ij}(x, u(x))] and B = [b_{ij}(x)] ,

is positive definite, where (in the notation used earlier in this paper)

B = A(|ξ|) I + (A′(|ξ|)/|ξ|) ξ ⊗ ξ , ξ ≠ 0 .

As noted already in the introduction this assertion is in general incorrect. To continue correctly, rewrite B in the form

B = A(|ξ|) C_ν ,

where

C_ν = I + c ν ⊗ ν , c = c(|ξ|) = |ξ| A′(|ξ|)/A(|ξ|) , ν = ξ/|ξ| ,

so that
³ It is assumed that the reader is familiar with the proof of this result. For convenience in this connection, we also supply several errata and improvements to [7]:
Errata:
– Middle of page 33: Theorem 17 → Theorem 5.4.
– Line 6 from top of page 42: identifications … a = 0, → identification.
– Line 7 from foot of page 47: Theorem 8.1 → Theorem 10.1.
– Line 6 from foot of page 47: M = a ≥ 0 → M = a, and u ≥ v − M → u ≥ v − M.
– Line 6 from foot of page 55: a → … .
Improvements:
– Page 39: D → … .
– Delete paragraph before Theorem 8.5.
– Line 8 from top of page 54: ≡ 0 in Rⁿ → ≡ 0 such that |ξ| < b.
{AB} = A(|ξ|) {AC_ν} , A = A(x, u) = [a_{ij}(x, u)] .   (9.6)

From Theorem 3.2 it then follows that {AB} will be positive definite if either c = 0 or if c ≠ 0 and

√s < (2 + c + 2√(1 + c)) / |c| , where s = s(x, u) = Λ(x, u)/λ(x, u) ≥ 1 .   (9.7)
In turn, by (9.3) the condition (9.7) holds when u and ξ are sufficiently small. Consequently, under the assumption (9.3) we see that D_{ξ_j} A_i(x, ξ) is uniformly positive definite when u and ξ are sufficiently small, say u ≤ b₀ and |ξ| ≤ b₀. (Notation as on [7, p. 42], and positive definiteness being understood in the sense of the corresponding quadratic form.)
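The role of condition (9.7) can be explored numerically in the 2 × 2 case: for A = diag(1, s) (so s = Λ/λ) and C_ν = I + c ν ⊗ ν, one can minimize x · {AC_ν} x over unit vectors x and unit directions ν and watch the sign change as √s crosses the bound (2 + c + 2√(1 + c))/|c|. The grid test below is an illustration only (the minimization over ν is done in closed form, reducing the quadratic in cos θ, sin θ to midpoint minus amplitude); it is not part of the proof.

```python
import math

def min_form(s, c, steps=4000):
    # min over unit x and unit nu of  x.Ax + c (x.A nu)(nu.x),  A = diag(1, s).
    # For fixed x at angle phi, the minimum over nu is midpoint - amplitude
    # of a sinusoid in 2*theta; we then scan phi on a grid.
    best = float("inf")
    for i in range(1, steps):
        phi = 0.5*math.pi*i/steps
        C = math.cos(phi)**2
        D = s*math.sin(phi)**2
        amp = math.hypot(0.5*(C - D),
                         0.5*(1 + s)*math.sin(phi)*math.cos(phi))
        best = min(best, (C + D) + c*0.5*(C + D) - abs(c)*amp)
    return best

for c in (-0.75, 3.0):
    T = (2 + c + 2*math.sqrt(1 + c)) / abs(c)   # bound for sqrt(s) in (9.7)
    assert min_form((0.9*T)**2, c) > 0          # sqrt(s) below T: definite
    assert min_form((1.1*T)**2, c) < 0          # sqrt(s) above T: indefinite
```

The sign change on both sides of the bound, for c of either sign, is consistent with the complementary roles of Theorems 3.1 and 3.2 invoked here and in Remarks 9.1.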
Accordingly, using [7, Theorem 10.1] with the parameter b = b₀ as above, we can state a weak version of [7, Lemma 8.2], as follows.
Lemma 8.2′ (Comparison lemma) Assume (B1). Let u and v be respectively solutions of (9.1) and of

D_i[ a_{ij}(x, u(x)) A(|Dv|) D_j v ] − κΦ(|Dv|) − f(v) ≥ 0

in a bounded domain Ω. Suppose that u and v are continuous in Ω̄, and that

0 ≤ u ≤ b₀ , 0 < |Dv| ≤ b₀ in Ω , and u ≥ v − M on ∂Ω ,

where M is constant. Then u ≥ v − M in Ω.
The final step in the proof of Theorem 8.1′, as in the proof of [7, Theorem 1.1], is to apply the standard Hopf comparison technique. This technique depends on (i) the construction of a tangent ball B to the set where u ≡ 0, together with (ii) an appropriate choice of the value m for the comparison function v. The ball B can however be chosen arbitrarily small without affecting the proof. Then u will be arbitrarily small in B (where the ultimate comparison takes place), while simultaneously forcing m (< u) to be small (see [7, page 33]). Then in turn |Dv| will be arbitrarily small (see [7, Proposition 4.1]).
The remainder of the proof of sufficiency is then exactly as on [7, p. 42], except one now relies on Lemma 8.2′ (with M = 0) instead of on the originally given form of this lemma. This completes the proof of the strong maximum principle for (9.1).
Remarks 9.1 When c ≠ 0 in (9.2) and

√s₀ > (2 + c + 2√(1 + c)) / |c| ,   (9.8)

with c and s₀ given as in Theorem 8.1′, the proof given above fails, since by Theorem 3.1 the matrix {AB} is then indefinite for some directions of the vector Du and for some points x ∈ Ω. Of course exactly such points and directions occur when the normal at the tangent point x = x₀ is such a direction. Thus the proof of Theorem 8.1′ fails in this case, since x₀ could be any point in Ω and the normal could have any direction, depending on the particular outcome of the Hopf construction.
It is still an open question whether [7, Theorem 8.1] fails for values s₀ satisfying (9.8). We have not been able to find a counterexample for such cases, though it may be conjectured that the condition (9.3) is in fact necessary.
If a_{ij}(x, u) = a(x, u) δ_{ij}, where a : Ω × R⁺₀ → R⁺ is of class C¹, then the differential operator in (9.1) has the variational form

div[ a(x, u) A(|Du|) Du ] .

For this special case, [7, Theorem 8.1] continues to hold without the help of (9.3), since then [a_{ij} b_{jk}] = a(x, u)[b_{ik}] is of course positive definite without further argument.
Condition (9.2) applies to the p-Laplace operator A(ρ) = ρ^{p−2}, p > 1, with c = p − 2. In this case, when c ≠ 0, namely when p ≠ 2, the condition (9.3) takes the explicit form

√s₀ < (p + 2√(p − 1)) / |p − 2| .   (9.9)
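As a quick check, the value c = p − 2 can be recovered by a finite-difference probe of ρA′(ρ)/A(ρ) for A(ρ) = ρ^{p−2}, and the bound in (9.9) agrees identically with the general bound of (9.3); the probe itself is our illustrative construction.

```python
import math

def c_limit(p, rho=1e-6):
    # rho A'(rho)/A(rho) for A(rho) = rho^(p-2), by central differences
    # (for this A the quotient is constant in rho, equal to p - 2)
    A = lambda r: r**(p - 2)
    h = rho*1e-3
    dA = (A(rho + h) - A(rho - h)) / (2*h)
    return rho*dA/A(rho)

for p in (1.5, 3.0, 5.0):
    c = p - 2.0
    assert abs(c_limit(p) - c) < 1e-4
    if p != 2:
        lhs = (2 + c + 2*math.sqrt(1 + c)) / abs(c)      # bound of (9.3)
        rhs = (p + 2*math.sqrt(p - 1)) / abs(p - 2)      # bound of (9.9)
        assert abs(lhs - rhs) < 1e-12
```

The identity behind the last assertion is 2 + c + 2√(1 + c) = p + 2√(p − 1) for c = p − 2.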
Moreover, if c = 0 in (9.2), as occurs for example when A(ρ) ≡ 1, i.e., for the Laplace operator, or when A(ρ) = 1/√(1 + ρ²), i.e., the mean curvature operator, then condition (9.3) is empty and so Theorem 8.1′, and in turn [7, Theorem 8.1], is correct even with no additional conditions on [a_{ij}] outside of positive definiteness and regularity! For the mean curvature type inequality we have the following formal statement.
Theorem 9.2 Let (9.5) be satisfied. Then the strong maximum principle is valid for the mean curvature type differential inequality

D_j[ a_{ij}(x, u) D_i u / √(1 + |Du|²) ] − B(x, u, Du) ≤ 0 , u ≥ 0 .   (9.10)

Here it is worth noting explicitly that (9.10) need not be elliptic when

|Du| > 2r/(r² − 1) , r = ⁴√s ,

where s = s(x, u) is given by (9.7).
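The threshold 2r/(r² − 1) with r = ⁴√s can be confirmed against (9.7): for the mean curvature operator one has c(ρ) = −ρ²/(1 + ρ²), and at |Du| = 2r/(r² − 1) the bound (2 + c + 2√(1 + c))/|c| of (9.7) equals √s exactly. The check below is a numerical illustration of this identity.

```python
import math

def c_of(rho):
    # c(rho) = rho A'(rho)/A(rho) for A(rho) = (1 + rho^2)^(-1/2)
    return -rho*rho/(1.0 + rho*rho)

for s in (2.0, 5.0, 16.0, 100.0):
    r = s**0.25
    rho_star = 2.0*r/(r*r - 1.0)   # stated ellipticity threshold for |Du|
    c = c_of(rho_star)
    # at rho_star, sqrt(s) coincides with the bound of (9.7):
    assert abs(math.sqrt(s) - (2 + c + 2*math.sqrt(1 + c))/abs(c)) < 1e-9
```

For |Du| below this value the bound in (9.7) strictly exceeds √s, and ellipticity is guaranteed; beyond it the condition fails.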
Finally, the validity of Theorem 8.1′ can of course also be asserted if the differential inequality (9.1) is assumed to be elliptic for all arguments (x, u, Du) ∈ Ω × R⁺₀ × Rⁿ such that 0 < u < δ, 0 < |Du| < δ for some δ > 0.
10 The compact support principle
The compact support principle for the differential inequality

D_j[ a_{ij}(x, u) A(|Du|) D_i u ] − B(x, u, Du) ≥ 0 , u ≥ 0 in Ω ,   (10.1)
has been treated earlier in [7, Theorem 8.5]. However, just as was the case for the strong maximum principle, the validity of Theorem 8.5 requires further hypotheses on the matrix [a_{ij}], analogous to the conditions (9.3)–(9.4) already given for Theorem 8.1′. As in [7] we suppose that Ω is an exterior domain, with Ω ⊃ Ω_R = {x ∈ Rⁿ : |x| > R}; that (A1), (A2) hold; while (B1) is replaced by
(B2) B ∈ C(Ω × R⁺₀ × Rⁿ) and

B(x, u, ξ) ≥ −κΦ(|ξ|) + g(u)

for x ∈ Ω, u ≥ 0, and all ξ ∈ Rⁿ with |ξ| ≤ 1, where κ > 0, and the nonlinearity g is continuous in R⁺₀ and nondecreasing on some interval (0, δ), δ > 0, with g(0) = 0.
By the compact support principle for (10.1) we mean the statement that if u is a nonnegative C¹ distribution solution of (10.1) in an exterior domain Ω, with u(x) → 0 as |x| → ∞, then u has compact support in Ω.
Theorem 8.5′ (Compact support principle: modified version of the sufficiency condition of [7, Theorem 8.5]) Assume that

limsup_{|x|→∞, u→0} Λ(x, u) < ∞ ,   (10.2)

s₁ = limsup_{|x|→∞, u→0} Λ(x, u)/λ(x, u) < ∞ ,   (10.3)

where Λ(x, u), λ(x, u) are given in Theorem 8.1′.
Finally suppose that (9.2) holds and also (when c ≠ 0) that

√s₁ < (2 + c + 2√(1 + c)) / |c| .   (10.4)

Then for the compact support principle to hold for (10.1) it is sufficient that (B2) be satisfied with g(u) > 0 for u > 0 and with

∫_{0⁺} ds / H⁻¹(G(s)) < ∞ ,   (10.5)

where G(u) = ∫₀ᵘ g(s) ds.
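The difference between the divergence condition (9.5) and the convergence condition (10.5) is easy to see in the model Laplacian case, where Φ(t) = t, H(t) = t²/2 and hence H⁻¹(G) = √(2G); the Laplacian identification and the power nonlinearity g(u) = u^q below are illustrative assumptions. For g(u) = u^q one has G(u) = u^{q+1}/(q + 1), and the integral in (10.5) converges exactly when q < 1.

```python
import math

def tail_integral(q, eps, n=20000):
    # integral from eps to 1 of ds / H^{-1}(G(s)), with H^{-1}(G) = sqrt(2G)
    # and G(s) = s^(q+1)/(q+1) (model Laplacian case, g(u) = u^q),
    # by the trapezoidal rule on a log-spaced grid.
    xs = [math.exp(math.log(eps)*(1.0 - i/n)) for i in range(n + 1)]
    f = [1.0/math.sqrt(2.0*x**(q + 1)/(q + 1)) for x in xs]
    return sum(0.5*(f[i] + f[i + 1])*(xs[i + 1] - xs[i]) for i in range(n))

# q = 1/2: (10.5) holds -- the integral stays bounded as eps -> 0
assert tail_integral(0.5, 1e-6) < tail_integral(0.5, 1e-9) < 5.0
# q = 2: (9.5) holds instead -- the integral blows up as eps -> 0
assert tail_integral(2.0, 1e-9) > 100.0 * tail_integral(2.0, 1e-3)
```

Thus sublinear nonlinearities near u = 0 produce compact support (dead cores), while superlinear ones fall under the strong maximum principle regime of Section 9.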
Proof. This is again essentially the same as the proof of the sufficiency part of [7, Theorem 8.5], of course with modifications due to the lack of global ellipticity.
First, at the foot of [7, p. 45] we replace C by an arbitrarily small value C̃ < C. Then by the dead core lemma [7, Lemma 7.1], with however f(u) replaced by g(u), it follows that ε = w(C̃) → 0 as C̃ → 0, while also for the central comparison function v we have (see the top of page 46)

v(R) = w(C̃) = ε , |Dv| ≤ H⁻¹(G(ε)) ,

D_i[ a_{ij}(x, u(x)) A(|Dv|) D_j v ] + κΦ(|Dv|) − g(v) ≤ 0   (10.6)

(here one uses (10.2)).
Moreover, we can assume that u ≤ ε in Ω, see the paragraph following (8.16) in [7]. The function z = v − u then satisfies, as in [7], the condition |z| ≤ ε in Ω, and correspondingly 0 < ε < δ. Of course, also |x| can be made as large as we wish by taking the radius R₀ suitably large.
Proceeding to Case 1, and noting that Λ(x, u)/λ(x, u) ≤ s₁ because ε can be arbitrarily small, one can then argue exactly as in [7], but using now the new version Theorem 8.1′ of the strong maximum principle, to show that in fact this case cannot occur.
In Case 2 the boundary point lemma still applies either when c = 0 in (9.2), or when c ≠ 0 and (10.3)–(10.4) are satisfied. Again this case cannot occur.
Finally for the crucial Case 3, by using (10.3) and (10.4) it follows as in Section 9 that D_{ξ_j} A_i(x, ξ) is uniformly positive definite when u and |Dv| are smaller than b₀, and |x| is suitably large.
Next, exactly as in [7], we can apply [7, Theorem 10.1] to get a variant of Lemma 8.2′ in which u and v respectively satisfy the reverse inequalities (10.1) and (10.6), while the inequality u ≥ v − M is replaced by v ≥ u − M.
To complete the argument for Case 3, it is now enough to recall that u ≤ ε and 0 < |Dv| ≤ H⁻¹(G(ε)), and to take ε suitably small and |x| suitably large so that the above variant of Lemma 8.2′ applies. This being done, the remaining part of Case 3 is then exactly as in [7], and accordingly this case also cannot occur.
It is thus proved that z ≥ 0 in Ω, and in turn u ≡ 0 for |x| > R₁, completing the proof.
The case c = 0 in (9.2) can be treated exactly as in Section 9, leading to the following result for the mean curvature type inequality.
Theorem 10.1 Let (B2) hold, and assume (10.5) is satisfied with g(u) > 0 for u > 0. Then the compact support principle is valid for the mean curvature type differential inequality

D_j[ a_{ij}(x, u) D_i u / √(1 + |Du|²) ] − B(x, u, Du) ≥ 0 , u ≥ 0 in Ω .   (10.7)
Acknowledgements We wish to particularly thank Roger Horn for his valuable advice in writing the paper.
References
[1] N. Alikakos and P. W. Bates, Estimates for the eigenvalues of the Jordan product of Hermitian matrices, Linear Algebra Appl. 57, 41–56 (1984).
[2] N. Alikakos and P. W. Bates, Estimates for the eigenvalues of the Jordan product of Hermitian matrices. Erratum, Linear Algebra Appl. 65, 282 (1985).
[3] K. Gustafson and D. K. M. Rao, Numerical Range: The Field of Values of Linear Operators and Matrices (Springer-Verlag, New York, 1997).
[4] R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, Cambridge, U.K., 1985).
[5] D. W. Nicholson, Eigenvalue bounds for AB + BA with A, B positive definite matrices, Linear Algebra Appl. 24, 173–183 (1979).
[6] P. Pucci and J. Serrin, A note on the strong maximum principle for elliptic differential inequalities, J. Math. Pures Appl. 79, 57–71 (2000).
[7] P. Pucci and J. Serrin, The strong maximum principle revisited, J. Differential Equations 196, 1–66 (2004).
[8] P. Pucci and J. Serrin, The strong maximum principle revisited. Erratum, J. Differential Equations 207, 226–227 (2004).
[9] P. Pucci, J. Serrin, and H. Zou, A strong maximum principle and a compact support principle for singular elliptic inequalities, J. Math. Pures Appl. 78, 769–789 (1999).
[10] V. Pták, The Kantorovich inequality, Amer. Math. Monthly 102, 820–821 (1995).
[11] W. G. Strang, Eigenvalues of Jordan products, Amer. Math. Monthly 69, 37–40 (1962).
[12] O. Taussky (Todd), Research Problem 2, Bull. Amer. Math. Soc. (N. S.) 66, 275 (1960).