
Introduction to Cartesian Tensors,
the Einstein Summation Notation,
and Extensions to the Notation

Alan H. Barr
California Institute of Technology

An Order-independent Representation

The Einstein summation notation is an algebraic short-hand for expressing multicomponent Cartesian quantities, manipulating them, simplifying them, and expressing the expressions in a computer language short-hand. It manipulates expressions involving determinants, rotations, multidimensional derivatives, matrix inverses, cross products and other multicomponent mathematical entities.

Esn can help you write closed form expressions for such things as deriving and computing messy multidimensional derivatives of expressions, expressing the multidimensional Taylor series and chain rule, expressing the rotation axis of a 3D rotation matrix, finding the representation of a matrix in another coordinate system, deriving a fourth (4D) vector perpendicular to 3 others, or rotating around N-2 axes simultaneously, in N dimensions. Esn is an "expert friendly" notation: after an initial investment of time, it converts difficult problems into problems with workable solutions. It does not make "easy" problems easier, however.

The algebraic terms explicitly delineate the numerical components of the equations so that the tensors can be manipulated by a computer. A C-preprocessor has been implemented for this purpose, which converts embedded Einstein summation notation expressions into C language expressions.

In conventional matrix and vector notation, the multiplication order contains critical information for the calculation. In the Einstein subscript form, however, the terms in the equations can be arranged and factored in different orders without changing the algebraic result. The ability to move the factors around as desired is a particularly useful symbolic property for derivations and manipulations.

1 Mathematical Background: Tensors

In general, abstract "coordinate independent" multidimensional mathematical operators and objects can be categorized into two types: linear ones, like a vector or a matrix, and non-linear ones. Although there are many different definitions, for this document it will suffice to think of tensors as the set of (multi)linear multidimensional mathematical objects; whatever the nonlinear ones are, they are not tensors. Various other tensor definitions exist; for instance, a tensor can be thought of as the mathematical objects which satisfy certain transformation rules between different coordinate systems.

Different numerical representations of a Cartesian tensor are obtained by transforming between Cartesian coordinate systems, which have orthonormal basis vectors, while a generalized tensor is transformed in non-orthonormal coordinates.

The components of a Cartesian tensor are magnitudes projected onto the orthogonal Cartesian basis vectors; three dimensional $N$-th order Cartesian tensors are represented in terms of $3^N$ components.


The arrays of numbers are not the tensors themselves. A given set of numbers represents the tensors, but only in a particular coordinate system. The numerical representation of a vector, such as

$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$,

only has geometric meaning when we also know the coordinate system's basis vectors: the vector goes one unit in the first basis vector direction, two units in the second basis vector direction, and three units along the third basis vector direction. Thus, a tensor is an abstract mathematical object that is represented by different sets of numbers that depend on the particular coordinate system that has been chosen; the same tensor will be represented by different numbers when we change the coordinate system.

Figure 1: Non-Cartesian coordinates. Tangent vectors are "contravariant" noncartesian tensors.

Noncartesian coordinates need two transformation rules, not just one. For instance, in Figure 1 we're given a circle and its tangent vectors. If we stretch the whole figure in the horizontal direction (this is equivalent to lengthening the horizontal Cartesian basis vector so it is not a unit vector anymore, so the coordinates are now noncartesian), we can see that the stretched circle and the stretched vectors are still tangent to one another. The tangent vectors transformed properly with the stretching transformation. They are "contravariant" noncartesian tensors.

Figure 2: Normal vectors on the sphere.

Figure 3: Unlike the tangent vectors, the normal vectors, when the figure is stretched in the horizontal direction, are not the normal vectors for the new surface!

Generalized tensor transformations (for noncartesian coordinates) are locally linear transformations that come in two types: contravariant tensors transform with the tangent vectors to a surface embedded in the transformed space, while covariant tensors transform with the normal vectors.

Contravariant indices are generally indicated by superscripts, while covariant indices are indicated by subscripts.

Good treatments of these conventions may be found in textbooks describing tensor analysis and three-dimensional mechanics (see [SEGEL] and [MISNER, THORNE, WHEELER]). Generalized tensors are outside the scope of this document.

Figure 4: Normal vectors are covariant tensors; they are transformed by the inverse transpose of the stretching transformation matrix.

For Cartesian transformations, such as pure rotation, tangent vectors transform via the same matrix as the normal vectors; there is no difference between the covariant and contravariant representations. Thus, subscript indices suffice for Cartesian tensors.

A zero-th order tensor is a scalar, whose single component is the same in all coordinate systems, while first order tensors are vectors, and second order tensors are represented with matrices. Higher order tensors arise frequently. Intuitively, if a mathematical object is a "k-th order Cartesian tensor" in an N dimensional Cartesian coordinate system, then

1. The object is an entity which "lives" in the N dimensional Cartesian coordinate system.

2. The object can be represented with k subscripts and $N^k$ components total.

3. The numerical representation of the object is typically different in different coordinate systems.

4. The representations of the object obey the Cartesian form of the transformation rule, to obtain the numerical representations of the same object in another coordinate system. That rule is described in section 3.

1.1 Example 1:

For example, let us consider the component-by-component description of the three-dimensional matrix equation

$a = b + M\, c$

(we utilize one underscore to indicate a vector, two for a matrix, etc. We will occasionally use boldface letters to indicate vectors or matrices.) The above set of equations is actually a set of three one-component equations, valid in any coordinate system in which we choose to represent the components of the vectors and matrices.

Using vertical column vectors, the equation is:

$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} + \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}$

which is equivalent to:

$a_1 = b_1 + m_{11} c_1 + m_{12} c_2 + m_{13} c_3$
$a_2 = b_2 + m_{21} c_1 + m_{22} c_2 + m_{23} c_3$
$a_3 = b_3 + m_{31} c_1 + m_{32} c_2 + m_{33} c_3$

The above equations condense into three instances of one equation,

$a_i = b_i + \sum_{j=1}^{3} m_{ij} c_j, \qquad i = 1, 2, 3.$

The essence of the Einstein summation notation is to create a set of notational defaults so that the summation sign and the range of the subscripts do not need to be written explicitly in each expression. This notational compactness is intended to ease the expression and manipulation of complex equations. With the following rules, the above example becomes:

$a_i = b_i + m_{ij}\, c_j$

1.2 Definition of Terms:

There are two types of subscript indices in the above equation. The subscript "i" is free to vary from one to three on both sides of the above equation, so it is called a free index. The free indices must match on both sides of an equation.

The other type of subscript index in Example 1 is the dummy subscript "j." This subscript is tied to the term inside the summation, and is called a bound index. Sometimes we will place "boxes" around the bound indices, to more readily indicate that they are bound. Dummy variables can be renamed as is convenient, as long as all instances of the dummy variable name are replaced by the new name, and the new names of the subscripts do not conflict with the names of other subscripts, or with variable names reserved for other purposes.

1.3 The Classical Rules for the N dimensional Cartesian Einstein Summation Notation

We remind the reader that an algebraic term is an algebraic entity separated from other terms by additions or subtractions, and is composed of a collection of multiplicative factors. Each factor may itself be composed of a sum of terms. A subexpression of a given algebraic expression is a subtree combination of terms and/or factors of the algebraic expression.

The classical Einstein summation convention (without author-provided extensions) is governed by the following rules:

Scoping Rule: Given a valid summation-notation algebraic expression, the notation is still valid in each of the algebraic subexpressions. Thus, it is valid to re-associate subexpressions: if E, F, and G are valid ESN expressions, then you can evaluate EF and then combine it with G, or evaluate FG and then combine it with E:

$EFG = (EF)G = E(FG)$

Rule 1: A subscript index variable is free to vary from one to N if it occurs exactly once in a term.

Rule 2: A subscript index variable is bound to a term as the dummy index of a summation from one to N if it occurs exactly twice in the term. We will sometimes put boxes around bound variables for clarity.

Rule 3: A term is syntactically wrong if it contains three or more instances of the same subscript variable.

Rule 4: Commas in the subscript indicate that a partial derivative is to be taken. A (free or bound) subscript index after the comma indicates partial derivatives with respect to the default arguments of the function (frequently spatial variables $x_1$, $x_2$, and $x_3$, or whatever x, y, z spatial coordinate system is being used). Partial derivatives with respect to a reserved variable (say t, the time parameter) are indicated by putting the reserved variable after the comma.

With these rules, Example 1 becomes

$a_i = b_i + m_{ij}\, c_j$

The subscript "i" is a free index in each of the terms in the above equation, because it occurs only once in each term, while "j" is a bound index, because it appears twice in the last term. The boxes on the bound indices are not necessary; they are used just for emphasis.

At first, it is helpful to write out the interpretations of some "esn" expressions in full, complete with the summation signs, bound indices, and ranges on the free indices. This procedure can help clarify questions that arise concerning the legality of a particular manipulation. In this way, you are brought back into familiar territory to see what the notation is "really" doing.
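As a concrete illustration of what the implicit summation abbreviates, here is a minimal hand expansion of Example 1 in C (this is not output of the author's esn preprocessor; the array names a, b, m, c simply follow Example 1). The free index i becomes one assignment per component; the bound index j becomes an explicit accumulation loop.

    #include <stdio.h>

    #define N 3

    int main(void)
    {
        double b[N] = {1.0, 2.0, 3.0};
        double c[N] = {0.5, -1.0, 2.0};
        double m[N][N] = {{1, 0, 2}, {0, 3, 0}, {4, 0, 1}};
        double a[N];

        /* a_i = b_i + m_ij c_j : i is free, j is bound (summed). */
        for (int i = 0; i < N; i++) {
            a[i] = b[i];
            for (int j = 0; j < N; j++)
                a[i] += m[i][j] * c[j];
        }

        for (int i = 0; i < N; i++)
            printf("a[%d] = %g\n", i, a[i]);
        return 0;
    }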

1.4 Why only two instances of a subscript index in a term?

$b_j c_j d_j \overset{?}{=} (b_j c_j)\, d_j \overset{?}{=} b_j\, (c_j d_j).$

However,

$\sum_{j=1}^{3} b_j c_j d_j \neq \Big(\sum_{j=1}^{3} b_j c_j\Big)\, d_j$, etc.

The first expression is a number, while the second expression is a vector which equals a scalar $(b \cdot c)$ times vector d. The third expression is a different vector, which is the product of the vector b with the scalar $c \cdot d$; these are not remotely equivalent. Thus, we are limited to two instances of any particular subscript index in a term in order to retain the associative property of multiplication in our algebraic expressions.

Unlike conventional matrix notation, Esn factors are both commutative and associative within a term. For beginners it is particularly important to check Esn expressions for validity with respect to rule 3 and the number of subscript instances in a term.

2 A Few Esn Symbols

The Kronecker delta, or identity matrix, is represented with the symbol $\delta$ with subscripts for the rows and columns of the matrix:

$\delta_{ij} = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}$

The logical "1" symbol is an extended symbol related to the Kronecker delta:

$1_{\text{logical expression}} = \begin{cases} 1 & \text{expression is true} \\ 0 & \text{otherwise} \end{cases}$

So $1_{i=j} = \delta_{ij}$.

The order-three permutation symbol $\epsilon_{ijk}$ is used to manipulate cross products and three dimensional matrix determinants, and is defined as follows:

$\epsilon_{ijk} = \begin{cases} 0 & \text{if any pair of subscripts is equal} \\ +1 & (i,j,k) \text{ is an even perm. of } (1,2,3) \\ -1 & (i,j,k) \text{ is an odd perm. of } (1,2,3) \end{cases}$

An even permutation of a list is a permutation created using an even number of interchange operations of the elements. An odd permutation requires an odd number of interchanges. The six permutations of (1,2,3) may be obtained with successive interchange operations of elements:

$(1,2,3) \to (2,1,3) \to (2,3,1) \to (3,2,1) \to (3,1,2) \to (1,3,2).$

Thus, the even permutations are (1,2,3), (2,3,1), and (3,1,2), while the odd permutations are (2,1,3), (3,2,1), and (1,3,2).

2.1 Preliminary simplification rules

The free-index operator of a vector, matrix or tensor converts a conventional vector expression into the Einstein summation form. It is indicated by parentheses with subscripts on the right. Vectors require one subscript, as in

$(b)_i = b_i$

which is verbosely read as "the i-th component of vector b (in the default Cartesian coordinate system) is the number b sub i."

Two subscripts are needed for matrices:

$(M)_{ij} = M_{ij}$

which can be read as "the ij-th component of matrix M in the default coordinate system is the number M sub i j."

To add two vectors, you add the components:

$(a + b)_i = a_i + b_i$

This can be read as "the i-th component of the sum of vector a and vector b is the number a sub i plus b sub i." (Footnote 1)

(Footnote 1: In the esn C preprocessor, the default ranges are different: #{ c;i = a;i + b;i } (assuming the arrays have been declared over the same range) expands in C to: c[0] = a[0] + b[0]; c[1] = a[1] + b[1]; c[2] = a[2] + b[2].)

To perform matrix-vector multiplications, the second subscript of the matrix must be bound to the index of the vector. Thus, to multiply matrix M by vector b, new bound subscripts are created, as in:

$(M\, b)_i = M_{ij}\, b_j$

which converts to

$(M\, b)_1 = M_{11} b_1 + M_{12} b_2 + M_{13} b_3$
$(M\, b)_2 = M_{21} b_1 + M_{22} b_2 + M_{23} b_3$
$(M\, b)_3 = M_{31} b_1 + M_{32} b_2 + M_{33} b_3$

To perform matrix-matrix multiplications, the second subscript of the first matrix must be bound to the first subscript of the second matrix. Thus, to multiply matrix A by matrix B, you create indices like:

$(A\, B)_{ij} = A_{ik}\, B_{kj}$

which converts to:

$(A\, B)_{11} = A_{11} B_{11} + A_{12} B_{21} + A_{13} B_{31}$
$(A\, B)_{21} = A_{21} B_{11} + A_{22} B_{21} + A_{23} B_{31}$
$(A\, B)_{31} = A_{31} B_{11} + A_{32} B_{21} + A_{33} B_{31}$
$(A\, B)_{12} = A_{11} B_{12} + A_{12} B_{22} + A_{13} B_{32}$
$(A\, B)_{22} = A_{21} B_{12} + A_{22} B_{22} + A_{23} B_{32}$
$(A\, B)_{32} = A_{31} B_{12} + A_{32} B_{22} + A_{33} B_{32}$
$(A\, B)_{13} = A_{11} B_{13} + A_{12} B_{23} + A_{13} B_{33}$
$(A\, B)_{23} = A_{21} B_{13} + A_{22} B_{23} + A_{23} B_{33}$
$(A\, B)_{33} = A_{31} B_{13} + A_{32} B_{23} + A_{33} B_{33}$

2.1.1 Rules involving $\epsilon_{ijk}$ and $\delta_{ij}$

The delta rule

When a Kronecker delta subscript is bound in a term, the expression simplifies by (1) eliminating the Kronecker delta symbol, and (2) replacing the bound subscript in the rest of the term by the other subscript of the Kronecker delta. For instance,

$v_{ik} = \delta_{ij}\, M_{jk}$

becomes

$v_{ik} = M_{ik},$

and

$v_k = \delta_{ij}\, a_i\, M_{jk}$

becomes

$v_k = a_j\, M_{jk}$

or equivalently, $a_i M_{ik}$. Note that in standard notation, $v_k = (M^T a)_k$.

Rules for the order-N permutation symbol

Interchanges of subscripts flip the sign of the permutation symbol:

$\epsilon_{i_1 i_2 \ldots i_N} = -\epsilon_{i_2 i_1 \ldots i_N}$

Repeated indices eliminate the permutation symbol (since lists with repeated elements are not permutations):

$\epsilon_{iijk\ldots\ell} = 0$   (note the repeat of index i).

This is related to the behavior of determinants, where interchanges of columns change the sign, or repeated columns send the determinant to zero.
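Both symbols are easy to evaluate numerically. Here is a small C sketch (the function names kdelta and eps3 are illustrative, not part of the author's preprocessor) that computes $\delta_{ij}$ and $\epsilon_{ijk}$ for indices in {1,2,3}; it uses the standard closed form $(i-j)(j-k)(k-i)/2$, which reproduces the case definition above.

    #include <stdio.h>

    /* Kronecker delta: 1 if i == j, 0 otherwise. */
    static int kdelta(int i, int j) { return i == j; }

    /* Order-three permutation symbol for indices in {1,2,3}:
       +1 for even permutations of (1,2,3), -1 for odd ones,
       0 when any index repeats. */
    static int eps3(int i, int j, int k)
    {
        return (i - j) * (j - k) * (k - i) / 2;
    }

    int main(void)
    {
        printf("eps3(1,2,3) = %d\n", eps3(1, 2, 3));   /* +1 */
        printf("eps3(2,1,3) = %d\n", eps3(2, 1, 3));   /* -1 */
        printf("eps3(1,1,2) = %d\n", eps3(1, 1, 2));   /*  0 */
        printf("kdelta(2,2) = %d\n", kdelta(2, 2));    /*  1 */
        return 0;
    }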

The order-three permutation symbol subscript rule

For the special case of an order-three permutation symbol, the following identity holds:

$\epsilon_{ijk} = \epsilon_{jki}$

The order-3 epsilon-delta rule

The order-3 $\epsilon$-$\delta$ rule allows the following subscript simplification (when the first subscripts of two permutation symbols match):

$\epsilon_{ijk}\, \epsilon_{ipq} = \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}$

In other words, the combination of two permutation symbols with the same first subscript is equal to an expression involving Kronecker deltas and the four free indices j, p, k, and q. In practice, the subscripts in the permutation symbols are permuted via the above relations, in order to apply the $\epsilon$-$\delta$ rule. A derivation to verify the identity is provided in Appendix A.

3 Transformation Rules for Cartesian Tensors

We express vectors, matrices, and other tensors in different Cartesian coordinate systems, without changing which tensor we are representing. The numerical representation (of the same tensors) will generally be different in the different coordinate systems.

We now derive the transformation rules for Cartesian tensors in the Einstein summation notation, to change the representation from one Cartesian coordinate system to another. Some people in fact use these transformation rules as the definition of a Cartesian tensor: any mathematical object whose representation transforms like the Cartesian tensors do is a Cartesian tensor.

Consider a three dimensional space, with right-handed (Footnote 2) orthonormal (Footnote 3) basis vectors ${}_1e$, ${}_2e$ and ${}_3e$. In other words, each basis vector has unit length, is perpendicular to the other basis vectors, and the 3D vectors are named subject to the right hand rule. Right handed unit basis vectors satisfy:

${}_1e \times {}_2e = {}_3e$
${}_2e \times {}_3e = {}_1e$, and
${}_3e \times {}_1e = {}_2e$.

In addition, we note that

${}_ie \cdot {}_je = \delta_{ij}$.

This result occurs because the dot product of perpendicular vectors is zero, and the dot product of identical unit vectors is 1.

(Footnote 2: The author strongly recommends avoiding the computer graphics convention of left handed coordinates, and recommends performing all physically based and geometric calculations in right handed coordinates. That way, you can use the last 300 years of mathematics texts as references.)

(Footnote 3: mutually perpendicular unit vectors)

We also consider another set of right-handed orthonormal basis vectors for the same three dimensional space, indicated with "hats," vectors ${}_1\hat{e}$, ${}_2\hat{e}$ and ${}_3\hat{e}$, which will also have the same properties:

${}_i\hat{e} \cdot {}_j\hat{e} = \delta_{ij}$, etc.

3.1 Transformations of vectors

Consider a vector a, and express it with respect to both bases, with the (vertical column) array of numbers $a_1, a_2, a_3$ in one coordinate system, and with the array of different numbers $\hat{a}_1, \hat{a}_2, \hat{a}_3$ in the other:

vector $a = a_1\, {}_1e + a_2\, {}_2e + a_3\, {}_3e = a_i\, {}_ie$

and also, for the same vector,

$a = \hat{a}_1\, {}_1\hat{e} + \hat{a}_2\, {}_2\hat{e} + \hat{a}_3\, {}_3\hat{e} = \hat{a}_i\, {}_i\hat{e}$

Since the two expressions represent the same vector, they are equal:

$a_i\, {}_ie = \hat{a}_i\, {}_i\hat{e}$   (sum over i)

We derive the relation between $\hat{a}_i$ and $a_i$ by dotting both sides of the above equation with basis vector ${}_ke$.

Thus, by commuting and reassociating, and the fact that the unit basis vectors are mutually perpendicular, we obtain:

$(a_i\, {}_ie) \cdot {}_ke = (\hat{a}_i\, {}_i\hat{e}) \cdot {}_ke$, or
$a_i\, ({}_ie \cdot {}_ke) = \hat{a}_i\, ({}_i\hat{e} \cdot {}_ke)$, or
$a_i\, \delta_{ik} = \hat{a}_i\, ({}_i\hat{e} \cdot {}_ke)$, or
$a_k = ({}_ke \cdot {}_i\hat{e})\, \hat{a}_i$

The above expression, in conventional notation, becomes:

$a_1 = \sum_{i=1}^{N} ({}_1e \cdot {}_i\hat{e})\, \hat{a}_i$
$a_2 = \sum_{i=1}^{N} ({}_2e \cdot {}_i\hat{e})\, \hat{a}_i$
$a_3 = \sum_{i=1}^{N} ({}_3e \cdot {}_i\hat{e})\, \hat{a}_i$

Thus, a matrix T re-expresses the numerical representation of a vector relative to a new basis via

$a_i = T_{ij}\, \hat{a}_j$

where

$T_{ij} = ({}_ie \cdot {}_j\hat{e})$

Note that the transpose of T is its inverse, and det T = 1, so T is a rotation matrix.

3.2 Transformations of matrices

The nine basis "vectors" of (3 dimensional) matrices (2nd order tensors) are given by the nine outer products ${}_ke\, {}_\ell e$ of the previous basis vectors. This uses "dyadic" notation for the outer product of the two basis vectors ${}_ke$ and ${}_\ell e$. The outer product of two vectors is a matrix, whose ij-th component is given by:

$(a\, b)_{ij} = a_i\, b_j$

Thus, the nine quantities which form the nine dimensional basis "vectors" of a three-by-three matrix are the outer products of the original basis vectors:

$M = M_{ij}\; {}_ie\, {}_je$

By dotting twice, by ${}_pe$ and ${}_qe$, a similar derivation shows us that the transformation rule for a matrix is:

$M_{ij} = T_{ip}\, T_{jq}\, \hat{M}_{pq}$

For instance, with the standard bases, where

${}_1e = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad {}_2e = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},$ etc.,

note that

${}_1e\, {}_1e = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad {}_1e\, {}_2e = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$ etc.

3.3 Transformations of N-th order Tensors

You would not be surprised, then, to imagine the basis elements of 3rd order tensors as being given by the triple outer products ${}_ie\, {}_je\, {}_ke$, with a transformation rule (from hatted coordinates to unhatted)

$A_{ijk} = T_{ip}\, T_{jq}\, T_{kr}\, \hat{A}_{pqr}$

Also, not surprisingly, given an N-th order tensor quantity X, it will have analogous basis elements and will transform with N copies of the T matrix, via:

$X_{i_1 i_2 \ldots i_N} = T_{i_1 j_1} \cdots T_{i_N j_N}\, \hat{X}_{j_1 \ldots j_N}$
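A brief C sketch of these transformation rules (the array layout and function names are my own, for illustration): the two orthonormal bases are stored as rows of e[3][3] and ehat[3][3], from which $T_{ij} = {}_ie \cdot {}_j\hat{e}$ is formed and then applied as $a_i = T_{ij}\hat{a}_j$ and $M_{ij} = T_{ip}T_{jq}\hat{M}_{pq}$.

    /* Basis vectors are stored as rows: e[i] is the i-th basis vector. */
    static void make_T(const double e[3][3], const double ehat[3][3],
                       double T[3][3])
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                T[i][j] = 0.0;
                for (int k = 0; k < 3; k++)
                    T[i][j] += e[i][k] * ehat[j][k];  /* ie . j-ehat */
            }
    }

    /* a_i = T_ij ahat_j */
    static void xform_vector(const double T[3][3], const double ahat[3],
                             double a[3])
    {
        for (int i = 0; i < 3; i++) {
            a[i] = 0.0;
            for (int j = 0; j < 3; j++)
                a[i] += T[i][j] * ahat[j];
        }
    }

    /* M_ij = T_ip T_jq Mhat_pq */
    static void xform_matrix(const double T[3][3], const double Mhat[3][3],
                             double M[3][3])
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) {
                M[i][j] = 0.0;
                for (int p = 0; p < 3; p++)
                    for (int q = 0; q < 3; q++)
                        M[i][j] += T[i][p] * T[j][q] * Mhat[p][q];
            }
    }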

4 Axis-Angle Representations of 3D Rotation

One very useful application of Esn is in the representation and manipulation of rotations. In three dimensions, rotation can take place only around one vector axis.

4.1 The Projection Operator

The first operator needed to derive the axis-angle matrix formulation is the projection operator. To project out vector b from vector a, let

$a \setminus b = a - \alpha\, b$

such that the result is perpendicular to b. The projection operation "$a \setminus b$" can be read as vector a "without" vector b. Note that

$\alpha = \dfrac{a \cdot b}{b \cdot b}.$

Figure 5: Visual demonstration that rotation is a linear operator. Just rotate the document and observe that the relationship holds: Rot(A+B) = Rot(A) + Rot(B), no matter which rotation operates on the vectors.

4.2 Rotation is a Linear Operator

We will be exploiting linearity properties for the derivation:

$\mathrm{Rot}(a + b) = \mathrm{Rot}(a) + \mathrm{Rot}(b)$

and

$\mathrm{Rot}(\alpha\, a) = \alpha\, \mathrm{Rot}(a).$

See Figure 5.

Figure 6: Two-d rotation of basis vectors ${}_1e$ and ${}_2e$ by angle $\theta$ are displayed. $\mathrm{Rot}({}_1e) = \cos\theta\, {}_1e + \sin\theta\, {}_2e$, while $\mathrm{Rot}({}_2e) = -\sin\theta\, {}_1e + \cos\theta\, {}_2e$.

4.3 Two-D Rotation.

Two dimensional rotation is easily derived, using linearity. We express the vector in terms of its basis vector components, and then apply the rotation operator. Since

$a = a_1\, {}_1e + a_2\, {}_2e$

therefore

$\mathrm{Rot}(a) = \mathrm{Rot}(a_1\, {}_1e + a_2\, {}_2e).$

Using linearity,

$\mathrm{Rot}(a) = a_1\, \mathrm{Rot}({}_1e) + a_2\, \mathrm{Rot}({}_2e)$

From Figure 6 we can see that

$\mathrm{Rot}({}_1e) = \cos\theta\, {}_1e + \sin\theta\, {}_2e$

and

$\mathrm{Rot}({}_2e) = -\sin\theta\, {}_1e + \cos\theta\, {}_2e.$

Thus,

$\mathrm{Rot}(a) = (\cos\theta\, a_1 - \sin\theta\, a_2)\, {}_1e + (\sin\theta\, a_1 + \cos\theta\, a_2)\, {}_2e$

Figure 7: The rotation is around unit vector axis r, by right handed angle $\theta$. Note that the vectors r, $a \setminus r$ and $r \times a$ form an orthogonal triple.

4.4 An Orthogonal triple

Given a unit vector r around which the rotation will take place (the axis) we can make three orthogonal basis vectors.

Let the third basis vector ${}_3e = r$. Let the first basis vector, ${}_1e$, be the unit vector in the direction of $a \setminus r$. Note that by the definition of projection, ${}_3e$ is perpendicular to ${}_1e$.

Finally, let ${}_2e = {}_3e \times {}_1e$. It's in the direction of $r \times (a \setminus r)$, which is in the same direction as $r \times a$. Note that

${}_1e \times {}_2e = {}_3e$
${}_2e \times {}_3e = {}_1e$, and
${}_3e \times {}_1e = {}_2e$.

We have a right-handed system of basis vectors.

4.5 Deriving the axis-angle formulation

First, note that vectors parallel to r remain unchanged by rotation around r. We're now ready to derive Rot(a) around r by $\theta$. By definition,

$a = a \setminus r + \alpha\, r$

Thus,

$\mathrm{Rot}(a) = \mathrm{Rot}(a \setminus r + \alpha\, r)$
$= \mathrm{Rot}(a \setminus r) + \alpha\, \mathrm{Rot}(r)$
$= \mathrm{Rot}(|a \setminus r|\, {}_1e) + \alpha\, r$
$= |a \setminus r|\, \mathrm{Rot}({}_1e) + \alpha\, r$
$= |a \setminus r|\, (\cos\theta\, {}_1e + \sin\theta\, {}_2e) + \alpha\, r$
$= \cos\theta\, (a \setminus r) + \sin\theta\, (r \times a) + (a \cdot r)\, r$

4.6 Deriving the components of the rotation matrix

Since rotation is a linear operator, we represent it with a matrix, R. Using Esn, we can easily factor out the components of the matrix, to obtain its components: from the previous equation, we know the i-th component of both sides of the equation is given by:

$(\mathrm{Rot}(a))_i = (\cos\theta\, (a \setminus r) + \sin\theta\, (r \times a) + (a \cdot r)\, r)_i$

Thus,

$R_{ij}\, a_j = \cos\theta\, (a \setminus r)_i + \sin\theta\, (r \times a)_i + (a \cdot r)\, r_i$
$= \cos\theta\, (\delta_{ij} - r_i r_j)\, a_j + \sin\theta\, \epsilon_{ikj}\, r_k\, a_j + a_j\, r_j\, r_i$

Note that each term has $a_j$ in it. We put everything on one side of the equation, and factor out the $a_j$ on the right:

$\big(-R_{ij} + \cos\theta\, (\delta_{ij} - r_i r_j) + \sin\theta\, \epsilon_{ikj}\, r_k + r_j r_i\big)\, a_j = 0$

Unfortunately, we can't divide out the $a_j$ term, because j is involved in a sum, from 1 to 3. However, since the above equation is true for all values of $a_j$, the esn factor on the left must be zero. (For instance, we could let $a_j = \delta_{1j}$, $\delta_{2j}$ and $\delta_{3j}$ in sequence.)

Thus, putting $R_{ij}$ to the other side of the equation, the ij-th components of the rotation matrix are given by:

$R_{ij} = \cos\theta\, (\delta_{ij} - r_i r_j) + \sin\theta\, \epsilon_{ikj}\, r_k + r_j r_i$

You can reverse the sign of the permutation symbol to get:

$R_{ij} = \cos\theta\, (\delta_{ij} - r_i r_j) - \sin\theta\, \epsilon_{ijk}\, r_k + r_j r_i$

5 Summary of Manipulation Identities

In this section, a series of algebraic identities are listed with potential applications in multidimensional mathematical modeling.

List of Einstein Summation Identities:

(1) The free-index operator of a vector, matrix or tensor converts a conventional vector expression into the Einstein form. It is indicated by parentheses with subscripts, requiring one subscript for vectors, as in $(b)_i = b_i$, and two subscripts for matrices, as in $(M)_{ij} = M_{ij}$. You can think of $(\ )_i$ as an operator which dots the argument by the i-th basis vector. Sometimes we will use the free index operator to select column vectors of a matrix, such as the following:

$(E)_i = {}_ie$

In this case ${}_1e$ is the first column vector of the matrix E, ${}_2e$ is the second column, etc. (Footnote 4)

(Footnote 4: Note that ${}_ie \neq (e)_i$. The right hand side is an i-th scalar, while the left side is an i-th vector.)

(2) The dot product of two vectors a and b is expressed via: $a \cdot b = a_i b_i$.

(2a) The outer product of two vectors a and b is expressed via: $(a\, b)_{ij} = a_i b_j$.

(3) In 3-D the vector cross product of two vectors a and b is expressed via:

$(a \times b)_i = \epsilon_{ijk}\, a_j\, b_k$

(with boxes optionally placed on the matching bound indices j and k), or

$(a \times b)_1 = a_2 b_3 - a_3 b_2$
$(a \times b)_2 = a_3 b_1 - a_1 b_3$
$(a \times b)_3 = a_1 b_2 - a_2 b_1$

(in which the free index of the output cross product vector becomes the first subscript of the permutation symbol, while two new bound indices are created).

(4) $\delta_{ij}\, \delta_{jk} = \delta_{ik} = \delta_{ki}$.
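Identity (3) translates directly into code. Here is a short C sketch (names are illustrative) that forms $(a \times b)_i = \epsilon_{ijk} a_j b_k$ by looping over the two bound indices; the closed-form permutation symbol from the earlier sketch is repeated so the example is self-contained.

    /* Order-three permutation symbol, indices in {1,2,3}. */
    static int eps3(int i, int j, int k)
    {
        return (i - j) * (j - k) * (k - i) / 2;
    }

    /* (a x b)_i = eps_ijk a_j b_k, with 0-based array storage:
       component i of the result lives in out[i-1].            */
    static void cross(const double a[3], const double b[3], double out[3])
    {
        for (int i = 1; i <= 3; i++) {
            out[i - 1] = 0.0;
            for (int j = 1; j <= 3; j++)
                for (int k = 1; k <= 3; k++)
                    out[i - 1] += eps3(i, j, k) * a[j - 1] * b[k - 1];
        }
    }

The triple loop is wasteful (only 6 of the 27 terms are nonzero), but it mirrors the index structure of the identity exactly.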



(5) $\delta_{ii} = 3$ in three dimensional space; $\delta_{ii} = N$ in N dimensional spaces.

(6) $\delta_{ij}\, \delta_{ij} = \delta_{ii} = N$.

(7) $\epsilon_{ijk} = -\epsilon_{jik}$.

(8) $\epsilon_{ijk} = \epsilon_{jki}$.

(9) The $\epsilon$-$\delta$ rule allows the following subscript simplification:

$\epsilon_{ijk}\, \epsilon_{ipq} = \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}$

(10) If $S_{ij}$ is Symmetric, i.e., if $S_{ij} = S_{ji}$, then

$\epsilon_{qij}\, S_{ij} = 0.$

(11) If $A_{ij}$ is Antisymmetric, i.e., if $A_{ij} = -A_{ji}$, then $A_{ij} - A_{ji} = 2A_{ij}$. In addition, since $M_{ij} - M_{ji}$ is always antisymmetric,

$\epsilon_{qij}\, (M_{ij} - M_{ji}) = 2\, \epsilon_{qij}\, M_{ij}$

(12) Partial derivatives are taken with respect to the default argument variables when the subscript index follows a comma. For example, given a scalar function F of vector argument x, the i-th component of the gradient of F is expressed as:

$(\nabla F)_i = F_{,i} = \dfrac{\partial F}{\partial x_i}.$

Given a vector function F of vector argument x, the derivative of the i-th component of F(x) with respect to the j-th component of x is expressed as:

$F_{i,j} = \dfrac{\partial F_i}{\partial x_j}.$

Argument evaluation takes place after the partial derivative evaluation:

$F_{i,j}(x) = \left.\dfrac{\partial F_i(\xi)}{\partial \xi_j}\right|_{\xi = x}$

(13) $x_{i,j} = \delta_{ij}$, where x is the default spatial coordinate vector.

(14) Sometimes partial derivatives may also be taken with respect to reserved symbols, set aside in advance:

$F_{,t} = \dfrac{\partial F}{\partial t}.$

(15) $(\nabla^2 F)_i = F_{i,jj}$

(16) The determinant of an NxN matrix M may be expressed via:

$\det M = \epsilon_{i_1 i_2 \ldots i_N}\, M_{1 i_1}\, M_{2 i_2} \cdots M_{N i_N}$

The order N cross product is produced by leaving out one of the column vectors in the above expression, to produce a vector perpendicular to N-1 other vectors. For three-by-three matrices,

$\det(M) = \epsilon_{ijk}\, M_{1i}\, M_{2j}\, M_{3k} = \epsilon_{ijk}\, M_{i1}\, M_{j2}\, M_{k3}.$

(17) Another identity involving the determinant:

$\epsilon_{qnp}\, \det(M) = \epsilon_{ijk}\, M_{qi}\, M_{nj}\, M_{pk}$

(18) The first column of a matrix M is designated $M_{i1}$, while the second and third columns are indicated by $M_{i2}$ and $M_{i3}$. The three rows of a three dimensional matrix are indicated by $M_{1i}$, $M_{2i}$, and $M_{3i}$.

(19) The transpose operator is achieved by switching subscripts:

$(M^T)_{ij} = M_{ji}.$
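A minimal C sketch of identity (16) for the 3 by 3 case (function names are my own), summing $\epsilon_{ijk} M_{1i} M_{2j} M_{3k}$ over the three bound indices:

    static int eps3(int i, int j, int k)
    {
        return (i - j) * (j - k) * (k - i) / 2;   /* permutation symbol */
    }

    /* det M = eps_ijk M_1i M_2j M_3k  (0-based array storage). */
    static double det3(const double M[3][3])
    {
        double det = 0.0;
        for (int i = 1; i <= 3; i++)
            for (int j = 1; j <= 3; j++)
                for (int k = 1; k <= 3; k++)
                    det += eps3(i, j, k) * M[0][i-1] * M[1][j-1] * M[2][k-1];
        return det;
    }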

(20) A matrix times its inverse is the identity matrix:

$M_{ij}\, (M^{-1})_{jk} = \delta_{ik}.$

(21) The SinAxis operator. The instantaneous rotation axis r and counter-clockwise angle of rotation $\theta$ of a three by three rotation matrix R is governed by the following relation (a minus sign is necessary for the left-handed version):

$(\mathrm{SinAxis}(R))_i = r_i \sin\theta = \tfrac{1}{2}\, \epsilon_{ijk}\, R_{kj}.$

This identity seems easier to derive through the "esn" form than through the matrix notation form of the identity. It expands to

$r_1 \sin\theta = \tfrac{1}{2}(R_{32} - R_{23})$
$r_2 \sin\theta = \tfrac{1}{2}(R_{13} - R_{31})$
$r_3 \sin\theta = \tfrac{1}{2}(R_{21} - R_{12})$

(22) The three by three right-handed rotation matrix R corresponding to the instantaneous unit rotation axis r and counter-clockwise angle of rotation $\theta$ is given by:

$R_{ij} = r_i r_j + \cos\theta\, (\delta_{ij} - r_i r_j) - \sin\theta\, \epsilon_{ijk}\, r_k.$

Expanded, the above relation becomes:

$R_{11} = r_1 r_1 + \cos\theta\, (1 - r_1 r_1)$
$R_{21} = r_2 r_1 - \cos\theta\, r_2 r_1 + r_3 \sin\theta$
$R_{31} = r_3 r_1 - \cos\theta\, r_3 r_1 - r_2 \sin\theta$

$R_{12} = r_1 r_2 - \cos\theta\, r_1 r_2 - r_3 \sin\theta$
$R_{22} = r_2 r_2 + \cos\theta\, (1 - r_2 r_2)$
$R_{32} = r_3 r_2 - \cos\theta\, r_3 r_2 + r_1 \sin\theta$

$R_{13} = r_1 r_3 - \cos\theta\, r_1 r_3 + r_2 \sin\theta$
$R_{23} = r_2 r_3 - \cos\theta\, r_2 r_3 - r_1 \sin\theta$
$R_{33} = r_3 r_3 + \cos\theta\, (1 - r_3 r_3)$

Relation 22 yields a right-handed rotation around the axis r by angle $\theta$. It can be verified by multiplying by vector $a_i$ and comparing the result to the axis-angle formula in section 4.5.

(22a) The axis-axis representation of a rotation rotates unit vector axis a to unit vector axis b by setting

$r = \dfrac{a \times b}{|a \times b|}$

and letting

$\theta = \cos^{-1}(a_i b_i)$

(23) The inverse of a rotation matrix R is its transpose:

$(R^{-1})_{ij} = R_{ji}$, i.e., $R^{-1} = R^T$

(24) When R is a rotation matrix, $R_{ij} R_{kj} = \delta_{ik}$. Likewise, $R_{ji} R_{jk} = \delta_{ik}$.

(25a) The matrix inverse (Footnote 5) of a general N by N matrix M:

$(M^{-1})_{ji} = \dfrac{\epsilon_{i\, i_2 i_3 \ldots i_N}\, \epsilon_{j\, j_2 j_3 \ldots j_N}\, M_{i_2 j_2} \cdots M_{i_N j_N}}{(N-1)!\; \det M}$

Please note that the numerator of the above expression involves an enormous number of individual components when written out in a conventional form. The above expression exhibits an incredible economy.

(Footnote 5: The author hesitates to expand this daunting expression out!)

(25b) The three by three matrix inverse is given by:

$(M^{-1})_{qi} = \dfrac{\epsilon_{ijk}\, \epsilon_{qnp}\, M_{jn}\, M_{kp}}{2\, \det M}.$
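Identity (22) above is straightforward to code. Here is a hedged C sketch (function and argument names are my own) that fills a 3 by 3 array with $R_{ij} = r_i r_j + \cos\theta(\delta_{ij} - r_i r_j) - \sin\theta\,\epsilon_{ijk} r_k$; the axis r is assumed to already be a unit vector.

    #include <math.h>

    static int kdelta(int i, int j) { return i == j; }
    static int eps3(int i, int j, int k) { return (i - j)*(j - k)*(k - i)/2; }

    /* Identity (22): rotation by angle theta (radians) about the
       unit axis r[0..2], written into R with 0-based storage.    */
    static void axis_angle_matrix(const double r[3], double theta,
                                  double R[3][3])
    {
        double c = cos(theta), s = sin(theta);
        for (int i = 1; i <= 3; i++)
            for (int j = 1; j <= 3; j++) {
                double rr = r[i-1] * r[j-1];
                double eps_term = 0.0;
                for (int k = 1; k <= 3; k++)
                    eps_term += eps3(i, j, k) * r[k-1];
                R[i-1][j-1] = rr + c * (kdelta(i, j) - rr) - s * eps_term;
            }
    }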

The algebraically simplified terms of the above expression are given by:

$(M^{-1})_{11} = (M_{33} M_{22} - M_{23} M_{32}) / \det M$
$(M^{-1})_{21} = (M_{23} M_{31} - M_{33} M_{21}) / \det M$
$(M^{-1})_{31} = (M_{32} M_{21} - M_{22} M_{31}) / \det M$
$(M^{-1})_{12} = (M_{13} M_{32} - M_{33} M_{12}) / \det M$
$(M^{-1})_{22} = (M_{33} M_{11} - M_{13} M_{31}) / \det M$
$(M^{-1})_{32} = (M_{12} M_{31} - M_{32} M_{11}) / \det M$
$(M^{-1})_{13} = (M_{23} M_{12} - M_{13} M_{22}) / \det M$
$(M^{-1})_{23} = (M_{13} M_{21} - M_{23} M_{11}) / \det M$
$(M^{-1})_{33} = (M_{22} M_{11} - M_{12} M_{21}) / \det M$

(The factor of 2 canceled out.)

To verify the above relationship, we can perform the following computation:

$\delta_{qr} = (M^{-1} M)_{qr} = (M^{-1})_{qi}\, M_{ir}$

Thus, the above terms simplify to:

$\dfrac{\epsilon_{ijk}\, \epsilon_{qnp}\, M_{jn} M_{kp}}{2 \det M}\, M_{ir} = \dfrac{\epsilon_{qnp}\, (\epsilon_{ijk}\, M_{ir} M_{jn} M_{kp})}{2 \det M}$

From identity 17, the determinant cancels out and the above simplifies to:

$\dfrac{\epsilon_{qnp}\, \epsilon_{rnp}}{2}$

which, by the epsilon-delta rule, becomes

$\dfrac{\epsilon_{pqn}\, \epsilon_{prn}}{2} = \dfrac{\delta_{qr}\delta_{nn} - \delta_{qn}\delta_{nr}}{2} = \dfrac{3\delta_{qr} - \delta_{qr}}{2} = \delta_{qr}$

which verifies the relation.

(26) From the preceding result, we can see that in 3 dimensions:

$\epsilon_{ijk}\, \epsilon_{qrs}\, M_{jr}\, M_{ks} = 2\, \det M\, (M^{-1})_{qi}.$

(27) The Multidimensional Chain Rule includes the effects of the summations automatically. Additional subscript indices are created as necessary. For instance,

$(F(G(x(t))))_{i,t} = F_{i,j}(G(x(t)))\; G_{j,k}(x(t))\; x_{k,t}.$

This is equivalent to

$\dfrac{\partial}{\partial t}\big(F_i(G(x(t)))\big) = \sum_{j=1}^{3}\sum_{k=1}^{3} F_{i,j}\; G_{j,k}\; \dfrac{\partial x_k(t)}{\partial t}$

where

$F_{i,j} = \left.\dfrac{\partial F_i(\xi)}{\partial \xi_j}\right|_{\xi = G(x(t))}$ and $G_{j,k} = \left.\dfrac{\partial G_j(\xi)}{\partial \xi_k}\right|_{\xi = x(t)}$

(28) Multidimensional Taylor Series:

$F_i(x) = F_i(x_0) + F_{i,j}(x_0)(x_j - x_{0j}) + \tfrac{1}{2!} F_{i,jk}(x_0)(x_j - x_{0j})(x_k - x_{0k}) + \tfrac{1}{3!} F_{i,jkp}(x_0)(x_j - x_{0j})(x_k - x_{0k})(x_p - x_{0p}) + \cdots$

(29) Multidimensional Newton's Method can be derived from the linear terms of the multidimensional Taylor series, in which we are solving $F(x) = 0$, and letting $J_{ij} = F_{i,j}$:

$x_j = x_{0j} - (J^{-1})_{ji}\, F_i(x_0).$

(30) Orthogonal Decomposition of a vector v in terms of orthonormal (orthogonal unit) vectors ${}_1e$, ${}_2e$, and ${}_3e$:

$v = (v \cdot {}_je)\; {}_je.$
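As a sketch of identity (25b) in C (assuming det M is nonzero; the names are illustrative), the nine simplified terms listed above can be generated directly from the two-permutation-symbol form:

    static int eps3(int i, int j, int k) { return (i - j)*(j - k)*(k - i)/2; }

    static double det3(const double M[3][3])
    {
        return M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
             - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
             + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]);
    }

    /* (Minv)_qi = eps_ijk eps_qnp M_jn M_kp / (2 det M)   (identity 25b) */
    static void inverse3(const double M[3][3], double Minv[3][3])
    {
        double d = det3(M);              /* caller must ensure d != 0 */
        for (int q = 1; q <= 3; q++)
            for (int i = 1; i <= 3; i++) {
                double sum = 0.0;
                for (int j = 1; j <= 3; j++)
                  for (int k = 1; k <= 3; k++)
                    for (int n = 1; n <= 3; n++)
                      for (int p = 1; p <= 3; p++)
                        sum += eps3(i,j,k) * eps3(q,n,p)
                             * M[j-1][n-1] * M[k-1][p-1];
                Minv[q-1][i-1] = sum / (2.0 * d);
            }
    }

The brute-force five-index loop is only meant to mirror the index structure of the identity; the nine expanded terms above are the efficient way to compute it.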

(31) Change of Basis: we want to express $v_j\, {}_je$ in terms of new components times orthonormal basis vectors ${}_j\hat{e}$. A matrix T re-expresses the numerical representation of a vector relative to the new basis via

$v_i = T_{ij}\, \hat{v}_j$

where

$T_{ij} = ({}_ie \cdot {}_j\hat{e}).$

(32) The delta rule:

$\delta_{ij}\; \text{esn-expression}_i = \text{esn-expression}_j.$

(33) The generalized Stokes theorem:

$\int_{\partial R} n_j\; \text{esn-expression}_i \; da = \int_{R} \text{esn-expression}_{i,j}\; dv$

Likewise,

$\int_{\partial R} n_i\; \text{esn-expression}_i \; da = \int_{R} \text{esn-expression}_{i,i}\; dv$

(34) Multiplying by matrix inverse: if

$M_{ij}\, x_j = \text{esn-expression}_i$

then

$(M^{-1})_{pi}\, M_{ij}\, x_j = (M^{-1})_{pi}\; \text{esn-expression}_i$

or, simplifying and renaming p back to j,

$x_j = (M^{-1})_{ji}\; \text{esn-expression}_i$

Note the transpose relationship of matrices M and $M^{-1}$ in the first and last equations.

(35) Conversion of a rotation matrix $R_{ij}$ to the axis-angle formulation:

We first note that any collection of continuously varying axes r and angles $\theta$ produces a continuous rotation matrix function, via identity 22. However, it is not true that this relation is completely invertible, due to an ambiguity of sign: the same matrix is produced by different axis-angle pairs. For instance, the matrix produced by r and $\theta$ is the same matrix produced by $-r$ and $-\theta$. In fact, if $\theta = 0$ then there is no net rotation, and any unit vector r produces the identity matrix (null rotation).

Since

$|\sin\theta| = \dfrac{|\epsilon_{ijk} R_{ij}|}{2}$, and $\cos\theta = \dfrac{R_{ii} - 1}{2}$,

the angle $\theta$ is given by

$\theta = \mathrm{Atan}(|\sin\theta|, \cos\theta).$

If $\theta = 0$, any axis suffices, and we are finished. Otherwise, if $\theta \neq 0$, we need to know the value of r. In that case, if $\theta \neq \pi$, then

$r_k = -\dfrac{\epsilon_{ijk}\, R_{ij}}{2 \sin\theta}$

Otherwise, if $\theta = \pi$, then since the original rotation matrix

$R_{ij} = 2\, r_i r_j - \delta_{ij},$

then

$r_i r_j = \dfrac{R_{ij} + \delta_{ij}}{2}.$

Letting

$M_{ij} = \dfrac{R_{ij} + \delta_{ij}}{2},$

we can solve for $r_i$ by taking the ratio of non-diagonal terms and square roots of nonzero diagonal terms, via

$r_i = \dfrac{M_{i[j]}}{\sqrt{M_{[j][j]}}}$

Note that we are using [j] to be an independent (no-sum) index as defined near the bottom of page 3. We choose the value of j to be such that $M_{[j][j]}$ is its largest value.
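Here is a hedged C sketch of identity (35) (the function names are my own; the $\theta = \pi$ branch follows the square-root/ratio recipe above and assumes R really is a rotation matrix):

    #include <math.h>

    static int eps3(int i, int j, int k) { return (i - j)*(j - k)*(k - i)/2; }

    /* Identity (35): recover the unit axis r and angle theta from a
       3x3 rotation matrix R.  Returns theta in [0, pi].             */
    static double matrix_to_axis_angle(const double R[3][3], double r[3])
    {
        double s[3] = {0.0, 0.0, 0.0};       /* s_k = -eps_ijk R_ij / 2 */
        for (int k = 1; k <= 3; k++)
            for (int i = 1; i <= 3; i++)
                for (int j = 1; j <= 3; j++)
                    s[k-1] -= 0.5 * eps3(i, j, k) * R[i-1][j-1];

        double sin_t = sqrt(s[0]*s[0] + s[1]*s[1] + s[2]*s[2]);
        double cos_t = 0.5 * (R[0][0] + R[1][1] + R[2][2] - 1.0);
        double theta = atan2(sin_t, cos_t);

        if (sin_t > 1e-9) {                  /* generic case */
            for (int k = 0; k < 3; k++)
                r[k] = s[k] / sin_t;
        } else if (cos_t > 0.0) {            /* theta ~ 0: any axis works */
            r[0] = 1.0; r[1] = 0.0; r[2] = 0.0;
        } else {                             /* theta ~ pi */
            double M[3][3];
            int jmax = 0;                    /* largest diagonal of M */
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    M[i][j] = 0.5 * (R[i][j] + (i == j));
            for (int j = 1; j < 3; j++)
                if (M[j][j] > M[jmax][jmax]) jmax = j;
            for (int i = 0; i < 3; i++)
                r[i] = M[i][jmax] / sqrt(M[jmax][jmax]);
        }
        return theta;
    }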
6 Extensions to the tensor notation for Multi-component equations

In this section, we introduce the no-sum notation and special symbols for quaternions.

6.1 The "no-sum" operator

The classical summation convention is incomplete as it usually stands, in the sense that not all multi-component equations (i.e., those involving summation signs and subscripted variables) can be represented. In the scalar convention, the default is not to sum indices, so that summation signs must be written explicitly if they are desired. Using the classical summation convention, summation is the default, and there is no convenient way not to sum over a repeated index (other than perhaps an awkward comment in the margin, directing the reader that the equation is written with no implicit summation).

Thus, an extension of the notation is proposed in which an explicit "no-sum" or "free-index" operator prevents the summation over a particular index within a term. The no-sum operator is represented via a crossed-out summation symbol or (which is easier to write) a subscripted prefix parenthesis, as in $({}_i\; \cdot\;)$. (Footnote 6) This modification extends the types of formulations we can represent, and augments algebraic manipulative skills. Expressions involving the no-sum operator are found in calculations which take place in a particular coordinate system; without the extension, all of the terms are tensors, and transform correctly from one coordinate system to another.

For instance, a diagonal matrix

$M = \begin{pmatrix} a_1 & 0 & 0 \\ 0 & a_2 & 0 \\ 0 & 0 & a_3 \end{pmatrix}$

is not diagonal in all coordinate systems. Nonetheless, it can be convenient to do the calculation in the coordinate system in which M is diagonal:

$M_{ij} = ({}_i\; a_i\, \delta_{ij})$, i.e., $a_i\, \delta_{ij}$ with no sum over i.

(Footnote 6: Some of the identities for this augmented notation are found later in this document.)

The ease of manipulation and the compactness of representation are the main advantages of the summation convention. Generally, an expression is converted from conventional matrix and vector notation into the summation notation, simplified in the summation form, and then converted back into matrix and vector notation for interpretation. Sometimes, however, there is no convenient way to express the result in conventional matrix notation.

6.2 Quaternions

The other proposed extensions to the notation aid in manipulating quaternions. For convenience, a few new special symbols are defined to take quaternion inverses, quaternion products, products of quaternions and vectors, and conversions of quaternions to rotation matrices. A few identities involving these symbols are also presented in this section.

6.2.1 Properties of Quaternions

A quaternion is a four dimensional mathematical object which is a linear combination of four independent basis vectors: 1, i, j, and k, satisfying the following relations:

$i^2 = j^2 = k^2 = -1$
$ijk = -1$

By pre- and post-multiplying by any of i, j, or k, it is straightforward to show that

$ij = k$, $jk = i$, $ki = j$, $ji = -k$, $kj = -i$, $ik = -j$.

Thus, a quaternion

$q = \begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix}$

can be represented as

$q = q_0 + q_1 i + q_2 j + q_3 k$

and quaternion multiplication takes place by applying the above identities, to obtain

$pq = (p_0 + p_1 i + p_2 j + p_3 k)(q_0 + q_1 i + q_2 j + q_3 k)$
$= p_0 q_0 - p_1 q_1 - p_2 q_2 - p_3 q_3$
$\;\; + (p_1 q_0 + p_0 q_1 + p_2 q_3 - p_3 q_2)\, i$
$\;\; + (p_2 q_0 + p_0 q_2 + p_3 q_1 - p_1 q_3)\, j$
$\;\; + (p_3 q_0 + p_0 q_3 + p_1 q_2 - p_2 q_1)\, k$

6.2.2 The geometric interpretation of a quaternion

A quaternion can be represented as a 4-D composite vector, consisting of a scalar part s and a three dimensional vector portion v:

$q = \begin{pmatrix} s \\ v \end{pmatrix}.$

A quaternion is intimately related to the axis-angle representation for a three dimensional rotation (see Figure 1). The vector portion v of a unit quaternion is the rotation axis, scaled by the sine of half of the rotation angle. The scalar portion s is the cosine of half the rotation angle:

$q = \begin{pmatrix} \cos(\theta/2) \\ \sin(\theta/2)\, r \end{pmatrix}$

where r is a unit vector, and $q \cdot q = 1$. Thus, the conversion of a unit quaternion to and from a rotation matrix can be derived using the axis-angle formulas 22 and 35. Remember that the quaternion angle is half of the axis-angle angle.

Quaternion Product:

A more compact form for quaternion multiplication is

$\begin{pmatrix} s_1 \\ v_1 \end{pmatrix}\begin{pmatrix} s_2 \\ v_2 \end{pmatrix} = \begin{pmatrix} s_1 s_2 - v_1 \cdot v_2 \\ s_1 v_2 + s_2 v_1 + v_1 \times v_2 \end{pmatrix}.$

Note that while $s_1 s_2 = s_2 s_1$, we have $v_1 \times v_2 \neq v_2 \times v_1$, so the product does not commute.

With this notation, quaternion-vector, vector-quaternion, and vector-vector multiplication rules become clear: to represent a vector, we set the scalar part of the quaternion to zero, and use the above relation to perform the multiplications.
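A brief C sketch of the compact product rule (the struct layout and function name are my own): the scalar part is $s_1 s_2 - v_1\cdot v_2$ and the vector part is $s_1 v_2 + s_2 v_1 + v_1\times v_2$.

    typedef struct { double s; double v[3]; } Quat;   /* q = (s, v) */

    static Quat qmul(Quat p, Quat q)
    {
        Quat r;
        r.s = p.s * q.s
            - (p.v[0]*q.v[0] + p.v[1]*q.v[1] + p.v[2]*q.v[2]);
        for (int i = 0; i < 3; i++)
            r.v[i] = p.s * q.v[i] + q.s * p.v[i];
        /* add the cross product v1 x v2 */
        r.v[0] += p.v[1]*q.v[2] - p.v[2]*q.v[1];
        r.v[1] += p.v[2]*q.v[0] - p.v[0]*q.v[2];
        r.v[2] += p.v[0]*q.v[1] - p.v[1]*q.v[0];
        return r;
    }

Rotating a vector then amounts to embedding it as a zero-scalar quaternion and applying qmul on both sides with the quaternion and its inverse, matching the Quaternion Rotation identity given next.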
Quaternion Inverse:

The inverse of a quaternion q is another quaternion $q^{-1}$ such that $q\, q^{-1} = 1$. It is easily verified that

$q^{-1} = \begin{pmatrix} s \\ -v \end{pmatrix} \Big/ (s^2 + v \cdot v).$

Rotating a vector with quaternions is achieved by pre-multiplying the vector with the quaternion, and postmultiplying by the quaternion inverse. We use this property to derive the conversion formula from quaternions to rotation matrices.

Quaternion Rotation:

$(\mathrm{Rot}(v))_i = (q\, v\, q^{-1})_i, \qquad i = 1, 2, 3.$

With this identity, it is possible to verify the geometric interpretation of quaternions and their relationship to the axis-angle formulation for rotation. The most straightforward way to verify this relation is to expand the vector v into components parallel to r and those perpendicular to it, plug into the rotation formula, and compare to the equation in section 4.5; i.e., to evaluate

$(\mathrm{Rot}(v)) = (q\,((v \setminus r) + \alpha\, r)\, q^{-1})$

6.3 Extended Symbols for Quaternion Manipulations

The author has developed a few special symbols to help write and manipulate quaternion quantities more conveniently. We define all quaternion

components as going from 0 to 3, with the vec- ijk = ij k0 + ik j 0 i0 jk + 00ijk;
tor part still going from 1 to 3, with zero for the
scalar component. The quaternion-vector product structure con-
If we are not using the full range of a variable stant ki` allows us to multiply a quaternion q
v

(going, say from 1 to 3 when the original goes and a vector v via
from 0 to 3), we need to explicitly denote that. (qv)k = ki` qi v` v

We hereby extend our Kronecker delta to allow


zero in the subscripts, so where the nonzero components are given by:
v

(i, j, k = 1 . . . 3)
00 = 1
and i0k = ik
v

0jk = jk
v
;

0i = 0 i = 0: 6
ijk = ijk
v

We create a new permutation symbol, 0 which


allows zero in the subscripts. It will be +1 for Note, using the full range for i, j, and k, that
even permutations of (0 1 . . .  3), and 1 for odd
;
ijk = ik j 0 i0 jk + 00ijk
v

permutations. ;

The quaternion inverter structure constant


ij The vector-quaternion product structure con-
allows us to compute quaternion inverses: stant k`i allows us to multiply a vector v and
v

qI;1 =
ij qj =(qK qK ): a quaternion q via
(vq)k = k`i v` qiv

where
8 where the nonzero components are given by:
>
< 1 i = j = 0
v


ij = > 1 i = j = 0 : (i, j, k = 1 . . . 3)
: 0 otherwise
; 6

Note, using the full range for i and j, that ij 0 = ij


v

0jk = jk
v
;


ij = 2i0 j 0 ij : ; ijk = ijk
v

The quaternion product structure constant Note, using the full range for i, j, and k that
Kij allows us to multiply two quaternions p and
q via v
ijk = ij k0 i0 jk + 00ijk
;

(pq)K = K ij pi qj
The vector-vector product is the conventional
where the nonzero components are given by: (i, cross product.
j, k = 1 . . . 3) The quaternion to rotation matrix structure
000 = 1 constant ijk` allows us to create a rotation6 ma-
ij 0 = ij trix R
i0k = ik Rij = ijk` qk q` =(qN qN ):
0jk = ; jk
ijk = ijk It is straightforward to express in terms of the
's:
Note, using the full range, for i, j, and k that 6
Note that qN qN = q q in the following equation.

ijk` = ikp pj N
N `v
 !
q = 21
0 0
! q
To derive this relation, we note that the i-th com-
ponent of the rotation of vector a is given by: (i,
j, k = 1 . . . 3)
Denition of angular velocity in three
dimensions
(Rot(a))i = (qaq;1 )i  i = 1 2 3:
= ikp qk (aq;1 )p Consider a time varying rotation Rot(t), (for in-
= ikp qk pj N aj (q;1 )N
v stance, represented with a matrix function or
= ikp qk pj N aj
N ` q` =(q q)
v quaternion function), which brings an object from
body coordinates to world coordinates.
= ( ikp pj N
N ` ) qk q` =(q q)aj
v

In three dimensions, we de ne angular velocity !


Since we can represent these rotations with a as the vector quantity
three dimensional rotation matrix R, and 1. whose direction is the instantaneous unit
(Rot(a))i = Rij aj vector axis of rotation of the time varying
rotation and
for all aj , we can eliminate aj from both sides 2. whose magnitude is the angular rate of ro-
of the equation, yielding tation around the instantaneous axis.
Rij = ikp pj N
N ` qk q` =(q q):
v
We derive angular velocity both for matrix repre-
Thus, sentations and for quaternion representations of
ijk` = ikp pj N
N ` :
v rotation.
The direction of the instantaneous axis of rota-
tion can be obtained by using a matrix-to-axis
7 What is Angular Velocity in operator on the the relative rotation from t to
3 and greater dimensions? t + h.
In symbolic form, angular velocity is given by
Using the axis/angle representation of rotation,
matrices, and quaternions, we de ne, derive, in-
terpret and demonstrate the compatibility be- ! = lim 0
Axis(RelRot(t h )) Angle(RelRot(t h ))
tween the two main classic equations relating the
h! h

angular velocity vector !, the rotation itself, and for either representation method.
the derivative of the rotation.
Matrix Eqn: Matrix representation
m =! m
0

Let M (t) be a time varying rotation matrix which
or takes us from body coordinates to world coordi-
nates, and let N (t h) be the relative rotation from
m =! m 0  M(t) to M(t+h).
In other words, since N takes us from M(t) to
where M(t+h),
(! )ik = ijk !


M (t + h) = N (t h) M (t)
j

Quaternion Eqn:

so Expressing M and M' in terms of !


RelRot(t h) = N (t h) We can express Mps (t) Mqs (t) in terms of ! by
0

N (t h) = M (t + h) M (t) T
multiplying both sides by ijk .
Nij (t h) = Mip (t + h) (M )pj (t) T

Nij (t h) = M
ip (t + h) Mjp (t) 2
!j = 12 pqj Mps (t) Mqs (t) 0

= Mip (t) + hMip + (h ) Mjp (t)ijk !j = 12 ijk pqj Mps (t) Mqs (t)
0
O
0

so = 12 (jp kq jq kp ) Mps (t) Mqs (t)


;
0

Nij (t h) = ij + hMip (t) Mjp (t) + (h2 )


0
= 12 (Mjs (t) Mks (t) Mks (t) Mjs (t))
O
0
;
0

ijk !j = Mjs (t) Mks (t) 0

Expressing ! in terms of M and M'


Expressing M' in terms of ! and M
To nd !, we can convert N (t h) to axis/angle
form, to nd the direction of the axis and the We take the equation in the previous section, and
angular rate of rotation, and take the limit as h multiply by Mkp :
goes to zero.

! = lim 0 Axis( ( )) Angle( ( ))


N t h N t h Mkp ijk !j = Mjs (t) Mks (t)Mkp
0

Mkp ijk !j = Mjp (t)


h! h
0

However, a simpler method is available. We note so


that as h 0, Angle(N) sin(Angle(N)). Thus,
! !
Mjp (t) = ijk !j Mkp
0

the product of the matrix Axis operator and the


Angle operator in the limit will equal the matrix Thus, we have derived the matrix equation pre-
SinAxis operator (which is easier to compute). sented in the introduction.

! = lim0
SinAxis(N (t h)) Quaternion representation
h
 Npq (t h)
h!

!i = 12 lim0 pqi h Let q(t) be a time varying rotation quaternion


h!
which takes us from body coordinates to world
= 21 lim0 pqi pq + hMps (t) Mqs (t) coordinates, and let p(t h) be the relative rota-
0

Thus
h!
tion from q(t) to q(t+h).
!i = 12 ipq Mps (t) Mqs (t)
0
In other words, since p takes us from q(t) to
q(t+h),
Note that the SinAxis operator is described in
identity 21 also note that q(t + h) = p(t h) q(t)
!i] = M i + 1]s Mi + 2]s )
0
so
RelRot(t h) = p(t h)
p(t h) = p(t + h) p 1 (t) ;

or
!1 = M 2s M3s 0

!2 = M 3s M1s 0
Expressing ! in terms of q and q'
!3 = M 1s M2s 0

To nd !, we can convert p(t h) to axis/angle


form, to nd the direction of the axis and the
angular rate of rotation, and take the limit as h
goes to zero.

What about non-unit quaternions?


! = lim !0 Axis( ( )) Angle( ( ))
p t h p t h

Let Q = mq be a non-unit quaternion (with mag-


h h

However, as in the case with the matrices, a sim- nitude m, and q is a unit quaternion).
pler method is available. We note that as h ! 0,
Angle(N) ! sin(Angle(N)). Thus, as before, the
product of the matrix Axis operator and the An- Q0 = (mq)0
gle operator in the limit will equal twice the = mq0 + m0 q
quaternion VectorPart operator which returns the = m 12 !q + qd=dt(Q  Q)1 2 =

vector portion of the quaternion.7 = 12 !Q + Q=(Q  Q)1 2 d=dt(Q  Q)1 2


= =

= 12 !Q + Q  Q0 Q=Q  Q
VectorPart(p(t h)) So
! = 2 lim
!0 VectorPart
h
( ( + ) ;1 )
h

= 2 lim !0 q t h q

(I ; QQ)Q0 = 21 !Q
= 2VectorPart(1 + q0 (t)q;1 )
h h

so ! or
0 = 2q0 (t)q;1
! Q0 = 21 (I + (1;  ) )!Q
QQ
Q Q

Expressing q' in terms of ! and q Alternate derivation of ! (works in N


We take the equation in the previous section, and dimensions)
multiply by 12 q on the right: Let M(t) be an N dimensional rotation matrix
! which brings an object from body coordinates to
q0(t) = 0 world coordinates.
! q
1
2
Note that
Thus, we have derived the quaternion equation
presented in the introduction. Min Mkn = ik
Taking the derivative of both sides,
Relating angular velocity to rotational ba-
sis vectors M 0in Mkn + Min M 0kn = 0
Let basis vector e be the p-th column of M, so
p
which means
the i-th element of the p-th basis vector is given
by M 0in Mkn = ;Min M 0kn = Aik
( e)i = Mip
p
= ;Aki
Thus,
Thus, M 0in Mkn = Aik
( e0p )i = ijk !j ( e)k
To solve for M 0 , multiply both sides by Mkj .
p

or So
e0p = !j  p
e M 0ij = Aik Mkj
The VectorPart operator can be thought of as a Half-
7
In N dimensions, the antisymmetric matrix A
SinAxis operator on the unit quaternion. takes the place of the angular velocity vector !.

8 Examples. scripts \i" and \j ," so by the symmetric identity


(10), the term is zero. So,
Example 1. To simplify the vector a 
( b  c ) we use the re-association rules, the re- qji Rij = qji ikj sin  rk :
arrangement rules, the permutation symbol sub- Since
script interchange rules, the - rule, the  sim-
pli cation rules, and the re-association and rear- ( qji ) = iqj ikj
ikj
rangement rules: = qk jj ; qj jk = 3qk ; qk = 2qk
(a  (b  c))i = i j k a j (b  c) k
qji Rij = 2qk sin  rk
= ijk aj knp bn cp and
= (ijk knp )aj bn cp
= (kij knp )aj bn cp qji Rij = 2 sin  rq
= (in jp ; ip jn )aj bn cp which completes the derivation.
= aj bi cj ; aj bj ci
= (aj cj )bi ; (aj bj )ci Example 5. To verify the matrix inverse iden-
tity (25), we multiply by the original matrix Mim
Therefore a  (b  c) = (a  c)b ; (a  b)c. Note that on both sides, to see if we really have an identity.
a  (b  c) 6= (a  b)  c.
M  qnp M M
Example 2. The equation c = ai bi has two re- (M ;1 )qi Mim =? 12 im ijk det M jn kp :
peated subscripts in the term on the right, so the
qnp (ijk Mim Mjn Mkp )
index \i" is bound to that term with an implicit
summation. This is a scalar equation because qm =? 12 det M
there are no free indices on either side of the equa- Using identity (17), this simpli es to
tion. In other words, c = a1 b1 + a2 b2 + a3 b3 =
ab qm =? 12 (qnp det
mnp ) det M
M
Example 3. To show that a(bc) = (ab)c = so
det(a b c) qm = 12 (2qm )
a  (b  c) = ai (b  c)i Thus, the identity is veri ed.
= ai ijk bj ck
= ijk ai bj ck Example 6. To verify identity (26), we multiply
= det(a b c) by the determinant, and by qrs . Identity (11)
ijk ai bj ck = (ijk ai bj )ck is used to eliminate the factor of 2. The other
= (kij ai bj )ck details are left to the reader.
= (a  b)k ck Example 7 To discover the inverse of a matrix
= (a  b)  c A of the form
Example 4. To derive identity (21) from iden- A = aI ; bxx :
tity (22), we multiply (22) by qji :
i.e.,
qji Rij = (
qji ri rj +c (ij ;ri rj ))+ qji s ikj rk : Aij = (aij ; bxi xj ):
The second factor of the rst term on the right We are looking for Bjk such that
side of the above equation is symmetric in sub-
Aij Bjk = ij :

We assume the form of the inverse, within an un-


determined constant : sa  v ; sa  v = 0
 The vector part becomes
Bjk = ( jk a + xj xk ): s(sa ; a v ) + (a  v )v + v (sa ; a v )
Since
The i-th component of the vector part of the rotation
becomes
Aij Bjk = (aij ; bxi xj )( jka + xj xk ) = s2 ai ; 2s(a v)i + (a  v)vi ; ijk vj kpq ap vq
= s2 ai ; 2s(a v)i + (a  v)vi ; ( ijk kpq )vj ap vq
= s2 ai ; 2s(a vi + (a  v)vi ; (vj ai vj ; vj aj vi )
= ik ; ab xi xk + axi xk ; bxi xk (xj xj ) = s2 ai ; vj vj ai + 2(vj aj vi ; 2sa v
= c2=2 ai ; s2=2 ai + 2s2=2ri rj aj ; s ijk aj rk
= ik + (a ; b=a ; b(xj xj ))xi xk  = c ai ; s ijk aj rk + (1 ; c )ri rj aj
= (c ij ; s ijk rk + (1 ; c )ri rj )aj
we conclude that, = (ri rj + c (ij ; ri rj ) ; s ijk rk )aj
a ; b=a ; b(xj xj ) = 0
so which veri es the relationship (see identity 22).

 = a2 ; abb (x  x) : 8.1 Sample identities using the no-sum


Thus, operator.
I A few identities using the no-sum operator are
A;1 = a + a2 ;bx x
ab(x  x) listed. This is not an exhaustive exploration the
purpose is to give an intuitive feeling for the ter-
Example 8. Verifying the relationship between minology. It is hoped that this new terminology
the axis-angle formulation and quaternions: may be helpful in the development of multicom-
Consider a unit quaternion ponent symbolic manipulative skill.
! ! 0 1
X a b
q = vs = cos(=2)
sin(=2)r ( ai bi ) = (B
1 1
@ a2 b2 CA)i
a3 b3
6

Thus,
i

0 1
X X a 1 0 0
s = c=2 ( ij ai ) = ( ij aj ) = (B @ 0 a2 0 CA)ij
and 0 0 a3
6 6

i j

X X
v = s=2 r: bi ( ij ai ) = (aj bj )
6 6

The rotation X X i j

bj ( ij ai ) = (ai bi )
! X X
6 6

0 ;1
i i

plk ( kj gk ) = (plj gj )


Rot(a) = qaq ! ! !
6 6

X k j

= vs 0 s 1i ( ai bi ) = ai bi
! a ;v !
6

= vs av In the pre x subscript form, the nosum symbol


sa ; a  v is not written into the expression | the pre x
subscript is sucient by itself. Thus, the above
The scalar part becomes expressions may also be represented via:

0 1
a 1 b 1 Z
(i ai bi ) = (B
@ a2 b2 CA)i (I body
)ij = (x y z )(ij xk xk ;xi xj ) dx dy dz:
a3 b3 body
0 1 A point in the body b transforms to the point
body

a 1 0 0
(i ij ai ) = (j ij aj ) = (B
@ 0 a2 0 CA)ij in space b through the following relation:
0 0 a3 bi = Mij (bbody
)j + xi
$b_i\, ({}_i\, \delta_{ij} a_i) = ({}_j\, a_j b_j)$
$b_j\, ({}_i\, \delta_{ij} a_i) = ({}_i\, a_i b_i)$
$p_{lk}\, ({}_k\, \delta_{kj} g_k) = ({}_j\, p_{lj} g_j)$
$1_i\, ({}_i\, a_i b_i) = a_i b_i$

Please note that many of the above algebraic subexpressions are not tensors: they do not follow the tensor transformation rules from one coordinate system to another.

9 References

Segel, L. (1977), Mathematics Applied to Continuum Mechanics, Macmillan, New York.

Misner, Charles W., Kip S. Thorne, and John Archibald Wheeler, Gravitation, W.H. Freeman and Co., San Francisco, 1973.
nate system to another.

8.2 Equations of motion of rigid Bod- Appendix A - Derivation of 3D


ies using Esn. epsilon-delta rule.
Given a rigid body with mass m, density To derive the identity,
(x y z ), position of center of mass in the world
x, position of center of mass in the body at its  i jk  i pq = jp kq ; jq kp
origin, a net force F , net torque T , momentum
p, angular momentum L, angular velocity !, ro- rst evaluate the expression with independent in-
tation quaternion q, rotational inertia tensor in dices for j , k, p, and q, and then expand the sum
body coordinates I , the equations of motion
body
over the bound index i, letting i equal f1 2 3g,
for the rigid body in the lab frame becomes: but in the order starting with j , then j + 1, and
j + 2 ( modulo 3, plus 1).
d=dt xi = pi =m
d=dt qi = 1=2 ijk !j qk
v
ijk] ipq] = jjk] jpq]
d=dt pi = Fi + j + 1]jk] j + 1]pq]
+ j + 2]jk] j + 2]pq]
d=dt Li = Ti
where
The rst term is zero, due to the repeated index
!i = (I (;1) )ij Lj  j . There is a nonzero contribution only where
(I (;1) )ij = mip mjs ((I )(;1) )ps
body k = j + 2 in the second term, and k = j + 1 in
the third.
mij =
ijkl qk ql =(q  q) = ;1k = j + 2] j + 1]pq
and +1k = j + 1] j + 2]pq

Expanding the two nonzero terms of the permu-


tation symbols:

= ;1k = j + 2] (1p=j+2]q=j] ; 1p=j]q=j+2] )


+1k = j + 1] (;1p=j+1]q=j] + 1p=j]q=j+1] )
Expand, add and subtract the same term, then collect
into positive and negative terms:
= ;1k = j + 2] 1p = j + 2]q = j ]
+1k = j + 2] 1p = j ]q = j + 2]
;1k = j + 1] 1p = j + 1]q = j ]
+1k = j + 1] 1p = j ]q = j + 1] )
+1k = j ] 1p = j ]q = j ] )
; 1k = j ] 1p = j ]q = j ] )
= ;1k = j ] 1p = j ] ; 1k = j + 1] 1p = j + 1]

;
 1k = j + 2] 1p = j + 2] 1q = j ]
+ 1k = j ] 1q = j ] + 1k = j + 1] 1q = j + 1]

+ 1k = j + 2] 1q = j + 2] 1p = j ]

Since j takes on only three values the rst parenthetic


expressions is 1 if k equals p, and the other is 1 if k
equals q.

= 1p = j ] 1q = k] ; 1q = j ] 1p = k]


= pj qk ; qj pk
