
TENSORS IN CARTESIAN COORDINATES (Lectures 14)

Prepared by Dr. U.S. Dixit, IIT Guwahati for ME501 students, July 2009

You are already familiar with the concept of scalars and vectors in physics. For example, mass is a scalar quantity and velocity is a vector quantity. A scalar quantity has only magnitude and no direction, whereas a vector quantity has both magnitude and direction. Mathematically, we can say that a scalar quantity is fully described by just one component, i.e., its magnitude. On the other hand, a vector quantity is described by two components in two-dimensional space and three components in three-dimensional space. Remember that in 2-D polar coordinates the two components are r and θ, with r indicating the magnitude of the vector and θ its direction. The coordinates r and θ can also be expressed in terms of the Cartesian coordinates x and y. Hence, in Cartesian coordinates, a vector can be specified by its x and y components. However, unlike the polar coordinates, x and y do not explicitly tell the direction and magnitude. You can of course calculate the magnitude and direction by the following formulae:

r = √(x² + y²),   (1)

θ = tan⁻¹(y/x).   (2)

In a similar manner, in three-dimensional space a vector needs three components. These components fully specify the direction and the magnitude. This becomes obvious if we consider the spherical coordinate system having coordinates r, θ and φ. Here, the first coordinate specifies the magnitude and the other two the direction. We can transform these coordinates to x, y and z in the Cartesian system and thus specify the vector, although these components will not directly give the magnitude and direction. It is easy to understand the concept of a vector with the example of a position vector. The position vector of a point is the displacement needed for reaching that point from a reference point (say the origin). Thus, for the point (x, y, z), the position vector with respect to the origin is written as xi + yj + zk, where i, j and k are the unit vectors in the three coordinate directions. All vectors behave like a position vector. In fact, the other vectors are obtained by scalar operations on position vectors. For example, the velocity can be obtained by differentiating the position vector with respect to time, and the acceleration can be obtained by differentiating the position vector twice. Force is mass multiplied by acceleration, and is thus a vector. In a more formal way, we state that addition and subtraction of two vectors result in a vector. The multiplication (and division) of a vector by a scalar also results in a vector. As differentiation of a vector involves the subtraction of two vectors divided by a time interval, it also results in a vector. We may say that all vectors are basically derived from a position vector and preserve the characteristics of a position vector. In two-dimensional space, a position vector is specified by two coordinates (with respect to a reference point, say the origin) and in three-dimensional space by three coordinates. Similarly, in a four-dimensional space it is specified by four coordinates, although it is difficult to visualize a four-dimensional space. In all dimensions, a scalar is specified by only one component. Thus, we can say that in an n-dimensional space, a scalar is specified by n⁰ = 1 components and a vector by n¹ = n components. However, this is not sufficient: a scalar should also be invariant with respect to the reference frame.

If a coordinate system is rotated, the scalar does not change. On the other hand, the components of a vector transform according to a particular transformation rule under the rotation of the coordinate system. Let us not worry about the exact rule at this moment. However, it is worth realizing that the transformation rule for all vectors is the same as that for a position vector. Thus, for a quantity to be called a vector, the following conditions should be met: (1) it should have n components in an n-dimensional space; (2) its components should transform in a particular fashion under the rotation of the coordinate system. Some authors present the second condition as "the vector should follow the parallelogram law of addition". Thus, finite rotations, although having three components in three-dimensional space, are not vectors, as they do not follow the parallelogram law of addition. Finite rotations also do not transform like vectors under the rotation of the coordinate system. A scalar is also called a tensor of rank 0. Similarly, a vector is also called a tensor of rank 1. Is there a tensor of rank 2? Yes, tensors of rank 2 are commonly used in the physical world; for example, the stress at a point is a tensor of rank 2, or a second-order tensor. Physically, stress is force per unit area. Like force, it has direction and magnitude, but both depend on the plane under consideration. This means that the direction and magnitude of the stress will be different for different planes passing through the same point. In two-dimensional space a stress has 4 components, and in three-dimensional space it has 9 components. Besides, the stress components should follow a particular transformation rule under the rotation of the coordinate system. We shall study the properties of scalars, vectors and tensors in more detail, but before that, index notation and related matters will be described. Index notation is very helpful in representing lengthy expressions in a concise form. If one is not afraid of using unabridged notation, there is no need to study index notation. However, you will soon see that without index notation certain mathematical expressions look horrible, and therefore the study of index notation is a must.
Index Notation

Suppose a vector has 3 components a1, a2, a3. We can represent the components of the vector in the form ai, where i = 1, 2, 3. It is understood that in three-dimensional space a vector has 3 components and in two-dimensional space it has 2 components. Hence, it is enough to write the components of the vector as ai. Similarly, σij can represent the components of a tensor. In three-dimensional space, i and j both vary from 1 to 3. Hence, σij represents 9 components, depending on the values of the indices i and j. The set of 9 components can be represented as a 3-by-3 matrix in unabridged notation and by [σij] in abridged notation. Thus,

        [ σ11  σ12  σ13 ]
[σij] = [ σ21  σ22  σ23 ] .   (3)
        [ σ31  σ32  σ33 ]

In two-dimensional space,
        [ σ11  σ12 ]
[σij] = [ σ21  σ22 ] .   (4)

In a two-dimensional space all the indices vary from 1 to 2, and in a three-dimensional space they vary from 1 to 3. This is called the range convention. The expression ai + bi = 0 in three-dimensional space means the following three equations:

a1 + b1 = 0,   (5a)
a2 + b2 = 0,   (5b)
a3 + b3 = 0.   (5c)

What does the expression ai bi = 0 mean? Does it mean the following equations?

a1 b1 = 0;  a2 b2 = 0;  a3 b3 = 0.   (6)

No. It does not mean that. It means the following single expression:

a1 b1 + a2 b2 + a3 b3 = 0.   (7)

The rule is that if an index is repeated in a term, then it implies the summation of the terms obtained by assigning to the index the values over its range. This is called Einstein's summation convention. Thus, aik xk = bi means the following expression:

ai1 x1 + ai2 x2 + ai3 x3 = bi.   (8)

How is it obtained? As k is a repeated index in the term aik xk, we obtain three terms ai1 x1, ai2 x2 and ai3 x3 by varying k over its range, i.e., from 1 to 3. All these terms are added. In the process, the index k disappears, while the index i remains. Observe that the expression aij xj = bi would also provide Eq. (8). Thus, the repeated index may be replaced by any other index. For this reason, it is called a dummy index, while the non-repeated index is called a free index. Equation (8) further implies the following three equations:

a11 x1 + a12 x2 + a13 x3 = b1;  a21 x1 + a22 x2 + a23 x3 = b2;  a31 x1 + a32 x2 + a33 x3 = b3.   (9)

Note that in an expression each term should have the same free indices. Thus, the following are valid expressions:

∂σij/∂xj + bi = 0,   (10)

σij = Cijkl εkl,   (11)

p = σij ni nj.   (12)

In Eq. (10), j is the dummy index and i is the free index. Note that the free index i is present in each term. In Eq. (11), i and j are free indices, which are present in each term, whilst k and l are dummy indices. Note that

Cijkl εkl = Cij11 ε11 + Cij12 ε12 + Cij13 ε13 + Cij21 ε21 + Cij22 ε22 + Cij23 ε23 + Cij31 ε31 + Cij32 ε32 + Cij33 ε33.   (13)

Thus, Eq. (11) can be written as

σij = Cij11 ε11 + Cij12 ε12 + Cij13 ε13 + Cij21 ε21 + Cij22 ε22 + Cij23 ε23 + Cij31 ε31 + Cij32 ε32 + Cij33 ε33.   (14)

The expression in Eq. (14) represents 9 equations, obtained by varying i and j from 1 to 3. In Eq. (12), i and j are dummy indices; there are no free indices in this expression.
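The summation convention maps directly onto numerical tensor libraries. The following Python/NumPy sketch (added here for illustration only and not part of the original notes; NumPy and the random data are assumptions) evaluates aik xk = bi of Eq. (8) and σij = Cijkl εkl of Eq. (11) with numpy.einsum, which sums over repeated indices automatically:

    import numpy as np

    a = np.random.rand(3, 3)
    x = np.random.rand(3)

    # b_i = a_ik x_k  (k is the dummy index, i is the free index)
    b = np.einsum('ik,k->i', a, x)
    assert np.allclose(b, a @ x)

    C = np.random.rand(3, 3, 3, 3)     # a fourth-order tensor of coefficients
    eps = np.random.rand(3, 3)

    # sigma_ij = C_ijkl eps_kl  (k and l are dummy indices)
    sigma = np.einsum('ijkl,kl->ij', C, eps)
    print(sigma.shape)                 # (3, 3): i and j remain free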

The following expressions are invalid:

ai bj = ci,   (15)

σij + εkl = 0,   (16)

ai bi ci = 0.   (17)

In Eq. (15), the first term contains two free indices i and j, whilst the second term contains only one free index i. Thus, the terms do not have the same free indices and the expression is invalid. In Eq. (16), the first term has the free indices i and j, whilst the second term has the indices k and l. Hence, it is not a valid expression. A valid expression must have the same free indices in every term; for example, the following two expressions are valid:

σij + εij = 0;   σkl + εkl = 0.   (18)

In Eq. (17), the index i occurs three times in the first term. Hence, it is an invalid expression: in a term, an index can occur only once or twice. We also introduce the comma notation here. A comma in the subscript indicates differentiation with respect to the corresponding coordinate. Thus,

ai,j = ∂ai/∂xj.   (19)

If φ is a scalar function of the coordinates, then

φ,i = ∂φ/∂xi.   (20)

In three-dimensional space, the index i can take the values 1, 2 and 3. Thus,

{φ,i} = {∂φ/∂x1, ∂φ/∂x2, ∂φ/∂x3}ᵀ = gradient of φ.   (21)

Note that φ,i indicates one component of the gradient vector. Suppose vi denotes a component of a vector. The component vi,j denotes its differentiation with respect to a coordinate. Thus,

vi,j = ∂vi/∂xj.   (22)

As i and j vary from 1 to 3, vi,j can take on 9 values.
Example 1: Express σij,j + bi = 0 in unabridged form.

Solution: As per the comma notation,

σij,j = ∂σij/∂xj.   (23)

In the above expression j occurs twice; hence it is a dummy index. By the summation convention:

∂σij/∂xj = ∂σi1/∂x1 + ∂σi2/∂x2 + ∂σi3/∂x3.   (24)

Hence, the expression σij,j + bi = 0 means

∂σi1/∂x1 + ∂σi2/∂x2 + ∂σi3/∂x3 + bi = 0.   (25)

In the above expression, i is a free index that can take on the values 1, 2 and 3. Hence, the expression represents the following three equations:

∂σ11/∂x1 + ∂σ12/∂x2 + ∂σ13/∂x3 + b1 = 0,   (26a)
∂σ21/∂x1 + ∂σ22/∂x2 + ∂σ23/∂x3 + b2 = 0,   (26b)
∂σ31/∂x1 + ∂σ32/∂x2 + ∂σ33/∂x3 + b3 = 0.   (26c)

Example 2: Prove that ui,j can be decomposed into two components such that

ui,j = εij + wij,   (27)

where

εij = ½(ui,j + uj,i)   (28)

and

wij = ½(ui,j − uj,i).   (29)

Further, prove that

εij wij = 0.   (30)

Solution: Starting from the right-hand side:

εij + wij = ½(ui,j + uj,i) + ½(ui,j − uj,i) = ui,j.

Hence, Eq. (27) is proved. Now,

εji = ½(uj,i + ui,j) = ½(ui,j + uj,i) = εij   (31)

and

wji = ½(uj,i − ui,j) = −½(ui,j − uj,i) = −wij.   (32)

In the expression εij wij, both i and j are dummy indices; hence they can be replaced by any other indices. Here, we replace i by j and j by i and use Eqs. (31) and (32). Hence,

εij wij = εji wji = −εij wij.   (33)

Thus,

2εij wij = 0   or   εij wij = 0.   (34)
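A quick numerical cross-check of Example 2 (a Python/NumPy sketch added purely for illustration; the random matrix stands in for a sampled ui,j and is not part of the original notes):

    import numpy as np

    u_grad = np.random.rand(3, 3)          # stands in for the components u_{i,j}
    eps = 0.5 * (u_grad + u_grad.T)        # symmetric part, Eq. (28)
    w   = 0.5 * (u_grad - u_grad.T)        # skew-symmetric part, Eq. (29)

    assert np.allclose(eps + w, u_grad)    # Eq. (27)
    print(np.einsum('ij,ij->', eps, w))    # Eq. (30): essentially zero (round-off only)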

Example 3: Consider the following system of equations:

{tx}   [ σxx  σxy  σxz ] {nx}
{ty} = [ σyx  σyy  σyz ] {ny} .   (35)
{tz}   [ σzx  σzy  σzz ] {nz}

Express it using index notation.

Solution: First, instead of x, y, z, we use 1, 2, 3. Thus, Eq. (35) is written as

{t1}   [ σ11  σ12  σ13 ] {n1}
{t2} = [ σ21  σ22  σ23 ] {n2} .   (36)
{t3}   [ σ31  σ32  σ33 ] {n3}

The above equation represents the following three equations:

t1 = σ11 n1 + σ12 n2 + σ13 n3,   (37a)
t2 = σ21 n1 + σ22 n2 + σ23 n3,   (37b)
t3 = σ31 n1 + σ32 n2 + σ33 n3.   (37c)

Using the free index i, these equations can be represented by

ti = σi1 n1 + σi2 n2 + σi3 n3.   (38)

Using the dummy index j, the above equation can be written as

ti = σij nj.   (39)
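Equation (39), ti = σij nj, is simply a matrix-vector product. A short Python/NumPy sketch follows (illustrative only; the stress values and the normal are made-up numbers, not data from the notes):

    import numpy as np

    sigma = np.array([[10.0, 3.0, 0.0],
                      [ 3.0, 2.0, 0.0],
                      [ 0.0, 0.0, 5.0]])      # hypothetical stress components
    n = np.array([1.0, 0.0, 0.0])             # unit normal of the plane

    t = np.einsum('ij,j->i', sigma, n)        # t_i = sigma_ij n_j, Eq. (39)
    assert np.allclose(t, sigma @ n)
    print(t)                                  # traction on the plane with normal n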

Kronecker-Delta and Levi-Civita Symbols

Now, we introduce two important symbols. The first is a symbol introduced by the German mathematician Leopold Kronecker, called the Kronecker delta δij. It has two indices attached to it (as subscripts). The value of δij is 1 if both indices take the same value and 0 if the indices take different values. Thus,

δ11 = δ22 = δ33 = 1,   δ12 = δ13 = δ21 = δ23 = δ31 = δ32 = 0.   (40)

Note that δij represent the elements of an identity matrix, i.e.,

        [ 1  0  0 ]
[δij] = [ 0  1  0 ] .   (41)
        [ 0  0  1 ]

Note that δ11 is 1, but δii is not equal to 1. As i is a repeated (dummy) index in δii,

δii = δ11 + δ22 + δ33 = 1 + 1 + 1 = 3.   (42)

The Kronecker delta has the following substitution properties:

(i) ai δij = aj   (ii) aij δjk = aik   (iii) δij δjk = δik   (43)

These can be proved easily from the definition of the Kronecker delta. Consider the first expression. It is clear that in the first term i is the dummy index and j is the free index. Thus,

ai δij = a1 δ1j + a2 δ2j + a3 δ3j.   (44)

In the above expression, the right-hand side is a1 for j = 1, a2 for j = 2 and a3 for j = 3. Hence, it can be written as aj. Thus, ai δij = aj. In the same way, the other relations can be proved.

Another important symbol is the Levi-Civita symbol, named after the Italian mathematician Tullio Levi-Civita. This symbol is also referred to as the permutation symbol, the alternating symbol or the alternator. It has three indices attached to it and in general form is written as εijk. The permutation symbol is non-zero only if i, j and k all take different values. For the non-zero values of εijk, the index i can take 3 values; with each value of i, j can take 2 values; and with the values of i and j fixed, k can take only one value (different from i and j). Thus, in these six cases εijk can be non-zero. The convention is to take the value of εijk equal to +1 if the indices take their values in cyclic order and −1 if they take their values in acyclic order. Thus,

ε123 = ε231 = ε312 = 1   and   ε132 = ε213 = ε321 = −1.   (45)

The remaining 21 possible values of εijk are zero. The permutation symbol is very helpful in vector algebra. Consider the three unit vectors e1, e2 and e3 along the x-, y- and z-directions, respectively. The cross products of the unit vectors may be represented as

ei × ej = εijk ek.   (46)

The above expression shows that if i and j take the same value, the cross product is zero, i.e., the cross product of a unit vector with itself is zero. If i is 1 and j is 2, then

e1 × e2 = ε12k ek = ε121 e1 + ε122 e2 + ε123 e3 = 0 + 0 + e3 = e3.   (47)

Similarly, one can find the other cross products. It is interesting to note that the values of the nine different cross products are given by just the one expression in Eq. (46). Because the permutation symbol depends on the order of the indices, the following relation holds good:

εijk = εjki = εkij = −εikj = −εjik = −εkji.   (48)

The following relation is called the ε-δ identity or the permutation identity:

εijk εpqk = δip δjq − δiq δjp.   (49)

It can be proved in the following way. The left-hand side will be non-zero only if i, j, p and q are all different from k; also, i should be different from j and p should be different from q. This implies two possibilities: (i) i is equal to p and j is equal to q, and (ii) i is equal to q and j is equal to p. As k is the dummy index,

εijk εpqk = εij1 εpq1 + εij2 εpq2 + εij3 εpq3.   (50)

In the above expression, only one term on the right-hand side will be non-zero. If possibility (i) occurs, the value of the expression will be +1. If possibility (ii) occurs, the value of the expression will be −1. This is represented by the right-hand side of Eq. (49): verify that if i = p and j = q, δip δjq − δiq δjp becomes equal to +1, and if i = q and j = p, it becomes equal to −1.

Example 4: Prove that

εijk εpqr = δip(δjq δkr − δjr δkq) + δiq(δjr δkp − δjp δkr) + δir(δjp δkq − δjq δkp).   (51)

Solution: First we shall prove that

                     | aip  aiq  air |
εijk εpqr det(aij) = | ajp  ajq  ajr | ,   (52)
                     | akp  akq  akr |

where det(aij) denotes the determinant of [aij]. We prove it by showing that the two sides are equal. It is easy to see that if at least two of i, j, k or two of p, q, r are equal, then both sides of Eq. (52) are zero. If i, j, k are different from one another and p, q, r are also different from one another, then the following two cases may arise. (1) Both i, j, k and p, q, r are cyclic, or both are acyclic. In that case, both sides of Eq. (52) are equal to +det(aij). (2) Of i, j, k and p, q, r, one group is cyclic and the other acyclic. In that case, both sides of Eq. (52) are equal to −det(aij). Hence, proved. Now, let aij = δij. In that case, from Eq. (52):

                     | δip  δiq  δir |
εijk εpqr det(δij) = | δjp  δjq  δjr | .   (53)
                     | δkp  δkq  δkr |

Using the fact that det(δij) = 1 and expanding the right-hand side of Eq. (53), we get Eq. (51).

Example 5: Use Eq. (51) to prove Eq. (49).

Solution: Replacing r by k in Eq. (51):

εijk εpqk = δip(δjq δkk − δjk δkq) + δiq(δjk δkp − δjp δkk) + δik(δjp δkq − δjq δkp)
          = δip(3δjq − δjk δkq) + δiq(δjk δkp − 3δjp) + δik(δjp δkq − δjq δkp).   (54)
Using the substitution property of the Kronecker delta, the above expression can be written as

εijk εpqk = δip(3δjq − δjq) + δiq(δjp − 3δjp) + (δjp δiq − δjq δip)
          = δip δjq − δiq δjp.   (55)

Hence, proved.

Example 6: Show that the determinant of a 3-by-3 matrix [aij] can be expressed as

det(aij) = εpqr a1p a2q a3r.   (56)

Solution: The determinant of a 3-by-3 matrix is given by

           | a11  a12  a13 |
det(aij) = | a21  a22  a23 | .   (57)
           | a31  a32  a33 |

In expanded form,

det(aij) = a11(a22 a33 − a23 a32) + a12(a23 a31 − a21 a33) + a13(a21 a32 − a22 a31),   (58)

or

det(aij) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31.   (59)

Observing the above expression, it is seen that a typical term is a1p a2q a3r. Further, when p, q, r are cyclic the term is positive, otherwise it is negative. The expression contains a sum of six such terms. Therefore, it can be written as εpqr a1p a2q a3r, which is a summation of the six terms, p, q, r being dummy indices.
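The Kronecker delta, the permutation symbol and the identities above are easy to verify numerically. The following Python/NumPy sketch (added for illustration; NumPy and the construction used for εijk are assumptions, not part of the original notes) checks the ε-δ identity, Eq. (49), and the determinant formula, Eq. (56):

    import numpy as np
    from itertools import permutations

    delta = np.eye(3)

    # Build eps_ijk: +1 for cyclic, -1 for acyclic index orders, 0 otherwise.
    # The sign is obtained as the determinant of the corresponding permutation matrix.
    eps = np.zeros((3, 3, 3))
    for (i, j, k) in permutations(range(3)):
        eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

    # epsilon-delta identity, Eq. (49): eps_ijk eps_pqk = d_ip d_jq - d_iq d_jp
    lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
    rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
    assert np.allclose(lhs, rhs)

    # Determinant via the permutation symbol, Eq. (56): det(a) = eps_pqr a_1p a_2q a_3r
    a = np.random.rand(3, 3)
    det_eps = np.einsum('pqr,p,q,r->', eps, a[0], a[1], a[2])
    assert np.isclose(det_eps, np.linalg.det(a))
    print('identities verified')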

Transformation Rule for Vector Components under the Rotation of the Cartesian Coordinate System

Consider two coordinate systems x, y, z and x′, y′, z′, which are rotated with respect to each other. The unit vectors along x, y and z are e1, e2 and e3, and the unit vectors along x′, y′ and z′ are e′1, e′2 and e′3. In the x, y, z system, a vector v can be represented as
v = v1 e1 + v2 e2 + v3 e3 = vp ep.   (60)

The same vector is represented in the x′, y′, z′ system as

v = v′1 e′1 + v′2 e′2 + v′3 e′3 = v′p e′p.   (61)

Note that the vector remains the same in both systems; however, its components change. We want to determine the components in one system as functions of the components in the other system. For this purpose we first write

vp ep = v′q e′q.   (62)

Note that as p is the dummy index in Eq. (61), it can be replaced by q. Taking the dot product of both sides with the unit vector er, we get

vp ep · er = v′q e′q · er.   (63)

Now,

ep · er = δpr,   (64)

which means that the dot product of identical unit vectors is 1 and of non-identical unit vectors is 0. The dot product e′q · er is equal to the cosine of the angle between the qth axis of the x′, y′, z′ system and the rth axis of the x, y, z system, and we denote it by λqr. Hence, Eq. (63) can be written as

vp δpr = v′q λqr.   (65)

Using the substitution property of the Kronecker delta and the fact that scalar quantities commute, Eq. (65) can be written as

vr = λqr v′q.   (66)

In the expanded form,

vr = λ1r v′1 + λ2r v′2 + λ3r v′3.   (67)

The reader may be familiar with the above expression, which states that the component of a vector along the r direction is equal to the sum of the projections of its three orthogonal components along the r direction. The set of direction cosines [λqr], a total of 9 components, is as follows in expanded form:

        [ λ11  λ12  λ13 ]
[λqr] = [ λ21  λ22  λ23 ] .   (68)
        [ λ31  λ32  λ33 ]

You may note that the rows of the above matrix correspond to the axes of the x′, y′, z′ system and the columns correspond to the axes of the x, y, z system. The entries in the matrix are the cosines of the angles between the corresponding pair of axes. From three-dimensional coordinate geometry, recall that

λ11² + λ12² + λ13² = 1;   λ21² + λ22² + λ23² = 1;   λ31² + λ32² + λ33² = 1;
λ11λ21 + λ12λ22 + λ13λ23 = 0;   λ31λ21 + λ32λ22 + λ33λ23 = 0;   λ31λ11 + λ32λ12 + λ33λ13 = 0;
λ11² + λ21² + λ31² = 1;   λ12² + λ22² + λ32² = 1;   λ13² + λ23² + λ33² = 1;
λ11λ12 + λ21λ22 + λ31λ32 = 0;   λ12λ13 + λ22λ23 + λ32λ33 = 0;   λ11λ13 + λ21λ23 + λ31λ33 = 0.   (69)

Thus, the matrix given by Eq. (68) is an orthogonal matrix, i.e.,

[λqr][λqr]ᵀ = [λqr]ᵀ[λqr] = [I].   (70)

Thus,

[λqr]⁻¹ = [λqr]ᵀ.   (71)

Now, in matrix form, Eq. (66) is written as

{vr} = [λqr]ᵀ{v′q}.   (72)

Therefore, in view of Eq. (70),

{v′q} = [λqr]{vr}.

The above equation is written in index notation as

v′q = λqr vr.   (73)

In the above equation r is the dummy index and therefore

v′q = λq1 v1 + λq2 v2 + λq3 v3.   (74)

Thus, the component along the direction q in the primed (x′, y′, z′) system is equal to the sum of the projections of the components of the unprimed system along the q direction. The reader may already be familiar with this statement. Equations (66) and (73) form part of the definition of a Cartesian vector, which is a tensor of order 1. A Cartesian tensor of order 1 is a quantity that consists of n components in an n-dimensional space and whose components follow the transformation rules given in Eq. (66) and Eq. (73) under the rotation of the axis system.

Example 7: The components of a vector in the x, y, z system are given by {1 2 3}ᵀ. The x′, y′, z′ system is obtained by rotating the original system about the z-axis through an angle of 30° in the counterclockwise sense. Find the components of the vector in the x′, y′, z′ system.

Solution: Table 1 contains the angles between the axes of one system and the axes of the other system. With this, the matrix of direction cosines [λij] is written as

        [ cos 30°    cos 60°   cos 90° ]   [  √3/2   1/2   0 ]
[λij] = [ cos 120°   cos 30°   cos 90° ] = [ −1/2   √3/2   0 ] .   (75)
        [ cos 90°    cos 90°   cos 0°  ]   [   0      0    1 ]


Table 1: Angles between the axes of one system and the axes of the other system

         x       y       z
x′       30°     60°     90°
y′       120°    30°     90°
z′       90°     90°     0°

Here, deliberately, the indices i, j have been written in place of q, r to establish that one can name the free indices in any way, as long as the naming is consistent throughout the expression. Now, Eq. (73) in matrix form is {v′i} = [λij]{vj}. Therefore,
{v′1}   [  √3/2   1/2   0 ] {1}   { √3/2 + 1  }
{v′2} = [ −1/2   √3/2   0 ] {2} = { −1/2 + √3 } .   (76)
{v′3}   [   0      0    1 ] {3}   {     3     }

Thus, we have obtained the components of the vector in the rotated system.
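Example 7 can be reproduced numerically. The following Python/NumPy sketch (added for illustration; not part of the original notes) builds [λij] for a 30° rotation about the z-axis, checks its orthogonality, Eq. (70), and transforms the vector:

    import numpy as np

    ang = np.radians(30.0)
    c, s = np.cos(ang), np.sin(ang)

    # Rows correspond to the primed axes, columns to the unprimed axes, Eq. (75)
    lam = np.array([[ c,   s,   0.0],
                    [-s,   c,   0.0],
                    [0.0, 0.0,  1.0]])

    assert np.allclose(lam @ lam.T, np.eye(3))   # orthogonality, Eq. (70)

    v = np.array([1.0, 2.0, 3.0])
    v_prime = lam @ v                            # v'_q = lambda_qr v_r, Eq. (73)
    print(v_prime)   # [sqrt(3)/2 + 1, -1/2 + sqrt(3), 3]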

Transformation Rule for Tensor Components under the Rotation of the Cartesian Coordinate System

We have discussed the transformation rule for a vector, which is a tensor of rank 1. Now, we shall discuss the transformation rules for a general tensor of order n. However, first consider the products of two vectors. Let u and v be two vectors. They can be multiplied in such a way as to produce a scalar. The corresponding multiplication is called a scalar or dot product, which is an inner product. In index notation,

u · v = ui vi.   (77)

It can easily be shown that this dot product remains invariant under the rotation of the coordinate system. The outline of the proof is as follows:

u′i v′i = (λip up)(λiq vq) = δpq up vq = up vp = ui vi.   (78)

Two vectors can also be multiplied in a way that yields a vector. A well-known example is the cross product of two vectors. In index notation, the cross product u × v is represented as εijk uj vk. Two vectors can also be multiplied in a manner that yields a tensor of order 2. This is called the tensor product or outer product, denoted by u ⊗ v. In index notation, it is written as ui vj. It is clear that in three-dimensional space it has 9 distinct components. Now,

u′i v′j = (λip up)(λjq vq) = λip λjq up vq   (79)

and

ui vj = (λpi u′p)(λqj v′q) = λpi λqj u′p v′q.   (80)

Any (physical) quantity that has n² components in an n-dimensional space and follows the transformation rules given by Eq. (79) and Eq. (80) is called a tensor of order two. Thus, if aij and a′ij are the components of the tensor in the unprimed and primed Cartesian coordinate systems, then


a′ij = λip λjq apq;   aij = λpi λqj a′pq.   (81)
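The rule in Eq. (81) is again a single einsum call. The Python/NumPy sketch below (an illustration under the assumption that NumPy is available; not part of the original notes) transforms a second-order tensor with a rotation about the z-axis and also confirms that the trace and determinant, invariants discussed later, are unchanged:

    import numpy as np

    ang = np.radians(40.0)
    c, s = np.cos(ang), np.sin(ang)
    lam = np.array([[ c,   s,  0.0],
                    [-s,   c,  0.0],
                    [0.0, 0.0, 1.0]])       # direction cosines of a rotation

    a = np.random.rand(3, 3)

    # Eq. (81): a'_ij = lam_ip lam_jq a_pq
    a_p = np.einsum('ip,jq,pq->ij', lam, lam, a)
    assert np.allclose(a_p, lam @ a @ lam.T)            # equivalent matrix form

    assert np.isclose(np.trace(a_p), np.trace(a))       # trace unchanged
    assert np.isclose(np.linalg.det(a_p), np.linalg.det(a))   # determinant unchanged
    print('transformation rule verified')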

The transformation rule given in Eq. (81) can be generalized. For example, for a tensor of third order, the rule is

a′ijk = λip λjq λkr apqr;   aijk = λpi λqj λrk a′pqr.   (82)

Similarly, the transformation rule for higher-order tensors can be written. Now, we wish to write the transformation rules in the form of matrix multiplications. Given matrices [aij] and [bij], the matrix product in index notation is defined as

cij = aip bpj.   (83)

The transpose of a matrix is obtained by interchanging the rows and columns of the matrix: the i-jth element of a matrix is the same as the j-ith element of its transpose. With this knowledge it is easy to see that the transformation rule can be written as

A′ = ΛAΛᵀ;   A = ΛᵀA′Λ.   (84)

We have decided to use boldface letters for tensors; here Λ denotes the matrix of direction cosines [λij].

Example 8: Stress at a point is a tensor of rank 2. Therefore, it is possible to use the transformation rule given by Eq. (81) or Eq. (84) for finding the stress components along any axis system. Consider the case of plane stress with stress components σxx, σyy and σxy. Find the stress components in the rotated Cartesian system in which the axis x′ makes an angle θ with the axis x.

Solution: The matrix of direction cosines is given by

[λ] = [  cos θ   sin θ ]
      [ −sin θ   cos θ ]   (85)

and the symmetric stress tensor in the x-y system is given as

[σ] = [ σxx  σxy ]
      [ σxy  σyy ] .   (86)

Using Eq. (84), the stress components in the new system are given by

[σ′] = [λ][σ][λ]ᵀ = [  cos θ   sin θ ] [ σxx  σxy ] [ cos θ  −sin θ ]
                    [ −sin θ   cos θ ] [ σxy  σyy ] [ sin θ   cos θ ]

     = [  σxx cos θ + σxy sin θ    σxy cos θ + σyy sin θ ] [ cos θ  −sin θ ]
       [ −σxx sin θ + σxy cos θ   −σxy sin θ + σyy cos θ ] [ sin θ   cos θ ]   (87)

which provides

[σ′] = [ σxx cos²θ + σyy sin²θ + 2σxy sin θ cos θ                    −σxx sin θ cos θ + σyy sin θ cos θ + σxy(cos²θ − sin²θ) ]
       [ −σxx sin θ cos θ + σyy sin θ cos θ + σxy(cos²θ − sin²θ)      σxx sin²θ + σyy cos²θ − 2σxy sin θ cos θ             ] . (88)

Thus,


σ′xx = σxx cos²θ + σyy sin²θ + 2σxy sin θ cos θ
     = σxx (1 + cos 2θ)/2 + σyy (1 − cos 2θ)/2 + σxy sin 2θ
     = (σxx + σyy)/2 + ((σxx − σyy)/2) cos 2θ + σxy sin 2θ,

σ′yy = σxx sin²θ + σyy cos²θ − 2σxy sin θ cos θ
     = σxx (1 − cos 2θ)/2 + σyy (1 + cos 2θ)/2 − σxy sin 2θ
     = (σxx + σyy)/2 − ((σxx − σyy)/2) cos 2θ − σxy sin 2θ,   (89)

and

σ′xy = −σxx sin θ cos θ + σyy cos θ sin θ + σxy(cos²θ − sin²θ)
     = −((σxx − σyy)/2) sin 2θ + σxy cos 2θ.   (90)
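The plane-stress transformation of Example 8 can be checked numerically against Eqs. (89) and (90). A Python/NumPy sketch (the stress values and the angle are made-up illustrative numbers, not data from the notes):

    import numpy as np

    sxx, syy, sxy = 10.0, 2.0, 3.0         # hypothetical plane-stress components
    th = np.radians(25.0)                  # rotation of the x' axis from x
    c, s = np.cos(th), np.sin(th)

    lam   = np.array([[ c, s],
                      [-s, c]])            # Eq. (85)
    sigma = np.array([[sxx, sxy],
                      [sxy, syy]])         # Eq. (86)

    sigma_p = lam @ sigma @ lam.T          # Eq. (84): sigma' = lam sigma lam^T

    # Closed-form expressions, Eqs. (89) and (90)
    sxx_p = 0.5*(sxx + syy) + 0.5*(sxx - syy)*np.cos(2*th) + sxy*np.sin(2*th)
    syy_p = 0.5*(sxx + syy) - 0.5*(sxx - syy)*np.cos(2*th) - sxy*np.sin(2*th)
    sxy_p = -0.5*(sxx - syy)*np.sin(2*th) + sxy*np.cos(2*th)

    assert np.allclose(sigma_p, [[sxx_p, sxy_p], [sxy_p, syy_p]])
    print(sigma_p)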

Contraction and Quotient Laws

If a force is represented as Fi in index notation and the displacement is represented by di, then the dot product Fi di, which is a scalar, represents the work done. The tensor product of the force and the displacement is denoted by Fi dj, which is a tensor of rank 2. Once we change j to i, the rank of the tensor reduces to zero. This is called the contraction operation. Thus, Fi di is the contraction of Fi dj. Similarly, consider the tensor product of a tensor (of rank 2) σij and a vector nk, which gives σij nk. It can be shown that σij nk is a tensor of order 3. If we replace k by j, the product σij nj is the contraction of σij nk and is of rank 1. Thus, σij nj is a vector that can be denoted by ti. The relation ti = σij nj can be written in matrix form as

t = σn   or   {t} = [σ]{n},   (91)

whichever notation you like. Note that Eq. (91) is the commonly known product of a matrix with a vector (a column matrix). We observe that pre-multiplication of a vector by the tensor provides another vector. The components of the vector t are linear functions of the components of the vector n. Thus, a tensor (of rank 2) can be viewed as a linear map that assigns to each vector another vector. This is an alternative definition of a tensor. Now consider the reverse problem. If it is known that Fi are the components of a vector and Fi di is a scalar, then one can conclude that di are the components of a vector. This is one type of quotient law. Quotient laws basically infer the nature of a quantity based on the outcome of its contracted product with a known quantity. Another quotient law can be written as follows: if σij are the components of a tensor (of rank 2) and σij nj are the components of a vector, then nj are the components of a vector. In a similar way, a number of quotient rules can be written.

Example 9: The components of a symmetric tensor a (symmetry implies aij = aji) are given by


      [ 1   2   3  ]
[a] = [ 2   5   10 ] .   (92)
      [ 3   10  4  ]

The components of a skew-symmetric tensor b (skew-symmetry implies bij = −bji) are given by

      [  0    8   11 ]
[b] = [ −8    0   10 ] .   (93)
      [ −11  −10   0 ]

Find the scalar product of the tensors, aij bij.

Solution: The scalar product aij bij means multiplying each entry of a by the corresponding entry of b and adding all 9 products. Thus,

aij bij = 1·0 + 2·8 + 3·11 + 2·(−8) + 5·0 + 10·10 + 3·(−11) + 10·(−10) + 4·0 = 0.   (94)

We can easily prove that the scalar product of a symmetric tensor with a skew-symmetric tensor is always zero.

Example 10: From the quotient law, show that the mass moment of inertia is a tensor of rank 2.

Solution: The angular momentum L is defined as

L = Iω,   (95)

where L and the angular velocity ω are both vectors. In index notation, Eq. (95) is written as

Li = Iij ωj.   (96)

From the direct relation, it is easy to prove that if I is a tensor, the right-hand side of Eq. (96) represents the components of a vector. The inverse relation is: if L and ω are vectors, then I is a tensor (of rank 2).

Some Important Definitions and Properties of Tensors

The components of a tensor (of rank 2) can be represented in the form of a matrix. Thus, many properties and definitions are common between matrices and tensors. However, the components of a matrix need not follow the transformation rule, and a matrix need not represent any physical quantity. If a matrix represents a physical quantity and its components follow the transformation rule, then it is a tensor. Now, we briefly discuss some properties of tensors. In many cases, the details are left for the reader to carry out as exercise problems. Whenever we refer to a tensor without specifying its rank, a tensor of rank 2 is understood.

(1) Zero tensor: If all the components of a tensor are zero, the tensor is called the zero tensor 0. The product of this tensor with any vector v gives the zero vector. Thus,

0v = 0.   (97)

Note that 0 on the left-hand side of the above equation should be understood as a tensor having 9 components in three-dimensional space, whereas 0 on the right-hand side is a vector having 3 components in three-dimensional space.

(2) Identity tensor: The tensor I whose components are δij is referred to as the identity tensor. For any vector v,

Iv = v.   (98)


(3) Product of two tensors: The ordinary product C of two tensors (of rank 2) A and B is a tensor of rank 2 and is written as

C = AB.   (99)

In index notation,

Cik = Aij Bjk.   (100)

Thus, it is a contracted product. One can also have the scalar product defined by

c = Aij Bij,   (101)

or the non-contracted tensor product defined by

Cijkl = Aij Bkl.   (102)

Note that A and B commute in the scalar product.

(4) Transpose of a tensor: The transpose of a tensor is obtained by interchanging the indices of its components. Thus,

(Aᵀ)ij = Aji.   (103)

It can be shown that for all vectors u and v,

Au · v = u · Aᵀv.   (104)

Also, for tensors A and B,

(A + B)ᵀ = Aᵀ + Bᵀ;   (AB)ᵀ = BᵀAᵀ.   (105)

(5) Additive decomposition of a tensor into a symmetric and a skew part: A tensor A can be decomposed into a symmetric part E and a skew-symmetric part W, such that

A = E + W,   (106)

where

E = ½(A + Aᵀ);   W = ½(A − Aᵀ).   (107)

(6) Inverse of a tensor: Given a tensor A, if there exists a tensor B such that

AB = I,   (108)

then B is called the inverse of A. If B is equal to Aᵀ, the tensor A is called an orthogonal tensor. An orthogonal tensor whose determinant is +1 is called a proper orthogonal tensor. If the determinant of A is 0, then the matrix corresponding to A is called a singular matrix and the tensor is non-invertible. The reader is already familiar with the procedure for determining the inverse of a matrix. In index notation, the i-jth component of the tensor B is given by

Bij = (1/(2 det(A))) εjpq εirs Apr Aqs.   (109)

(7) Invariants of a tensor: In three-dimensional space, a tensor A has three principal invariants (quantities that remain unchanged during a coordinate transformation), as described below:
(i) First invariant I_A: In index notation, it is written as Aii, which is called the trace of A and is denoted by tr(A).
(ii) Second invariant II_A: It is given by

II_A = ½[(tr A)² − tr(A²)] = | A11  A12 | + | A22  A23 | + | A11  A13 | .   (110)
                             | A21  A22 |   | A32  A33 |   | A31  A33 |

(iii) Third invariant III_A: The determinant of A is the third invariant.

(8) Positive definite tensor: A tensor A is a positive definite tensor if v · Av > 0 for all non-zero vectors v. It is called positive semi-definite if v · Av ≥ 0.

(9) Negative definite tensor: A tensor A is a negative definite tensor if v · Av < 0 for all non-zero vectors v. It is called negative semi-definite if v · Av ≤ 0.

Eigenvalues of a Tensor

It is known that a tensor A carries out a linear transformation of a vector x by the relation

Ax = b.   (111)

It is possible that for some non-zero vector x, the vector b is parallel to x, i.e., b = λx. Thus, for some scalar λ and some non-zero vector x, the following relation holds good:

Ax = λx.   (112)

The above equation is called an eigenvalue problem; λ is called the eigenvalue and x the eigenvector. Eq. (112) may also be written as

(A − λI)x = 0.   (113)

Equation (113) implies that

det(A − λI) = |A − λI| = 0.   (114)

The roots of the above equation provide the eigenvalues. For a particular eigenvalue, Eq. (112) may be used for finding the corresponding eigenvector. Note that eigenvectors are not unique; one can obtain a normalized eigenvector whose magnitude is 1. We shall prove that the eigenvalues of a Hermitian matrix are real and that the eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are orthogonal to one another. First, let us define a Hermitian matrix. Given a matrix A, the complex conjugate matrix A* is formed by taking the complex conjugate of each element. The adjoint of A is formed by transposing A*. The matrix is called Hermitian (or self-adjoint) if A = (A*)ᵀ. Let xi and xj be two eigenvectors of a Hermitian matrix, corresponding to the eigenvalues λi and λj; then

Axi = λi xi,   (115a)
Axj = λj xj.   (115b)

Pre-multiplying both sides of Eq. (115a) by (x*j)ᵀ and of Eq. (115b) by (x*i)ᵀ, we get
(x*j)ᵀAxi = λi (x*j)ᵀxi,   (116a)
(x*i)ᵀAxj = λj (x*i)ᵀxj.   (116b)

Taking the adjoint of Eq. (116b),

(x*j)ᵀ(A*)ᵀxi = λ*j (x*j)ᵀxi.   (117)

As A is Hermitian, the above equation can be written as

(x*j)ᵀAxi = λ*j (x*j)ᵀxi.   (118)

Comparing it with Eq. (116a), we get

(λ*j − λi)(x*j)ᵀxi = 0.   (119)

In the above equation, replacing j by i,

(λ*i − λi)(x*i)ᵀxi = 0.   (120)

As (x*i)ᵀxi is a positive number, λ*i = λi. This implies that λi is real. This result holds good for a real symmetric matrix, as it is a special case of a Hermitian matrix.


Now, for a real symmetric matrix, if λi and λj are two distinct eigenvalues, then Eq. (115) holds good. Pre-multiplying both sides of Eq. (115a) by (xj)ᵀ and of Eq. (115b) by (xi)ᵀ, we get

(xj)ᵀAxi = λi (xj)ᵀxi,   (121a)
(xi)ᵀAxj = λj (xi)ᵀxj.   (121b)

Taking the transpose of Eq. (121b) and using the fact that A is symmetric, we get

(xj)ᵀAxi = λj (xj)ᵀxi.   (122)

Subtracting Eq. (122) from Eq. (121a), we get

(λi − λj)(xj)ᵀxi = 0.   (123)

As the two eigenvalues are distinct, the above equation implies that

(xj)ᵀxi = 0.   (124)

Thus, the eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are orthogonal.

Example 11: A sheet is subjected to a plane stress condition. The symmetric stress components at a point are as follows: σx = 10 MPa, σy = 2 MPa, σxy = 3 MPa. Find the principal stresses and the principal directions.

Solution: In matrix form, the stress components are represented as

[σij] = [ 10  3 ]
        [  3  2 ] .   (125)

The eigenvalues of the above matrix are the principal stresses and the eigenvectors the principal directions. Let λ be an eigenvalue; then

| 10 − λ     3   |
|    3     2 − λ | = 0   or   λ² − 12λ + 11 = 0,   (126)

which gives λ = 11 MPa, 1 MPa. Thus the maximum principal stress is 11 MPa and the minimum principal stress is 1 MPa. Let us find the eigenvector corresponding to the eigenvalue of 11 MPa. By Eq. (113):

[ 10 − 11     3    ] {x1}
[    3      2 − 11 ] {x2} = 0,   (127)

which represents the following two equations:

−x1 + 3x2 = 0;   3x1 − 9x2 = 0.   (128)

The second equation is a scaled (by a factor of −3) version of the first equation. Thus, effectively there is one equation, which gives x1 = 3x2. Thus, we get multiple solutions. Taking x2 = α (an arbitrary constant), x1 is 3α. The normalized eigenvector components are


n1 = 3α/√(9α² + α²) = 3/√10;   n2 = α/√(9α² + α²) = 1/√10.   (129)

These are the direction cosines of the first eigenvector (with n3 = 0), corresponding to the first principal direction. For the principal stress of 1 MPa, Eq. (113) gives

[ 10 − 1    3   ] {x1}
[   3     2 − 1 ] {x2} = 0,   (130)

which represents the following two equations:

9x1 + 3x2 = 0;   3x1 + x2 = 0.   (131)

The first equation is a scaled (by a factor of 3) version of the second equation. Thus, effectively there is one equation, which gives x2 = −3x1. Thus, we get multiple solutions. Taking x1 = α (an arbitrary constant), x2 is −3α. The normalized eigenvector components are

n1 = α/√(9α² + α²) = 1/√10;   n2 = −3α/√(9α² + α²) = −3/√10.   (132)

It can easily be verified that the two eigenvectors are orthogonal to each other, i.e., their dot product is zero.

Polar Decomposition of a Tensor

Every invertible tensor A can be decomposed into an orthogonal tensor Q and a positive definite symmetric tensor U such that

A = QU.   (133)

Similarly, it can be decomposed into an orthogonal tensor Q and a positive definite symmetric tensor V such that

A = VQ.   (134)

Starting from Eq. (133),

AᵀA = (QU)ᵀ(QU) = UᵀQᵀQU = UᵀIU = UᵀU = UU = U².   (135)
Similarly, starting from Eq. (134), it can be shown that

AAᵀ = V².   (136)

It can also easily be shown that

V = QUQᵀ.   (137)
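A polar decomposition can be computed from the singular value decomposition. The Python/NumPy sketch below (added for illustration; the matrix is an arbitrary invertible example, not from the notes) verifies Eqs. (133)-(137):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.5, 3.0, 1.0],
                  [0.0, 1.0, 2.5]])        # an arbitrary invertible tensor

    W, S, Vt = np.linalg.svd(A)            # A = W diag(S) Vt
    Q = W @ Vt                             # orthogonal factor
    U = Vt.T @ np.diag(S) @ Vt             # right stretch: symmetric, positive definite
    V = W @ np.diag(S) @ W.T               # left stretch: symmetric, positive definite

    assert np.allclose(A, Q @ U)           # Eq. (133)
    assert np.allclose(A, V @ Q)           # Eq. (134)
    assert np.allclose(U @ U, A.T @ A)     # Eq. (135)
    assert np.allclose(V @ V, A @ A.T)     # Eq. (136)
    assert np.allclose(V, Q @ U @ Q.T)     # Eq. (137)
    print('polar decomposition verified')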

Tensor Calculus

Let g be a function whose values are scalars, vectors or tensors and whose domain is an open interval D of real numbers. The derivative ġ(t) of g at t, if it exists, is defined by

ġ(t) = dg(t)/dt = lim(α→0) (1/α)[g(t + α) − g(t)].   (138)

This implies that if the components of a tensor are represented in matrix form, the derivative of the tensor can also be represented in matrix form, in which each entry is the derivative of the corresponding individual entry. Here, it is assumed that the axes do not change with time.

Example 12: The components of a tensor are given as


        [ 2t   t²   sin t ]
[aij] = [ t    5    eᵗ    ] .   (139)
        [ t³   7    ln(t) ]

Find the components of its derivative with respect to time.

Solution: By differentiating the individual components we get

        [ 2    2t   cos t ]
[ȧij] = [ 1    0    eᵗ    ] .   (140)
        [ 3t²  0    1/t   ]

We can very easily prove the following relation for a tensor whose i-jth component is aij:

∂aij/∂apq = δip δjq.   (141)

It states that if a component is partially differentiated with respect to itself, we get 1; if it is partially differentiated with respect to another component, we get 0. We shall adopt the comma notation, which means

(...),i = ∂(...)/∂xi.   (142)

Now, note that

xi,j = ∂xi/∂xj = δij.   (143)

In the sequel, we shall describe some special derivatives.

(1) Gradient: If f is a scalar field, i.e., f is a function of the coordinates, then it can be shown that ai = f,i are the components of a vector. (For this, one has to show that the transformation rule is followed.) This vector is called the gradient of f and is denoted by grad f or ∇f. The operator ∇ may be written as ∂(...)/∂xi. We know that for an infinitesimal change in the values of the coordinates, the infinitesimal change in the function is given by

df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz.   (144)

In index notation, it is written as

df = f,i dxi   or   df = (∇f)i dxi.   (155)

In vector notation, it is written as

df = ∇f · dx.   (156)

Now, suppose there is a surface given by f(x, y, z) = 0 and we move by an infinitesimal distance on this surface; then dx is tangential to the surface and df is zero. It then follows from Eq. (156) that ∇f is perpendicular to dx. Thus, ∇f is directed along the normal to the surface. If there is a unit vector a, then ∇f · a indicates the change in the function per unit length of movement along a, which is called the directional derivative of f along a. It is denoted by ∂f/∂a. Note that

∂f/∂a = ∇f · a = |∇f| |a| cos θ = |∇f| cos θ,   (157)

where θ is the angle between a and the normal (along ∇f) to the surface. It then follows that the derivative along the normal direction ∇f is the maximum. Thus, ∇f is the direction of maximum increase in the function value. The concept of gradient can be extended to vectors and tensors as well. If vi are the components of a vector field, then vi,j represent the components of the gradient of the vector field. It can be shown that it is a tensor of rank 2. Similarly, if aij are the components of a tensor field, then aij,k represent the components of the gradient of the tensor and form a tensor of rank 3.

(2) Divergence: Let vi be the components of a vector field; then the gradient of the vector field is represented in index notation as vi,j. Contracting it once, we get vi,i, which is called the divergence of the vector field. As the contraction operation reduces the rank of a tensor field by 2, vi,i is of rank 0 and is thus a scalar field. Similarly, if aij are the components of a tensor field, then aij,k represent the components of the gradient of the tensor field. Applying the contraction operation, we get aij,j, which represents the divergence of the tensor field. The divergence is denoted by div. If a is a tensor field of rank 2, div a or ∇ · a is a vector field.

(3) Curl: The curl of a vector field is a vector field. If vi represent the components of a vector field, then the curl of the vector field v is denoted by curl v or ∇ × v and its ith component is given by

(∇ × v)i = εimn vn,m.   (158)

It is obvious that the curl of a constant vector field is a zero vector. Let us find the value of curl grad f. The nth component of grad f is f,n. Therefore, from Eq. (158), the ith component of curl grad f is given by

(curl grad f)i = εimn f,nm.   (159)

In the above expression, we interchange the dummy indices to get

(curl grad f)i = εinm f,mn.   (160)

Note that

εinm = −εimn   (161)

and

f,mn = f,nm.   (162)

Therefore, Eq. (160) can be written as

(curl grad f)i = −εimn f,nm.   (163)

Comparing Eqs. (159) and (163), we see that (curl grad f)i = −(curl grad f)i, or (curl grad f)i = 0.

Hence, the curl of the gradient of a scalar field is zero.

(4) Laplacian: The Laplacian of a scalar field f is f,ii, of a vector field with components vi is vi,jj, and of a tensor field with components aij is aij,kk. The Laplacian operator is denoted by ∇². Note that for the x-y-z coordinate system,

∇² ≡ ∂²/∂x² + ∂²/∂y² + ∂²/∂z² ≡ div(grad).   (164)
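These differential identities are easy to check symbolically. The following Python/SymPy sketch (added for illustration; SymPy and the sample field are assumptions, not part of the original notes) computes the gradient and Laplacian of a sample scalar field and confirms that curl(grad f) = 0:

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')
    X = (x1, x2, x3)
    f = x1**2 * x2 + sp.sin(x3)                          # a sample scalar field

    grad_f = [sp.diff(f, xi) for xi in X]                # components f,i
    div_grad_f = sum(sp.diff(g, xi) for g, xi in zip(grad_f, X))
    lap_f = sum(sp.diff(f, xi, 2) for xi in X)           # f,ii
    assert sp.simplify(div_grad_f - lap_f) == 0          # Eq. (164): div(grad f) = Laplacian of f

    def curl(v):
        # (curl v)_i = eps_imn v_{n,m}, Eq. (158)
        return [sum(sp.LeviCivita(i, m, n) * sp.diff(v[n], X[m])
                    for m in range(3) for n in range(3))
                for i in range(3)]

    assert all(sp.simplify(c) == 0 for c in curl(grad_f))   # curl(grad f) = 0
    print(grad_f, lap_f)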


Divergence theorem
(i) Let V be the volume of a three-dimensional region bounded by a closed regular surface S; then for a scalar field f defined in V and on S,

∫_V grad f dV = ∫_S f n dS,   (165)

where n is the unit outward normal to S.
(ii) Let V be the volume of a three-dimensional region bounded by a closed regular surface S; then for a vector field v defined in V and on S,

∫_V div v dV = ∫_S v · n dS,   (166)

where n is the unit outward normal to S. Practice Problem 28 provides the corresponding theorem for a second-order tensor.

Stokes' theorem
Let C be a simple closed curve in three-dimensional space and S be an open regular surface bounded by C. Then, for a vector field v defined on S as well as on C,

∮_C v · t ds = ∫_S (curl v) · n dS,   (167)

where t is the unit vector tangent to C, which is assumed to be positively oriented relative to the unit normal n to S. Practice Problem 29 extends this theorem to tensors.
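The divergence theorem can be checked numerically on a simple domain. The Python/NumPy sketch below (added for illustration; the field, the unit cube and the midpoint-rule grid are assumptions, not part of the original notes) uses v = (x²y, y²z, z²x), for which both sides of Eq. (166) equal 1.5:

    import numpy as np

    N = 200
    h = 1.0 / N
    c = (np.arange(N) + 0.5) * h                    # midpoint grid on [0, 1]
    X, Y, Z = np.meshgrid(c, c, c, indexing='ij')

    # v = (x^2 y, y^2 z, z^2 x)  =>  div v = 2xy + 2yz + 2zx
    vol_integral = np.sum(2*X*Y + 2*Y*Z + 2*Z*X) * h**3

    # Outward flux: v.n vanishes on the faces x = 0, y = 0 and z = 0.
    Sf, Tf = np.meshgrid(c, c, indexing='ij')       # parametrize each unit face
    flux = (np.sum(Sf)      # face x = 1: v.n = x^2 y = y
            + np.sum(Tf)    # face y = 1: v.n = y^2 z = z
            + np.sum(Sf)    # face z = 1: v.n = z^2 x = x
            ) * h**2

    print(vol_integral, flux)                       # both approach 1.5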
Practice Problems:

Q.1: For an incompressible fluid, the continuity equation in the Eulerian form is

∂u/∂x + ∂v/∂y + ∂w/∂z = 0,

where u, v and w are the components of the velocity field along the x, y and z directions, respectively. Express this equation in index notation.

Q.2: The three-dimensional stress equilibrium equations are given by

σij,j + bi = 0,

where σ is the stress tensor and b is the body force per unit volume. Write down the stress equilibrium equations in unabridged notation.

Q.3: Evaluate the following expressions: (i) δii (ii) εijk εijk (iii) εijk εkji (iv) δij δik δjk

Q.4: Prove the following ε-δ identity: εijk εpqk = δip δjq − δiq δjp.

Q.5: Prove that εijk εpqr = δip(δjq δkr − δjr δkq) + δiq(δjr δkp − δjp δkr) + δir(δjp δkq − δjq δkp).

Q.6: A second-order tensor is a linear transformation that maps vectors to vectors. For example, if [σ] is the matrix containing the stress components, {n} is the vector of direction cosines of a plane and {t} is the traction vector on that plane, then by Cauchy's relation {t} = [σ]{n}. Here, the stress tensor maps the direction cosine vector into the traction vector. The transformation law for vectors is given by


x′k = λki xi.

Using the aforementioned definition of a tensor, find the transformation law for tensors.

Q.7: If a and b are vectors with components ai and bi, respectively, then ai bj are the components of a second-order tensor, called the tensor product a ⊗ b. Prove that
(i) If aij are the components of a second-order tensor A and bi are the components of a vector b, then aij bk are the components of a third-order tensor (known as the tensor product of A and b in that order, denoted A ⊗ b).
(ii) If aij and bij are the components of two second-order tensors A and B, then aij bkl are the components of a fourth-order tensor (known as the tensor product of A and B in that order, denoted A ⊗ B).

Q.8: If aij and bij are the components of two second-order tensors A and B and ci are the components of a vector c, then prove that
(i) aij cj are the components of a vector (known as the product of A and c in that order, denoted Ac).
(ii) aik bkj are the components of a second-order tensor, called the product of A and B in that order and denoted by AB.
(iii) aij bij is a scalar, called the scalar product of A and B and denoted by A·B.

Q.9: Prove the following quotient rules:
(i) Let ai be an ordered triplet related to the xi system. For an arbitrary vector with components bi, if ai bi is a scalar, then ai are the components of a vector.
(ii) Let aij be a 3×3 matrix related to the xi system. For an arbitrary vector with components bi, if aij bj are the components of a vector, then aij are the components of a tensor.

Q.10: By using the transformation law for vectors, show that the vector product a × b is also a vector.

Q.11: Prove that if A is a second-order tensor, then it is a linear operator on vectors and its components are given by aij = ei · Aej.

Also prove that, conversely, if A is a linear operator on vectors and aij are defined by the above equation, then aij are the components of a second-order tensor.

Q.12: Show that every tensor with components aij can be represented in the form A = aij ei ⊗ ej.

Q.13: Prove that (x2, −x1) are the components of a first-order Cartesian tensor in two dimensions, whereas (x2, x1) and (x1², x2²) are not.

Q.14: Let aij and bij be the components of two 3×3 matrices, respectively, and let their scalar product be defined as aij bij. Prove that the scalar product of a symmetric and a skew-symmetric matrix is zero.

Q.15: Prove that the following are three invariants of a tensor A with components aij:
(i) I_A = tr A = aii (tr is short for trace).
(ii) II_A = ½[(tr A)² − tr(A²)] = ½[aii akk − aik aki].
(iii) III_A = (1/6)[(tr A)³ + 2 tr(A³) − 3 tr(A²)(tr A)] = (1/6)[aii ajj akk + 2 aik akm ami − 3 aik aki ajj].


Further, prove that the third invariant can also be written as III_A = det(aij) = εijk ai1 aj2 ak3 = εijk a1i a2j a3k.

Q.16: Let aij be the components of a tensor A. Define

a*ij = ½ εipq εjrs apr aqs.
Show that a*ij are the components of a tensor. If this tensor is denoted by A*, prove the following:
(i) A(A*)ᵀ = (A*)ᵀA = (det A) I.
(ii) If A is invertible, then

A⁻¹ = (1/det(A)) (A*)ᵀ.

(iii) For all vectors a, b,

A*(a × b) = Aa × Ab.

(iv)

For all vectors a, b and c,

(A*)ᵀa · (b × c) = a · (Ab × Ac).

Here A* is called the adjugate or cofactor of A, and (A*)ᵀ is called the adjoint of A.

Q.17: Obtain the expressions for the three invariants of the deviatoric tensor of A.

Q.18: Prove that a number λ is an eigenvalue of a tensor A if and only if it is a real root of the cubic equation

λ³ − I_A λ² + II_A λ − III_A = 0.
Q.19: Prove that if A is a symmetric tensor, then all three roots of the characteristic equation of A are real, and therefore A has exactly three (not necessarily distinct) eigenvalues.

Q.20: Prove that eigenvectors (principal directions) corresponding to two distinct eigenvalues (principal values) of a symmetric tensor are orthogonal.

Q.21: Prove that a symmetric tensor has at least three mutually perpendicular principal directions.

Q.22: Given a symmetric tensor A, show that there exists at least one coordinate system with respect to which the matrix of A is diagonal.

Q.23: Let A be a symmetric tensor with λk as eigenvalues and vk as the corresponding eigenvectors. Show that A can be represented as
A = ∑ (k = 1 to 3) λk (vk ⊗ vk).

This is known as the spectral representation of A.

Q.24: Every invertible tensor A can be represented in the form A = QU = VQ, where Q is an orthogonal tensor and U and V are positive definite symmetric tensors such that U² = AᵀA and V² = AAᵀ. Show that, furthermore, the representations are unique.

Q.25: If Q(t) is an orthogonal tensor, show that (dQ/dt)Qᵀ is a skew tensor.

Q.26: Prove that xi,j = δij and xi,i = 3.


Q.27: Express the divergence and curl of a vector field and of a tensor field in index notation.

Q.28: Prove the following divergence theorem for a tensor. Let V be the volume of a three-dimensional region bounded by a closed regular surface S; then for a tensor field A defined in V and on S,

∫_V div A dV = ∫_S A n dS,

where n is the unit outward normal to S.

Q.29: Prove the following Stokes' theorem for a tensor. Let C be a simple closed curve in three-dimensional space and S be an open regular surface bounded by C. Then, for a tensor field A defined on S as well as on C,

∮_C A t ds = ∫_S (curl A) n dS,

where t is the unit vector tangent to C, which is assumed to be positively oriented relative to the unit normal n to S.

