Linear operators and matrices

Given a basis (e_1, e_2, ..., e_n) of a finite dimensional vector space V, we recall from Section 3.5 that the matrix of components T = [T^i_j] of a linear operator T : V → V with respect to this basis is defined by Eq. (3.6) as

    T e_j = T^i_j e_i,                                          (4.1)

and under a transformation of basis,

    e'_j = A'^i_j e_i,                                          (4.2)

where the matrices A = [A^i_j] and A' = [A'^i_j] are inverse to each other,

    A' = A^{-1},                                                (4.3)

the components of any linear operator transform by

    T'^i_j = A^i_k T^k_l A'^l_j.                                (4.4)

Equation (4.4) can be written in matrix notation as a similarity transformation,

    T' = A T A^{-1}.                                            (4.5)

The main task of this chapter will be to find a basis that provides a standard representation of any given linear operator, called the Jordan canonical form. This representation is uniquely determined by the operator and encapsulates all its essential properties. The proof given in Section 4.2 is rather technical and may be skipped on a first reading. It would, however, be worthwhile to understand its appearance, summarized at the end of that section, as it has frequent applications in mathematical physics. Good references for linear operators and matrices in general are [1-3], while a detailed discussion of the Jordan canonical form can be found in [4].

It is important to realize that we are dealing with linear operators on free vector spaces. This concept will be defined rigorously in Chapter 6, but essentially it means that the vector spaces have no further structure imposed on them. A number of concepts, such as 'symmetric', 'hermitian' and 'unitary', which often appear in matrix theory, have no place in free vector spaces. For example, the requirement that T be a symmetric matrix would read T^i_j = T^j_i in components, an awkward-looking relation that violates the index conventions given in Section 3.6. In Chapter 5 we will find a proper context for notions such as 'symmetric transformations' and 'hermitian transformations'.
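The basis-independence expressed by the similarity transformation (4.5) is easy to verify numerically. The following sketch (Python with NumPy; the matrices are chosen purely for illustration and do not come from the text) checks that T and T' = A T A^{-1} share their trace, determinant and eigenvalues:

```python
import numpy as np

# Illustrative matrices: T is the matrix of an operator in one basis,
# A an invertible change-of-basis matrix.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

T_prime = A @ T @ np.linalg.inv(A)   # similarity transformation T' = A T A^{-1}

# Basis-independent quantities of the operator are unchanged:
print(np.trace(T), np.trace(T_prime))
print(np.linalg.det(T), np.linalg.det(T_prime))
```

Both matrices represent the same operator, so any quantity built from the operator alone must agree between them.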
4.1 Eigenspaces and characteristic equations

Invariant subspaces

A subspace U of V is said to be invariant under a linear operator S : V → V if

    S U = {S u | u ∈ U} ⊆ U.

In this case the action of S restricted to the subspace U, S|_U, gives rise to a linear operator on U.

Example 4.1  Let V be a three-dimensional vector space with basis {e_1, e_2, e_3}, and S : V → V an operator such that S f_1 = -f_1 and S f_2 = 2 f_2, where f_1 = e_1 - e_3 and f_2 = e_1 + e_2 + e_3. Let U be the subspace of all vectors of the form (a + b)e_1 + b e_2 + (-a + b)e_3, where a and b are arbitrary scalars. This subspace is spanned by f_1 and f_2, and it is invariant under S, since

    S(a f_1 + b f_2) = -a f_1 + 2b f_2 ∈ U.

Exercise: Show that if both U and W are invariant subspaces of V under an operator S, then so are their intersection U ∩ W and their sum U + W = {u + w | u ∈ U, w ∈ W}.

Suppose dim U = m < n = dim V, and choose a basis of V whose first m vectors span the invariant subspace U. The n × n matrix S = [S^i_j] then has the upper block triangular form

    S = ( S_1  S_2 )
        ( 0    S_3 )

where S_1 is the m × m matrix of components of the restricted operator S|_U, and S_2 and S_3 are matrices of orders m × (n - m) and (n - m) × (n - m) respectively. If a basis can be found in which S_2 = 0 as well, then the subspace spanned by the last n - m basis vectors is also invariant and the matrix is block diagonal.

Example 4.2  In Example 4.1 set f_3 = e_2. The vectors f_1, f_2, f_3 form a basis adapted to the invariant subspace U spanned by f_1 and f_2, and with respect to this basis the matrix of S has upper block triangular form, with the 2 × 2 block diag(-1, 2) in the top left corner. On the other hand, a one-dimensional subspace W spanned by a vector f'_3 of the form e_3 - α e_1 - β e_2, for suitable scalars α and β, is invariant since S f'_3 = -f'_3, and in the basis {f_1, f_2, f'_3} adapted to the invariant decomposition V = U ⊕ W the matrix takes on the block diagonal form

    S = ( -1  0  0 )
        (  0  2  0 )
        (  0  0 -1 )

Eigenvectors and eigenvalues

Given an operator S : V → V, a scalar λ ∈ K is said to be an eigenvalue of S if there exists a non-zero vector v such that

    S v = λ v   (v ≠ 0),                                        (4.6)

and v is called an eigenvector of S corresponding to the eigenvalue λ. Eigenvectors are those non-zero vectors that are 'stretched' by an amount λ on application of the operator S.
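The invariance computation of Example 4.1 can be replayed numerically. In the sketch below the action of S is completed by assuming S f_3 = 3 f_3 on a third basis vector f_3 = e_2; that extra assumption is ours, since the example only fixes S on U:

```python
import numpy as np

f1 = np.array([1.0, 0.0, -1.0])   # f1 = e1 - e3
f2 = np.array([1.0, 1.0,  1.0])   # f2 = e1 + e2 + e3
f3 = np.array([0.0, 1.0,  0.0])   # e2, completing the basis (our choice)
F = np.column_stack([f1, f2, f3])

# Assumed action: S f1 = -f1, S f2 = 2 f2 (as in Example 4.1),
# plus S f3 = 3 f3, an extra assumption needed to pin S down completely.
S = F @ np.diag([-1.0, 2.0, 3.0]) @ np.linalg.inv(F)

u = 4 * f1 + 5 * f2               # an arbitrary vector of U
print(S @ u)                      # equals -4 f1 + 10 f2, still in U
```

Whatever eigenvalue is assumed on f_3, vectors of U are mapped back into U, which is exactly the invariance property.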
It is important to stipulate v ≠ 0, since the equation (4.6) always holds for the zero vector, S 0 = λ 0 = 0. For any scalar λ ∈ K, let

    V_λ = {u | S u = λ u}.                                      (4.7)

The set V_λ is a vector subspace, for

    S u = λ u and S v = λ v  ⟹  S(u + a v) = S u + a S v = λ(u + a v) for all a ∈ K.

For every λ, the subspace V_λ is invariant under S:

    u ∈ V_λ  ⟹  S u = λ u  ⟹  S(S u) = λ S u  ⟹  S u ∈ V_λ.

If λ is an eigenvalue of S, then V_λ consists of the set of all eigenvectors having eigenvalue λ, supplemented with the zero vector {0}. If λ is not an eigenvalue of S, then V_λ = {0}.

If {e_1, e_2, ..., e_n} is any basis of the vector space V and v = v^i e_i any vector of V, let v denote the column vector of components. By Eq. (3.8) the matrix equivalent of (4.6) is

    S v = λ v,                                                  (4.8)

where S is the matrix of components of S. Under a change of basis (4.2) we have v' = A v and, from (4.5), S' = A S A^{-1}. Hence, if v satisfies (4.8) then v' is an eigenvector of S' with the same eigenvalue λ,

    S' v' = A S A^{-1} A v = A S v = λ A v = λ v'.

This result is not unexpected, since Eq. (4.8) and its primed version are simply representations, with respect to different bases, of the same basis-independent equation (4.6).

Define the m-th power S^m of an operator inductively, by setting S^0 = id_V and S^{m+1} = S ∘ S^m. Thus S^1 = S, S^2 = S ∘ S, etc. If

    p(x) = a_0 + a_1 x + a_2 x² + ... + a_n x^n

is any polynomial with coefficients a_i ∈ K, the operator polynomial p(S) is defined in the obvious way,

    p(S) = a_0 id_V + a_1 S + a_2 S² + ... + a_n S^n.

If λ is an eigenvalue of S and v a corresponding eigenvector, then v is an eigenvector of any power S^m corresponding to eigenvalue λ^m. For m = 0, S^0 v = id_V v = λ^0 v since λ^0 = 1, and the proof follows by induction: assuming S^{m-1} v = λ^{m-1} v, then by linearity

    S^m v = S(S^{m-1} v) = λ^{m-1} S v = λ^m v.

For a polynomial p(x), it follows immediately that v is an eigenvector of the operator p(S) with eigenvalue p(λ),

    p(S) v = p(λ) v.                                            (4.9)

Characteristic equation

The matrix equation (4.8) can be written in the form

    (S - λ I) v = 0.                                            (4.10)

A necessary and sufficient condition for this equation to have a non-trivial solution v ≠ 0 is

    f(λ) ≡ det(S - λ I) = 0,                                    (4.11)

called the characteristic equation of S.
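Numerically, the equivalence between eigenvalues and roots of the characteristic polynomial can be checked directly. A sketch with NumPy (the matrix is illustrative):

```python
import numpy as np

S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

coeffs = np.poly(S)          # coefficients of det(lambda*I - S), leading 1
roots = np.roots(coeffs)     # roots of the characteristic polynomial
eigvals = np.linalg.eigvals(S)

# Each eigenvalue lambda makes S - lambda*I singular:
for lam in eigvals:
    print(lam, np.linalg.det(S - lam * np.eye(3)))
```

The roots of the characteristic polynomial agree with the eigenvalues, and each of them makes det(S - λ I) vanish, as (4.11) requires.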
The function f(λ) = det(S - λ I) is a polynomial of degree n in λ, known as the characteristic polynomial of S. If the field of scalars is the complex numbers, K = ℂ, then the fundamental theorem of algebra implies that there exist complex numbers λ_1, λ_2, ..., λ_n such that

    f(λ) = (λ_1 - λ)(λ_2 - λ) ··· (λ_n - λ),  so that  f(0) = λ_1 λ_2 ··· λ_n = det S.    (4.12)

As some of these roots of the characteristic equation may appear repeatedly, we can write the characteristic polynomial in the form

    f(λ) = (λ_1 - λ)^{p_1} (λ_2 - λ)^{p_2} ··· (λ_r - λ)^{p_r},  where  p_1 + p_2 + ... + p_r = n.    (4.13)

Since for each λ = λ_i there exists a non-zero complex vector solution v ≠ 0 of the linear set of equations given by (4.10), the eigenvalues of S must all come from the set of roots {λ_1, ..., λ_r}. The positive integer p_i is known as the multiplicity of the eigenvalue λ_i.

Example 4.3  When the field of scalars is the real numbers ℝ there will not in general be real eigenvectors corresponding to complex roots of the characteristic equation. For example, let A be the operator on ℝ² defined by the following action on the standard basis vectors e_1 = (1, 0) and e_2 = (0, 1):

    A e_1 = e_2,  A e_2 = -e_1.

The characteristic polynomial is

    f(λ) = det(A - λ I) = λ² + 1,

whose roots are λ = ±i. The operator A thus has no real eigenvalues and eigenvectors. However, if we regard the field of scalars as being ℂ and treat A as operating on ℂ², then it has complex eigenvectors

    u_± = e_1 ∓ i e_2,  A u_± = ±i u_±.

It is worth noting that, since A² e_1 = A e_2 = -e_1 and A² e_2 = -e_2, the operator A satisfies its own characteristic equation,

    A² + id = 0.

This is a simple example of the important Cayley-Hamilton theorem (see Theorem 4.3 below).

Example 4.4  Let V be a three-dimensional complex vector space with basis e_1, e_2, e_3 and S : V → V the operator whose matrix with respect to this basis is

    S = ( 1 1 0 )
        ( 0 1 0 )
        ( 0 0 2 )

The characteristic polynomial is

    f(λ) = det(S - λ I) = (1 - λ)²(2 - λ).

Hence the eigenvalues are 1 and 2, and it is trivial to check that the eigenvector corresponding to 2 is e_3. Let u = x e_1 + y e_2 + z e_3 be an eigenvector with eigenvalue 1, S u = u; then

    x + y = x,  y = y,  2z = z.

Hence y = z = 0 and u = x e_1.
Thus, up to scalar multiples, e_1 is the only eigenvector corresponding to the eigenvalue 1. Note that while e_2 is not an eigenvector, it is annihilated by (S - id_V)², for

    S e_2 = e_1 + e_2  ⟹  (S - id_V) e_2 = e_1  ⟹  (S - id_V)² e_2 = (S - id_V) e_1 = 0.

Operators of the form S - λ_i id_V and their powers (S - λ_i id_V)^m, where λ_i are eigenvalues of S, will make regular appearances in what follows. These operators evidently commute with each other, and there is no ambiguity in writing them as (S - λ_i)^m.

Theorem 4.1  Any set of eigenvectors corresponding to distinct eigenvalues of an operator S is linearly independent.

Proof: Let {f_1, f_2, ..., f_k} be a set of eigenvectors of S corresponding to eigenvalues λ_1, ..., λ_k, no pair of which are equal,

    S f_i = λ_i f_i   (i = 1, ..., k),

and let c_1, c_2, ..., c_k be scalars such that

    c_1 f_1 + c_2 f_2 + ... + c_k f_k = 0.

If we apply the polynomial P_1(S) = (S - λ_2)(S - λ_3) ··· (S - λ_k) to this equation, then all terms except the first are annihilated, leaving

    c_1 P_1(λ_1) f_1 = 0.

Hence

    c_1 (λ_1 - λ_2)(λ_1 - λ_3) ··· (λ_1 - λ_k) f_1 = 0,

and since f_1 ≠ 0 and all the factors (λ_1 - λ_i) ≠ 0 for i = 2, ..., k, it follows that c_1 = 0. Similarly, c_2 = ··· = c_k = 0, proving linear independence of f_1, ..., f_k.  □

If the operator S : V → V has n distinct eigenvalues λ_1, ..., λ_n, where n = dim V, then Theorem 4.1 shows that the eigenvectors f_1, f_2, ..., f_n are linearly independent and form a basis of V. With respect to this basis the matrix of S is diagonal and its eigenvalues lie along the diagonal,

    S = ( λ_1  0  ...  0  )
        ( 0   λ_2 ...  0  )
        ( ...             )
        ( 0    0  ... λ_n )

Conversely, any operator whose matrix is diagonalizable has a basis of eigenvectors (the eigenvalues need not be distinct for the converse). The more difficult task lies in the classification of those cases, such as Example 4.4, where an eigenvalue has multiplicity p > 1 but there are fewer than p independent eigenvectors corresponding to it.

Minimal annihilating polynomial

The space of linear operators L(V, V) is a vector space of dimension n², since it can be put into one-to-one correspondence with the space of n × n matrices.
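Since dim L(V, V) = n², every operator satisfies some polynomial equation; in fact the characteristic polynomial itself annihilates S (the Cayley-Hamilton theorem, Theorem 4.3, cited below). A quick numerical sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # illustrative matrix
n = A.shape[0]

coeffs = np.poly(A)               # characteristic polynomial, leading coefficient 1

# Evaluate the characteristic polynomial at A itself (Horner's scheme).
P = np.zeros((n, n))
for c in coeffs:
    P = P @ A + c * np.eye(n)

print(np.max(np.abs(P)))          # ~ 0: A satisfies its own characteristic equation
```

The Horner loop computes A³ + c_1 A² + c_2 A + c_3 I, which vanishes up to rounding error.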
Hence the first n² + 1 powers

    S^0 = id_V, S, S², ..., S^{n²}

of any linear operator S on V cannot be linearly independent, since there are n² + 1 operators in all. Thus S must satisfy a polynomial equation,

    p(S) = c_0 id_V + c_1 S + c_2 S² + ... + c_{n²} S^{n²} = 0,

not all of whose coefficients c_0, c_1, ..., c_{n²} vanish.

Exercise: Show that the matrix equivalent of any such polynomial equation is basis independent, by showing that any similarity transform S' = A S A^{-1} of S satisfies the same polynomial equation, p(S') = 0.

Let

    Δ(x) = x^k + c_1 x^{k-1} + ... + c_k

be the polynomial with leading coefficient 1 of lowest degree k such that Δ(S) = 0; it is called the minimal annihilating polynomial of S. Over the complex field it factorizes as

    Δ(x) = (x - λ_1)^{k_1} (x - λ_2)^{k_2} ··· (x - λ_r)^{k_r},

where λ_1, ..., λ_r are precisely the eigenvalues of S, and by the Cayley-Hamilton theorem (Theorem 4.3) Δ(x) divides the characteristic polynomial.

Lemma 4.4  If λ is not an eigenvalue of S and (S - λ)^r u = 0 for some r > 0, then u = 0. More generally, if (S - λ_i)^r u = 0 for some r > 0, then (S - λ_i)^{k_i} u = 0.

Proof: The proof of the first statement proceeds by induction on r.

Case r = 1: Let u be any vector such that (S - λ)u = 0, so that u satisfies the eigenvector equation S u = λ u. Then

    0 = Δ(S) u = (S - λ_1)^{k_1} ··· (S - λ_r)^{k_r} u = (λ - λ_1)^{k_1} ··· (λ - λ_r)^{k_r} u,

and since λ ≠ λ_j for all j, it follows that u = 0, which proves the case r = 1.

Case r > 1: Suppose the lemma has been proved for r - 1. Then

    (S - λ)^r u = 0  ⟹  (S - λ)^{r-1}((S - λ)u) = 0
                     ⟹  (S - λ)u = 0   by the induction hypothesis
                     ⟹  u = 0          by the case r = 1,

which concludes the induction. The second statement follows on writing (x - λ_i)^{k_i} = a(x)(x - λ_i)^r + b(x)Δ(x) for suitable polynomials a and b (possible for r ≥ k_i, since (x - λ_i)^{k_i} is then the greatest common divisor of (x - λ_i)^r and Δ(x)), and applying both sides to u.  □

Let V_1 = {u ∈ V | (S - λ_1)^{k_1} u = 0}. If dim V_1 = p, let {h_1, ..., h_p} be a basis of V_1 and extend it to a basis of V,

    {h_1, ..., h_p, h'_1, ..., h'_{n-p}}.                       (4.16)

Of course, if (x - λ_1)^{k_1} is the only elementary divisor of S then p = n, since V_1 = V: every vector is annihilated by (S - λ_1)^{k_1}. If, however, p < n, the vectors

    h_1, ..., h_p, h̄_1 = (S - λ_1)^{k_1} h'_1, ..., h̄_{n-p} = (S - λ_1)^{k_1} h'_{n-p}    (4.17)

also form a basis of V. To prove their linear independence, suppose that

    c^i h_i + d^a h̄_a = 0.                                      (4.18)

Applying (S - λ_1)^{k_1} annihilates each h_i, leaving (S - λ_1)^{2k_1}(d^a h'_a) = 0. Since 2k_1 > 0, we conclude from Lemma 4.4 that (S - λ_1)^{k_1}(d^a h'_a) = 0; hence d^a h'_a ∈ V_1, and there exist constants e^1, ..., e^p such that

    d^a h'_a = e^i h_i.                                         (4.19)

As the set {h_i, h'_a} is by definition a basis of V, these constants must all vanish, d^a = e^i = 0 for all a = 1, ..., n-p and i = 1, ..., p. Substituting back into (4.18), it follows from the linear independence of h_1, ..., h_p that the c^i all vanish as well. This proves the linear independence of the vectors in (4.17).

Let W = L(h̄_1, ..., h̄_{n-p}) be the subspace spanned by the vectors h̄_a. By construction, every vector x ∈ W is of the form (S - λ_1)^{k_1} y, since this is true of each of the vectors spanning W.
Conversely, suppose x = (S - λ_1)^{k_1} y, and let (y^1, ..., y^n) be the components of y with respect to the basis (4.16); then

    x = (S - λ_1)^{k_1} ( Σ_i y^i h_i + Σ_a y^{p+a} h'_a ) = Σ_a y^{p+a} h̄_a ∈ W.

Hence W consists precisely of all vectors of the form (S - λ_1)^{k_1} y, where y is an arbitrary vector of V. Furthermore, W is an invariant subspace of V, for if x ∈ W then

    x = (S - λ_1)^{k_1} y  ⟹  S x = S (S - λ_1)^{k_1} y = (S - λ_1)^{k_1}(S y)  ⟹  S x ∈ W.

Hence W and V_1 are complementary invariant subspaces, V = V_1 ⊕ W, and the matrix of S with respect to the basis (4.17) has block diagonal form

    S = ( S_1  0  )
        ( 0   T_1 )

where S_1 is the matrix of S_1 = S|_{V_1} and T_1 that of T_1 = S|_W. Moreover, λ_1 is the only eigenvalue of S_1, for if u ∈ V_1 is an eigenvector of S_1 corresponding to an eigenvalue α, S u = α u, then

    0 = (S - λ_1)^{k_1} u = (α - λ_1)^{k_1} u,

from which it follows that α = λ_1, since u ≠ 0. The characteristic equation of S_1 is therefore

    det(S_1 - λ I) = (-1)^p (λ - λ_1)^p.                        (4.20)

Furthermore, the operator T_1 - λ_1 id_W = (S - λ_1)|_W is invertible. For, let x be an arbitrary vector in W and set x = (S - λ_1)^{k_1} y, where y ∈ V. Let y = y_1 + y_2 be the unique decomposition such that y_1 ∈ V_1 and y_2 ∈ W; then

    x = (S - λ_1)^{k_1} y_2 = (T_1 - λ_1)(S - λ_1)^{k_1 - 1} y_2,

so that T_1 - λ_1 is surjective on W; and since any surjective (onto) linear operator on a finite dimensional vector space is bijective (one-to-one), the map T_1 - λ_1 must be invertible on W. Hence

    det(T_1 - λ_1 I) ≠ 0,                                       (4.21)

and λ_1 cannot be an eigenvalue of T_1. The characteristic equation of S is

    det(S - λ I) = det(S_1 - λ I) det(T_1 - λ I),

and from (4.20) and (4.21) the only way the right-hand side can equal the expression in Eq. (4.13) is if p = p_1 and

    det(T_1 - λ I) = (-1)^{n-p_1} (λ - λ_2)^{p_2} ··· (λ - λ_r)^{p_r}.

Hence the dimension p of the space V_1 is equal to the multiplicity p_1 of the eigenvalue λ_1, and from the Cayley-Hamilton Theorem 4.3 it follows that p_i ≥ k_i. Repeating this process on T_1, and proceeding inductively, it follows that

    V = V_1 ⊕ V_2 ⊕ ··· ⊕ V_r,

and setting

    h_{11}, ..., h_{1p_1}, h_{21}, ..., h_{2p_2}, ..., h_{r1}, ..., h_{rp_r}

to be a basis adapted to this decomposition, the matrix of S has block diagonal form

    S = ( S_1  0  ...  0  )
        ( 0   S_2 ...  0  )
        ( ...             )
        ( 0    0  ... S_r )                                     (4.22)
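For the matrix of Example 4.4 this decomposition can be seen concretely: V_1 = span{e_1, e_2} is annihilated by (S - 1)², and V_2 = span{e_3} by (S - 2). A small numerical check:

```python
import numpy as np

S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

A1 = np.linalg.matrix_power(S - I, 2)   # annihilates V1 = span{e1, e2}
A2 = S - 2 * I                          # annihilates V2 = span{e3}

e1, e2, e3 = I                          # rows of the identity: the basis vectors
print(A1 @ e1, A1 @ e2)                 # zero vectors
print(A2 @ e3)                          # zero vector
```

Together the two generalized eigenspaces exhaust the three-dimensional space, matching the decomposition V = V_1 ⊕ V_2.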
The restricted operators S_i = S|_{V_i} each satisfy a minimal polynomial equation of the form

    (S_i - λ_i)^{k_i} = 0,

so that

    S_i = λ_i id_{V_i} + N_i,  where N_i^{k_i} = 0.             (4.23)

Nilpotent operators

Any operator N satisfying an equation of the form N^k = 0 is called a nilpotent operator. The matrix N of any nilpotent operator satisfies N^k = 0, and is called a nilpotent matrix. From Eq. (4.23), each p_i × p_i matrix S_i in the decomposition (4.22) is a multiple of the unit matrix plus a nilpotent matrix N_i,

    S_i = λ_i I + N_i,  where (N_i)^{k_i} = 0.

We next find a basis that expresses the matrix of any nilpotent operator in a standard (canonical) form.

Let U be a finite dimensional space, not necessarily complex, dim U = p, and N a nilpotent operator on U. Set k to be the smallest positive integer such that N^k = 0. Evidently k = 1 if N = 0, while if N ≠ 0 then k > 1 and N^{k-1} ≠ 0. Define the subspaces X_i of U by

    X_i = {x ∈ U | N^i x = 0}.

These subspaces form an increasing sequence,

    {0} = X_0 ⊆ X_1 ⊆ X_2 ⊆ ··· ⊆ X_k = U,

and all are invariant under N, for if u ∈ X_i then N u also belongs to X_i, since

    N^i (N u) = N (N^i u) = N 0 = 0.

The inclusions are strict in every case, for suppose that X_j = X_{j+1} for some j < k. Then for any u ∈ X_{j+2} we have N u ∈ X_{j+1} = X_j, whence N^{j+1} u = 0 and u ∈ X_{j+1}. Thus X_{j+2} = X_{j+1} and, continuing in this way, X_j = X_k = U, so that N^j = 0 with j < k, contradicting the minimality of k.

A set of vectors {v_1, ..., v_s} is said to be linearly independent with respect to a subspace X if a^1 v_1 + ... + a^s v_s ∈ X only when a^1 = ... = a^s = 0. Set

    r_i = dim X_i - dim X_{i-1} > 0,  so that  p = r_1 + r_2 + ... + r_k.

Lemma 4.5  r_j is the maximum number of vectors in X_j that can form a set that is linearly independent with respect to X_{j-1}.

Proof: Let dim X_{j-1} = q, dim X_j = q' > q, and let u_1, ..., u_q be a basis of X_{j-1}. Suppose {v_1, ..., v_r} is a maximal set of vectors l.i. with respect to X_{j-1}; that is, a set that cannot be extended to a larger such set. Such a maximal set must exist, since any set of vectors that is l.i. with respect to X_{j-1} is linearly independent and therefore cannot exceed q' in number. We show that S = {u_1, ..., u_q, v_1, ..., v_r} is a basis of X_j:

(a) S is a linearly independent set, since

    b^a v_a + c^i u_i = 0

implies firstly that all b^a vanish, by the requirement that the v_a are l.i. with respect to X_{j-1}, and secondly that all c^i = 0, because the u_i are linearly independent.

(b) S spans X_j, else there would exist a vector x ∈ X_j that cannot be expressed as a linear combination of vectors of S, and S ∪ {x} would be linearly independent.
In that case the set of vectors {v_1, ..., v_r, x} would be l.i. with respect to X_{j-1}, for if a^a v_a + b x ∈ X_{j-1} then, from the linear independence of S ∪ {x}, all a^a = 0 and b = 0. This contradicts the maximality of {v_1, ..., v_r}, and S must span the whole of X_j. Hence r = q' - q = dim X_j - dim X_{j-1}, which proves the lemma.  □

Let {h_1, ..., h_{r_k}} be a maximal set of vectors in X_k that is l.i. with respect to X_{k-1}. From Lemma 4.5 we have r_k = dim X_k - dim X_{k-1}. The vectors

    h'_1 = N h_1, h'_2 = N h_2, ..., h'_{r_k} = N h_{r_k}

all belong to X_{k-1} and are l.i. with respect to X_{k-2}, for if

    a^i h'_i ≡ a^i N h_i ∈ X_{k-2}

then

    N^{k-1}(a^i h_i) = N^{k-2}(a^i N h_i) = 0,

from which it follows that

    a^i h_i ∈ X_{k-1}.

Since the h_i are l.i. with respect to X_{k-1} we must have a^1 = ... = a^{r_k} = 0. Hence r_{k-1} ≥ r_k, and applying the same argument to all the other X_i gives

    r_1 ≥ r_2 ≥ ··· ≥ r_k.                                      (4.24)

Now complete the set {h'_1, ..., h'_{r_k}} to a maximal system of vectors in X_{k-1} that is l.i. with respect to X_{k-2},

    h'_1, ..., h'_{r_k}, h'_{r_k + 1}, ..., h'_{r_{k-1}}.

Similarly, define the vectors h''_a = N h'_a (a = 1, ..., r_{k-1}) and extend to a maximal system {h''_1, ..., h''_{r_{k-2}}} in X_{k-2}. Continuing in this way, we form a series of r_1 + r_2 + ... + r_k = dim U vectors that are linearly independent and form a basis of U, and may be displayed in the following scheme:

    h_1    ...  h_{r_k}
    h'_1   ...  h'_{r_k}   ...  h'_{r_{k-1}}
    h''_1  ...  h''_{r_k}  ...  h''_{r_{k-1}}  ...  h''_{r_{k-2}}
    ⋮
    h^{(k-1)}_1  ...                           ...  h^{(k-1)}_{r_1}

Let U_a be the subspace generated by the a-th column, where a = 1, ..., r_1. These subspaces are all invariant under N, since N h^{(j)}_a = h^{(j+1)}_a, and the bottom element of each column is annihilated by N,

    N h ∈ X_0 = {0}.

Since the vectors h^{(j)}_a are linearly independent and form a basis of U, the subspaces U_a are non-intersecting, and

    U = U_1 ⊕ U_2 ⊕ ··· ⊕ U_{r_1},

where the dimension d(a) = dim U_a of the a-th subspace is given by the height of the a-th column; in particular, d(1) = k. If a basis is chosen in U_a by proceeding up the a-th column, starting from the vector in the bottom row, then the matrix of N_a = N|_{U_a} has all components zero except for 1's in the superdiagonal,

    N_a = ( 0 1 0 ... 0 )
          ( 0 0 1 ... 0 )
          ( ⋮         ⋮ )
          ( 0 0 0 ... 1 )
          ( 0 0 0 ... 0 )                                       (4.25)

Exercise: Check this matrix representation by remembering that the components of the matrix of an operator M with respect to a basis {u_i} are given by M u_j = M^i_j u_i.

Selecting a basis for U that runs through the subspaces U_1, ..., U_{r_1} in order,
the matrix of N appears in block diagonal form

    N = ( N_1             )
        (     N_2         )
        (         ⋱       )
        (           N_{r_1} )                                   (4.26)

where each submatrix N_a has the form (4.25).

Jordan canonical form

Let V be a complex vector space and S : V → V a linear operator on V. To summarize the above conclusions: there exists a basis of V such that the operator S has matrix S in block diagonal form (4.22), and each S_i has the form S_i = λ_i I + N_i, where N_i is a nilpotent matrix. The basis can then be further specialized such that each nilpotent matrix is in turn decomposed into a block diagonal form (4.26), in which the submatrices along the diagonal all have the form (4.25). This is called the Jordan canonical form of the matrix.

In other words, if S is an arbitrary n × n complex matrix, then there exists a non-singular complex matrix A such that A S A^{-1} is in Jordan form. The essential features of the matrix S can be summarized by its Segré characteristics, which list, for each eigenvalue λ_i, the sizes (d_{i1}, d_{i2}, ..., d_{im_i}) of its Jordan blocks, where m_i is the number of eigenvectors corresponding to the eigenvalue λ_i and

    Σ_{a=1}^{m_i} d_{ia} = p_i.

The Segré characteristics are determined entirely by properties of the operator S, such as its eigenvalues and its elementary divisors. It is important, however, to realize that the Jordan canonical form only applies in the context of a complex vector space, since it depends critically on the fundamental theorem of algebra. For a real matrix there is no guarantee that a real similarity transformation will convert it to the Jordan form.

Example 4.6  Let S be a transformation on a four-dimensional vector space with basis {e_1, e_2, e_3, e_4}, whose matrix of components S has characteristic equation

    det(S - λ I) = ((λ - 1)² + 1)² = 0,

which has the two roots λ = 1 ± i, both of which are repeated roots. Each root corresponds to just a single eigenvector, written in column-vector form as h_1 and h_2, satisfying

    S h_1 = (1 + i) h_1  and  S h_2 = (1 - i) h_2.

Let f_3 and f_4 be vectors chosen such that,
writing f_1 = h_1 and f_2 = h_2, we find that

    S f_3 = f_1 + (1 + i) f_3  and  S f_4 = f_2 + (1 - i) f_4.

Expressing these column vectors in terms of the original basis provides a new basis {f_1, f_3, f_2, f_4} with respect to which the matrix of the operator has block diagonal Jordan form

    S' = ( 1+i  1    0    0   )
         ( 0    1+i  0    0   )
         ( 0    0    1-i  1   )
         ( 0    0    0    1-i )

The matrix A needed to accomplish this form by the similarity transformation S' = A S A^{-1} is found by solving for the e_i in terms of the f_j. The matrix S is summarized by its Segré characteristics: each of the eigenvalues 1 ± i has multiplicity 2 and a single eigenvector, giving a single 2 × 2 Jordan block for each.

Exercise: Verify that S' = A S A^{-1} in Example 4.6.

Problems

Problem 4.2  On a vector space V let S and T be two commuting operators, S T = T S.
(a) Show that if v is an eigenvector of T then so is S v.
(b) Show that a basis for V can be found such that the matrices of both S and T with respect to this basis are in upper triangular form.

Problem 4.3  For the operator T : V → V on a four-dimensional vector space given in Problem 3.10, show that no basis exists such that the matrix of T is diagonal. Find a basis in which the matrix of T has the Jordan form

    ( 0 0 0 0 )
    ( 0 0 0 0 )
    ( 0 0 λ 1 )
    ( 0 0 0 λ )

for some λ, and calculate the value of λ.

Problem 4.4  Let T be a given square matrix. Find the minimal annihilating polynomial and the characteristic polynomial of this matrix, its eigenvalues and eigenvectors, and find a basis that reduces it to its Jordan canonical form.

4.3 Linear ordinary differential equations

While no general techniques exist for solving arbitrary differential equations, systems of linear ordinary differential equations with constant coefficients are completely solvable with the help of the Jordan form. Such systems can be written in the form

    ẋ = A x,                                                    (4.27)

where x(t) is an n × 1 column vector and A an n × n matrix of real constants. Initially it is best to consider this as an equation in complex variables x, even though we may only be seeking real solutions.
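A computer algebra system can carry out the reduction to Jordan form directly. A sketch with SymPy (the 2 × 2 matrix is illustrative, not the matrix of Example 4.6); `jordan_form` returns a pair (P, J) with M = P J P^{-1}:

```python
import sympy as sp

# Illustrative 2x2 matrix with a repeated eigenvalue but a single eigenvector.
M = sp.Matrix([[2, 1],
               [-1, 0]])

P, J = M.jordan_form()   # returns (P, J) with M = P * J * P**-1
print(J)                 # a single 2x2 Jordan block with eigenvalue 1
```

The characteristic polynomial here is (λ - 1)², and since M - I has rank 1 there is only one eigenvector, so J must be a single 2 × 2 Jordan block.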
Greater detail on the following discussion, as well as applications to non-linear differential equations, can be found in [4, 5].

We try for a solution of (4.27) in exponential form,

    x = e^{tA} x_0,                                             (4.28)

where x_0 is an arbitrary constant vector, and the exponential of a matrix is defined by the convergent series

    e^{tA} = I + tA + (t²/2!) A² + (t³/3!) A³ + ⋯.              (4.29)

If S and T are two commuting matrices, S T = T S, it then follows, just as for real or complex scalars, that e^{S+T} = e^S e^T.

The initial value at t = 0 of the solution given by Eq. (4.28) is clearly x(0) = x_0. If P is any invertible n × n matrix, then y = P x satisfies the differential equation

    ẏ = A' y,  where  A' = P A P^{-1},

and gives rise to the solution y = e^{tA'} y_0 with y_0 = P x_0. If P is chosen such that A' has the Jordan form

    A' = ( λ_1 I + N_1              )
         (        λ_2 I + N_2      )
         (                  ⋱      )

where the N_i are nilpotent matrices, then, since λ_i I commutes with N_i for every i, the exponential term has the form

    e^{tA'} = ( e^{λ_1 t} e^{tN_1}                     )
              (            e^{λ_2 t} e^{tN_2}          )
              (                             ⋱          )

If N is a k × k Jordan matrix having 1's along the superdiagonal, as in (4.25), then N² has 1's in the next diagonal out, and each successive power of N pushes this diagonal of 1's one place further, until N^k vanishes altogether:

    N² = ( 0 0 1 0 ... 0 )              N^{k-1} = ( 0 ... 0 1 )
         ( 0 0 0 1 ... 0 )                        ( 0 ... 0 0 )
         ( ⋮           ⋮ ),   ...,                ( ⋮       ⋮ ),   N^k = 0.
         ( 0 0 0 0 ... 0 )                        ( 0 ... 0 0 )

Hence

    e^{tN} = ( 1  t  t²/2! ...  t^{k-1}/(k-1)! )
             ( 0  1  t     ...                 )
             ( ⋮                             ⋮ )
             ( 0  0  0     ...  1              )

and the solution (4.28) can be expressed as a linear superposition of solutions of the form

    x(t) = e^{λt} h_1                                           (4.30)

and

    x(t) = e^{λt} (h_2 + t h_1),  etc.,                         (4.31)

where h_1, h_2, ... are constant vectors. If A is a real matrix then the matrices P and A' are in general complex, but given real initial values x_0, the solution having these values at t = 0 is

    x(t) = e^{tA} x_0 = P^{-1} e^{tA'} P x_0,

which must necessarily be real by the existence and uniqueness theorem of ordinary differential equations.
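Because N^k = 0, the series for e^{tN} terminates, so e^{tA'} can be written down exactly. The sketch below builds e^{t(λI + N)} for a single 3 × 3 Jordan block from the finite sum, and checks by a finite-difference test that x(t) = e^{tA} x_0 satisfies ẋ = A x (all numbers are illustrative):

```python
import numpy as np

lam = 2.0
N = np.eye(3, k=1)                 # 3x3 nilpotent Jordan block, N^3 = 0
A = lam * np.eye(3) + N            # a single Jordan block: lam*I + N

def expA(t):
    # e^{t(lam I + N)} = e^{lam t}(I + t N + t^2 N^2 / 2!): the series terminates.
    return np.exp(lam * t) * (np.eye(3) + t * N + 0.5 * t**2 * (N @ N))

x0 = np.array([1.0, 1.0, 1.0])
t, h = 0.3, 1e-6

# Central-difference check that x(t) = e^{tA} x0 satisfies dx/dt = A x.
deriv = (expA(t + h) @ x0 - expA(t - h) @ x0) / (2 * h)
print(np.max(np.abs(deriv - A @ (expA(t) @ x0))))   # small
```

The polynomial factors t, t²/2! multiplying e^{λt} are exactly the secular terms seen in (4.30)-(4.31).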
Alternatively, for A real, both the real and imaginary parts of any complex solution x(t) are solutions of the linear differential equation (4.27); they may be separated by the identity

    e^{(a + ib)t} = e^{at}(cos bt + i sin bt).

Two-dimensional autonomous systems

Consider the special case of a planar (two-dimensional) system (4.27) having constant coefficients, known as an autonomous system,

    ẋ = A x,  where  A = ( a_{11}  a_{12} )
                         ( a_{21}  a_{22} )

Both the matrix A and the vector x are assumed to be real. A critical point x_0 refers to any constant solution x = x_0 of (4.27). The analysis of autonomous systems breaks up into a considerable number of cases and subcases. We consider the case where the matrix A is non-singular, for which the only critical point is x_0 = 0. Let λ_1 and λ_2 be the eigenvalues of A; the following possibilities arise:

(1) λ_1 ≠ λ_2, and both eigenvalues are real. In this case the eigenvectors h_1 and h_2 form a basis of ℝ² and the general solution is

    x = c_1 e^{λ_1 t} h_1 + c_2 e^{λ_2 t} h_2.

(1a) If λ_2 < λ_1 < 0, the critical point is called a stable node.
(1b) If λ_2 > λ_1 > 0, the critical point is called an unstable node.
(1c) If λ_2 < 0 < λ_1, the critical point is called a saddle point.

These three cases are shown in Fig. 4.1, after the axes of the vector space have been transformed to lie along the vectors h_1 and h_2.

(2) λ_2 = λ̄_1, where λ_1 is complex. The eigenvectors are then complex conjugates of each other, since A is a real matrix,

    A h_1 = λ_1 h_1  ⟹  A h̄_1 = λ̄_1 h̄_1,

and the arbitrary real solution is

    x = Re (c e^{λ_1 t} h_1).

If we set λ_1 = λ + iω, h_1 = h' - i h'' and c = R e^{iα}, where R > 0 and α are real quantities, then the solution x has the form

    x = R e^{λt} (h' cos(ωt + α) + h'' sin(ωt + α)).

(2a) λ < 0: this is a logarithmic spiral approaching the critical point x = 0 as t → ∞, and is called a stable focus.
(2b) λ > 0: again the solution is a logarithmic spiral, but arising from the critical point as t → -∞, called an unstable focus.
(2c) λ = 0: with respect to the axes h' and h'', the solution is a set of circles about the origin, and the critical point is called a vortex point.

These solutions are depicted in Fig. 4.2.
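The case analysis above amounts to a classification of the critical point x = 0 by the eigenvalues of A. A sketch (the function name and tolerances are our own, and only cases (1) and (2) above are covered):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the critical point x = 0 of dx/dt = A x for a real,
    non-singular 2x2 matrix A (cases (1) and (2) of the text only)."""
    l1, l2 = np.linalg.eigvals(A)
    if abs(np.imag(l1)) > tol:                 # complex conjugate pair
        if abs(np.real(l1)) < tol:
            return "vortex point"
        return "stable focus" if np.real(l1) < 0 else "unstable focus"
    a, b = sorted([np.real(l1), np.real(l2)])  # both eigenvalues real
    if b < 0:
        return "stable node"
    if a > 0:
        return "unstable node"
    return "saddle point"

print(classify(np.array([[-2.0, 0.0], [0.0, -1.0]])))   # stable node
print(classify(np.array([[0.0, -1.0], [1.0,  0.0]])))   # vortex point
```

The second example is the rotation generator of Example 4.3, whose purely imaginary eigenvalues ±i give closed circular orbits.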
Figure 4.1  (a) Stable node, (b) unstable node, (c) saddle point.

Figure 4.2  (a) Stable focus, (b) unstable focus, (c) vortex point.

Problems

Problem 4.5  Verify that (4.30) and (4.31) are solutions of ẋ = A x, provided A h_1 = λ h_1 and A h_2 = λ h_2 + h_1, where λ is an eigenvalue of A.

Problem 4.6  Discuss the remaining cases for two-dimensional autonomous systems: (a) λ_1 = λ_2 ≠ 0 with (i) two distinct eigenvectors h_1 and h_2, (ii) only one eigenvector; (b) A a singular matrix. Sketch the solutions in all instances.

Problem 4.7  Classify all three-dimensional autonomous systems of linear differential equations having constant coefficients.

4.4 Introduction to group representation theory

Groups appear most frequently in physics through their actions on vector spaces, known as representations. More specifically, a representation of any group G on a vector space V is a homomorphism T of G into the group of linear automorphisms of V,

    T : G → GL(V).

For every group element g we then have a corresponding linear transformation T(g) : V → V such that

    T(g) T(h) v = T(gh) v  for all g, h ∈ G, v ∈ V.

A representation of an abstract group is a way of providing a concrete model of the group as a group of linear transformations. Two representations T_1 : G → GL(V_1) and T_2 : G → GL(V_2) are said to be equivalent, written T_1 ∼ T_2, if there exists a vector space isomorphism A : V_1 → V_2 such that

    T_2(g) A = A T_1(g)  for all g ∈ G.                         (4.32)

If V_1 = V_2 = V then T_2(g) = A T_1(g) A^{-1}. For finite dimensional representations the matrices representing T_2 are then derived from those representing T_1 by a similarity transformation. In this case the two representations can be thought of as essentially identical, since they are related simply by a change of basis.

Any operator A, even if it is singular, which satisfies Eq. (4.32) is called an intertwining operator for the two representations. This condition is frequently depicted by a commutative diagram:

    V_1 --T_1(g)--> V_1
     |A              |A
     v               v
    V_2 --T_2(g)--> V_2

Irreducible representations

A subspace W of V is said to be invariant under the action of G, or G-invariant,
if it is invariant under each linear transformation T(g),

    T(g) W ⊆ W  for all g ∈ G.

For every g ∈ G the map T(g) restricted to W is surjective, T(g) W = W, since w = T(g)(T(g^{-1}) w) for every vector w ∈ W. Hence the restriction of T(g) to W is an automorphism of W and provides another representation of G, called a subrepresentation of G, denoted T_W : G → GL(W). The whole space V and the trivial subspace {0} are clearly G-invariant for any representation T on V. If these are the only invariant subspaces, the representation is said to be irreducible.

If W is an invariant subspace of a representation T on V, then a representation is induced on the quotient space V/W, defined by

    T_{V/W}(g)(v + W) = T(g) v + W  for all g ∈ G, v ∈ V.

Exercise: Verify that this definition is independent of the choice of representative from the coset v + W, and that it is indeed a representation.

Let V be finite dimensional, dim V = n, and let W' be a complementary subspace to W, such that V = W ⊕ W'. From Theorem 3.7 and Example 2.2 there exists a basis whose first r = dim W vectors span W, while the remaining n - r span W'. The matrices of the representing transformations with respect to such a basis will have the form

    T(g) = ( T_W(g)  S(g)  )
           ( 0       T'(g) )

The submatrices T'(g) form a representation on the subspace W' that is equivalent to the quotient space representation, but W' is not in general G-invariant, because of the existence of the off-block-diagonal matrices S(g). If S(g) ≠ 0 then it is essentially impossible to recover the original representation purely from the subrepresentations on W and W'. Matters are much improved, however, if the complementary subspace W' is G-invariant as well as W. In this case the representing matrices have block diagonal form in a basis adapted to W and W',
The subspace of vectors ofthe form (;) sera, tm compen natant aso er tt (2) niin ah mai 1) tly lo ‘the Jordan canonical form that no matrix A exists such that AT(1)A~ is diagonal. The representation Tis thus an example ofa representation thats reducible but not completely reducible Example 48 The symmetric group of permutations on tres objects, denoted Sy, has & representation T on a three-dimensional vector space V spaaned by vectors ey, €2 and es, defined by Tea)es = ex ln this bass the mati ofthe transformation Px) 8 T= [7/¢n)], where Tee. = Tae, Using eyelie notation for permutations the elements of 5 ae € = id, my = (123). m2 = 132. 13). Then Tee, =e, 50 that Te) isthe identity matrix | while Pome es, Tei)es =e, ote. The matrix representa- tions of all permutations of Sy are re -(: : ‘) w-(' ° ‘) w-(s ° ) oon oa 1 oie 001 w-(' 2) (: 1) on 100 Let v= ve be any vector, then Tin)» = Tits, andthe action ofthe max Tn) i eft lication on the eon vector 44 Introduction to group representation theory ‘We now find the invariant subspaces ofthis representation. In the fist place any one= Aimensionalinvariaat subspace must be spanned by a veetor v that isan eigenvector ofeach operator T(x). In matrices, Tim av > ‘whence milary 2 = av? andv! = oe, andsincey # Owemasthave that = |. Sincea #0 it follows that all three components», »? and v? are non-vanishing, similar argument ives Tess po lap, tape, Bape, fom which #? = I,andaf! = I since = av? = ape! The onl pairof complex numbers a and p satisfying these relations is 1, Hence v! = +2 = v3 and the only one ‘dimensional invariant subspace is that spanned by » = €) +69 +6 ‘We shall now show that this representation is completely reducible by choosing the basis fi “The inverse teansformation is etetes ftibeie on and the matrices representing the elements of S; are found by caleulating the effet of the ‘various transformations onthe basis elements J. For example, TOA= fh. 
Similarly,

    T(π_1) f_2 = T(π_1)(e_1 - e_2) = e_2 - e_3 = -(1/2) f_2 + (1/2) f_3,
    T(π_1) f_3 = T(π_1)(e_1 + e_2 - 2 e_3) = e_2 + e_3 - 2 e_1 = -(3/2) f_2 - (1/2) f_3.

Continuing in this way for all T(π), we arrive at the following matrices:

    T(e) = I,

    T(π_1) = ( 1  0     0    )   T(π_2) = ( 1  0     0    )
             ( 0  -1/2  -3/2 )            ( 0  -1/2  3/2  )
             ( 0  1/2   -1/2 ),           ( 0  -1/2  -1/2 ),

    T(σ_1) = ( 1  0   0 )   T(σ_2) = ( 1  0     0    )   T(σ_3) = ( 1  0    0    )
             ( 0  -1  0 )            ( 0  1/2   -3/2 )            ( 0  1/2  3/2  )
             ( 0  0   1 ),           ( 0  -1/2  -1/2 ),           ( 0  1/2  -1/2 ).

The two-dimensional subspace spanned by f_2 and f_3 is thus invariant under the action of S_3, and the representation T is completely reducible.

Exercise: Show that the representation T restricted to the subspace spanned by f_2 and f_3 is irreducible, by showing that it has no invariant one-dimensional subspaces.

Schur's lemma

The following key result and its corollary are useful in the classification of irreducible representations of groups.

Theorem 4.6 (Schur's lemma)  Let T_1 : G → GL(V_1) and T_2 : G → GL(V_2) be two irreducible representations of a group G, and A : V_1 → V_2 an intertwining operator such that

    T_2(g) A = A T_1(g)  for all g ∈ G.

Then either A = 0, or A is an isomorphism, in which case the two representations are equivalent, T_1 ∼ T_2.

Proof: Let v ∈ ker A ⊆ V_1. Then

    A T_1(g) v = T_2(g) A v = 0,

so that T_1(g) v ∈ ker A. Hence ker A is an invariant subspace of the representation T_1. As T_1 is an irreducible representation, either ker A = V_1, in which case A = 0, or ker A = {0}. In the latter case A is one-to-one. To show that it is an isomorphism it is only necessary to show that it is onto. This follows from the fact that im A ⊆ V_2 is an invariant subspace of the representation T_2:

    T_2(g)(im A) = T_2(g) A (V_1) = A T_1(g)(V_1) ⊆ A(V_1) = im A.

Since T_2 is an irreducible representation, either im A = {0} or im A = V_2. In the first case A = 0, while in the second A is onto. Schur's lemma is proved.  □

Corollary 4.7  Let T : G → GL(V) be an irreducible representation of a group G on a finite dimensional complex vector space V, and A : V → V an operator that commutes with all T(g); that is, A T(g) = T(g) A for all g ∈ G. Then A = α id_V for some complex scalar α.

Proof: Set V_1 = V_2 = V and T_1 = T_2 = T in Schur's lemma. Since A T(g) = T(g) A we have

    (A - α id_V) T(g) = T(g)(A - α id_V),

since id_V commutes with all linear operators on V.
By Theorem 4.6, either $A - \alpha\,\mathrm{id}_V$ is invertible or it is zero. Let $\alpha$ be an eigenvalue of $A$ — for operators on a complex vector space this is always possible. The operator $A - \alpha\,\mathrm{id}_V$ is not invertible, for if it is applied to a corresponding eigenvector the result is the zero vector. Hence $A - \alpha\,\mathrm{id}_V = 0$, which is the desired result. □

It should be observed that the proof of this corollary only holds for complex representations, since real matrices do not necessarily have any real eigenvalues.

Example 4.9 If $G$ is a finite abelian group then all its irreducible representations are one-dimensional. This follows from Corollary 4.7, for if $T: G \to GL(V)$ is any representation of $G$ then any $T(h)$ ($h \in G$) commutes with all $T(g)$ and is therefore a multiple of the identity, $T(h) = \alpha(h)\,\mathrm{id}_V$. Hence any vector $v \in V$ is an eigenvector of $T(h)$ for all $h \in G$ and spans an invariant one-dimensional subspace of $V$. Thus, if $\dim V > 1$ the representation $T$ cannot be irreducible.

References

[1] P. R. Halmos. Finite-dimensional Vector Spaces. New York, D. Van Nostrand Company, 1958.
[2] S. Hassani. Foundations of Mathematical Physics. Boston, Allyn and Bacon, 1991.
[3] F. B. Hildebrand. Methods of Applied Mathematics. Englewood Cliffs, N.J., Prentice-Hall, 1965.
[4] L. S. Pontryagin. Ordinary Differential Equations. New York, Addison-Wesley, 1962.
[5] D. A. Sánchez. Ordinary Differential Equations and Stability Theory: An Introduction. San Francisco, W. H. Freeman and Co., 1968.
[6] S. Lang. Algebra. Reading, Mass., Addison-Wesley, 1965.
[7] M. Hamermesh. Group Theory and its Applications to Physical Problems. Reading, Mass., Addison-Wesley, 1962.
[8] S. Sternberg. Group Theory and Physics. Cambridge, Cambridge University Press, 1994.

5 Inner product spaces

In matrix theory it is common to say that a matrix $\mathsf{S}$ is symmetric if it is equal to its transpose, $\mathsf{S}^T = \mathsf{S}$.
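Matrix symmetry in this sense is a basis-dependent property: under a similarity transformation $\mathsf{S}' = \mathsf{A}\mathsf{S}\mathsf{A}^{-1}$ it survives only when $\mathsf{A}$ is orthogonal, as the discussion below makes precise. A quick numerical sketch of this (using numpy; the random matrices are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

S = rng.normal(size=(3, 3))
S = S + S.T                                   # a symmetric matrix, S == S^T

A = rng.normal(size=(3, 3))                   # a generic (non-orthogonal) basis change
Sp = A @ S @ np.linalg.inv(A)                 # S' = A S A^{-1}
generic_keeps_symmetry = np.allclose(Sp, Sp.T)

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # an orthogonal matrix, Q^T = Q^{-1}
Sq = Q @ S @ Q.T                              # similarity by an orthogonal matrix
orthogonal_keeps_symmetry = np.allclose(Sq, Sq.T)
```

Here `generic_keeps_symmetry` comes out false while `orthogonal_keeps_symmetry` is true, in line with the observation that 'symmetric operator' is only meaningful relative to orthogonal basis transformations.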
This concept does not, however, transfer meaningfully to the matrix of a linear operator on a vector space unless some extra structure is imposed on that space.

For example, let $S: V \to V$ be an operator whose matrix $\mathsf{S} = [S^i{}_j]$ is symmetric with respect to a specific basis. Under a change of basis $e'_j = A^i{}_j e_i$ the transformed matrix is $\mathsf{S}' = \mathsf{A}\mathsf{S}\mathsf{A}^{-1}$, while for the transpose matrix

$$\mathsf{S}'^T = (\mathsf{A}\mathsf{S}\mathsf{A}^{-1})^T = (\mathsf{A}^{-1})^T\,\mathsf{S}\,\mathsf{A}^T.$$

Hence $\mathsf{S}'^T \neq \mathsf{S}'$ in general. We should hardly be surprised by this conclusion for, as commented at the beginning of Chapter 4, the component equation $S^i{}_j = S^j{}_i$ is an awkward-looking relation that violates the index conventions of Section 3.6.

Exercise: Show that $\mathsf{S}'$ is symmetric if and only if $\mathsf{S}$ commutes with $\mathsf{A}^T\mathsf{A}$,

$$\mathsf{S}\,\mathsf{A}^T\mathsf{A} = \mathsf{A}^T\mathsf{A}\,\mathsf{S}.$$

Thus the concept of a 'symmetric operator' is not invariant under general basis transformations, but it is invariant with respect to orthogonal basis transformations, $\mathsf{A}^T = \mathsf{A}^{-1}$.

If $V$ is a complex vector space it is similarly meaningless to talk of an operator $H: V \to V$ as being 'hermitian' if its matrix $\mathsf{H}$ with respect to some basis satisfies $\mathsf{H}^\dagger = \mathsf{H}$.

Exercise: Show that the hermitian property is not in general basis invariant, but is preserved under unitary transformations, $e'_j = U^k{}_j e_k$, where $\mathsf{U}^{-1} = \mathsf{U}^\dagger$.

In this chapter we shall see that symmetric and hermitian matrices play a different role in vector space theory, in that they represent inner products instead of operators [1–3]. Matrices representing inner products are best written with both indices on the subscript level, $\mathsf{G} = \mathsf{G}^T = [g_{ij}]$ and $\mathsf{H} = \mathsf{H}^\dagger = [h_{ij}]$. The requirements of symmetry, $g_{ij} = g_{ji}$, and hermiticity, $h_{ij} = \overline{h_{ji}}$, are not then at odds with the index conventions.

5.1 Real inner product spaces

Let $V$ be a real finite dimensional vector space with $\dim V = n$. A real inner product, often referred to simply as an inner product when there is no danger of confusion, on the
vector space $V$ is a map $V \times V \to \mathbb{R}$ that assigns a real number $u \cdot v \in \mathbb{R}$ to every pair of vectors $u, v \in V$, satisfying the following three conditions:

(RIP1) The map is symmetric in both arguments, $u \cdot v = v \cdot u$;
(RIP2) The distributive law holds, $u \cdot (av + bw) = a\,u \cdot v + b\,u \cdot w$;
(RIP3) If $u \cdot v = 0$ for all $v \in V$ then $u = 0$.

A real vector space $V$ together with an inner product defined on it is called a real inner product space. The inner product is also distributive on the first argument for, by conditions (RIP1) and (RIP2),

$$(au + bv)\cdot w = w\cdot(au + bv) = a\,w\cdot u + b\,w\cdot v = a\,u\cdot w + b\,v\cdot w.$$

We often refer to this linearity in both arguments by saying that the inner product is bilinear.

As a consequence of property (RIP3) the inner product is said to be non-singular, and it is often referred to as pseudo-Euclidean. Sometimes (RIP3) is replaced by the stronger condition

(RIP3′) $u \cdot u > 0$ for all vectors $u \neq 0$.

In this case the inner product is said to be positive definite or Euclidean, and a vector space with such an inner product defined on it is called a Euclidean vector space. Condition (RIP3′) implies condition (RIP3), for if there exists a non-zero vector $u$ such that $u \cdot v = 0$ for all $v \in V$ then $u \cdot u = 0$ (on setting $v = u$), which violates (RIP3′). Positive definiteness is therefore a stronger requirement than non-singularity.

Example 5.1 The space of ordinary 3-vectors $\mathbf{a}, \mathbf{b}$, etc. is a Euclidean vector space, often denoted $\mathbb{E}^3$, with respect to the usual scalar product

$$\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = |\mathbf{a}|\,|\mathbf{b}|\cos\theta,$$

where $|\mathbf{a}|$ is the length or magnitude of the vector $\mathbf{a}$ and $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$. Conditions (RIP1) and (RIP2) are simple to verify, while (RIP3′) follows from

$$\mathbf{a}\cdot\mathbf{a} = |\mathbf{a}|^2 = a_1^2 + a_2^2 + a_3^2 > 0 \quad \text{for } \mathbf{a} \neq 0.$$

This generalizes to a positive definite inner product on $\mathbb{R}^n$,

$$\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2 + \dots + a_n b_n,$$

the resulting Euclidean vector space being denoted $\mathbb{E}^n$.

The magnitude of a vector $u$ is defined as $u \cdot u$. Note that in a pseudo-Euclidean space the magnitude of a non-vanishing vector may be negative or zero, but in a Euclidean space it is always a positive quantity.
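The Euclidean scalar product of Example 5.1 can be exercised numerically; a small sketch using numpy (the particular vectors are arbitrary choices for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 0.0])

dot = a @ b                        # a·b = a1*b1 + a2*b2 + a3*b3
len_a = np.sqrt(a @ a)             # |a| = sqrt(a·a), the length of a
len_b = np.sqrt(b @ b)
cos_theta = dot / (len_a * len_b)  # from a·b = |a||b| cos(theta)

# bilinearity (RIP2): a·(2b + 3a) = 2 a·b + 3 a·a
assert np.isclose(a @ (2 * b + 3 * a), 2 * dot + 3 * (a @ a))
```

For these vectors $\mathbf{a}\cdot\mathbf{b} = 2$, $|\mathbf{a}| = 3$ and $\cos\theta = \tfrac13$, and the bilinearity check confirms (RIP2) for this sample.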
The length of a vector in a Euclidean space is defined to be the square root of the magnitude.

Two vectors $u$ and $v$ are said to be orthogonal if $u \cdot v = 0$. By requirement (RIP3) there is no non-zero vector $u$ that is orthogonal to every vector in $V$. A pseudo-Euclidean inner product may allow for the existence of self-orthogonal or null vectors $u \neq 0$ having zero magnitude, $u \cdot u = 0$, but this possibility is clearly ruled out in a Euclidean vector space. In Chapter 9 we shall see that Einstein's special theory of relativity postulates a pseudo-Euclidean structure for space-time, known as Minkowski space, in which null vectors play a significant role.

Components of a real inner product

Given a basis $\{e_1, \dots, e_n\}$ of an inner product space $V$, set

$$g_{ij} = e_i \cdot e_j = g_{ji}, \tag{5.1}$$

called the components of the inner product with respect to the basis $\{e_i\}$. The inner product is completely specified by these components, for if $u = u^i e_i$, $v = v^j e_j$ are any pair of vectors then, on using (RIP1) and (RIP2), we have

$$u \cdot v = (u^i e_i)\cdot(v^j e_j) = u^i v^j g_{ij}. \tag{5.2}$$

If we write the components of the inner product as a symmetric matrix $\mathsf{G} = [g_{ij}]$ and display the components of the vectors $u$ and $v$ in column form as $\mathsf{u} = [u^i]$ and $\mathsf{v} = [v^j]$, then the inner product can be written in matrix notation,

$$u \cdot v = \mathsf{u}^T\mathsf{G}\,\mathsf{v}.$$

Theorem 5.1 The matrix $\mathsf{G}$ is non-singular if and only if condition (RIP3) holds.

Proof: To prove the 'if' part, assume that $\mathsf{G}$ is singular, $\det[g_{ij}] = 0$. Then there exists a non-trivial solution $u^j$ to the linear system of equations

$$g_{ij}u^j = \sum_{j=1}^{n} g_{ij}u^j = 0.$$

The vector $u = u^j e_j$ is non-zero and orthogonal to all $v = v^i e_i$,

$$u \cdot v = g_{ij}u^j v^i = 0,$$

in contradiction to (RIP3). Conversely, assume the matrix $\mathsf{G}$ is non-singular and that there exists a vector $u$ violating (RIP3): $u \neq 0$ and $u \cdot v = 0$ for all $v \in V$. Then, by Eq. (5.2), we have $a_j v^j = 0$, where $a_j = g_{ij}u^i = g_{ji}u^i$, for arbitrary values of $v^j$. Hence $a_j = 0$ for $j = 1, \dots, n$. However, this implies a non-trivial solution to the set of linear equations $g_{ji}u^i = 0$, which is contrary to the non-singularity assumption, $\det[g_{ij}] \neq 0$. □

Orthonormal bases

Under a change of basis

$$e'_j = A^i{}_j e_i, \tag{5.3}$$
the components $g_{ij}$ transform by

$$g'_{ij} = e'_i \cdot e'_j = (A^k{}_i e_k)\cdot(A^l{}_j e_l) = A^k{}_i A^l{}_j\,g_{kl}, \tag{5.4}$$

where $g'_{ij} = e'_i \cdot e'_j$. In matrix notation this equation reads

$$\mathsf{G}' = \mathsf{A}^T\mathsf{G}\,\mathsf{A}. \tag{5.5}$$

Using $\mathsf{A}' = [A'^i{}_j] = \mathsf{A}^{-1}$, the matrix $\mathsf{G}$ can be written in terms of the transformed matrix as

$$\mathsf{G} = \mathsf{A}'^T\mathsf{G}'\,\mathsf{A}'. \tag{5.6}$$

An orthonormal basis $\{e_1, e_2, \dots, e_n\}$, for brevity written 'o.n. basis', consists of vectors all of magnitude $\pm 1$ and orthogonal to each other in pairs,

$$g_{ij} = e_i \cdot e_j = \eta_i\,\delta_{ij} \quad \text{where } \eta_i = \pm 1, \tag{5.7}$$

where the summation convention is temporarily suspended. We occasionally do this when a relation refers to a specific class of bases.

Theorem 5.2 In any finite dimensional real inner product space $(V, \cdot)$, with $\dim V = n$, there exists an orthonormal basis $\{e_1, e_2, \dots, e_n\}$ satisfying Eq. (5.7).

Proof: The method is by a procedure called Gram–Schmidt orthonormalization, an algorithmic process for constructing an o.n. basis starting from any arbitrary basis $\{u_1, u_2, \dots, u_n\}$. For Euclidean inner products the procedure is relatively straightforward, but the possibility of vectors having zero magnitudes in general pseudo-Euclidean spaces makes for added complications.

Begin by choosing a vector $u$ such that $u \cdot u \neq 0$. This is always possible because if $u \cdot u = 0$ for all $u \in V$, then for any pair of vectors $u, v$

$$0 = (u + v)\cdot(u + v) = u\cdot u + 2u\cdot v + v\cdot v = 2u\cdot v,$$

which contradicts the non-singularity condition (RIP3). For the first step of the Gram–Schmidt procedure we normalize this vector,

$$e_1 = \frac{u}{\sqrt{|u\cdot u|}},$$

and set $\eta_1 = e_1 \cdot e_1 = \pm 1$. In the Euclidean case any non-zero vector $u$ will do for this first step, and $e_1 \cdot e_1 = 1$.

Let $V_1$ be the subspace of $V$ consisting of vectors orthogonal to $e_1$,

$$V_1 = \{w \in V \mid w\cdot e_1 = 0\}.$$

This is a vector subspace, for if $w$ and $w'$ are orthogonal to $e_1$ then so is any linear combination of the form $w + aw'$,

$$(w + aw')\cdot e_1 = w\cdot e_1 + a\,w'\cdot e_1 = 0.$$

For any $v \in V$, the vector $v' = v - \eta_1(v\cdot e_1)e_1$ belongs to $V_1$, since $v'\cdot e_1 = v\cdot e_1 - \eta_1(v\cdot e_1)(e_1\cdot e_1) = 0$.
Furthermore, the decomposition $v = a e_1 + v'$, where $a = \eta_1(v\cdot e_1)$ and $v' \in V_1$, into a component parallel to $e_1$ and a vector orthogonal to $e_1$ is unique, for if $v = a'e_1 + v''$ where $v'' \in V_1$, then

$$(a - a')e_1 = v'' - v'.$$

Taking the inner product of both sides with $e_1$ gives firstly $a' = a$, and consequently $v'' = v'$.

The inner product restricted to $V_1$, as a map $V_1 \times V_1 \to \mathbb{R}$, is an inner product on the vector subspace $V_1$. Conditions (RIP1) and (RIP2) are trivially satisfied if the vectors $u$, $v$ and $w$ are restricted to vectors belonging to $V_1$. To show (RIP3), that this inner product is non-singular, let $v \in V_1$ be a vector such that $v\cdot v' = 0$ for all $v' \in V_1$. Then $v$ is orthogonal to every vector $w \in V$, for, by the decomposition $w = \eta_1(w\cdot e_1)e_1 + w''$ with $w'' \in V_1$, we have

$$v\cdot w = \eta_1(w\cdot e_1)(v\cdot e_1) + v\cdot w'' = 0,$$

as required. By condition (RIP3) for the inner product on $V$ this implies $v = 0$.

Repeating the above argument, there exists a vector $u' \in V_1$ such that $u'\cdot u' \neq 0$. Set

$$e_2 = \frac{u'}{\sqrt{|u'\cdot u'|}}$$

and $\eta_2 = e_2\cdot e_2 = \pm 1$. Clearly $e_2\cdot e_1 = 0$, since $e_2 \in V_1$. Defining the subspace $V_2$ of vectors orthogonal to $e_1$ and $e_2$, the above argument can be used again to show that the restriction of the inner product to $V_2$ satisfies (RIP1)–(RIP3). Continue this procedure until $n$ orthonormal vectors $\{e_1, e_2, \dots, e_n\}$ have been produced. These vectors must be linearly independent, for if there were a vanishing linear combination $a^i e_i = 0$, then performing the inner product of this equation with any $e_j$ gives $a^j = 0$. By Theorem 3.3 these vectors form a basis of $V$. At this stage of the orthonormalization process $V_n = \{0\}$, as there can be no vector that is orthogonal to every one of $e_1, \dots, e_n$, and the procedure comes to an end. □

The following theorem shows that, for a fixed inner product space, apart from the order in which they appear, the coefficients $\eta_i$ are the same in all orthonormal bases.

Theorem 5.3 (Sylvester) The number of $+$ and $-$ signs among the $\eta_i$ is independent of the choice of orthonormal basis.

Proof: Let $\{e_i\}$ and $\{f_j\}$ be two orthonormal bases such that

$$e_1\cdot e_1 = \dots = e_r\cdot e_r = 1,\qquad e_{r+1}\cdot e_{r+1} = \dots = e_n\cdot e_n = -1,$$
$$f_1\cdot f_1 = \dots = f_s\cdot f_s = 1,\qquad f_{s+1}\cdot f_{s+1} = \dots = f_n\cdot f_n = -1.$$

If $s > r$ then the vectors $f_1, \dots,$
$f_s, e_{r+1}, \dots, e_n$ are a set of $s + n - r > n = \dim V$ vectors, and there must be a non-trivial linear relation between them,

$$a^1 f_1 + \dots + a^s f_s + b^{r+1}e_{r+1} + \dots + b^n e_n = 0.$$

The $a^i$ cannot all vanish, since the $e_i$ form an l.i. set. Similarly, not all the $b^j$ will vanish. Setting

$$u = a^1 f_1 + \dots + a^s f_s = -(b^{r+1}e_{r+1} + \dots + b^n e_n) \neq 0,$$

we have the contradiction

$$u\cdot u = \sum_{i=1}^{s}(a^i)^2 > 0 \qquad \text{and} \qquad u\cdot u = -\sum_{j=r+1}^{n}(b^j)^2 \leq 0.$$

Hence $s \leq r$; interchanging the roles of the two bases gives $r \leq s$, and the two bases must have exactly the same number of $+$ and $-$ signs. □

If $r$ is the number of $+$ signs and $s$ the number of $-$ signs, then their difference $r - s$ is called the index of the inner product. Sylvester's theorem shows that it is an invariant of the inner product space, independent of the choice of o.n. basis. For a Euclidean inner product $r - s = n$, although the word 'Euclidean' is also applied to the negative definite case, $r - s = -n$. If $r - s = \pm(n - 2)$, the inner product is called Minkowskian.

Example 5.2 In a Euclidean space the Gram–Schmidt procedure is carried out as follows:

$$e_1 = \frac{u_1}{|u_1|},$$
$$f_2 = u_2 - (e_1\cdot u_2)e_1,\qquad e_2 = \frac{f_2}{|f_2|},$$
$$f_3 = u_3 - (e_1\cdot u_3)e_1 - (e_2\cdot u_3)e_2,\qquad e_3 = \frac{f_3}{|f_3|},$$
$$\dots$$

Since each vector has positive magnitude, all denominators $|f_i| > 0$ and each step is well-defined. Each vector $e_i$ is a unit vector and is orthogonal to each previous $e_j$ ($j < i$).

Example 5.3 Consider an inner product on a three-dimensional space having components

$$\mathsf{G} = [g_{ij}] = [u_i\cdot u_j] = \begin{pmatrix}0&1&1\\1&0&1\\1&1&0\end{pmatrix}$$

in a basis $\{u_1, u_2, u_3\}$. The procedure given in Example 5.2 obviously fails, as each basis vector is a null vector, $u_1\cdot u_1 = u_2\cdot u_2 = u_3\cdot u_3 = 0$, and cannot be normalized to a unit vector. Firstly, we find a vector $u$ such that $u\cdot u \neq 0$. Any vector of the form $u = u_1 + a u_2$ with $a \neq 0$ will do, since

$$u\cdot u = u_1\cdot u_1 + 2a\,u_1\cdot u_2 + a^2\,u_2\cdot u_2 = 2a.$$

Setting $a = 1$ gives $u = u_1 + u_2$ and $u\cdot u = 2$. The first step in the orthonormalization process is then

$$e_1 = \frac{u_1 + u_2}{\sqrt{2}},\qquad \eta_1 = e_1\cdot e_1 = 1.$$

There is of course a significant element of arbitrariness in this; the choice of $e_1$ is by no means unique. For example, choosing $a = \frac12$ leads to $e_1 = u_1 + \frac12 u_2$.

The subspace $V_1$ of vectors orthogonal to $e_1$ consists of vectors of the form $v = a u_1 + b u_2 + c u_3$ such that

$$0 = v\cdot(u_1 + u_2) = (a u_1 + b u_2 + c u_3)\cdot(u_1 + u_2) = a + b + 2c.$$

Setting, for example, $c = 0$ and $a = 1$ results in $v = u_1 - u_2$.
The magnitude of $v$ is $v\cdot v = -2$, and normalizing gives

$$e_2 = \frac{v}{\sqrt{|v\cdot v|}} = \frac{u_1 - u_2}{\sqrt{2}},\qquad \eta_2 = e_2\cdot e_2 = -1.$$

Finally, we need a vector $w = a u_1 + b u_2 + c u_3$ that is orthogonal to both $e_1$ and $e_2$. These two requirements imply that $a = b = -c$, and setting $c = -1$ results in $w = u_1 + u_2 - u_3$. Its magnitude is

$$w\cdot w = (u_1 + u_2 - u_3)\cdot(u_1 + u_2 - u_3) = -2,$$

and normalizing $w$ results in

$$e_3 = \frac{w}{\sqrt{|w\cdot w|}} = \frac{u_1 + u_2 - u_3}{\sqrt{2}},\qquad \eta_3 = e_3\cdot e_3 = -1.$$

The components of the inner product in this o.n. basis are therefore

$$\mathsf{G}' = [\eta_i\delta_{ij}] = [e_i\cdot e_j] = \begin{pmatrix}1&0&0\\0&-1&0\\0&0&-1\end{pmatrix}.$$

The index of this inner product is $r - s = 1 - 2 = -1$.

Any pair of orthonormal bases $\{e_i\}$ and $\{e'_j\}$ are connected by a basis transformation

$$e'_j = L^i{}_j e_i,$$

such that

$$g'_{ij} = e'_i\cdot e'_j = \eta_i\delta_{ij} = e_i\cdot e_j = g_{ij}.$$

From Eq. (5.4) we have

$$g_{ij} = L^k{}_i L^l{}_j\,g_{kl}, \tag{5.8}$$

or its matrix equivalent

$$\mathsf{G} = \mathsf{L}^T\mathsf{G}\,\mathsf{L}. \tag{5.9}$$

For a Euclidean metric $\mathsf{G} = \mathsf{I}$, and $\mathsf{L}$ is an orthogonal transformation, while for a Minkowskian metric with $n = 4$ the transformations are the Lorentz transformations discussed in Section 2.7. As was shown in Chapter 2, these transformations form the groups $O(n)$ and $O(3,1)$ respectively. The general pseudo-orthogonal inner product results in a group $O(p, q)$ of pseudo-orthogonal transformations of type $(p, q)$.

Problems

Problem 5.1 Let $(V, \cdot)$ be a real Euclidean inner product space and denote the length of a vector $x \in V$ by $|x| = \sqrt{x\cdot x}$. Show that two vectors $u$ and $v$ are orthogonal if and only if $|u + v|^2 = |u|^2 + |v|^2$.

Problem 5.2 Let $g_{ij} = u_i\cdot u_j$ be the components of a real inner product with respect to a basis $u_1, u_2, u_3$. Use Gram–Schmidt orthonormalization to find an orthonormal basis $e_1, e_2, e_3$, expressed in terms of the vectors $u_i$, and find the index of this inner product.

Problem 5.3 Let $\mathsf{G}$ be the symmetric matrix of components of a real inner product with respect to a basis $u_1, u_2, u_3$,

$$\mathsf{G} = [g_{ij}] = [u_i\cdot u_j].$$

Using Gram–Schmidt orthonormalization, find an orthonormal basis $e_1, e_2, e_3$ expressed in terms of the vectors $u_i$.

Problem 5.4 Define the concept of a symmetric operator $S: V \to V$ as one that satisfies

$$(Su)\cdot v = u\cdot(Sv) \quad \text{for all } u, v \in V.$$

Show that this results in the component equation

$$S^k{}_i\,g_{kj} = g_{ik}\,S^k{}_j,$$

equivalent to the matrix equation

$$\mathsf{S}^T\mathsf{G} = \mathsf{G}\,\mathsf{S}.$$

Show that for an orthonormal basis in a Euclidean space this results in the usual notion of symmetry, but fails for pseudo-Euclidean spaces.
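The pseudo-Euclidean Gram–Schmidt calculation of Example 5.3 can be checked numerically. A sketch in numpy (the variable names are mine): working entirely in components with respect to $\{u_1, u_2, u_3\}$, the matrix $\mathsf{E}$ whose rows hold the components of $e_1, e_2, e_3$ should satisfy $\mathsf{E}\mathsf{G}\mathsf{E}^T = \mathrm{diag}(1, -1, -1)$, consistent with the transformation law $\mathsf{G}' = \mathsf{A}^T\mathsf{G}\mathsf{A}$.

```python
import numpy as np

# Components of the inner product of Example 5.3: every basis vector is null.
G = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
ip = lambda x, y: x @ G @ y               # u·v = g_ij u^i v^j

# The orthonormal basis found in the text, in components w.r.t. {u_i}:
e1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)    # (u1 + u2)/sqrt(2)
e2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # (u1 - u2)/sqrt(2)
e3 = np.array([1.0, 1.0, -1.0]) / np.sqrt(2)   # (u1 + u2 - u3)/sqrt(2)

E = np.array([e1, e2, e3])                # rows are the new basis vectors
Gp = E @ G @ E.T                          # components in the new basis
assert np.allclose(Gp, np.diag([1.0, -1.0, -1.0]))
```

The signs on the diagonal give $r = 1$, $s = 2$, so the index is $r - s = -1$, as stated in the example.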
Problem 5.5 Let $V$ be a Minkowskian vector space of dimension $n$ with index $n - 2$, and let $k$ be a null vector ($k\cdot k = 0$) in $V$.
(a) Show that there is an orthonormal basis $e_1, \dots, e_n$ such that $k = e_1 + e_n$.
(b) Show that if $u$ is a timelike vector, defined as a vector with negative magnitude $u\cdot u < 0$, then $u$ is not orthogonal to $k$.
(c) Show that if $v$ is a null vector such that $v\cdot k = 0$, then $v \propto k$.
(d) If $n > 4$, which of these statements generalize to a space of index $n - 4$?

5.2 Complex inner product spaces

We now consider a complex vector space $V$, which in the first instance may be infinite dimensional. Vectors will continue to be denoted by lower case Roman letters such as $u$ and $v$, but complex scalars will be denoted by Greek letters such as $\alpha, \beta, \dots$ from the early part of the alphabet. The word inner product, or scalar product, on a complex vector space will be reserved for a map $V \times V \to \mathbb{C}$ that assigns to every pair of vectors $u, v \in V$ a complex scalar $\langle u|v\rangle$ satisfying

(IP1) $\langle u|v\rangle = \overline{\langle v|u\rangle}$;
(IP2) $\langle u|\alpha v + \beta w\rangle = \alpha\langle u|v\rangle + \beta\langle u|w\rangle$ for all complex numbers $\alpha, \beta$;
(IP3) $\langle u|u\rangle \geq 0$, and $\langle u|u\rangle = 0$ iff $u = 0$.

The condition (IP1) implies that $\langle u|u\rangle$ is always real, a necessary condition for (IP3) to make any sense. From (IP1) and (IP2),

$$\langle \alpha v + \beta w|u\rangle = \overline{\langle u|\alpha v + \beta w\rangle} = \overline{\alpha\langle u|v\rangle + \beta\langle u|w\rangle},$$

so that

$$\langle \alpha v + \beta w|u\rangle = \bar\alpha\langle v|u\rangle + \bar\beta\langle w|u\rangle. \tag{5.10}$$

This property is often described by saying that the inner product is antilinear with respect to the first argument.

A complex vector space with an inner product will simply be called an inner product space. If $V$ is finite dimensional it is often called a finite dimensional Hilbert space, but for infinite dimensional spaces the term Hilbert space only applies if the space is complete (see Chapter 13).

Mathematicians more commonly adopt the notation $(u, v)$ in place of our angular bracket notation, and demand linearity in the first argument, with antilinearity in the second.
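For the standard inner product on $\mathbb{C}^n$, $\langle u|v\rangle = \sum_i \bar u^i v^i$, the conditions (IP1)–(IP3) and the antilinearity (5.10) can be checked numerically; a sketch using numpy (`np.vdot` conjugates its first argument, matching the physicists' convention used here; the vectors are arbitrary choices):

```python
import numpy as np

u = np.array([1 + 1j, 2.0, 0.5j])
v = np.array([1j, 1 - 1j, 3.0])
alpha, beta = 2 - 3j, 0.5 + 1j

ip = lambda x, y: np.vdot(x, y)   # <x|y> = sum_i conj(x^i) y^i

# (IP1) hermitian symmetry
assert np.isclose(ip(u, v), np.conj(ip(v, u)))
# (IP2) linearity in the second argument
assert np.isclose(ip(u, alpha * v + beta * u),
                  alpha * ip(u, v) + beta * ip(u, u))
# (5.10) antilinearity in the first argument
assert np.isclose(ip(alpha * u, v), np.conj(alpha) * ip(u, v))
# (IP3) positive definiteness: <u|u> is real and positive for u != 0
assert ip(u, u).real > 0 and np.isclose(ip(u, u).imag, 0.0)
```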
Our conventions follow those most popular with physicists, and have their origin in Dirac's 'bra' and 'ket' terminology for quantum mechanics (see Chapter 14).

Example 5.4 On $\mathbb{C}^n$ set

$$\langle u|v\rangle = \overline{u^1}v^1 + \overline{u^2}v^2 + \dots + \overline{u^n}v^n,$$

where $u = (u^1, \dots, u^n)$ and $v = (v^1, \dots, v^n)$. Conditions (IP1)–(IP3) are easily verified. We shall see directly that this is the archetypal finite dimensional inner product space: every finite dimensional inner product space has a basis such that the inner product takes this form.

Example 5.5 A complex-valued function $\varphi: [0,1] \to \mathbb{C}$ is said to be continuous if both the real and imaginary parts of the function $\varphi(x) = f(x) + ig(x)$ are continuous. Let $C[0,1]$ be the set of continuous complex-valued functions on the real line interval $[0,1]$, and define an inner product

$$\langle\varphi|\psi\rangle = \int_0^1 \overline{\varphi(x)}\,\psi(x)\,dx.$$

Conditions (IP1) and (IP2) are simple to prove, but in order to show (IP3) it is necessary to show that

$$\int_0^1 \big(f(x)^2 + g(x)^2\big)\,dx = 0 \implies \varphi(x) = 0 \ \text{ for all } x \in [0,1].$$

If $f(a) \neq 0$ for some $0 \leq a \leq 1$ then, by continuity, there exists an interval $[a - \epsilon, a]$ or an interval $[a, a + \epsilon]$ on which $|f(x)| > \tfrac12|f(a)|$, so that

$$\int_0^1 f(x)^2\,dx \geq \tfrac14 f(a)^2\,\epsilon > 0.$$

Hence $f(x) = 0$ for all $x \in [0,1]$; the proof that $g(x) = 0$ is essentially identical.

Example 5.6 A complex-valued function $\varphi: \mathbb{R} \to \mathbb{C}$ is said to be square integrable if $|\varphi|^2$ is an integrable function on any closed interval of $\mathbb{R}$ and

$$\int_{-\infty}^{\infty}|\varphi(x)|^2\,dx < \infty.$$

The set $L^2(\mathbb{R})$ of square integrable complex-valued functions on the real line is a complex vector space, for if $\alpha$ is a complex constant and $\varphi$, $\psi$ a pair of square integrable functions, then

$$\int_{-\infty}^{\infty}|\alpha\varphi(x)|^2\,dx = |\alpha|^2\int_{-\infty}^{\infty}|\varphi(x)|^2\,dx < \infty$$

and

$$\int_{-\infty}^{\infty}|\varphi(x) + \psi(x)|^2\,dx \leq 2\int_{-\infty}^{\infty}|\varphi(x)|^2\,dx + 2\int_{-\infty}^{\infty}|\psi(x)|^2\,dx < \infty.$$

An operator $U: V \to V$ is called unitary if it preserves inner products, $\langle Uu|Uv\rangle = \langle u|v\rangle$ for all $u, v \in V$; setting $v = u$ shows that $Uu = 0$ implies $\langle u|u\rangle = 0$, whence $u = 0$. Hence every unitary operator $U$ is invertible.

With respect to an orthonormal basis $\{e_i\}$, the components of the linear transformation $U$, defined by $Ue_j = U^k{}_j e_k$, form a unitary matrix $\mathsf{U} = [U^k{}_j]$:

$$\delta_{ij} = \langle e_i|e_j\rangle = \langle Ue_i|Ue_j\rangle
= \overline{U^k{}_i}\,U^l{}_j\,\langle e_k|e_l\rangle
= \overline{U^k{}_i}\,U^l{}_j\,\delta_{kl}
= \sum_k \overline{U^k{}_i}\,U^k{}_j,$$

or, in terms of matrices,

$$\mathsf{I} = \mathsf{U}^\dagger\mathsf{U}.$$

If $\{e_i\}$ and $\{e'_j\}$ are any pair of orthonormal bases, then the linear operator $U$ defined by $Ue_i = e'_i$ is unitary, since for any pair of vectors $u = u^i e_i$ and $v = v^j e_j$,

$$\langle Uu|Uv\rangle = \overline{u^i}\,v^j$$
$\langle e'_i|e'_j\rangle = \overline{u^i}\,v^j\,\delta_{ij} = \langle u|v\rangle$. Thus all orthonormal bases are related by unitary transformations.

In the language of Section 3.6 this is the active view, wherein vectors are 'physically' moved about in the inner product space by the unitary transformation. In the related passive view, the change of basis is given by (5.13) — it is the components of vectors that are transformed, not the vectors themselves. If both bases are orthonormal, the components of the inner product, given by Eq. (5.16), are $h_{ij} = h'_{ij} = \delta_{ij}$, and setting $A^k{}_j = U^k{}_j$ in Eq. (5.18) implies the matrix $\mathsf{U} = [U^k{}_j]$ is unitary,

$$\mathsf{I} = \mathsf{H} = \mathsf{U}^\dagger\mathsf{H}'\mathsf{U} = \mathsf{U}^\dagger\mathsf{I}\,\mathsf{U} = \mathsf{U}^\dagger\mathsf{U}.$$

Thus, from both the active and passive viewpoints, orthonormal bases are related by unitary matrices.

Problems

Problem 5.6 Show that the norm defined by an inner product satisfies the parallelogram law

$$\|u + v\|^2 + \|u - v\|^2 = 2\|u\|^2 + 2\|v\|^2.$$

Problem 5.7 On a complex inner product space show that

$$4\langle u|v\rangle = \|u + v\|^2 - \|u - v\|^2 + i\|u - iv\|^2 - i\|u + iv\|^2.$$

Hence show that a linear transformation $U: V \to V$ is unitary if and only if it is norm preserving,

$$\|Uu\| = \|u\| \quad \text{for all } u \in V.$$

Problem 5.8 Show that a pair of vectors $u$ and $v$ in a complex inner product space are orthogonal iff $\|\alpha u + \beta v\|^2 = \|\alpha u\|^2 + \|\beta v\|^2$ for all $\alpha, \beta \in \mathbb{C}$. Find a non-orthogonal pair of vectors $u$ and $v$ in a complex inner product space such that $\|u + v\|^2 = \|u\|^2 + \|v\|^2$.

Problem 5.9 Show that the formula

$$\langle A|B\rangle = \operatorname{tr}(A^\dagger B)$$

defines an inner product on the vector space of $n \times n$ complex matrices $M_n(\mathbb{C})$.
(a) Calculate $\|I_n\|$, where $I_n$ is the $n \times n$ identity matrix.
(b) What characterizes matrices orthogonal to $I_n$?
(c) Show that all unitary $n \times n$ matrices $U$ have the same norm with respect to this inner product.

Problem 5.10
Let $S$ and $T$ be complex inner product spaces and let $U: S \to T$ be a linear map such that $\|Ux\| = \|x\|$. Prove that

$$\langle Ux|Uy\rangle = \langle x|y\rangle \quad \text{for all } x, y \in S.$$

Problem 5.11 Let $V$ be a complex vector space with an 'indefinite inner product', defined as an inner product that satisfies (IP1) and (IP2) but with (IP3) replaced by the non-singularity condition
(IP3′) $\langle u|v\rangle = 0$ for all $v \in V$ implies that $u = 0$.
(a) Show that similar results to Theorems 5.2 and 5.3 can be proved for such an indefinite inner product.
(b) If there are $p$ $+1$'s and $q$ $-1$'s along the diagonal, find the defining relations for the group of transformations $U(p, q)$ between orthonormal bases.

Problem 5.12 If $V$ is an inner product space, an operator $K: V \to V$ is called self-adjoint if

$$\langle u|Kv\rangle = \langle Ku|v\rangle$$

for any pair of vectors $u, v \in V$. Let $\{e_i\}$ be an arbitrary basis, having $\langle e_i|e_j\rangle = h_{ij}$, and set $Ke_j = K^k{}_j e_k$. Show that if $\mathsf{H} = [h_{ij}]$ and $\mathsf{K} = [K^k{}_j]$ then

$$\mathsf{K}^\dagger\mathsf{H} = \mathsf{H}\mathsf{K}.$$

If $\{e_i\}$ is an orthonormal basis, show that $\mathsf{K}$ is a hermitian matrix.

5.3 Representations of finite groups

If $G$ is a finite group, it turns out that every finite dimensional representation is equivalent to a representation by unitary transformations on an inner product space — known as a unitary representation. For, let $T$ be a representation on any finite dimensional vector space $V$, and let $\{e_i\}$ be any basis of $V$. Define an inner product $(u|v)$ on $V$ by setting $\{e_i\}$ to be an orthonormal set,

$$(u|v) = \sum_{i=1}^{n}\overline{u^i}\,v^i, \quad \text{where } u = u^i e_i,\ v = v^i e_i. \tag{5.21}$$

Of course there is no reason why the linear transformations $T(g)$ should be unitary with respect to this inner product, but they will be unitary with respect to the inner product $\langle u|v\rangle$ formed by 'averaging over the group',

$$\langle u|v\rangle = \frac{1}{|G|}\sum_{a\in G}\big(T(a)u\,\big|\,T(a)v\big), \tag{5.22}$$

where $|G|$ is the order of the group $G$ (the number of elements in $G$). This follows from

$$\langle T(g)u|T(g)v\rangle
= \frac{1}{|G|}\sum_{a\in G}\big(T(a)T(g)u\,\big|\,T(a)T(g)v\big)
= \frac{1}{|G|}\sum_{a\in G}\big(T(ag)u\,\big|\,T(ag)v\big)
= \frac{1}{|G|}\sum_{b\in G}\big(T(b)u\,\big|\,T(b)v\big)
= \langle u|v\rangle,$$

since, as $a$ ranges over the group $G$, so does $b = ag$ for any fixed $g \in G$.

Theorem 5.7 Any finite dimensional representation of a finite group $G$ is completely reducible into a direct sum of irreducible representations.
Proof: Using the above device we may assume that the representation is unitary on a finite dimensional inner product space $V$, satisfying

$$\langle T(g)u|T(g)w\rangle = \langle u|w\rangle \quad \text{for all } g \in G.$$

Let $W$ be a $G$-invariant subspace of $V$ and let $W^\perp$ be its orthogonal complement. By selecting an orthonormal basis such that the first $\dim W$ vectors belong to $W$, it follows that the remaining vectors of the basis span $W^\perp$. Hence $W$ and $W^\perp$ are orthogonal and complementary subspaces, $V = W \oplus W^\perp$. If $W$ is a $G$-invariant subspace then $W^\perp$ is also $G$-invariant, for if $u \in W^\perp$ then for any $w \in W$,

$$\langle T(g)u|w\rangle = \langle T(g)u|T(g)T(g^{-1})w\rangle = \langle u|T(g^{-1})w\rangle \quad \text{since } T(g) \text{ is unitary}$$
$$= 0 \quad \text{since } T(g^{-1})w \in W \text{ by the } G\text{-invariance of } W.$$

Hence $T(g)u \in W^\perp$.

Now pick $W$ to be the $G$-invariant subspace of $V$ of smallest dimension, not counting the trivial subspace $\{0\}$. The representation induced on $W$ must be irreducible, since it can have no proper $G$-invariant subspaces, as these would need to have smaller dimension. If $W = V$ then the representation $T$ is irreducible. If $W \neq V$, its orthogonal complement $W^\perp$ is either irreducible, in which case the proof is finished, or it has a non-trivial invariant subspace $W'$. Again pick the invariant subspace of smallest dimension and continue in this fashion until $V$ is a direct sum of irreducible subspaces,

$$V = W \oplus W' \oplus W'' \oplus \cdots$$

The representation $T$ decomposes into the corresponding subrepresentations $T_1, T_2, \dots$ □

Orthogonality relations

The components of the matrices of irreducible group representations satisfy a number of important orthogonality relationships, which are the cornerstone of the classification procedure of group representations. We will give just a few of these relations; others can be found in [4, 5].

Let $T_1$ and $T_2$ be irreducible representations of a finite group $G$ on complex vector spaces $V_1$ and $V_2$ respectively. If $\{e_i \mid i = 1, \dots, n_1 = \dim V_1\}$ and $\{f_a \mid a = 1, \dots, n_2 = \dim V_2\}$ are bases of these two vector spaces, we will write the representative matrices as $\mathsf{T}_1(g) = [T_1{}^j{}_i(g)]$ and $\mathsf{T}_2(g) = [T_2{}^b{}_a(g)]$, where

$$T_1(g)e_i = T_1{}^j{}_i(g)\,e_j \quad \text{and} \quad T_2(g)f_a = T_2{}^b{}_a(g)\,f_b.$$

If $A: V_1 \to V_2$ is any linear map, define its 'group average' $\bar A: V_1 \to V_2$ to be the linear map

$$\bar A = \frac{1}{|G|}\sum_{g\in G}T_2(g)\,A\,T_1(g^{-1}).$$

Then if $h$ is any element of the group $G$,

$$T_2(h)\,\bar A\,T_1(h^{-1})
= \frac{1}{|G|}\sum_{g\in G}T_2(hg)\,A\,T_1\big((hg)^{-1}\big)
= \frac{1}{|G|}\sum_{g'\in G}T_2(g')\,A\,T_1(g'^{-1}) = \bar A.$$

Hence $\bar A$ is an intertwining operator, $T_2(h)\bar A = \bar A\,T_1(h)$ for all $h \in G$, and by Schur's lemma, Theorem 4.6, if $T_1 \not\sim T_2$ then $\bar A = 0$.
On the other hand, from the corollary to Schur's lemma, 4.7, if $V_1 = V_2 = V$ and $T_1 = T_2 = T$ then $\bar A = c\,\mathrm{id}_V$. The matrix version of this equation with respect to any basis of $V$ is $\bar{\mathsf{A}} = c\,\mathsf{I}$, and taking the trace gives

$$c = \frac{1}{n}\operatorname{tr}\bar{\mathsf{A}}, \quad \text{where } n = \dim V.$$

However,

$$\operatorname{tr}\bar{\mathsf{A}}
= \frac{1}{|G|}\sum_{g\in G}\operatorname{tr}\big(\mathsf{T}(g)\,\mathsf{A}\,\mathsf{T}(g^{-1})\big)
= \frac{1}{|G|}\sum_{g\in G}\operatorname{tr}\big(\mathsf{T}(g^{-1})\,\mathsf{T}(g)\,\mathsf{A}\big)
= \operatorname{tr}\mathsf{A},$$

whence

$$c = \frac{1}{n}\operatorname{tr}\mathsf{A}.$$

If $T_1 \not\sim T_2$, expressing $\bar A$ and $A$ in terms of the bases $\{e_i\}$ and $\{f_a\}$, the above consequence of Schur's lemma can be written

$$\frac{1}{|G|}\sum_{g\in G}T_2{}^a{}_b(g)\,A^b{}_j\,T_1{}^j{}_i(g^{-1}) = 0.$$

As $A$ is an arbitrary operator, the matrix elements $A^b{}_j$ are arbitrary complex numbers, so that

$$\frac{1}{|G|}\sum_{g\in G}T_2{}^a{}_b(g)\,T_1{}^j{}_i(g^{-1}) = 0 \quad (T_1 \not\sim T_2). \tag{5.23}$$

If $T_1 = T_2 = T$ and $n = \dim V$ is the degree of the representation, we have

$$\frac{1}{|G|}\sum_{g\in G}T^i{}_j(g)\,A^j{}_l\,T^l{}_m(g^{-1}) = \frac{1}{n}\operatorname{tr}\mathsf{A}\,\delta^i{}_m = \frac{1}{n}A^j{}_l\,\delta^l{}_j\,\delta^i{}_m.$$

As the $A^j{}_l$ are arbitrary,

$$\frac{1}{|G|}\sum_{g\in G}T^i{}_j(g)\,T^l{}_m(g^{-1}) = \frac{1}{n}\,\delta^i{}_m\,\delta^l{}_j. \tag{5.24}$$

If $\langle\,|\,\rangle$ is the invariant inner product defined by a representation $T$ on a vector space $V$ by (5.22), and $\{e_i\}$ is any basis such that $\langle e_i|e_j\rangle = \delta_{ij}$, then the unitarity condition $\langle T(g)e_i|T(g)e_j\rangle = \langle e_i|e_j\rangle$ holds with all indices on the same level. In matrices,

$$\mathsf{T}(g)^\dagger\,\mathsf{T}(g) = \mathsf{I},$$

whence

$$\mathsf{T}(g^{-1}) = \mathsf{T}(g)^{-1} = \mathsf{T}(g)^\dagger,$$

or equivalently

$$T^j{}_i(g^{-1}) = \overline{T^i{}_j(g)}. \tag{5.25}$$

Substituting this relation for $T_1$ into (5.23), with all indices now lowered, gives

$$\frac{1}{|G|}\sum_{g\in G}T_{2\,ab}(g)\,\overline{T_{1\,ij}(g)} = 0 \quad (T_1 \not\sim T_2). \tag{5.26}$$

Similarly, if $T_1 = T_2 = T$, Eqs. (5.25) and (5.24) give

$$\frac{1}{|G|}\sum_{g\in G}T_{ij}(g)\,\overline{T_{ml}(g)} = \frac{1}{n}\,\delta_{im}\,\delta_{jl}. \tag{5.27}$$
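These orthogonality relations can be verified directly for $S_3$, using the three-dimensional permutation representation of Example 4.8 and its two-dimensional irreducible block. A numerical sketch (numpy; basis and conventions as in that example, with 0-based indices — the helper names are mine):

```python
import itertools
import numpy as np

# Permutation representation of S3: T(pi) e_i = e_{pi(i)}, so
# column i of the matrix is the basis vector e_{pi(i)}.
def perm_matrix(p):
    T = np.zeros((3, 3))
    for i, pi in enumerate(p):
        T[pi, i] = 1.0
    return T

reps = [perm_matrix(p) for p in itertools.permutations(range(3))]

# Change to the adapted basis f1 = e1+e2+e3, f2 = e1-e2, f3 = e1+e2-2e3.
C = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 1.0],
              [1.0, 0.0, -2.0]])                   # columns are f1, f2, f3
blocks = [np.linalg.inv(C) @ T @ C for T in reps]  # block diagonal 1 (+) 2
two = [B[1:, 1:] for B in blocks]                  # the 2-dim irreducible piece

# Eq. (5.23) with T1 the trivial representation, T2 the 2-dim irrep:
# (1/|G|) sum_g T2(g) * 1 = 0.
assert np.allclose(sum(two), 0.0)

# Eq. (5.24): (1/|G|) sum_g T^i_j(g) T^l_m(g^{-1}) = (1/n) d^i_m d^l_j.
n = 2
S = sum(np.einsum('ij,lm->ijlm', M, np.linalg.inv(M)) for M in two) / len(two)
expected = np.einsum('im,lj->ijlm', np.eye(n), np.eye(n)) / n
assert np.allclose(S, expected)
```

The first assertion is (5.23) applied to the trivial and the two-dimensional irreducible representations; the second is the full index relation (5.24) for the two-dimensional irrep, with $n = 2$ its degree.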
