
MAT 204: Definitions and Theorems (Midterm Prep)

Vector Space: contains a zero element; closed under addition (which is commutative and associative) and under scalar multiplication (which is associative and distributes over addition).
i.e. vector spaces include the set of polynomials of degree <= n, M(m,n) (the set of m by n matrices), R^n, etc.
** do not include sets that are not closed under addition or scalar multiplication, like the vectors of R^n with positive entries

Gaussian Elimination and such:
- Basically write out the augmented matrix (coefficients + constants)
Echelon form:
- the first nonzero entry in any nonzero row occurs to the right of the first nonzero entry in the row DIRECTLY above it
- all zero rows are grouped together at the bottom
Row-Reduced Echelon form:
- A is in echelon form
- all pivot entries are 1
- all entries above pivots are 0

Theorem (More Unknowns): if there are more unknowns than equations (the matrix is m x n with m < n), there are either no solutions or infinitely many. If the system is consistent, it has nonpivot (free) variables that can be set arbitrarily.
Theorem: a linear system AX = B is solvable IFF the constant vector B belongs to the column space of the coefficient matrix A.

Subspace: 1) the span of a set of vectors, or 2) a collection of elements that contains the zero element/vector and is closed under linear combinations (for x, y in the subspace and scalars a, b, the combination ax + by stays in the subspace). Higher-dimensional analogues of lines and planes through the origin.

The fundamental subspaces include:
- Row space: the subspace spanned by the rows of M; the nonzero rows of an echelon form of M are a basis
- Column space: the subspace spanned by the columns of A; the pivot columns of the ORIGINAL matrix are a basis. For an m x n matrix A it is a subspace of R^m, and it equals the set of vectors B for which AX = B is solvable (so the constant vector B of a solvable system AX = B is always in the column space of A).
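The elimination recipe above can be sketched in code. This is a minimal row-reduction routine to RREF, assuming numpy; the function name and the example matrix are made up for illustration:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Row-reduce A to reduced row echelon form.
    Returns the RREF and the list of pivot columns."""
    A = A.astype(float).copy()
    m, n = A.shape
    pivots = []
    row = 0
    for col in range(n):
        if row >= m:
            break
        # choose the largest entry in this column as the pivot (partial pivoting)
        p = row + int(np.argmax(np.abs(A[row:, col])))
        if abs(A[p, col]) < tol:
            continue  # no pivot here: this column is a free (nonpivot) variable
        A[[row, p]] = A[[p, row]]      # row swap
        A[row] /= A[row, col]          # scale so the pivot entry is 1
        for r in range(m):             # clear entries above AND below the pivot
            if r != row:
                A[r] -= A[r, col] * A[row]
        pivots.append(col)
        row += 1
    return A, pivots

R, piv = rref(np.array([[1., 2., 3.],
                        [2., 4., 7.],
                        [1., 2., 4.]]))
```

The pivot columns identify the pivot variables; the remaining columns are the arbitrary nonpivot variables from the More Unknowns theorem.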

Nullspace or Kernel: the subspace of solutions to the equation AX = 0; for a linear transformation, consists of all inputs that get mapped to the zero vector. It is a subspace of the domain space (R^n for an m x n matrix A), not of the column space.
- if the nullspace is {0}, a solution to AX = B, when one exists, is unique (and the matrix has full column rank)
- a basis is given by the spanning vectors in the general solution
- if the kernel of a linear transformation is {0}, then the map is injective (one-to-one)
- general solution to a system: a translate of the nullspace
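A small numpy sketch of "general solution = particular solution + nullspace"; the matrix and right-hand side are made up, and the nullspace basis is read off the SVD (a standard numerical technique, not one the notes prescribe):

```python
import numpy as np

# A is 2 x 3 with rank 2, so the nullspace is one-dimensional and the
# consistent system AX = B has infinitely many solutions.
A = np.array([[1., 2., 3.],
              [2., 4., 8.]])
B = np.array([6., 14.])

# One particular solution T (lstsq returns an exact solution here
# because the system is consistent).
T = np.linalg.lstsq(A, B, rcond=None)[0]

# Nullspace basis vector Z: the right singular vectors beyond rank(A).
_, s, Vt = np.linalg.svd(A)
Z = Vt[int(np.sum(s > 1e-12)):][0]

# Every X = T + t*Z also solves AX = B: the solution set is a
# translate of the nullspace.
```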

(t) Translation theorem: A is an m x n matrix; T is any solution to AX = B. The general solution is X = T + Z, where Z ranges over the nullspace; T can be any arbitrary particular solution to the system. The solution to AX = B is unique IFF the zero vector is the only element of the nullspace.
(t) The nullspace of an m x n matrix is a subspace of R^n.

def LINEAR INDEPENDENCE: a set of vectors { v1, v2, v3, v4 } is linearly dependent if there are scalars a, b, c, d, not all 0, with a*v1 + b*v2 + c*v3 + d*v4 = 0.
- a set of n linearly independent vectors is a basis for an n-dimensional space and spans the space
- i.e. a set is linearly dependent if one of its vectors can be written as a linear combination of the others
- i.e. any set containing the 0 vector is linearly dependent
- testing for linear independence: set up the dependency equation a1*A1 + a2*A2 + ... = 0, multiply out, and check whether it forces all the coefficients to be 0
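The dependency-equation test amounts to a rank check: the vectors are independent exactly when the matrix having them as columns has rank equal to the number of vectors. A sketch assuming numpy (the helper name and example vectors are made up):

```python
import numpy as np

def is_independent(vectors):
    # Stack the vectors as columns; they are independent iff rank == number
    # of vectors, i.e. the dependency equation c1*v1 + ... + ck*vk = 0
    # forces every ci to be 0.
    M = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(M) == M.shape[1])

indep = is_independent([np.array([1., 0.]), np.array([0., 1.])])  # standard basis
dep = is_independent([np.array([1., 2.]), np.array([2., 4.])])    # v2 = 2*v1
```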

RANK of a matrix A: can be found as 1) the number of pivot variables, 2) the number of nonzero rows in an echelon form of A, 3) the dimension of the row space = dimension of the column space.
Notes on rank: rank A = rank A^T; corollary: rank A <= m and rank A <= n.
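A quick numerical check of these rank facts, assuming numpy (the example matrix is made up):

```python
import numpy as np

# Row 2 is twice row 1, so only two rows are independent: rank 2,
# which is <= m = 3 and <= n = 3.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])
rank = int(np.linalg.matrix_rank(A))
rank_T = int(np.linalg.matrix_rank(A.T))  # dim row space = dim column space
```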

Notes on solvability and rank, given A the m x n matrix:
1) if rank A = n: the nullspace is {0}, so a solution, if one exists, is unique
2) if rank A = m: the system is always solvable
   a. there are m linearly independent vectors in the column space, so they are a basis and span all of R^m; every constant vector B is in the column space
3) if rank A = n and A is square, n by n:
   a. the system is always solvable and the solution is always unique
   b. null A = {0}
   c. by the way, A is invertible: full rank, n x n matrix
Rank-nullity theorem: rank A + dim null A = n (# columns)

LINEARITY PROPERTIES:
- transformation T: takes elements X from U, the domain, and produces T(X) in V (the target space)
- T: U -> V can be represented by an m x n matrix A (with U = R^n, V = R^m)
- if S is a subset of the domain of T, the image of S under T is the set of values T(X) for X in S
- linear transformations: transformations of linear combinations = linear combinations of transformations, T(aX + bY) = a*T(X) + b*T(Y)
- GIVEN a linear transformation T, its matrix has columns T(e1), ..., T(en), the images of the standard basis vectors

MATRIX MULTIPLICATION: really the process of forming linear combinations of the columns of the coefficient matrix
- leads to composition of transformations: (S o T)(X) = S(T(X))

INVERSES:
- if multiplying by B solves AX = Y, i.e. X = BY, then A(BY) = Y -> (AB)Y = Y for all Y, so AB = I
- A is invertible IFF A is nonsingular

LU Factorization:
- go through a series of EROs (no row swaps, though) to arrive at an upper triangular matrix [echelon form]; that's your U
- the inverse of your other matrix (built up from the identity) will be lower triangular; that's your L
- diagonal matrix: nonzero entries only along the main diagonal
- by the way, the L matrix is unipotent (1s on its diagonal)

If you need a row swap, permute A first (factor PA = LU instead).
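The LU recipe above (EROs with no row swaps, multipliers collected into a unipotent L) can be sketched directly; this assumes numpy, a square matrix with nonzero pivots, and a made-up example:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle-style LU: the ERO R_i -= m * R_j reduces A to upper
    triangular U; recording each multiplier m in L builds the unipotent
    lower triangular factor with A = L @ U."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)      # starts as the identity; stays unipotent
    U = A.copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]  # assumes nonzero pivot (else permute A first)
            L[i, j] = m
            U[i] -= m * U[j]       # zero out the entry below the pivot
    return L, U

L, U = lu_no_pivot(np.array([[2., 1.],
                             [4., 5.]]))
```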

BASES: a basis of an n-dimensional subspace is a set of linearly independent vectors that spans it; it has n elements. A basis is a minimal spanning set and a maximal linearly independent set.
1) finding the first basis from the RREF
   a. a second basis can come from any echelon form (EROs are fair game)

LINEAR TRANSFORMATIONS:
Def of linearity: T(X) + T(Y) = T(X + Y) and T(cX) = c*T(X)
A linear map from one vector space to another can be written as an m x n matrix A: T(X) = AX
Coordinate vector: for a basis {v1, ..., vn}, the coordinate vector of X is [x1, ..., xn] where X = x1*v1 + ... + xn*vn (just a linear combination of the bases)
- note this is dependent on the order of the bases
- uniqueness of the coordinates depends on linear independence of the basis

Facts: the product of two lower triangular matrices is lower triangular; similarly for upper triangular
- pf: the dot product of the ith row of A and the jth column of B is 0 for i < j

Theorem/prop: A is an m x n matrix, I is the m x m identity. [A | I] is row-equivalent to [U | B], where U is upper triangular/in echelon form and B is m x m. B is invertible and A = B^-1 * U; with L = B^-1, A = LU.
Getting this L matrix (elementary matrices, each the identity except for one element):
- linear combination of rows (add c times row j to row i): the identity with an extra entry c in position (i, j); its inverse has -c there instead
- scalar multiple of a row by c: the identity with c in one diagonal position; its inverse has 1/c there

Determinant:
Laplace/cofactor expansion: expanding along any row or column gives det(A); the cofactor signs alternate as (-1)^(i+j), a checkerboard starting with + in the top-left, which follows from the row-exchange property.
General formula, expanding along row i: det A = sum over j of (-1)^(i+j) * a_ij * det(A_ij), where A_ij is A with row i and column j deleted.
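The cofactor expansion translates directly into a recursive routine; a sketch assuming numpy, with the function name and test matrix made up (and only practical for small matrices, since the recursion is factorial-time):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by Laplace expansion along the first row:
    det A = sum_j (-1)^j * a_0j * det(minor_0j)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # minor: A with row 0 and column j deleted
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

d = det_cofactor(np.array([[1., 2.],
                           [3., 4.]]))  # 1*4 - 2*3 = -2
```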

Scalar property: if you multiply a row of a matrix by a scalar c, the determinant is multiplied by the same factor c (follows by Laplace expansion along that row).
Row Exchange Property: exchanging 2 rows of a matrix gives the negative of the determinant.
(The determinant of an upper triangular matrix is the product of the entries on the main diagonal.)
Additive Property: det [U+V, a2, a3] = det [U, a2, a3] + det [V, a2, a3] (additivity in one row at a time).
Reduction of determinants:
- any n x n matrix with 2 equal rows has determinant 0 (by row exchange: swapping the equal rows negates the determinant but leaves the matrix unchanged, so det = -det = 0)
- adding/subtracting a scalar multiple of one row to another does nothing to the determinant (decompose by the additive property into the original determinant plus one with 2 equal rows, which cancels out)
- factor out scalars one row at a time
** an n x n matrix is invertible IFF det A != 0, i.e. an echelon form has no zero rows (full rank n x n matrix)
proof: the determinant of any matrix is a nonzero multiple of the determinant of its RREF; if the RREF is not I, it has a zero row, so det = 0
** the determinant function is unique
Theorem 4 (product theorem): det(AB) = det(A) * det(B)
Theorem 5: for all n x n matrices, det A = det A^T
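These determinant properties are easy to spot-check numerically; a sketch assuming numpy, with random made-up matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

prod_lhs = np.linalg.det(A @ B)                 # det(AB)
prod_rhs = np.linalg.det(A) * np.linalg.det(B)  # det A * det B (Theorem 4)
transpose_det = np.linalg.det(A.T)              # equals det A (Theorem 5)
swap_det = np.linalg.det(A[[1, 0, 2]])          # row exchange negates det
```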

Change of basis: X = Pb * [X]_B, where the point matrix Pb is the n x n matrix whose columns are the basis vectors. Multiplying the coordinate vector [X]_B by Pb generates the point X. Pb is invertible; its inverse is Cb, the coordinate matrix; multiplying a point by Cb produces the coordinates of that point.
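A concrete round trip between points and coordinates, assuming numpy and a made-up basis of R^2:

```python
import numpy as np

# Point matrix Pb: basis vectors (1,1) and (1,-1) as columns.
Pb = np.column_stack([np.array([1., 1.]), np.array([1., -1.])])
Cb = np.linalg.inv(Pb)       # coordinate matrix: the inverse of Pb

coords = np.array([3., 2.])  # coordinate vector [X]_B
X = Pb @ coords              # the point: 3*(1,1) + 2*(1,-1) = (5, 1)
back = Cb @ X                # recover the coordinates (3, 2)
```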
