
Graduate Mathematics HT2016

Lectures 1-4
MATRIX ALGEBRA
Charles Nadeau
E-mail: Charles.Nadeau@economics.gu.se
Office: D-604
Office Hours: by appointment

Department of Economics, University of Gothenburg

Autumn 2016

General Course Information


Course moves quickly, so don't fall behind!
Course useful in all MsE/MsF courses (e.g. AMT, GE/AE/FE, Macro, DS, IO, I&S, etc.)
Course Website:
(i) documents in Document folder; check the folder every few days
(ii) 1-page course outline; lecture slides; problem sets; etc.
Textbook:
(i) Chiang/Wainwright (2005); copies available in bookstore
(ii) this week: Chapter 4 (sections 1-6); Chapter 5 (sections 1-6)
Lectures (13):
(i) topics: matrix algebra; differential/integral calculus; probability theory; etc.
(ii) lectures/textbook designed to help you complete all PS and the exam
Problem Sets (5):
(i) work in groups; ungraded; possible exam questions
(ii) 5 exercise classes where suggested answers are provided
(iii) this week: work PS #1; exercise class on Friday (10.00-13.00)
Written Exam:
(i) determines 100% of course grade; closed-book; write it alone
(ii) 100 points possible on exam; will look similar to exams last year
(iii) exam dates: September 28; November 12; mid-August 2017
Course Grade:
(i) PD (100-75 points); P (74-50 points); F (49-0 points); see me for ECTS grades

Modeling Systems in Matrix Form


Simple linear input-output model:

x1 - αx2 = d1
x2 - βx3 = d2
x3 - γx1 = 0
where (x1 , x2 , x3) denote endogenous variables, (α, β, γ) denote coefficients
and (d1 , d2) denote constant terms.

Matrix form:
Ax = d
where

A = |  1  -α   0 |       x = | x1 |       d = | d1 |
    |  0   1  -β |           | x2 |           | d2 |
    | -γ   0   1 |           | x3 |           |  0 |

Solving (when A is nonsingular) yields the unique equilibrium values
x1*, x2* and x3*.

Modeling Systems in Matrix Form (cont)

Suppose we have a system of m simultaneous, linear equations:

a11x1 + a12x2 + ... + a1nxn = d1
a21x1 + a22x2 + ... + a2nxn = d2
...
am1x1 + am2x2 + ... + amnxn = dm
This system can be written as:
Ax = d
where A = [aij] is the m x n matrix of coefficients, x = [x1 x2 ... xn]' is
the n x 1 column vector of unknowns and d = [d1 d2 ... dm]' is the m x 1
column vector of constant terms.

Modeling Systems in Matrix Form (cont)

Suppose:

2x1 - x2 = 0
-x1 + x2 = 4
Matrix form:
Ax = d
where

A = |  2  -1 |       x = | x1 |       d = | 0 |
    | -1   1 |           | x2 |           | 4 |

First equation:   [2 -1] [x1 x2]' = 2x1 - x2 = 0
Second equation:  [-1 1] [x1 x2]' = -x1 + x2 = 4
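
As a numerical illustration (a minimal NumPy sketch, not part of the slide's derivation), this 2x2 system can be set up and solved as follows:

```python
import numpy as np

# Coefficient matrix and constant vector for
#   2*x1 - 1*x2 = 0
#  -1*x1 + 1*x2 = 4
A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
d = np.array([0.0, 4.0])

# A is nonsingular here, so the system has a unique solution
x = np.linalg.solve(A, d)
print(x)  # [4. 8.]
```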

Vectors

Vectors are ordered arrays of elements (e.g. numbers, variables,
etc.):
Two-dimensional vectors:
x = [x1 x2]          (row vector)
y = [y1 y2]'         (column vector)
which can be represented as points or directed segments on
a 2-dimensional coordinate plane (see textbook discussion).

Vectors (cont)

Three-dimensional vectors:

x = [x1 x2 x3]       (row vector)
y = [y1 y2 y3]'      (column vector)
which can be represented as points or directed segments in
a 3-dimensional coordinate space (see textbook discussion).

Vectors (cont)

n-dimensional vectors:
x = [x1 x2 x3 ... xn]       (row vector)
y = [y1 y2 y3 ... yn]'      (column vector)
which cannot be represented in graphical form.
Equality of vectors:
y = z  iff  yi = zi for every i
x = z  iff  xj = zj for every j

Matrices

Matrices are rectangular arrays of elements consisting of m


rows and n columns:
m x n matrix:
A(m x n) = [aij]     (i = 1, ..., m;  j = 1, ..., n)

where aij denotes the element of the matrix at the intersection
of row i and column j, considered as a single entity (e.g. number,
parameter, variable).
Equality of matrices: A = B  iff  aij = bij for every i and j

Matrices (cont)
Special matrices:

Identity matrix:
I2 = | 1  0 |        I3 = | 1  0  0 |
     | 0  1 |             | 0  1  0 |
                          | 0  0  1 |
There also exist I4 , I5 , I6 , etc., all necessarily square
matrices.
Matrix counterpart to scalar 1: IA = AI = A

Matrices (cont)
Special matrices:

Null matrix:
0(2x2) = | 0  0 |        0(3x2) = | 0  0 |
         | 0  0 |                 | 0  0 |
                                  | 0  0 |
Null matrices are not necessarily square matrices.
Matrix counterpart to scalar 0: (A + 0) = (0 + A) = A
                                (A0) = (0A) = 0
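
As a quick numerical check of the two "counterpart" rules above, here is a minimal NumPy sketch; the matrix A is an arbitrary illustrative example:

```python
import numpy as np

A = np.array([[3.0, -1.0],
              [2.0,  5.0]])
I = np.eye(2)         # identity matrix I2
Z = np.zeros((2, 2))  # null matrix 0(2x2)

# Identity behaves like the scalar 1: IA = AI = A
assert np.allclose(I @ A, A) and np.allclose(A @ I, A)

# Null matrix behaves like the scalar 0: A + 0 = A, A0 = 0A = 0
assert np.allclose(A + Z, A)
assert np.allclose(A @ Z, Z) and np.allclose(Z @ A, Z)
```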

Vector Operations (conformability conditions)


Addition/Subtraction:
Vectors must have the same dimension
Scalar Multiplication (λ):
Multiply every element of the vector by the scalar (λ)
Multiplication:
Column dimension of the lead vector must be the same as the row
dimension of the lag vector
(e.g. 1x2 lead vector and 2x1 lag vector)
Division:
Impossible

Vector Addition and Subtraction


Suppose:

x = [8 9 -3]
z = [4 -5 6]
Then:
x + z = [12 4 3]
x - z = [4 14 -9]
Suppose y and z are column vectors of the same dimension.
Then y + z and y - z are again computed element by element.
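
The same element-by-element rule in NumPy, reusing the row vectors from the first example (illustrative sketch):

```python
import numpy as np

x = np.array([8, 9, -3])
z = np.array([4, -5, 6])

print(x + z)  # [12  4  3]
print(x - z)  # [ 4 14 -9]
```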

Vector Addition and Subtraction (cont)

Generically:

x = [x1 x2 x3]
z = [z1 z2 z3]
x + z = [x1+z1   x2+z2   x3+z3]
x - z = [x1-z1   x2-z2   x3-z3]
and analogously for column vectors y and z: y + z and y - z are formed
element by element.

Scalar Multiplication (λ)

Suppose:
x = [x1 x2 x3]          (row vector)
y = [y1 y2 y3]'         (column vector)
Then:
λx = [λx1 λx2 λx3]
λy = [λy1 λy2 λy3]'

Vector Multiplication (cont)


Generically:

x = [x1 x2 x3]
y = [y1 y2 y3]'
Then:
xy = [x1 x2 x3][y1 y2 y3]' = x1y1 + x2y2 + x3y3
Requirement: (row vector) x (column vector) with an equal number of
elements
Result: scalar (calculated as the inner product of the vector elements)
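
A minimal NumPy sketch of this inner-product rule; the numerical values of x and y below are arbitrary illustrations:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

# x1*y1 + x2*y2 + x3*y3 = 4 + 10 + 18 = 32
print(np.dot(x, y))  # 32
print(x @ y)         # 32, the same inner product
```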

Linear Dependence

A system of vectors (x, y, z) is linearly dependent if some non-trivial linear
combination of them is equal to the null vector:
λ1x + λ2y + λ3z = 0
where the scalars (λ1 , λ2 , λ3) are not all equal to zero.
Examples:
(1) Two vectors (x, y) are linearly dependent if they are proportional: x = λy
(2) Three vectors (x, y, z) are linearly dependent if one of the vectors is a linear
combination of the other two vectors: z = λ1x + λ2y
Textbook concepts: vector space, spanning vectors

Linear Dependence (cont)


Example:
v1 = [5 12]

v2 = [10 24]

Row vectors v1 and v2 are linearly dependent because


2v1 = v2
2v1 - v2 = 0
Example:
v1 , v2 and v3 are three column vectors.

Column vectors v1 , v2 and v3 are linearly dependent because

3v1 - 2v2 = v3
3v1 - 2v2 - v3 = 0
Question: Is it even possible for v1 , v2 and v3 to be linearly independent?
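
One practical way to test linear dependence numerically (not covered in the slides) is to stack the vectors into a matrix and compare its rank with the number of vectors; the sketch below reuses v1 = [5 12] and v2 = [10 24]:

```python
import numpy as np

v1 = np.array([5, 12])
v2 = np.array([10, 24])

V = np.vstack([v1, v2])          # 2 x 2 matrix with the vectors as rows
rank = np.linalg.matrix_rank(V)

# rank < number of vectors  =>  the vectors are linearly dependent
print(rank)                      # 1, so v1 and v2 are linearly dependent
print(np.allclose(2 * v1, v2))   # True: 2*v1 = v2
```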

Matrix Operations (conformability conditions)


Addition/Subtraction:
Matrices must have the same dimension
Commutative Law: A + B = B + A
Associative Law: (A + B) + C = A + (B + C)
Scalar Multiplication (λ):
Multiply every element of the matrix by the scalar (λ)
Commutative Law: λA = Aλ
Multiplication:
Column dimension of the lead matrix must be the same as the row dimension of the lag matrix
Associative Law: (AB)C = A(BC) = ABC
Distributive Law: A(B + C) = AB + AC
Division:
Impossible

Matrix Addition

Requirement: conformable for addition (i.e. same dimension)


A = [aij]     and     B = [bij]     (both m x n)

A + B = [aij + bij]

Rules:
(1) Commutative:          A + B = B + A
(2) Associative:          (A + B) + C = A + (B + C)
(3) Null matrix (0):      A + 0 = A
(4) Opposite matrix (-A): A + (-A) = 0

Matrix Addition/Subtraction

Example:
A + B is formed by adding the corresponding elements of A and B.
Example:
A - B is formed by subtracting the corresponding elements of B from A.
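
A NumPy illustration of element-by-element addition and subtraction; the matrices A and B below are arbitrary stand-ins for the slide's numerical examples:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 0],
              [-1, 2]])

print(A + B)  # [[ 6  2]
              #  [ 2  6]]
print(A - B)  # [[-4  2]
              #  [ 4  2]]
```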

Scalar Multiplication

λA = [λaij]     (multiply every element of A by the scalar λ)

Rules:
(1) λA = Aλ
(2) λ(A + B) = λA + λB
(3) (λ1 + λ2)A = λ1A + λ2A

Matrix Multiplication
Conformability Requirement:

A(m x k) B(k x n) = C(m x n)

Multiplication Rule:
cij = ai1b1j + ai2b2j + ... + aikbkj
i.e. multiply the ith row of A with the jth column of B and calculate the
inner product of the two
Rules: (1) AB ≠ BA (i.e. not commutative in general)
(2) Associative: (AB)C = A(BC) (assuming conformability holds)
(3) Distributive:
A(B+C) = AB + AC
(B+C)A = BA + CA (assuming conformability holds)
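
An illustrative NumPy sketch of the multiplication rule, including a check that AB and BA generally differ; the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1]])
B = np.array([[3, 0],
              [1, 4]])

print(A @ B)  # [[5 8]
              #  [1 4]]
print(B @ A)  # [[3 6]
              #  [1 6]]   (different, so AB != BA here)

# Conformability: a (2x3) lead matrix times a (3x2) lag matrix gives a (2x2) product
C = np.arange(6).reshape(2, 3) @ np.arange(6).reshape(3, 2)
print(C.shape)  # (2, 2)
```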

Transpose of a Matrix
In order to transpose a matrix you interchange the rows and columns
of the matrix:

Suppose:
A = [aij]      (m x n)
Then:
A' = [aji]     (n x m)
Properties: (A')' = A
(A + B)' = A' + B'
(AB)' = B'A'
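
A quick NumPy verification of the three transpose properties, using arbitrary conformable matrices:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])      # 3 x 2
C = np.ones((2, 3))         # 2 x 3, conformable with A for addition

assert np.array_equal(A.T.T, A)              # (A')' = A
assert np.array_equal((A + C).T, A.T + C.T)  # (A + B)' = A' + B'
assert np.array_equal((A @ B).T, B.T @ A.T)  # (AB)' = B'A'
```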

Transpose of a Matrix (cont)

Suppose A is a given matrix.
Then A' is obtained by writing each row of A as the corresponding column of A'.

Matrix Inversion
The inverse of square matrix A is denoted as
A-1
and is defined such that
AA-1 = A-1 A = I
Note:
(1) unlike a transpose matrix, an inverse matrix may not exist.
(2) if square matrix A has an inverse, then A is called nonsingular.
(3) if square matrix A has no inverse, then A is called singular.
(4) A and A-1 will always have the same dimension.
(5) if A-1 exists, then it is unique.

Issue: What does singularity or nonsingularity imply about a square matrix?

Matrix Inversion (cont)

Suppose matrix A and matrix B are both non-singular n x n matrices.
Properties:
(A-1)-1 = A
(AB)-1 = B-1 A-1
(A')-1 = (A-1)'
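
A NumPy sketch verifying these inversion properties for two arbitrary nonsingular 2x2 matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])

A_inv = np.linalg.inv(A)

assert np.allclose(np.linalg.inv(A_inv), A)     # (A^-1)^-1 = A
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ A_inv)    # (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A.T), A_inv.T) # (A')^-1 = (A^-1)'
```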

Determinant of a Matrix

The determinant of a square matrix is a unique scalar computed in a


particular way:
Suppose:
A = | a11  a12 |
    | a21  a22 |
Then:
det A = |A| = a11a22 - a12a21     (a scalar)

where the value of |A| is a crucial test for the singularity of the matrix
(discussed earlier) and is useful for calculating A-1 (also discussed
earlier).
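
A small numerical check of the 2x2 formula against NumPy's built-in determinant; the matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])

det_by_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]  # a11*a22 - a12*a21 = 12 - 2 = 10
print(det_by_formula)    # 10.0
print(np.linalg.det(A))  # 10.0 (up to floating-point rounding)
```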

Determinant of a Matrix (cont)


Suppose:

A = | a11  a12  a13 |
    | a21  a22  a23 |
    | a31  a32  a33 |
Then:
|A| = a11 |M11| - a12 |M12| + a13 |M13|
    = a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)

where the subdeterminant |Mij| (obtained by deleting row i and column j of A)
denotes the minor of element aij. The cofactor |Cij| of element aij is defined as:
|Cij| = (-1)^(i+j) |Mij|
Note: In the example above, |A| was calculated via expansion along the
first row.

Determinant of a Matrix (cont)

The value of the determinant of a matrix can be calculated by


expansion using any row i or column j in the matrix; thus, the formula
for the value of the determinant of matrix A can be written as either:

det A = |A| = Σj aij |Cij|     (j = 1, ..., n)
if expanding using row i or, equivalently
det A = |A| = Σi aij |Cij|     (i = 1, ..., n)
if expanding using column j.
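
The expansion formula can be coded directly as a short recursive function. The sketch below is illustrative only (it expands along the first row and is far slower than np.linalg.det for large matrices):

```python
import numpy as np

def det_by_expansion(A):
    """Determinant via Laplace expansion along the first row (illustrative)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M(1,j): delete row 1 and column j (0-based indices here)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * det_by_expansion(minor)  # (-1)^(1+j) with 1-based j
        total += A[0, j] * cofactor
    return total

A = np.array([[1, 2, 3],
              [0, 4, 5],
              [1, 0, 6]])
print(det_by_expansion(A))  # 22.0
print(np.linalg.det(A))     # ~22.0
```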

Determinant of a Matrix (cont)


Properties:

(1) If matrix A is singular (i.e. linearly dependent rows/columns) then: |A| = 0
(2) |A'| = |A|
(3) If matrix A has a 0 row or column, then |A| = 0
(4) Interchanging rows (or columns) in matrix A does not affect the value of |A|
except for its sign
(5) Multiplying one row (or column) of matrix A by the scalar λ yields: λ|A|
Textbook concepts: rank, full rank

Determinant of a Matrix (cont)

Given:
A = | 2  0 |
    | 0  2 |
Calculating:
|A| = (2)(2) - (0)(0) = +4
Given:
A = | -1    5 |
    |  2  -10 |
Calculating:
|A| = (-1)(-10) - (5)(2) = 0
Note: -2v1 = v2 (i.e. linearly dependent rows)
Note: -5v1 = v2 (i.e. linearly dependent columns)

Determinant of a Matrix (cont)

Suppose two square matrices A and B are given.

Question: Is there an easy way to calculate determinants here?

Verify:
|A| = +72          |B| = -81

Calculating the Inverse of a Matrix

Given square matrix A, the inverse of A (i.e. A-1) is calculated as follows:

Steps:
(1) calculate |A|                (assume: |A| ≠ 0)
(2) construct the cofactor matrix (C) for A
(3) take the transpose of C      (i.e. C' = adj A, the adjoint of A)

Calculating:
A-1 = (1/|A|) adj A
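
An illustrative NumPy implementation of these three steps (determinant, cofactor matrix, adjoint), checked against NumPy's built-in inverse:

```python
import numpy as np

def inverse_via_adjoint(A):
    """A^-1 = (1/|A|) * adj A, where adj A is the transposed cofactor matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det_A = np.linalg.det(A)
    assert not np.isclose(det_A, 0.0), "A is singular, so no inverse exists"

    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor of a_ij
    adj_A = C.T                                               # adjoint of A
    return adj_A / det_A

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(inverse_via_adjoint(A))  # [[ 0.6 -0.7]
                               #  [-0.2  0.4]]
print(np.linalg.inv(A))        # same result
```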

Constructing a Cofactor Matrix

To construct a cofactor matrix (C) for matrix A we simply replace each
element in matrix A (aij) with its cofactor |Cij|, where
|Cij| = (-1)^(i+j) |Mij|
Example:
C = [ |Cij| ]      (the matrix of cofactors of A)
adj A = C'         (the transpose of the cofactor matrix)

Constructing a Cofactor Matrix (cont)

Example:
For a given square matrix A, form C element by element and then take adj A = C'.

Calculating the Inverse of a Matrix (cont)

Example:
A = I3 = | 1  0  0 |
         | 0  1  0 |
         | 0  0  1 |
Calculating:
|A| = (1)(1) + (0)(0) + (0)(0) = +1
C = I3   and   adj A = C' = I3
Thus:
A-1 = (1/|A|) adj A = I3

Note: The identity matrix (I) is its own inverse (i.e. I = I-1).

Solving a Square System

Suppose:
Ax = d
where the coefficient matrix A is an n x n square matrix and d is an n x 1
column vector where d ≠ 0. If |A| ≠ 0 (i.e. matrix A is non-singular and
thus full rank) then A-1 exists and the unique solution to the system is:
x = A-1 d
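
Written out in NumPy, reusing the 2x2 system from the earlier example (an illustrative sketch):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
d = np.array([0.0, 4.0])

assert not np.isclose(np.linalg.det(A), 0.0)  # A is nonsingular, so A^-1 exists
x = np.linalg.inv(A) @ d                      # x = A^-1 d
print(x)                                      # [4. 8.]
```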

Solutions of a Square System

Suppose:
Ax = d
where the coefficient matrix A is an n x n square matrix (i.e. Ax = d
denotes a square system with n equations and n unknown variables).
|A| = 0:
(1) if d = 0 then many solutions exist, including: x = 0
(2) if d ≠ 0 then either no solution exists or many solutions exist
|A| ≠ 0:
(1) if d = 0 then a unique solution exists where: x = 0
(2) if d ≠ 0 then a unique solution exists where: x = A-1 d

Solutions of a Non-Square System


Suppose:
Ax = d
where the coefficient matrix A is an m x n non-square matrix (i.e. Ax = d
denotes a non-square system with m equations and n unknown
variables).
m < n: underdetermined system; generally many solutions possible
m > n: overdetermined system; generally no solution possible

Cramer's Rule
Given:
Ax = d
As we have already seen, if |A| ≠ 0 and d ≠ 0 then a unique solution exists
where
x = A-1 d
According to Cramer's rule
xj = |Aj| / |A|      (j = 1, ..., n)
where Aj denotes the matrix A with the jth column replaced by the
column vector d.
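
An illustrative NumPy implementation of Cramer's rule, checked against the earlier 2x2 example:

```python
import numpy as np

def cramers_rule(A, d):
    """Solve Ax = d by Cramer's rule (assumes |A| != 0); illustrative only."""
    A = np.asarray(A, dtype=float)
    d = np.asarray(d, dtype=float)
    det_A = np.linalg.det(A)
    x = np.empty(len(d))
    for j in range(len(d)):
        A_j = A.copy()
        A_j[:, j] = d                       # replace the jth column of A with d
        x[j] = np.linalg.det(A_j) / det_A   # xj = |Aj| / |A|
    return x

A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
d = np.array([0.0, 4.0])
print(cramers_rule(A, d))     # [4. 8.]
print(np.linalg.solve(A, d))  # same solution
```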
