
LINEAR ALGEBRA

Meeting 1

Reny Rian Marliana, S.Si., M.Stat.


Assessment Components
The final grade is determined by the following components:

• Structured assignments (TST): 20%
• Independent assignments (MDR): 20%
• Midterm exam (UTS): 20%
• Final exam (UAS): 40%

Letter Grade Conversion:

A: >= 80
B: 70-79.99
C: 55-69.99
D: 45-54.99
E: < 45
Final Grade Rules
1. There are no make-up quizzes.
2. Make-up exams for the midterm (UTS) and final (UAS) are only
given for illness, supported by a doctor's note.
3. Assignments or exercises may be submitted at most one week late;
late submissions receive at most 80% of the maximum score.
4. Academic dishonesty, such as cheating or collaborating during
quizzes, the midterm, or the final, results in a score of 0.
Class Rules
1. You must join the GOOGLE CLASSROOM using the code wdt6x6i.
2. Lateness of up to 15 minutes from the scheduled start is tolerated.
3. Handphones/smartphones, tablets, and other personal electronic
devices MUST be set to silent.
4. No chatting during class.
5. Do not leave trash in the classroom.
6. Bring a calculator.
Assignment Submission Rules
Every submitted answer MUST include:
1. The assignment date
2. Full name
3. Student ID (NIM)
4. Class
5. Study program
References
• Andreescu, Titu. Essential Linear Algebra with Applications.
  Springer Science+Business Media New York, 2014.
• Anton, Howard & Chris Rorres. Elementary Linear Algebra,
  Applications Version, 9th Edition. John Wiley & Sons, Inc., 2005.
• Elden, Lars. Numerical Linear Algebra and Applications in Data
  Mining, Preliminary Version. Department of Mathematics,
  Linkoping University, 2005.
• Blyth, T.S. & E.F. Robertson. Basic Linear Algebra, 2nd Edition.
  Springer, 2002.
Topics
• Matrices
• Matrix inverses
• Determinants
• Systems of linear equations
• Homogeneous systems of linear equations
• Vectors
• Vector spaces
• Linear transformations
THANK YOU
MATRICES
MEETING 2

RENY RIAN MARLIANA


DEFINISI
Howard Anton (2005) :
A matrix is a rectangular array of numbers. The numbers in the array are
called the entries in the matrix.

In general:
A matrix is a rectangular arrangement of numbers enclosed in brackets.
NOTATION
T.S. Blyth (2002) :
If m and n are positive integers then by a matrix of size m by n, or an m x n matrix,
we shall mean a rectangular array consisting of mn numbers in a boxed display
consisting of m rows and n columns.

In general:
• A matrix is arranged in rows and columns; a matrix with m rows and n
columns is said to have size (order) m x n.
• Matrices are usually denoted by capital letters, for example A, B, C, and
so on.
• A matrix written together with its size is denoted Am x n.
GENERAL FORM
• The general form of an m x n matrix Amn is

          [ a11  a12  ...  a1n ]
          [ a21  a22  ...  a2n ]
  Amn  =  [  .    .    .    .  ]
          [ am1  am2  ...  amn ]

• Examples:

  A1x5 = [ 10  9  8  7  6 ]

         [ 1  2 ]               [  1   2   3   4 ]            [ 1 ]
  A3x2 = [ 3  4 ]       A4x4 =  [  5   6   7   8 ]    A3x1 =  [ 2 ]
         [ 5  6 ]               [  9  10  11  12 ]            [ 3 ]
                                [ 13  14  15  16 ]
EXAMPLE
• Data on the number of hours students spend studying in one week:

• Written as a matrix:

          [ 2  3  2  4  1  4  2 ]
  A3x7 =  [ 0  3  1  4  3  2  2 ]
          [ 4  1  3  1  0  0  2 ]
EXAMPLE

The image of a single digit can be stored as a matrix of size 16 x 16.
EXAMPLE
MATRIX OPERATIONS

• Definition (Howard Anton, 2005):

Two matrices are defined to be equal if they have the same size and their
corresponding entries are equal.

In matrix notation, if A = [aij] and B = [bij] have the same size, then
A = B if and only if (A)ij = (B)ij, or equivalently aij = bij, for all i and j.
EXAMPLE
MATRIX OPERATIONS
Howard Anton (2005) :
If A and B are matrices of the same size, then the sum A+B is the matrix
obtained by adding the entries of B to the corresponding entries of A, and the
difference A-B is the matrix obtained by subtracting the entries of B from the
corresponding entries of A. Matrices of different sizes cannot be added or
subtracted.
  [ a  b ]   [ e  f ]   [ a+e  b+f ]        [ a  b ]   [ e  f ]   [ a-e  b-f ]
  [ c  d ] + [ g  h ] = [ c+g  d+h ]        [ c  d ] - [ g  h ] = [ c-g  d-h ]
MATRIX OPERATIONS
Howard Anton (2005) :
If A is any matrix and c is any scalar, then the product cA is the matrix
obtained by multiplying each entry of the matrix A by c. The matrix cA is said
to be a scalar multiple of A.
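These operations are easy to check numerically. The following is a minimal sketch assuming NumPy is available; the matrices are arbitrary examples, not taken from the slides.

import numpy as np

# Arbitrary example matrices of the same size (not from the slides).
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A + B)                   # entrywise sum:        [[ 6  8] [10 12]]
print(A - B)                   # entrywise difference: [[-4 -4] [-4 -4]]
print(3 * A)                   # scalar multiple 3A:   [[ 3  6] [ 9 12]]
print(np.array_equal(A, B))    # equality needs same size and same entries -> False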
MATRIX OPERATIONS
T.S. Blyth (2002) :
MATRIX OPERATIONS
Howard Anton (2005) :
If A is an m x r matrix and B is an r x n matrix, then the product AB is the
m x n matrix whose entries are determined as follows. To find the entry in row
i and column j of AB , single out row i from the matrix A and column j from
the matrix B. Multiply the corresponding entries from the row and column
together, and then add up the resulting products.
MATRIX OPERATIONS
Matrix multiplication:

  A2x3 = [ a  b  c ]
         [ d  e  f ]

         [ k  n ]
  B3x2 = [ l  o ]
         [ m  p ]

  AB2x2 = [ ak + bl + cm   an + bo + cp ]
          [ dk + el + fm   dn + eo + fp ]
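A numerical instance of the row-times-column rule above; a NumPy sketch with arbitrary data, not part of the original slides.

import numpy as np

# A is 2 x 3 and B is 3 x 2, so the product AB is 2 x 2 (arbitrary data).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 10],
              [8, 11],
              [9, 12]])

# Entry (i, j) of AB is row i of A times column j of B, summed.
print(A @ B)     # [[ 50  68]
                 #  [122 167]]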
MATRIX OPERATIONS
Howard Anton (2005) :

PROPERTIES OF MATRIX OPERATIONS

A+B = B+A                    A+(B+C) = (A+B)+C
A(BC) = (AB)C                A(B+C) = AB+AC
(B+C)A = BA+CA               A(B-C) = AB-AC
(B-C)A = BA-CA               a(B+C) = aB+aC
(a+b)C = aC+bC               (a-b)C = aC-bC
a(bC) = (ab)C                a(BC) = (aB)C = B(aC)
TRANSPOSE OF A MATRIX
Howard Anton (2005) :
If A is any m x n matrix, then the transpose of A, denoted by AT, is defined to
be the n x m matrix that results from interchanging the rows and columns of A;
that is, the first column of AT is the first row of A, the second column of AT is
the second row of A, and so forth.

  A2x3 = [ a  b  c ]
         [ d  e  f ]

          [ a  d ]
  AT3x2 = [ b  e ]
          [ c  f ]
TRANSPOSE OF A MATRIX
• Howard Anton (2005) :

Properties of the Transpose

(AT)T = A
(A+B)T = AT + BT and (A-B)T = AT - BT
(kA)T = kAT
(AB)T = BT AT
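These properties can be spot-checked numerically. The sketch below assumes NumPy and uses arbitrary matrices; it is not part of the original slides.

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])       # 3 x 2

print(A.T)                                     # the 3 x 2 transpose of A
print(np.array_equal((A @ B).T, B.T @ A.T))    # (AB)^T = B^T A^T -> True
print(np.array_equal((3 * A).T, 3 * A.T))      # (kA)^T = k A^T   -> True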
TRACE OF A MATRIX
Howard Anton (2005) :
If A is a square matrix, then the trace of A, denoted by tr(A) , is defined to be the sum
of the entries on the main diagonal of A. The trace of A is undefined if A is not a
square matrix.
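A small numerical illustration of the trace, assuming NumPy; the matrix is an arbitrary example, not from the slides.

import numpy as np

A = np.array([[2, 7, 0],
              [1, 5, 3],
              [4, 6, 9]])

print(np.trace(A))            # 2 + 5 + 9 = 16
print(A.diagonal().sum())     # same value, summing the main diagonal directly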
Special types of matrices:
• Square matrix
• Diagonal matrix
• Zero matrix
• Triangular matrix
• Identity matrix
SQUARE MATRIX
• A matrix whose number of rows equals its number of columns, m = n.
• An n x n square matrix has n diagonal entries, namely a11, a22, ..., ann.

  A22 = [ a11  a12 ]
        [ a21  a22 ]

        [ a11  a12  a13 ]
  A33 = [ a21  a22  a23 ]
        [ a31  a32  a33 ]
DIAGONAL MATRIX
• A matrix whose off-diagonal entries are all zero.
• The diagonal entries themselves are not required to be nonzero.

  A = [ 1  0 ]      B = [ 1  0 ]      C = [ 0  0 ]
      [ 0  3 ]          [ 0  0 ]          [ 0  0 ]
ZERO MATRIX
• A matrix all of whose entries are zero.

  A = [ 0  0 ]      B = [ 0  0  0 ]
      [ 0  0 ]          [ 0  0  0 ]
                        [ 0  0  0 ]
ZERO MATRIX
Howard Anton (2005) : Properties of the Zero Matrix
A+0 = 0+A = A
A-A = 0
0-A = -A
0A = 0
A0 = 0
TRIANGULAR MATRIX
• A square matrix whose entries below or above the diagonal are all zero.
• If the entries below the diagonal are zero, the matrix is called upper triangular.
• If the entries above the diagonal are zero, the matrix is called lower triangular.

  A = [ 1  0  1 ]      B = [ 0  0  0 ]      C = [ 1  0  0 ]
      [ 0  0  2 ]          [ 1  0  0 ]          [ 0  1  0 ]
      [ 0  0  1 ]          [ 0  1  0 ]          [ 0  0  2 ]
IDENTITY MATRIX
• A diagonal matrix whose diagonal entries are all 1.
• AIn = A and InA = A

      [ 1  0  0 ]          [ 1  0  0  0 ]
  I = [ 0  1  0 ]      I = [ 0  1  0  0 ]
      [ 0  0  1 ]          [ 0  0  1  0 ]
                           [ 0  0  0  1 ]
APPLICATION EXAMPLE
A certain company manufactures three products P, Q, R in four different plants
W, X, Y, Z. The various costs (in whole dollars) involved in producing a single
item of a product are given in the table :
             P   Q   R
  Material   1   2   1
  Labor      3   2   2
  Overheads  2   1   2

The numbers of items produced in one month at the four locations are as follows:

       W      X      Y      Z
  P    2000   3000   1500   4000
  Q    1000   500    500    1000
  R    2000   2000   2500   2500
APPLICATION EXAMPLE
The problem is to find the total monthly costs of material, labor and overheads at
each factory.
Let C be the "cost" matrix formed by the first set of data and let N be the matrix
formed by the second set of data. Thus

      [ 1  2  1 ]            [ 2000  3000  1500  4000 ]
  C = [ 3  2  2 ]        N = [ 1000   500   500  1000 ]
      [ 2  1  2 ]            [ 2000  2000  2500  2500 ]

The total costs per month at factory W are
  material:  1(2000) + 2(1000) + 1(2000) =  6000 dollars
  labor:     3(2000) + 2(1000) + 2(2000) = 12000 dollars
  overheads: 2(2000) + 1(1000) + 2(2000) =  9000 dollars

These are the entries of the first column of the product CN.
APPLICATION EXAMPLE
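The full product CN, shown on the original slide as an image, can be reproduced with NumPy; this sketch is not part of the slides.

import numpy as np

# Cost per item (rows: material, labor, overheads; columns: products P, Q, R).
C = np.array([[1, 2, 1],
              [3, 2, 2],
              [2, 1, 2]])

# Monthly production (rows: products P, Q, R; columns: plants W, X, Y, Z).
N = np.array([[2000, 3000, 1500, 4000],
              [1000,  500,  500, 1000],
              [2000, 2000, 2500, 2500]])

# Entry (i, j) of CN is the monthly cost of type i at plant j.
print(C @ N)
# [[ 6000  6000  5000  8500]
#  [12000 14000 10500 19000]
#  [ 9000 10500  8500 14000]]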
THANK YOU
MEETING 3

RENY RIAN MARLIANA


Howard Anton (2005) :
If A is a square matrix, and if a matrix B of the same size can
be found such that AB=BA=I , then A is said to be
invertible and B is called an inverse of A. If no such matrix B
can be found, then A is said to be singular.
Howard Anton (2005) :
▪ If B and C are both inverses of the matrix A, then B=C.
▪ If A and B are invertible matrices of the same size, then AB is invertible and
(AB)-1 = B-1 A-1
▪ The matrix

      A = [ a  b ]
          [ c  d ]

  is invertible if ad-bc ≠ 0, in which case the inverse is given by the formula

      A-1 = 1/(ad-bc) [  d  -b ]
                      [ -c   a ]
Howard Anton (2005) :
To find the inverse of an invertible matrix A, we must find a
sequence of elementary row operations that reduces A to
the identity and then perform this same sequence of
operations on In to obtain A-1 .
Adjoin the identity matrix to the right side of A, thereby
producing a matrix of the form [A|I], apply row operations
to this matrix until the left side is reduced to I; these
operations will convert the right side to A-1, so the final
matrix will have the form [I|A-1].
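The slides describe the [A | I] procedure only in words. The following Python/NumPy sketch is one possible implementation of that procedure (it is not from the slides; a pivot search is added so the row reduction also works when a leading entry happens to be zero).

import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                      # form [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # pick a usable pivot row
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]              # interchange two rows
        M[col] /= M[col, col]                          # scale the row to get a leading 1
        for r in range(n):                             # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                                    # right half is now A^-1

A = [[1, 2],
     [3, 4]]
print(inverse_gauss_jordan(A))    # [[-2.   1. ] [ 1.5 -0.5]]
print(np.linalg.inv(A))           # same result from NumPy's built-in inverse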
Find the inverse of
▪ Consider the matrix
DETERMINANT OF A MATRIX
Meeting 4
RENY RIAN MARLIANA
Definition
Howard Anton (2005) :
The 2 x 2 matrix

  A = [ a  b ]
      [ c  d ]

is invertible if ad-bc ≠ 0.

The expression ad-bc occurs so frequently in mathematics that it has a name: it is called
the determinant of the matrix A and is denoted by the symbol det(A).
Minors and Cofactors
Finding Minors and Cofactors
Let

The minor of entry a11

The cofactor of a11 is


Finding Minors and Cofactors
the minor of entry a32 is

The cofactor of a32 is


Finding Minors and Cofactors
The cofactor and the minor of an entry aij differ only in sign; that is, Cij = ±Mij,
where the sign is (-1)^(i+j), so Cij = (-1)^(i+j) Mij.
Cofactor Expansions
The definition of a 3 x 3 determinant in terms of minors and cofactors,

  det(A) = a11 C11 + a12 C12 + a13 C13

shows that the determinant of A can be computed by multiplying the entries in the first row of
A by their corresponding cofactors and adding the resulting products. More generally, we define
the determinant of an n x n matrix to be

  det(A) = a11 C11 + a12 C12 + ... + a1n C1n

This method of evaluating det(A) is called cofactor expansion along the first row of A.
Cofactor Expansion Along the First Row
If A is a 3 x 3 matrix, then its determinant is

  det(A) = a11 C11 + a12 C12 + a13 C13
         = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31)
Expansions by Cofactors
The determinant of an n x n matrix can be computed by multiplying the entries in any
row (or column) by their cofactors and adding the resulting products; that is, for each 1≤ i
≤ n and 1≤ j ≤ n.

  det(A) = a1j C1j + a2j C2j + ... + anj Cnj
  (cofactor expansion along the jth column)

and

  det(A) = ai1 Ci1 + ai2 Ci2 + ... + ain Cin
  (cofactor expansion along the ith row)
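Cofactor expansion translates directly into a short recursive routine. The sketch below assumes NumPy and expands along the first row; it is an illustration, not part of the original slides, and is only practical for small matrices.

import numpy as np

def minor_matrix(A, i, j):
    """The submatrix of A obtained by deleting row i and column j."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (small matrices only)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    return sum((-1) ** j * A[0, j] * det_cofactor(minor_matrix(A, 0, j))
               for j in range(n))

A = [[ 3,  1,  0],
     [-2, -4,  3],
     [ 5,  4, -2]]
print(det_cofactor(A))       # -1.0
print(np.linalg.det(A))      # same value (up to rounding)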
Cofactor Expansion Along the First Column
Evaluate det(A) by cofactor expansion along the first column of A .
Adjoint of a Matrix
If A is any n x n matrix and Cij is the cofactor of aij, then the matrix

  [ C11  C12  ...  C1n ]
  [ C21  C22  ...  C2n ]
  [  .    .         .  ]
  [ Cn1  Cn2  ...  Cnn ]

is called the matrix of cofactors of A. The transpose of this matrix is called the adjoint
of A and is denoted by adj(A).
Inverse of a Matrix Using Its Adjoint
If A is an invertible matrix, then

  A-1 = (1/det(A)) adj(A)
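A minimal sketch of the adjoint-based inverse, assuming NumPy; the matrix is an arbitrary invertible example, not from the slides.

import numpy as np

def adjoint(A):
    """adj(A): the transpose of the matrix of cofactors of A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M_ij = np.delete(np.delete(A, i, axis=0), j, axis=1)   # minor of a_ij
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M_ij)        # cofactor C_ij
    return C.T

A = np.array([[2, 0, 1],
              [3, 0, 0],
              [5, 1, 1]])
inv_via_adjoint = adjoint(A) / np.linalg.det(A)          # A^-1 = adj(A) / det(A)
print(np.allclose(inv_via_adjoint, np.linalg.inv(A)))    # True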
Determinant of a Triangular Matrix

If A is an n x n triangular matrix (upper triangular, lower triangular, or
diagonal), then det(A) is the product of the entries on the main diagonal
of the matrix; that is,

  det(A) = a11 · a22 · a33 · ... · ann
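A quick numerical check of this rule, assuming NumPy; the upper triangular matrix is an arbitrary example, not from the slides.

import numpy as np

A = np.array([[2,  7, -3,  8],
              [0, -3,  7,  5],
              [0,  0,  6,  7],
              [0,  0,  0, -1]])

print(np.prod(np.diag(A)))    # 2 * (-3) * 6 * (-1) = 36
print(np.linalg.det(A))       # 36.0 (up to rounding)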


Determinant of an Upper Triangular Matrix
EVALUATING DETERMINANTS BY ROW REDUCTION

Howard Anton (2005) :

◦ Let A be a square matrix. If A has a row of zeroes or a column of


zeroes, then det(A) = 0
◦ Let A be a square matrix. Then det(A)=det(AT)
◦ If A is a square matrix with two proportional rows or two
proportional columns, then det(A) = 0
Elementary Row Operations
Howard Anton (2005) :
Let A be an n x n matrix.
◦ If B is obtained from A by multiplying a single row by a scalar k, then det(B) = k det(A).
◦ If B is obtained from A by interchanging two rows, then det(B) = -det(A).
◦ If B is obtained from A by adding a multiple of one row to another row, then det(B) = det(A).
THANK YOU
SYSTEMS OF LINEAR EQUATIONS
Meeting 6

Reny Rian Marliana


Definition
Howard Anton (2005) :
A linear equation in the n variables x1, x2, ..., xn is one that can
be expressed in the form

  a1 x1 + a2 x2 + ... + an xn = b

where a1, a2, ..., an and b are real constants.

The variables in a linear equation are sometimes called unknowns.
Solution

Howard Anton (2005) :


A solution of a linear equation is a sequence of n numbers s1, s2, s3, ..., sn
such that the equation is satisfied when we substitute
x1 = s1 ; x2 = s2 ; x3 = s3 ; ... ; xn = sn
The set of all solutions of the equation is called its solution set or
sometimes the general solution of the equation.
System of Linear Equations

Howard Anton (2005) :


• A finite set of linear equations in the variables x1, x2, ..., xn
is called a system of linear equations or a linear system.
• A sequence of numbers s1, s2, s3, ..., sn is called a solution of
the system if x1 = s1 ; x2 = s2 ; x3 = s3 ; ... ; xn = sn is a
solution of every equation in the system.
Example
4𝑥1 − 𝑥2 + 3𝑥3 = −1
3𝑥1 + 𝑥2 + 9𝑥3 = −4

has the solution x1 = 1 ; x2 = 2 ; x3 = -1, since these values
satisfy both equations.
However, x1 = 1 ; x2 = 8 ; x3 = 1 is not a solution, since these
values satisfy only the first equation in the system.
Solution
Howard Anton (2005) :
• A system of equations that has no solution is said
to be inconsistent; if there is at least one
solution of the system, it is called consistent.
• Every system of linear equations has no
solutions, or has exactly one solution, or has
infinitely many solutions.
Solution

Consider a general system of two linear equations in the unknowns x


and y :
Solution
Augmented Matrices
An arbitrary system of m linear equations in n unknowns can be written
as

  a11 x1 + a12 x2 + ... + a1n xn = b1
  a21 x1 + a22 x2 + ... + a2n xn = b2
     .        .             .      .
  am1 x1 + am2 x2 + ... + amn xn = bm

where x1, x2, ..., xn are the unknowns and the subscripted a's and b's
denote constants.
Augmented Matrices

A system of m linear equations in n unknowns can be abbreviated by
writing only the rectangular array of numbers

  [ a11  a12  ...  a1n  b1 ]
  [ a21  a22  ...  a2n  b2 ]
  [  .    .         .   .  ]
  [ am1  am2  ...  amn  bm ]

This is called the augmented matrix for the system.


Example
Method for Solving a System of
Linear Equations
The basic method for solving a system of linear equations is to replace
the given system by a new system that has the same solution set but is
easier to solve.
This new system is generally obtained in a series of steps by applying
the following three types of operations to eliminate unknowns
systematically:
1. Multiply an equation through by a nonzero constant.
2. Interchange two equations.
3. Add a multiple of one equation to another.
Method for Solving a System of
Linear Equations

Since the rows (horizontal lines) of an augmented matrix correspond to


the equations in the associated system, these three operations
correspond to the following operations on the rows of the augmented
matrix:
1. Multiply a row through by a nonzero constant.
2. Interchange two rows.
3. Add a multiple of one row to another row.
Gaussian Elimination
Howard Anton (2005) :
To be of reduced row-echelon form, a matrix must have the following properties:
1. If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1. We
call this a leading 1.
2. If there are any rows that consist entirely of zeros, then they are grouped together at the bottom
of the matrix.
3. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row
occurs farther to the right than the leading 1 in the higher row.
4. Each column that contains a leading 1 has zeros everywhere else in that column.
A matrix that has the first three properties is said to be in row-echelon form. (Thus, a matrix in
reduced row-echelon form is of necessity in row-echelon form, but not conversely.)
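The slides describe reduced row-echelon form only in words. As an unofficial illustration, assuming SymPy is available, Matrix.rref() returns the reduced row-echelon form together with the pivot columns; the system below is an arbitrary example, not from the slides.

from sympy import Matrix

# Augmented matrix of an arbitrary system:
#   x + 2y + 3z = 9
#  2x -  y +  z = 8
#  3x      -  z = 3
aug = Matrix([[1,  2,  3, 9],
              [2, -1,  1, 8],
              [3,  0, -1, 3]])

rref_matrix, pivot_columns = aug.rref()   # reduced row-echelon form
print(rref_matrix)      # leading 1s, with zeros elsewhere in the pivot columns
print(pivot_columns)    # indices of the columns that contain a leading 1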
Reduced row-echelon form
Example
THANK YOU
HOMOGENEOUS SYSTEMS OF LINEAR EQUATIONS
MEETING 7

RENY RIAN MARLIANA
Definition
Howard Anton (2005) :
A system of linear equations is said to be homogeneous if the constant terms are all zero; the
system has the form

  a11 x1 + a12 x2 + ... + a1n xn = 0
  a21 x1 + a22 x2 + ... + a2n xn = 0
     .        .             .      .
  am1 x1 + am2 x2 + ... + amn xn = 0

Every homogeneous system of linear equations is consistent, since all such systems have
x1 = 0 ; x2 = 0 ; x3 = 0 ; ... ; xn = 0 as a solution.
This solution is called the trivial solution; if there are other solutions, they are called nontrivial
solutions.
A homogeneous system of linear equations with more unknowns than equations has infinitely many
solutions.
Special case
In the special case of a homogeneous linear system of two equations in
two unknowns, say

  a1 x + b1 y = 0
  a2 x + b2 y = 0

the graphs of the equations are lines through the origin, and the trivial
solution corresponds to the point of intersection at the origin.
Special case
EXERCISE
Solve the following homogeneous systems of linear equations by any method.
A. 2𝑥 − 𝑦 − 3𝑧 = 0
−𝑥 + 2𝑦 − 3𝑧 = 0
𝑥 + 𝑦 + 4𝑧 = 0

B. 𝑥1 + 3𝑥2 + 𝑥4 = 0
𝑥1 + 4𝑥2 + 2𝑥3 =0
− 2𝑥2 − 2𝑥3 − 𝑥4 = 0
2𝑥1 − 4𝑥2 + 𝑥3 + 𝑥4 = 0
𝑥1 − 2𝑥2 − 𝑥3 + 𝑥4 = 0
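One unofficial way to check your hand solution, assuming SymPy is available: the nullspace of the coefficient matrix is exactly the solution space of the homogeneous system.

from sympy import Matrix

# Coefficient matrix of system A from the exercise.
A = Matrix([[ 2, -1, -3],
            [-1,  2, -3],
            [ 1,  1,  4]])

# nullspace() returns a basis for the solution space of Ax = 0:
# an empty list means only the trivial solution exists; otherwise every
# solution is a linear combination of the returned basis vectors.
print(A.nullspace())

# System B can be checked the same way with its 5 x 4 coefficient matrix.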
THANK YOU
VECTORS
Meeting 9

RENY RIAN MARLIANA


DEFINITION
Howard Anton (2005):
• Vectors can be represented geometrically as directed line segments or arrows
in 2-space or 3-space.
• The direction of the arrow specifies the direction of the vector, and the
length of the arrow describes its magnitude
• The tail of the arrow is called the initial point of the vector
• The tip of the arrow the terminal point
• Symbolically, we shall denote vectors in lowercase boldface type (for instance,
a, k, v, w, and x)
Vector

• If the initial point of a vector v is A and the terminal point is B, we write

𝑣 = 𝐴𝐵
Vector
• Vectors with the same length and same direction, are called equivalent
• Equivalent vectors are regarded as equal even though they may be located in
different positions
• If v and w are equivalent, we write v = w
Sum of Two Vectors
Howard Anton (2005) :
If v and w are any two vectors, then the sum v + w is the vector determined
as follows: Position the vector w so that its initial point coincides with the
terminal point of v. The vector is represented by the arrow from the initial
point of v to the terminal point of w
Sum of Two Vectors
The vector of length zero is called the zero vector and is denoted by 0. For every
vector v, we define:

  0 + v = v + 0 = v
Sum of Two Vectors
If v is any nonzero vector, then -v, the negative of v, is defined to be the
vector that has the same magnitude as v but is oppositely directed. This vector
has the property

  v + (-v) = 0
Difference of Two Vectors

Howard Anton (2005) :


If v and w are any two vectors, then the difference of w from v is defined by

𝐯 − 𝐰 = 𝐯 + −𝐰
Difference of Two Vectors
To obtain the difference v-w without constructing -w , position v and w so
that their initial points coincide; the vector from the terminal point of w to the
terminal point of v is then the vector v-w
Scalar Multiple of Vector
Howard Anton (2005) :
If v is a nonzero vector and k is a nonzero real number (scalar), then the
product kv is defined to be the vector whose length is |k| times the length
of v and whose direction is the same as that of v if k>0 and opposite to
that of v if k<0 . We define kv = 0 if k=0 or v = 0 .
Scalar Multiple of Vector
• A vector of the form kv is called a scalar multiple of v.
• Vectors that are scalar multiples of each other are parallel
Vectors in Coordinate Systems
• Problems involving vectors can often be simplified by introducing a
rectangular coordinate system
• Let v be any vector in the plane, and assume that v has been positioned so
that its initial point is at the origin of a rectangular coordinate system.
• The coordinates (v1 , v2 ) of the terminal point of v are called the
components of v, and we write
𝐯 = 𝑣1 , 𝑣2
Vectors in Coordinate Systems
• v1 and v2 are the components of v.
Vectors in Coordinate Systems
If equivalent vectors, v and w, are located so that their initial points fall at the
origin, then it is obvious that their terminal points must coincide (since the
vectors have the same length and direction); thus the vectors have the same
components.
Conversely, vectors with the same components are equivalent since they
have the same length and the same direction.
Vectors in Coordinate Systems
In summary, two vectors

𝐯 = 𝑣1 , 𝑣2 and 𝐰 = 𝑤1 , 𝑤2

Are equivalent if and only if

𝑣1 = 𝑤1 and 𝑣2 = 𝑤2
Vectors in Coordinate Systems
If
𝐯 = 𝑣1 , 𝑣2 and 𝐰 = 𝑤1 , 𝑤2
Then

𝐯 + 𝐰 = 𝑣1 + 𝑤1 , 𝑣2 + 𝑤2
Vectors in Coordinate Systems
If v = (v1,v2) and k is any scalar, then by using a geometric argument involving
similar triangles, it can be shown that

𝑘𝐯 = 𝑘𝑣1 , 𝑘𝑣2
Vectors in Coordinate Systems
For example, if v = (1, -2) and w = (7, 6), then

  v + w = (1, -2) + (7, 6) = (1 + 7, -2 + 6) = (8, 4)

and

  4v = 4(1, -2) = (4(1), 4(-2)) = (4, -8)

and

  v - w = (v1 - w1, v2 - w2) = (1 - 7, -2 - 6) = (-6, -8)
Vectors in 3-Space
• Vectors in 3-space can be described by triples of real numbers by introducing
a rectangular coordinate system.
• To construct such a coordinate system, select a point O, called the origin,
and choose three mutually perpendicular lines, called coordinate axes,
passing through the origin.
• Label these axes x, y, and z, and select a positive direction for each
coordinate axis as well as a unit of length for measuring distances
Vectors in 3-Space
• Each pair of coordinate axes determines a plane called a coordinate plane
• These are referred to as the xy-plane, the xz-plane, and the yz-plane.
• To each point P in 3-space we assign a triple of numbers (x, y, z), called the
coordinates of P, as follows: Pass three planes through P parallel to the
coordinate planes, and denote the points of intersection of these planes with
the three coordinate axes by X, Y, and Z
• The coordinates of P are defined to be the signed lengths
𝑥 = 𝑂𝑋, 𝑦 = 𝑂𝑌, 𝑧 = 𝑂𝑍
Vectors in 3-Space
• Rectangular coordinate systems in 3-space fall into two categories, left-
handed and right-handed.
• A right-handed system has the property that an ordinary screw pointed in
the positive direction on the z-axis would be advanced if the positive x-axis
were rotated 90° toward the positive y-axis
• The system is left-handed if the screw would be retracted
Vectors in 3-Space
If a vector v in 3-space is positioned so its initial point is at the origin of a
rectangular coordinate system, then the coordinates of the terminal point are
called the components of v, and we write

𝐯 = 𝑣1 , 𝑣2 , 𝑣3
Vectors in 3-Space
Howard Anton (2005) :
v and w are equivalent if and only if v1 = w1, v2 = w2 and v3 = w3

𝐯 + 𝐰 = 𝑣1 + 𝑤1 , 𝑣2 + 𝑤2 , 𝑣3 + 𝑤3

and if k is any scalar


𝑘𝐯 = 𝑘𝑣1 , 𝑘𝑣2 , 𝑘𝑣3
Properties of Vector
Operations
Norm of a Vector
Howard Anton (2005) :
• The length of a vector u is often called the norm of u and is denoted by ‖u‖.
• It follows from the Theorem of Pythagoras that the norm of a vector
u = (u1, u2) in 2-space is

  ‖u‖ = √(u1² + u2²)
Norm of a Vector
Let u = (u1, u2, u3) be a vector in 3-space. Then

  ‖u‖² = (OR)² + (RP)² = (OQ)² + (OS)² + (RP)² = u1² + u2² + u3²

  ‖u‖ = √(u1² + u2² + u3²)

A vector of norm 1 is called a unit vector.
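A quick numerical check of the norm formula and of building a unit vector, assuming NumPy; the vector is an arbitrary example, not from the slides.

import numpy as np

u = np.array([2, -1, 2])

norm_u = np.linalg.norm(u)       # sqrt(2^2 + (-1)^2 + 2^2) = sqrt(9) = 3
unit_u = u / norm_u              # dividing a nonzero vector by its norm gives a unit vector
print(norm_u)                    # 3.0
print(np.linalg.norm(unit_u))    # 1.0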


Norm of a Vector
If P1 = (x1, y1, z1) and P2 = (x2, y2, z2) are two points in 3-space, then the distance
d between them is the norm of the vector

  P1P2 = (x2 - x1, y2 - y1, z2 - z1)

It follows that

  d = √((x2 - x1)² + (y2 - y1)² + (z2 - z1)²)
Norm of a Vector
If P1 = (x1, y1) and P2 = (x2, y2) are two points in 2-space, then the distance d
between them is the norm of the vector

  P1P2 = (x2 - x1, y2 - y1)

It follows that

  d = √((x2 - x1)² + (y2 - y1)²)
Norm of a Vector
THANK YOU
VECTORS
Meeting 10

RENY RIAN MARLIANA
Dot Product of Vectors
◦ Let u and v be two nonzero vectors in 2-space or 3-space, and
assume these vectors have been positioned so that their initial
points coincide.
◦ By the angle between u and v, we shall mean the angle θ
determined by u and v that satisfies 0 ≤ θ ≤ π.
Dot Product of Vectors
Howard Anton (2005) :
If u and v are vectors in 2-space or 3-space and θ is the angle
between u and v, then the dot product or Euclidean inner
product u·v is defined by

  u·v = ‖u‖‖v‖ cos θ   if u ≠ 0 and v ≠ 0
  u·v = 0              if u = 0 or v = 0
Example
As shown in Figure 3.3.2, the angle
between the vectors u=(0,0,1) and
v=(0,2,2) is 45°. Thus

  u·v = ‖u‖‖v‖ cos θ
      = √(0² + 0² + 1²) √(0² + 2² + 2²) (1/√2)
      = (1)(√8)(1/√2)
      = 2
Component Form of the Dot Product
◦ Howard Anton (2005) :
Let u=(u1, u2, u3) and v=(v1, v2, v3) be two nonzero
vectors. If, as shown in Figure 3.3.3, θ is the angle
between u and v, then the law of cosines yields

𝐮 · 𝐯 = 𝑢1 𝑣1 + 𝑢2 𝑣2 + 𝑢3 𝑣3
If u and v are two vectors in 2-space, then the
formula

𝐮 · 𝐯 = 𝑢1 𝑣1 + 𝑢2 𝑣2
Finding the Angle Between Vectors
If u and v are nonzero vectors, then Formula 1 can be written as

  cos θ = (u·v) / (‖u‖‖v‖)
Example
Consider the vectors u = (2, -1, 1) and v = (1, 1, 2). Find u·v
and determine the angle θ between u and v.

  u·v = u1 v1 + u2 v2 + u3 v3 = (2)(1) + (-1)(1) + (1)(2) = 3
  ‖u‖ = ‖v‖ = √6
  cos θ = (u·v) / (‖u‖‖v‖) = 3 / (√6 √6) = 1/2
  θ = cos⁻¹(1/2) = 60°
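The same computation in NumPy, as an unofficial check of the worked example above (assuming NumPy is available).

import numpy as np

u = np.array([2, -1, 1])
v = np.array([1, 1, 2])

dot = np.dot(u, v)                                         # 2 - 1 + 2 = 3
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))  # 3 / (sqrt(6)*sqrt(6)) = 0.5
theta = np.degrees(np.arccos(cos_theta))
print(dot, cos_theta, theta)                               # 3 0.5 60.0 (up to rounding)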
Orthogonal Vectors
◦ Perpendicular vectors are also called orthogonal vectors
◦ Two nonzero vectors are orthogonal if and only if their dot product
is zero
◦ If we agree to consider u and v to be perpendicular when either or
both of these vectors is 0, then we can state without exception
that two vectors u and v are orthogonal (perpendicular) if and
only if u.v=0
◦ To indicate that u and v are orthogonal vectors, we write

𝐮⊥𝐯
Properties of the Dot Product
Cross Product of Vectors
Howard Anton (2005) :
If u = (u1, u2, u3) and v = (v1, v2, v3) are vectors in 3-space, then the
cross product u x v is the vector defined by

  u x v = (u2 v3 - u3 v2 , u3 v1 - u1 v3 , u1 v2 - u2 v1)

or in determinant notation,
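The determinant form is not reproduced in these notes. As an unofficial check of the componentwise formula above (assuming NumPy, arbitrary vectors), np.cross computes the same expression:

import numpy as np

u = np.array([1, 2, -2])
v = np.array([3, 0,  1])

w = np.cross(u, v)          # (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)
print(w)                    # [ 2 -7 -6]
print(np.dot(w, u), np.dot(w, v))   # 0 0 : u x v is orthogonal to both u and v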
Relationships Involving Cross Product
and Dot Product
Properties of Cross Product
Vectors in n-Space
Howard Anton (2005) :
If n is a positive integer, then an ordered n-tuple is a sequence
of n real numbers (a1, a2, a3, …, an) . The set of all ordered n-
tuples is called n-space and is denoted by Rn .
Properties of Vector Operations in n-Space
Euclidean n-Space
Properties of Euclidean Inner Product
If u, v, and w are vectors in Rn and k is any scalar, then:
Norm and Distance in Euclidean n-
Space
Cauchy–Schwarz Inequality in Rn
Properties of Length in
Properties of Distance in Rn
Euclidean inner product in Rn
Orthogonality
Two vectors u and v in Rn are called orthogonal if u·v = 0.
Linear Independence
Definition :
Note
A set S with two or more vectors is :
a)Linearly dependent if and only if at least one of the vectors in S is
expressible as a linear combination of the other vectors in S.
b)Linearly independent if and only if no vector in S is expressible as
a linear combination of the other vectors in S.
c) A finite set of vectors that contains the zero vector is linearly
dependent.
d)A set with exactly two vectors is linearly independent if and only
if neither vector is a scalar multiple of the other.
Geometric Interpretation of Linear
Independence
Geometric Interpretation of Linear
Independence
SUBSPACE
Howard Anton (2005) :
◦ A subset W of a vector space V is called a subspace
of V if W is itself a vector space under the addition
and scalar multiplication defined on V.
◦ If W is a set of one or more vectors from a vector
space V, then W is a subspace of V if and only if the
following conditions hold.
a) If u and v are vectors in W, then u+v is in W
b) If k is any scalar and u is any vector in W, then ku is
in W
SUBSPACE
LINEAR COMBINATION
Howard Anton (2005) :
A vector w is called a linear combination of the vectors v1, v2,
..., vr if it can be expressed in the form

  w = k1 v1 + k2 v2 + ... + kr vr

where k1, k2, ..., kr are scalars.


THANK YOU
BASIS AND DIMENSION
MEETING 11

RENY RIAN MARLIANA


BASIS
Howard Anton (2005) :
If V is any vector space and S={v1 , v2 , …, vn} is a set of vectors in
V, then S is called a basis for V if the following two
conditions hold:
(a) S is linearly independent.
(b) S spans V
UNIQUENESS OF BASIS
REPRESENTATION
If S={v1 , v2 , …, vn} is a basis for a vector space V, then
every vector v in V can be expressed in the form

v=c1v1+c2v2+…+cnvn
in exactly one way.
COORDINATES RELATIVE TO A BASIS
If S={v1 , v2 , …, vn} is a basis for a vector space V and
v=c1v1+c2v2+…+cnvn
Is the expression for a vector v in terms of the basis S, then the scalars c1, c2, …,
cn are called the coordinates of v relative to the basis S.
The vector (c1, c2, …, cn) in Rn constructed from these coordinates is called the
coordinate vector of v relative to S. It is denoted by :

(v)S = (c1, c2, …, cn)


BASIS
• A nonzero vector space V is called finite-dimensional if it
contains a finite set of vectors {v1 , v2 , …, vn} that forms a
basis.
• If no such set exists, V is called infinite-dimensional. In
addition, we shall regard the zero vector space to be finite
dimensional.
• All bases for a finite-dimensional vector space have the same
number of vectors.
BASIS
Let V be a finite-dimensional vector space, and let
{v1,v2,…,vn} be any basis.
a) If a set has more than n vectors, then it is linearly
dependent.
b) If a set has fewer than n vectors, then it does not span V.
DIMENSION
Howard Anton (2005) :
The dimension of a finite-dimensional vector space V,
denoted by dim(V), is defined to be the number of
vectors in a basis for V.
In addition, we define the zero vector space to have
dimension zero.
DIMENSIONS OF SOME VECTOR SPACES

  dim(Rn) = n
  dim(Pn) = n + 1
  dim(Mmn) = mn
ROW SPACE, COLUMN SPACE, AND
NULL SPACE
If A is an m x n matrix, then:
1. the subspace of Rn spanned by the row vectors of A is called
the row space of A,
2. the subspace of Rm spanned by the column vectors of A is called
the column space of A,
3. the solution space of the homogeneous system of equations
Ax = 0, which is a subspace of Rn, is called the nullspace of A.
BASES FOR ROW SPACES, COLUMN
SPACES, AND NULLSPACES
• Elementary row operations do not change the nullspace of a matrix.
• Elementary row operations do not change the row space of a matrix.
• If A and B are row equivalent matrices, then
a) A given set of column vectors of A is linearly independent if and only if the
corresponding column vectors of B are linearly independent.
b) A given set of column vectors of A forms a basis for the column space of A if
and only if the corresponding column vectors of B form a basis for the column
space of B.
• If a matrix R is in row-echelon form, then the row vectors with the leading 1' s ( the
nonzero row vectors) form a basis for the row space of R, and the column vectors
with the leading 1' s of the row vectors form a basis for the column space of R.
THANK YOU
LINEAR TRANSFORMATIONS
MEETING 13

RENY RIAN MARLIANA


LINEAR TRANSFORMATIONS
One application of linear transformations is in computer graphics,
where they are used to construct particular objects.
LINEAR TRANSFORMATIONS
A linear transformation from Rn to Rm was defined as a
function

  T(x1, x2, ..., xn) = (w1, w2, ..., wm)

for which the equations relating w1, w2, ..., wm and x1, x2, ...,
xn are linear.
LINEAR TRANSFORMATIONS
Howard Anton (2005) :
If T: V → W is a function from a vector space V into a vector space W, then T
is called a linear transformation from V to W if, for all vectors u and v
in V and all scalars c,
(a) T(u+v) = T(u) + T(v)
(b) T(cu) = c T(u)
In the special case where V = W, the linear transformation T: V → V is called a
linear operator on V.
MATRIX TRANSFORMATIONS
Linear transformations from Rn to Rm are linear
transformations under this more general definition as well.
We shall call linear transformations from Rn to Rm matrix
transformations, since they can be carried out by matrix
multiplication.
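A minimal sketch of a matrix transformation and a numerical check of the two linearity conditions, assuming NumPy; the matrix and vectors are arbitrary examples, not from the slides.

import numpy as np

# A fixed 2 x 3 matrix defines a matrix transformation T: R^3 -> R^2, T(x) = Ax.
A = np.array([[1, 0, 2],
              [0, 3, 1]])

def T(x):
    return A @ x

u = np.array([1, 1, 1])
v = np.array([2, 0, -1])
k = 5

# The two linearity conditions hold for every matrix transformation:
print(np.array_equal(T(u + v), T(u) + T(v)))   # True
print(np.array_equal(T(k * u), k * T(u)))      # True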
THE ZERO TRANSFORMATION
Let V and W be any two vector spaces. The mapping T: V → W such that T(v) = 0 for
every v in V is a linear transformation called the zero transformation. To see
that T is linear, observe that

  T(u + v) = 0 ;  T(u) = 0 ;  T(v) = 0 ;  T(ku) = 0

Therefore,

  T(u + v) = T(u) + T(v) ;  T(ku) = k T(u)
THE IDENTITY OPERATOR
Let V be any vector space. The mapping
  I: V → V
defined by
  I(v) = v
is called the identity operator on V.


PROPERTIES OF LINEAR
TRANSFORMATIONS
If T:VW is a linear transformation, then
a) T(0)=0
b) T(-v)=-T(v) for all v in V
c) T(v-w)=T(v)-T(w) for all v and w in V
COMPOSITION OF MATRIX
TRANSFORMATIONS
If T1 : U V and T2 : V W are linear transformations, then the
composition of T2 with T1 , denoted by T2 o T1 is the function defined
by the formula

𝐓2 ∘ 𝐓1 𝐮 = 𝐓2 𝐓1 𝐮
Where u is a vector in U.

If T1 : U V and T2 : V W are linear transformations, then (T2 o T1):


UW is aslo a linear transformation,
COMPOSITION OF MATRIX
TRANSFORMATIONS
KERNEL AND RANGE
If T : V W is a linear transformation, then :
• The set of vectors in V that T maps into 0 is called kernel
of T, it is denoted by ker(T)
• The set of all vectors in W that are images under T of at
least one vector in V is called the range of T, it is denoted
by R(T)
PROPERTIES OF KERNEL AND RANGE
If T : V W is a linear transformation, then :
• The kernel of T is a subspace of V
• The range of T is a subspace of W.
INVERSE LINEAR TRANSFORMATIONS
Howard Anton (2005) :
• A linear transformation T : V W is said to be one-to-one
if T maps distinct vectors in V into distinct vectors in W.
• If T : V W is a linear transformation, then the following
are equivalent :
a) T is one-to-one
b) The kernel of T contains only the zero vector, that is,
ker(T)=0
c) Nullity(T)=0
INVERSE LINEAR TRANSFORMATIONS
If V is a finite-dimensional vector space, and T: V → V is a linear
operator, then the following are equivalent:
a) T is one-to-one
b) The kernel of T contains only the zero vector, that is, ker(T) = {0}
c) Nullity(T)=0
d) The range of T is V , that is R(T)=V
INVERSE LINEAR TRANSFORMATIONS
If T1 : U V and T2 : V W are one-to-one
linear transformation, then :
• T2 o T1 is one-to-one
• (T2 o T1 )-1 = T2-1 o T1-1
THANK YOU
EIGENVALUES AND EIGENVECTORS

Meeting 14

RENY RIAN MARLIANA


DEFINITION
If A is an n x n matrix, then a nonzero vector x in Rn is called an
eigenvector of A if Ax is a scalar multiple of x, that is, if

  Ax = λx

for some scalar λ. The scalar λ is called an eigenvalue of A, and x
is said to be an eigenvector of A corresponding to λ.
DEFINITION
• In R2 and R3, multiplication by A maps each eigenvector x of A (if any) onto the
same line through the origin as x.
• Depending on the sign and the magnitude of the eigenvalue λ corresponding to x,
the linear operator Ax = λx compresses or stretches x by a factor λ, with a
reversal of direction in the case where λ is negative.
FINDING THE EIGENVALUES OF A MATRIX
To find the eigenvalues of an n x n matrix A, we rewrite Ax = λx as

  Ax = λIx
  (λI - A)x = 0

For λ to be an eigenvalue, there must be a nonzero solution of this equation.
The equation above has a nonzero solution if and only if

  det(λI - A) = 0

This is called the characteristic equation of A. The scalars satisfying this equation
are the eigenvalues of A.
When expanded, the determinant det(λI - A) is always a polynomial p in λ, called the
characteristic polynomial of A.
FINDING THE EIGENVALUES OF A MATRIX
If A is an n x n matrix, then the characteristic polynomial of A has
degree n and the coefficient of λn is 1. That is, the characteristic
polynomial p(λ) of an n x n matrix has the form

  p(λ) = det(λI - A) = λn + c1 λn-1 + ... + cn

It follows that the characteristic equation

  λn + c1 λn-1 + ... + cn = 0

has at most n distinct solutions, so an n x n matrix has at most n
distinct eigenvalues.
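A minimal numerical sketch of these ideas, assuming NumPy; the 2 x 2 matrix is an arbitrary example, not from the slides.

import numpy as np

A = np.array([[3,  0],
              [8, -1]])

# Coefficients of the characteristic polynomial det(lambda*I - A):
char_poly = np.poly(A)          # [ 1. -2. -3.]  i.e. lambda^2 - 2*lambda - 3
print(np.roots(char_poly))      # its roots, 3 and -1 (order may vary)

# np.linalg.eig returns the same eigenvalues together with eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ x, lam * x))    # A x = lambda x holds for each pair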
EIGENVALUES OF AN UPPER TRIANGULAR MATRIX
Howard Anton (2005) :
If A is an n x n triangular matrix (upper
triangular, lower triangular, or diagonal), then
the eigenvalues of A are the entries on the
main diagonal of A
EIGENVALUES OF AN UPPER TRIANGULAR MATRIX

We get that

  (λ - a11)(λ - a22)(λ - a33)(λ - a44) = 0

so

  λ1 = a11 ; λ2 = a22 ; λ3 = a33 ; λ4 = a44
EQUIVALENT STATEMENTS
Howard Anton (2005) :
If A is an n x n matrix and λ is a real number, then the
following are equivalent:
1. λ is an eigenvalue of A
2. The system of equations (λI - A)x = 0 has nontrivial
solutions.
3. There is a nonzero vector x in Rn such that Ax = λx
4. λ is a solution of the characteristic equation det(λI - A) = 0
FINDING EIGENVECTORS AND BASES FOR
EIGENSPACES
Howard Anton (2005) :
The eigenvectors of A corresponding to an eigenvalue λ are the
nonzero vectors x that satisfy Ax = λx.
Equivalently, the eigenvectors corresponding to λ are the nonzero
vectors in the solution space of (λI - A)x = 0, that is, in the null space
of λI - A.
We call this solution space the eigenspace of A corresponding to λ.
POWERS OF A MATRIX
Once the eigenvalues and eigenvectors of a matrix A are found, it is a simple
matter to find the eigenvalues and eigenvectors of any positive integer power
of A; for example, if λ is an eigenvalue of A and x is a corresponding
eigenvector, then

  A²x = A(Ax) = A(λx) = λ(Ax) = λ(λx) = λ²x

which shows that λ² is an eigenvalue of A² and that x is a corresponding
eigenvector.
POWERS OF A MATRIX
Howard Anton (2005) :
If k is a positive integer, λ is an eigenvalue of a
matrix A, and x is a corresponding eigenvector, then
λk is an eigenvalue of Ak and x is a corresponding
eigenvector.
A square matrix A is invertible if and only if λ = 0 is
not an eigenvalue of A.
THANK YOU
