
The Mathematical Basis of Photogrammetry

Photogrammetric Scanners
A photogrammetric scanner is a device used to convert the content of a photograph from analog to digital format.
Coordinate measurement on the resulting digital imagery can be done manually or through image-processing algorithms.
Requirements: sufficient geometric and radiometric resolution and high geometric accuracy.
Geometric/spatial resolution refers to the pixel size of the image. The smaller the pixel size, the more detail can be detected in the image. For high-quality photogrammetric scanners the smallest pixel size is about 5 to 15 µm.
Radiometric resolution refers to the number of quantization levels. The minimum is 256 levels (8 bit); most scanners are capable of 1024 levels (10 bit) or more.
Geometric quality refers to the positional accuracy of the pixels. For high-quality scanners it is about 2 to 3 µm.

Chapter 4
Sources of photo coordinate error
The following are some sources of error that can distort the true photo coordinates:
1. Film distortion due to shrinkage, expansion, or lack of film flatness
2. Failure of the fiducial axes to intersect at the principal point
3. Lens distortion
4. Atmospheric refraction distortion
5. Earth curvature distortion
6. Operator errors in measurement
7. Errors introduced by automated correlation techniques

Chapter 4
Analytical photogrammetry
Definition: Analytical photogrammetry is the term used to describe the rigorous mathematical calculation of coordinates of points in object space, based on camera parameters, measured photo coordinates and ground control.

Advantages of analytical photogrammetry:

→ it can handle any amount of tilt
→ it generally involves solving large, complex systems of redundant equations by the method of least squares
→ it forms the basis of many software and hardware systems, including stereoplotters, DTM generation, orthophoto production, digital photo rectification and aerotriangulation.

Chapter 11
Image measurement
Before using x and y photo coordinates, the following conditions need to be considered:
1. The coordinates (usually in mm) are relative to the principal point as their origin.
2. Analytical photogrammetry is based on the assumptions that "light rays travel in straight lines" and that "the focal plane of the camera is flat". Corrections to the photo coordinates are therefore needed to compensate for the error sources that violate these assumptions.
3. The measurements must have high precision.
4. Object points must be identified precisely on all photos so that the measurements are consistent.
5. Object space coordinates are based on a 3D Cartesian system.

Chapter 11
The collinearity condition
The exposure station, the object point and its image point on the photo all lie on a single straight line.
Mathematical relationships can be built from this condition.

Appendix D
The collinearity condition equations
The coordinates of the exposure station are XL, YL, ZL in the object (ground) coordinate system XYZ.
The coordinates of object point A are XA, YA, ZA in the object (ground) coordinate system XYZ.
The coordinates of image point a of object point A are xa, ya, za in the photo coordinate system xy.
The coordinates of image point a are xa', ya', za' in a rotated image-plane system x'y'z' that is parallel to the object coordinate system.
The transformation from (xa', ya', za') to (xa, ya, za) is carried out using the rotation equations.

Appendix D
The rotation equations
Omega rotation about the x' axis:
The new coordinates (x1, y1, z1) of a point (x', y', z') after rotating the original coordinate system about the x axis by the rotation angle ω are:
x1 = x'
y1 = y' cos ω + z' sin ω
z1 = -y' sin ω + z' cos ω
Similarly, the equations for the phi rotation about the y axis are:
x2 = -z1 sin Ф + x1 cos Ф
y2 = y1
z2 = z1 cos Ф + x1 sin Ф
And the equations for the kappa rotation about the z axis are:
x = x2 cos κ + y2 sin κ
y = -x2 sin κ + y2 cos κ
z = z2
Appendix C
The final rotation equations
Substituting through each stage, we obtain:
x = m11 x' + m12 y' + m13 z'
y = m21 x' + m22 y' + m23 z'
z = m31 x' + m32 y' + m33 z'
where the m's are functions of the rotation angles ω, Ф and κ.

In matrix form: X = M X'
where
X = [ x; y; z ],    M = [ m11 m12 m13; m21 m22 m23; m31 m32 m33 ],    X' = [ x'; y'; z' ]

Characteristics of the rotation matrix M:
1. The sum of the squares of the three direction cosines (the elements of M) in any row or column is unity.
2. M is orthogonal, i.e. M^-1 = M^T.

Appendix C
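As an illustration only, here is a minimal Python/NumPy sketch (with made-up example angles) that builds M from the three sequential rotations exactly as written above and verifies both properties of the rotation matrix:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Combined rotation matrix M for sequential omega (x'), phi (y), kappa (z) rotations."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    # Elementary rotations, written exactly as the equations above
    M_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    M_phi   = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    M_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return M_kappa @ M_phi @ M_omega   # x = M x'

omega, phi, kappa = np.radians([2.0, -1.5, 30.0])  # assumed example angles
M = rotation_matrix(omega, phi, kappa)
print(np.allclose(np.linalg.inv(M), M.T))          # True: M is orthogonal
print(np.allclose((M**2).sum(axis=0), 1.0))        # squared direction cosines in each column sum to 1
```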
Back to the collinearity equations



Collinearity equations
Using the property of similar triangles:

xa' / (XA − XL) = ya' / (YA − YL) = za' / (ZA − ZL)

so that

xa' = [ (XA − XL)/(ZA − ZL) ] za';    ya' = [ (YA − YL)/(ZA − ZL) ] za';    za' = [ (ZA − ZL)/(ZA − ZL) ] za'

Substitute these into the rotation formula:

xa = m11 [ (XA − XL)/(ZA − ZL) ] za' + m12 [ (YA − YL)/(ZA − ZL) ] za' + m13 [ (ZA − ZL)/(ZA − ZL) ] za'
ya = m21 [ (XA − XL)/(ZA − ZL) ] za' + m22 [ (YA − YL)/(ZA − ZL) ] za' + m23 [ (ZA − ZL)/(ZA − ZL) ] za'
za = m31 [ (XA − XL)/(ZA − ZL) ] za' + m32 [ (YA − YL)/(ZA − ZL) ] za' + m33 [ (ZA − ZL)/(ZA − ZL) ] za'

Now factor out za'/(ZA − ZL), divide xa and ya by za, add corrections for the offset of the principal point (xo, yo), and set za = −f, to get:

xa = xo − f [ m11(XA − XL) + m12(YA − YL) + m13(ZA − ZL) ] / [ m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL) ]

ya = yo − f [ m21(XA − XL) + m22(YA − YL) + m23(ZA − ZL) ] / [ m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL) ]

Appendix D
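A minimal Python/NumPy sketch of the resulting collinearity equations, projecting an object point into photo coordinates; all numerical values are assumed for illustration (for a tilted photo, M would come from the rotation-matrix sketch shown earlier):

```python
import numpy as np

def collinearity(obj_pt, exp_station, M, f, x0=0.0, y0=0.0):
    """Project object point (XA, YA, ZA) into photo coordinates (xa, ya)."""
    dX, dY, dZ = np.asarray(obj_pt) - np.asarray(exp_station)
    q = M[2, 0]*dX + M[2, 1]*dY + M[2, 2]*dZ      # denominator: m31..m33
    r = M[0, 0]*dX + M[0, 1]*dY + M[0, 2]*dZ      # numerator for x: m11..m13
    s = M[1, 0]*dX + M[1, 1]*dY + M[1, 2]*dZ      # numerator for y: m21..m23
    return x0 - f * r / q, y0 - f * s / q

# Assumed example: vertical photo (M = identity), f = 152 mm, exposure station 1500 m above the point
M = np.eye(3)
xa, ya = collinearity(obj_pt=(500.0, 400.0, 120.0),
                      exp_station=(450.0, 380.0, 1620.0), M=M, f=152.0)
print(xa, ya)   # photo coordinates in mm
```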
Review of the collinearity equations
The collinearity equations:

xa = xo − f [ m11(XA − XL) + m12(YA − YL) + m13(ZA − ZL) ] / [ m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL) ]

ya = yo − f [ m21(XA − XL) + m22(YA − YL) + m23(ZA − ZL) ] / [ m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL) ]

The collinearity equations:
• are nonlinear, and
• involve 9 parameters:
  1. omega, phi, kappa inherent in the m's
  2. object point coordinates (XA, YA, ZA)
  3. exposure station coordinates (XL, YL, ZL)

Where,
xa, ya are the photo coordinates of image point a
XA, YA, ZA are the object space coordinates of object/ground point A
XL, YL, ZL are the object space coordinates of the exposure station location
f is the camera focal length
xo, yo are the offsets of the principal point coordinates
the m's are functions of the rotation angles omega, phi, kappa (as derived earlier)

Ch. 11 & App D


Now that we know about the collinearity condition, let's see where we need to apply it.
First, we need to know what it is that we need to find…



Elements of exterior orientation
The collinearity equations contain 9 unknowns:
1) the angular orientation of the exposure station (omega, phi, kappa),
2) the exposure station coordinates (XL, YL, ZL), and
3) the object point coordinates (XA, YA, ZA).

First we must compute the position and angular attitude of the exposure station, i.e. the elements of exterior orientation.

The 6 elements of exterior orientation are therefore:
1) the spatial position (XL, YL, ZL) of the camera, and
2) the angular orientation (omega, phi, kappa) of the camera.

To determine the elements of exterior orientation of a single photo we need:
1) photographic images of at least 3 control points whose ground coordinates X, Y and Z are known, and
2) the calibrated focal length of the camera.

Chapter 10
Space resection by collinearity
Space resection by collinearity requires ground control points whose X, Y and Z ground coordinates are known and whose image coordinates have been measured.
The six exterior orientation parameters are computed as follows:
• 2 equations can be formed for each control point
• 3 control points give 6 equations: a unique solution, while 4 or more control points (more than 6 equations) allow a least squares solution.
Approximate values of the orientation parameters are required because the collinearity equations are nonlinear and must be linearized with Taylor's series.

Number of points | Number of equations | Unknown orientation parameters
1 | 2 | 6
2 | 4 | 6
3 | 6 | 6
4 | 8 | 6

Chapter 10 & 11
The coplanarity condition

• As with the collinearity condition, coplanarity is the condition that the two exposure stations, an object point, and its image coordinates on the two photos all lie in a single plane.

• Like the collinearity equations, the coplanarity equations are nonlinear and must be linearized using Taylor's theorem.

• Space resection remains one of the methods used to determine the elements of exterior orientation.



Initial approximations for space resection
• We need initial values for the 6 exterior orientation parameters.
• Omega and phi angles: for near-vertical photos, the approximate values can be taken as zero.
• H (ZL):
  • an altimeter reading gives a rough value
  • compute ZL (the flying height H above the datum) using a ground line of known length that appears on the photo.

• Computing H requires only 2 control points; any additional ones are redundant. The approximation can be improved by averaging the H values.

Chapter 11 & 6
Computing the flying height (H)
The flying height can be computed using a ground line of known length that appears on the photo.
The line should lie on fairly level terrain, because an elevation difference between its endpoints will make the computed flying height incorrect.
Accurate results are obtained if the two endpoints of the line lie at roughly equal distances from the principal point of the photo and on a line through the principal point.

H can be computed using the scale equation of a photo:

S = ab/AB = f/H
(photo scale over flat terrain)
or
S = f/(H − h)
(photo scale at any point whose elevation above the datum is h)

Chapter 6
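A small sketch of this computation in Python; the numbers are assumed example values. From S = ab/AB = f/(H − h) it follows that H = f·AB/ab + h:

```python
def flying_height(f_mm, ab_mm, AB_m, h_m=0.0):
    """Flying height above datum from photo scale: S = ab/AB = f/(H - h)  =>  H = f*AB/ab + h."""
    return (f_mm / 1000.0) * AB_m / (ab_mm / 1000.0) + h_m

# Assumed example: f = 152 mm, a 600 m ground line at elevation 250 m images as 91.2 mm
print(flying_height(f_mm=152.0, ab_mm=91.2, AB_m=600.0, h_m=250.0))  # 1250.0 m above datum
```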
As an explanation of the equations from which H is calculated:

Photo scale

S = ab/AB = f/H
SAB = ab/AB = La/LA = Lo/LO = f/(H − h)
where:
1) S is the scale of a vertical photo over flat terrain
2) SAB is the scale of a vertical photo over variable terrain
3) ab is the distance between image points a and b on the photo
4) AB is the true ground distance between points A and B
5) f is the focal length
6) La is the distance between exposure center L and the image a of point A on the photo positive
7) LA is the distance between exposure center L and point A
8) Lo = f is the distance from L to the principal point o of the photo
9) LO = H − h is the distance from L to the projection O of o onto the horizontal plane containing point A, at elevation h above the datum.
Note: For vertical photographs taken over variable terrain, there are an infinite number of different scales.

Chapter 6
Initial Approximations for XL, YL and κ
The x' and y' ground coordinates of any point can be obtained by simply multiplying its x and y photo coordinates by the inverse of the photo scale at that point.
This requires knowing
• f, H and
• the elevation of the object point, Z or h.
A 2D conformal coordinate transformation (comprising scale, rotation and translation) can then be performed, which relates these ground coordinates computed from the vertical photo equations to the control values:
X = a·x' − b·y' + TX;    Y = a·y' + b·x' + TY
The (x', y') and control (X, Y) values are known for n points, giving us 2n equations.
The 4 unknown transformation parameters (a, b, TX, TY) can therefore be calculated by least squares. So essentially we are running the resection equations in a diluted mode with initial values of as many parameters as we can find, to calculate the initial values of those that cannot be easily estimated.
TX and TY are used as the initial approximations for XL and YL, respectively.
The rotation angle θ = tan-1(b/a) is used as the approximation for κ (kappa).

Chapter 11
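A hedged Python/NumPy sketch of this least squares step, using hypothetical control data: each control point contributes two rows of the linear system, and the solution yields TX, TY (the initial XL, YL) and θ as the initial approximation for κ:

```python
import numpy as np

def conformal_2d(xy_prime, XY):
    """Least squares 2D conformal transform: X = a*x' - b*y' + Tx,  Y = a*y' + b*x' + Ty."""
    A, L = [], []
    for (xp, yp), (X, Y) in zip(xy_prime, XY):
        A.append([xp, -yp, 1.0, 0.0]); L.append(X)   # unknowns: a, b, Tx, Ty
        A.append([yp,  xp, 0.0, 1.0]); L.append(Y)
    a, b, Tx, Ty = np.linalg.lstsq(np.asarray(A), np.asarray(L), rcond=None)[0]
    return a, b, Tx, Ty, np.arctan2(b, a)            # theta approximates kappa

# Hypothetical ground coordinates from the vertical-photo assumption vs. control values
xy_prime = [(120.0, 80.0), (950.0, 110.0), (510.0, 760.0)]
XY       = [(5116.0, 4086.0), (5944.5, 4157.5), (5472.0, 4785.5)]
a, b, Tx, Ty, theta = conformal_2d(xy_prime, XY)
print(Tx, Ty, np.degrees(theta))   # initial approximations for XL, YL and kappa
```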
Space Resection by Collinearity:
Summary
(To determine the 6 elements of exterior orientation using collinearity condition)
Summary of steps:
1. Calculate H (ZL)
2. Compute ground coordinates from assumed vertical photo for the control points.
3. Compute 2D conformal coordinate transformation parameters by a least squares
solution using the control points (whose coordinates are known in both the photo
coordinate system and the ground control coordinate system)
4. Form linearized observation equations
5. Form and solve normal equations.
6. Add corrections and iterate till corrections become negligible.

Summary of Initializations:
• Omega, Phi -> zero, zero
• Kappa -> Theta
• XL, YL -> TX, TY
• ZL -> flying height H

Chapter 11
If space resection is used to determine the elements of
exterior orientation for both photos of a stereopair, then
object point coordinates for points that lie in the stereo
overlap area can be calculated by the procedure known as
space intersection…



Space intersection by collinearity
Purpose: to determine the object coordinates of points that lie in the overlap area of the two photos that form a stereopair.
Principle: corresponding rays from the two photos must intersect at a single point.

For point A:
The collinearity equations are written for image point a1 on the left photo (of the stereopair) and for image point a2 on the right photo, giving 4 equations.
The unknowns are XA, YA and ZA.
Because the equations have been linearized using Taylor's theorem, initial approximations are needed for each point whose object space coordinates are to be determined.
The initial approximations are determined using the parallax equations.

Chapter 11
The parallax equations
The parallax equations are:
1) pa = xa − x'a
2) hA = H − B·f/pa
3) XA = B·xa/pa
4) YA = B·ya/pa
where
hA is the elevation of point A above the datum
H is the flying height above the datum
B is the air base (the distance between the exposure stations)
f is the focal length of the camera
pa is the parallax of point A
XA and YA are the ground coordinates of point A in the coordinate system whose origin is the datum point vertically below the left exposure station, with the X axis in the vertical plane containing the photographic x and x' flight axes and the Y axis through the same datum point, perpendicular to the X axis
xa and ya are the photo coordinates of the point measured on the left photo

Chapter 8
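A minimal Python sketch of the parallax equations with assumed example numbers, as used to seed space intersection with approximate object coordinates:

```python
def parallax_point(xa, ya, xa_prime, H, B, f):
    """Approximate object coordinates of a point from stereo parallax:
    pa = xa - xa'; hA = H - B*f/pa; XA = B*xa/pa; YA = B*ya/pa."""
    pa = xa - xa_prime
    hA = H - B * f / pa
    XA = B * xa / pa
    YA = B * ya / pa
    return pa, hA, XA, YA

# Assumed values: H = 1500 m, air base B = 600 m, f = 152 mm, photo coordinates in mm
print(parallax_point(xa=54.2, ya=36.9, xa_prime=-38.3, H=1500.0, B=600.0, f=152.0))
```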
Using the parallax equations for space intersection
To use the parallax equations, H and B must be determined:
Since the X, Y, Z coordinates of both exposure stations are known, H is taken as the average of ZL1 and ZL2, and
B = [ (XL2 − XL1)² + (YL2 − YL1)² ]^1/2

The coordinates produced by the parallax equations are in an arbitrary ground coordinate system.
To convert them to another system, for example WGS84, a conformal coordinate transformation is used.

Chapter 11
Now that we know how to determine object space
coordinates of a common point in a stereopair, we can
examine the overall procedure for all the points in the
stereopair...



The analytical stereomodel
In most applications aerial photos are taken so that adjacent photos overlap by more than 50%. Two adjacent overlapping photos form a stereopair.
The object points that appear in the overlap area of a stereopair form a stereomodel.
The mathematical calculation of 3D ground coordinates of points in the stereomodel by analytical photogrammetric techniques forms the analytical stereomodel.
The process of forming a stereomodel analytically involves 3 steps:
1. Interior orientation (also called "photo coordinate refinement"): mathematically recreates the camera geometry as it was at the instant of exposure.
2. Relative orientation: determines the relative angular attitude and positional displacement between the photos as they were when the photos were taken.
3. Absolute orientation: determines the absolute angular attitude and positions of both photos.
After these three steps are accomplished, points in the stereomodel have object coordinates in the ground coordinate system.

Chapter 11
Analytical relative orientation
Analytical relative orientation involves defining (assuming) certain elements of exterior orientation and calculating the remaining ones.

Initialization:

• If the parameters are given the values ω1 = Ф1 = κ1 = XL1 = YL1 = 0, ZL1 = f and XL2 = b,

• then the scale of the stereomodel is approximately equal to the photo scale.

• Hence the x and y photo coordinates of the left photo are good approximations for the X and Y object space coordinates, and

• zero is a good approximation for the Z object space coordinates.

Chapter 11
Analytical relative orientation
1) All elements of exterior orientation of the left photo of the stereopair, except ZL1, are set to zero.
2) For convenience, ZL of the left photo (ZL1) is set equal to f and XL of the right photo (XL2) is set equal to the photo base b.
3) This leaves 5 elements of the right photo that must be determined.
4) Using the collinearity condition, at least 5 object points are needed to solve for the unknown parameters, since each point used in relative orientation yields a net gain of only one equation (because its X, Y and Z coordinates are also unknowns):

No. of points in overlap | No. of equations | No. of unknowns
1 | 4 (2+2) | 5 + 3 = 8
2 | 4 + 4 = 8 | 8 + 3 = 11
3 | 8 + 4 = 12 | 11 + 3 = 14
4 | 12 + 4 = 16 | 14 + 3 = 17
5 | 16 + 4 = 20 | 17 + 3 = 20
6 | 20 + 4 = 24 | 20 + 3 = 23

Chapter 11
Analytical Absolute Orientation
Stereomodel coordinates of tie points are related to their 3D coordinates in a (real, earth-based) ground coordinate system. For a small stereomodel, such as that computed from one stereopair, analytical absolute orientation can be performed using a 3D conformal coordinate transformation.

It requires a minimum of two horizontal and three vertical control points (20 equations with 8 unknowns plus the 12 exposure station parameters for the two photos: a closed-form solution). Additional control points provide redundancy, enabling a least squares solution.

(Horizontal control: the position of the point in object space is known wrt a horizontal datum;
vertical control: the elevation of the point is known wrt a vertical datum.)

Once the transformation parameters have been computed, they can be applied to the remaining stereomodel points, including the XL, YL and ZL coordinates of the left and right photographs. This gives the coordinates of all stereomodel points in the ground system.

Control configuration | No. of equations | No. of additional unknowns | Total no. of unknowns
1 horizontal control point | 2 per photo => total 4 | 1 unknown Z value | 12 exterior orientation parameters + 1 = 13
1 vertical control point | 2 equations per photo => 4 equations total | 2 unknown X and Y values | 12 + 2 = 14
2 horizontal control points | 4 × 2 = 8 equations | 1 × 2 = 2 | 12 + 2 = 14
3 vertical control points | 4 × 3 = 12 equations | 2 × 3 = 6 | 12 + 6 = 18
2 horizontal + 3 vertical control points | 8 + 12 = 20 equations | 2 + 6 = 8 | 12 + 8 = 20
Chapter 16 & 11
As already mentioned while covering camera
calibration, camera calibration can also be included
in a combined interior-relative-absolute orientation.
This is known as analytical self-calibration…



Analytical Self Calibration
Analytical self-calibration is a computational process wherein camera calibration
parameters are included in the photogrammetric solution, generally in a
combined interior-relative-absolute orientation.

The process uses collinearity equations that have been augmented with
additional terms to account for adjustment of the calibrated focal length,
principal-point offsets, and symmetric radial and decentering lens distortion.
In addition, the equations might include corrections for atmospheric refraction.

With the inclusion of the extra unknowns, it follows that additional independent
equations will be needed to obtain a solution.

Chapter 11
So far we have assumed that a certain amount of
ground control is available to us for use in space
resection, etc. Let's take a look at the acquisition of
these ground control points…



Ground Control
for Aerial Photogrammetry
Ground control consists of any points
• whose positions are known in an object-space coordinate system and
• whose images can be positively identified in the photographs.

Classification of photogrammetric control:

1. Horizontal control: the position of the point in object space is known wrt a
horizontal datum
2. Vertical control: the elevation of the point is known wrt a vertical datum

Images of acceptable photo control points must satisfy two requirements:

1. They must be sharp, well defined and positively identified on all photos, and
2. They must lie in favorable locations in the photographs

Chapter 16
Photo Control Points
for Aerotriangulation
The number of ground-surveyed photo control points needed varies with
1. the size, shape and nature of the area,
2. the accuracy required, and
3. the procedures, instruments, and personnel to be used.

In general, the denser the ground control, the better the accuracy of the
supplemental control determined by aerotriangulation. – thesis of our targeting
project!!

There is an optimum number, which affords maximum economic benefit and maintains a
satisfactory standard of accuracy.

The methods used for establishing ground control are:

1. Traditional land surveying techniques
2. Using the Global Positioning System (GPS)

Chapter 16
Ground Control by GPS
While GPS is most often used to compute horizontal position, it is capable of
determining vertical position (elevation) to nearly the same level of accuracy.

Static GPS can be used to determine coordinates of unknown points with


errors at the centimeter level.

Note: The computed vertical position will be


related to the ellipsoid, not the geoid or mean
sea level. To relate the GPS-derived
elevation (ellipsoid height) to the more
conventional elevation (orthometric height), a
geoid model is necessary.

However, if the ultimate reference frame is


related to the ellipsoid, this should not pose a
problem.

Chapter 16
Having covered processing techniques for single
points, we examine the process at a higher level, for
all the photographs…



Aerotriangulation
• It is the process of determining the X, Y, and Z ground coordinates of
individual points based on photo coordinate measurements.
• consists of photo measurement followed by numerical interior,
relative, and absolute orientation from which ground coordinates
are computed.
• For large projects, the number of control points needed is extensive
• cost can be extremely high
• Much of this needed control can be established by aerotriangulation
from only a sparse network of field-surveyed ground control.
• Using GPS in the aircraft to provide coordinates of the camera
eliminates the need for ground control entirely
• in practice a small amount of ground control is still used to
strengthen the solution.

Chapter 17
Pass Points for Aerotriangulation
• selected as 9 points in a format of 3 rows × 3 columns,
equally spaced over the photo.
• The points may be images of natural, well-defined
objects that appear in the required photo areas
• if such points are not available, pass points may be
artificially marked.
• Digital image matching can be used to select points in
the overlap areas of digital images and automatically
match them between adjacent images.
• essential step of “automatic aerotriangulation”.

Chapter 17
Analytical Aerotriangulation
The most elementary approach consists of the following basic steps:

1. relative orientation of each stereomodel


2. connection of adjacent models to form continuous strips and/or
blocks, and
3. simultaneous adjustment of the photos from the strips and/or blocks
to field-surveyed ground control

X and Y coordinates of pass points can be located to an accuracy of


1/15,000 of the flying height, and Z coordinates can be located to an
accuracy of 1/10,000 of the flying height.

With specialized equipment and procedures, planimetric accuracy of


1/350,000 of the flying height and vertical accuracy of 1/180,000
have been achieved.

Chapter 17
Analytical Aerotriangulation Technique
• Several variations exist.

• Basically, all methods consist of writing equations that express the unknown
elements of exterior orientation of each photo in terms of camera constants,
measured photo coordinates, and ground coordinates.

• The equations are solved to determine the unknown orientation parameters


and either simultaneously or subsequently, coordinates of pass points are
calculated.

• By far the most common condition equations used are the collinearity
equations.

• Analytical procedures like Bundle Adjustment can simultaneously enforce
the collinearity condition on hundreds of photographs.

Chapter 17
Simultaneous Bundle Adjustment
Adjusting all photogrammetric measurements to ground control values
in a single solution is known as a bundle adjustment. The process is so
named because of the many light rays that pass through each lens
position constituting a bundle of rays.

The bundles from all photos are adjusted simultaneously so that


corresponding light rays intersect at positions of the pass points and
control points on the ground.

After the normal equations have been formed, they are solved for the
unknown corrections to the initial approximations for exterior orientation
parameters and object space coordinates.

The corrections are then added to the approximations, and the


procedure is repeated until the estimated standard deviation of unit
weight converges.
Chapter 17
Quantities in Bundle Adjustment
The unknown quantities to be obtained in a bundle adjustment consist of:
1. The X, Y and Z object space coordinates of all object points, and
2. The exterior orientation parameters of all photographs

The observed (measured) quantities associated with a bundle adjustment are:

1. x and y photo coordinates of images of object points,
2. X, Y and/or Z coordinates of ground control points,
3. direct observations of the exterior orientation parameters of the photographs.

The first group of observations, photo coordinates, constitutes the fundamental photogrammetric
measurements.

The next group of observations is coordinates of control points determined through field
survey.

The final set of observations can be estimated using airborne GPS control system as well
as inertial navigation systems (INSs) which have the capability of measuring the
angular attitude of a photograph.

Chapter 17
Bundle Adjustment on a Photo Block
Consider a small block consisting of 2 strips with 4 photos per strip, with 20 pass
points and 6 control points, totaling 26 object points; with 6 of those also serving
as tie points connecting the two adjacent strips.

Chapter 17
Bundle Adjustment on a Photo Block
To repeat, consider a small block consisting of 2 strips with 4 photos per strip, with 20 pass points and
6 control points, totaling 26 object points; with 6 of those also serving as tie points connecting the
two adjacent strips.

In this case,
The number of unknown object coordinates = no. of object points × no. of coordinates per object point = 26 × 3 = 78
The number of unknown exterior orientation parameters = no. of photos × no. of exterior orientation parameters per photo = 8 × 6 = 48
Total number of unknowns = 78 + 48 = 126

No. of imaged points = 4 × 8 (photos 1, 4, 5 & 8 have 8 imaged points each) + 4 × 11 (photos 2, 3, 6 & 7 have 11 imaged points each) = 76 point images in total

The number of photo coordinate observations = no. of imaged points × no. of photo coordinates per point = 76 × 2 = 152
The number of ground control observations = no. of 3D control points × no. of coordinates per point = 6 × 3 = 18
The number of exterior orientation parameter observations = no. of photos × no. of exterior orientation parameters per photo = 8 × 6 = 48

If all 3 types of observations are included, there will be a total of 152 + 18 + 48 = 218 observations; but if
only the first two types are included, there will be only 152 + 18 = 170 observations.
Thus, regardless of whether exterior orientation parameters were observed, a least squares solution is
possible since the number of observations in either case (218 and 170) is greater than the number
of unknowns (126 and 78, respectively).

Chapter 17
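The bookkeeping above is easy to verify with a few lines of Python (the counts are taken from the example block):

```python
photos, object_points, control_points = 8, 26, 6
imaged_points = 4 * 8 + 4 * 11                  # photos 1,4,5,8 see 8 points; photos 2,3,6,7 see 11

unknowns = object_points * 3 + photos * 6       # 78 + 48 = 126
photo_obs = imaged_points * 2                   # 76 * 2 = 152
control_obs = control_points * 3                # 18
eo_obs = photos * 6                             # 48

print(unknowns, photo_obs + control_obs + eo_obs, photo_obs + control_obs)  # 126 218 170
```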
The next question is: how are these equations solved?

Well, we start with observation equations, which


would be the collinearity condition equations that we
have already seen, we linearize them, and then use
least squares procedure to find the unknowns.

We will start by refreshing our memories on least


squares solution of over-determined equation set.



Relevant Definitions
Observations are the directly observed (or measured) quantities which
contain random errors.
True Value is the theoretically correct or exact value of a quantity. It can never
be determined, because no matter how accurate, the observation will always
contain small random errors.
Accuracy is the degree of conformity to the true value.
Since true value of a continuous physical quantity can never be known,
accuracy is likewise never known. Therefore, it can only be estimated.
Sometimes, accuracy can be assessed by checking against an independent,
higher accuracy standard.
Precision is the degree of refinement of a quantity.
The level of precision can be assessed by making repeated measurements
and checking the consistency of the values.
If the values are very close to each other, the measurements have high
precision and vice versa.

Appendix A & B
Relevant Definitions
Error is the difference between any measured quantity and the true value for that
quantity.

Types of errors:
• Random errors (accidental and compensating)
• Systematic errors (cumulative; measured and modeled to compensate)
• Mistakes or blunders (avoided as far as possible; detected and eliminated)

Most probable value is that value for a measured or indirectly determined


quantity which, based upon the observations, has the highest probability.

The MPV of a quantity directly and independently measured, with observations of equal
weight, is simply the mean:

MPV = Σx / m

where Σx is the sum of the individual measurements, and m is the number of observations.
Appendix A & B
Relevant Definitions
Residual is the difference between any measured quantity and the most probable
value for that quantity.
It is the value which is dealt with in adjustment computations, since errors are
indeterminate. The term error is frequently used when residual is in fact meant.
Degrees of freedom is the number of redundant observations (those in excess of
the number actually needed to calculate the unknowns).
Weight is the relative worth of an observation compared to any other observation.
Measurements are weighted in adjustment computations according to their
precisions.
Logically, a precisely measured value should be weighted more in an adjustment
so that the correction it receives is smaller than that received by less precise
measurements.
If same equipment and procedures are used on a group of measurements, each
observation is given an equal weight.

Appendix B
Relevant Definitions
Standard deviation (also called “root mean square error” or “68 percent error”) is a
quantity used to express the precision of a group of measurements.
For ‘m’ number of direct, equally weighted observations of a quantity, its standard
deviation is:

S = sqrt( Σv² / r )

where Σv² is the sum of the squares of the residuals and r is the number of degrees of freedom (r = m − 1).
According to the theory of probability, 68% of
the observations in a group should have
residuals smaller than the standard deviation.

The area between –S and +S in a Gaussian


distribution curve (also called Normal
distribution curve) of the residual, which is
same as the area between average-S and
average+S on the curve of measurements, is
68%.

Appendix B
Fundamental Condition of Least Squares
For a group of equally weighted observations, the fundamental condition which
is enforced in least square adjustment is that the sum of the squares of the
residuals is minimized.
Suppose a group of ‘m’ equally weighted measurements were taken with
residuals v1, v2, v3,…, vm then:

Σ(i=1..m) vi² = v1² + v2² + v3² + ... + vm² = minimum

Basic assumptions underlying least squares theory:


1. Number of observations is large
2. Frequency distribution of the errors is normal (gaussian)

Appendix B
Applying Least Squares
Steps:
1) Write observation equations (one for each measurement) relating
measured values to their residual errors and the unknown
parameters.
2) Obtain equation for each residual error from corresponding
observation.
3) Square and add residuals
4) To minimize Σv2 take partial derivatives wrt each unknown variable
and set them equal to zero
5) This gives a set of equations called normal equations which are
equal in number to the number of unknowns.
6) Solve normal equations to obtain the most probable values for the
unknowns.

Appendix B
Least Squares Example Problem
Let:
AB be a line segment,
C divide AB into 2 parts of lengths X and Y,
D be the midpoint of AC, i.e. AD = DC = x, and
E and F trisect CB, i.e. CE = EF = FB = y.

(Figure: the line A–D–C–E–F–B, with the two x segments spanning AC and the three y segments spanning CB.)

In this least squares problem, the coefficients of the unknowns in the observation equations are other than zero and unity.
There are 4 observation equations (m = 4) in 2 variables/unknowns (n = 2).
Take Σv² and differentiate partially w.r.t. the unknowns to get 2 equations in 2 unknowns.
The solution gives the most probable values of x and y.

Corresponding observation equations:
x + 3y = 10.1 + v1
x + 2y = 6.9 + v2
2y = 6.2 + v3
2x + y = 4.8 + v4

Note:
If D is not the exact midpoint and E & F do not trisect CB into exactly equal parts, the actual x and y values may differ from segment to segment. We only get the 'most probable' values for x and y!



Formulating Equations
Step 1) Observation equations (one for each measurement, each including a residual):
x + 3y = 10.1 + v1
x + 2y = 6.9 + v2
2y = 6.2 + v3
2x + y = 4.8 + v4
Step 2) Equation for each residual error from the corresponding observation:
v1 = x + 3y − 10.1
v2 = x + 2y − 6.9
v3 = 2y − 6.2
v4 = 2x + y − 4.8
Step 3) Square and add the residuals:
Σv² = v1² + v2² + v3² + v4²
    = (x + 3y − 10.1)² + (x + 2y − 6.9)² + (2y − 6.2)² + (2x + y − 4.8)²



Normal Equations and Solution
Step 4) Taking partial derivatives of Σv²:
∂Σv²/∂x = 2(x + 3y − 10.1) + 2(x + 2y − 6.9) + 0 + 2(2x + y − 4.8)·2
∂Σv²/∂y = 2(x + 3y − 10.1)·3 + 2(x + 2y − 6.9)·2 + 2(2y − 6.2)·2 + 2(2x + y − 4.8)
Normal equations (setting the partial derivatives to zero):
2(x + 3y − 10.1) + 2(x + 2y − 6.9) + 0 + 2(2x + y − 4.8)·2 = 0
2(x + 3y − 10.1)·3 + 2(x + 2y − 6.9)·2 + 2(2y − 6.2)·2 + 2(2x + y − 4.8) = 0
Simplified normal equations:
12x + 14y − 53.2 = 0
14x + 36y − 122.6 = 0
Step 5) Solving:
[ 12 14; 14 36 ] [ x; y ] = [ 53.2; 122.6 ]   ⇔   [ 6 7; 7 18 ] [ x; y ] = [ 26.6; 61.3 ]

[ x; y ] = [ 6 7; 7 18 ]^-1 [ 26.6; 61.3 ] = [ 0.8424; 3.0780 ]

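The same example can be checked numerically with NumPy: form A and L from the four observation equations, solve the normal equations (A^T A) X = A^T L, and compute the residuals and the standard deviation of unit weight (used on the following slides):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [1.0, 2.0],
              [0.0, 2.0],
              [2.0, 1.0]])
L = np.array([10.1, 6.9, 6.2, 4.8])

N = A.T @ A                      # normal equation matrix [[6, 7], [7, 18]]
X = np.linalg.solve(N, A.T @ L)  # most probable values: x ~ 0.8424, y ~ 3.0780
V = A @ X - L                    # residuals
S0 = np.sqrt(V @ V / (A.shape[0] - A.shape[1]))  # standard deviation of unit weight, r = m - n
print(X, V, S0)                  # S0 ~ 0.0823
```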


General Form of Observation Equations
Step 1:
‘m’ linear observation equations of equal weight containing ‘n’ unknowns:
For m<n: underdetermined set of equations.
For m=n: solution is unique
For m>n: m-n observations are redundant, least squares can be applied to find MPVs

a11 X1 + a12 X2 + a13 X3 + ... + a1n Xn = L1 + v1
a21 X1 + a22 X2 + a23 X3 + ... + a2n Xn = L2 + v2
a31 X1 + a32 X2 + a33 X3 + ... + a3n Xn = L3 + v3        (equations I)
……………………………………………………
am1 X1 + am2 X2 + am3 X3 + ... + amn Xn = Lm + vm
Where:
Xj: unknown
aij: coefficients of the unknown Xj’s
Li: observations
vi: residuals

Appendix B
General Form of Normal Equations
Equations obtained at the end of Step 4:

Σ(ai1·ai1) X1 + Σ(ai1·ai2) X2 + Σ(ai1·ai3) X3 + ... + Σ(ai1·ain) Xn = Σ(ai1·Li)
Σ(ai2·ai1) X1 + Σ(ai2·ai2) X2 + Σ(ai2·ai3) X3 + ... + Σ(ai2·ain) Xn = Σ(ai2·Li)
Σ(ai3·ai1) X1 + Σ(ai3·ai2) X2 + Σ(ai3·ai3) X3 + ... + Σ(ai3·ain) Xn = Σ(ai3·Li)        (equations II)
…………………………………………………………
Σ(ain·ai1) X1 + Σ(ain·ai2) X2 + Σ(ain·ai3) X3 + ... + Σ(ain·ain) Xn = Σ(ain·Li)

(all sums taken over i = 1 to m)

At Step 1 we have m equations in n variables.
At the end of Step 4 we have n equations in n variables.
Appendix B
Matrix Forms of Equations
Equations I (observation equations) in matrix form:   A X = L + V
where A is m×n, and X, L, V are n×1, m×1 and m×1, respectively.

Equations II (normal equations) in matrix form:   (A^T A) X = A^T L
⇒ X = (A^T A)^-1 (A^T L)

where:
A = [ a11 a12 a13 ... a1n ;  a21 a22 a23 ... a2n ;  a31 a32 a33 ... a3n ;  ... ;  am1 am2 am3 ... amn ]
X = [ X1; X2; X3; …; Xn ]    L = [ L1; L2; L3; …; Lm ]    V = [ v1; v2; v3; …; vm ]

Appendix B
Standard Deviation of Residuals
The observation equations in matrix form: V = A X − L

The standard deviation of unit weight for an unweighted adjustment is:

S0 = sqrt( V^T V / r )

The standard deviations of the adjusted quantities are:

S_Xi = S0 · sqrt( Q_XiXi )

where
r is the number of degrees of freedom and equals the number of observations minus the number of unknowns, i.e. r = m − n
S_Xi is the standard deviation of the ith adjusted quantity, i.e. the quantity in the ith row of the X matrix
S0 is the standard deviation of unit weight
Q_XiXi is the element in the ith row and the ith column of the matrix (A^T A)^-1 in the unweighted case, or of the matrix (A^T W A)^-1 in the weighted case
Appendix B
Standard Deviations in Example

V = A X − L, with A = [ 1 3; 1 2; 0 2; 2 1 ],  X = [ x; y ],  L = [ 10.1; 6.9; 6.2; 4.8 ]

For our example problem:

V = A [ 0.8424; 3.0780 ] − L = [ −0.0236; 0.0984; −0.0440; −0.0372 ]

S0 = sqrt( V^T V / r ) = 0.0823

A^T A = [ 6 7; 7 18 ]

S_Xi = S0 · sqrt( Q_XiXi )
⇒ Sx = S0 · √6 and Sy = S0 · √18
⇒ Sx = 0.2016 and Sy = 0.3492

So we find the standard deviations of x and y to be Sx = 0.2016 and Sy = 0.3492.



Linearization of our non-linear equation
set
 Our Least Squares Solution was for a linear
set of equations
 Remember in all our photogrammetric
equations we have sines, cosines etc.
 Need to linearize
 Use Taylor Series Expansion



Review of Collinearity Equations
The collinearity equations:

xa = xo − f [ m11(XA − XL) + m12(YA − YL) + m13(ZA − ZL) ] / [ m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL) ]

ya = yo − f [ m21(XA − XL) + m22(YA − YL) + m23(ZA − ZL) ] / [ m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL) ]

The collinearity equations:
• are nonlinear and
• involve 9 unknowns:
  1. omega, phi, kappa inherent in the m's
  2. object point coordinates (XA, YA, ZA)
  3. exposure station coordinates (XL, YL, ZL)

Where,
xa, ya are the photo coordinates of image point a
XA, YA, ZA are the object space coordinates of object/ground point A
XL, YL, ZL are the object space coordinates of the exposure station location
f is the camera focal length
xo, yo are the coordinates of the principal point
the m's are functions of the rotation angles omega, phi, kappa (as derived earlier)

Ch. 11 & App D


Linearization of Collinearity Equations
Rewriting the collinearity equations:

F = xo − f (r/q) = xa
G = yo − f (s/q) = ya

where
q = m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL)
r = m11(XA − XL) + m12(YA − YL) + m13(ZA − ZL)
s = m21(XA − XL) + m22(YA − YL) + m23(ZA − ZL)

Applying Taylor's theorem to these equations (using only up to first-order partial derivatives), we get…

Appendix D
Linearized Collinearity Equations Terms

F0 + (∂F/∂ω)0 dω + (∂F/∂Ф)0 dФ + (∂F/∂κ)0 dκ + (∂F/∂XL)0 dXL + (∂F/∂YL)0 dYL + (∂F/∂ZL)0 dZL + (∂F/∂XA)0 dXA + (∂F/∂YA)0 dYA + (∂F/∂ZA)0 dZA = xa

G0 + (∂G/∂ω)0 dω + (∂G/∂Ф)0 dФ + (∂G/∂κ)0 dκ + (∂G/∂XL)0 dXL + (∂G/∂YL)0 dYL + (∂G/∂ZL)0 dZL + (∂G/∂XA)0 dXA + (∂G/∂YA)0 dYA + (∂G/∂ZA)0 dZA = ya

where
F0, G0 are the functions F and G evaluated at the initial approximations for the 9 unknowns;
(∂F/∂ω)0, (∂F/∂Ф)0, (∂G/∂ω)0, (∂G/∂Ф)0, etc., are the partial derivatives of F and G with respect to the indicated unknowns, evaluated at the initial approximations;
dω, dФ, dκ, etc., are unknown corrections to be applied to the initial approximations
(angles are in radians).
Appendix D
Simplified Linearized Collinearity
Equations
Since photo coordinates xa and ya are measured values, if the equations are to be used
in a least squares solution, residual terms must be included to make the equations
consistent.
The following simplified forms of the linearized collinearity equations include these
residuals:
b11 dω + b12 dФ + b13 dκ − b14 dXL − b15 dYL − b16 dZL + b14 dXA + b15 dYA + b16 dZA = J + v_xa

b21 dω + b22 dФ + b23 dκ − b24 dXL − b25 dYL − b26 dZL + b24 dXA + b25 dYA + b26 dZA = K + v_ya
where J = xa – F0, K = ya - G0 and the b’s are coefficients equal to the partial derivatives

In linearization using Taylor’s series, higher order terms are ignored, hence
these equations are approximations.
They are solved iteratively, until the magnitudes of corrections to initial
approximations become negligible.
Chapter 11
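To make the linearize-and-iterate procedure concrete, here is a hedged Python/NumPy sketch of space resection by collinearity. It is not the analytic formulation above: the b coefficients are obtained by numerical (finite-difference) partial derivatives instead of the analytic expressions, the control coordinates are held fixed, and the corrections are applied iteratively until they become negligible. The control data and initial values are assumptions for illustration only.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo

def project(params, obj_pt, f):
    """F and G of the collinearity equations for one control point (principal point at 0, 0)."""
    omega, phi, kappa, XL, YL, ZL = params
    M = rotation_matrix(omega, phi, kappa)
    num = M @ (np.asarray(obj_pt) - np.array([XL, YL, ZL]))
    return np.array([-f * num[0] / num[2], -f * num[1] / num[2]])

def space_resection(photo_xy, obj_pts, f, params0, eps=1e-8, max_iter=10):
    params = np.asarray(params0, dtype=float)
    for _ in range(max_iter):
        A, misclosure = [], []
        for xy, pt in zip(photo_xy, obj_pts):
            F0 = project(params, pt, f)
            # numerical partial derivatives (the b coefficients), one column per parameter
            J = np.empty((2, 6))
            for k in range(6):
                step = np.zeros(6); step[k] = 1e-6
                J[:, k] = (project(params + step, pt, f) - F0) / 1e-6
            A.append(J); misclosure.append(np.asarray(xy) - F0)   # measured minus computed (J, K)
        A = np.vstack(A); misclosure = np.concatenate(misclosure)
        corrections = np.linalg.solve(A.T @ A, A.T @ misclosure)  # normal equations
        params += corrections
        if np.max(np.abs(corrections)) < eps:
            break
    return params

# Hypothetical data: f in mm, control coordinates in m, photo coordinates in mm
obj_pts  = [(1000.0, 1000.0, 100.0), (1400.0, 1050.0, 120.0),
            (1100.0, 1500.0, 90.0),  (1450.0, 1480.0, 150.0)]
truth    = np.array([np.radians(1.0), np.radians(-2.0), np.radians(25.0), 1200.0, 1250.0, 1800.0])
photo_xy = [project(truth, p, f=152.0) for p in obj_pts]          # simulated "measurements"
approx0  = [0.0, 0.0, np.radians(20.0), 1150.0, 1200.0, 1750.0]   # initial approximations
print(space_resection(photo_xy, obj_pts, f=152.0, params0=approx0))
```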
We need to generalize and rewrite the linearized
collinearity conditions in matrix form.
While looking at the collinearity condition, we were only
concerned with one object space point (point A).
Let's first generalize and then express the equations in
matrix form…



Generalizing Collinearity Equations
The observation equations which are the foundation of a bundle adjustment are the collinearity equations:

xij = xo − f [ m11i(Xj − XLi) + m12i(Yj − YLi) + m13i(Zj − ZLi) ] / [ m31i(Xj − XLi) + m32i(Yj − YLi) + m33i(Zj − ZLi) ]

yij = yo − f [ m21i(Xj − XLi) + m22i(Yj − YLi) + m23i(Zj − ZLi) ] / [ m31i(Xj − XLi) + m32i(Yj − YLi) + m33i(Zj − ZLi) ]

These non-linear equations involve 9 unknowns: omega, phi, kappa inherent in the m's, the object point coordinates (Xj, Yj, Zj) and the exposure station coordinates (XLi, YLi, ZLi).

Where,
xij, yij are the measured photo coordinates of the image of point j on photo i, related to the fiducial axis system
Xj, Yj, Zj are the coordinates of point j in object space
XLi, YLi, ZLi are the coordinates of the eyepoint of the camera for photo i
f is the camera focal length
xo, yo are the coordinates of the principal point
m11i, m12i, ..., m33i are the rotation matrix terms for photo i
Ch. 11 & App D
Linearized Equations in Matrix Form

B'ij Δ'i + B''ij Δ''j = εij + Vij

where

B'ij = [ b11ij  b12ij  b13ij  −b14ij  −b15ij  −b16ij ;  b21ij  b22ij  b23ij  −b24ij  −b25ij  −b26ij ]   (2×6)
B''ij = [ b14ij  b15ij  b16ij ;  b24ij  b25ij  b26ij ]   (2×3)
Δ'i = [ dωi; dФi; dκi; dXLi; dYLi; dZLi ]   (6×1)
Δ''j = [ dXj; dYj; dZj ]   (3×1)
εij = [ Jij; Kij ]   (2×1)
Vij = [ vxij; vyij ]   (2×1)

Matrix B'ij contains the partial derivatives of the collinearity equations with respect to the exterior orientation parameters of photo i, evaluated at the initial approximations.
Matrix B''ij contains the partial derivatives of the collinearity equations with respect to the object space coordinates of point j, evaluated at the initial approximations.
Matrix Δ'i contains corrections for the initial approximations of the exterior orientation parameters for photo i.
Matrix Δ''j contains corrections for the initial approximations of the object space coordinates of point j.
Matrix εij contains measured minus computed x and y photo coordinates for point j on photo i.
Matrix Vij contains residuals for the x and y photo coordinates.

Ch. 17
Coming to the actual observations in the observation
equations (collinearity conditions), first we consider the
photo coordinate observations, then ground control and
finally exterior orientation parameters…



Weights of Photo Coordinate Observations
Proper weights must be assigned to photo coordinate observations in order to be included in the bundle adjustment.
Expressed in matrix form, the weights for the x and y photo coordinate observations of point j on photo i are:

Wij = σo² [ σ²xij  σxij·yij ;  σyij·xij  σ²yij ]^-1

where σo² is the reference variance; σ²xij and σ²yij are the variances in xij and yij, respectively; and σxij·yij = σyij·xij is the covariance of xij with yij.
The reference variance is an arbitrary parameter which can be set equal to 1, and in many cases the covariance in photo coordinates is equal to zero.
In this case, the weight matrix for photo coordinates simplifies to

Wij = [ 1/σ²xij  0 ;  0  1/σ²yij ]
Ch. 17
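A small NumPy sketch of this weight matrix, with assumed standard deviations and the reference variance set to 1:

```python
import numpy as np

def photo_weight(sx, sy, sxy=0.0, ref_var=1.0):
    """Weight matrix for one point's (x, y) photo coordinates: W = ref_var * inverse covariance."""
    cov = np.array([[sx**2, sxy],
                    [sxy,   sy**2]])
    return ref_var * np.linalg.inv(cov)

print(photo_weight(sx=0.005, sy=0.005))   # e.g. 5 µm (0.005 mm) precision, zero covariance
```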
Ground Control
Observation equations for ground control coordinates are:

Xj = Xj^00 + vXj
Yj = Yj^00 + vYj
Zj = Zj^00 + vZj

where
Xj, Yj and Zj are the unknown coordinates of point j
Xj^00, Yj^00 and Zj^00 are the measured coordinate values for point j
vXj, vYj and vZj are the coordinate residuals for point j

Even though the ground control observation equations are linear, in order to be consistent with the collinearity equations they are also approximated by the first-order terms of Taylor's series:

Xj^0 + dXj = Xj^00 + vXj
Yj^0 + dYj = Yj^00 + vYj
Zj^0 + dZj = Zj^00 + vZj

where
Xj^0, Yj^0 and Zj^0 are the initial approximations for the coordinates of point j
dXj, dYj and dZj are corrections to the approximations for the coordinates of point j

Rearranging the terms and expressing in matrix form:

Δ''j = C''j + V''j

where
Δ''j = [ dXj; dYj; dZj ],    C''j = [ Xj^00 − Xj^0; Yj^00 − Yj^0; Zj^00 − Zj^0 ],    V''j = [ vXj; vYj; vZj ]
Ch. 17
Weights of Ground Control Observations
As with photo coordinate measurements, proper weights must be assigned to ground control coordinate observations in order to be included in the bundle adjustment. Expressed in matrix form, the weights for the X, Y and Z ground control coordinate observations of point j are:

W''j = σo² [ σ²Xj  σXjYj  σXjZj ;  σYjXj  σ²Yj  σYjZj ;  σZjXj  σZjYj  σ²Zj ]^-1

where
σo² is the reference variance
σ²Xj, σ²Yj and σ²Zj are the variances in Xj^00, Yj^00 and Zj^00, respectively
σXjYj = σYjXj is the covariance of Xj^00 with Yj^00
σYjZj = σZjYj is the covariance of Yj^00 with Zj^00
σXjZj = σZjXj is the covariance of Xj^00 with Zj^00
(Xj^00, Yj^00 and Zj^00 are the measured coordinate values for point j)
Ch. 17
Exterior Orientation Parameters
The final type of observation consists of measurements of the exterior orientation parameters. The form of their observation equations is similar to that of ground control:

ωi = ωi^00 + vωi        Фi = Фi^00 + vФi        κi = κi^00 + vκi
XLi = XLi^00 + vXLi        YLi = YLi^00 + vYLi        ZLi = ZLi^00 + vZLi

The weight matrix W'i for the exterior orientation parameters of photo i is σo² times the inverse of the 6×6 covariance matrix of the observed parameters (ωi, Фi, κi, XLi, YLi, ZLi); the diagonal of that covariance matrix contains the variances σ²ωi, σ²Фi, σ²κi, σ²XLi, σ²YLi, σ²ZLi, and its off-diagonal elements contain the corresponding covariances.
Ch. 17
Now that we have all our observation equations and the
observations, the next step in applying least squares, is
to form the normal equations…



Normal Equations
With the observation equations and weights defined as previously, the full set of normal equations
may be formed directly.
In matrix form, the full normal equations are

N Δ = K

where N is a symmetric matrix partitioned into
• 6×6 diagonal blocks (N'i + W'i), one for each photo i,
• 3×3 diagonal blocks (N''j + W''j), one for each object point j, and
• off-diagonal coupling blocks N̄ij (and their transposes) linking photo i with point j;

Δ is the vector of corrections [ Δ'1; …; Δ'm; Δ''1; …; Δ''n ]; and

K = [ K'1 + W'1 C'1; …; K'm + W'm C'm; K''1 + W''1 C''1; …; K''n + W''n C''n ]

with the submatrices given by

N'i = Σ(j=1..n) B'ij^T Wij B'ij
N̄ij = B'ij^T Wij B''ij
N''j = Σ(i=1..m) B''ij^T Wij B''ij
K'i = Σ(j=1..n) B'ij^T Wij εij
K''j = Σ(i=1..m) B''ij^T Wij εij

m is the number of photos, n is the number of points, i is the photo subscript, and j is the point subscript.
If point j does not appear on photo i, the corresponding submatrix is a zero matrix.
W'i contributions to the N matrix and W'i C'i contributions to the K matrix are made only when observations for the exterior orientation parameters exist.
W''j contributions to the N matrix and W''j C''j contributions to the K matrix are made only for ground control point observations.
Ch. 17
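As a structural illustration only (not a full adjustment), the following hedged NumPy sketch assembles the partitioned N matrix and K vector from per-observation B'ij, B''ij, Wij and εij blocks supplied in a dictionary keyed by (photo i, point j); the optional W'i/C'i and W''j/C''j contributions follow the two notes above. All inputs in the usage example are placeholders.

```python
import numpy as np

def assemble_normals(m, n, obs, W_eo=None, C_eo=None, W_gc=None, C_gc=None):
    """Assemble N (size 6m+3n) and K from observations obs[(i, j)] = (B1, B2, W, eps).
    B1: 2x6 partials wrt photo i's EO parameters; B2: 2x3 partials wrt point j's coordinates."""
    size = 6 * m + 3 * n
    N = np.zeros((size, size)); K = np.zeros(size)
    for (i, j), (B1, B2, W, eps) in obs.items():
        pi, pj = slice(6 * i, 6 * i + 6), slice(6 * m + 3 * j, 6 * m + 3 * j + 3)
        N[pi, pi] += B1.T @ W @ B1          # contribution to N'_i
        N[pj, pj] += B2.T @ W @ B2          # contribution to N''_j
        N[pi, pj] += B1.T @ W @ B2          # coupling block N-bar_ij
        N[pj, pi] += (B1.T @ W @ B2).T
        K[pi] += B1.T @ W @ eps             # K'_i
        K[pj] += B2.T @ W @ eps             # K''_j
    for i in (W_eo or {}):                  # only when EO parameters were observed
        pi = slice(6 * i, 6 * i + 6)
        N[pi, pi] += W_eo[i]; K[pi] += W_eo[i] @ C_eo[i]
    for j in (W_gc or {}):                  # only for ground control points
        pj = slice(6 * m + 3 * j, 6 * m + 3 * j + 3)
        N[pj, pj] += W_gc[j]; K[pj] += W_gc[j] @ C_gc[j]
    return N, K   # corrections follow from solving N @ delta = K

# Placeholder example: 2 photos, 3 points, every point imaged on every photo
rng = np.random.default_rng(0)
obs = {(i, j): (rng.normal(size=(2, 6)), rng.normal(size=(2, 3)), np.eye(2), rng.normal(size=2))
       for i in range(2) for j in range(3)}
N, K = assemble_normals(2, 3, obs)
print(N.shape, K.shape)   # (21, 21) (21,)
```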
Now that we have the equations ready to solve, we can
solve them with the initial approximations and iterate till
the iterated solutions do not change in value.



In aerial photography, if GPS is used to determine
the coordinates for exposure stations, we can
include those in the bundle adjustment and reduce
the amount of ground control that is required…



Bundle Adjustment with GPS control
Using GPS in aircraft to estimate coordinates of the exposure stations in the
adjustment can greatly reduce the number of ground control points
required.
Considerations while using GPS control:
1. Object space coordinates obtained by GPS pertain to the phase center of
the antenna but the exposure station is defined as the incident nodal point
of the camera lens.
2. The GPS recorder records data at uniform time intervals called epochs
(which may be on the order of 1s each), but the camera shutter operates
asynchronously wrt the GPS fixes.
3. If a GPS receiver operating in the kinematic mode loses lock on too many
satellites, the integer ambiguities must be redetermined.

Chapter 17
Additional Precautions
regarding Airborne GPS
First, it is recommended that a bundle adjustment with analytical self-calibration
be employed when airborne GPS control is used.
Often, due to inadequate modeling of atmospheric refraction distortion, strict
enforcement of the calibrated principal distance (focal length) of the camera will
cause distortions and excessive residuals in photo coordinates. Use of analytical
self-calibration will essentially eliminate that effect.

Second, it is essential that appropriate object space coordinate systems be


employed in data reduction.
GPS coordinates in a geocentric coordinate system should be converted to local
vertical coordinates for the adjustment. After aerotriangulation is completed, the
local vertical coordinates can be converted to whatever system is desired.

Chapter 17
Though all our discussion so far has been for aerial
photography, satellite images can also be used for
mapping…

In fact, since the launch of IKONOS, QuickBird, and


OrbView-3 satellites, rigorous photogrammetric processing
methods similar to those of aerial imagery, such as block
adjustment used to solve aerial blocks totaling hundreds or
even thousands of images, are routinely being applied to
high-resolution satellite image blocks.



Aerotriangulation with Satellite Images
• Satellite images are acquired by linear sensor arrays that scan an image strip while the satellite
orbits.

• Each scan line of the scene has its own set of exterior orientation
parameters, with its principal point in the center of the line.

• The start position is the projection of the center of row 0 (of an
image with m columns and n rows) on the ground.

• Since the satellite is highly stable during acquisition of the image,
the exterior orientation parameters can be assumed to vary in a
systematic fashion.

• Satellite image data providers supply Rational Polynomial Camera
(RPC) coefficients. Thus it is possible to block adjust imagery
described by an RPC model.
Chapter 17 &
Gene, Grodecki (2002)
Aerotriangulation with Satellite Images
The exterior orientation parameters vary systematically as functions of the x coordinate:
ωx = ω0 + a1·x;    Фx = Ф0 + a2·x;    κx = κ0 + a3·x;
XLx = XL0 + a4·x;    YLx = YL0 + a5·x;    ZLx = ZL0 + a6·x + a7·x²
Here,
x is the row no. of some image position,
ωx, Фx, қx, XLx, YLx, ZLx, are the exterior orientation
parameters of the sensor when row x was
acquired,
ω0, Ф0, қ0, XL0, YL0, ZL0, are the exterior orientation
parameters of the sensor at the start position, and
a1 through a7 are coefficients which describe the
systematic variations of the exterior orientation
parameters as the image is acquired.

Chapter 17
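A minimal Python sketch of this systematic variation, with assumed coefficient values, returning the exterior orientation of the sensor for a given image row x:

```python
def satellite_eo(x, eo0, a):
    """EO parameters for scan row x: each parameter varies linearly with x,
    with an extra quadratic term a7*x**2 on ZL (coefficients a1..a7)."""
    omega0, phi0, kappa0, XL0, YL0, ZL0 = eo0
    a1, a2, a3, a4, a5, a6, a7 = a
    return (omega0 + a1 * x, phi0 + a2 * x, kappa0 + a3 * x,
            XL0 + a4 * x, YL0 + a5 * x, ZL0 + a6 * x + a7 * x**2)

# Assumed start-position parameters and small drift coefficients
eo_start = (0.001, -0.002, 0.0005, 500000.0, 4200000.0, 680000.0)
coeffs   = (1e-8, 2e-8, -1e-8, 0.05, 7.0, -0.001, 1e-7)
print(satellite_eo(4096, eo_start, coeffs))
```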
This procedure of aerotriangulation, however, can only be performed
at the ground station by the image providers who have access to the
physical camera model.

For users wishing to block adjust imagery with their own proprietary
ground control, or other reasons, the image providers supply the
images with RPCs…



Introduction to RPCs
• RPC camera model is the ratio of two cubic functions
of latitude, longitude, and height.
• RPC models transform 3D object-space coordinates
into 2D image-space coordinates.
• RPC models have traditionally been used for
rectification and feature extraction and have recently
been extended to block adjustment.

10/23/2022 Virtual Environment Lab, UTA 89


Let's look at the formal RPC mathematical model.
We start with defining the domain of the functional model and its
normalization, and then go on to define the actual functions…

10/23/2022 Virtual Environment Lab, UTA 90


RPC Mathematical Model
Separate rational functions are used to express the object-space-to-line and the object-space-to-sample coordinate relationships.
Assume that (φ, λ, h) are the geodetic latitude, longitude and height above the WGS84 ellipsoid (in degrees, degrees and meters, respectively) of a ground point, and
(Line, Sample) are the denormalized image-space coordinates of the corresponding image point.

To improve numerical precision, image-space and object-space coordinates are normalized to the range [−1, +1].

Given the object-space coordinates (φ,λ,h) and the latitude, longitude and height offsets
and scale factors, we can normalize latitude, longitude and height:

P = (φ – LAT_OFF) / LAT_SCALE
L = (λ – LONG_OFF) / LONG_SCALE
H = (h – HEIGHT_OFF) / HEIGHT_SCALE

The normalized line and sample image-space coordinates (Y and X, respectively) are
then calculated from their respective rational polynomial functions f(.) and g(.)
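A minimal sketch of this normalization step, assuming the offsets and scale factors are available in an RPC metadata dictionary whose key names are chosen here only for illustration:

```python
def normalize(phi, lam, h, rpc):
    """Map geodetic (phi, lam, h) to the normalized (P, L, H), roughly in [-1, +1].
    rpc is assumed to hold the offset and scale values delivered with the image."""
    P = (phi - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    L = (lam - rpc["LONG_OFF"]) / rpc["LONG_SCALE"]
    H = (h - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
    return P, L, H
```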

10/23/2022 Virtual Environment Lab, UTA 91


Definition of RPC Coefficients
Y = f(φ,λ,h) = NumL(P,L,H) / DenL(P,L,H) = cᵀu / dᵀu
X = g(φ,λ,h) = NumS(P,L,H) / DenS(P,L,H) = eᵀu / fᵀu

where

NumL(P,L,H) = c1 + c2·L + c3·P + c4·H + c5·L·P + c6·L·H + c7·P·H + c8·L² + c9·P² + c10·H² + c11·P·L·H + c12·L³ + c13·L·P² + c14·L·H² + c15·L²·P + c16·P³ + c17·P·H² + c18·L²·H + c19·P²·H + c20·H³

DenL(P,L,H) = 1 + d2·L + d3·P + d4·H + d5·L·P + d6·L·H + d7·P·H + d8·L² + d9·P² + d10·H² + d11·P·L·H + d12·L³ + d13·L·P² + d14·L·H² + d15·L²·P + d16·P³ + d17·P·H² + d18·L²·H + d19·P²·H + d20·H³

NumS(P,L,H) = e1 + e2·L + e3·P + e4·H + e5·L·P + e6·L·H + e7·P·H + e8·L² + e9·P² + e10·H² + e11·P·L·H + e12·L³ + e13·L·P² + e14·L·H² + e15·L²·P + e16·P³ + e17·P·H² + e18·L²·H + e19·P²·H + e20·H³

DenS(P,L,H) = 1 + f2·L + f3·P + f4·H + f5·L·P + f6·L·H + f7·P·H + f8·L² + f9·P² + f10·H² + f11·P·L·H + f12·L³ + f13·L·P² + f14·L·H² + f15·L²·P + f16·P³ + f17·P·H² + f18·L²·H + f19·P²·H + f20·H³

There are 78 rational polynomial coefficients in total.

u = [1  L  P  H  L·P  L·H  P·H  L²  P²  H²  P·L·H  L³  L·P²  L·H²  L²·P  P³  P·H²  L²·H  P²·H  H³]ᵀ
c = [c1 c2 … c20]ᵀ;  d = [1 d2 … d20]ᵀ;  e = [e1 e2 … e20]ᵀ;  f = [1 f2 … f20]ᵀ

The denormalized RPC models for image j are given by:

Line = p(φ,λ,h) = f(φ,λ,h) · LINE_SCALE + LINE_OFF
Sample = r(φ,λ,h) = g(φ,λ,h) · SAMPLE_SCALE + SAMPLE_OFF
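Putting the normalization, the rational polynomials and the denormalization together, a forward RPC projection (ground to image) could be sketched as below. The coefficient ordering follows the u vector above; the dictionary layout for the coefficients and offsets/scales is an assumption made for illustration, not a fixed delivery format.

```python
import numpy as np

def rpc_terms(P, L, H):
    """The 20-term monomial vector u, in the ordering given above."""
    return np.array([
        1, L, P, H, L*P, L*H, P*H, L**2, P**2, H**2,
        P*L*H, L**3, L*P**2, L*H**2, L**2*P, P**3, P*H**2, L**2*H, P**2*H, H**3,
    ])

def rpc_project(phi, lam, h, rpc):
    """Forward RPC projection: geodetic (phi, lam, h) -> (Line, Sample) in pixels.
    rpc holds offsets/scales plus coefficient arrays c, d, e, f of length 20
    (d[0] and f[0] are the fixed 1 terms of the denominators)."""
    # Normalize object-space coordinates.
    P = (phi - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    L = (lam - rpc["LONG_OFF"]) / rpc["LONG_SCALE"]
    H = (h - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
    u = rpc_terms(P, L, H)
    # Normalized image coordinates as ratios of cubic polynomials.
    Y = np.dot(rpc["c"], u) / np.dot(rpc["d"], u)   # normalized line
    X = np.dot(rpc["e"], u) / np.dot(rpc["f"], u)   # normalized sample
    # Denormalize to pixel coordinates.
    line = Y * rpc["LINE_SCALE"] + rpc["LINE_OFF"]
    sample = X * rpc["SAMPLE_SCALE"] + rpc["SAMPLE_OFF"]
    return line, sample
```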

10/23/2022 Virtual Environment Lab, UTA 92


RPC Block Adjustment Model
The RPC block adjustment math model proposed is defined in the image space.
It uses denormalized RPC models, p and r, to express the object-space to image-
space relationship, and the adjustable functions, Δp and Δr, which are added to the
rational functions to capture the discrepancies between the nominal and the measured
image-space coordinates.
For each image point ‘i’ on image ‘j’, the RPC block adjustment math model is thus
defined as follows:
Line_i^(j) = Δp^(j) + p^(j)(φk, λk, hk) + εLi
Sample_i^(j) = Δr^(j) + r^(j)(φk, λk, hk) + εSi

where
εLi and εSi are random unobservable errors,
p^(j) and r^(j) are the given denormalized RPC models (line and sample) for image j,
Line_i^(j) and Sample_i^(j) are the measured line and sample coordinates (on image j) of the ith image point, corresponding to the kth ground control or tie point with object-space coordinates (φk, λk, hk), and
Δp^(j) and Δr^(j) are the adjustable functions expressing the differences between the measured and the nominal line and sample coordinates of ground control and/or tie points, for image j.

10/23/2022 Virtual Environment Lab, UTA 93


RPC Block Adjustment Model
The following is a general polynomial model, defined in the domain of image coordinates, representing the adjustable functions Δp and Δr:

Δp = a0 + aS·Sample + aL·Line + aSL·Sample·Line + aL2·Line² + aS2·Sample² + …
Δr = b0 + bS·Sample + bL·Line + bSL·Sample·Line + bL2·Line² + bS2·Sample² + …

The following truncated polynomial model, defined in the domain of image coordinates, is proposed to represent the adjustable functions:

Δp = a0 + aS·Sample + aL·Line
Δr = b0 + bS·Sample + bL·Line
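A minimal sketch of how the truncated (affine) adjustable functions act on top of the nominal RPC projection, reusing the illustrative rpc_project helper sketched earlier; the adj dictionary keys mirror the parameter names on this slide.

```python
def adjusted_image_coords(phi, lam, h, rpc, adj):
    """Nominal RPC projection plus the truncated adjustable functions
    Δp = a0 + aS·Sample + aL·Line and Δr = b0 + bS·Sample + bL·Line."""
    line, sample = rpc_project(phi, lam, h, rpc)   # nominal p(.), r(.)
    # The slides evaluate Δp, Δr at the measured image coordinates; using the
    # nominal projection here is an approximation that is negligible at the
    # sub-pixel level when the adjusted model is used for prediction.
    dp = adj["a0"] + adj["aS"] * sample + adj["aL"] * line
    dr = adj["b0"] + adj["bS"] * sample + adj["bL"] * line
    return line + dp, sample + dr
```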

10/23/2022 Virtual Environment Lab, UTA 94


RPC Block Adjustment Algorithm

Multiple overlapping images can be block adjusted using the RPC adjustment.

The overlapping images, with RPC models expressing the object-space to image-space relationship for each image, are tied together by tie points.
Optionally, the block may also have ground control points with
known or approximately known object-space coordinates and
measured image positions.
Because there is only one set of observation equations per
image point, index “i” uniquely identifies that set.

10/23/2022 Virtual Environment Lab, UTA 97


RPC Block Adjustment Algorithm
For the kth ground control or tie point, appearing as the ith image point on the jth image, the RPC block adjustment observation equations are:

FLi = −Line_i^(j) + Δp^(j) + p^(j)(φk, λk, hk) + εLi = 0
FSi = −Sample_i^(j) + Δr^(j) + r^(j)(φk, λk, hk) + εSi = 0

with:

Δp^(j) = a0^(j) + aS^(j)·Sample_i^(j) + aL^(j)·Line_i^(j)
Δr^(j) = b0^(j) + bS^(j)·Sample_i^(j) + bL^(j)·Line_i^(j)
Thus, observation equations are formed for each image point i.


The measured image-space coordinates for each image point i (Line_i^(j) and Sample_i^(j)) constitute the adjustment model observables, while the image adjustment parameters (a0^(j), aS^(j), aL^(j), b0^(j), bS^(j), bL^(j)) and the object-space coordinates (φk, λk, hk) comprise the unknown adjustment model parameters.

In the Δp^(j) and Δr^(j) terms, Line_i^(j) and Sample_i^(j) are approximate fixed values for the true image coordinates. Since the true image coordinates are not known, the values of the measured image coordinates are used instead. The effect of using these approximate values is negligible because image coordinates are measured with sub-pixel accuracy.
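To make the role of these approximations concrete, the sketch below evaluates the misclosure for one image point at the current approximate adjustment parameters and ground coordinates (this anticipates the misclosure vector wPi defined a few slides further on). It reuses the illustrative rpc_project helper; all names are assumptions made for the example.

```python
def misclosure(line_meas, sample_meas, phi0, lam0, h0, rpc, adj0):
    """w_Pi = measured - (Δ0 + nominal RPC projection), evaluated at the
    approximate adjustment parameters adj0 and approximate ground coordinates."""
    line_nom, sample_nom = rpc_project(phi0, lam0, h0, rpc)
    dp0 = adj0["a0"] + adj0["aS"] * sample_meas + adj0["aL"] * line_meas
    dr0 = adj0["b0"] + adj0["bS"] * sample_meas + adj0["bL"] * line_meas
    w_line = line_meas - dp0 - line_nom
    w_sample = sample_meas - dr0 - sample_nom
    return w_line, w_sample
```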

10/23/2022 Virtual Environment Lab, UTA 98


RPC Block Adjustment Algorithm

The observation equations for image point i can be collected as Fi = [FLi, FSi]ᵀ.

Applying a Taylor series expansion to the RPC block adjustment observation equations results in the following linearized model:

Fi0 + dFi + ε = 0

where Fi0 is the observation-equation vector evaluated at the approximate parameter values, and its negative is the misclosure vector wPi:

−Fi0 = [ Line_i^(j) − a0,0^(j) − aS,0^(j)·Sample_i^(j) − aL,0^(j)·Line_i^(j) − p^(j)(φk0, λk0, hk0) ] = wPi
       [ Sample_i^(j) − b0,0^(j) − bS,0^(j)·Sample_i^(j) − bL,0^(j)·Line_i^(j) − r^(j)(φk0, λk0, hk0) ]

(the additional 0 subscript denotes values evaluated at the current approximations).
And…

10/23/2022 Virtual Environment Lab, UTA 99


RPC Block Adjustment Algorithm

The linearized term dFi is:

dFi = [ dFLi ] = [ ∂FLi/∂xAᵀ|x0   ∂FLi/∂xGᵀ|x0 ] · [ dxA ] = [ AAi  AGi ] · dx
      [ dFSi ]   [ ∂FSi/∂xAᵀ|x0   ∂FSi/∂xGᵀ|x0 ]   [ dxG ]

dx = [ dxA ] ;   x0 = [ xA0 ]
     [ dxG ]          [ xG0 ]

dxA = [ da0^(1) daS^(1) daL^(1) db0^(1) dbS^(1) dbL^(1) … da0^(n) daS^(n) daL^(n) db0^(n) dbS^(n) dbL^(n) ]ᵀ
dxG = [ dφ1 dλ1 dh1 … dφ(m+p) dλ(m+p) dh(m+p) ]ᵀ

where
dx = x − x0 is the vector of unknown corrections to the approximate model parameters x0,
dxA is the sub-vector of corrections to the approximate image adjustment parameters for the n images,
dxG is the sub-vector of corrections to the approximate object-space coordinates for the m ground control and p tie points,
x0 is the vector of approximate model parameters, and
ε is a vector of unobservable random errors.

10/23/2022 Virtual Environment Lab, UTA 100


RPC Block Adjustment Algorithm
As a consequence of the previous reductions, the RPC block adjustment model in matrix form reads:

A·dx + ε = w

with

A = [ AA  AG ] ;   dx = [ dxA ] ;   w = [ wP ] ;   Cw = [ CP  0   0  ]
    [ I   0  ]          [ dxG ]         [ wA ]          [ 0   CA  0  ]
    [ 0   I  ]                          [ wG ]          [ 0   0   CG ]

The rows of AA and AG corresponding to image point i (on image j, tied to ground point k) are:

AAi = [ 0…0  1  Sample_i^(j)  Line_i^(j)  0  0  0  0…0 ]
      [ 0…0  0  0  0  1  Sample_i^(j)  Line_i^(j)  0…0 ]

AGi = [ 0…0  ∂FLi/∂φk|x0  ∂FLi/∂λk|x0  ∂FLi/∂hk|x0  0…0 ]
      [ 0…0  ∂FSi/∂φk|x0  ∂FSi/∂λk|x0  ∂FSi/∂hk|x0  0…0 ]

Cw: the a priori covariance matrix of the vector of misclosures, w
AA: the first-order design matrix for the image adjustment parameters
AG: the first-order design matrix for the object-space coordinates
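A minimal sketch of how the two design-matrix rows for one image point could be formed. The AAi rows follow directly from the slide; for the AGi rows, the partial derivatives of F with respect to the ground coordinates are approximated numerically here (the slides leave them as symbolic partials), and the step sizes are purely illustrative.

```python
import numpy as np

def design_rows(line_meas, sample_meas, phi0, lam0, h0, rpc, adj0,
                delta=(1e-7, 1e-7, 0.1)):
    """Rows of A_Ai (analytic) and A_Gi (forward-difference partials of F
    with respect to phi [deg], lam [deg], h [m])."""
    A_Ai = np.array([
        [1.0, sample_meas, line_meas, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 1.0, sample_meas, line_meas],
    ])

    def F(phi, lam, h):
        # Observation equations evaluated at the current approximations.
        line_nom, sample_nom = rpc_project(phi, lam, h, rpc)
        dp = adj0["a0"] + adj0["aS"] * sample_meas + adj0["aL"] * line_meas
        dr = adj0["b0"] + adj0["bS"] * sample_meas + adj0["bL"] * line_meas
        return np.array([-line_meas + dp + line_nom,
                         -sample_meas + dr + sample_nom])

    F0 = F(phi0, lam0, h0)
    cols = []
    for k, (x, step) in enumerate(zip((phi0, lam0, h0), delta)):
        args = [phi0, lam0, h0]
        args[k] = x + step
        cols.append((F(*args) - F0) / step)   # numerical partial derivative
    A_Gi = np.column_stack(cols)
    return A_Ai, A_Gi
```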

10/23/2022 Virtual Environment Lab, UTA 103


RPC Block Adjustment Algorithm
wP is the vector of misclosures for the image-space coordinates, and wPi is the sub-vector of misclosures for the image-space coordinates of the ith image point on the jth image:

wP = [ wP1 … wPi … ]ᵀ

wPi = [ Line_i^(j) − a0,0^(j) − aS,0^(j)·Sample_i^(j) − aL,0^(j)·Line_i^(j) − p^(j)(φk0, λk0, hk0) ]
      [ Sample_i^(j) − b0,0^(j) − bS,0^(j)·Sample_i^(j) − bL,0^(j)·Line_i^(j) − r^(j)(φk0, λk0, hk0) ]
wA=0 is the vector of misclosures for the image adjustment parameters,
wG=0 is the vector of misclosures for the object space coordinates,
CP is the a priori covariance matrix of image-space coordinates,
CA is the a priori covariance matrix of the image adjustment parameters,
CG is the a priori covariance matrix of object-space coordinates

10/23/2022 Virtual Environment Lab, UTA 104


A Priori Constraints
This block adjustment model allows the introduction of a priori information using the
Bayesian estimation approach, which blurs the distinction between observables and
unknowns – both are treated as random quantities.
In the context of least squares, a priori information is introduced in the form of weighted constraints. A priori uncertainty is expressed by CA, CP, and CG.
CA: uncertainty of a priori knowledge of the image adjustment parameters.
In an offset-only model, the diagonal elements of CA (the variances of a0 and b0) express the uncertainty of the a priori satellite attitude and ephemeris.
CP: prior knowledge of image-space coordinates for ground control and tie points.
Line and sample variances in CP are set according to the accuracy of the image
measurement process.
CG: prior knowledge of object-space coordinates for ground control and tie points.
In the absence of any prior knowledge of the object coordinates for tie points, the corresponding entries in CG can be made large (e.g., 10,000 m) so that they introduce no significant bias.
One could also remove the weighted constraints for object coordinates of tie points from the
observation equations. But being able to introduce prior information for the object
coordinates of tie points adds flexibility.
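A minimal sketch of how such a priori covariances might be assembled into the block-diagonal Cw shown earlier, here for just one image point, one image (offset-only model) and one tie point; all standard deviations are illustrative, not values from the source.

```python
import numpy as np
from scipy.linalg import block_diag

# Illustrative a priori standard deviations (assumptions for this example only):
sigma_px = 0.5       # image measurement accuracy, pixels        -> C_P
sigma_bias = 10.0    # offset parameters a0, b0, pixels          -> C_A (offset-only model)
sigma_tie = 10000.0  # "free" tie-point object coordinates       -> C_G (units follow the
                     #  chosen object-space parameterization)

C_P = (sigma_px ** 2) * np.eye(2)    # per image point (line, sample)
C_A = (sigma_bias ** 2) * np.eye(2)  # per image, offset-only model (a0, b0)
C_G = (sigma_tie ** 2) * np.eye(3)   # per tie point with no prior knowledge

# Block-diagonal a priori covariance of the misclosure vector w = [wP; wA; wG]
C_w = block_diag(C_P, C_A, C_G)
```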

10/23/2022 Virtual Environment Lab, UTA 105


RPC Block Adjustment Algorithm
Since the math model is non-linear, the least squares solution needs to be iterated until convergence is achieved. At each iteration step, application of the least squares principle results in the following vector of estimated corrections to the approximate values of the model parameters:

dx̂ = (Aᵀ Cw⁻¹ A)⁻¹ Aᵀ Cw⁻¹ w

At the subsequent iteration step, the vector of approximate model parameters x0 is replaced by the estimated values:

x̂ = x0 + dx̂

The least squares estimation is repeated until convergence is reached. The covariance matrix of the estimated model parameters is:

Cx̂ = (Aᵀ Cw⁻¹ A)⁻¹
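A minimal numerical sketch of this iteration in Python. The build_A_and_w callback stands in for the relinearization of the model at the current parameter values; it and the other names are illustrative, not an existing API.

```python
import numpy as np

def solve_iteration(A, C_w, w):
    """One weighted least squares step:
    dx_hat = (A^T Cw^-1 A)^-1 A^T Cw^-1 w, plus the parameter covariance."""
    Cw_inv = np.linalg.inv(C_w)
    N = A.T @ Cw_inv @ A                        # normal matrix
    dx_hat = np.linalg.solve(N, A.T @ Cw_inv @ w)
    C_x_hat = np.linalg.inv(N)                  # covariance of estimated parameters
    return dx_hat, C_x_hat

def block_adjust(x0, build_A_and_w, C_w, tol=1e-6, max_iter=10):
    """Iterate until the corrections become negligible. build_A_and_w is assumed
    to relinearize the model, returning (A, w) at the current parameter values."""
    x = np.array(x0, dtype=float)
    C_x = None
    for _ in range(max_iter):
        A, w = build_A_and_w(x)
        dx, C_x = solve_iteration(A, C_w, w)
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x, C_x
```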

10/23/2022 Virtual Environment Lab, UTA 106


Experimental Results
Project located in Mississippi, with 6 stereo strips and 40 well-distributed GCPs.
Each of the 12 source images was produced as a georectified image with RPCs.
The images were then loaded onto a Socet SET workstation running the RPC block adjustment
model.
Multiple well-distributed tie-points were measured along the edges of the images.
Ground points were selectively changed between control and check points to quantify block
adjustment accuracy as a function of the number and distribution of GCPs.
The block adjustment results were obtained using a simple two-parameter, offset-only model with a
priori values for a0 and b0 of 0 pixels and a priori standard deviation of 10 pixels.
GCP configuration | Avg. Error Longitude (m) | Avg. Error Latitude (m) | Avg. Error Height (m) | Std. Dev. Longitude (m) | Std. Dev. Latitude (m) | Std. Dev. Height (m)
None              | -5.0 | 6.2 |  1.6 | 0.97 | 1.08 | 2.02
1 in center       | -2.0 | 0.5 | -1.1 | 0.95 | 1.07 | 2.02
3 on edge         | -0.4 | 0.3 |  0.2 | 0.97 | 1.06 | 1.96
4 in corners      | -0.2 | 0.3 |  0.0 | 0.95 | 1.06 | 1.95
All 40 GCPs       |  0.0 | 0.0 |  0.0 | 0.55 | 0.75 | 0.50

When all 40 GCPs are used, the ground control overwhelms the tie points and the a priori constraints, thus,
effectively adjusting each strip separately such that it minimizes control point errors on that individual strip.

10/23/2022 Virtual Environment Lab, UTA 108


RPC - Conclusion
• The RPC camera model provides a simple, fast and accurate representation of the IKONOS physical camera model.

• If the a priori knowledge of exposure station position and angles permits a small-angle approximation, then adjustment of the exterior orientation reduces to a simple bias in image space.

• Due to the high accuracy of IKONOS, even without ground control, block adjustment can be accomplished in image space.

• RPC models are equally applicable to a variety of imaging systems and so could become a standardized representation of their image geometry.

• From simulation and numerical examples, it is seen that this method is as accurate as the ground-station block adjustment with the physical camera model.

10/23/2022 Virtual Environment Lab, UTA 109


Finally, let's review all the topics that we have covered…

10/23/2022 Virtual Environment Lab, UTA 110


Summary
The mathematical concepts covered today were:
1. Least squares adjustment (formulating observation equations and reducing
to normal equations)
2. Collinearity condition equations (derivation and linearization)
3. Space Resection (finding exterior orientation parameters)
4. Space Intersection (finding object space coordinates of common point in
stereopair)
5. Analytical Stereomodel (interior, relative and absolute orientation)
6. Ground control for Aerial photogrammetry
7. Aerotriangulation
8. Bundle adjustment (adjusting all photogrammetric measurements to ground
control values in a single solution)- conventional and RPC based

10/23/2022 Virtual Environment Lab, UTA 111


Terms
Some of the terminology can cause confusion. For instance, pass points and tie points mean the same thing; (ground) control points are tie points whose coordinates in the object-space/ground-control coordinate system are known; and check points are points that are treated as tie points in the adjustment, but whose actual ground coordinates are very accurately known, so they can be used to assess accuracy.
Below are some more terms used in photogrammetry, along with their brief
descriptions:
1. stereopair: two adjacent photographs that overlap by more than 50%
2. space resection: finding the 6 elements of exterior orientation
3. space intersection: finding object point coordinates for points in stereo
overlap

4. stereomodel: object points that appear in the overlap area of a stereopair


5. analytical stereomodel: 3D ground coordinates of points in the stereomodel, mathematically calculated using analytical photogrammetric techniques

10/23/2022 Virtual Environment Lab, UTA 112


Terms
6. interior orientation: photo coordinate refinement, including corrections
for film distortions, lens distortion, atmospheric refraction, etc.
7. relative orientation: relative angular attitude and positional
displacement of two photographs.
8. absolute orientation: exposure station orientations related to a ground
based coordinate system.
9. aerotriangulation: determination of X, Y and Z ground coordinates of
individual points based on photo measurements.
10. bundle adjustment: adjusting all photogrammetric measurements to
ground control values in a single solution
11. horizontal tie points: tie points whose X and Y coordinates are known.
12. vertical tie points: tie points whose Z coordinate is known.

10/23/2022 Virtual Environment Lab, UTA 113


Software Products Available
There is a variety of software solutions available in the market today to perform all the
functionalities that we have seen today. The following is a list of a few of them:

1. ERDAS IMAGINE (http://gi.leica-geosystems.com): ERDAS Imagine photogrammetry suite


has all of the basic photogrammetry tools like block adjustment, orthophoto creation, metric
and non-metric camera support, and satellite image support for SPOT, Ikonos, and others. It is
perhaps one of the most popular photogrammetric tools currently.
2. ESPA (http://www.espasystems.fi): ESPA is a desktop software package aimed at digital aerial photogrammetry and airborne Lidar processing.
3. Geomatica (http://www.pcigeomatics.com/geomatica/demo.html): PCI Geomatics' Geomatica offers a single integrated environment for remote sensing, GIS, photogrammetry, cartography, web and development tools. A demo version of the software is also available at their website.
4. Image Station (http://www.intergraph.com): Intergraph’s Z/I Imaging ImageStation comprises
modules like Photogrammetric Manager, Model Setup, Digital Mensuration, Automatic
Triangulation, Stereo Display, Feature Collection, DTM Collection, Automatic Elevations,
ImageStation Base Rectifier, OrthoPro, PixelQue, Image Viewer, Image Analyst.
5. INPHO (http://www.inpho.de): INPHO is an end-to-end photogrammetric systems supplier.
INPHO’s portfolio covers the entire workflow of photogrammetric projects, including aerial
triangulation, stereo compilation, terrain modeling, orthophoto production and image capture.
6. iWitness (http://www.iwitnessphoto.com): iWitness from DeChant Consulting Services is a
close-range photogrammetry software system that has been developed for accident
reconstruction and forensic measurement.

10/23/2022 Virtual Environment Lab, UTA 115


Software Products Available
7. (Aerosys) OEM Pak (http://www.aerogeomatics.com/aerosys/products.html): This free package
from Aerosys offers the exact same features as its Pro Version, except that the bundle adjustment
is limited to a maximum of 15 photos.
8. PHOTOMOD (http://www.racurs.ru/?page=94): PHOTOMOD, a software family from Racurs, Russia, comprises products for photogrammetric processing of remote sensing data which allow users to extract geometrically accurate spatial information from almost any commercially available type of imagery.
9. PhotoModeler (http://www.photomodeler.com/downloads/default.htm): PhotoModeler, the software
program from Eos Systems, allows you to create 3D models and measurements from photographs
with export capabilities to 3D Studio 3DS, Wavefront OBJ, OpenNURBS/Rhino, RAW, Maya Script
format, and Google Earth’s KML and KMZ, etc.
10. SOCET SET (http://www.socetgxp.com): This is a digital photogrammetry software application from
BAE Systems. SOCET SET works with the latest airborne digital sensors and includes innovative
point-matching algorithms for multi-sensor triangulation. SOCET SET used to be the standard against which all other photogrammetry packages were measured.
11. SUMMIT EVOLUTION (http://www.datem.com/support/download.html): Summit Evolution is the
digital photogrammetric workstation from DAT/EM, released in April 2001 at the ASPRS
Conference. The features of the software include subpixel functionality, support for different
orientation methods and various formats.
12. Vr Mapping Software (http://www.cardinalsystems.net): Vr Mapping Software Suite includes
modules for 2D/3D collection and editing, stereo softcopy, orthophoto rectification, aerial
triangulation, bundle adjustment, ortho mosaicing, volume computation, etc.

10/23/2022 Virtual Environment Lab, UTA 116


Open Source Software Solutions
There are three separate modules for relative orientation (relor.exe), space
resection (resect.exe) and 3D conformal coordinate transformation
(3DCONF.exe) available at: http://www.surv.ufl.edu/wolfdewitt/download.html

Another open source program is DGAP, a program for General Analytical Positioning, which can be found at:
http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html

10/23/2022 Virtual Environment Lab, UTA 117


References
1. Wolf, Dewitt: “Elements of Photogrammetry”, McGraw Hill, 2000
2. Dial, Grodecki: “Block Adjustment with Rational Polynomial
Camera Models”, ACSM-ASPRS 2002 Annual Conference
Proceedings, 2002
3. Grodecki, Dial: “Block Adjustment of High-Resolution Satellite
Images described by Rational Polynomials”, PE&RS Jan 2003
4. Wikipedia
5. Other online resources
6. Software reviews from:
http://www.gisdevelopment.net/downloads/photo/index.htm and
http://www.gisvisionmag.com/vision.php?article=200202%2Freview.html

10/23/2022 Virtual Environment Lab, UTA 118
