
Mathematical Foundation of Photogrammetry
(part of EE5358)
Dr. Venkat Devarajan
Ms. Kriti Chauhan

Photogrammetry
photo = "picture", grammetry = "measurement";
therefore photogrammetry = photo-measurement.
Photogrammetry is the science or art of obtaining
reliable measurements by means of photographs.
Formal Definition:
Photogrammetry is the art, science and technology of obtaining reliable
information about physical objects and the environment, through
processes of recording, measuring, and interpreting photographic
images and patterns of recorded radiant electromagnetic energy and
other phenomena.
- As given by the American Society for Photogrammetry and Remote Sensing
(ASPRS)
Chapter 1
03/06/16

Virtual Environment Lab, UTA

Distinct Areas
Metric Photogrammetry:
- making precise measurements from photos to determine the relative locations of points
- finding distances, angles, areas, volumes, elevations, and sizes and shapes of objects
- Most common applications:
  1. preparation of planimetric and topographic maps
  2. production of digital orthophotos
  3. military intelligence, such as targeting
Interpretative Photogrammetry:
- deals in recognizing and identifying objects and judging their significance through careful and systematic analysis
- branches: photographic interpretation, and remote sensing (includes use of multispectral cameras, infrared cameras, thermal scanners, etc.)
Chapter 1

Uses of Photogrammetry
Products of photogrammetry:
1.

Topographic maps: detailed and accurate graphic representation of cultural and


natural features on the ground.

2.

Orthophotos: Aerial photograph modified so that its scale is uniform throughout.

3.

Digital Elevation Maps (DEMs): an array of points in an area that have X, Y and Z
coordinates determined.

Current Applications:
1.

Land surveying

2.

Highway engineering

3.

Preparation of tax maps, soil maps, forest maps, geologic maps, maps for city
and regional planning and zoning

4.

Traffic management and traffic accident investigations

5.

Military digital mosaic, mission planning, rehearsal, targeting etc.


Chapter 1

Types of photographs
- Aerial
  - Vertical
    - Truly vertical
    - Tilted (1 deg < tilt angle < 3 deg)
  - Oblique
    - Low oblique (does not include horizon)
    - High oblique (includes horizon)
- Terrestrial

Chapter 1

Of all these types of photographs, vertical and low oblique aerial photographs are of most interest to us, as they are the ones most extensively used for mapping purposes.


Aerial Photography
Vertical aerial photographs are taken along parallel passes called flight strips.
The overlap of successive photographs along a flight strip is called end lap (usually about 60%).
The area of common coverage is called the stereoscopic overlap area.
Two overlapping photos are called a stereopair.

Chapter 1

Aerial Photography
The position of the camera at each exposure is called the exposure station.
The altitude of the camera at the time of exposure is called the flying height.
The lateral overlap of adjacent flight strips is called side lap (usually 30%).
The photographs of two or more sidelapping strips used to cover an area are referred to as a block of photos.

Chapter 1
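As a worked example of these overlap figures: for a square-format vertical photo over flat terrain, the ground distance covered by one photo side is G = (format size)·H/f, the air base follows from the end lap, and the strip spacing from the side lap. A minimal sketch (the function name and the square-format, flat-terrain assumptions are ours, not from the slides):

```python
def strip_geometry(H, f, photo_size, end_lap=0.60, side_lap=0.30):
    """Air base and adjacent-strip spacing for a square-format vertical photo.

    H          : flying height above (assumed flat) terrain
    f          : focal length, same units as photo_size
    photo_size : side length of the square photo format
    """
    G = photo_size * H / f        # ground distance covered by one photo side
    B = (1.0 - end_lap) * G       # air base: distance between successive exposure stations
    W = (1.0 - side_lap) * G      # spacing between adjacent flight strips
    return B, W
```

For example, a 230 mm format at H = 1520 m with f = 152 mm covers 2300 m per side, giving an air base of 920 m at 60% end lap and a strip spacing of 1610 m at 30% side lap.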

Now, let's examine the acquisition devices for these photographs.


Camera / Imaging Devices
The general term imaging devices is used to describe instruments used for primary photogrammetric data acquisition.
Types of imaging devices (based on how the image is formed):
1. Frame sensors/cameras: acquire the entire image simultaneously.
2. Strip cameras, linear array sensors or pushbroom scanners: sense only a linear projection (strip) of the field of view at a given time, and require the device to sweep across the ground area to get a 2D image.
3. Flying spot scanners or mechanical scanners: detect only a small spot at a time, and require movement in two directions (sweep and scan) to form a 2D image.
Chapter 3

Aerial Mapping Camera
Aerial mapping cameras are the imaging devices used in traditional photogrammetry.

Chapter 3

Let's examine the terms and characteristics associated with a camera, the parameters of a camera, and how to determine them.


Focal Plane of Aerial Camera
The focal plane of an aerial camera is the plane in which all incident light rays are brought to focus.
The focal plane is set as exactly as possible at a distance equal to the focal length behind the rear nodal point of the camera lens. In practice, the film emulsion rests on the focal plane.
Rear nodal point: the emergent nodal point of a thick combination lens (N in the figure).
Note: The principal point is a 2D point on the image plane. It is the intersection of the optical axis and the image plane.
Chapter 3

Fiducials in Aerial Camera


Fiducials are 2D control points whose xy coordinates are precisely and
accurately determined as a part of camera calibration.
Fiducial marks are situated in the middle of the sides of the focal plane
opening or in its corners, or in both locations.
They provide coordinate reference for principal point and image points.
Also allow for correction of film distortion (shrinkage and expansion)
since each photograph contains images of these stable control points.
Lines joining opposite fiducials intersect at a point called the indicated
principal point. Aerial cameras are carefully manufactured so that
this occurs very close to the true principal point.
True principal point: Point in the focal plane where a line from the rear
nodal point of the camera lens, perpendicular to the focal plane,
intersects the focal plane.
Chapter 3

Elements of Interior Orientation
Elements of interior orientation are the parameters needed to determine accurate spatial information from photographs. These are as follows:
1. Calibrated focal length (CFL): the focal length that produces an overall mean distribution of lens distortion.
2. Symmetric radial lens distortion: the symmetric component of distortion that occurs along radial lines from the principal point. Although negligible, it is theoretically always present.
3. Decentering lens distortion: distortion that remains after compensating for symmetric radial lens distortion. Components: asymmetric radial and tangential lens distortion.
4. Principal point location: specified by the coordinates of the principal point given wrt the x and y coordinates of the fiducial marks.
5. Fiducial mark coordinates: the x and y coordinates which provide the 2D positional reference for the principal point as well as images on the photograph.
The elements of interior orientation are determined through camera calibration.


Chapter 3
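The symmetric radial distortion listed above is commonly modeled as an odd-order polynomial in the radial distance from the principal point. A minimal correction sketch; the polynomial form and the coefficient names k1, k2 follow a common convention and are not given in these slides:

```python
import math

def correct_radial(x, y, x0, y0, k1, k2=0.0):
    """Remove symmetric radial lens distortion from measured photo coordinates.

    (x0, y0) is the principal point; k1, k2 are hypothetical distortion
    coefficients. Distortion is modeled as dr = k1*r^3 + k2*r^5, acting
    along radial lines from the principal point.
    """
    xb, yb = x - x0, y - y0            # coordinates relative to the principal point
    r = math.hypot(xb, yb)
    if r == 0.0:
        return x, y                    # no radial direction at the principal point itself
    dr = k1 * r**3 + k2 * r**5         # radial distortion at radius r
    return x - xb * (dr / r), y - yb * (dr / r)
```

The correction shifts each image point radially toward (or away from) the principal point by dr, resolved into x and y components.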

Other Camera Characteristics
Other camera characteristics that are often of significance are:
1. Resolution for various distances from the principal point (highest near the center, lowest at the corners of the photo).
2. Focal plane flatness: the deviation of the platen from a true plane. Measured by a special gauge; generally not more than 0.01 mm.
3. Shutter efficiency: the ability of the shutter to open instantaneously, remain open for the specified exposure duration, and close instantaneously.

Chapter 3

Camera Calibration: General Approach
Step 1) Photograph an array of targets whose relative positions are accurately known.
Step 2) Determine the elements of interior orientation:
- make precise measurements of the target images
- compare actual image locations to the positions they should have occupied had the camera produced a perfect perspective view
This is the approach followed in most methods.

Chapter 3

After determining the interior camera parameters, we consider measurements of image points from images.


Photogrammetric Scanners
Photogrammetric scanners are the devices used to convert the content of
photographs from analog form (a continuous-tone image) to digital form (an
array of pixels with their gray levels quantified by numerical values).
Coordinate measurement on the acquired digital image can be done either
manually, or through automated image-processing algorithms.
Requirements: sufficient geometric and radiometric resolution, and high
geometric accuracy.
Geometric/spatial resolution indicates the pixel size of the resultant image. The smaller the pixel size, the greater the detail that can be detected in the image. For high quality photogrammetric scanners, the minimum pixel size is on the order of 5 to 15 μm.
Radiometric resolution indicates the number of quantization levels. The minimum should be 256 levels (8 bit); most scanners are capable of 1024 levels (10 bit) or higher.
Geometric quality indicates the positional accuracy of pixels in the resultant image. For high quality scanners, it is around 2 to 3 μm.
Chapter 4

Sources of Error in Photo Coordinates
The following are some of the sources of error that can distort the true photo coordinates:
1. Film distortions due to shrinkage, expansion and lack of flatness
2. Failure of fiducial axes to intersect at the principal point
3. Lens distortions
4. Atmospheric refraction distortions
5. Earth curvature distortion
6. Operator error in measurements
7. Errors made by automated correlation techniques

Chapter 4

Now that we have covered the basics of image acquisition and measurement, we turn to analytical photogrammetry.


Analytical Photogrammetry
Definition: Analytical photogrammetry is the term used to describe the rigorous mathematical calculation of coordinates of points in object space, based upon camera parameters, measured photo coordinates and ground control.
Features of analytical photogrammetry:
- rigorously accounts for any tilts
- generally involves the solution of large, complex systems of redundant equations by the method of least squares
- forms the basis of many modern hardware and software systems, including stereoplotters, digital terrain model generation, orthophoto production, digital photo rectification and aerotriangulation.

Chapter 11

Image Measurement Considerations
Before using the x and y photo coordinate pair, the following conditions should be considered:
1. Coordinates (usually in mm) are relative to the principal point, the origin.
2. Analytical photogrammetry is based on assumptions such as: light rays travel in straight lines, and the focal plane of a frame camera is flat. Thus, coordinate refinements may be required to compensate for the sources of error that violate these assumptions.
3. Measurements must be ensured to have high accuracy.
4. While making measurements of image coordinates of common points that appear in more than one photograph, each object point must be precisely identified between photos so that the measurements are consistent.
5. Object space coordinates are based on a 3D cartesian system.

Chapter 11

Now, we come to the most fundamental and useful relationship in analytical photogrammetry: the collinearity condition.


Collinearity Condition
The collinearity condition is illustrated in the figure below: the exposure station of a photograph, an object point, and its photo image all lie along a straight line. Based on this condition, we can develop complex mathematical relationships.

Appendix D

Collinearity Condition Equations
Let:
- the coordinates of the exposure station be XL, YL, ZL wrt the object (ground) coordinate system XYZ
- the coordinates of object point A be XA, YA, ZA wrt the ground coordinate system XYZ
- the coordinates of image point a of object point A be xa, ya, za wrt the xy photo coordinate system (of which the principal point o is the origin; correction compensation for it is applied later)
- the coordinates of image point a be xa', ya', za' in a rotated image plane x'y'z' which is parallel to the object coordinate system
Transformation of (xa', ya', za') to (xa, ya, za) is accomplished using rotation equations, which we derive next.
Appendix D

Rotation Equations
Omega rotation about the x axis:
The new coordinates (x1, y1, z1) of a point (x, y, z), after rotation of the original coordinate reference frame about the x axis by angle ω, are given by:
  x1 = x
  y1 = y cos ω + z sin ω
  z1 = -y sin ω + z cos ω
Similarly, we obtain equations for the phi rotation about the y axis:
  x2 = -z1 sin φ + x1 cos φ
  y2 = y1
  z2 = z1 cos φ + x1 sin φ
And equations for the kappa rotation about the z axis:
  x = x2 cos κ + y2 sin κ
  y = -x2 sin κ + y2 cos κ
  z = z2
Appendix C

Final Rotation Equations
We substitute the equations at each stage to get the following:
  x = m11 x' + m12 y' + m13 z'
  y = m21 x' + m22 y' + m23 z'
  z = m31 x' + m32 y' + m33 z'
where the m's are functions of the rotation angles ω, φ and κ.
In matrix form: X = M X'
where
  X = [x, y, z]^T
  M = [ m11  m12  m13 ]
      [ m21  m22  m23 ]
      [ m31  m32  m33 ]
  X' = [x', y', z']^T
Properties of the rotation matrix M:
1. The sum of squares of the 3 direction cosines (elements of M) in any row or column is unity.
2. M is orthogonal, i.e. M^-1 = M^T.


Appendix C
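The three sequential rotations can be composed numerically to obtain M and to verify both properties above. A short NumPy sketch (the function name is ours):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix M (angles in radians).

    Applies omega about x, then phi about y, then kappa about z,
    matching the three-stage derivation on the previous slide.
    """
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    # Elementary frame rotations, transcribed from the stage equations
    Mo = np.array([[1.0, 0.0, 0.0], [0.0, co, so], [0.0, -so, co]])
    Mp = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    Mk = np.array([[ck, sk, 0.0], [-sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return Mk @ Mp @ Mo
```

With all three angles zero (a truly vertical, unrotated photo), M reduces to the identity matrix.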

Coming back to the collinearity condition


Collinearity Equations
Using the property of similar triangles:

  xa' / (XA - XL) = ya' / (YA - YL) = za' / (ZA - ZL)

so that

  xa' = [(XA - XL) / (ZA - ZL)] za' ;  ya' = [(YA - YL) / (ZA - ZL)] za'

Substitute this into the rotation formula:

  xa = m11 [(XA - XL)/(ZA - ZL)] za' + m12 [(YA - YL)/(ZA - ZL)] za' + m13 za'
  ya = m21 [(XA - XL)/(ZA - ZL)] za' + m22 [(YA - YL)/(ZA - ZL)] za' + m23 za'
  za = m31 [(XA - XL)/(ZA - ZL)] za' + m32 [(YA - YL)/(ZA - ZL)] za' + m33 za'

Now, factor out za'/(ZA - ZL), divide xa and ya by za, add corrections for the offset of the principal point (xo, yo), and equate za = -f, to get:

  xa = xo - f [m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

  ya = yo - f [m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

Appendix D

Review of Collinearity Equations
Collinearity equations:

  xa = xo - f [m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

  ya = yo - f [m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

where
- xa, ya are the photo coordinates of image point a
- XA, YA, ZA are the object space coordinates of object/ground point A
- XL, YL, ZL are the object space coordinates of the exposure station location
- f is the camera focal length
- xo, yo are the offsets of the principal point coordinates
- the m's are functions of the rotation angles omega, phi, kappa (as derived earlier)

The equations are nonlinear and involve 9 unknowns:
1. omega, phi, kappa, inherent in the m's
2. object point coordinates (XA, YA, ZA)
3. exposure station coordinates (XL, YL, ZL)
Ch. 11 & App D
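The two equations can be evaluated directly in code. A NumPy sketch (the function name and argument order are ours; M is the rotation matrix derived earlier):

```python
import numpy as np

def collinearity(M, f, xo, yo, XL, YL, ZL, XA, YA, ZA):
    """Photo coordinates (xa, ya) of ground point A from the collinearity equations."""
    dX, dY, dZ = XA - XL, YA - YL, ZA - ZL
    # Common denominator: the m31, m32, m33 row applied to the differences
    q = M[2, 0]*dX + M[2, 1]*dY + M[2, 2]*dZ
    xa = xo - f * (M[0, 0]*dX + M[0, 1]*dY + M[0, 2]*dZ) / q
    ya = yo - f * (M[1, 0]*dX + M[1, 1]*dY + M[1, 2]*dZ) / q
    return xa, ya
```

For a truly vertical photo (M = I, xo = yo = 0), the equations reduce to the familiar xa = f·(XA - XL)/(ZL - ZA), i.e. photo coordinates scale with f/H'.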

Now that we know about the collinearity condition, let's see where we need to apply it. First, we need to know what it is that we need to find.


Elements of Exterior Orientation
As already mentioned, the collinearity conditions involve 9 unknowns:
1) exposure station attitude (omega, phi, kappa),
2) exposure station coordinates (XL, YL, ZL), and
3) object point coordinates (XA, YA, ZA).
Of these, we first need to compute the position and attitude of the exposure station, also known as the elements of exterior orientation.
Thus the 6 elements of exterior orientation are:
1) the spatial position (XL, YL, ZL) of the camera, and
2) the angular orientation (omega, phi, kappa) of the camera.
All methods to determine the elements of exterior orientation of a single tilted photograph require:
1) photographic images of at least three control points whose X, Y and Z ground coordinates are known, and
2) the calibrated focal length of the camera.
Chapter 10

As an aside, from earlier discussion:
Elements of Interior Orientation
Elements of interior orientation, which can be determined through camera calibration, are as follows:
1. Calibrated focal length (CFL): the focal length that produces an overall mean distribution of lens distortion. Better termed the calibrated principal distance, since it represents the distance from the rear nodal point of the lens to the principal point of the photograph, which is set as close to the optical focal length of the lens as possible.
2. Principal point location: specified by the coordinates of the principal point given wrt the x and y coordinates of the fiducial marks.
3. Fiducial mark coordinates: the x and y coordinates of the fiducial marks, which provide the 2D positional reference for the principal point as well as images on the photograph.
4. Symmetric radial lens distortion: the symmetric component of distortion that occurs along radial lines from the principal point. Although negligible, it is theoretically always present.
5. Decentering lens distortion: distortion that remains after compensating for symmetric radial lens distortion. Components: asymmetric radial and tangential lens distortion.
Chapter 3

Next, we look at space resection, which is used for determining the camera station coordinates from a single vertical/low oblique aerial photograph.


Space Resection By Collinearity
Space resection by collinearity involves formulating the collinearity equations for a number of control points whose X, Y, and Z ground coordinates are known and whose images appear in the vertical/tilted photo.
The equations are then solved for the six unknown elements of exterior orientation that appear in them.
- 2 equations are formed for each control point.
- 3 control points (the minimum) give 6 equations, so the solution is unique, while 4 or more control points (more than 6 equations) allow a least squares solution (residual terms will exist).
- Initial approximations are required for the unknown orientation parameters, since the collinearity equations are nonlinear and have been linearized using Taylor's theorem.
[Table: no. of points vs. no. of equations vs. unknown exterior orientation parameters]
Chapter 10 & 11

Coplanarity Condition
A similar condition to the collinearity condition is coplanarity: the condition that the two exposure stations of a stereopair, any object point, and its corresponding image points on the two photos all lie in a common plane.
Like the collinearity equations, the coplanarity equation is nonlinear and must be linearized by using Taylor's theorem. Linearization of the coplanarity equation is somewhat more difficult than that of the collinearity equations.
But coplanarity is not used nearly as extensively as collinearity in analytical photogrammetry. Space resection by collinearity is the only method still commonly used to determine the elements of exterior orientation.


Initial Approximations for Space Resection
We need initial approximations for all six exterior orientation parameters.
- Omega and phi angles: for the typical case of near-vertical photography, initial values of omega and phi can be taken as zeros.
- ZL (flying height H above the datum plane):
  - an altimeter reading can be used for rough calculations, or
  - H can be computed using a ground line of known length appearing on the photograph.
  - To compute H, only 2 control points are required; the rest are redundant. The approximation can be improved by averaging several values of H.

Chapter 11 & 6

Calculating Flying Height (H)
Flying height H can be calculated using a ground line of known length that appears on the photograph.
The ground line should be on fairly level terrain, as a difference in elevation of the endpoints results in error in the computed flying height.
Accurate results can be obtained despite this, though, if the images of the end points are approximately equidistant from the principal point of the photograph and on a line through the principal point.
H can be calculated using the equations for the scale of a photograph:
  S = ab/AB = f/H      (scale of photograph over flat terrain)
or
  S = f/(H - h)        (scale of photograph at any point whose elevation above datum is h)
Chapter 6

As an explanation of the equations from which H is calculated:
Photographic Scale
  S = ab/AB = f/H
  SAB = ab/AB = La/LA = Lo/LO = f/(H - h)
where
1) S is the scale of a vertical photograph over flat terrain
2) SAB is the scale of a vertical photograph over variable terrain
3) ab is the distance between the images of points A and B on the photograph
4) AB is the actual distance between points A and B
5) f is the focal length
6) La is the distance between exposure station L and image a of point A on the photo positive
7) LA is the distance between exposure station L and point A
8) Lo = f is the distance from L to the principal point o on the photograph
9) LO = H - h is the distance from L to the projection O of o onto the horizontal plane containing point A, with h being the height of point A above the datum plane
Note: For vertical photographs taken over variable terrain, there are an infinite number of different scales.
Chapter 6
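The second scale equation rearranges directly to H = h + f·AB/ab. A one-line sketch (the function name is ours):

```python
def flying_height(f, ab, AB, h=0.0):
    """Flying height H above datum from a ground line of known length.

    f  : calibrated focal length (same units as ab, e.g. mm)
    ab : measured photo distance between the endpoint images
    AB : known ground length of the line (same units as H and h)
    h  : average elevation of the (fairly level) ground line above datum
    """
    # S = ab/AB = f/(H - h)  =>  H = h + f*AB/ab
    return h + f * AB / ab
```

For example, a 1000 m ground line imaged as 76 mm with f = 152 mm and endpoint elevation 200 m gives H = 200 + 152·1000/76 = 2200 m.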

Initial Approx. for XL, YL and Kappa
X and Y ground coordinates of any point can be obtained by simply multiplying the x and y photo coordinates by the inverse of the photo scale at that point. This requires knowing
- f, H, and
- the elevation of the object point (Z or h).
A 2D conformal coordinate transformation (comprising rotation and translation) can then be performed, which relates these ground coordinates computed from the vertical photo equations to the control values:
  X = a.x - b.y + TX ;  Y = a.y + b.x + TY
We know (x, y) and (X, Y) for n control points, giving us 2n equations.
The 4 unknown transformation parameters (a, b, TX, TY) can therefore be calculated by least squares. So essentially we are running the resection equations in a diluted mode with initial values of as many parameters as we can find, to calculate the initial parameters of those that cannot be easily estimated.
TX and TY are used as the initial approximations for XL and YL, respectively.
The rotation angle theta = tan^-1(b/a) is used as the approximation for kappa.
Chapter 11
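The least squares fit for (a, b, TX, TY) can be set up with two observation rows per control point. A NumPy sketch (the function name is ours; theta approximates kappa):

```python
import numpy as np

def conformal_2d(xy, XY):
    """Least squares fit of X = a*x - b*y + TX, Y = b*x + a*y + TY.

    xy : (n, 2) ground coordinates computed from the assumed-vertical photo
    XY : (n, 2) known ground control coordinates, n >= 2
    Returns (a, b, TX, TY, theta) with theta = atan2(b, a).
    """
    x, y = np.asarray(xy, float).T
    X, Y = np.asarray(XY, float).T
    n = len(x)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([x, -y, np.ones(n), np.zeros(n)])   # X-equation rows
    A[1::2] = np.column_stack([y,  x, np.zeros(n), np.ones(n)])   # Y-equation rows
    L = np.empty(2 * n)
    L[0::2], L[1::2] = X, Y
    a, b, TX, TY = np.linalg.lstsq(A, L, rcond=None)[0]
    return a, b, TX, TY, np.arctan2(b, a)
```

With more than 2 control points, the system is redundant and `lstsq` returns the least squares solution; TX, TY then seed XL, YL and theta seeds kappa.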

Space Resection by Collinearity: Summary
(To determine the 6 elements of exterior orientation using the collinearity condition.)
Summary of steps:
1. Calculate H (ZL).
2. Compute ground coordinates from the assumed vertical photo for the control points.
3. Compute the 2D conformal coordinate transformation parameters by a least squares solution using the control points (whose coordinates are known in both the photo coordinate system and the ground control coordinate system).
4. Form the linearized observation equations.
5. Form and solve the normal equations.
6. Add the corrections and iterate till the corrections become negligible.
Summary of initializations:
- Omega, Phi -> zero, zero
- Kappa -> Theta
- XL, YL -> TX, TY
- ZL -> flying height H
Chapter 11
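Steps 4 to 6 can be sketched end to end. One substitution to note: instead of analytic Taylor-series partial derivatives, this sketch forms the linearized observation equations with a numeric (finite-difference) Jacobian; the function names and the xo = yo = 0 simplification are ours, not from the slides:

```python
import numpy as np

def rot(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix M."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1.0, 0.0, 0.0], [0.0, co, so], [0.0, -so, co]])
    Mp = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    Mk = np.array([[ck, sk, 0.0], [-sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return Mk @ Mp @ Mo

def project(params, f, ground):
    """Collinearity prediction of (xa, ya) for each control point (xo = yo = 0 assumed)."""
    om, ph, ka, XL, YL, ZL = params
    d = ground - np.array([XL, YL, ZL])     # (n, 3) ground-minus-station differences
    num = d @ rot(om, ph, ka).T             # columns: m1.d, m2.d, m3.d per point
    return (-f * num[:, :2] / num[:, 2:3]).ravel()

def resect(photo_xy, ground, f, params0, iters=25):
    """Gauss-Newton space resection: iterate linearized corrections until negligible."""
    p = np.asarray(params0, float).copy()
    obs = np.asarray(photo_xy, float).ravel()
    for _ in range(iters):
        r = obs - project(p, f, ground)     # observation residuals
        J = np.empty((obs.size, 6))
        for j in range(6):                  # finite-difference Jacobian, column by column
            dp = np.zeros(6)
            dp[j] = 1e-6
            J[:, j] = (project(p + dp, f, ground) - project(p, f, ground)) / 1e-6
        delta = np.linalg.lstsq(J, r, rcond=None)[0]   # least squares correction
        p += delta
        if np.max(np.abs(delta)) < 1e-10:   # corrections negligible: stop iterating
            break
    return p
```

Run on synthetic data (project known exterior orientation parameters to generate "measurements", then recover them from a perturbed start), the iteration converges to the true parameters in a handful of passes.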

If space resection is used to determine the elements of exterior orientation for both photos of a stereopair, then object point coordinates for points that lie in the stereo overlap area can be calculated by the procedure known as space intersection.

03/06/16

Virtual Environment Lab, UTA

43

Space Intersection By Collinearity
Use: To determine object point coordinates for points that lie in the stereo overlap area of the two photographs that make up a stereopair.
Principle: Corresponding rays to the same object point from the two photos of a stereopair must intersect at that point.
For a ground point A:
- Collinearity equations are written for image point a1 of the left photo (of the stereopair), and for image point a2 of the right photo, giving 4 equations.
- The only unknowns are XA, YA and ZA.
- Since the equations have been linearized using Taylor's theorem, initial approximations are required for each point whose object space coordinates are to be computed.
- Initial approximations are determined using the parallax equations.
Chapter 11

Parallax Equations
Parallax equations:
1) pa = xa - xa'
2) hA = H - B.f/pa
3) XA = B.xa/pa
4) YA = B.ya/pa
where
- hA is the elevation of point A above the datum
- H is the flying height above the datum
- B is the air base (the distance between the exposure stations)
- f is the focal length of the camera
- pa is the parallax of point A
- XA and YA are the ground coordinates of point A in the coordinate system whose origin is at the datum point P of the left photo, whose X axis is in the same vertical plane as the x and x' flight axes, and whose Y axis passes through the datum point of the left photo and is perpendicular to the X axis
- xa and ya are the photo coordinates of point a measured wrt the flight line axes on the left photo (xa' is measured on the right photo)
Chapter 8
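Plugging numbers through the four equations (the function name is ours; xa_right is the x coordinate of the same point measured on the right photo):

```python
def parallax_point(xa, ya, xa_right, H, B, f):
    """Elevation and ground coordinates of point A from its stereo parallax.

    xa, ya   : photo coordinates of a on the left photo (flight-line axes)
    xa_right : x photo coordinate of the same point on the right photo
    H, B, f  : flying height above datum, air base, focal length
    """
    pa = xa - xa_right          # parallax of point A
    hA = H - B * f / pa         # elevation of A above datum
    XA = B * xa / pa            # ground X (origin at the datum point below the left photo)
    YA = B * ya / pa            # ground Y
    return pa, hA, XA, YA
```

These values serve as the initial approximations for XA, YA, ZA in the linearized space intersection solution.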

Applying Parallax Equations to Space Intersection
For applying the parallax equations, H and B have to be determined.
Since the X, Y, Z coordinates of both exposure stations are known,
- H is taken as the average of ZL1 and ZL2, and
- B = [ (XL2 - XL1)^2 + (YL2 - YL1)^2 ]^(1/2)
The resulting coordinates from the parallax equations are in an arbitrary ground coordinate system. To convert them to, for instance, WGS84, a conformal coordinate transformation is used.

Chapter 11

Now that we know how to determine the object space coordinates of a common point in a stereopair, we can examine the overall procedure for all the points in the stereopair...

03/06/16

Virtual Environment Lab, UTA

47

Analytical Stereomodel
Aerial photographs for most applications are taken so that adjacent photos overlap by more than 50%. Two adjacent photographs that overlap in this manner form a stereopair.
Object points that appear in the overlap area of a stereopair constitute a stereomodel. The mathematical calculation of 3D ground coordinates of points in the stereomodel by analytical photogrammetric techniques forms an analytical stereomodel.
The process of forming an analytical stereomodel involves 3 primary steps:
1. Interior orientation (also called photo coordinate refinement): mathematically recreates the geometry that existed in the camera when a particular photograph was exposed.
2. Relative (exterior) orientation: determines the relative angular attitude and positional displacement between the photographs that existed when the photos were taken.
3. Absolute (exterior) orientation: determines the absolute angular attitude and positions of both photographs.
After these three steps are achieved, points in the analytical stereomodel will have object coordinates in the ground coordinate system.
Chapter 11

Analytical Relative Orientation
Analytical relative orientation involves defining (assuming) certain elements of exterior orientation and calculating the remaining ones.
Initialization:
If the parameters are set to the values mentioned (i.e., omega1 = phi1 = kappa1 = XL1 = YL1 = 0, ZL1 = f, XL2 = b),
then the scale of the stereomodel is approximately equal to the photo scale.
Now, the x and y photo coordinates of the left photo are good approximations for the X and Y object space coordinates, and zeros are good approximations for the Z object space coordinates.
Chapter 11

Analytical Relative Orientation
1) All exterior orientation elements, excluding ZL1, of the left photo of the stereopair are set to zero values.
2) For convenience, ZL of the left photo (ZL1) is set to f, and XL of the right photo (XL2) is set to the photo base b.
3) This leaves 5 elements of the right photo that must be determined.
4) Using the collinearity condition, a minimum of 5 object points is required to solve for the unknowns, since each point used in relative orientation is a net gain of one equation for the overall solution (since their X, Y and Z coordinates are unknowns too).

  No. of points in overlap | No. of equations | No. of unknowns
  1                        | 4 (2+2)          | 5 + 3 = 8
  2                        | 4 + 4 = 8        | 8 + 3 = 11
  3                        | 8 + 4 = 12       | 11 + 3 = 14
  4                        | 12 + 4 = 16      | 14 + 3 = 17
  5                        | 16 + 4 = 20      | 17 + 3 = 20
  6                        | 20 + 4 = 24      | 20 + 3 = 23

Chapter 11

Analytical Absolute Orientation
Stereomodel coordinates of tie points are related to their 3D coordinates in a (real, earth-based) ground coordinate system. For a small stereomodel, such as that computed from one stereopair, analytical absolute orientation can be performed using a 3D conformal coordinate transformation.
It requires a minimum of two horizontal and three vertical control points (20 equations with 8 unknowns plus the 12 exposure station parameters for the two photos: a closed-form solution). Additional control points provide redundancy, enabling a least squares solution.
(Horizontal control: the position of the point in object space is known wrt a horizontal datum; vertical control: the elevation of the point is known wrt a vertical datum.)
Once the transformation parameters have been computed, they can be applied to the remaining stereomodel points, including the XL, YL and ZL coordinates of the left and right photographs. This gives the coordinates of all stereomodel points in the ground system.

  Control                          | No. of equations       | No. of additional unknowns | Total no. of unknowns
  1 horizontal control point       | 2 per photo => 4 total | 1 unknown Z value          | 12 exterior orientation parameters + 1 = 13
  1 vertical control point         | 2 per photo => 4 total | 2 unknown X and Y values   | 12 + 2 = 14
  2 horizontal control points      | 4 * 2 = 8 equations    | 1 * 2 = 2                  | 12 + 2 = 14
  3 vertical control points        | 4 * 3 = 12 equations   | 2 * 3 = 6                  | 12 + 6 = 18
  2 horizontal + 3 vertical points | 8 + 12 = 20 equations  | 2 + 6 = 8                  | 12 + 8 = 20

Chapter 16 & 11

As already mentioned while covering camera calibration, camera calibration can also be included in a combined interior-relative-absolute orientation. This is known as analytical self-calibration.


Analytical Self-Calibration
Analytical self-calibration is a computational process wherein camera calibration parameters are included in the photogrammetric solution, generally in a combined interior-relative-absolute orientation.
The process uses collinearity equations that have been augmented with additional terms to account for adjustment of the calibrated focal length, principal-point offsets, and symmetric radial and decentering lens distortion. In addition, the equations might include corrections for atmospheric refraction.
With the inclusion of the extra unknowns, it follows that additional independent equations will be needed to obtain a solution.

Chapter 11

So far we have assumed that a certain amount of ground control is available to us for use in space resection, etc. Let's take a look at the acquisition of these ground control points.


Ground Control for Aerial Photogrammetry
Ground control consists of any points
- whose positions are known in an object-space coordinate system, and
- whose images can be positively identified in the photographs.
Classification of photogrammetric control:
1. Horizontal control: the position of the point in object space is known wrt a horizontal datum.
2. Vertical control: the elevation of the point is known wrt a vertical datum.
Images of acceptable photo control points must satisfy two requirements:
1. They must be sharp, well defined and positively identified on all photos, and
2. They must lie in favorable locations in the photographs.

Chapter 16
03/06/16

Virtual Environment Lab, UTA

56

Photo Control Points for Aerotriangulation

The number of ground-surveyed photo control points needed varies with
1. the size, shape and nature of the area,
2. the accuracy required, and
3. the procedures, instruments, and personnel to be used.

In general, the denser the ground control, the better the accuracy in the supplemental control determined by aerotriangulation (the thesis of our targeting project!).
There is an optimum number, which affords maximum economic benefit while maintaining a satisfactory standard of accuracy.
The methods used for establishing ground control are:
1. traditional land surveying techniques, and
2. using the Global Positioning System (GPS).

Chapter 16

Ground Control by GPS

While GPS is most often used to compute horizontal position, it is capable of determining vertical position (elevation) to nearly the same level of accuracy. Static GPS can be used to determine coordinates of unknown points with errors at the centimeter level.
Note: The computed vertical position will be related to the ellipsoid, not the geoid or mean sea level. To relate the GPS-derived elevation (ellipsoid height) to the more conventional elevation (orthometric height), a geoid model is necessary. However, if the ultimate reference frame is related to the ellipsoid, this should not pose a problem.

Chapter 16

Having covered processing techniques for single points, we examine the process at a higher level, for all the photographs.

Aerotriangulation

Aerotriangulation is the process of determining the X, Y, and Z ground coordinates of individual points based on photo coordinate measurements.
- It consists of photo measurement followed by numerical interior, relative, and absolute orientation, from which ground coordinates are computed.
- For large projects, the number of control points needed is extensive, and the cost can be extremely high.
- Much of this needed control can be established by aerotriangulation from only a sparse network of field-surveyed ground control.
- Using GPS in the aircraft to provide coordinates of the camera eliminates the need for ground control entirely; in practice, a small amount of ground control is still used to strengthen the solution.

Chapter 17

Pass Points for Aerotriangulation

Pass points are usually selected as 9 points in a format of 3 rows x 3 columns, equally spaced over the photo.
- The points may be images of natural, well-defined objects that appear in the required photo areas; if such points are not available, pass points may be artificially marked.
- Digital image matching can be used to select points in the overlap areas of digital images and automatically match them between adjacent images; this is an essential step of automatic aerotriangulation.

Chapter 17

Analytical Aerotriangulation

The most elementary approach consists of the following basic steps:
1. relative orientation of each stereomodel,
2. connection of adjacent models to form continuous strips and/or blocks, and
3. simultaneous adjustment of the photos from the strips and/or blocks to field-surveyed ground control.

X and Y coordinates of pass points can be located to an accuracy of 1/15,000 of the flying height, and Z coordinates can be located to an accuracy of 1/10,000 of the flying height. With specialized equipment and procedures, planimetric accuracy of 1/350,000 of the flying height and vertical accuracy of 1/180,000 have been achieved.

Chapter 17

Analytical Aerotriangulation Technique

Several variations exist. Basically, all methods consist of writing equations that express the unknown elements of exterior orientation of each photo in terms of camera constants, measured photo coordinates, and ground coordinates.
- The equations are solved to determine the unknown orientation parameters, and either simultaneously or subsequently, the coordinates of pass points are calculated.
- By far the most common condition equations used are the collinearity equations.
- Analytical procedures like bundle adjustment can simultaneously enforce the collinearity condition on hundreds of photographs.

Chapter 17

Simultaneous Bundle Adjustment

Adjusting all photogrammetric measurements to ground control values in a single solution is known as a bundle adjustment. The process is so named because of the many light rays that pass through each lens position, constituting a bundle of rays.
The bundles from all photos are adjusted simultaneously so that corresponding light rays intersect at the positions of the pass points and control points on the ground.
After the normal equations have been formed, they are solved for the unknown corrections to the initial approximations for the exterior orientation parameters and object space coordinates. The corrections are then added to the approximations, and the procedure is repeated until the estimated standard deviation of unit weight converges.

Chapter 17

Quantities in Bundle Adjustment

The unknown quantities to be obtained in a bundle adjustment consist of:
1. the X, Y and Z object space coordinates of all object points, and
2. the exterior orientation parameters of all photographs.

The observed (measured) quantities associated with a bundle adjustment are:
1. x and y photo coordinates of images of object points,
2. X, Y and/or Z coordinates of ground control points, and
3. direct observations of the exterior orientation parameters of the photographs.

The first group of observations, photo coordinates, comprises the fundamental photogrammetric measurements. The next group consists of coordinates of control points determined through field survey. The final set can be estimated using an airborne GPS control system as well as inertial navigation systems (INSs), which have the capability of measuring the angular attitude of a photograph.

Chapter 17

Bundle Adjustment on a Photo Block

Consider a small block consisting of 2 strips with 4 photos per strip, with 20 pass points and 6 control points, totaling 26 object points; 6 of those points also serve as tie points connecting the two adjacent strips.

Chapter 17

Bundle Adjustment on a Photo Block

To repeat, consider a small block consisting of 2 strips with 4 photos per strip, with 20 pass points and 6 control points, totaling 26 object points; 6 of those also serve as tie points connecting the two adjacent strips.
In this case:
- The number of unknown object coordinates = no. of object points x no. of coordinates per object point = 26 x 3 = 78
- The number of unknown exterior orientation parameters = no. of photos x no. of exterior orientation parameters per photo = 8 x 6 = 48
- Total number of unknowns = 78 + 48 = 126
- The number of photo coordinate observations = no. of imaged points x no. of photo coordinates per point = 76 x 2 = 152
- The number of ground control observations = no. of 3D control points x no. of coordinates per point = 6 x 3 = 18
- The number of exterior orientation parameter observations = no. of photos x no. of exterior orientation parameters per photo = 8 x 6 = 48

(No. of imaged points = 4 x 8 + 4 x 11 = 76, since photos 1, 4, 5 and 8 image 8 points each, while photos 2, 3, 6 and 7 image 11 points each.)

If all 3 types of observations are included, there will be a total of 152 + 18 + 48 = 218 observations; if only the first two types are included, there will be only 152 + 18 = 170 observations.
Thus, regardless of whether the exterior orientation parameters were observed, a least squares solution is possible, since the number of observations in either case (218 or 170) is greater than the number of unknowns (126).

Chapter 17
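The bookkeeping above can be checked with a few lines of Python (the per-photo point counts are those assumed on this slide):

```python
# Observation/unknown bookkeeping for the 2-strip, 8-photo block example.
n_photos = 8
n_points = 26            # 20 pass points + 6 ground control points
n_control = 6            # 3D control points

unknown_coords = n_points * 3            # 78
unknown_eo = n_photos * 6                # 48
unknowns = unknown_coords + unknown_eo   # 126

# photos 1, 4, 5, 8 image 8 points each; photos 2, 3, 6, 7 image 11 each
imaged_points = 4 * 8 + 4 * 11           # 76
photo_obs = imaged_points * 2            # 152
control_obs = n_control * 3              # 18
eo_obs = n_photos * 6                    # 48

print(unknowns, photo_obs + control_obs + eo_obs, photo_obs + control_obs)
# 126 218 170
```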

The next question is, how are these equations solved?
Well, we start with observation equations, which are the collinearity condition equations that we have already seen; we linearize them, and then use a least squares procedure to find the unknowns.
We will start by refreshing our memories on the least squares solution of an over-determined equation set.

Relevant Definitions

Observations are the directly observed (or measured) quantities, which contain random errors.
True value is the theoretically correct or exact value of a quantity. It can never be determined, because no matter how accurate, an observation will always contain small random errors.
Accuracy is the degree of conformity to the true value. Since the true value of a continuous physical quantity can never be known, accuracy is likewise never known; it can only be estimated. Sometimes accuracy can be assessed by checking against an independent, higher-accuracy standard.
Precision is the degree of refinement of a quantity. The level of precision can be assessed by making repeated measurements and checking the consistency of the values. If the values are very close to each other, the measurements have high precision, and vice versa.

Appendix A & B

Relevant Definitions

Error is the difference between any measured quantity and the true value for that quantity.
Types of errors:
- Random errors (accidental and compensating)
- Systematic errors (cumulative; measured and modeled to compensate)
- Mistakes or blunders (avoided as far as possible; detected and eliminated)

Most probable value (MPV) is that value for a measured or indirectly determined quantity which, based upon the observations, has the highest probability. The MPV of a quantity directly and independently measured with observations of equal weight is simply the mean:

    MPV = (Σx) / m

where Σx is the sum of the individual measurements, and m is the number of observations.

Appendix A & B

Relevant Definitions

Residual is the difference between any measured quantity and the most probable value for that quantity. It is the value which is dealt with in adjustment computations, since errors are indeterminate. The term "error" is frequently used when "residual" is in fact meant.
Degrees of freedom is the number of redundant observations (those in excess of the number actually needed to calculate the unknowns).
Weight is the relative worth of an observation compared to any other observation. Measurements are weighted in adjustment computations according to their precisions. Logically, a precisely measured value should be weighted more in an adjustment, so that the correction it receives is smaller than that received by less precise measurements. If the same equipment and procedures are used on a group of measurements, each observation is given an equal weight.

Appendix B

Relevant Definitions
Standard deviation (also called root mean square error or 68 percent error) is a
quantity used to express the precision of a group of measurements.
For m number of direct, equally weighted observations of a quantity, its standard
deviation is:

v2
S
r

Where v2 is the sum of squares of the residuals and r is


the number of degrees of freedom (r=m-1)

According to the theory of probability, 68% of


the observations in a group should have
residuals smaller than the standard deviation.
The area between S and +S in a Gaussian
distribution curve (also called Normal
distribution curve) of the residual, which is
same as the area between average-S and
average+S on the curve of measurements, is
68%.

Appendix B
03/06/16

Virtual Environment Lab, UTA

72

Fundamental Condition of Least Squares

For a group of equally weighted observations, the fundamental condition enforced in a least squares adjustment is that the sum of the squares of the residuals is minimized. Suppose a group of m equally weighted measurements were taken with residuals v1, v2, v3, ..., vm; then:

    Σ(i=1 to m) vi² = v1² + v2² + v3² + ... + vm² = minimum

Basic assumptions underlying least squares theory:
1. The number of observations is large.
2. The frequency distribution of the errors is normal (Gaussian).

Appendix B

Applying Least Squares

Steps:
1) Write observation equations (one for each measurement) relating measured values to their residual errors and the unknown parameters.
2) Obtain an equation for each residual error from the corresponding observation.
3) Square and add the residuals.
4) To minimize Σv², take partial derivatives with respect to each unknown variable and set them equal to zero.
5) This gives a set of equations called normal equations, which are equal in number to the number of unknowns.
6) Solve the normal equations to obtain the most probable values for the unknowns.

Appendix B

Least Squares Example Problem

Let:
- AB be a line segment,
- C divide AB into 2 parts,
- D be the midpoint of AC, i.e. AD = DC = x (so AC = 2x), and
- E and F trisect CB, i.e. CE = EF = FB = y (so CB = 3y).

Four distance measurements give the corresponding observation equations:

    x + 3y = 10.1 + v1
    x + 2y = 6.9 + v2
    2y = 6.2 + v3
    2x + y = 4.8 + v4

In this least squares problem, the coefficients of the unknowns in the observation equations are other than zero and unity. We have 4 observation equations (m = 4) in 2 variables/unknowns (n = 2). Take Σv² and differentiate partially with respect to the unknowns to get 2 equations in 2 unknowns; the solution gives the most probable values of x and y.

Note: If D is not the exact midpoint, and E and F do not trisect CB into exactly equal parts, the actual x and y values may differ from segment to segment. We only get the most probable values for x and y!

Formulating Equations

Step 1) Observation equations (one for each measurement, including a residual for each observation):

    x + 3y = 10.1 + v1
    x + 2y = 6.9 + v2
    2y = 6.2 + v3
    2x + y = 4.8 + v4

Step 2) Equation for each residual error from the corresponding observation:

    v1 = x + 3y - 10.1
    v2 = x + 2y - 6.9
    v3 = 2y - 6.2
    v4 = 2x + y - 4.8

Step 3) Square and add the residuals:

    Σv² = v1² + v2² + v3² + v4²
        = (x + 3y - 10.1)² + (x + 2y - 6.9)² + (2y - 6.2)² + (2x + y - 4.8)²

Normal Equations and Solution

Step 4) Take partial derivatives of Σv² with respect to each unknown and set them to zero (normal equations):

    ∂(Σv²)/∂x = 2(x + 3y - 10.1) + 2(x + 2y - 6.9) + 0 + 2(2x + y - 4.8)·2 = 0
    ∂(Σv²)/∂y = 2(x + 3y - 10.1)·3 + 2(x + 2y - 6.9)·2 + 2(2y - 6.2)·2 + 2(2x + y - 4.8) = 0

Simplified normal equations:

    12x + 14y - 53.2 = 0
    14x + 36y - 122.6 = 0

Step 5) Solving (after dividing both equations by 2):

    [ 6   7 ] [ x ]   [ 26.6 ]           [ x ]   [ 6   7 ]⁻¹ [ 26.6 ]   [ 0.8424 ]
    [ 7  18 ] [ y ] = [ 61.3 ]    so     [ y ] = [ 7  18 ]   [ 61.3 ] = [ 3.0780 ]

General Form of Observation Equations

Step 1: m linear observation equations of equal weight containing n unknowns:
- For m < n: underdetermined set of equations.
- For m = n: the solution is unique.
- For m > n: m - n observations are redundant, and least squares can be applied to find the MPVs.

    a11·X1 + a12·X2 + a13·X3 + ... + a1n·Xn = L1 + v1
    a21·X1 + a22·X2 + a23·X3 + ... + a2n·Xn = L2 + v2
    a31·X1 + a32·X2 + a33·X3 + ... + a3n·Xn = L3 + v3        (equations I)
    ...
    am1·X1 + am2·X2 + am3·X3 + ... + amn·Xn = Lm + vm

where:
- Xj are the unknowns,
- aij are the coefficients of the unknowns Xj,
- Li are the observations, and
- vi are the residuals.

Appendix B

General Form of Normal Equations

The equations obtained at the end of Step 4 are:

    Σ(ai1·ai1)X1 + Σ(ai1·ai2)X2 + Σ(ai1·ai3)X3 + ... + Σ(ai1·ain)Xn = Σ(ai1·Li)
    Σ(ai2·ai1)X1 + Σ(ai2·ai2)X2 + Σ(ai2·ai3)X3 + ... + Σ(ai2·ain)Xn = Σ(ai2·Li)
    Σ(ai3·ai1)X1 + Σ(ai3·ai2)X2 + Σ(ai3·ai3)X3 + ... + Σ(ai3·ain)Xn = Σ(ai3·Li)        (equations II)
    ...
    Σ(ain·ai1)X1 + Σ(ain·ai2)X2 + Σ(ain·ai3)X3 + ... + Σ(ain·ain)Xn = Σ(ain·Li)

(each sum runs over i = 1 to m)

At Step 1 we have m equations in n variables; at the end of Step 4 we have n equations in n variables.

Appendix B

Matrix Forms of Equations

Equations I (observation equations) in matrix form:

    A·X = L + V        (A is m x n; X is n x 1; L is m x 1; V is m x 1)

Equations II (normal equations) in matrix form:

    (AᵀA)·X = AᵀL
    X = (AᵀA)⁻¹·(AᵀL)

where:

        [ a11  a12  a13  ...  a1n ]        [ X1 ]        [ L1 ]        [ v1 ]
        [ a21  a22  a23  ...  a2n ]        [ X2 ]        [ L2 ]        [ v2 ]
    A = [ a31  a32  a33  ...  a3n ]    X = [ X3 ]    L = [ L3 ]    V = [ v3 ]
        [ ...  ...  ...  ...  ... ]        [ ... ]       [ ... ]       [ ... ]
        [ am1  am2  am3  ...  amn ]        [ Xn ]        [ Lm ]        [ vm ]

Appendix B
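As a sketch (not part of the original slides), the line-segment example can be solved directly with NumPy using X = (AᵀA)⁻¹AᵀL:

```python
import numpy as np

# Observation equations A·X = L + V for the line-segment example
A = np.array([[1.0, 3.0],
              [1.0, 2.0],
              [0.0, 2.0],
              [2.0, 1.0]])
L = np.array([10.1, 6.9, 6.2, 4.8])

# Normal equations: (A^T A) X = A^T L
N = A.T @ A                 # [[6, 7], [7, 18]]
K = A.T @ L                 # [26.6, 61.3]
X = np.linalg.solve(N, K)   # most probable values

print(X)                    # approximately [0.8424, 3.0780]
```

This reproduces the hand solution of Step 5 above.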

Standard Deviations of Residuals

The observation equations in matrix form give the residuals:

    V = A·X - L

The standard deviation of unit weight for an unweighted adjustment is:

    S0 = sqrt( VᵀV / r )

The standard deviations of the adjusted quantities are:

    S_Xi = S0 · sqrt( Q_XiXi )

where:
- r is the number of degrees of freedom, equal to the number of observations minus the number of unknowns, i.e. r = m - n,
- S_Xi is the standard deviation of the ith adjusted quantity, i.e. the quantity in the ith row of the X matrix,
- S0 is the standard deviation of unit weight, and
- Q_XiXi is the element in the ith row and ith column of the matrix (AᵀA)⁻¹ in the unweighted case, or of the matrix (AᵀWA)⁻¹ in the weighted case.

Appendix B

Standard Deviations in Example

For our example problem:

        [ 1  3 ]                          [ 10.1 ]
    A = [ 1  2 ]    X = [ 0.8424 ]    L = [  6.9 ]
        [ 0  2 ]        [ 3.0780 ]        [  6.2 ]
        [ 2  1 ]                          [  4.8 ]

    V = A·X - L = [ -0.0236   0.0984   -0.0440   -0.0372 ]ᵀ

    S0 = sqrt( VᵀV / r ) = sqrt( 0.01356 / 2 ) = 0.0823

    AᵀA = [ 6   7 ]    so    (AᵀA)⁻¹ = (1/59) · [ 18  -7 ]
          [ 7  18 ]                             [ -7   6 ]

With S_Xi = S0 · sqrt(Q_XiXi), where Q_XiXi is the ith diagonal element of (AᵀA)⁻¹:

    Sx = 0.0823 · sqrt(18/59) = 0.0455  and  Sy = 0.0823 · sqrt(6/59) = 0.0263
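A quick numerical check (a sketch using NumPy, with Q_XiXi taken from the diagonal of (AᵀA)⁻¹, per its definition on the previous slide):

```python
import numpy as np

A = np.array([[1.0, 3.0], [1.0, 2.0], [0.0, 2.0], [2.0, 1.0]])
L = np.array([10.1, 6.9, 6.2, 4.8])

X = np.linalg.solve(A.T @ A, A.T @ L)   # most probable values
V = A @ X - L                           # residuals

r = A.shape[0] - A.shape[1]             # degrees of freedom: m - n = 2
S0 = np.sqrt(V @ V / r)                 # standard deviation of unit weight

Q = np.linalg.inv(A.T @ A)              # cofactor matrix (A^T A)^-1
S = S0 * np.sqrt(np.diag(Q))            # standard deviations of x and y

print(np.round(S0, 4), np.round(S, 4))  # 0.0823 [0.0455 0.0263]
```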

Linearization of our Non-linear Equation Set

- Our least squares solution was for a linear set of equations.
- Remember that in all our photogrammetric equations we have sines, cosines, etc.
- We need to linearize.
- We use the Taylor series expansion.

Review of Collinearity Equations

The collinearity equations:

    xa = xo - f · [ m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]

    ya = yo - f · [ m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]

These equations are nonlinear and involve 9 unknowns:
1. omega, phi, kappa inherent in the m's,
2. the object point coordinates (XA, YA, ZA), and
3. the exposure station coordinates (XL, YL, ZL).

where:
- xa, ya are the photo coordinates of image point a,
- XA, YA, ZA are the object space coordinates of object/ground point A,
- XL, YL, ZL are the object space coordinates of the exposure station,
- f is the camera focal length,
- xo, yo are the coordinates of the principal point, and
- the m's are functions of the rotation angles omega, phi, kappa (as derived earlier).

Ch. 11 & App D
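A minimal NumPy sketch of these equations (function names are mine; the rotation matrix follows the sequential omega-phi-kappa convention): given the angles, the exposure station, and an object point, it returns the projected photo coordinates.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix M (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo

def collinearity(angles, station, point, f, x0=0.0, y0=0.0):
    """Project object point (XA, YA, ZA) into photo coordinates (xa, ya)."""
    m = rotation_matrix(*angles)
    d = np.asarray(point) - np.asarray(station)   # (XA-XL, YA-YL, ZA-ZL)
    q = m[2] @ d                                  # denominator
    xa = x0 - f * (m[0] @ d) / q
    ya = y0 - f * (m[1] @ d) / q
    return xa, ya

# Vertical photo (all angles zero), flying height 1000 m, f = 152.4 mm:
xa, ya = collinearity((0.0, 0.0, 0.0), (0.0, 0.0, 1000.0),
                      (100.0, 50.0, 0.0), f=152.4)
print(round(xa, 2), round(ya, 2))   # 15.24 7.62
```

For a truly vertical photo this reduces to the familiar scale relation xa = f·(XA - XL)/(ZL - ZA), which the printed values confirm.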

Linearization of Collinearity Equations

Rewriting the collinearity equations:

    F = xo - f·(r/q) = xa
    G = yo - f·(s/q) = ya

where

    q = m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)
    r = m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL)
    s = m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL)

Applying Taylor's theorem to these equations (using only up to first-order partial derivatives), we get the linearized equations that follow.

Appendix D

Linearized Collinearity Equation Terms

    (∂F/∂ω)₀dω + (∂F/∂φ)₀dφ + (∂F/∂κ)₀dκ + (∂F/∂XL)₀dXL + (∂F/∂YL)₀dYL + (∂F/∂ZL)₀dZL
        + (∂F/∂XA)₀dXA + (∂F/∂YA)₀dYA + (∂F/∂ZA)₀dZA = xa - F₀

    (∂G/∂ω)₀dω + (∂G/∂φ)₀dφ + (∂G/∂κ)₀dκ + (∂G/∂XL)₀dXL + (∂G/∂YL)₀dYL + (∂G/∂ZL)₀dZL
        + (∂G/∂XA)₀dXA + (∂G/∂YA)₀dYA + (∂G/∂ZA)₀dZA = ya - G₀

where:
- F₀, G₀ are the functions F and G evaluated at the initial approximations for the 9 unknowns;
- (∂F/∂ω)₀, (∂F/∂φ)₀, (∂G/∂ω)₀, (∂G/∂φ)₀, etc., are the partial derivatives of F and G with respect to the indicated unknowns, evaluated at the initial approximations; and
- dω, dφ, dκ, etc., are unknown corrections to be applied to the initial approximations (angles are in radians).

Appendix D

Simplified Linearized Collinearity Equations

Since the photo coordinates xa and ya are measured values, if the equations are to be used in a least squares solution, residual terms must be included to make the equations consistent. The following simplified forms of the linearized collinearity equations include these residuals:

    b11·dω + b12·dφ + b13·dκ - b14·dXL - b15·dYL - b16·dZL + b14·dXA + b15·dYA + b16·dZA = J + v_xa

    b21·dω + b22·dφ + b23·dκ - b24·dXL - b25·dYL - b26·dZL + b24·dXA + b25·dYA + b26·dZA = K + v_ya

where J = xa - F₀, K = ya - G₀, and the b's are coefficients equal to the partial derivatives.

In linearization using Taylor's series, higher-order terms are ignored; hence these equations are approximations. They are solved iteratively, until the magnitudes of the corrections to the initial approximations become negligible.

Chapter 11
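The b coefficients can be obtained analytically (Appendix D) or, as sketched below, numerically by finite differences of the collinearity functions. This is my own illustration of the linearization, not the slides' derivation; helper names and the sample values are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo

def FG(p, f=152.4, x0=0.0, y0=0.0):
    """F and G as functions of p = (omega, phi, kappa, XL, YL, ZL, XA, YA, ZA)."""
    m = rotation_matrix(p[0], p[1], p[2])
    d = p[6:9] - p[3:6]
    q = m[2] @ d
    return np.array([x0 - f * (m[0] @ d) / q, y0 - f * (m[1] @ d) / q])

def jacobian(p, h=1e-6):
    """Numerical partial derivatives of F and G wrt each of the 9 unknowns."""
    J = np.zeros((2, 9))
    for k in range(9):
        dp = np.zeros(9); dp[k] = h
        J[:, k] = (FG(p + dp) - FG(p - dp)) / (2 * h)
    return J

# Initial approximations: vertical photo at 1000 m over point (100, 50, 0)
p0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1000.0, 100.0, 50.0, 0.0])
B = jacobian(p0)
# Columns 3-5 (exposure station) and 6-8 (object point) have equal
# magnitude and opposite sign, matching the b14..b16 pattern above.
```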

We need to generalize and rewrite the linearized collinearity conditions in matrix form. While looking at the collinearity condition, we were only concerned with one object space point (point A). Let's first generalize and then express the equations in matrix form.

Generalizing the Collinearity Equations

The observation equations which are the foundation of a bundle adjustment are the collinearity equations:

    xij = xo - f · [ m11i(Xj - XLi) + m12i(Yj - YLi) + m13i(Zj - ZLi) ] / [ m31i(Xj - XLi) + m32i(Yj - YLi) + m33i(Zj - ZLi) ]

    yij = yo - f · [ m21i(Xj - XLi) + m22i(Yj - YLi) + m23i(Zj - ZLi) ] / [ m31i(Xj - XLi) + m32i(Yj - YLi) + m33i(Zj - ZLi) ]

These non-linear equations involve 9 unknowns: omega, phi, kappa inherent in the m's, the object point coordinates (Xj, Yj, Zj), and the exposure station coordinates (XLi, YLi, ZLi).

where:
- xij, yij are the measured photo coordinates of the image of point j on photo i, related to the fiducial axis system,
- Xj, Yj, Zj are the coordinates of point j in object space,
- XLi, YLi, ZLi are the coordinates of the eyepoint of the camera for photo i,
- f is the camera focal length,
- xo, yo are the coordinates of the principal point, and
- m11i, m12i, ..., m33i are the rotation matrix terms for photo i.

Ch. 11 & App D

Linearized Equations in Matrix Form

The linearized collinearity equations for point j on photo i can be written as:

    Ḃij Δ̇i + B̈ij Δ̈j = εij + Vij

where:

    Ḃij = [ b11ij  b12ij  b13ij  -b14ij  -b15ij  -b16ij ]
          [ b21ij  b22ij  b23ij  -b24ij  -b25ij  -b26ij ]

    B̈ij = [ b14ij  b15ij  b16ij ]
          [ b24ij  b25ij  b26ij ]

    Δ̇i = [ dωi  dφi  dκi  dXLi  dYLi  dZLi ]ᵀ

    Δ̈j = [ dXj  dYj  dZj ]ᵀ

    εij = [ Jij  Kij ]ᵀ        Vij = [ v_xij  v_yij ]ᵀ

- Matrix Ḃij contains the partial derivatives of the collinearity equations with respect to the exterior orientation parameters of photo i, evaluated at the initial approximations.
- Matrix B̈ij contains the partial derivatives of the collinearity equations with respect to the object space coordinates of point j, evaluated at the initial approximations.
- Matrix Δ̇i contains corrections for the initial approximations of the exterior orientation parameters for photo i.
- Matrix Δ̈j contains corrections for the initial approximations of the object space coordinates of point j.
- Matrix εij contains the measured minus computed x and y photo coordinates for point j on photo i.
- Matrix Vij contains the residuals for the x and y photo coordinates.

Ch. 17

Coming to the actual observations in the observation equations (collinearity conditions), first we consider the photo coordinate observations, then ground control, and finally the exterior orientation parameters.

Weights of Photo Coordinate Observations

Proper weights must be assigned to the photo coordinate observations in order for them to be included in the bundle adjustment. Expressed in matrix form, the weights for the x and y photo coordinate observations of point j on photo i are:

    Wij = σ0² · [ σ²xij     σxijyij ]⁻¹
                [ σyijxij   σ²yij   ]

where σ0² is the reference variance; σ²xij and σ²yij are the variances in xij and yij, respectively; and σxijyij = σyijxij is the covariance of xij with yij.

The reference variance is an arbitrary parameter which can be set equal to 1, and in many cases the covariance in photo coordinates is equal to zero. In this case, the weight matrix for photo coordinates simplifies to:

    Wij = [ 1/σ²xij      0      ]
          [    0      1/σ²yij   ]

Ch. 17
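For instance (my numbers, for illustration only), with σ0² = 1 and photo coordinates measured to a 5 µm standard deviation with zero covariance, the weight matrix is:

```python
import numpy as np

sigma0_sq = 1.0                       # reference variance
cov = np.array([[25.0, 0.0],          # variances in micrometers^2 (5 um std dev)
                [0.0, 25.0]])

W = sigma0_sq * np.linalg.inv(cov)    # weight matrix for one image point
print(W)                              # diagonal with 1/25 = 0.04 entries
```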

Ground Control Observation Equations

The observation equations for ground control coordinates are:

    Xj = Xj⁰⁰ + vXj
    Yj = Yj⁰⁰ + vYj
    Zj = Zj⁰⁰ + vZj

where
- Xj, Yj and Zj are the unknown coordinates of point j,
- Xj⁰⁰, Yj⁰⁰ and Zj⁰⁰ are the measured coordinate values for point j, and
- vXj, vYj and vZj are the coordinate residuals for point j.

Even though the ground control observation equations are linear, in order to be consistent with the collinearity equations, they are also approximated by the first-order terms of a Taylor series:

    Xj⁰ + dXj = Xj⁰⁰ + vXj
    Yj⁰ + dYj = Yj⁰⁰ + vYj
    Zj⁰ + dZj = Zj⁰⁰ + vZj

where
- Xj⁰, Yj⁰ and Zj⁰ are the initial approximations for the coordinates of point j, and
- dXj, dYj and dZj are corrections to the approximations for the coordinates of point j.

Rearranging the terms and expressing in matrix form:

    Δ̈j = C̈j + V̈j

where

    Δ̈j = [ dXj ]       C̈j = [ Xj⁰⁰ - Xj⁰ ]       V̈j = [ vXj ]
          [ dYj ]             [ Yj⁰⁰ - Yj⁰ ]             [ vYj ]
          [ dZj ]             [ Zj⁰⁰ - Zj⁰ ]             [ vZj ]

Ch. 17

Weights of Ground Control Observations

As with the photo coordinate measurements, proper weights must be assigned to the ground control coordinate observations in order for them to be included in the bundle adjustment. Expressed in matrix form, the weights for the X, Y and Z ground control coordinate observations of point j are:

    Ẅj = σ0² · [ σ²Xj    σXjYj   σXjZj ]⁻¹
               [ σYjXj   σ²Yj    σYjZj ]
               [ σZjXj   σZjYj   σ²Zj  ]

where
- σ0² is the reference variance,
- σ²Xj, σ²Yj and σ²Zj are the variances in Xj⁰⁰, Yj⁰⁰ and Zj⁰⁰, respectively,
- σXjYj = σYjXj is the covariance of Xj⁰⁰ with Yj⁰⁰,
- σYjZj = σZjYj is the covariance of Yj⁰⁰ with Zj⁰⁰, and
- σXjZj = σZjXj is the covariance of Xj⁰⁰ with Zj⁰⁰

(Xj⁰⁰, Yj⁰⁰ and Zj⁰⁰ are the measured coordinate values for point j).

Ch. 17

Exterior Orientation Parameters

The final type of observation consists of measurements of the exterior orientation parameters. The form of their observation equations is similar to that of ground control:

    ωi = ωi⁰⁰ + vωi
    φi = φi⁰⁰ + vφi
    κi = κi⁰⁰ + vκi
    XLi = XLi⁰⁰ + vXLi
    YLi = YLi⁰⁰ + vYLi
    ZLi = ZLi⁰⁰ + vZLi

The weight matrix Ẇi for the exterior orientation parameters of photo i has the corresponding form: the reference variance σ0² times the inverse of the full 6 x 6 variance-covariance matrix of the observed parameters ωi, φi, κi, XLi, YLi, ZLi (variances σ²ωi, ..., σ²ZLi on the diagonal, and the covariances, e.g. σωiφi, off the diagonal).

Ch. 17

Now that we have all our observation equations and the observations, the next step in applying least squares is to form the normal equations.

Normal Equations

With the observation equations and weights defined as previously, the full set of normal equations may be formed directly. In matrix form, the full normal equations are:

    N·Δ = K

where the coefficient matrix, vector of unknowns and constant vector have the partitioned block form:

    N = [ Ṅ + Ẇ    N̄     ]        Δ = [ Δ̇ ]        K = [ K̇ + Ẇ·Ċ ]
        [  N̄ᵀ     N̈ + Ẅ  ]            [ Δ̈ ]            [ K̈ + Ẅ·C̈ ]

Here Ṅ + Ẇ is block-diagonal with 6 x 6 blocks Ṅi + Ẇi (one per photo, i = 1, ..., m); N̈ + Ẅ is block-diagonal with 3 x 3 blocks N̈j + Ẅj (one per point, j = 1, ..., n); and N̄ is the m x n array of 6 x 3 submatrices N̄ij. Likewise Δ̇ stacks the Δ̇i, Δ̈ stacks the Δ̈j, and K stacks the K̇i + Ẇi·Ċi and K̈j + Ẅj·C̈j. The submatrices are defined by:

    Ṅi  = Σ(j=1 to n) Ḃijᵀ Wij Ḃij        (6 x 6)
    N̈j  = Σ(i=1 to m) B̈ijᵀ Wij B̈ij        (3 x 3)
    N̄ij = Ḃijᵀ Wij B̈ij                    (6 x 3)
    K̇i  = Σ(j=1 to n) Ḃijᵀ Wij εij        (6 x 1)
    K̈j  = Σ(i=1 to m) B̈ijᵀ Wij εij        (3 x 1)

where m is the number of photos, n is the number of points, i is the photo subscript, and j is the point subscript.
- If point j does not appear on photo i, the corresponding submatrix is a zero matrix.
- The Ẇi contributions to the N matrix and the Ẇi·Ċi contributions to the K matrix are made only when observations for the exterior orientation parameters exist.
- The Ẅj contributions to the N matrix and the Ẅj·C̈j contributions to the K matrix are made only for ground control point observations.

Ch. 17
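The block structure above can be assembled mechanically. The sketch below is my own illustration (not from the slides): it accumulates the Ṅ, N̈, N̄, K̇, K̈ contributions of each image-point observation into the full (6m + 3n)-square system, ignoring the optional direct-observation weight contributions Ẇ and Ẅ; all names and the random demo values are assumptions.

```python
import numpy as np

def assemble_normals(m, n, obs):
    """obs: list of (i, j, Bdot(2x6), Bddot(2x3), W(2x2), eps(2,)) per image point."""
    size = 6 * m + 3 * n
    N = np.zeros((size, size))
    K = np.zeros(size)
    for i, j, Bd, Bdd, W, eps in obs:
        ei = slice(6 * i, 6 * i + 6)                  # EO block of photo i
        pj = slice(6 * m + 3 * j, 6 * m + 3 * j + 3)  # coordinate block of point j
        Nbar = Bd.T @ W @ Bdd                         # Nbar_ij (6x3)
        N[ei, ei] += Bd.T @ W @ Bd                    # Ndot_i contribution
        N[pj, pj] += Bdd.T @ W @ Bdd                  # Nddot_j contribution
        N[ei, pj] += Nbar
        N[pj, ei] += Nbar.T
        K[ei] += Bd.T @ W @ eps                       # Kdot_i contribution
        K[pj] += Bdd.T @ W @ eps                      # Kddot_j contribution
    return N, K

# Demo: 2 photos, 2 points, identity weights, random partials (illustrative only)
rng = np.random.default_rng(0)
obs = [(i, j, rng.standard_normal((2, 6)), rng.standard_normal((2, 3)),
        np.eye(2), rng.standard_normal(2))
       for i in range(2) for j in range(2)]
N, K = assemble_normals(2, 2, obs)
print(N.shape, np.allclose(N, N.T))   # (18, 18) True
```

Solving N·Δ = K then yields the corrections Δ̇i and Δ̈j for one iteration.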

Now that we have the equations ready to solve, we can solve them with the initial approximations and iterate until the iterated solutions no longer change in value.

In aerial photography, if GPS is used to determine the coordinates of the exposure stations, we can include those in the bundle adjustment and reduce the amount of ground control that is required.

Bundle Adjustment with GPS Control

Using GPS in the aircraft to estimate the coordinates of the exposure stations in the adjustment can greatly reduce the number of ground control points required.
Considerations while using GPS control:
1. Object space coordinates obtained by GPS pertain to the phase center of the antenna, but the exposure station is defined as the incident nodal point of the camera lens.
2. The GPS recorder records data at uniform time intervals called epochs (which may be on the order of 1 s each), but the camera shutter operates asynchronously with respect to the GPS fixes.
3. If a GPS receiver operating in the kinematic mode loses lock on too many satellites, the integer ambiguities must be redetermined.

Chapter 17

Additional Precautions regarding Airborne GPS

First, it is recommended that a bundle adjustment with analytical self-calibration be employed when airborne GPS control is used. Often, due to inadequate modeling of atmospheric refraction distortion, strict enforcement of the calibrated principal distance (focal length) of the camera will cause distortions and excessive residuals in photo coordinates. Use of analytical self-calibration will essentially eliminate that effect.
Second, it is essential that appropriate object space coordinate systems be employed in the data reduction. GPS coordinates in a geocentric coordinate system should be converted to local vertical coordinates for the adjustment. After the aerotriangulation is completed, the local vertical coordinates can be converted to whatever system is desired.

Chapter 17

Though all our discussion so far has been for aerial photography, satellite images can also be used for mapping.
In fact, since the launch of the IKONOS, QuickBird, and OrbView-3 satellites, rigorous photogrammetric processing methods similar to those for aerial imagery, such as the block adjustment used to solve aerial blocks totaling hundreds or even thousands of images, have routinely been applied to high-resolution satellite image blocks.

Aerotriangulation with Satellite Images

Satellite images are acquired with linear sensor arrays that scan an image strip while the satellite orbits.
- Each scan line of the scene has its own set of exterior orientation parameters, with the principal point in the center of the line.
- The start position is the projection of the center of row 0 (of an image with m columns and n rows) on the ground.
- Since the satellite is highly stable during acquisition of the image, the exterior orientation parameters can be assumed to vary in a systematic fashion.
- Satellite image data providers supply Rational Polynomial Camera (RPC) coefficients; thus it is possible to block adjust imagery described by an RPC model.

Chapter 17 & Dial and Grodecki (2002)

Aerotriangulation with Satellite Images

The exterior orientation parameters vary systematically as functions of the x coordinate:

ωx = ω0 + a1·x;  φx = φ0 + a2·x;  κx = κ0 + a3·x
XLx = XL0 + a4·x;  YLx = YL0 + a5·x;  ZLx = ZL0 + a6·x + a7·x²

Here,
x is the row number of some image position,
ωx, φx, κx, XLx, YLx, ZLx are the exterior orientation parameters of the sensor when row x was acquired,
ω0, φ0, κ0, XL0, YL0, ZL0 are the exterior orientation parameters of the sensor at the start position, and
a1 through a7 are coefficients which describe the systematic variations of the exterior orientation parameters as the image is acquired.
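The systematic-variation model above is simple enough to evaluate directly. The sketch below is ours, not from the slides, and every numeric value in it is invented purely for illustration; in practice the a1–a7 coefficients come out of the adjustment.

```python
def eo_at_row(x, eo0, a):
    """Exterior orientation of the sensor at scan row x.

    eo0 = (omega0, phi0, kappa0, XL0, YL0, ZL0) at the start position;
    a   = (a1, ..., a7) systematic-variation coefficients.
    """
    omega0, phi0, kappa0, XL0, YL0, ZL0 = eo0
    a1, a2, a3, a4, a5, a6, a7 = a
    return (omega0 + a1 * x,            # omega_x
            phi0 + a2 * x,              # phi_x
            kappa0 + a3 * x,            # kappa_x
            XL0 + a4 * x,               # XL_x
            YL0 + a5 * x,               # YL_x
            ZL0 + a6 * x + a7 * x**2)   # ZL_x carries the quadratic term

# Illustrative call with made-up start position and coefficients:
eo = eo_at_row(100,
               (0.0, 0.0, 0.0, 500000.0, 3000000.0, 680000.0),
               (1e-7, 1e-7, 1e-7, 7.0, 0.1, 0.05, 1e-6))
```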


This procedure of aerotriangulation, however, can only be performed at the ground station by the image providers, who have access to the physical camera model.
For users wishing to block adjust imagery with their own proprietary ground control, or for other reasons, the image providers supply the images with RPCs.


Introduction to RPCs

The RPC camera model is the ratio of two cubic functions of latitude, longitude, and height.

RPC models transform 3D object-space coordinates into 2D image-space coordinates.

RPC models have traditionally been used for rectification and feature extraction, and have recently been extended to block adjustment.


Let's look at the formal RPC mathematical model.
We start by defining the domain of the functional model and its normalization, and then go on to define the actual functions.


RPC Mathematical Model

Separate rational functions are used to express the object-space to line and the object-space to sample coordinate relationships.
Assume that (φ, λ, h) are the geodetic latitude, longitude and height above the WGS84 ellipsoid (in degrees, degrees and meters, respectively) of a ground point, and that (Line, Sample) are the denormalized image-space coordinates of the corresponding image point.
To improve numerical precision, image-space and object-space coordinates are normalized to the range ⟨−1, +1⟩.
Given the object-space coordinates (φ, λ, h) and the latitude, longitude and height offsets and scale factors, we can normalize latitude, longitude and height:

P = (φ − LAT_OFF) / LAT_SCALE
L = (λ − LONG_OFF) / LONG_SCALE
H = (h − HEIGHT_OFF) / HEIGHT_SCALE

The normalized line and sample image-space coordinates (Y and X, respectively) are then calculated from their respective rational polynomial functions f(.) and g(.)
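The normalization step above maps ground coordinates into the unit cube. A minimal sketch, with made-up offsets and scales (real values ship with each image's RPC metadata):

```python
def normalize(lat, lon, h, meta):
    """Normalize geodetic (lat, lon, h) to (P, L, H) in roughly [-1, +1]."""
    P = (lat - meta["LAT_OFF"]) / meta["LAT_SCALE"]
    L = (lon - meta["LONG_OFF"]) / meta["LONG_SCALE"]
    H = (h - meta["HEIGHT_OFF"]) / meta["HEIGHT_SCALE"]
    return P, L, H

# Illustrative offsets/scales (invented, not from any real image):
meta = {"LAT_OFF": 32.70, "LAT_SCALE": 0.10,
        "LONG_OFF": -88.00, "LONG_SCALE": 0.10,
        "HEIGHT_OFF": 100.0, "HEIGHT_SCALE": 500.0}
P, L, H = normalize(32.74, -88.10, 120.0, meta)
```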


Definition of RPC Coefficients

Y = f(φ,λ,h) = NumL(P,L,H) / DenL(P,L,H) = cᵀu / dᵀu
X = g(φ,λ,h) = NumS(P,L,H) / DenS(P,L,H) = eᵀu / fᵀu

where

NumL(P,L,H) = c1 + c2·L + c3·P + c4·H + c5·L·P + c6·L·H + c7·P·H + c8·L² + c9·P² + c10·H² + c11·P·L·H + c12·L³ + c13·L·P² + c14·L·H² + c15·L²·P + c16·P³ + c17·P·H² + c18·L²·H + c19·P²·H + c20·H³
DenL(P,L,H) = 1 + d2·L + d3·P + d4·H + d5·L·P + d6·L·H + d7·P·H + d8·L² + d9·P² + d10·H² + d11·P·L·H + d12·L³ + d13·L·P² + d14·L·H² + d15·L²·P + d16·P³ + d17·P·H² + d18·L²·H + d19·P²·H + d20·H³
NumS(P,L,H) = e1 + e2·L + e3·P + e4·H + e5·L·P + e6·L·H + e7·P·H + e8·L² + e9·P² + e10·H² + e11·P·L·H + e12·L³ + e13·L·P² + e14·L·H² + e15·L²·P + e16·P³ + e17·P·H² + e18·L²·H + e19·P²·H + e20·H³
DenS(P,L,H) = 1 + f2·L + f3·P + f4·H + f5·L·P + f6·L·H + f7·P·H + f8·L² + f9·P² + f10·H² + f11·P·L·H + f12·L³ + f13·L·P² + f14·L·H² + f15·L²·P + f16·P³ + f17·P·H² + f18·L²·H + f19·P²·H + f20·H³

There are 78 rational polynomial coefficients:
u = [1 L P H LP LH PH L² P² H² PLH L³ LP² LH² L²P P³ PH² L²H P²H H³]ᵀ
c = [c1 c2 … c20]ᵀ;  d = [1 d2 … d20]ᵀ;  e = [e1 e2 … e20]ᵀ;  f = [1 f2 … f20]ᵀ

The denormalized RPC models for image j are given by:

Line = p(φ,λ,h) = f(φ,λ,h) · LINE_SCALE + LINE_OFF
Sample = r(φ,λ,h) = g(φ,λ,h) · SAMPLE_SCALE + SAMPLE_OFF
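A minimal sketch of the forward evaluation: build the 20-term monomial vector u, form the two rational functions, and denormalize. The function names are ours, and the coefficient vectors in the example are trivial placeholders (a real sensor's RPC file supplies c, d, e, f and the offsets/scales).

```python
def monomials(P, L, H):
    # 20-term basis u, in the order given above
    return [1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
            P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
            L*L*H, P*P*H, H**3]

def rpc_project(P, L, H, c, d, e, f, meta):
    """Normalized (P, L, H) -> denormalized (Line, Sample)."""
    u = monomials(P, L, H)
    Y = sum(ci * ui for ci, ui in zip(c, u)) / sum(di * ui for di, ui in zip(d, u))
    X = sum(ei * ui for ei, ui in zip(e, u)) / sum(fi * ui for fi, ui in zip(f, u))
    line = Y * meta["LINE_SCALE"] + meta["LINE_OFF"]
    sample = X * meta["SAMPLE_SCALE"] + meta["SAMPLE_OFF"]
    return line, sample

# Toy coefficients: NumL picks out P, NumS picks out L, denominators are 1.
c = [0.0, 0.0, 1.0] + [0.0] * 17
d = [1.0] + [0.0] * 19
e = [0.0, 1.0] + [0.0] * 18
f = [1.0] + [0.0] * 19
meta = {"LINE_SCALE": 1000.0, "LINE_OFF": 1000.0,
        "SAMPLE_SCALE": 1000.0, "SAMPLE_OFF": 1000.0}
line, sample = rpc_project(0.5, -0.25, 0.1, c, d, e, f, meta)
```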

RPC Block Adjustment Model

The proposed RPC block adjustment math model is defined in the image space.
It uses the denormalized RPC models, p and r, to express the object-space to image-space relationship, and the adjustable functions, Δp and Δr, which are added to the rational functions to capture the discrepancies between the nominal and the measured image-space coordinates.
For each image point i on image j, the RPC block adjustment math model is thus defined as follows:

Line_i^(j) = p^(j)(φk, λk, hk) + Δp^(j) + εLi
Sample_i^(j) = r^(j)(φk, λk, hk) + Δr^(j) + εSi

where
εLi and εSi are random unobservable errors,
p^(j) and r^(j) are the given line and sample denormalized RPC models for image j,
Line_i^(j) and Sample_i^(j) are the measured (on image j) line and sample coordinates of the ith image point, corresponding to the kth ground control or tie point with object-space coordinates (φk, λk, hk), and
Δp^(j) and Δr^(j) are the adjustable functions expressing the differences between the measured and the nominal line and sample coordinates of ground control and/or tie points, for image j.

RPC Block Adjustment Model

The following is a general polynomial model, defined in the domain of image coordinates, to represent the adjustable functions Δp and Δr:

Δp = a0 + aS·Sample + aL·Line + aSL·Sample·Line + aL2·Line² + aS2·Sample² + …
Δr = b0 + bS·Sample + bL·Line + bSL·Sample·Line + bL2·Line² + bS2·Sample² + …

The following truncated polynomial model, defined in the domain of image coordinates, is proposed to represent the adjustable functions:

Δp = a0 + aS·Sample + aL·Line
Δr = b0 + bS·Sample + bL·Line
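The truncated model amounts to a small affine correction on top of the nominal RPC projection. A minimal sketch, assuming estimated parameters (a0, aS, aL) and (b0, bS, bL) are already in hand; the function name is ours:

```python
def apply_adjustment(line_nom, sample_nom, a, b):
    """Add the truncated adjustable functions to nominal RPC coordinates.

    a = (a0, aS, aL) for the line correction Δp;
    b = (b0, bS, bL) for the sample correction Δr.
    """
    a0, aS, aL = a
    b0, bS, bL = b
    dp = a0 + aS * sample_nom + aL * line_nom   # Δp
    dr = b0 + bS * sample_nom + bL * line_nom   # Δr
    return line_nom + dp, sample_nom + dr

# Offset-only example (aS = aL = bS = bL = 0), with invented bias values:
line_adj, sample_adj = apply_adjustment(1000.0, 2000.0,
                                        (1.5, 0.0, 0.0), (-0.5, 0.0, 0.0))
```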


RPC Block Adjustment Algorithm

Multiple overlapping images can be block adjusted using the RPC adjustment.
The overlapping images, with RPC models expressing the object-space to image-space relationship for each image, are tied together by tie points.
Optionally, the block may also have ground control points with known or approximately known object-space coordinates and measured image positions.
Because there is only one set of observation equations per image point, the index i uniquely identifies that set.


RPC Block Adjustment Algorithm

For the kth ground control point, being the ith tie point on the jth image, the RPC block adjustment (observation) equations are:

F_Li ≡ Line_i^(j) − Δp^(j) − p^(j)(φk, λk, hk) − εLi = 0
F_Si ≡ Sample_i^(j) − Δr^(j) − r^(j)(φk, λk, hk) − εSi = 0

with:

Δp^(j) = a0^(j) + aS^(j)·Sample_i^(j) + aL^(j)·Line_i^(j)
Δr^(j) = b0^(j) + bS^(j)·Sample_i^(j) + bL^(j)·Line_i^(j)

Thus, observation equations are formed for each image point i.
The measured image-space coordinates for each image point i (Line_i^(j) and Sample_i^(j)) constitute the adjustment model observables, while the image model parameters (a0^(j), aS^(j), aL^(j), b0^(j), bS^(j), bL^(j)) and the object-space coordinates (φk, λk, hk) comprise the unknown adjustment model parameters.
The Line_i^(j) and Sample_i^(j) values used in Δp^(j) and Δr^(j) are approximate fixed values for the true image coordinates. Since the true image coordinates are not known, the values of the measured image coordinates are used instead. The effect of using approximate values is negligible because measurements of image coordinates are performed with sub-pixel accuracy.

RPC Block Adjustment Algorithm

The observation equations can be written as F_i = [F_Li, F_Si]ᵀ.

Applying a Taylor series expansion to the RPC block adjustment observation equations results in the following linearized model:

F_i ≈ F_i0 + dF_i = 0,  with w_Pi = F_i0

where:

F_i0 = [F_Li0; F_Si0]
F_Li0 = Line_i^(j) − a00^(j) − aS0^(j)·Sample_i^(j) − aL0^(j)·Line_i^(j) − p^(j)(φk0, λk0, hk0)
F_Si0 = Sample_i^(j) − b00^(j) − bS0^(j)·Sample_i^(j) − bL0^(j)·Line_i^(j) − r^(j)(φk0, λk0, hk0)

and


RPC Block Adjustment Algorithm

dF_i = [dF_Li; dF_Si] = [∂F_Li/∂xᵀ; ∂F_Si/∂xᵀ]|x0 · dx = −[A_Ai  A_Gi] · [dx_A; dx_G]

with

A_Ai = −[∂F_Li/∂x_Aᵀ; ∂F_Si/∂x_Aᵀ]|x_A0;  A_Gi = −[∂F_Li/∂x_Gᵀ; ∂F_Si/∂x_Gᵀ]|x_G0

dx = [dx_A; dx_G]
dx_Gᵀ = [dφ_1 dλ_1 dh_1 … dφ_(m+p) dλ_(m+p) dh_(m+p)]
dx_Aᵀ = [da0^(1) daS^(1) daL^(1) db0^(1) dbS^(1) dbL^(1) … da0^(n) daS^(n) daL^(n) db0^(n) dbS^(n) dbL^(n)]

dx = x − x0 is the vector of unknown corrections to the approximate model parameters x0,
dx_A is the sub-vector of the corrections to the approximate image adjustment parameters for the n images,
dx_G is the sub-vector of the corrections to the approximate object-space coordinates for the m ground control and p tie points,
x0 is the vector of the approximate model parameters, and
ε is a vector of unobservable random errors.

RPC Block Adjustment Algorithm

As a consequence of the previous reductions, the RPC block adjustment model in matrix form reads:

[ A_A  A_G ]            [ w_P ]              [ C_P   0     0  ]
[  I    0  ] · [dx_A] = [ w_A ] − ε,   C_w = [  0   C_A    0  ]
[  0    I  ]   [dx_G]   [ w_G ]              [  0    0    C_G ]

or,  A·dx = w − ε

with

A_Ai = [ 0…0  1  Sample_i^(j)  Line_i^(j)  0  0  0            0…0 ]
       [ 0…0  0  0  0            1  Sample_i^(j)  Line_i^(j)  0…0 ]

A_Gi = [ 0…0  −∂F_Li/∂φk|x0  −∂F_Li/∂λk|x0  −∂F_Li/∂hk|x0  0…0 ]
       [ 0…0  −∂F_Si/∂φk|x0  −∂F_Si/∂λk|x0  −∂F_Si/∂hk|x0  0…0 ]

A_A = [A_A1; …; A_Ai; …];  A_G = [A_G1; …; A_Gi; …]

C_w is the a priori covariance matrix of the vector of misclosures w,
A_A is the first-order design matrix for the image adjustment parameters, and
A_G is the first-order design matrix for the object-space coordinates.
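The per-image block of the design matrix for the adjustment parameters contains only ones and the measured image coordinates, so it can be assembled without any differentiation. A minimal sketch (the function name is ours; padding with zero columns for the other images is omitted):

```python
def a_block(line, sample):
    """2x6 design-matrix block for one image point, parameters ordered
    (a0, aS, aL, b0, bS, bL): row 1 is the line equation, row 2 the
    sample equation."""
    return [[1.0, sample, line, 0.0, 0.0, 0.0],
            [0.0, 0.0, 0.0, 1.0, sample, line]]

blk = a_block(1200.0, 800.0)
```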


RPC Block Adjustment Algorithm

w_P is the vector of misclosures for the image-space coordinates, and w_Pi is the sub-vector of misclosures for the image-space coordinates of the ith image point on the jth image:

w_Pi = [ Line_i^(j) − a00^(j) − aS0^(j)·Sample_i^(j) − aL0^(j)·Line_i^(j) − p^(j)(φk0, λk0, hk0) ]
       [ Sample_i^(j) − b00^(j) − bS0^(j)·Sample_i^(j) − bL0^(j)·Line_i^(j) − r^(j)(φk0, λk0, hk0) ]

w_P = [w_P1; …; w_Pi; …]

w_A = 0 is the vector of misclosures for the image adjustment parameters,
w_G = 0 is the vector of misclosures for the object-space coordinates,
C_P is the a priori covariance matrix of the image-space coordinates,
C_A is the a priori covariance matrix of the image adjustment parameters, and
C_G is the a priori covariance matrix of the object-space coordinates.

A Priori Constraints
This block adjustment model allows the introduction of a priori information using the Bayesian estimation approach, which blurs the distinction between observables and unknowns: both are treated as random quantities.
In the context of least squares, a priori information is introduced in the form of weighted constraints. A priori uncertainty is expressed by CA, CP, and CG.
CA: uncertainty of a priori knowledge of the image adjustment parameters.
In an offset only model, the diagonal elements of CA (the variances of a0 and b0), express the
uncertainty of a priori satellite attitude and ephemeris.
CP: prior knowledge of image-space coordinates for ground control and tie points.
Line and sample variances in CP are set according to the accuracy of the image
measurement process.
CG: prior knowledge of object-space coordinates for ground control and tie points.
In the absence of any prior knowledge of the object coordinates for tie points, the corresponding entries in CG can be made large (e.g., 10,000 m) to produce no significant bias.
One could also remove the weighted constraints for object coordinates of tie points from the
observation equations. But being able to introduce prior information for the object
coordinates of tie points adds flexibility.

RPC Block Adjustment Algorithm

Since the math model is non-linear, the least squares solution needs to be iterated until convergence is achieved. At each iteration step, application of the least squares principle results in the following vector of estimated corrections to the approximate values of the model parameters:

dx̂ = (Aᵀ·Cw⁻¹·A)⁻¹ · Aᵀ·Cw⁻¹·w

At the subsequent iteration step, the vector of approximate model parameters x0 is replaced by the estimated values:

x̂ = x0 + dx̂

The least squares estimation is repeated until convergence is reached.
The covariance matrix of the estimated model parameters is:

C_x̂ = (Aᵀ·Cw⁻¹·A)⁻¹
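The iteration above can be sketched generically. This is our own illustrative helper, not the slides' implementation: `linearize` is a caller-supplied callback that rebuilds A and w at the current parameter values, and the normal matrix is assumed invertible.

```python
import numpy as np

def iterate_ls(x0, linearize, Cw_inv, tol=1e-8, max_iter=20):
    """Iterated weighted least squares for a model A·dx = w.

    Each pass solves (A^T Cw^-1 A) dx = A^T Cw^-1 w and updates x,
    stopping when the correction norm falls below tol. Returns the
    estimate and its covariance (A^T Cw^-1 A)^-1.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        A, w = linearize(x)                 # re-linearize at current x
        N = A.T @ Cw_inv @ A                # normal matrix
        dx = np.linalg.solve(N, A.T @ Cw_inv @ w)
        x = x + dx
        if np.linalg.norm(dx) < tol:        # convergence test
            break
    Cx = np.linalg.inv(N)                   # covariance of the estimate
    return x, Cx

# Tiny usage example: fit y = m*t (already linear, converges immediately).
t = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])

def lin(x):
    return t.reshape(-1, 1), y - t * x[0]   # A, misclosure w

x_hat, Cx = iterate_ls([0.0], lin, np.eye(2))
```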

Experimental Results

Project located in Mississippi, with 6 stereo strips and 40 well-distributed GCPs.
Each of the 12 source images was produced as a georectified image with RPCs.
The images were then loaded onto a SOCET SET workstation running the RPC block adjustment model.
Multiple well-distributed tie points were measured along the edges of the images.
Ground points were selectively changed between control and check points to quantify block adjustment accuracy as a function of the number and distribution of GCPs.
The block adjustment results were obtained using a simple two-parameter, offset-only model with a priori values for a0 and b0 of 0 pixels and a priori standard deviation of 10 pixels.

GCPs         | Avg Error     | Avg Error    | Avg Error  | Std Dev       | Std Dev      | Std Dev
             | Longitude (m) | Latitude (m) | Height (m) | Longitude (m) | Latitude (m) | Height (m)
-------------|---------------|--------------|------------|---------------|--------------|----------
None         | -5.0          | 6.2          | 1.6        | 0.97          | 1.08         | 2.02
1 in center  | -2.0          | 0.5          | -1.1       | 0.95          | 1.07         | 2.02
3 on edge    | -0.4          | 0.3          | 0.2        | 0.97          | 1.06         | 1.96
4 in corners | -0.2          | 0.3          | 0.0        | 0.95          | 1.06         | 1.95
All 40 GCPs  | 0.0           | 0.0          | 0.0        | 0.55          | 0.75         | 0.50

When all 40 GCPs are used, the ground control overwhelms the tie points and the a priori constraints, effectively adjusting each strip separately so as to minimize control point errors on that individual strip.


RPC - Conclusion

The RPC camera model provides a simple, fast and accurate representation of the IKONOS physical camera model.

If the a priori knowledge of exposure station position and angles permits a small angle approximation, then adjustment of the exterior orientation reduces to a simple bias in image space.

Due to the high accuracy of IKONOS, block adjustment can be accomplished in the image space even without ground control.

RPC models are equally applicable to a variety of imaging systems and so could become a standardized representation of their image geometry.

From simulation and numerical examples, it is seen that this method is as accurate as the ground station block adjustment with the physical camera model.


Finally, let's review all the topics that we have covered.


Summary
The mathematical concepts covered today were:
1. Least squares adjustment (formulating observation equations and reducing to normal equations)
2. Collinearity condition equations (derivation and linearization)
3. Space resection (finding exterior orientation parameters)
4. Space intersection (finding object-space coordinates of a common point in a stereopair)
5. Analytical stereomodel (interior, relative and absolute orientation)
6. Ground control for aerial photogrammetry
7. Aerotriangulation
8. Bundle adjustment (adjusting all photogrammetric measurements to ground control values in a single solution), conventional and RPC-based


Terms
Some of the terminology can cause confusion. For instance, while pass points and tie points mean the same thing, (ground) control points refer to tie points whose coordinates in the object-space/ground control coordinate system are known, while the term check points refers to points that are treated as tie points but whose actual ground coordinates are very accurately known.
Below are some more terms used in photogrammetry, along with brief descriptions:
1. stereopair: two adjacent photographs that overlap by more than 50%
2. space resection: finding the 6 elements of exterior orientation
3. space intersection: finding object point coordinates for points in the stereo overlap
4. stereomodel: object points that appear in the overlap area of a stereopair
5. analytical stereomodel: 3D ground coordinates of points in a stereomodel, mathematically calculated using analytical photogrammetric techniques


Terms
6. interior orientation: photo coordinate refinement, including corrections for film distortions, lens distortion, atmospheric refraction, etc.
7. relative orientation: the relative angular attitude and positional displacement of two photographs
8. absolute orientation: exposure station orientations related to a ground-based coordinate system
9. aerotriangulation: determination of X, Y and Z ground coordinates of individual points based on photo measurements
10. bundle adjustment: adjusting all photogrammetric measurements to ground control values in a single solution
11. horizontal tie points: tie points whose X and Y coordinates are known
12. vertical tie points: tie points whose Z coordinate is known


Software Products Available

There is a variety of software solutions available in the market today to perform all the functionalities that we have seen. The following is a list of a few of them:
1. ERDAS IMAGINE (http://gi.leica-geosystems.com): The ERDAS Imagine photogrammetry suite has all of the basic photogrammetry tools, like block adjustment, orthophoto creation, metric and non-metric camera support, and satellite image support for SPOT, Ikonos, and others. It is perhaps one of the most popular photogrammetric tools currently.
2. ESPA (http://www.espasystems.fi): ESPA is a desktop software package aimed at digital aerial photogrammetry and airborne Lidar processing.
3. Geomatica (http://www.pcigeomatics.com/geomatica/demo.html): PCI Geomatics' Geomatica offers a single integrated environment for remote sensing, GIS, photogrammetry, cartography, web and development tools. A demo version of the software is also available at their website.
4. Image Station (http://www.intergraph.com): Intergraph's Z/I Imaging ImageStation comprises modules like Photogrammetric Manager, Model Setup, Digital Mensuration, Automatic Triangulation, Stereo Display, Feature Collection, DTM Collection, Automatic Elevations, ImageStation Base Rectifier, OrthoPro, PixelQue, Image Viewer, and Image Analyst.
5. INPHO (http://www.inpho.de): INPHO is an end-to-end photogrammetric systems supplier. INPHO's portfolio covers the entire workflow of photogrammetric projects, including aerial triangulation, stereo compilation, terrain modeling, orthophoto production and image capture.
6. iWitness (http://www.iwitnessphoto.com): iWitness from DeChant Consulting Services is a close-range photogrammetry software system that has been developed for accident reconstruction and forensic measurement.


Software Products Available

7. (Aerosys) OEM Pak (http://www.aerogeomatics.com/aerosys/products.html): This free package from Aerosys offers the exact same features as its Pro version, except that the bundle adjustment is limited to a maximum of 15 photos.
8. PHOTOMOD (http://www.racurs.ru/?page=94): PHOTOMOD, a software family from Racurs, Russia, comprises products for photogrammetric processing of remote sensing data which allow extraction of geometrically accurate spatial information from almost any commercially available type of imagery.
9. PhotoModeler (http://www.photomodeler.com/downloads/default.htm): PhotoModeler, the software program from Eos Systems, allows you to create 3D models and measurements from photographs, with export capabilities to 3D Studio 3DS, Wavefront OBJ, OpenNURBS/Rhino, RAW, Maya Script format, Google Earth's KML and KMZ, etc.
10. SOCET SET (http://www.socetgxp.com): This is a digital photogrammetry software application from BAE Systems. SOCET SET works with the latest airborne digital sensors and includes innovative point-matching algorithms for multi-sensor triangulation. SOCET SET used to be the standard against which all other photogrammetry packages were measured.
11. SUMMIT EVOLUTION (http://www.datem.com/support/download.html): Summit Evolution is the digital photogrammetric workstation from DAT/EM, released in April 2001 at the ASPRS Conference. The features of the software include subpixel functionality, support for different orientation methods and various formats.
12. Vr Mapping Software (http://www.cardinalsystems.net): The Vr Mapping Software Suite includes modules for 2D/3D collection and editing, stereo softcopy, orthophoto rectification, aerial triangulation, bundle adjustment, ortho mosaicing, volume computation, etc.


Open Source Software Solutions

There are three separate modules for relative orientation (relor.exe), space resection (resect.exe) and 3D conformal coordinate transformation (3DCONF.exe) available at: http://www.surv.ufl.edu/wolfdewitt/download.html

Another open source program is DGAP, a program for General Analytical Positioning, which can be found at: http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html


References
1. Wolf, Dewitt: Elements of Photogrammetry, McGraw Hill, 2000
2. Dial, Grodecki: Block Adjustment with Rational Polynomial Camera Models, ACSM-ASPRS 2002 Annual Conference Proceedings, 2002
3. Grodecki, Dial: Block Adjustment of High-Resolution Satellite Images described by Rational Polynomials, PE&RS, Jan 2003
4. Wikipedia
5. Other online resources
6. Software reviews from: http://www.gisdevelopment.net/downloads/photo/index.htm and http://www.gisvisionmag.com/vision.php?article=200202%2Freview.html
