
LECTURES 31-32, APRIL 8 AND 15, 2004

REMOTE SENSING

Remote sensing is the process of obtaining information about an object or a phenomenon with sensors located at a distance from that object or the location of the phenomenon. The platforms on which the sensors are placed include satellites and aircraft. Remote sensing with satellite data is called orbital remote sensing, while the other categories of remote sensing are sub-orbital.

RADIANT ENERGY IN THE ELECTROMAGNETIC SPECTRUM

The remote sensing procedures used in engineering usually measure the signature of an object in different bands of the electromagnetic spectrum. Visible light (0.4 to 0.7 μm wavelength) from a terrestrial object is usually sunlight reflected by the object. The reflection of sun rays depends on the size of the object, with the minutest objects reflecting the shortest wavelengths (blue light) and larger objects reflecting longer wavelengths. Infrared radiation (0.7 to 20 μm wavelength) is essentially either reflected heat (near infrared: 0.7 to 1.3 μm wavelength) or actual temperature-related heat radiation (3.0 to 14.0 μm wavelength). The health of vegetation relates to the relative intensity of radiation in different portions of the infrared waveband; as a result, color infrared imagery is used extensively in forestry. Clean water absorbs infrared radiation and appears as a dark object in a color infrared photograph; an increasing sediment load makes it appear lighter. The microwave region (1 mm to 1 m wavelength) of the electromagnetic spectrum and radio waves (> 1 m wavelength) are also used in remote sensing. Multispectral imaging involves collection of reflected, emitted and backscattered energy from an object of interest in a few selected bands, usually up to 20, of the electromagnetic spectrum. Hyperspectral imaging involves collection of electromagnetic energy in up to 100 wavebands, and ultraspectral imaging involves collection of electromagnetic energy in more than 100 wavebands. One such ultraspectral scanning sensor is NASA's AVIRIS (Figure 1), which has been used extensively in vegetation health monitoring.
The operational wavebands of common remote sensing systems are shown on Figure 2.

SENSORS

Satellite-based sensors are usually digital sensors placed in arrays (one- or two-dimensional). Alternatively, sensors scan a small portion of the target area. Different types of sensor systems are shown schematically on Figure 3. A sensor used in remote sensing is active if it is capable of providing its own source of energy, and passive if it does not include an energy source. The spectral resolution of a sensor is the number of specific wavelength intervals that the sensor is sensitive to. The spatial resolution is the smallest spatial (linear or angular) dimension that the sensor can distinguish. The temporal resolution is how often the sensor records the imagery of a certain location. The radiometric resolution is the sensitivity of the sensor to differences in signal strength.

STATE-OF-THE-ART OF SATELLITE IMAGERY

State-of-the-art sensors have a radiometric resolution of 11 bits (brightness values from 0 to 2^11 - 1 = 2047). Such radiometric resolution is better than what is required in most engineering applications. The spectral and temporal resolution of satellite-based remote sensors is also better than what is needed in most engineering applications.
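The band-count and bit-depth definitions above can be sketched in a few lines of code. This is a minimal illustration using the thresholds stated in the text (up to 20 bands = multispectral, up to 100 = hyperspectral, more = ultraspectral); the function names are illustrative, and the 224-band figure for AVIRIS is an assumption not given in the text.

```python
def brightness_levels(bits):
    """Number of distinct brightness values for a given radiometric resolution."""
    return 2 ** bits  # values run from 0 to 2**bits - 1

def imaging_category(num_bands):
    """Classify a sensor by band count, per the definitions in the text."""
    if num_bands <= 20:
        return "multispectral"
    elif num_bands <= 100:
        return "hyperspectral"
    else:
        return "ultraspectral"

print(brightness_levels(11) - 1)   # an 11-bit sensor records values 0..2047
print(imaging_category(224))       # AVIRIS's band count (assumed) -> "ultraspectral"
```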

Figure 1. A Hyperspectral Remote Sensing System (Courtesy Jensen, 2001)

Figure 2. Wavebands for Common Remote Sensing Systems

Figure 3. Types of Satellite-Based Remote Sensors (courtesy Jensen 2001)

However, present-day state-of-the-art satellite-based remote sensing systems have a spatial resolution of 1 m x 1 m for panchromatic data and 4 m x 4 m for multispectral images. Surveying in many engineering projects calls for a spatial resolution smaller than 1 meter; as a result, satellite imagery is still unsuitable for many engineering projects.

SUB-ORBITAL REMOTE SENSING: LIDAR AND IFSAR

Sub-orbital methods usually involve an aircraft-based sensor system; aerial photography is one example of sub-orbital remote sensing. In the following we consider two active sub-orbital remote sensing methods that are used frequently in geomatics: LIDAR (Light Detection and Ranging) and IFSAR (Interferometric Synthetic Aperture Radar).

LIDAR

DEM construction from LIDAR data involves estimating the coordinates of a ground point using the two-way travel time of a laser pulse between an airborne platform (usually a fixed-wing aircraft) and a target on the ground, the flying height, and the location, heading, pitch and roll of the aircraft. The procedure involves emitting a laser pulse from a transmitter on board the aircraft toward a target and measuring the time it takes before returning to a sensor on board the aircraft. As the aircraft moves forward, a scanning mirror directs the laser pulse back and forth in an elliptical spiral, as shown on Figure 4. This results in a closely-spaced array of laser spots along the flight path. Spot density depends on the height of the line of flight, the number of pulses that can be emitted per unit time, the scanning angle, and the forward speed of the aircraft.
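The basic ranging step described above is the conversion of two-way travel time to slant range, r = c*t/2. A minimal sketch (the travel-time value is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(two_way_time_s):
    """Slant range from the two-way travel time of a laser pulse: r = c*t/2."""
    return C * two_way_time_s / 2.0

# A pulse returning after ~6.671 microseconds corresponds to ~1000 m of range:
print(round(lidar_range(6.671e-6)))  # ~1000 m
```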

Figure 4. Schematics of LIDAR Measurement

Since a LIDAR system uses an Inertial Measurement Unit (IMU) to keep track of the pitch, roll and heading of the aircraft, an airborne GPS (ABGPS) to monitor the precise location of the aircraft, and a precise record of the scanning mirror attitude during the survey, the location of each individual mass point scanned during a LIDAR survey is precisely known. Consequently, a LIDAR survey generates an accurate three-dimensional, geo-referenced terrain model. Typical LIDAR sensor capabilities are listed in Table 1.

Table 1. Typical LIDAR Sensor Parameters


Parameter                  Typical Values      Parameter            Typical Values
Vertical Accuracy (1)      0.15 m              Beam Divergence      0.2 to 3 mrad
Horizontal Accuracy (2)    0.2 to 1.0 m        Pulse Rate           5 to 33 kHz
Flying Height              200 to 6000 m       Footprint Diameter   0.25 to 2.0 m from 1000 m
Scan Angle                 Up to 75 deg        Scan Rate            Up to 40 Hz
Spot Density               0.25 to 12 m

Notes. 1. In comparison, the RMSE of aerial photogrammetry is 1/4000th to 1/9000th of the flying height. The maximum value of error is 3 times the RMSE. 2. In comparison, the RMSE of aerial photogrammetry is 1/6000th to 1/10000th of the flying height. The maximum value of error is 3 times the RMSE.

Penetrability of LIDAR decreases as the scanning angle increases in a vegetated area. The vegetation cover can be penetrated to a certain extent by LIDAR pulses. As a result, a bare-earth terrain model can be constructed in a vegetated area using a LIDAR system capable of detecting both the back-scattered signal from the canopy of the vegetation cover and the signal returning from the ground. Nevertheless, a LIDAR DTM is less accurate in a vegetated area than one covering an area with minimal vegetation cover. Inaccuracies in a DEM constructed from LIDAR data arise from (a) atmospheric refraction, (b) imprecision in mirror attitude, (c) ABGPS and IMU resolution, (d) flying height and (e) availability of ground control information. Among the advantages of LIDAR is its capability to penetrate vegetation cover, albeit at the expense of accuracy. Secondly, a LIDAR survey generates digital data amenable to automated processing; as a result, LIDAR survey and data processing are relatively economical compared to aerial photogrammetry. Also, unlike aerial photogrammetry, a LIDAR survey can be conducted at any time of the day, even during the night. Among the disadvantages of LIDAR is its inability to penetrate cloud cover. As a result, LIDAR survey is not a useful tool if measurements are required during bad weather, e.g., a loss survey following a large storm event.

IFSAR

DEM construction from IFSAR data involves estimating the coordinates of a ground point using the phase difference between the radar signals returned from the ground point to two antennas on an aircraft, the flying height, and the location, heading, pitch and roll of the aircraft. The positional information of the aircraft is usually obtained from an ABGPS; the heading, pitch and roll of the aircraft are obtained from an IMU. The resolution of a radar antenna, Ry, is proportional to the antenna-to-target distance, r, and inversely proportional to the antenna size according to:
B = λr / (2 Ry cos θ)                                                    (1)

where B is the antenna size, λ is the wavelength and θ is the angle between the nadir direction of the antenna and the target, i.e., the look angle. As a result, a Real Aperture Radar (RAR), or incoherent radar, is usually too large to be portable. Synthetic Aperture Radar (SAR), or coherent radar, overcomes the antenna size problem by electronically superposing the reflections corresponding to a series of pulses illuminating an object.
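The antenna size problem can be illustrated numerically with Equation (1) as reconstructed here, B = λr/(2·Ry·cos θ). The numerical values below are illustrative, not from the text:

```python
import math

def antenna_size(wavelength, slant_range, resolution, look_angle_rad):
    """Antenna size B a real aperture radar needs for azimuth resolution Ry
    at range r, per Eq. (1): B = lambda * r / (2 * Ry * cos(theta))."""
    return wavelength * slant_range / (2.0 * resolution * math.cos(look_angle_rad))

# X-band (~30 mm wavelength) at 10 km range, 10 m resolution, 30-degree look angle:
B = antenna_size(0.03, 10_000.0, 10.0, math.radians(30))
print(round(B, 2))  # ~17 m: far too large to carry, which is what motivates SAR
```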

To explain the principles of IFSAR, we consider two interferometric antennas O1 and O2 and an elemental scatterer of height z, shown on Figure 5. Either or both antennas emit discrete radar pulses separated in time, and the echo that returns from the footprint (pixel) is recorded at both antennas. If one of the antennas emits radar pulses with echo recordings at both antenna locations, the mode of operation is called non ping-pong (npp). If both antennas emit signals sequentially with echo recordings at both antenna locations, the mode is called ping-pong. The topographic and orthographic parameters of the ground point are extracted from the interference pattern of the two images. Typically, IFSAR measurements are made in the X (30 mm), C (60 mm) or L (240 mm) bands. The X-band signal reflects from the top of the canopy and the L-band signal reflects off the ground surface. In Figure 5, the ground-track of the airborne platform is aligned with the x-axis and the flight-tracks of the antennas are represented by red and green arrows.

Figure 5. IFSAR Principles

The topographic height z is given by:

z = H - r1 cos θ                                                         (2)

where θ is the look angle and H is the elevation of the signal source. The phase difference between two radar signals received from the same elemental scatterer can then be calculated from:

φ = (2πp/λ)(r1 - r2) = (2πp/λ)(By sin θ - Bz cos θ)                      (3)

where By and Bz are the components of the baseline B in the y- and z-directions, respectively, and p = 1 for the npp mode and p = 2 for the ping-pong mode. Because several elemental scatterers are present within a single pixel, estimating the phase difference of the backscattered signals is not as straightforward as that between two sinusoidal signals; statistical methods are used to estimate the phase difference between backscattered images. Since the phase difference can only be measured between 0 and 2π, there is an ambiguity in the inference drawn from the signals received at the antennas. This ambiguity is resolved by establishing ground control.
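The geometric relations above can be sketched numerically. This sketch assumes the forms z = H - r1 cos θ and φ = (2πp/λ)(By sin θ - Bz cos θ) as reconstructed here; all numerical values are illustrative. Note how the modulo-2π wrap in the last line is exactly the phase ambiguity just described:

```python
import math

def scatterer_height(H, r1, theta):
    """Topographic height of an elemental scatterer, per Eq. (2)."""
    return H - r1 * math.cos(theta)

def ifsar_phase(wavelength, p, By, Bz, theta):
    """Interferometric phase per Eq. (3); p = 1 (npp) or p = 2 (ping-pong)."""
    return (2 * math.pi * p / wavelength) * (By * math.sin(theta) - Bz * math.cos(theta))

theta = math.radians(45)
print(round(scatterer_height(3000.0, 3500.0, theta), 1))  # ~525.1 m above the datum

# Ping-pong mode, X-band (30 mm), 1 m baseline mostly in y:
phi = ifsar_phase(0.03, 2, 1.0, 0.2, theta)
wrapped = phi % (2 * math.pi)  # only this wrapped value is observable -> ambiguity
```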

Inaccuracies in a DEM constructed from IFSAR data arise from (a) atmospheric refraction, (b) incoherence of the radar pulse (the pulse usually contains a range of frequencies), (c) the capability of the sensor to resolve phase difference, (d) ABGPS and IMU resolution, (e) flying height and (f) availability of ground control information. The phase data of satellite-based systems are additionally affected by the ionosphere. Typical vertical accuracy of IFSAR is smaller than 1 meter with an appropriate ground control network. Among the attractions of radar-based remote sensing methods is the capability of radar to penetrate cloud cover; consequently, these methods have all-weather capability. Secondly, the correlation (or lack of it) of the radar echoes recorded at the antenna locations can be related to land use; time-lapse interferometry has been used to estimate fault displacement in earthquake engineering, to track glacier movement, and to monitor ocean currents. Thirdly, an IFSAR survey generates digital data amenable to automated processing; as a result, IFSAR survey and data processing are relatively economical compared to aerial photogrammetry. Also, unlike aerial photogrammetry, an IFSAR survey can be conducted at any time of the day, even during the night.

LECTURE 33, APRIL 15, 2004

GLOBAL POSITIONING SYSTEM

The US Department of Defense operates a constellation of Navstar (Navigation Satellite Timing and Ranging) satellites. In the Global Positioning System (GPS), the low-power radio signals received from visible Navstar satellites with a GPS receiver are used to estimate the position of the receiver. There are three basic components to GPS positioning:

Navstar Satellites: Twenty-four Navstar satellites are in 20183-km-high orbits, revolving around the earth once every 12 hours (Figure 6). There are six orbital planes, each making an angle of 55 degrees with the equatorial plane of the earth, with four operating satellites in each plane. There are spare satellites on orbit to account for instrument malfunction. Five to eight satellites are visible at a given time from a certain location on the ground. These satellites transmit a pseudo-random code (satellite identifier), ephemeris data (the transmitting satellite's precise orbital position, clock parameters and health information) and almanac data (coarse orbit and status information for the whole constellation).

Figure 6. Constellation of Navstar Satellites (courtesy www.dod.gov)

Ground Control: The orbiting position (ephemeris) and clock data of the satellites are monitored from five monitoring stations spread across the world, in Colorado Springs (USA), Hawaii, Ascension Island (Atlantic Ocean), Diego Garcia (Indian Ocean) and Kwajalein (Pacific Ocean). The ephemeris and clock corrections are periodically uploaded to the satellites from ground antennas at Ascension, Diego Garcia and Kwajalein.

GPS Receivers: The receivers are capable of receiving the radio signals from the satellites above the elevation mask (the elevation below which satellite signals are not received) in the L1 (1575.42 MHz) and L2 (1227.60 MHz) bands. They are also equipped with a quartz clock.

PROCEDURE

The distance from the receiver to each individual satellite above the elevation mask is computed from the difference between the time the signal was sent (from the ephemeris data) and the time it was received. The receiver location is estimated by trilateration from these time differences and the velocity of electromagnetic waves.

At least three satellites are needed to estimate the terrestrial position of the GPS receiver in three dimensions. The signal from one additional satellite is used to eliminate the ranging error due to clock imprecision, as discussed in more detail below. Figure 7 is a simplified summary of this procedure.
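The trilateration-with-clock-bias step can be sketched as a small Gauss-Newton solver for the four unknowns (x, y, z, clock-bias range error). This is a simplified, synthetic sketch: atmospheric errors are ignored, the satellite coordinates and units (km) are invented for illustration, and a real solver would work with pseudoranges in an ECEF frame.

```python
import math

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (A is n x n)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, pseudoranges, iters=10):
    """Estimate receiver position (x, y, z) and clock-bias range error b
    from 4+ satellite positions and pseudoranges, by Gauss-Newton iteration."""
    x = y = z = b = 0.0
    for _ in range(iters):
        A, resid = [], []
        for (sx, sy, sz), pr in zip(sats, pseudoranges):
            r = math.dist((sx, sy, sz), (x, y, z))
            # partial derivatives of the predicted pseudorange (r + b)
            A.append([(x - sx) / r, (y - sy) / r, (z - sz) / r, 1.0])
            resid.append(pr - (r + b))
        # normal equations: (A^T A) dx = A^T resid
        N = [[sum(A[i][j] * A[i][k] for i in range(len(A))) for k in range(4)]
             for j in range(4)]
        rhs = [sum(A[i][j] * resid[i] for i in range(len(A))) for j in range(4)]
        dx = solve_linear(N, rhs)
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b

# Synthetic constellation (km) and a receiver near the earth's surface:
sats = [(20200.0, 0.0, 12000.0), (0.0, 20200.0, 8000.0),
        (-20200.0, 0.0, 15000.0), (1000.0, -2000.0, 26000.0)]
truth, bias = (100.0, 200.0, 6370.0), 50.0
prs = [math.dist(s, truth) + bias for s in sats]
x, y, z, b = gps_fix(sats, prs)
print(round(x, 3), round(y, 3), round(z, 3), round(b, 3))  # recovers truth and bias
```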

Figure 7. GPS Data Analysis Procedure (Courtesy www.educatorscorner.com)

Irrespective of these minimum requirements, a GPS survey is undertaken when the maximum number of satellites is available above the elevation mask and the dilution of precision (see below) is least. To ensure this, GPS mission planning software packages and Notice Advisories to Navstar Users (NANUs) are consulted before undertaking a GPS survey.
GEOMETRIC DILUTION OF PRECISION (GDOP)

The accuracy of the position estimate of the receiver deteriorates as the volume of the shape described by the unit vectors from the receiver to the satellites decreases (Figure 8). The optimal satellite configuration is one satellite near the zenith and the other satellites near the elevation mask. GDOP is equal to the square root of the sum of the diagonal elements of (A^T A)^-1. In a four-satellite problem, the matrix A becomes:
        [ (X1 - Ux)/R1   (Y1 - Uy)/R1   (Z1 - Uz)/R1   1 ]
    A = [ (X2 - Ux)/R2   (Y2 - Uy)/R2   (Z2 - Uz)/R2   1 ]                (4)
        [ (X3 - Ux)/R3   (Y3 - Uy)/R3   (Z3 - Uz)/R3   1 ]
        [ (X4 - Ux)/R4   (Y4 - Uy)/R4   (Z4 - Uz)/R4   1 ]

where Xi, Yi and Zi are the x-, y- and z-coordinates of satellite i (i = 1, 2, 3, 4), (Ux, Uy, Uz) is the user position, and Ri are the user-to-satellite distances or ranges (not the pseudoranges defined in Figure 7). It should be noted that the Earth-Centered Earth-Fixed (ECEF) Cartesian coordinate system has been utilized in the matrix expression for A; its x-axis passes through the intersection of the prime meridian and the equator. Only those satellites whose signals are not affected by physical obstacles on the signal path are considered in GDOP estimation. Typically, a GDOP value of around 4 is considered good; the larger the value of GDOP, the worse the precision of the user position estimate.
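The GDOP computation, building A row by row and taking the square root of the trace of (A^T A)^-1, can be sketched as follows. The satellite and user coordinates are synthetic (km) and only illustrate that a clustered constellation dilutes precision relative to a spread one:

```python
import math

def gdop(sat_positions, user):
    """GDOP = sqrt of the trace of (A^T A)^-1, with A built as in Eq. (4)."""
    A = []
    for (X, Y, Z) in sat_positions:
        R = math.dist((X, Y, Z), user)
        A.append([(X - user[0]) / R, (Y - user[1]) / R, (Z - user[2]) / R, 1.0])
    # N = A^T A (4 x 4)
    N = [[sum(A[i][j] * A[i][k] for i in range(len(A))) for k in range(4)]
         for j in range(4)]
    # invert N by Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [float(j == c) for c in range(4)] for j, row in enumerate(N)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(4):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    inv = [row[4:] for row in M]
    return math.sqrt(sum(inv[i][i] for i in range(4)))

user = (0.0, 0.0, 6370.0)
spread = [(15000.0, 15000.0, 15000.0), (-15000.0, 15000.0, 15000.0),
          (15000.0, -15000.0, 15000.0), (0.0, 0.0, 26600.0)]
clustered = [(14000.0, 14000.0, 15000.0), (15000.0, 15000.0, 15000.0),
             (16000.0, 15000.0, 15000.0), (15000.0, 16000.0, 15000.0)]
g_spread, g_clustered = gdop(spread, user), gdop(clustered, user)
print(g_spread < g_clustered)  # True: clustered satellites dilute precision
```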

Figure 8. Geometric Dilution of Precision

GDOP is often estimated from almanacs (a summary of the satellite locations that will be available at the time of a GPS survey) for various locations within the project area before scheduling a GPS survey, without accounting for obstructions on the signal path. As a result, these estimates often cannot be achieved during the actual survey.
SOURCES OF ERROR

Typical values of the RMSE in GPS measurements are about 15 m. GPS positioning error comprises the following systematic error components:

Satellite error: These include errors in the modeling of the satellite clock offset and drift using a second-order polynomial, and errors in the Keplerian representation of the satellite ephemeris information. Tracking data for all observed satellites recorded at the GPS monitor stations are sent to the Master Control Station, which uses these data to predict the parameters for the future. These predictions are then returned to the uplink stations, where they are transmitted to the satellites. The latency of the tracking data and the prediction procedures directly affect the satellite system errors. Satellite clock error may translate into a range error of up to 1 m; satellite ephemeris error may also translate into a range error of up to 1 m.

Imprecise time measurement: The clocks on board the satellites are typically atomic clocks accurate to ~100 nanoseconds. However, the clock in the GPS receiver is usually a quartz clock of limited accuracy. As a result, the receiver clock is usually not in perfect adjustment with the satellite clocks; this lack of adjustment is called the clock bias.

Ionospheric delay: The radio signals are delayed by as much as 70 ns due to the presence of charged particles in the ionosphere (70 km to 1000 km altitude). This error is frequency dependent and can thus be eliminated using a dual-frequency (L1 and L2) receiver. Alternatively, it can be partially accounted for by the Klobuchar model. Eight parameters of this model are broadcast from the satellites; the remaining parameters are determined from the time of day and the geomagnetic latitude of the receiver. The unmodeled residual can translate into a range error of as much as 10 m.

Tropospheric delay: This error depends mainly on atmospheric refraction and humidity within the troposphere (from ground level to a height between 8 and 13 km) and is not particularly frequency dependent within the L-band frequencies. Values for temperature, pressure and relative humidity are used, along with the satellite elevation angle, to model the vertical tropospheric delay. Tropospheric delay may translate into a range error of up to 1 m.

Multipath: The receiver may receive signals reflected by terrestrial or airborne objects, e.g., buildings, utilities, even satellite solar panels. This systematic error is difficult to model and is generally ignored in data analysis. Multipath may cause a range error of up to 0.5 m.

DIFFERENTIAL GPS

In Differential GPS (DGPS), the systematic GPS errors are estimated from observations at a ground station with known coordinates. This essentially involves estimating range corrections for all visible satellites and passing the information, often in real time, to a remote receiver at an unknown location. These corrections are utilized to correct the GPS position of a remote receiver in the vicinity of the ground station. The corrections can also be applied in the office following field work, in the absence of real-time communication between the ground station and the remote receiver. Typical error estimates for DGPS are summarized in Table 2, along with GPS error estimates for comparison.

Table 2. Typical GPS Positioning Error
Source               DGPS (PPM of Baseline)   GPS (meter)
Troposphere Delay    0.1 to 1.0               2 to 30
Ionosphere Delay     0.5 to 2.0               2 to 50
Orbital Error        0.1 to 0.5               5 to 10
Clock Error          --                       0.1 to 3.0
Carrier Noise        --                       0.001 to 0.006
Carrier Multipath    --                       0.001 to 0.02
Code Noise           --                       0.1 to 3.0
Code Multipath       --                       0.1 to 100

Source: Leick, A. 1995. GPS Satellite Surveying. Toronto: John Wiley.

CARRIER PHASE TRACKING

Surveying accuracy is greatly enhanced by tracking the phase of the carrier waves. For instance, the wavelength of the L1 carrier is 190 mm. By tracking the phase of the L1 waves, a positional accuracy on the order of millimeters can be achieved over a baseline between two GPS receivers, provided the two receivers are close enough that the ionospheric delays at the two receivers are similar. Often a distance of 30 km is considered an upper bound for such a measurement if the receivers are not capable of receiving both the L1 and L2 frequencies. An accuracy of 10 to 50 mm can be achieved for a baseline established in this manner with measurements continuing over a 1-hour duration; similar accuracy can be achieved for a shorter baseline of 10 m length with observations over a 15-minute duration.
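The 190 mm figure quoted above follows directly from the carrier frequencies listed earlier, via λ = c/f:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz):
    """Carrier wavelength in millimeters: lambda = c / f."""
    return C / freq_hz * 1000.0

print(round(wavelength_mm(1575.42e6)))  # L1: ~190 mm, as quoted in the text
print(round(wavelength_mm(1227.60e6)))  # L2: ~244 mm
```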

