
Two major emphases of geophysics:

1. "pure"
2. "applied"
1. pure geophysics - study of the physics of the Earth
Examples:

variations in temperature with depth


causes of reversals in Earth's magnetic field

2. applied geophysics (also called exploration geophysics) - to find economic deposits


All methods depend fundamentally on the presence of bodies with contrasting physical
properties, such as density, magnetic susceptibility, heat conductivity, elastic constants, etc.

Active methods - stimulate response (ex. - setting off dynamite blast)


Passive methods - simply measure property (ex. - density)

Part 1: Gravity
Assume Earth does not rotate and has uniform density distribution.
Determine acceleration of gravity (usually just called "gravity" by geophysicists) at point on
Earth's surface.
Law of Universal Gravitation:

F = G Me M / R^2

G = Universal Gravitational Constant = 6.673 x 10^-8 dyne cm^2/gm^2 +/- 0.003 (dyne = 1 gm cm/sec^2)
Newton's 2nd Law: F = Ma
for earth, use symbol "g" instead of "a," so F = Mg
Since F = F, then Mg = G Me M / R^2 and g = G Me / R^2

g = approximately 980 cm/sec2 (or 9.8 m/sec2)
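As a quick check of g = G Me / R^2, a minimal sketch in Python (the Earth mass and radius below are standard reference values, not figures from these notes):

# Quick check that g = G*Me/R^2 gives roughly 980 cm/sec^2 (CGS units)
G = 6.673e-8        # dyne cm^2 / gm^2
Me = 5.97e27        # gm, Earth's mass (standard value, assumed here)
R = 6.371e8         # cm, mean Earth radius (standard value, assumed here)

g = G * Me / R**2
print(g)            # ~981 cm/sec^2, i.e. about 9.8 m/sec^2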


1 cm/sec2 is called a gal.
Normally use milligals (1/1000 gal or about 1 millionth g) or gravity units (g. u.; 0.1 mgal)

Complication #1:
Earth rotates
Result: Earth not round but bulges at equator and is flattened at poles.
Equatorial radius is 21 kilometers greater than at poles.
Complication #2:
Earth's mass is not symmetrical about the equatorial plane - Earth is "pear-shaped."
Complication #3:
The equator isn't perfectly circular but varies from a circle by only a few meters.
The regular surface which most nearly approximates the surface of the actual Earth is a surface
called the geoid.
The geoid surface is everywhere perpendicular to a plumb bob.
The geoid corresponds to mean sea level.
In land covered areas, the geoid is the surface that would be determined by the level to which
water would rise in narrow canals cut through the continents.
Since g depends on distance from center of Earth (radius), g varies with latitude.
International Gravity Formula can be used to determine g at a particular latitude:
g = 9.780318 (1 + 0.0053024 sin^2 φ - 0.0000059 sin^2 2φ) where φ is the latitude; units are m/sec^2
Calculated value for g "corrected" for latitude is called the theoretical gravity and abbreviated gt
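A minimal sketch of the latitude correction using the formula above (the latitude values are just examples):

import math

def theoretical_gravity(lat_deg):
    # International Gravity Formula as given in these notes; result in m/sec^2
    phi = math.radians(lat_deg)
    return 9.780318 * (1 + 0.0053024 * math.sin(phi) ** 2
                       - 0.0000059 * math.sin(2 * phi) ** 2)

print(theoretical_gravity(0.0))   # ~9.7803 at the equator
print(theoretical_gravity(45.0))  # ~9.8062 at mid-latitudes
print(theoretical_gravity(90.0))  # ~9.8322 at the poles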
Now measure actual value of gravity at any spot.
1. can use pendulum

(formula from physics: T = 2π √(L/g), so g = 4π^2 L / T^2, where L is length of pendulum and T is period; a numerical sketch follows this list of methods)

Accuracy = 1.5 mgal; takes about 30 minutes per measurement


2. can experimentally measure acceleration of object dropped at Earth's surface
Accuracy = 0.1 mgal; measuring apparatus not portable (although one of the latest models
available is said to be portable because it weighs less than one ton)
3. most commonly measure differences in gravity from place to place by using a "gravity
meter" (Mass suspended from spring).
Accuracy = .01 mgal
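A small sketch of solving the pendulum formula above for g (the length and period values are made-up illustrations):

import math

# g from a pendulum: g = 4 * pi^2 * L / T^2
L = 100.0    # cm, pendulum length (illustrative value)
T = 2.007    # sec, measured period (illustrative value)

g = 4 * math.pi**2 * L / T**2
print(g)     # ~980 cm/sec^2 for these numbers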
Average density of Earth is 5.52 gm/cm3.
Average density of surface rocks is much less.
Therefore interior of Earth must be of much higher density than surface rocks.

Can get some idea of Earth's density distribution from study of its angular momentum:
Angular Momentum = Moment of Inertia x Angular Velocity
The moment of inertia of any object depends on its mass distribution.
Examples:

solid cylinder revolving about its axis, I = 0.5 MR2; where M is mass and R is radius of
cylinder
sphere, I = 0.4 MR2

spherical shell, I = 0.67 MR2

Earth's moment of inertia = 0.3307 MR2


Best fitting model is a series of nested ellipsoids of different densities, generally denser toward the center.
Measured value of g (called "actual" value and abbreviated ga) is not usually the same as gt.
Difference in ga and gt called a gravity anomaly.
Actual not same as theoretical because:
1. actual not measured at sea level where theoretical is calculated
2. actual not measured on a flat surface
3. solid Earth has tides of 7-14 cm
4. density distribution in Earth not uniform
To adjust for difference #1, we apply two "corrections" to the measured value before comparing
it to the theoretical value:

1st : adjust for elevation (distance from center of Earth, h)


called the Free Air Correction; = +0.3086 h mgal when h is in meters
2nd: remove that portion of g due to the mass between sea level and the point where
measurement made
called the Bouguer Correction = -0.0419 ρ h mgal (ρ is density in gm/cm3; h in meters)

To adjust for difference #2, we then add another "correction" to the measured value before
comparing it to the theoretical value by removing the influence of nearby mountains and valleys.
called the Topographic or Terrain Correction
Since this correction rarely exceeds 1 mgal except in mountainous areas, it is frequently ignored.
To adjust for difference #3, formulas are available to determine the necessary correction. This
tidal correction is very frequently ignored.
Finally, any difference between the "corrected" values of actual gravity and theoretical
gravity should be due to density variations (#4).
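Putting the corrections together, a minimal sketch of a simple Bouguer anomaly calculation (the station values below are invented for illustration):

def bouguer_anomaly(g_measured, g_theoretical, h_m, density=2.67):
    # Simple Bouguer anomaly in mgal.  g_measured and g_theoretical in mgal,
    # h_m = station elevation in meters, density in gm/cm^3 (2.67 is a common
    # crustal assumption).  Terrain and tidal corrections ignored, as the notes allow.
    free_air = 0.3086 * h_m             # add back gravity lost with elevation
    bouguer = -0.0419 * density * h_m   # remove pull of rock between station and sea level
    return g_measured + free_air + bouguer - g_theoretical

# Invented example: station at 500 m elevation
print(bouguer_anomaly(g_measured=980000.0, g_theoretical=980140.0, h_m=500.0))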

Higher than average density rock will cause the measured value of g to be greater than the
theoretical value and produces a positive "anomaly" while less dense rock produces a
negative anomaly.
Consider a plumb bob hanging near a tall mountain.
The mass of the mountain pulls it sideways.
Knowing the density and volume of the mountain allows us to calculate its mass and enables us
to determine how much force it should exert on the plumb bob.
Measurements show mountains exert only about 1/3 of the expected amount.
Question: Why?
Mountain supposedly has low density "roots."
Theory of Isostasy - the total mass of rock (and sea) in any vertical column of unit cross section
is constant
Various models have been developed to describe this root (Airy, Pratt, etc.)
Where sedimentation occurs, the weight of the sediment may cause the crust below
to sink. Similarly, where erosion occurs the crust may rebound.

Questions:

Are roots permanent features?


Why do mountains have roots?

Large scale gravity anomalies are called regional anomalies.


Usually due to density variations in lower crust or variation in thickness of crust.
Make it hard to recognize small or shallow features.
Often "removed" by various processes.
Process so subjective that I have sometimes thought that "the regional anomaly is what you take
out in order to make what's left look like what you want it to."
Small scale anomalies (often called residual anomalies) produced by ore bodies or geologic
structures.
Seldom more than a few milligals in size.
Use trial and error to find a body of the right location, shape, size and density to produce the
anomaly.
Example of a spherical ore body:
For a sphere, g at a location x is

g(x) = (4/3) π G R^3 Δρ z / (x^2 + z^2)^(3/2)

where R is the radius of the sphere, z is the depth to the center of the sphere, x is measured from a point on the surface directly above the center of the sphere to the location, and Δρ is the density contrast (difference in densities of body and surrounding material).
There is usually assumed to be a constant density difference between an ore body and its
surroundings and a sharp, well- defined boundary separating them.
Neither assumption is likely to be correct.
Finding the density contrast to use in the formula is very difficult if you don't know what lies
below ground. (And if you knew what was down there, why bother with exploration methods
like gravity surveys?)
Other shapes can be modeled with similar but more complex formulas.
Complex forms can be thought of as combinations of simple forms.
Usually use computers.
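A minimal sketch of the trial-and-error forward model for the sphere formula above (the body parameters are invented; SI units, result converted to mgal):

import math

G = 6.673e-11  # m^3 / (kg s^2)

def sphere_gravity_mgal(x, R, z, delta_rho):
    # Vertical gravity anomaly (mgal) of a buried sphere at horizontal distance x (m).
    # R = radius (m), z = depth to center (m), delta_rho = density contrast (kg/m^3).
    mass = (4.0 / 3.0) * math.pi * R**3 * delta_rho
    g_si = G * mass * z / (x**2 + z**2) ** 1.5   # m/s^2
    return g_si * 1e5                            # 1 m/s^2 = 10^5 mgal

# Invented ore body: 100 m radius, 300 m deep, 1000 kg/m^3 denser than host rock
for x in (0, 100, 200, 400, 800):
    print(x, round(sphere_gravity_mgal(x, R=100.0, z=300.0, delta_rho=1000.0), 3))

For these invented numbers the peak anomaly is only about 0.3 mgal, consistent with the note above that residual anomalies are seldom more than a few milligals.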
Some general rules have been found.
Circular anomalies produced by:

compact mineral body


salt dome (gravity low with small high due to dense cap rock in center)

Elongated anomalies produced by:

graben
buried folds

buried channels

subduction zones

oceanic ridges

Negative anomalies:

Less dense rock such as in sedimentary basins, batholiths, subduction zones, oceanic
ridges

Positive anomalies:

More dense rock such as ultramafic masses


Uplifts of denser rock in structures such as anticlines or reverse faults.

The deeper the body, the broader and lower in amplitude will be the anomaly profile.
Rapid change in amplitude or gradient should suggest density change in subsurface - such as a
fault or edge of a buried basin.

There is no unique answer.


Several models can produce exactly the same anomaly.
Very important to use knowledge of area's geology to limit possible solutions.

Part 2: Radioactivity, Radiometric Dating and Natural Gamma Methods


Geochronology - concerned with determining age and history of geologic materials by studying
their isotopes
Radioactivity
Discovered in 1896
Natural change from one element to another by emission of particles from nucleus or addition of
particles to nucleus
Particles include:

helium nuclei (alpha particles)


electrons (beta particles)

high energy electromagnetic waves (gamma rays)

Decay occurs at constant rate and is not affected by temperature, pressure, chemical combination
or any other known thing
Radioactive isotopes - an element capable of spontaneously changing into another element by
the emission or addition of particles to its nucleus
Stable isotopes - an isotope which is not radioactive
Radiogenic isotopes - an isotope produced by radioactive decay
Non-radiogenic isotopes - an isotope not produced by radioactive decay
Half-life - time for half of element to decay
Parent - the radioactive element which decays
Daughter - an element formed from another by radioactive decay
T (half life) = ln 2/λ = 0.6931/λ
The equation which represents radioactive decay is (derived in most geophysics texts for those who are interested and know a little calculus):

N = No e^(-λt)   (N = parent atoms remaining, No = original number of parent atoms)

solved for t (age of rock):

t = (1/λ) ln (No/N) = (1/λ) ln (1 + D/N), where D = No - N is the number of daughter atoms
Assumptions made in radiometric dating:

no loss or gain of parent or daughter


decay rate constant

half life known

can measure amounts of parent and daughter accurately (usually use mass spectrometer)
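A small sketch of the age equation above (the parent/daughter counts are invented; the half-life used is the standard U238 value, taken only as an example):

import math

def age_from_parent_daughter(parent, daughter, half_life_years):
    # t = (1/lambda) * ln(1 + D/N), with lambda = ln 2 / half-life
    lam = math.log(2) / half_life_years
    return math.log(1.0 + daughter / parent) / lam

# Invented example: equal numbers of parent and daughter atoms -> exactly one half-life
print(age_from_parent_daughter(parent=1000.0, daughter=1000.0,
                               half_life_years=4.47e9))   # ~4.47e9 yr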

RbSr dating
Rb87 -> Sr87 (could also write 87Rb, etc.)
Rb commonly substitutes for K in minerals; so method used on K-bearing minerals or rocks
which contain them
Decay equation reads:

Sr87m = Sr87o + Rb87m (e^(λt) - 1)

(Subscript m stands for measured, or in other words, now; o stands for original)
It is easier to measure ratios of atoms rather than absolute numbers so expression usually written:

(Sr87/Sr86)m = (Sr87/Sr86)o + (Rb87/Sr86)m (e^(λt) - 1)

Could solve for t (age of mineral):

t = (1/λ) ln [1 + ((Sr87/Sr86)m - (Sr87/Sr86)o) / (Rb87/Sr86)m]

Now measure Sr87/Sr86 and Rb87/Sr86 ratios and λ for the reaction (λ = 1.39 x 10^-11/yr)
Then estimate (Sr87/Sr86)o (Can measure this ratio in coexisting undisturbed minerals which
contain no Rb)
Note: Sr86m = Sr86o since Sr86 is stable and non-radiogenic
Sr86, Sr84, and Sr88 are all stable and non-radiogenic.
Any could be used; Sr86 most abundant and therefore most often used.

Easier mathematics and more accurate way of determining (Sr87/Sr86)original:


Equation for straight line is y = ax + b, (where a is slope, b is intercept on y axis)
The equation (Sr87/Sr86)m = (Sr87/Sr86)o + (Rb87/Sr86)m (e^(λt) - 1) is in that form (actually y = b + ax) when t is
constant (for several minerals in a rock or several rocks of the same age)
If we plot (Sr87/Sr86)m vs (Rb87/Sr86)m, the values should be different for different rocks and
minerals because they would have different initial amounts of Rb.
The slope of the line obtained by connecting these points is e^(λt) - 1 and the intercept is (Sr87/Sr86)o
Thus we can obtain both the age of the suite and the initial strontium ratio.
These plotted lines are called isochrons.
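A minimal sketch of extracting the age and initial ratio from an isochron by a least-squares line fit (the measured ratios below are invented; λ as given above):

import math

# Invented (Rb87/Sr86, Sr87/Sr86) pairs for several minerals of the same age
rb_sr = [0.1, 0.5, 1.0, 2.0]
sr_sr = [0.7045, 0.7102, 0.7173, 0.7316]

lam = 1.39e-11  # /yr, decay constant used in these notes

# Least-squares fit of a straight line y = b + a*x
n = len(rb_sr)
xbar = sum(rb_sr) / n
ybar = sum(sr_sr) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(rb_sr, sr_sr)) / \
        sum((x - xbar) ** 2 for x in rb_sr)
intercept = ybar - slope * xbar

age = math.log(1.0 + slope) / lam
print("initial Sr87/Sr86 =", round(intercept, 4))   # ~0.703
print("age =", age / 1e9, "billion years")          # ~1.0 for these invented points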
Isochrons can also be used to determine age of metamorphism.
If whole rock hasn't lost Rb or Sr, but minerals have passed them around during metamorphism,
two ages will be obtained - one from dating whole rock and one (metamorphism age) from
dating individual minerals in rock.
Another Sr isotope use:
First must know Sr87/Sr86 in material that made up primitive Earth.
Usually assume it was same as non-Rb87 bearing meteorites or about 0.699
During differentiation of crust, behavior of Rb and Sr would be different (different charge,
different size).
Rb concentrated in crust, Sr evenly distributed between crust and mantle.
Production of Sr87 should thus be faster in crust than in mantle and Sr87/Sr86 ratios should be
higher for crustal material.
Difference in Sr87/Sr86 ratios, then, is a means for distinguishing igneous rocks that have
formed by partial melting of crustal rocks from those that have their origin in
differentiation or partial melting of mantle material
Present Sr87/Sr86 ratio for mantle rock estimated from analyses of recent basalts and gabbros from
oceanic environments (direct origin from mantle assumed and no contamination by continental
material)
Value is about 0.704
Extrapolation between 0.699 and 0.704 gives reasonable estimate for ratio in mantle at any time
in past.
Look at Sr87/Sr86 ratios for rocks when they formed to determine origin.
(ratio above or consistent with expected mantle ratio?)
(Remember can get Sr87/Sr86 ratios from isochrons.)

Uranium, Thorium - Lead dating:


U238 -> Pb206

U235 -> Pb207


Th232 -> Pb208
Commonly use ratio with stable Pb204
One equation might be written:

Pb206m = Pb206o + U238m (e^(λ238 t) - 1)

or, dividing through by stable Pb204:

(Pb206/Pb204)m = (Pb206/Pb204)o + (U238/Pb204)m (e^(λ238 t) - 1)

Must determine ratio (Pb206/Pb204)o and λ238.
Can find original ratio from associated lead minerals (such as galena) or can use mineral for
study that wouldn't have had any original lead (zircon, uraninite, sphene, apatite, monazite, etc.)
By using U238, U235, and Th232, theoretically you get three age determinations and they should
agree (concordant ages).
If disagreement, ages are said to be discordant.
This is probably due to gain or loss of material.
Lead-lead method
If equation for U235 is divided by equation for U238, we get another equation:

[(Pb207/Pb204)m - (Pb207/Pb204)o] / [(Pb206/Pb204)m - (Pb206/Pb204)o] = (U235/U238) (e^(λ235 t) - 1) / (e^(λ238 t) - 1)
Use of this equation called lead-lead method.


Handy because U235/U238 ratio known, as are decay constants.
Can't solve remaining equation directly for t but ages corresponding to different isotope ratios
have been plotted and can be obtained from published graphs or tables
Use of Pb-Pb method is good check on U235, U238, and Th methods because if lead lost, the ratio
of isotopes of remaining lead should not be changed and valid age should still be given.
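Since the Pb-Pb equation above cannot be solved for t in closed form, a small sketch of solving it numerically by bisection instead of from published graphs or tables (decay constants and the U235/U238 ratio are the standard values, assumed here; the measured ratio is invented):

import math

LAM_238 = 1.55125e-10     # /yr, U238 decay constant (standard value, assumed)
LAM_235 = 9.8485e-10      # /yr, U235 decay constant (standard value, assumed)
U235_U238 = 1.0 / 137.88  # present-day abundance ratio (standard value, assumed)

def pb207_pb206(t):
    # Radiogenic Pb207/Pb206 ratio produced after t years
    return U235_U238 * (math.exp(LAM_235 * t) - 1) / (math.exp(LAM_238 * t) - 1)

def pb_pb_age(ratio, lo=1e6, hi=5e9):
    # Bisection: the radiogenic Pb207/Pb206 ratio increases monotonically with t
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pb207_pb206(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(pb_pb_age(0.0625) / 1e9, "billion years")   # invented ratio -> ~0.7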

Can also directly use the ratios:

Pb206*/U238 = e^(λ238 t) - 1   and   Pb207*/U235 = e^(λ235 t) - 1   (* denotes radiogenic lead)
These two quantities increase with time at different rates and if plotted against each other, a
curved line is formed (called a concordia curve because all points on the curve have concordant
U238/Pb206 and U235/Pb207 ages).
If a rock sample has lost no Pb, calculated ages from U238 and U235 would be concordant and a
point representing the ratio of the above quantities would lie on the concordia curve.
If Pb has been lost, the ages will be discordant and the point representing the ratio will lie below
the curve.
Since lead loss would presumably be different for different areas in the sample, several different
analyses from different locations in the sample should give several different ratios and thus
several different points below the concordia curve.
It can be determined mathematically that these several points will lie on a straight line (called a
discordia).
If the discordia line is extended to intersect the concordia curve, upper intersection gives age of
rock.
Lower intersection supposedly gives time lead lost but almost never accurate since lead almost
never lost all at once but gradually over long time.
Technically could use U238 -> He4, U235 -> He4, or Th232 -> He4
But, helium may be lost since a gas.
Assume that any He present when rock was molten escaped
Therefore, any He present now formed from U or Th after solidification.
He ages thus give solidification ages
(Example: how long it takes for granite batholith to solidify).

Other Pb uses
1. Can measure average amounts of U238 and Pb206, or U235 and Pb207 in rocks at the Earth's
surface (usually use recent marine sediments).
Assume no radiogenic lead to start with, can calculate age of Earth's outer portion.
2. Begin with primeval lead (lead present when Earth formed): Pb204, Pb206, Pb207, Pb208 in certain
ratios for Earth as whole (usually assume this to be same as ratios in meteorites without U, Th).

With time, radiogenic lead increases, thus higher Pb206/Pb204, etc., ratios with time.
Can get age of Earth (4550-4750 my).
3. (variation on 2)
After a time, ore might form (example: galena).
This ore would "sample" the lead at time of formation, which would consist of the primeval lead
plus all radioactive lead formed before the time of ore formation (total lead called the common
lead).
Thus, age of ore can be determined by comparing its lead ratios to the ratios which would have
existed at various times.
4. Stable nuclei with atomic weights of about 40 and above are present in about the same abundance.
Assume when elements formed, same rule applied to unstable elements.
Now U238 is 140 times as abundant as U235.
If both once equally abundant, would take 6 billion years to reach present proportion.
Age of Universe? of our part of Universe? of our Solar System nebula?

Fission-track dating:
U238 spontaneously breaks down by fission (splits into two large parts).
This is a rare occurrence.
These fission particles pass through the surrounding material with very high energy and leave
tube-shaped damage tracks.
These tracks can be counted (etch mineral with HF, examine under microscope) and thus the
number of spontaneous fissions may be counted.
This gives amount of daughter product in sample.
Can determine (generally from measurement of amount of radiation being emitted) current U238
content in sample.
Essentially have number of daughter atoms and number of remaining parent atoms and can
thus determine age.
Useful because can be used on wide variety of substances of wide range of ages.
Disadvantage which turns out to be an advantage:
Fission tracks are "healed" by prolonged heating (millions of years).
Temperature at which healing occurs is different for each mineral.
Each different mineral thus can yield a different age (apparent disadvantage) because each
mineral has its clock "restarted" by healing at different temperatures and thus different times.
But temperature history of sample can be determined by comparing different minerals in
sample.

Potassium-Argon dating:
K40 undergoes 2 principal kinds of decay, to Ca40 and to Ar40.
Decay to Ca40 not useful, because Ca40 most common isotope of Ca and small amount produced
radiogenically would be undetectable.
Therefore, use K-Ar.
Since 2 separate decay types are possible, decay equation somewhat more complicated.
Let λ be the total decay constant, λAr be the decay constant for the K-Ar reaction, and λCa be the decay constant for the K-Ca reaction.
Then the decay equation can be written:

Ar40m = Ar40o + (λAr/λ) K40m (e^(λt) - 1)

Ar40original = 0 for all but very exotic minerals (original Ar a gas, wouldn't survive formation except
under very unusual circumstances, such as enormously high pressures).
Therefore, substituting 0 for original Ar and also substituting decay constants:
t = 1.88 x 10^9 ln (1 + 9.07 Ar40/K40)
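A quick sketch of the K-Ar age formula above (the Ar40/K40 ratio is invented):

import math

def k_ar_age(ar40_k40):
    # Age in years from the measured Ar40/K40 atomic ratio (formula from these notes)
    return 1.88e9 * math.log(1.0 + 9.07 * ar40_k40)

print(k_ar_age(0.006))   # invented ratio -> ~1.0e8 yr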

If metamorphism occurs, Ar40 already formed will probably be lost and clock reset.
K-Ar methods can therefore be used to date metamorphic events.
Disadvantage to method:

Ar is gas and will often escape

Advantages to method:

can be applied to very common and abundant rocks and minerals, since K one of major
elements in Earth's crust
Glauconite in sedimentary rocks can be used and other methods not generally useful for
sedimentary rocks

schists and slates can be dated

since Rb usually found with K, 2 independent ages can usually be obtained from same
sample and compared

wide range of ages because of length of halflife (from age of Earth to about 5000 years
old); no other methods allow dating of rocks a few tens of thousands of years old
(important for establishing chronology of recent magnetic reversals)

Samarium-Neodymium dating:
Techniques same as for Rb-Sr or K-Ar.
Has advantage that both elements are members of rare-earth group and have virtually identical
chemical properties.
Both similarly affected by weathering and metamorphic processes.
Sm/Nd ratios would remain unchanged, giving reliable date for original crystallization.

Carbon dating:
Carbon 14 dating (also called radiocarbon dating)
C14 formed in upper atmosphere by reaction of N2 with neutrons produced by cosmic rays.
Reaction is: 0n1 + 7N14 -> 6C14 + 1H1
then C14 decays: 6C14 -> 7N14 + beta particle (-1e0)
Thus, total amount of C14 in atmosphere roughly constant (production by cosmic rays balances decay).
Carbon in organism has same C14/C12 ratio as air or water does as long as organism alive.
When organism dies, C14 is no longer replenished, decays away, and the C14/C12 ratio decreases.
C14/C12 ratio thus gives age since death.
Limited to very young samples (less than 70,000 years) because of short half-life (5730 years).
Instead of measuring C14/C12 ratio in material directly, normally we compare C14 in sample to C14
in air by comparing radioactivity of the 2 samples (number of decays per minute per gram of
carbon).
A is activity of C14 in material to be dated and Ao is activity of air.
(Age of sample) t = 19,035 log Ao/A.
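A sketch of the activity-ratio age formula above (the activity values are invented):

import math

def c14_age(A, Ao):
    # Age in years from C14 activity of sample (A) vs modern activity (Ao)
    return 19035.0 * math.log10(Ao / A)

print(c14_age(A=7.65, Ao=15.3))   # invented: half the modern activity -> ~5730 yr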
Is % C14 really constant?
Known that C14 content of atmosphere increased 10 % in period 6000 to 2000 years ago.
Found by studying tree rings.
Cause not known.
Now changing because of:

burning of fossil fuels


nuclear explosions

possibly through changes in intensity of Earth's magnetic field

Natural Gamma
Concentrations of radioactive substances such as uranium and thorium can be detected by
measuring the products of their decay, especially gamma rays.
Other minerals such as titanium and zirconium are often associated with radioisotopes so
radioactivity surveying may also be used in their search. Nonradioactive minerals (especially
those formed by mineral replacement processes) are sometimes associated with depletions as
well as with concentrations of radioisotopes.
Measurements may be made from the air, along a ground traverse or in boreholes.
Different rocks often have different radioactivity and these differences can be utilized in
geologic mapping.
Radioactivity is often concentrated along faults.
Radioactivity lows are sometimes associated with oilfields but the reason is not known.
Part 3: Heat
Heat flows from points of high temperature to points of low temperature.
Methods of heat transfer:

radiation (may occur in Earth's core)


conduction

convection

Heat flow due to conduction = K x temperature gradient


where K is coefficient of thermal conductivity of substance and temperature gradient = ΔT/thickness.

The thermal diffusivity of a substance is κ = K/(ρ Cp),

where ρ is the density and Cp is the specific heat of the substance at constant pressure.
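A small sketch of a conductive heat-flow estimate using the formula above (the conductivity and gradient are typical rough figures, assumed here, not values from these notes):

# Conductive heat flow q = K * (temperature gradient), in CGS units to match the notes
K = 6.0e-3               # cal / (cm sec degC), a typical rock conductivity (assumed)
gradient = 30.0 / 1.0e5  # degC per cm (30 degC per kilometer)

q = K * gradient
print(q)                 # ~1.8e-6 cal/cm^2 sec, in the range quoted below for island arcs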
Thermal conductivity determined by:

composition (most important)


whether saturated with water (open cracks don't conduct)

pressure (closes cracks)

If K is large, then material is a good conductor of heat.


Quartz is the best conductor of heat among minerals usually encountered.
Heat travels extremely slowly through soil and rocks by conduction.
Typical values would be 15-60 km2 per million years.
If transfer due to conduction alone, a thermal event originating at a depth of 100 km will not be
perceptible near the surface for 10 million - 100 million years
Examples:

50 cm below surface - daily changes are seldom more than 1 degree and are 1/2 to 1 day
late
few meters down - only seasonal changes detectible and arrive months late

few thousand meters down - effects of last ice age still detectible

Pliocene and Pleistocene lavas are warmer than the average lava

Temperature at Earth's surface depends mainly on radiation from Sun.


Heat flow from interior is 1/1000 as much as that from Sun.
Temperature in Earth rises with depth.
Temperature gradient near surface is about 10-50 oC/kilometer but decreases with depth.
Can use mantle/core boundary conditions to estimate internal temperature.
Temperature on both sides must be same.
Material at bottom of mantle solid; material at top of core liquid.
Considering all possible materials, maximum is 2700oK.
Some sources of Earth's internal heat:

radioactivity (by far most important)


left over potential energy from formation

recrystallization

heat of fusion if outer core solidifying

chemical reactions, including oxidation at surface and exothermic reactions between sea
water and basalt

compression of rocks and friction along fault planes

Heat flow about the same all over the Earth; average heat flow for continents same as that for
oceans.
However, continental materials much richer in radioactive materials and thus should give off
more heat.

Explanation: Much of the heat flow in ocean basins is due to convection rather than conduction.


Total surface heat flow:

Oceans - small amount due to conduction; large amount due to convection


Continents - mostly due to conduction

Interesting speculation: Is it a coincidence that oceanic heat flow equals continental heat flow?
Examples of large scale anomalies:
1. lower than average heat flow:

continental shields (1.2 x 10^-6 cal/cm2 sec)


due to low concentrations of radioactive elements? or cold underlying upper mantle?
seaward of oceanic trenches

2. higher than average heat flow

island arcs (1.8 x 10^-6 cal/cm2 sec)


oceanic ridges (1.5 x 10^-6 cal/cm2 sec)

other areas of recent volcanic activity (as high as 7 x 10^-6 cal/cm2 sec)

young orogenic regions


as a result of crustal thickening?

Examples of local heat anomalies useful for prospecting:

chemical reactions which give off heat (ex. - oxidation of sulfide ores produces detectable
heat)
presence of local radioactive heat sources (ex. - granite intrusions)

differences in heat conductivity of rocks (ex. - salt is highly conductive)

presence of volcanic and hydrothermal sources

Part 4: Magnetism

Simplest magnetic structure is called a dipole.


A dipole consists of 2 poles of equal strength and opposite sign separated by a small distance.
Electrons and nuclei are dipoles.

Speculation:
Do poles always exist in pairs?
Earth is a magnet.
North-seeking pole of a magnet (also called positive) is one that is attracted to the Earth's north
pole.
Earth's north pole is a south-seeking pole.
The Earth's magnetic field is defined by giving its strength and direction.
The magnetic field strength (H) at a point in the field of a magnet is the force per unit of pole
strength which would be exerted on a pole at that point.
Magnetic field strength is also sometimes given in terms of the density of imaginary lines of
force representing the field.
1 line of force per cm2 is called a gauss; in CGS units the field strength in oersteds is numerically equal to the flux density in gauss.
Typical laboratory magnet has field strength of 10,000 Oersteds
The field strength of the Earth varies from about 0.3 Oersteds at the equator to about 0.6
Oersteds at the poles.
Direction given by specifying declination and inclination.
Declination - deflection of a north-seeking pole from geographical north; positive if toward east
Inclination or dip - deflection of north-seeking pole from horizontal; positive if down
Some terminology:

Magnetic equator - curve around the Earth connecting points where inclination is
horizontal
Magnetic dip poles - points on the Earth's surface where inclination is vertical (several
in polar region; also occur where strong local fields exist)

Isomagnetic charts - plots of Earth's magnetic field

Isodynamics - contours of equal intensity

Isogonics - contours of equal declination

Isoclinics - contours of equal inclination

Components of the Earth's field:

internally generated (99% of total); called the dipole component


externally generated (1% of total); called the non-dipole component

Internal field can be mostly accounted for by a fictitious magnetic dipole displaced from the
center of the Earth about 400 kilometers southward (toward Indonesia) and tilted 11 1/2 degrees
with respect to the axis of rotation.

Question: Where does Earth's internal field originate?


Since a uniformly magnetized sphere gives the same magnetic field as a dipole at center; there
are two possibilities:
1. Whole earth is magnetized
2. Field comes from Earth's center
If #1, Field strength should decrease with depth
If #2, Field strength should increase with depth.
Experimental evidence supports #2
Question: How is Earth's internal field produced?
Two possibilities:
1. permanently magnetized material (will discuss process later)
2. electric currents
Problem with possibility #1:
All materials lose their ability to become permanently magnetized at temperatures which are
reached in the lower crust.
Support for possibility #2:
Experimental studies show that relatively simple motions of a conducting fluid (such as a nickel-iron alloy) can produce a magnetic field.
Michael Faraday's experiment:
Conducting disk, spinning about an axle in a magnetic field.
Result is voltage difference between axle and rim of disk.
If we connect wire from axle to rim, a current will flow.
The current in the wire generates its own magnetic field which can add to the original.
Now remove original magnetic field.
If disk continues to spin quickly enough, the current keeps flowing through the wire and a
magnetic field still exists.
Called a self-exciting dynamo.
Notice 2 things necessary:

must supply energy continually to spin disk


must have small initial applied magnetic field

Possible initial field for Earth's dynamo?

some kind of primitive battery action produced by variations in chemical composition


and temperature in Earth's interior?
the Sun?

Source of energy to keep dynamo "spinning"?

thermal convection?
If so, source of heat?
Why doesn't the convection disturb the layering of the outer core (called fine structure)?
solidification of inner core?
rocking of Earth as it moves around Sun (precession) setting liquid in outer core in
motion
try rocking a bottle of liquid to see similar effect

Magnetic fields which will spontaneously reverse polarity can be produced by a combination
of disk generators.
(Will examine significance of this fact later)
Source of external field is mostly circulating electric currents in the ionosphere.
Earth's magnetic field not constant.
Changes:
1. magnetic storms
2. diurnal changes
3. secular variation
4. westward drift
5. reversals
Continuous recordings of changes are called magnetograms.
1. Magnetic storms:

last several days


change of about 1000 gamma (1 gamma = 10^-5 oersted = 1 nanotesla)

produced by charged particles emitted by the Sun.

2. Diurnal changes:

last about a day


change of about 25 gamma

produced by:

effect of radiation from Sun on ionosphere (varies with latitude)

tidal pulls of Sun and Moon on atmosphere

3. Secular variation:

regional changes
occur over decades or centuries

possible cause?
variations in core motions, especially eddies near the core boundary

4. Westward drift:

entire magnetic field "drifts" around Earth in period of about 2000 years
possible cause?
core rotates slower than rest of Earth

5. Magnetic reversals:
North magnetic pole becomes a south pole and vice versa.
There are no reasons why the Earth's field should have a particular polarity and there is no
fundamental reason why its polarity should not change.
Magnetic reversals are known to occur in the Sun and have been observed in other stars.
Major groupings of normal and reversed sequences are called magnetic epochs.
Briefer fluctuations in polarity are called events.
Average of three reversals per million years.
Reversals occurred in the preCambrian and have been found in all subsequent periods except the
Permian.
Question: Why were there no reversals in the Permian?
The most recent period of reversed polarity was about 8000 - 20000 years ago.
Reversal process takes about 5000 years.
In one area in southeastern Oregon, a gradual transition from normal to reverse magnetization
can be observed across a section made up of 6 individual flows.
During a reversal, the dipole field strength decreases to near zero.
The strength is currently dropping 5% per century and has been dropping for the past 2000 years.
We may be approaching a reversal.
Earth's magnetic field shields surface from cosmic radiation.
Cosmic radiation produces mutations.
In general, there is a rough agreement between faunal extinctions and reversals.
The probability of a correlation occurring by chance is 1 in 700.
Other correlations found:

Higher magnetic field strengths correlate with colder climates.


Question: Could climatic changes cause extinctions?
Reversals correlate with tektite increases in deep sea sediments.
Question: Do violent meteorite impacts produce reversals?

Lenz's law:
When a substance is placed in a magnetic field, little extra currents are generated inside the
atoms by a process called induction.
These currents produce a magnetic field opposite in direction to the applied field.
(For details, look up Larmor precessions in a quantum mechanics book.)
This induced field is called the Intensity of Magnetization (I) and is proportional to the applied
field: I = kH
k is called the magnetic susceptibility of the substance
Examples of direct uses of magnetic susceptibility measurements:

maximum in direction of bedding planes and foliation planes


earthquake prediction (will discuss later)

The total new field in the substance is the applied field plus the induced field.
This is called the Magnetic Induction (B): B = H + I
B is usually given in tesla (1 tesla = 10^4 gauss).
Gammas (or nanoteslas, 10^-9 tesla) are usually used in exploration geophysics.
Motions of electric particles (including electron spin and orbital motion) produce magnetic
fields.
Three types of magnetic behavior:
1. diamagnetic
2. paramagnetic
3. ferromagnetic
1. In diamagnetic substances, small magnetic fields produced by particle motions are randomly
oriented and cancel each other out, leaving atoms and ions with no net magnetic field.
Examples: salt, gypsum, marble, quartz, graphite
2. In paramagnetic substances (which include most substances), the small fields don't cancel
each other out but leave the atoms or ions with net magnetic fields.
However, since the atoms are randomly arranged, the substance as a whole has no net magnetic
field.
3. In ferromagnetic substances, the atoms have net magnetic fields and the atoms are arranged
in regions called domains in such a way that each domain has a magnetic field.

(Domains can only be explained by using quantum theory.)


However, normally the domains are randomly oriented and there is no net magnetic field in the
substance.
Examples: iron, magnetite (which is technically ferrimagnetic), hematite (technically canted antiferromagnetic), ilmenite, pyrrhotite, goethite, many other iron compounds
When each of these kinds of substances is placed in an external magnetic field (like the Earth's
field, for example), additional small magnetic fields are induced.
1. Diamagnetic substances:
Small induced field produced opposite to applied field.
Thus total field is slightly less than the applied field.
Produces small negative magnetic anomaly.
Remove applied field; induced field disappears.
2. Paramagnetic substances:
Two effects occur:
1. Small induced field produced opposite to applied field.
2. Small magnetic fields already existing are partially lined up in same direction as applied
field.
Don't line up completely because of thermal agitation; so the lower the temperature, the stronger
the effect
Effect 2 is greater.
Net effect is total field larger than applied field.
Produces small positive magnetic anomaly.
Remove applied field; induced field disappears, thermal agitation randomly distributes the atoms
3. Ferromagnetic substances:
Three effects:
1. Small induced field produced opposite to applied field.
2. Domains which are oriented in a favorable direction grow larger.
3. Domains may rotate to a more favorable direction.
Effects 2 and 3 are very large effects.
Result is a total field considerably larger than the applied field.
Remove applied field,

effect 1 disappears
effect 3 disappears because of thermal agitation

effect 2 remains and substance becomes "permanently magnetized"

Exceptions:

When temperature of substance is above the Curie Temperature, domains break down;
substance becomes paramagnetic.
Can also remove "permanent" magnetization by reversing applied field.
The strength of the reversed field necessary to reduce the magnetization to zero is called
the coercive force.

The effects of an applied external magnetic field on a ferromagnetic substance are usually shown
by using a plot called a hysteresis curve.
Magnetism remaining in a rock when the applied field is removed is called natural remanent
magnetization (NRM) or paleomagnetism.
Types include:

Thermoremanent magnetization
Depositional remanent magnetization

Chemical remanent magnetization

Isothermal remanent magnetization

Viscous remanent magnetization

Example of thermoremanent magnetization (TRM):


when lava cools and freezes, it will acquire a TRM dependent on the strength and orientation of
the Earth's field at that time.
Example of depositional remanent magnetization (DRM):
small grains of magnetic minerals, when settling or while a sediment is still wet and
unconsolidated, will align themselves with the direction of the Earth's magnetic field.
Example of chemical remanent magnetization (CRM):
acquired during growth or recrystallization of mineral grains; such as iron oxidizing
Example of isothermal remanent magnetization (IRM):
exposure to strong magnetic field for short time at relatively low temperature; such as field from
lightning strike
Example of viscous remanent magnetization (VRM):
on exposure to a magnetic field for a long time, thermal fluctuations gradually favor direction of
applied field.
One problem in interpreting paleomagnetic data is in deciding how much the magnetization has
been altered by later changes.
Examples of uses of paleomagnetism:

1. relative dating
Example: preCambrian dikes in one part of the Canadian Shield all have the same orientations
but 3 different remanence directions, indicating that they are of 3 different ages.
2. Did Japan "bend" during Tertiary?
Tertiary and Quaternary declinations for the north and south ends are the same; pre-Tertiary
declinations vary.
3. Has Spain rotated with respect to Europe?
Late Paleozoic rocks have a declination 35o different from Europe; less difference with time
4. Paleomagnetic correlation of deep-sea cores
5. Paleomagnetic inclinations allow the determination of past latitudes
Examples:

trace India's path


distinguish among terranes

6. Determine former fit of continents and time of plate break-up by use of "polar wandering"
curves which are identical until the time of break-up and then diverge (or convergence of plates
if curves merge)
7. Marine anomalies (will examine later)
Earth's magnetic field shows little relationship to broad features of geography and geology;
no obvious relationship to mountains, oceanic ridges, continents or oceans
However, field strength varies from place to place due to magnetization of rocks beneath the
surface
Can produce local disturbances of 3 Oersteds or more
(remember, Earth's average is much less)
Anomalies due to:

variation in distance to magnetic body (including relief in basement rocks)


difference in magnetic susceptibility (how easily rocks magnetized)
Magnetic susceptibility is very low for most materials; only high for ferromagnetic
substances.
Susceptibility of rocks is primarily controlled by the amount of ferromagnetic minerals in
the rock and is extremely variable.

difference in NRM

Magnetic methods involve looking for these anomalies.


More complicated than gravity anomalies because strength and direction must be determined

and because they are bipolar (have associated highs and lows).
However, no major "corrections" are made.
Note: sedimentary rocks usually produce no significant magnetic effect.
Examples of use:
1. depth to basement
measurements close to anomalous bodies show sharp anomalies; distant bodies produce smaller,
broader and smoother anomalies
On maps, the closer the contours, the shallower the source.
2. (Variation on 1) map structural features on basement
sedimentary basins are characterized by smooth contours and low magnetic relief
uplifted areas have steep gradients and high magnetic relief
3. prospect for magnetic minerals or non-magnetic minerals often found associated with
magnetic minerals
(Example: diamonds in kimberlite pipes)
Note: salt (which is diamagnetic) produces negative anomalies
4. Map rock bodies whose magnetic properties are very different from those of surrounding
rocks.
5. (Variation on 4) presence of magnetic anomalies generally means lack of sediments
6. Locate faults
A sudden change in spacing of contour lines suggests a discontinuity at depth.
Offsets of magnetic anomalies may indicate strike-slip faults which extend below the
sedimentary cover.
Magnetic anomalies are commonly interpreted qualitatively.
Sometimes individual magnetic anomalies are found which stand out so clearly that they can
easily be separated from neighboring effects and which are so simple in appearance that they
seem to be due to a single, magnetized body.
In these situations, quantitative methods can be used.
Example of sphere studied in profile:
The vertical component of the magnetic field strength (V) at a location x is (assuming vertical magnetization)

V = (4/3) π R^3 I (2z^2 - x^2) / (x^2 + z^2)^(5/2)

where R is the radius of the sphere
I is the Intensity of Magnetization
z is the depth to the center of the sphere
x is measured from a point on the surface directly above the center of the sphere to the location
Other formulas can be used for horizontal cylinders (useful for veins), horizontal sheets (for
dikes or layers faulted by vertical faults), etc., but are considerably more complicated.
All the formulas assume susceptibility known, Earth's field is vertical and magnetization is
in the directions of Earth's field, none of which is usually true.
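A minimal sketch of the sphere profile above (CGS units; the body parameters are invented for illustration):

import math

def sphere_vertical_anomaly(x, R, z, intensity):
    # Vertical magnetic anomaly (gauss) of a sphere with vertical magnetization.
    # x = horizontal offset (cm), R = radius (cm), z = depth to center (cm),
    # intensity = intensity of magnetization I (gauss).
    moment = (4.0 / 3.0) * math.pi * R**3 * intensity
    return moment * (2 * z**2 - x**2) / (x**2 + z**2) ** 2.5

# Invented body: 50 m radius, 200 m deep, I = 0.001 gauss; 1 gauss = 10^5 gamma
for x_m in (0, 100, 200, 400):
    v = sphere_vertical_anomaly(x_m * 100.0, R=5000.0, z=20000.0, intensity=1e-3)
    print(x_m, "m :", round(v * 1e5, 1), "gamma")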
Marine anomalies:
Due to thermoremanent magnetization of basalt, which is injected along the central rifts in
oceanic ridges, magnetized in the direction of the Earth's field, and then conveyed away in either
direction from the ridge.
Reversals result in parallel, linear, alternating positive and negative anomalies which are
symmetrical about the ridge axis.
Age of reversals and distance from ridge can be used to determine rate of spreading.
Varies from 1-8 cm/year.

Part 5: Electrical Methods


Most commonly used in searching for metals.
Increasingly used for finding depth to basement, in the study of groundwater, and in geothermal
exploration.
Types of methods:
1. Self Potential Methods
2. Resistivity Methods
3. Well Logging
4. Electromagnetic Methods
1. Self-Potential Methods:
Uses Potential Difference or Voltage - the difference in electrical potential energy between two
places. Unit is volt.
Potential differences occur naturally within the Earth and can be measured.
These potential differences are caused by
a. ore bodies behaving like natural "batteries" with separation of positive and negative
charge (called Electrolytic Potential)
How this works is not understood.

The most accepted theory for sulfides suggests that the portion of the ore body above the water
table is being oxidized (losing electrons) while the portion below is being reduced, setting up a
flow of electrons from one end of the ore body to the other.
This theory cannot explain anomalies where the ore body is completely below the water table,
explain why a clay overburden prevents a self-potential from forming, or explain how self-potentials form in poor conductors.
b. differences in salt concentration in water (called Electrochemical Potential)
c. solutions flowing through permeable rocks (called Streaming Potential)
d. electric activity caused by life processes of plants and animals (such as differences
between open ground and bush) (called Bioelectric Potential)
2. Resistivity methods:
Make use of the fact that some materials are good conductors of electricity and some are poor
conductors

I = σ A V / L

where I is the amount of current flowing through a body
A is the cross sectional area through which the current flows
V is the voltage
L is the distance the current flows
σ is the conductivity of the material of which the body is made
The reciprocal of the conductivity is the resistivity.
Resistivity is measured in ohm cm or ohm m.
Resistance (Resistivity x L/A), in ohms, is more commonly used by physicists.
Poor conductors have high resistivities.
Note: for inhomogeneous bodies, we actually measure a sort of average resistivity along the path
of current flow, called the apparent resistivity.
Good conductors include metals, graphite, most sulfides.
Intermediate conductors (called semi-conductors) include most oxides and porous rocks.
Poor conductors (insulators) include most common rock-forming minerals.
Current in most rocks is carried by ions in fluids in the rock's pores (called electrolytic
conduction).
A small change in water content affects resistivity enormously.
Also, the salinity of the water is highly important in determining conductivity.
The shapes and arrangements of the pores can result in greater current flow in some directions
than in others.
Faults, joints, etc., can produce "structural" conductors.
Procedure:
Current driven through ground using 2 electrodes

Potential distribution mapped with 2nd set of electrodes to determine potential difference pattern
(voltage distribution) and directions of current flow.
Anomalies (conducting bodies, for example) disturb regular patterns that would normally be
produced
Common methods look for:
1. variation of resistivity with depth
2. variation of resistivity horizontally
1. to measure variation of resistivity with depth:
current penetrates to deeper depths with increasing separation of current electrodes
can determine approximate depths to layers but not thicknesses of layers
problem 1- the deeper you go, the wider the electrodes must be spaced and the more powerful
the current supply necessary.
This limits the method to a few hundred feet.
problem 2 - a layer with intermediate resistivity between layers of high and low resistivity will
not show up.
Example - looking for groundwater where layer of wet alluvium lies between layer of dry
alluvium and layer of shale
Often used for basement depth determinations:
sedimentary section generally has range of resistivities substantially lower than basement rocks,
so can be thought of as a 2-layer problem
Quantitative method for first approximations, rough work:
(gives reasonable estimates for shallow depths; does not give good results on thick beds)
sum all apparent resistivity values up to and including present reading and plot vs electrode
spacing
Example: If readings are 100, 200, 300 ohm m for spacings of 10, 20, 30 m; plot 100, 300, 600
ohm m vs 10, 20, 30 m
then draw segments of straight lines through as many readings as possible
cross-overs of segments gives depths to interfaces
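A minimal sketch of the cumulative-resistivity bookkeeping described above, using the example readings from the notes (picking the crossover depths from the fitted line segments remains the graphical step):

# Cumulative-resistivity plot: sum the apparent resistivities and
# plot the running total against electrode spacing.
spacings = [10, 20, 30]      # m (example values from the notes)
readings = [100, 200, 300]   # ohm m apparent resistivities

cumulative = []
total = 0.0
for r in readings:
    total += r
    cumulative.append(total)

print(list(zip(spacings, cumulative)))   # [(10, 100.0), (20, 300.0), (30, 600.0)]
# Depths to interfaces are then read off where straight-line segments
# drawn through these points cross over.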
2. to measure horizontal variations in resistivity
place current electrodes great distance apart and move closely spaced potential electrodes along
grid between them
plot resistivity vs. locations of potential electrodes
can use map or profile to display data; profiles are most common.
Interpreting maps:
Can use either current lines or equipotential lines on maps
Lines of current flow always perpendicular to equipotential lines (lines along which potential is
constant)

Usually interpret maps qualitatively to simply identify locations of good conductors or good
resistors
Interpreting profiles:

Estimate of depth to conducting body (to +/- 100%) can be made by the shape of the
profile - depth is half of the width of the curve at half its maximum height.
Steep gradients in resistivity curve are characteristic markers of structures with near-vertical boundaries, such as faults, dikes, veins, stream channels, etc.
A lack of symmetry in the profile implies a dipping body, with steeper slope and
positive tail on the downdip side.

3. Well Logging:
In well logging, both potential differences and resistivities are used.
Example:
High resistivity could be due to limestone or oil bearing sand.
A potential difference indicates flow of water into or out of the well and/or a difference in salt concentration, which points to a permeable bed (a sand) rather than limestone.
Therefore the combination of high resistivity and a potential difference indicates oil bearing sand.
Main value of well logging lies in the possibility of correlation between wells.
4. Electromagnetic Methods:
a. Telluric methods
b. Magnetotelluric methods
c. Electromagnetic Induction methods
d. Induced Polarization methods

a. Telluric methods:
Faraday's Law of Induction: changing magnetic fields produce alternating currents.
Changes in the Earth's magnetic field produce alternating electric currents just below the Earth's
surface called Telluric currents.
The lower the frequency of the current, the greater the depth of penetration.
Telluric methods use these natural currents to detect resistivity differences which are then
interpreted using procedures similar to those described earlier under resistivity methods.
b. Magnetotelluric methods:
The changing magnetic fields of the Earth and the telluric currents they produce have different
amplitudes.
The ratio of the amplitudes can be used to determine the apparent resistivity to the greatest
depth in the Earth to which energy of that frequency penetrates.

Typical equation:

apparent resistivity = 0.2 (Ex/Hy)^2 / f   (in ohm m)

where Ex is the strength of the electric field in the x direction in millivolts per kilometer
Hy is the strength of the magnetic field in the y direction in gammas
f is the frequency of the currents

Depth of penetration = approximately 0.5 √(apparent resistivity / f), in kilometers
This method is commonly used in determining the thickness of sedimentary basins.
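A small sketch of the two formulas above (the field values are invented; the 0.2/f form of the apparent-resistivity expression is the standard magnetotelluric formula, assumed here):

import math

def mt_apparent_resistivity(ex_mv_per_km, hy_gamma, freq_hz):
    # Apparent resistivity (ohm m) from orthogonal E and H amplitudes
    return 0.2 * (ex_mv_per_km / hy_gamma) ** 2 / freq_hz

def mt_depth_km(rho_apparent, freq_hz):
    # Rough depth of penetration in kilometers
    return 0.5 * math.sqrt(rho_apparent / freq_hz)

rho = mt_apparent_resistivity(ex_mv_per_km=10.0, hy_gamma=1.0, freq_hz=0.1)
print(rho, "ohm m")                  # 200 ohm m for these invented values
print(mt_depth_km(rho, 0.1), "km")   # ~22 km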
c. Electromagnetic Induction methods:
Changing magnetic fields are produced by passing alternating currents through long wires or
coils.
These changing magnetic fields induce electric currents in buried conductors such as ore bodies
which then produce their own induced magnetic field.
There are a huge variety of techniques which use either the induced electric currents or the
induced magnetic field which these currents in turn produce.
This method is especially important in mineral exploration and surveys are easy to conduct from
airplanes.
(Advantages to using an airplane to conduct geophysical surveys:

not necessary to get permits from landowners


straight, evenly spaced survey grid pattern easier to obtain)

d. Induced polarization methods:


When a current is applied to a formation containing metallic minerals, each metallic mineral
grain has a small voltage produced across it in the direction of current flow.
current --------> [ mineral grain ] -------->
(negative charge added on one face of the grain, removed from the other)

When the current is turned off, the separation of charge remains for a short time and the voltage
can be measured.
The total voltage for the formation depends on the percentage of metallic minerals it contains.

Part 6: Seismology
Stress - specifies the nature of the internal forces acting within a material
Strain - defines the changes of size and shape (deformation) arising from those stresses
An elastic substance is one in which stress is proportional to strain (Hooke's Law)
The constants of proportionality are known as the elastic constants and are different for different
kinds of stress (twisting, compressing, stretching) and for different materials.
Examples:

If wire is stretched and becomes thinner, the proportionality constants are E, Young's modulus, and σ, Poisson's ratio.
If wire twisted, the proportionality constant is μ, the modulus of rigidity or shear modulus.
If a sphere is compressed, the proportionality constant is K, the bulk modulus.

In a plastic substance, under a given stress, strain is not constant but is dependent on time.
The Earth is constantly undergoing stress.
The rocks of the Earth sometimes behave elastically and sometimes plastically.
If the stress becomes large enough (the elastic limit is reached), fracturing will occur, suddenly
releasing stress and producing elastic waves which travel through the Earth (earthquake)
Five most important types of waves:

Body waves
compressional (longitudinal, primary or P-waves)
transverse (shear, secondary or S-waves)

Surface waves
Love waves (transverse, horizontal)
Rayleigh waves (circular, reverse of water wave motion)

Free oscillations

P-waves:
usually have the smallest amplitude
Velocity can be calculated from elastic constants of material through which wave is traveling - one formula is:

vp = √[(K + (4/3)μ) / ρ]

where ρ is density

S-waves:
If the particles in an S-wave all move in a parallel line, the wave is said to be polarized.
An S-wave with all vertical particle motion is called SV; one with all horizontal motion is SH.
The velocity of S-waves is given by the formula:

Vs = √(μ / ρ)

Question: Why can't S-waves travel through fluids?
In a fluid, rigidity (μ) is zero, therefore Vs must also be zero.
Question: Why are P-waves always faster than S-waves?

Because K and μ are always positive numbers, the ratio of Vp to Vs will always be greater than 1.
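A quick sketch using the velocity formulas above (the elastic constants are rough, invented values for a crustal rock):

import math

def vp_vs(K, mu, rho):
    # P and S velocities from bulk modulus K, shear modulus mu, density rho (SI units)
    vp = math.sqrt((K + 4.0 * mu / 3.0) / rho)
    vs = math.sqrt(mu / rho)
    return vp, vs

# Invented, roughly granite-like values
vp, vs = vp_vs(K=5.0e10, mu=3.0e10, rho=2700.0)
print(round(vp), "m/s,", round(vs), "m/s, ratio", round(vp / vs, 2))   # ratio ~1.73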
Love waves:
transverse and horizontal
possible only in a low-speed layer overlying a medium in which elastic waves have a higher
speed
Rayleigh waves:
particle motion in circles like water waves, but in opposite direction
travel only along the free surface of an elastic solid
amplitude decreases with depth below surface
slower than Love waves
When there is a low speed layer overlying a much thicker layer of material in which the speed of
elastic waves is higher, the surface wave velocity varies with wavelength.
This variation of velocity with wavelength is called dispersion.
For deep focus earthquakes, surface waves are either non-existent or have very low amplitudes.
Free Oscillations:
motions of the Earth as a whole

Kepler's Laws
The first law is a consequence of the conservation of energy of a planet orbiting the Sun under the effect of a central attraction that varies as the inverse square of distance. The second law, describing the rate of motion of the planet around its orbit, follows directly from the conservation of angular momentum of the planet. The third law results from the balance between the force of gravitation attracting the planet towards the Sun and the centrifugal force away from the Sun due to its orbital speed. The third law is easily proved for circular orbits.
The energy of a seismic wave is proportional to the square of its amplitude.
As a wave spreads out from its source, the energy spreads out over a large area and therefore the
amplitude decreases.
There is also a loss of energy due to friction converting the elastic energy into heat, leading to an
additional reduction in amplitude.
The loss of amplitude is called attenuation of the wave.
Need many seismographs to completely record motion of ground during an earthquake,
including one each to record N-S motion, E-W motion and up-down motion.
The relation between the natural period of a seismograph and the period of the waves being
recorded determines whether the instrument will measure the displacement, the velocity or the
acceleration associated with the Earth motion.

If the natural period of a seismograph is much less than that of the earth vibration
(frequency greater), the displacement of the seismograph becomes proportional to the
acceleration of the Earth and the instrument acts as an accelerometer.
If the two periods are approximately equal, the instrument reading will be proportional to
the velocity of the Earth motion.
If the natural period is much greater than the period of Earth vibration, the reading
becomes proportional to the actual displacement of the Earth.

When a wave meets a surface of discontinuity, part of it will be reflected and part refracted
(bent).
Every reflection or refraction generates additional waves, producing an incredibly complex
situation and seismograms which are extremely confusing.
The recognition of the several different arrivals is a skill acquired by long practice.
It is often easier to follow reflected and refracted waves by viewing them as rays moving at right
angles to the wave front.
Review of physics:
When a wave is reflected, the angle of reflection is equal to the angle of incidence.

When a wave is refracted, Snell's Law applies:

sin θ / sin θ' = v1 / v2

where v1 is the velocity in the 1st medium; v2 is the velocity in the 2nd medium;
θ is the angle of incidence and θ' is the angle of refraction.
A wave which strikes the discontinuity at the particular angle θ where sin θ = v1/v2 will not penetrate into the 2nd medium but will travel along the interface. θ is known as the critical angle of refraction when this occurs.
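A tiny sketch of Snell's law and the critical angle above (the layer velocities are invented illustrative values):

import math

def refraction_angle_deg(incidence_deg, v1, v2):
    # Angle of refraction from Snell's law: sin(theta)/sin(theta') = v1/v2
    s = math.sin(math.radians(incidence_deg)) * v2 / v1
    if s > 1.0:
        return None   # beyond the critical angle: no transmitted wave
    return math.degrees(math.asin(s))

v1, v2 = 3000.0, 5000.0   # m/s, invented layer velocities
print(refraction_angle_deg(20.0, v1, v2))             # ~34.8 degrees
print(math.degrees(math.asin(v1 / v2)), "critical")   # ~36.9 degrees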
Some applications of seismology:
1. determining location of an earthquake
2. determining magnitude of an earthquake
3. determining direction of motion along a fault
4. locating "liquid" layers inside the Earth
5. determining structure and composition of Earth, both on large scale and small scale
(seismic exploration)
1a. determining epicenter:
Since the velocities of P and S waves are different, the time interval between arrivals increases with
increasing distance, allowing the calculation of the distance between epicenter and recording
station.
Must have 3 stations to fix location.
Can usually be done to within 15 miles for a moderate earthquake and to within 3 miles in a
well-monitored area such as California.
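As a rough illustration of the idea (a sketch only; the constant 8 km/s and 4.5 km/s velocities are assumed round numbers, and real work uses travel-time tables rather than constant velocities):

    def distance_from_sp(delta_t_sec, vp=8.0, vs=4.5):
        """Epicentral distance (km) from the S-minus-P arrival-time interval,
        assuming constant P and S velocities along the path."""
        # t_S - t_P = d/vs - d/vp  =>  d = delta_t / (1/vs - 1/vp)
        return delta_t_sec / (1.0 / vs - 1.0 / vp)

    # Example: a 40 s S-P interval corresponds to roughly 410 km under these assumptions
    print(distance_from_sp(40.0))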
1b. determining depth of focus:
Consider 2 P-waves produced by an earthquake, one traveling directly through the Earth to a
recording station on the opposite side, the other first bouncing off the Earth's surface at the
epicenter and then traveling to the same recording station.
The "bounced" wave has traveled farther than the direct wave by an amount equal to twice the
depth of focus.
Thus the time interval between the arrivals of these 2 waves can be used to calculate the depth of
focus.
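A sketch of that calculation (the 6.5 km/s near-source P velocity is an assumed value, and the extra path length is only approximately twice the focal depth):

    def focal_depth_km(delay_sec, vp_near_source=6.5):
        """Depth of focus from the delay between the direct wave and the wave
        bounced at the surface near the epicenter (pP - P): the bounced wave
        travels roughly twice the focal depth farther than the direct wave."""
        extra_path_km = delay_sec * vp_near_source
        return extra_path_km / 2.0

    # a 10 s delay corresponds to a focal depth of about 33 km under these assumptions
    print(focal_depth_km(10.0))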
2. determining magnitude:
The magnitude of an earthquake is a quantitative measure of its size.
Magnitude scales were originally determined from the amplitudes of the elastic waves generated.
The Richter Magnitude Scale can be described by the following formula:
M = log10 (a/T) + f (Δ, h) + C
where:
- a is the amplitude of the ground motion for surface waves from a Southern California
earthquake recorded on a Wood-Anderson seismograph (in microns, 0.001 mm)
- T is the dominant wave period (in seconds)
- Δ is the distance (measured as the angle subtended at the center of the Earth) between the
earthquake and the seismometer
- h is the depth of focus
- f (Δ, h) is a term found from a study of many recordings. It is basically an expression for
the attenuation of the waves and has the effect of reducing all observations to a standard
distance
- C is a station correction to adjust for local peculiarities of seismometer siting.
The Richter Magnitude Scale did not originally specify which wave type was used.
Now we commonly use P-waves for deep focus earthquakes and the horizontal component of
Rayleigh waves for shallow focus earthquakes.
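A schematic calculation of M = log10(a/T) + f(Δ, h) + C (the attenuation value and station correction below are made-up placeholders, since f(Δ, h) really comes from empirical tables):

    import math

    def richter_magnitude(amplitude_microns, period_sec, f_delta_h, station_corr=0.0):
        """M = log10(a/T) + f(delta, h) + C, with a in microns and T in seconds."""
        return math.log10(amplitude_microns / period_sec) + f_delta_h + station_corr

    # Example: 1000-micron surface-wave amplitude, 20 s period,
    # f(delta, h) = 3.5 (placeholder from a hypothetical table), C = 0.1
    print(richter_magnitude(1000.0, 20.0, 3.5, 0.1))   # about 5.3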
One big problem with the Richter Magnitude Scale is that it doesn't directly measure anything
related to fault mechanics.
A relatively new scale, called the Moment Magnitude Scale, which attempts to address this
problem is now becoming widely used.
The seismic moment is defined as: Mo = μ A ū
where:
- μ is the shear modulus
- A is the area of the fault
- ū is the average displacement on the fault
The Moment Magnitude is: Mw = (2/3) log10 Mo - 10.7 (with Mo in dyne-cm)
A formula often used to give the relationship between magnitude and total elastic wave
energy of an earthquake is:
log10 E = 12.24 + 1.44 M (E is in ergs)
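A minimal sketch combining the three formulas above (the rigidity, fault dimensions and slip are invented example numbers; CGS units are used because the -10.7 constant assumes Mo in dyne-cm):

    import math

    def seismic_moment_dyne_cm(shear_modulus_dyne_cm2, fault_area_cm2, avg_slip_cm):
        """Mo = mu * A * u (CGS: dyne/cm^2, cm^2, cm -> dyne*cm)."""
        return shear_modulus_dyne_cm2 * fault_area_cm2 * avg_slip_cm

    def moment_magnitude(mo_dyne_cm):
        """Mw = (2/3) log10(Mo) - 10.7, with Mo in dyne*cm."""
        return (2.0 / 3.0) * math.log10(mo_dyne_cm) - 10.7

    def wave_energy_ergs(magnitude):
        """log10 E = 12.24 + 1.44 M (E in ergs)."""
        return 10.0 ** (12.24 + 1.44 * magnitude)

    mu = 3.0e11                 # dyne/cm^2, a typical crustal rigidity (assumed)
    area = (30e5) * (15e5)      # a 30 km x 15 km fault expressed in cm^2 (assumed)
    slip = 100.0                # 1 m of average slip, in cm (assumed)
    mo = seismic_moment_dyne_cm(mu, area, slip)
    mw = moment_magnitude(mo)   # about 6.7 for these numbers
    print(mo, mw, wave_energy_ergs(mw))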
3. First Motion Studies:
For simplification, we will choose simple horizontal strike-slip motion and choose axes parallel
and perpendicular to fault. Other cases more complicated.
In 2 of the quadrants, the first motion will be away from the epicenter; in the other 2 quadrants, the first
motion will be toward the epicenter.
Motion away from the epicenter (and toward the observer) appears as an upward movement on a
seismic record.
At right angles to the fault, the motion would be at a minimum, while at small angles to the fault,
motion would be maximum.
There will be a reversal in the direction of first motion as one crosses the trend of the fault.

Transform faults were found to be different from regular strike-slip faults by looking at their
relative movement as determined by First Motion Studies.
4. locating areas of molten or partially molten rock:
The formulas for the velocities of P and S waves indicate:
- the lower the rigidity, the lower the velocity
- S-waves don't travel through fluids.
Major regions:
- the molten outer core
- the partially melted zone in the upper mantle (about 100 km down) called the Low
Velocity Zone or asthenosphere
5a. determining depths to discontinuities
Travel times for P and S waves depend primarily on the distance they travel and therefore the
depth to which they penetrate into the Earth.
The velocities of seismic waves depend on rocks' elastic properties and can be determined.
Knowing velocities and timing the arrivals of reflected and refracted waves at known distances
from source allows the calculation of the depths to discontinuities.
Within the Earth, major discontinuities occur at depths of 30 to 60 km (the Mohorovicic
discontinuity), 2900 km (the Gutenberg discontinuity) and 5000 km.
These discontinuities are used to divide the Earth into the crust, mantle, outer core and inner
core.
In addition, there are many minor discontinuities.
Notable ones are:
- Crustal layers
- Low Velocity Zone in upper mantle (discussed previously)
The Earth can be thought of as being made up of an infinite number of layers, each with greater
density than the one above. This results in an infinite number of refractions and is responsible for
the general curved nature of the paths of seismic waves through the Earth.
Diagrams which trace the paths of seismic waves through the Earth usually use symbols as
follows:
- reflection at the surface of the Earth is indicated by a succession of chief symbols (ex. PP, PS, SS)
- reflection at the outer surface of the core is shown by interposing c (ex. PcP, ScS, PcS)
- K is used for a P-wave refracted through the outer core (PKP), often abbreviated P'
- I is used for a P-wave refracted through the inner core
- J is used for an S-wave refracted through the inner core
- For deep focus earthquakes, a small preceding s or p is used to indicate a wave moving
up from the focus to the surface (ex. pP, pS, pPcP)
5b. determining compositional variations
Knowing the velocities of seismic waves at different locations allows us to determine densities
and elastic properties at those locations.
Exploring the Earth's interior with P and S waves is sometimes called seismic tomography by
analogy with CAT scans (Computerized Axial Tomography), which use X-rays to study the interior of
a human body.
5c. Seismic prospecting methods:
Explosions, vibrations and dropped objects often used to produce artificial earthquakes.
Basic procedure is to set up seismic waves and time their arrivals at known distances.
The waves may travel along direct paths, or may be refracted or reflected.
Almost always use only the first arrivals of P-waves (regardless of the path taken).
Two commonly used types of methods:
1. Seismic refraction methods
2. Seismic reflection methods
1. Seismic refraction:
Can be used to determine thicknesses and dips of layers and seismic velocities in each layer,
making identification of rock types possible.
Example of one layer case:
Plot time of arrival of waves (T) versus distance to detector (x).
Will obtain a straight line with a slope of dT/dx (which is equal to 1/velocity), allowing
calculation of velocity of P-waves in layer.
Of limited usefulness, obviously.
Example of two layer case:
Waves can travel from source to the detectors directly or by critical refraction along the boundary
between the layers.
Those that travel directly will produce the same type of plot as in the one layer case.
The travel time versus distance plot for refracted waves will also produce a straight line but one
which has an intercept on the T axis.
(The mathematical proof for this statement and the associated calculations can be found in any
introductory geophysics text, generally occupying a number of pages of manipulations of
formulae. Go look it up if you are interested.)

The depth to the boundary is
z = (Ti / 2) × V1 V2 / √(V2² - V1²)
where Ti is the intercept on the T axis, V1 is the velocity in the upper layer and V2 is the
velocity in the lower layer.
The slope of the line is 1/V2.
In reality, since we measure only first arrivals, at distances less than a certain distance (called the
critical distance), the direct wave is recorded and at distances beyond the critical distance, the
refracted wave is recorded.
The plot we obtain is thus made up of segments of two straight lines and allows us to obtain the
velocities in both layers and the depth to the interface.
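A small sketch of the two-layer interpretation (the slopes and intercept are invented values, as if read off a hypothetical T-versus-x plot):

    import math

    def two_layer_depth(v1, v2, t_intercept):
        """Depth to the interface from the intercept time of the refracted-wave line:
        z = (Ti / 2) * v1 * v2 / sqrt(v2**2 - v1**2)."""
        return 0.5 * t_intercept * v1 * v2 / math.sqrt(v2**2 - v1**2)

    v1 = 1.0 / 0.5     # direct-wave slope of 0.5 s/km  -> V1 = 2 km/s (assumed)
    v2 = 1.0 / 0.25    # refracted-wave slope of 0.25 s/km -> V2 = 4 km/s (assumed)
    ti = 0.4           # intercept on the T axis, in seconds (assumed)
    print(two_layer_depth(v1, v2, ti))   # depth to the boundary, about 0.46 km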
For multi-layer cases, the procedure is similar but more complicated.
The plot is made up of one line segment for each layer.
Velocities can be read off the graph fairly easily but the equations used to obtain the depths to the
interfaces are horrendous and generally impossible without the use of a computer.
Example of a situation where the higher velocity layer is on top (very rare in nature):
- no critical refraction occurs at the interface
- the lower layer is missed and its thickness is not accounted for
- this leads to errors in the calculated depths
Example where velocity increases continuously with depth:
Basically the same as a multi-layer case with an infinite number of layers.
Plot will look like a curve with the shape of the curve dependent upon how the velocity varies
with depth.
Example of case of fault:
If a bed is faulted vertically, the plot obtained perpendicular to the strike of the fault will consist
of 2 parallel but displaced linear segments.
The throw (vertical displacement) of the fault can be calculated from the difference between the
T intercepts of the two linear segments.
Example of dipping layers:
If layers are horizontal, the same plot will be obtained by reversing positions of the energy
source and the detector.
This will not be true if layers dip.
The apparent dip and velocities in the layers can still be determined but the procedure is
extremely complicated. Consult geophysics text if interested.
2. Seismic reflection:
the most widely used and valuable geophysical exploration method and one of the easiest to
interpret qualitatively

Seismic waves traveling down from a source are reflected upward from each interface
encountered.
Interfaces are not necessarily boundaries between layers but could be any of a number of
lithologic changes which cause velocity contrasts.
Reflections from a single shot are usually recorded by groups of geophones - frequently as many
as 96.
When several closely spaced detectors are laid out along a line, each will record a reflection from
each interface.
If the seismograms from these detectors are recorded parallel to each other, the waves
corresponding to a reflection will all line up across the records in such a way that the crests
and troughs on adjacent traces will appear more or less to fit into one another.
To make a record easier to analyse, we usually make a dynamic correction (also called normal
moveout).
The different geophones were at different distances from the shot point and therefore the waves
had longer distances to travel.
The dynamic correction has the effect of mathematically placing all geophones at the same
distance from the shot point.
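One common way to do this (a sketch assuming a single horizontal reflector, straight ray paths and a constant velocity; the numbers are invented) is to shift each pick to its equivalent zero-offset time:

    import math

    def nmo_corrected_time(t_observed, offset_x, velocity):
        """Remove normal moveout: for a horizontal reflector,
        t(x)^2 = t0^2 + (x/V)^2, so t0 = sqrt(t(x)^2 - (x/V)^2)."""
        return math.sqrt(t_observed**2 - (offset_x / velocity)**2)

    # Example: a reflection picked at 1.05 s on a geophone 600 m from the shot,
    # with an average velocity of 2000 m/s (assumed values)
    print(nmo_corrected_time(1.05, 600.0, 2000.0))   # about 1.006 s zero-offset time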
Other corrections might involve:
- elevation variations
- removing the effects of the surface layer because it is generally very variable and not of
particular interest
- correcting for the fact that we are assuming vertical paths for the incident and reflected
rays (this would not be true for dipping or irregular surfaces) and correcting for
diffraction effects (both corrections called seismic migration)
- removing multiple reflections (called deconvolution)
After reflections have been identified, they are timed, using the trough of the 1st wave.
For horizontal beds, where T is the travel time, x is the distance between the shot point and the
receiver, and V is the average velocity in the section above the interface, the depth to the
interface is:
z = ½ √(V²T² - x²)
The average velocity in an area is often determined by exploding charges of dynamite in a
shallow drill hole alongside a deep exploratory borehole and recording the arrival times of waves
at detectors at a number of depths in the hole.
The average velocity is simply the total vertical distance divided by the total time.
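A numerical sketch of the depth formula above, using an average velocity of the kind obtained from such a borehole survey (the travel time, offset and velocity are assumed example values):

    import math

    def reflector_depth(travel_time, offset_x, avg_velocity):
        """Depth to a horizontal reflector: z = 0.5 * sqrt((V*T)**2 - x**2)."""
        return 0.5 * math.sqrt((avg_velocity * travel_time)**2 - offset_x**2)

    # Example: T = 1.0 s, x = 500 m, V = 2500 m/s  ->  depth of about 1225 m
    print(reflector_depth(1.0, 500.0, 2500.0))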
The difference between the times of a peak or a trough for the same reflection at successive
detector positions gives information about the dip of the reflecting interface.

Changing the distance between the shot point and the geophones gives several readings for the
same reflecting surfaces.
This results in the same reflection signal being recorded but different "noise" signals, enabling
us to remove the noise signals (or at least to minimize them) with the use of various techniques.
Filters used in geophysics can be compared to maps of different scales, each emphasizing features of a different size.
One geophysicist's noise is another's music. Rayleigh waves (disparagingly called ground roll)
get in the way of exploration geophysics but are very important in crustal studies.
Noises are due to many things and we could devote an entire course to the techniques used to
deal with them.
Interpretation:
Know thicknesses and know velocities.
Have at least some knowledge of the geology of the area.
In addition to type of rock, several other factors also affect velocity, including porosity and
water content.
Guess a little.

Seismic Tomography
Seismic tomography uses data from hundreds of earthquakes and recording stations to generate
a sort of CAT scan of the Earth in a way that is similar to the whole-body scanning method used
for medical purposes.
The computer modeling methods are very complex. The end result is a three-dimensional
model of the shear-wave velocity within the Earth.
These S-wave variations provide information about temperature conditions and mantle flow.

Earthquake Prediction
Geophysical properties used in earthquake prediction attempts:
1. slowing down of seismic waves
- Before an earthquake, the P-wave velocity drops to a minimum and then returns to
normal.
- The quake occurs in about 1/10 of the time that the anomaly lasted.
- The size of the quake correlates with the duration of the anomaly.
- Possible explanation: when cracks first begin to open, P-waves slow down because they
don't travel as fast through open space as they do through solid rock. Ground water then
seeps in and the P-wave velocity returns to normal; the rocks are also lubricated.
- Problems:
  - the velocity anomaly usually doesn't occur
  - sometimes when it occurs, earthquakes don't follow
2. rock deformation
- characterised by tilting or vertical changes
3. increase in electrical resistivity
- Possible explanation: air in cracks is not a good conductor
4. local magnetic field changes
- Laboratory experiments show that compression in the direction of magnetization reduces
susceptibility and remanence; perpendicular compression increases it. The effect is probably
due to rotation of magnetic domains.
- Could be related to the increase in stress before the quake or the release of stress at the time of faulting.
5. electromagnetic "noise"
6. "earthquake lights"

Due to Snell's law, the ray thus bends at the interface if the two velocities are
different. Since this relation holds at every interface, the quantity sin(i)/v is constant
along the entire ray path; this constant, p = sin(i)/v, is called the ray parameter. The quantity
1/p is the apparent velocity of the wave along the surface.
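A sketch showing the ray parameter staying constant as a ray passes down through layers of increasing velocity (the velocity values are invented):

    import math

    def incidence_angles(theta0_deg, velocities):
        """Propagate a ray through a stack of layers using a constant ray
        parameter p = sin(i)/v; returns the incidence angle in each layer."""
        p = math.sin(math.radians(theta0_deg)) / velocities[0]
        angles = []
        for v in velocities:
            s = p * v
            if s > 1.0:
                break            # the ray bottoms (turns) before reaching this layer
            angles.append(math.degrees(math.asin(s)))
        return angles

    # 20 deg incidence at the top; velocities increase with depth (assumed km/s values)
    print(incidence_angles(20.0, [6.0, 7.0, 8.0, 9.0]))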

P: compressional wave
S: shear wave
K: P wave through the outer core
I: P wave through the inner core
PP and SS: P or S wave reflected at the surface
PPP: P wave reflected twice at the surface, etc.
SP and PS: S reflected as P, or P as S, at the surface
pP, pS, sS or sP: P or S wave upgoing from the focus and reflected at the surface
c: wave reflected at the core-mantle boundary
Pdif: P wave diffracted along the core-mantle boundary
Pg At short distances, either an upgoing P wave from a source in the upper crust or a P wave
bottoming in the upper crust. At larger distances
also arrivals caused by multiple P-wave reverberations inside the whole crust with a group
velocity around 5.8 km/s.
Pb (alt:P*) Either an upgoing P wave from a source in the lower crust or a P wave bottoming in
the lower crust
Pn Any P wave bottoming in the uppermost mantle or an upgoing P wave from a source in the
uppermost mantle
PnPn Pn free surface reflection
PgPg Pg free surface reflection
PmP P reflection from the outer side of the Moho
PmPN PmP multiple free surface reflection; N is a positive integer. For example, PmP2 is
PmPPmP
PmS P to S reflection from the outer side of the Moho
Sg At short distances, either an upgoing S wave from a source in the upper crust or an S wave
bottoming in the upper crust. At larger distances
also arrivals caused by superposition of multiple S-wave reverberations and SV to P and/or P to
SV conversions inside the whole crust.
Sb (alt:S*) Either an upgoing S wave from a source in the lower crust or an S wave bottoming in
the lower crust

Sn Any S wave bottoming in the uppermost mantle or an upgoing S wave from a source in the
uppermost mantle
SnSn Sn free surface reflection
SgSg Sg free surface reflection
SmS S reflection from the outer side of the Moho
SmSN SmS multiple free surface reflection; N is a positive integer. For example, SmS2 is
SmSSmS
SmP S to P reflection from the outer side of the Moho
Lg A wave group observed at larger regional distances and caused by superposition of multiple
S-wave reverberations and SV to P and/or P to
SV conversions inside the whole crust. The maximum energy travels with a group velocity
around 3.5 km/s
Rg Short period crustal Rayleigh wave
MANTLE PHASES
P A longitudinal wave, bottoming below the uppermost mantle; also an upgoing longitudinal
wave from a source below the uppermost mantle
PP Free surface reflection of P wave leaving a source downwards
PS P, leaving a source downwards, reflected as an S at the free surface. At shorter distances the
first leg is represented by a crustal P wave.
PPP analogous to PP
PPS PP to S converted reflection at the free surface; travel time matches that of PSP
PSS PS reflected at the free surface
PcP P reflection from the core-mantle boundary (CMB)
PcS P to S converted reflection from the CMB
PcPN PcP multiple free surface reflection; N is a positive integer. For example PcP2 is PcPPcP
Pz+P (alt:PzP) P reflection from outer side of a discontinuity at depth z; z may be a positive
numerical value in km. For example P660+P is a P

reflection from the top of the 660 km discontinuity.


PzP P reflection from inner side of discontinuity at depth z. For example, P660P is a P
reflection from below the 660 km discontinuity, which
means it is precursory to PP.
Pz+S (alt:PzS) P to S converted reflection from outer side of discontinuity at depth z
Example of phase identification on a record: the Pn and Sn travel times are used for processing. The
second S-wave fits with both Lg and Sg and it is difficult to judge which one it is. If it were an Sg, a Pg
should be expected where "Pg?" is marked (the Sg-Sn time difference is 1.78 times the Pg-Pn
difference), but no clear phase is seen. So most likely the second S-phase is Lg.
What is Migration?
Migration is a tool used in seismic processing to get an accurate picture of
underground layers. It involves geometric repositioning of return signals
to show an event (layer boundary or other structure) where it is being hit
by the seismic wave rather than where it is picked up.
The two broad classes of migration methods are pre-stack and post-stack migration.
How to determine focal depth

A seismic wave used to determine focal depth is the sP phase - an S wave reflected as a P wave
from the Earth's surface at a point near the epicenter. This wave is recorded after the pP by about
one-half of the pP-P time interval. The depth of an earthquake can be determined from the sP
phase in the same manner as the pP phase by using the appropriate travel-time curves or depth
tables for sP.
If the pP and sP waves can be identified on the seismogram, an accurate focal depth can be
determined.

Coulomb's Law

a force exists between 2 magnetic poles:
F = (μ0 / 4π) (m1 m2 / r²) r̂
where
- F is the force
- μ0 is the permeability of free space, μ0 = 4π x 10⁻⁷ henry/m (in SI units)
- m1 and m2 are the magnetic pole strengths
- r is the distance separating the poles
- r̂ is the unit radial vector

unlike gravity, poles come in 2 flavors:
- + (north-seeking)
- - (south-seeking)
like poles repel (F is +, force is outward)
unlike poles attract (F is -, force is inward)
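A minimal sketch of the pole-pole force in SI units (the pole strengths and separation are invented numbers; pole strength is taken in A·m, which is an assumption about units not stated in these notes):

    import math

    MU0 = 4.0 * math.pi * 1.0e-7      # permeability of free space, henry/m

    def pole_force_newtons(m1, m2, r_m):
        """F = (mu0 / 4*pi) * m1 * m2 / r**2; positive = repulsion, negative = attraction."""
        return (MU0 / (4.0 * math.pi)) * m1 * m2 / r_m**2

    print(pole_force_newtons(10.0, 10.0, 0.05))    # two like poles 5 cm apart -> positive (repulsive)
    print(pole_force_newtons(10.0, -10.0, 0.05))   # unlike poles -> negative (attractive)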

Magnetic Induction, B

as with gravity, we are interested in the force the Earth exerts on a unit pole
(like acceleration, with g):
B = F / m
- also called 'magnetic field intensity'
- analogous to gravitational acceleration (but not acceleration units!)
- force per unit pole strength (force exerted on a unit magnetic pole)
(In our analogy with gravity, m here is the Earth's "monopole" field, which is a
fiction; Stacey incorrectly calls B the "magnetic field", which is H)
Magnetic Field Strength, H
- if we only had to deal with a vacuum (or even air, since it has
negligible magnetic susceptibility), we could always deal with H
(magnetic field strength)
- however, in the presence of "magnetizable" material, there is a magnetic
polarization (or, simply, magnetization) of the material which produces
an additional field (J) which adds to H
- combining the field strength, H, and the magnetic polarization
(magnetization), J, gives what is called the magnetic induction, B

How much bigger and stronger is a magnitude 8.7 earthquake compared to a magnitude 5.8
earthquake?

A magnitude 8.7 earthquake is 794 times BIGGER on a seismogram than a magnitude 5.8
earthquake. The magnitude scale is logarithmic, so
(10**8.7)/(10**5.8) = (5.01*10**8)/(6.31*10**5) = 0.794*10**3 = 794
OR
10**(8.7-5.8) = 10**2.9 = 794.328

Another way to get about the same answer without using a calculator is that since 1 unit of
magnitude is 10 times the amplitude on a seismogram and 0.1 unit of magnitude is about 1.3
times the amplitude, we can get,
10 * 10 * 10 / 1.3 = 769 times
[not exact, but a decent approximation]

The magnitude scale is really comparing amplitudes of waves on a seismogram, not the
STRENGTH (energy) of the quakes. So, a magnitude 8.7 is 794 times bigger than a 5.8 quake as
measured on seismograms, but the 8.7 quake is about 23,000 times STRONGER than the 5.8!
Since it is really the energy or strength that knocks down buildings, this is really the more
important comparison. This means that it would take about 23,000 quakes of magnitude 5.8 to
equal the energy released by one magnitude 8.7 event. Here's how we get that number:
One whole unit of magnitude represents approximately 32 times (actually 10**1.5 times) the
energy, based on a long-standing empirical formula that says log(E) is proportional to 1.5M,
where E is energy and M is magnitude. This means that a change of 0.1 in magnitude is about 1.4
times the energy release. Therefore, using the shortcut shown earlier for the amplitude
calculation, the energy is,
32 * 32 * 32 / 1.4 = 23,405 or about 23,000
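The same arithmetic as a short sketch:

    def amplitude_ratio(m_big, m_small):
        """Ratio of seismogram amplitudes: one magnitude unit = a factor of 10."""
        return 10.0 ** (m_big - m_small)

    def energy_ratio(m_big, m_small):
        """Ratio of released energies: log10(E) is proportional to 1.5*M,
        so one magnitude unit = a factor of about 32."""
        return 10.0 ** (1.5 * (m_big - m_small))

    print(amplitude_ratio(8.7, 5.8))   # about 794
    print(energy_ratio(8.7, 5.8))      # about 22,400 -- roughly the 23,000 quoted above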

Electronics and seismic instrumentation

Ohm's law, use of a multimeter, internal resistance, power supply.

- Seismic sensors: Mechanical and electrical constants, how to measure constants, overview of
main types.
Knowledge of approximate values of generator constants and free period for different types of
sensors.

- Signal conditioning: Amplifier general properties, filters.

- Digital instruments: What is an A/D converter, what does the number of bits mean, typical units on
the market, anti-alias filters, GSN station, SEISLOG station.

- Triggered systems: Detector and data storage.

- Seismic noise: Generally expected shape of noise curves, noise measured as displacement or
power spectral density of acceleration.

A diode is an electrical component acting as a one-way valve for current.


When voltage is applied across a diode in such a way that the diode allows current, the diode
is said to be forward-biased.
When voltage is applied across a diode in such a way that the diode prohibits current, the
diode is said to be reverse-biased.
The voltage dropped across a conducting, forward-biased diode is called the forward voltage.
Forward voltage for a diode varies only slightly for changes in forward current and temperature,
and is fixed by the chemical composition of the P-N junction.
Silicon diodes have a forward voltage of approximately 0.7 volts.
Germanium diodes have a forward voltage of approximately 0.3 volts.
The maximum reverse-bias voltage that a diode can withstand without breaking down is
called the Peak Inverse Voltage, or PIV rating.
Purpose of a Zener Diode
Exceeding a normal diode's PIV usually results in destruction of the diode. Special
types of diodes, though, are designed to break down in reverse-bias mode
without damage; these are called zener diodes and are used to provide a stable reference voltage.
Purpose of Rectification

Rectification is the conversion of alternating current (AC) to direct current (DC).


A half-wave rectifier is a circuit that allows only one half-cycle of the AC voltage
waveform to be applied to the load, resulting in one non-alternating polarity across it. The
resulting DC delivered to the load pulsates significantly.

A full-wave rectifier is a circuit that converts both half-cycles of the AC voltage


waveform to an unbroken series of voltage pulses of the same polarity. The resulting DC
delivered to the load doesn't pulsate as much.

Polyphase alternating current, when rectified, gives a much smoother DC waveform


(less ripple voltage) than rectified single-phase AC.

How a rectifier is used with capacitor

A peak detector is a series connection of a diode and a capacitor outputting a DC


voltage equal to the peak value of the applied AC signal.
Clipper, clamper and voltage multiplier circuits act as specialized circuits to
produce a particular output. A voltage multiplier produces a DC multiple (2, 3, 4, etc.) of
the AC peak input voltage.

Diodes can perform switching and digital logic operations. Forward and reverse bias switch a
diode between the low and high impedance states, respectively. Thus, it serves as a switch.
LED: emission of specific-frequency radiant energy whenever electrons fall from a
higher energy level to a lower energy level.

An ohmmeter may be used to qualitatively check diode function. There should be low
resistance measured one way and very high resistance measured the other way. When using an
ohmmeter for this purpose, be sure you know which test lead is positive and which is negative!
The actual polarity may not follow the colors of the leads as you might expect, depending on the
particular design of meter.
Some multimeters provide a diode check function that displays the actual forward voltage
of the diode when it is conducting current. Such meters typically indicate a slightly lower forward
voltage than what is nominal for a diode, due to the very small amount of current used during
the check.
Maximum DC reverse voltage
Maximum (average) forward current
Maximum total dissipation
Operating junction temperature
Maximum reverse current
Typical junction capacitance = CJ, the typical amount of capacitance intrinsic to the
junction, due to the depletion region acting as a dielectric separating the anode and
cathode connections.
Reverse recovery time = trr, the amount of time it takes for a diode to turn off
when the voltage across it alternates from forward-bias to reverse-bias polarity.

LED junctions glow when forward biased. A diode intentionally designed to glow like a lamp is
called a light-emitting diode, or LED.

Forward-biased silicon diodes give off heat as electrons and holes from the N-type and P-type
regions recombine at the junction.

A SOLAR CELL operates in photovoltaic (PV) mode because it is forward biased by the voltage
developed across the load resistance.

PIN diodes are used in place of switching diodes in radio frequency (RF) applications, PIN diode
is manufactured like a silicon switching diode with an intrinsic region added between the PN
junction layers. This yields a thicker depletion region, the insulating layer at the junction of a
reverse biased diode. This results in lower capacitance than a reverse biased switching diode.

TRIODE: as in a thermionic diode, the heated cathode (heated either directly or indirectly by means of a
filament) produces a space charge of electrons that may be attracted to the positively charged plate
(anode in UK parlance), creating a current. Applying a negative charge to the control grid will
tend to repel some of the (also negatively charged) electrons back towards the cathode: the larger
the charge on the grid, the smaller the current to the plate. If an AC signal is superimposed on the
DC bias of the grid, an amplified version of the AC signal appears (inverted) in the plate circuit.
Difference between AC and DC

Amount of energy that can be carried:
  AC - safe to transfer over longer city distances and can provide more power.
  DC - the voltage of DC cannot travel very far until it begins to lose energy.
Cause of the direction of flow of electrons:
  AC - rotating magnet along the wire.
  DC - steady magnetism along the wire.
Frequency:
  AC - 50 Hz or 60 Hz depending upon the country.
  DC - zero.
Direction:
  AC - reverses its direction while flowing in a circuit.
  DC - flows in one direction in the circuit.
Current:
  AC - current of magnitude varying with time.
  DC - current of constant magnitude.
Flow of electrons:
  AC - electrons keep switching directions, forward and backward.
  DC - electrons move steadily in one direction or 'forward'.
Obtained from:
  AC - A.C. generator and mains.
  DC - cell or battery.
Passive parameters:
  AC - impedance.
  DC - resistance only.
Power factor:
  AC - lies between 0 and 1.
  DC - always 1.
Types:
  AC - sinusoidal, trapezoidal, triangular, square.
  DC - pure and pulsating.

How do you measure the free period of a seismometer?

For a sensitive seismometer, connect a multimeter in DC mode (most sensitive range) to the sensor
output, give the mass a small displacement, and time the oscillations seen in the output.
Identifying polarity of a seismometer
The first step is to identify the positive terminal on the seismometer by rapidly tilting the
seismometer so that it moves UP (when using a vertical seismometer) or horizontally in the
direction of rotation (for a horizontal seismometer).
Broadband seismometer
A broadband sensor can be understood as a usual velocity sensor with an extended frequency
range at the low-frequency end.

Identifying dominant period of seismometer
- Connect a voltmeter to the output of the sensor
- Take into account the frequency response of the multimeter or oscilloscope used
- Plot frequency versus voltage output
Accelerometer
The accelerometer measures the ground acceleration from DC to about 50 Hz without significant
change in gain. The instrument therefore can measure static changes in the gravity field as well
as dynamic changes. When the accelerometer is placed horizontally, theoretically output from all
3 sensors should be zero.
Use of instrument noise
Theoretically, a standard seismometer or geophone will output a signal in the whole
frequency range of interest in seismology. So why not just amplify and filter the
seismometer output to get any desired frequency response instead of going to the trouble of
making complex BB sensors? That brings us into the topic of instrument self noise. All
electronic components, as well as the sensor itself, generate noise. If that noise is larger than
the signal generated from the ground motion, we obviously have reached a limit.

STUDY QUESTIONS FOR GEOPHYSICS


State the Law of Universal Gravitation
State Newton's Second Law.
Using the Law of Universal Gravitation and Newton's Second Law, derive an expression for the
acceleration of gravity.
What is the approximate value for the acceleration of gravity?
What is the unit used for the acceleration of gravity? How many cm/sec2 is it equal to?
How do we know that the interior of the Earth must be composed of rocks denser than those on
the Earth's surface?
Scientists believe that the overall chemical composition of the Earth is very similar
to a kind of meteorite called chondrites, which formed at the same time the Earth
was formed. We know a lot about the composition of the Earth's crust and mantle
because we can observe those rocks that have been brought to the surface by
geologic processes.
