
Acoustics

Artificial omni-directional sound source in anechoic acoustic chamber


Acoustics is the interdisciplinary science that deals with the study of sound, ultrasound and infrasound (all
mechanical waves in gases, liquids, and solids). A scientist who works in the field of acoustics is an acoustician.
The application of acoustics in technology is called acoustical engineering. There is often much overlap and
interaction between the interests of acousticians and acoustical engineers.
Hearing is one of the most crucial means of survival in the animal world, and speech is one of the most
distinctive characteristics of human development and culture. So it is no surprise that the science of acoustics
spreads across so many facets of our society - music, medicine, architecture, industrial production, warfare and
more. Art, craft, science and technology have provoked one another to advance the whole, as in many other
fields of knowledge.

Fundamental concepts of acoustics


The study of acoustics revolves around the generation, propagation and reception of mechanical waves and
vibrations.

At Jay Pritzker Pavilion, a LARES system is combined with a zoned sound reinforcement system, both
suspended on an overhead steel trellis, to synthesize an indoor acoustic environment outdoors.

The same basic steps can be found in any acoustical event or process. There are many kinds of
cause, both natural and volitional. There are many kinds of transduction process that convert energy from some
other form into acoustical energy, producing the acoustic wave. There is one fundamental equation that
describes acoustic wave propagation, but the phenomena that emerge from it are varied and often complex. The
wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other
forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical
or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether
we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical
acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many
forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave. Transduction processes are also
of special importance.

Wave propagation: pressure levels


In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this
disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear,
known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness
of these disturbances is called the sound pressure level, and is measured on a logarithmic scale in decibels.
Mathematically, sound pressure level is defined as

SPL = 20 log10(P / Pref) dB

where Pref is the threshold of hearing and P is the change in pressure from the ambient pressure. The following
table gives a few examples of sounds, with their strengths in pascals and decibels:

Example of common sound      Pressure amplitude    Decibel level
Threshold of hearing         20 × 10^-6 Pa         0 dB
Normal talking at 1 m        0.002 to 0.02 Pa      40 to 60 dB
Power lawnmower at 1 m       2 Pa                  100 dB
Threshold of pain            200 Pa                134 dB
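
As a rough consistency check, the decibel values in this table can be reproduced from the pressure amplitudes using the formula above. A minimal Python sketch (the function name spl_db is only illustrative):

    import math

    P_REF = 20e-6  # threshold of hearing, 20*10^-6 Pa

    def spl_db(p):
        """Sound pressure level in dB for a pressure amplitude p in Pa."""
        return 20 * math.log10(p / P_REF)

    print(spl_db(0.02))  # normal talking at 1 m  -> 60.0 dB
    print(spl_db(2.0))   # power lawnmower at 1 m -> 100.0 dB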

Wave propagation: frequency


Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because
this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are
pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic
measurement, acoustic signals are sampled in time and then presented in more meaningful forms such as octave
bands or time-frequency plots. Both of these popular methods are used to analyze sound and to better understand
acoustic phenomena.
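
As an illustration of such a time-frequency presentation, the sketch below computes a spectrogram of a sampled test tone with standard scientific-Python tools; the 1 kHz tone and the 44.1 kHz sample rate are arbitrary illustrative choices, not values from the text:

    import numpy as np
    from scipy import signal

    fs = 44100                             # sample rate in Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)          # one second of samples
    x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone

    # Short-time Fourier analysis: frequency bins, time bins, power per cell
    f, tt, Sxx = signal.spectrogram(x, fs)
    print(f[np.argmax(Sxx.mean(axis=1))])  # dominant frequency, close to 1000 Hz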
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls
between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear.
This range has a number of applications, including speech communication and music. The ultrasonic range
refers to very high frequencies: 20,000 Hz and higher. Because these frequencies have shorter wavelengths, they allow
better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely
on the ultrasonic frequency range. At the other end of the spectrum, the lowest frequencies are known as the
infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.

Transduction in acoustics

An inexpensive low-fidelity 3.5-inch driver, typically found in small radios
A transducer is a device for converting one form of energy into another. In an acoustical context, this usually
means converting sound energy into electrical energy (or vice versa). For nearly all acoustic applications, some
type of acoustic transducer is necessary. Acoustic transducers include loudspeakers, microphones, hydrophones
and sonar projectors. These devices convert an electric signal to or from a sound pressure wave. The most
widely used transduction principles are electromagnetism (at lower frequencies) and piezoelectricity (at higher
frequencies).
A subwoofer, used to generate lower frequency sound in speaker audio systems, is an electromagnetic device.
Subwoofers generate sound using a suspended diaphragm that oscillates, sending out pressure waves. Electret
microphones are a common type of microphone that employs an effect similar to piezoelectricity: as a
sound wave strikes the electret's surface, the surface moves and produces an electrical signal.

Sound

A drum produces sound through the vibration of its membrane


Sound is a travelling wave, an oscillation of pressure transmitted through a solid, liquid, or gas that is
composed of frequencies within the range of hearing and is of a level sufficiently strong to be heard; the term
also refers to the sensation stimulated in the organs of hearing by such vibrations.

Perception of sound

Human ear
For humans, hearing is normally limited to frequencies between about 12 Hz and 20,000 Hz (20 kHz)[2],
although these limits are not definite. The upper limit generally decreases with age. Other species have a
different range of hearing. For example, dogs can perceive vibrations higher than 20 kHz. As a signal perceived
by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and
communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf,
or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, marine

and terrestrial mammals, have also developed special organs to produce sound. In some species, these have
evolved to produce song and speech. Furthermore, humans have developed culture and technology (such as
music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound.

Physics of sound
The mechanical vibrations that can be interpreted as sound are able to travel through all forms of matter: gases,
liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel
through a vacuum.

Longitudinal and transverse waves

Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The
horizontal axis represents time.
Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves.
Through solids, however, it can be transmitted as both longitudinal and transverse waves. Longitudinal sound
waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of
compression and rarefaction, while transverse waves in solids are waves of alternating shear stress.
Matter in the medium is periodically displaced by a sound wave, and thus oscillates. The energy carried by the
sound wave converts back and forth between the potential energy of the extra compression (in case of
longitudinal waves) or lateral displacement strain (in case of transverse waves) of the matter and the kinetic
energy of the oscillations of the medium.

Sound wave properties and characteristics


Sound waves are characterized by the generic properties of waves, which are frequency, wavelength, period,
amplitude, intensity, speed, and direction (sometimes speed and direction are combined as a velocity vector, or
wavelength and direction are combined as a wave vector).
Transverse waves, also known as shear waves, have an additional property of polarization.

Sound characteristics can depend on the type of sound waves (longitudinal versus transverse) as well as on the
physical properties of the transmission medium.
When the pitch of a sound wave changes, the distance between successive pressure maxima (the wavelength)
changes as well, which corresponds to a change in frequency. When the loudness of a sound wave changes, so does
the degree of compression of the medium through which the wave travels; this degree of compression is the wave's
amplitude.

Speed of sound
The speed of sound depends on the medium through which the waves are passing, and is often quoted as a
fundamental property of the material. In general, the speed of sound is proportional to the square root of the
ratio of the elastic modulus (stiffness) of the medium to its density. Those physical properties and the speed of
sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In
air at 20 °C (68 °F) at sea level, the speed of sound is approximately 343 m/s (1,235 km/h; 767 mph), following the
formula v = (331 + 0.6T) m/s, where T is the temperature in °C. In fresh water, also at 20 °C, the speed of sound is approximately 1,482 m/s
(5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph).[5] The
speed of sound is also slightly sensitive (a second-order anharmonic effect) to the sound amplitude, which
means that there are nonlinear propagation effects, such as the production of harmonics and mixed tones not
present in the original sound (see parametric array).
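
The quoted formula for air can be evaluated directly; a small sketch, valid only as the rough linear approximation given above:

    def speed_of_sound_air(temp_c):
        """Approximate speed of sound in air (m/s) at temp_c degrees Celsius,
        using the linear approximation v = 331 + 0.6*T quoted above."""
        return 331.0 + 0.6 * temp_c

    print(speed_of_sound_air(0))    # 331.0 m/s
    print(speed_of_sound_air(20))   # 343.0 m/s, as in the text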

Acoustics and noise


The scientific study of the propagation, absorption, and reflection of sound waves is called acoustics. Noise is a
term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component
that obscures a wanted signal.

Sound pressure level

Sound measurements:
Sound pressure (p)
Particle velocity (v)
Particle velocity level (SVL)
Particle displacement
Sound intensity (I)
Sound intensity level (SIL)
Sound power (Pac)
Sound power level (SWL)
Sound energy density (E)
Sound energy flux (q)
Surface (S)
Acoustic impedance (Z)
Speed of sound (c)
Sound pressure is defined as the difference between the average local pressure of the medium outside the
sound wave (at a given point and a given time) and the pressure found within the sound wave itself at that same
point in the medium. The square of this difference (i.e. the square of the deviation from the equilibrium pressure)
is usually averaged over time and/or space, and the square root of that average is taken to obtain a root mean
square (RMS) value. For example, a 1 Pa RMS sound pressure in atmospheric air implies that the actual pressure
in the sound wave oscillates between (1 atm − √2 Pa) and (1 atm + √2 Pa), that is, between about
101323.6 Pa and 101326.4 Pa. Such a tiny variation in air pressure (relative to atmospheric) at an audio frequency
is perceived as quite a deafening sound, and can cause hearing damage.
As the human ear can detect sounds with a very wide range of amplitudes, sound pressure is often measured as
a level on a logarithmic decibel scale. The sound pressure level (SPL) or Lp is defined as

Lp = 20 log10(p / pref) dB

where p is the root-mean-square sound pressure and pref is a reference sound pressure. Commonly used
reference sound pressures, defined in the standard ANSI S1.1-1994, are 20 µPa in air and 1 µPa in water.
Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound
pressure level.
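
To make the 1 Pa example concrete, the sketch below relates the RMS value of a sinusoidal sound pressure to its peak excursion around atmospheric pressure and to its level in dB re 20 µPa (the variable names are illustrative):

    import math

    ATM = 101325.0   # standard atmospheric pressure, Pa
    P_REF = 20e-6    # reference sound pressure in air, Pa

    p_rms = 1.0                    # 1 Pa RMS, as in the example above
    p_peak = math.sqrt(2) * p_rms  # peak amplitude of a sine with this RMS value

    print(ATM - p_peak, ATM + p_peak)      # about 101323.6 Pa to 101326.4 Pa
    print(20 * math.log10(p_rms / P_REF))  # about 94 dB SPL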
Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that
the measured level will match perceived levels more closely. The International Electrotechnical Commission
(IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to
noise and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.

Sound synthesis
In music technology, sound synthesis is the process of generating sound from analogue and digital electronic
equipment, often for musical, artistic or entertainment purposes. In particular, it refers to the process of
generating, combining or mixing sounds from a set of fundamental building blocks or routines in order to create
sounds of a greater complexity and richness. Sound synthesis can be used to mimic acoustic sound sources or
generate sound that may be impossible to realise naturally. Since its development in the first half of the 20th
century, it has found applications in music, computer science, film, acoustics and even biology.

Introduction
When a mechanical collision occurs, sound is produced. The energy from the collision is transferred through
the air as sound waves, which are perceived by the human auditory system. Sound waves are the aggregate of
one or many periodic vibrations, described mathematically by sine waves. The characteristics of a sound,
known as pitch and timbre, are defined by the amplitude and frequency of each individual sine wave, collectively
known as the partials or harmonics. Generally, a sound that does not change over time will include a fundamental
partial or harmonic, and any number of higher partials. Traditionally, the aim of generating sounds via
synthesis is to mimic the amplitude and frequency of the partials in an acoustic sound source, effectively
creating a mathematical model for the sound.
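
Following this idea of a sound as a sum of partials, a minimal additive sketch in Python (the sample rate and the particular partial frequencies and amplitudes are arbitrary illustration values):

    import numpy as np

    fs = 44100                      # sample rate (assumed)
    t = np.arange(0, 1.0, 1 / fs)   # one second of audio

    # (frequency in Hz, amplitude) for a fundamental and a few higher partials
    partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25), (880.0, 0.125)]

    tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
    tone /= np.max(np.abs(tone))    # normalise to avoid clipping on playback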

Synthesis
When natural tonal instruments' sounds are analyzed in the frequency domain (as on a spectrum analyzer), the
spectra of their sounds will exhibit amplitude spikes at each of the fundamental tone's harmonics. Some

harmonics may have higher amplitudes than others. The specific set of harmonic-vs-amplitude pairs is known as
a sound's harmonic content.
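
These harmonic-vs-amplitude pairs can be read off a discrete Fourier transform of a sampled tone. A rough sketch, using an arbitrary two-partial test signal rather than a real instrument recording:

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

    amps = np.abs(np.fft.rfft(x)) * 2 / len(x)   # amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), 1 / fs)

    top = np.argsort(amps)[-2:]                  # the two largest spectral spikes
    print(sorted(zip(freqs[top], amps[top])))    # about (220 Hz, 1.0) and (440 Hz, 0.3)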
When analyzed in the time domain, a sound does not necessarily have the same harmonic content throughout
the duration of the sound. Typically, high-frequency harmonics will die out more quickly than the lower
harmonics. For a synthesized sound to "sound" right, it requires accurate reproduction of the original sound in
both the frequency domain and the time domain. Percussion instruments and rasps have very low harmonic
content, and exhibit spectra composed mainly of noise shaped by the resonant frequencies of the
structures that produce the sounds. However, the resonant properties of the instruments (the spectral peaks of
which are also referred to as formants) also shape an instrument's spectrum (especially in string, wind, voice and other
natural instruments). In most conventional synthesizers, for purposes of re-synthesis, recordings of real
instruments are decomposed into several components.
These component sounds represent the acoustic responses of different parts of the instrument, the sounds
produced by the instrument during different parts of a performance, or the behavior of the instrument under
different playing conditions (pitch, intensity of playing, fingering, etc.). The distinctive timbre, intonation and
attack of a real instrument can therefore be created by mixing together these components in a way that
resembles the natural behavior of the real instrument. Nomenclature varies by synthesizer methodology and
manufacturer, but the components are often referred to as oscillators or partials. A higher-fidelity reproduction
of a natural instrument can typically be achieved using more oscillators, but more computational power and
human programming are required, and most synthesizers use between one and four oscillators by default.

Amplitude Envelope
One of the major characteristics of a sound is how its overall amplitude varies over time. Sound synthesis
techniques often employ a transfer function called an amplitude envelope which describes the amplitude at any
point in its duration. Most often, this amplitude profile is realized with an "ADSR" (Attack Decay Sustain
Release) envelope model, which is applied to an overall amplitude control. Apart from Sustain, each of these
stages is modeled by a change in volume (typically exponential). Although the oscillations in real instruments
also change frequency, most instruments can be modeled well without this refinement.
Attack time is the time taken for initial run-up of the sound level from nil to its peak amplitude. Decay time is
the time taken for the subsequent run down from the attack level to the designated sustain level. Sustain level is
the amplitude of the sound during the main sequence of its duration. Release time is the time taken for the
sound to decay from the sustain level to zero.
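
A minimal ADSR sketch follows; the segment times and sustain level are illustrative, and linear segments are used for brevity even though, as noted above, real envelopes are typically exponential:

    import numpy as np

    def adsr(attack, decay, sustain_level, sustain_time, release, fs=44100):
        """Piecewise-linear ADSR amplitude envelope, returned as a NumPy array."""
        a = np.linspace(0.0, 1.0, int(attack * fs))             # attack: 0 -> peak
        d = np.linspace(1.0, sustain_level, int(decay * fs))    # decay: peak -> sustain
        s = np.full(int(sustain_time * fs), sustain_level)      # sustain: held level
        r = np.linspace(sustain_level, 0.0, int(release * fs))  # release: sustain -> 0
        return np.concatenate([a, d, s, r])

    env = adsr(attack=0.01, decay=0.1, sustain_level=0.7, sustain_time=0.5, release=0.3)
    # multiply an oscillator signal of the same length by env to shape its loudness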

Overview of popular synthesis methods


There is a range of different sound synthesis techniques, each with its own strengths and weaknesses. The
most popular ones are outlined below:

Subtractive synthesis
Additive synthesis
Granular synthesis
Wavetable synthesis
Frequency modulation synthesis

Phase distortion synthesis


Physical modeling synthesis
Sample-based synthesis
Sub-harmonic synthesis

Subtractive synthesizers use a simple acoustic model that assumes an instrument can be approximated by a
simple signal generator (producing sawtooth waves, square waves, etc.) followed by a filter which represents
the frequency-dependent losses and resonances in the instrument body. For reasons of simplicity and economy,
these filters are typically low-order low-pass filters. The combination of simple modulation routings (such as
pulse width modulation and oscillator sync), along with the physically unrealistic low-pass filters, is responsible
for the "classic synthesizer" sound commonly associated with "analog synthesis" and often mistakenly used
when referring to software synthesizers using subtractive synthesis. Although physical modeling synthesis,
synthesis wherein the sound is generated according to the physics of the instrument, has superseded subtractive
synthesis for accurately reproducing natural instrument timbres, the subtractive synthesis paradigm is still
ubiquitous in synthesizers with most modern designs still offering low-order low-pass or band-pass filters
following the oscillator stage.
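
A toy illustration of this oscillator-plus-low-pass-filter structure (all parameter values below are arbitrary choices, not a description of any particular synthesizer):

    import numpy as np
    from scipy import signal

    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)

    # simple signal generator: a 110 Hz sawtooth wave, rich in harmonics
    saw = signal.sawtooth(2 * np.pi * 110 * t)

    # low-order low-pass filter standing in for the instrument body
    b, a = signal.butter(2, 1000 / (fs / 2), btype="low")  # 2nd order, 1 kHz cutoff
    filtered = signal.lfilter(b, a, saw)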
One of the newer approaches in music synthesis is physical modeling. This involves building
models of the components of musical objects and creating systems which define the action, filters, envelopes and other
parameters over time. The range of such instruments is virtually limitless, as one can combine any of the available
models with any number of modulation sources for pitch, frequency and contour. For
example, one could model a violin with the characteristics of a pedal steel guitar and perhaps the action of a piano
hammer. Physical modeling on computers becomes better and faster with greater processing power.
One of the easiest synthesis systems is to record a real instrument as a digitized waveform, and then play back
its recordings at different speeds to produce different tones. This is the technique used in "sampling". Most
samplers designate a part of the sample for each component of the ADSR envelope, and then repeat that section
while changing the volume for that segment of the envelope. This lets the sampler produce a convincingly different
envelope using the same recorded note.
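
The play-back-at-a-different-speed idea can be sketched by resampling a recorded waveform; the 2**(semitones/12) factor and the linear interpolation below are illustrative choices, not a description of any particular sampler:

    import numpy as np

    def repitch_by_resampling(sample, semitones):
        """Play a recorded waveform back faster or slower to change its pitch.
        Speeding playback up by a factor r raises the pitch by r (and shortens the note)."""
        rate = 2 ** (semitones / 12.0)                   # playback speed factor
        positions = np.arange(0, len(sample) - 1, rate)  # read positions in the original
        return np.interp(positions, np.arange(len(sample)), sample)

    fs = 44100
    note = np.sin(2 * np.pi * 220 * np.arange(0, 1.0, 1 / fs))  # a stand-in "recording"
    fifth_up = repitch_by_resampling(note, 7)  # seven semitones higher, about 330 Hz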
