
Chapter 6

Digital Audio Technology


Objectives
Understand the physical and mathematical basis of
sound waves
Recognize the amplitude, frequency, and phase
properties of sound
Apply the three-step process for audio digitization
Understand and apply the Nyquist Sampling
Theorem
Objectives (continued)
Recognize the need for digital audio compression
Explain how music files can be compressed without
significant sound quality degradation
Understand the difference between lossy and lossless
compression techniques
Become familiar with popular digital audio formats
like MP3, WMA, WAV, AAC, and AIFF
Recording Sound
The phonograph, invented by Thomas Edison in the late
1800s, was the first device created for sound recording and
playback
The tinfoil-covered cylinders of the early phonograph were
ultimately replaced by vinyl records
Another early analog recording technique from the late 1800s
relied on magnetism and electricity
The magnetic recording approach was adopted by the music
industry, and magnetic tapes replaced metallic wires as a
recording medium
This type of recording technique was later used to store music
on audiocassettes, but its popularity diminished in the wake of
CDs, flash drives, and alternative storage media

Recording Sound (continued)
Creating Sound
Sound is caused by physical disturbances of air
molecules
This transfer of energy among molecules creates a
mechanical wave of energy called a sound wave,
which propagates away from the source of the
disturbance
The creation of sound waves is usually compared to
the formation of water waves



Information Technology in Theory
Creating Sound (continued)
Converting Between Sound and
Electricity
To digitize sound, electrical systems must convert
mechanical sound waves into electrical sound
waves; in other words, into an electrical audio
signal
Microphones capture sound waves and convert them
into an electrical form, while speakers convert the
electrical signal back into sound waves
The electrical fluctuations correspond to the pressure
fluctuations of the sound wave




Converting Between Sound and
Electricity (continued)
This electrical signal can then be converted into a
stream of bits by passing through an audio digitizer
Such digitizers are found in sound cards embedded
within computers and in chips within cellular phones
To understand the process of audio digitization, you
must understand some additional properties of sound

Converting Between Sound and
Electricity (continued)
Converting Between Sound and
Electricity (continued)
Pure and Complex Sounds
Sound may be classified as either pure sound or
complex sound
Almost all the sounds we hear, from human speech to
music to a barking dog, are classified as complex
sounds
One example of a nearly pure sound is the tone
produced by a tuning fork
A pure sound is a signal that varies in a sinusoidal
manner


Pure and Complex Sounds (continued)
By contrast, a complex sound captured by a
microphone fluctuates in a nonsinusoidal and
irregular manner
Although most sound is classified as complex, we
will first discuss properties of pure sounds and then
address complex sound properties

Pure and Complex Sounds (continued)
Amplitude, Frequency, and Phase
Amplitude (A) is the magnitude of the signal at a
given instant in time (t)
Period (T) is the time a wave requires to complete a
single cycle; it is measured in seconds (s)



Amplitude, Frequency, and Phase
(continued)
Frequency (f), measured in hertz (Hz), is the number
of cycles a wave completes in one second (s)
Phase difference describes the alignment of two
waves along the time axis and may be measured in
degrees
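The three properties combine in the standard expression for a pure tone, A · sin(2πft + φ). A minimal Python sketch (the function name and the 440 Hz example are illustrative additions, not from the slides):

```python
import math

def pure_tone(amplitude, frequency_hz, t_seconds, phase_deg=0.0):
    """Instantaneous value of a pure (sinusoidal) tone: A * sin(2*pi*f*t + phase)."""
    phase_rad = math.radians(phase_deg)
    return amplitude * math.sin(2 * math.pi * frequency_hz * t_seconds + phase_rad)

# Period T is the reciprocal of frequency: a 440 Hz tone repeats every 1/440 s.
period = 1.0 / 440.0
peak = pure_tone(1.0, 440.0, period / 4)     # a quarter period in: the wave is at +A
shifted = pure_tone(1.0, 440.0, 0.0, 90.0)   # a 90-degree phase shift starts at +A
```

Note how a 90-degree phase difference moves the waveform a quarter period along the time axis without changing its amplitude or frequency.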

Amplitude, Frequency, and Phase
(continued)
Amplitude, Frequency, and Phase
(continued)
Amplitude, Frequency, and Phase
(continued)
kilohertz (kHz): 10^3 Hz = 1000 Hz (thousand)
megahertz (MHz): 10^6 Hz = 1,000,000 Hz (million)
gigahertz (GHz): 10^9 Hz = 1,000,000,000 Hz (billion)
millisecond (ms): 10^-3 s = 0.001 second (1/1000 second)
microsecond (µs): 10^-6 s = 0.000001 second (1/1,000,000 second)
nanosecond (ns): 10^-9 s = 0.000000001 second (1/1,000,000,000 second)
Amplitude, Frequency, and Phase
(continued)
Amplitude, Frequency, and Phase
(continued)
Amplitude, Frequency, and Phase
(continued)
Frequency Composition of Sound
Pure sounds, in simple terms, can be considered the
basic components that make up complex sounds
Each pure sound component can differ in terms of
frequency, amplitude ranges, and phase differences
A spectrum analyzer separates frequency components
and displays this information on a graph called a
frequency spectrum, with vertical spikes indicating
the frequency components
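What a spectrum analyzer displays can be approximated in a few lines: a naive discrete Fourier transform recovers the frequency spikes of a signal built from two pure components (the function name and the test frequencies below are illustrative assumptions):

```python
import math

def dft_magnitudes(samples, sample_rate):
    """Naive discrete Fourier transform: returns (frequency, amplitude)
    for each bin from 0 Hz up to half the sampling rate."""
    n = len(samples)
    result = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        result.append((k * sample_rate / n, 2 * math.hypot(re, im) / n))
    return result

# A complex sound built from two pure components: 100 Hz (amplitude 1.0)
# and 300 Hz (amplitude 0.5), sampled at 1000 Hz.
rate, n = 1000, 200
signal = [math.sin(2 * math.pi * 100 * i / rate)
          + 0.5 * math.sin(2 * math.pi * 300 * i / rate) for i in range(n)]
peaks = [f for f, mag in dft_magnitudes(signal, rate) if mag > 0.25]
```

The resulting frequency spectrum is nearly zero everywhere except for spikes at 100 Hz and 300 Hz, exactly the vertical spikes a spectrum analyzer would show.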


Frequency Composition of Sound
(continued)
Frequency Composition of Sound
(continued)
Digitizing Sound
The digitization process takes place in circuits called
analog-to-digital converters (ADCs)
Digital cellular phones have ADCs that digitize the
audio signal generated by their microphones
Sound cards and cellular phones contain digital-to-
analog converters (DACs), circuitry that converts
streams of ones and zeros into a corresponding
electrical audio signal

Digitizing Sound (continued)
Three-Step Process of Digitization
Sampling
Quantizing
Encoding
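A toy end-to-end sketch of the three steps, assuming the analog signal is given as a function of time and quantized over a fixed voltage range (all names and parameters are illustrative, not a real ADC implementation):

```python
import math

def digitize(signal, duration_s, sample_rate, bits, v_min, v_max):
    """Toy ADC: sample an analog signal (given as a function of time),
    quantize each sample to one of 2**bits levels across [v_min, v_max],
    and encode each level as a fixed-width binary string."""
    levels = 2 ** bits
    step = (v_max - v_min) / (levels - 1)
    codes = []
    for i in range(int(duration_s * sample_rate)):
        sample = signal(i / sample_rate)                  # step 1: sampling
        level = round((sample - v_min) / step)            # step 2: quantizing
        level = max(0, min(levels - 1, level))
        codes.append(format(level, "0{}b".format(bits)))  # step 3: encoding
    return codes

# One cycle of a 1 Hz sine wave, sampled at 8 Hz with 3 bits per sample.
codes = digitize(lambda t: math.sin(2 * math.pi * t), 1.0, 8, 3, -1.0, 1.0)
```

The output is a stream of eight 3-bit codewords; the sine peak maps to the top level and the trough to the bottom level.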
Three-Step Process of Digitization
(continued)
Sampling
Quantizing
Quantizing (continued)
How do we calculate the available voltage values?
Two parameters are required:
Audio signal dynamic range
Number of bits we are willing to use per sample
2^(number of bits) = number of voltage values that can be
represented
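The formula can be checked directly; a small sketch (dividing the dynamic range into 2^bits − 1 equal steps is one common convention, assumed here):

```python
def quantization_levels(bits):
    """Number of distinct voltage values representable with the given bits per sample."""
    return 2 ** bits

def step_size(v_min, v_max, bits):
    """Voltage spacing between adjacent quantization levels, assuming the
    dynamic range [v_min, v_max] is split into 2**bits - 1 equal intervals."""
    return (v_max - v_min) / (quantization_levels(bits) - 1)

# 8 bits -> 256 values; 16 bits (CD quality) -> 65,536 values.
levels_8 = quantization_levels(8)
levels_16 = quantization_levels(16)
```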

Encoding
Nyquist Sampling Theorem
How do you know how often to take samples so that
they adequately represent the original signal?


Nyquist Sampling Theorem
(continued)
The minimum number of samples per second (the
sampling frequency, or f_s) required to perfectly
reconstruct the analog signal should equal at least
twice the difference between the signal's highest
frequency component (f_max) and lowest frequency
component (f_min)
The theorem is represented by the following
equations:
f_s ≥ 2(f_max - f_min)
f_s ≥ 2B
where B = f_max - f_min is the signal's bandwidth

Standard Sampling Rates
In the public switched telephone network and cellular
systems, analog voice signals are sampled at a rate of
8000 Hz, or 8000 samples per second
The standard is to assign 8 bits per sample
CD quality music is sampled at a rate of 44.1 kHz
The standard is to assign 16 bits per sample
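These two standards imply the following uncompressed bit rates (the two-channel count for stereo CD audio is an added assumption, not stated on the slide):

```python
def bit_rate_bps(samples_per_second, bits_per_sample, channels=1):
    """Uncompressed bit rate: sampling rate x bits per sample x channels."""
    return samples_per_second * bits_per_sample * channels

telephone = bit_rate_bps(8000, 8)          # 64,000 bits per second
cd_stereo = bit_rate_bps(44_100, 16, 2)    # 1,411,200 bits per second (~1.4 Mbps)
```

The gap between 64 Kbps telephone voice and roughly 1.4 Mbps CD audio illustrates why music files need compression far more urgently than voice calls do.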

Quantization Error
Quantization error is the difference between the
actual value of the sample and the value to which the
sample is rounded off
It can be reduced by assigning more bits per sample
during the ADC process
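Since samples are rounded to the nearest level, the worst-case error is half a quantization step, so each extra bit roughly halves it; a small sketch (assuming the range is split into 2^bits − 1 equal steps):

```python
def max_quantization_error(v_min, v_max, bits):
    """Worst-case rounding error: half the spacing between adjacent voltage
    levels, assuming 2**bits - 1 equal steps across [v_min, v_max]."""
    step = (v_max - v_min) / (2 ** bits - 1)
    return step / 2

# Adding bits shrinks the error: 16 bits per sample gives a far finer
# grid of voltage values than 8 bits over the same dynamic range.
err_8bit = max_quantization_error(-1.0, 1.0, 8)
err_16bit = max_quantization_error(-1.0, 1.0, 16)
```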

Digital-to-Analog Conversion of Audio
The recovery phase takes place in the DAC, which is
present in any device associated with digital audio,
such as an iPod, CD or DVD player, PlayStation
Portable (PSP), or a cellular phone
If a signal is sampled at a rate lower than the
Nyquist rate during digitization, the reconstructed
signal is said to undergo aliasing, in which high-
frequency components reappear as false lower frequencies
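The apparent frequency of an undersampled tone follows a simple folding rule; a sketch (the formula and example frequencies are illustrative, not from the slides):

```python
def aliased_frequency(f_signal_hz, f_sample_hz):
    """Apparent frequency of a sampled tone: the sampled points are
    indistinguishable from a tone folded into the 0 .. f_sample/2 band."""
    remainder = f_signal_hz % f_sample_hz
    return min(remainder, f_sample_hz - remainder)

# A 6000 Hz tone sampled at only 8000 Hz (its Nyquist rate would be
# 12,000 Hz) is reconstructed as a 2000 Hz tone; a 3000 Hz tone,
# safely below half the sampling rate, comes back unchanged.
alias = aliased_frequency(6000, 8000)
```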

Digital-to-Analog Conversion of Audio
(continued)
Digital Audio Compression
Compression significantly reduces the size of digital
audio files using techniques that are transparent to
users
Without compression, users would quickly run out of
space on their hard disks and flash drives and be
frustrated by time-consuming file downloads


Audio Compression Requirements
Enablers of Compression
The limitations of human abilities, such as hearing,
enable the compression of audio information
Compression techniques fall into two categories:
lossy compression and lossless compression

Enablers of Compression (continued)
Lossy digital audio formats compress audio by
eliminating some information, though not enough for
the loss to be detectable by human hearing
The process of compressing information in
consideration of the limitations of human senses is
called perceptual coding

Simultaneous Masking
Popular Digital Audio Formats
Moving Picture Experts Group Audio Layer-3 (MP3)
Advanced Audio Coding (AAC)
Windows Media Audio (WMA)
Waveform Audio File Format (WAV)
Audio Interchange File Format (AIFF)

Example of Digital Audio Storage
Problem: Confirm that 20,000 compressed songs can be stored on an iPod
that has a hard-disk capacity of 80 GB. Assume an average of 4 minutes per
song and that the iPod encodes the songs using 128-Kbps AAC formatting.


80 GB = 80 × 2^30 bytes × 8 bits per byte = 687,194,767,360 bits

687,194,767,360 bits/128,000 bits per second = 5,368,709.12 seconds

5,368,709.12 seconds/60 seconds per minute = 89,478.4853 minutes of
digital music

89,478.4853 minutes/4 minutes per song = 22,369.621 songs

This number confirms the iPod's advertised capacity of 20,000
songs
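The same arithmetic as a short sketch (the function name is illustrative):

```python
def songs_that_fit(capacity_gb, bit_rate_bps, minutes_per_song):
    """Songs a player can hold, using 1 GB = 2**30 bytes = 8 * 2**30 bits."""
    capacity_bits = capacity_gb * 8 * 2 ** 30
    playback_seconds = capacity_bits / bit_rate_bps
    return playback_seconds / (minutes_per_song * 60)

# 80 GB at 128 Kbps and 4 minutes per song: about 22,370 songs,
# comfortably above the advertised 20,000.
songs = songs_that_fit(80, 128_000, 4)
```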

Summary


Sound waves are created by the mechanical disturbance of air
molecules
Sound waves can be captured electrically by a microphone as an
analog audio signal, which can be digitized using an analog-to-
digital converter
Audio digitization is a three-step process of sampling,
quantization, and encoding; there is an important trade-off
between the accuracy of this process and its associated cost
Sound is categorized as complex or pure
Complex sounds comprise frequency components that vary in
terms of their amplitude ranges, frequencies, and phase
differences
Summary (continued)


The minimum sampling frequency for digitizing an
audio signal is at least twice its bandwidth (for a
baseband signal, twice its highest frequency
component); aliasing occurs when the sampling rate
is less than the Nyquist rate
Digital audio files consume considerable storage
space and bandwidth, but they can be compressed by
exploiting the limitations of human hearing and the
built-in redundancy within information

Summary (continued)


The two general approaches to compressing
information are lossless compression and lossy
compression; some compression techniques use a
combination of these two approaches
In lossless compression, none of the original
information is lost
In lossy compression, some information is discarded,
such as frequency components that human listeners
are unlikely to perceive
Some common digital audio file formats include
MP3, AAC, WMA, WAV, and AIFF