
Digital Audio

The sound that travels through the air (also called audio) is analog
in nature and is a continuous waveform.
Bell Labs produced the first digital audio synthesis in the
1950s.
A computer needs to convert the analog sound wave into a
digital representation consisting of discrete numbers.
A microphone converts the sound waves into an electrical
signal.
This signal is then amplified, filtered, and sent to an analog-
to-digital converter. To output this data as sound, the
stream of data is sent to the speakers via a digital-to-
analog converter and a reconstruction filter, and the audio is
amplified. This produces the analog sound wave that we
hear.
Sampling
The audio input from a source is sampled several thousand
times per second. Each sample is a snapshot of the original
signal at a particular time.

Sampling rate
When sampling a sound, the computer processes snapshots
of the waveform. The frequency of these snapshots is called
the sampling rate. The rate typically varies from 5,000 to
90,000 samples per second.
Sampling rate is an important (though not the only) factor in
determining how accurately the digitized sound represents
the original analog sound.
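A minimal sketch, in Python, of taking snapshots of a sine wave; the function name, tone frequency, and sampling rate are illustrative values, not from the text.

```python
import math

def sample_sine(freq_hz, sample_rate_hz, duration_s):
    """Take `sample_rate_hz` snapshots per second of a sine tone."""
    num_samples = int(sample_rate_hz * duration_s)
    # Each sample is the waveform's amplitude at time n / sample_rate_hz.
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(num_samples)]

# Example: a 440 Hz tone sampled 8,000 times per second for 10 ms.
samples = sample_sine(440, 8000, 0.01)
print(len(samples))  # 80 snapshots of the original signal
```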
Digitization
Digitization is the process of assigning a discrete value
to each of the sampled values. It is performed by an
Integrated Chip (IC) called an A to D Converter. In the
case of 8-bit digitization, this value is between 0 and 255 (or
–128 and 127). In 16-bit digitization, this value is
between 0 and 65,535 (or –32,768 and 32,767).
The process of digitization introduces noise into the
digitized signal. This is related to the number of bits per
sample. A higher number of bits used to store the sampled
value leads to a more accurate sample, with less noise.
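A minimal sketch of the quantization step, assuming samples are normalized to the range -1.0 to 1.0 (the function names are illustrative): the same value is mapped to an 8-bit or a 16-bit integer, and the coarser 8-bit grid leaves a larger error, i.e. more noise.

```python
def quantize(sample, bits):
    """Map a sample in [-1.0, 1.0] to a signed integer with `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # 127 for 8-bit, 32767 for 16-bit
    return round(sample * levels)

def dequantize(value, bits):
    """Map the stored integer back to the [-1.0, 1.0] range."""
    levels = 2 ** (bits - 1) - 1
    return value / levels

x = 0.300001
for bits in (8, 16):
    error = abs(x - dequantize(quantize(x, bits), bits))
    print(bits, error)   # the 16-bit error is far smaller than the 8-bit error
```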
Fidelity
Fidelity is defined as the closeness of the recorded
version to the original sound. In the case of digital
speech, it depends upon the number of bits per sample
and the sampling rate. A really high-fidelity (hi-fi)
recording takes up a lot of memory space (176.4 KB for
every second of stereo-quality audio sampled at 16
bits, 44.1 kHz per channel). Fortunately, for most
computer multimedia applications, it is not necessary
to have very high fidelity sound.
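For example, stereo audio at 44.1 kHz with 16 bits (2 bytes) per sample works out to 44,100 x 2 x 2 = 176,400 bytes, roughly 176.4 KB, for each second of sound.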
NYQUIST THEOREM
The sampling frequency determines the limit of audio
frequencies that can be reproduced digitally.
According to the Nyquist theorem, a minimum of two
samples (per cycle) is necessary to represent a given sound
frequency; for example, a 440 Hz tone requires a minimum
rate of 880 samples per second.
Therefore, sampling rate = 2 x highest frequency.
If a sound is sampled below this rate, a distortion known as
"aliasing" occurs, and it cannot be removed by post-processing
the digitized audio signal. So frequencies above half the
sampling rate are filtered out prior to sampling to remove any
aliasing effects.
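A minimal sketch of aliasing, with assumed values: a 9 kHz tone sampled at only 8 kHz (well above half the sampling rate) produces exactly the same sample values as a 1 kHz tone, so the two are indistinguishable after sampling.

```python
import math

def sample(freq_hz, rate_hz, count):
    """Snapshot a sine tone `count` times at the given sampling rate."""
    return [round(math.sin(2 * math.pi * freq_hz * n / rate_hz), 6)
            for n in range(count)]

# A 9 kHz tone sampled at 8 kHz yields exactly the same samples as a
# 1 kHz tone: the high frequency "aliases" down and cannot be recovered
# afterwards, which is why it must be filtered out before sampling.
print(sample(9000, 8000, 16) == sample(1000, 8000, 16))  # True
```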
Sound formats and settings
Recording at high sampling rates produces a more
accurate capture of the high-frequency content of the
sound. Another aspect to consider is the "bit
resolution."
Stereo recordings are made by recording on two channels,
and are lifelike and realistic. Mono sounds are
less realistic, flat, and not as dramatic, but they have a
smaller file size.
Stereo sounds require twice the space compared to
mono recordings. To calculate the storage space
required, the following formulas are used:
Mono Recording:
File size = Sampling rate x duration of recording in seconds x
(bits per sample/8)x1
Stereo Recording:
File size = Sampling rate x duration of recording in seconds x
(bits per sample/8) x 2
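A minimal sketch of these formulas in Python (the function name is illustrative): it returns the file size in bytes for a mono (channels = 1) or stereo (channels = 2) recording.

```python
def audio_file_size(sample_rate_hz, duration_s, bits_per_sample, channels):
    """File size in bytes = rate x duration x (bits per sample / 8) x channels."""
    return sample_rate_hz * duration_s * (bits_per_sample / 8) * channels

# One minute of 16-bit stereo audio at 44.1 kHz:
print(audio_file_size(44100, 60, 16, 2))  # 10584000.0 bytes (about 10.6 MB)
```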
Sound formats are standard in most audio editing
software. Sampling rates of 8, 11, 22, and 44 kHz are
used most often.

Quality of Sound in a Telephone
Voice mail servers convert the analog voice and store it in
digital form. With the standards for voice mail file
formats and digital storage of sounds for computer
systems coming closer together, use of a computer
system to manage the phone system is a natural
extension of the user's desktop. The bandwidth of a
telephone conversation is 3,300 Hz; the frequency
ranges from 200 to 3,500 Hz. The signal, of course, is
inherently analog.
Quality of Sound in a CD
CD-ROMs have become the media of choice for music in
a very short period of time. The reasons are as follows:
• Ease of use and durability of the media
• Random access capability as compared to audiotapes
• Very high quality sound
• Large storage volume

CD-ROMs are becoming an important medium for
multimedia applications. The sampling rate is typically
44.1 kHz for each channel (left and right).
Criteria for selecting a particular audio quality
A higher-quality recording requires a larger volume of on-line
disk space to store the data, and it takes longer to transmit this
larger volume of data over a network.
Compression
An important aspect of communication is the transfer of data
from the creator to the recipient.
Compression in computer terms means reducing the
physical size of data such that it occupies less storage
space and memory. Compressed files are, therefore,
easier to transfer because there is a sizable
reduction in the size of the data to be transferred.
Compression Requirements
Processing data in a multimedia system leads to storage
requirements in the range of several megabytes.
Compression in multimedia systems is subject to
certain constraints.
Because many applications exchange multimedia data over
communication networks, compatibility of compression
methods is required. Standards like CCITT (the
International Consultative Committee for Telephone and
Telegraph), ISO (the International Standards
Organization), and MPEG (the Moving Picture Experts
Group) are used to achieve this compatibility.
Common Compression Methods
An array of compression techniques has been set by the CCITT, an
international organization that develops communication standards
known as "recommendations" for all digitally controlled forms of
communication.
There are two types of compression:
Lossless compression - Data are not altered or lost in the process of
compression or decompression. Decompression produces an exact replica of
the original object. This compression technique is used for text
documents, databases, and other text-related objects.
Lossy compression - Some data is lost during compression, but the loss is
such that the object looks more or less like the original. This method is
used where absolute data accuracy is not essential. Lossy compression is
the most commonly used compression type. This compression technique is
used for image documents, audio, and video objects.
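As a minimal illustration of lossless compression (a simple technique, not one of the standards named above), the run-length encoding sketch below stores repeated values as (value, count) pairs; decompression reproduces the original data exactly.

```python
def rle_compress(data):
    """Run-length encode: collapse runs of repeated values into (value, count)."""
    out = []
    for value in data:
        if out and out[-1][0] == value:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([value, 1])   # start a new run
    return out

def rle_decompress(pairs):
    """Expand the (value, count) pairs back into the original sequence."""
    return [value for value, count in pairs for _ in range(count)]

original = "AAAABBBCCD"
packed = rle_compress(original)                        # [['A',4],['B',3],['C',2],['D',1]]
assert "".join(rle_decompress(packed)) == original     # exact replica: lossless
```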
Schemes of Audio Compression
ADPCM (Adaptive Differential Pulse Code
Modulation) - This is a family of speech
compression and decompression algorithms.
Using this technique, one can achieve about
40-80% compression.
MPEG (Moving Picture Experts Group) - Using
MPEG audio coding, you can compress the
original audio on a CD by a factor of 12
without a perceptible loss of sound quality.
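The core idea behind differential coding schemes such as ADPCM can be sketched with plain delta encoding (a deliberate simplification, not the full adaptive algorithm): instead of storing each sample, store its difference from the previous sample, which is usually a much smaller number and therefore cheaper to encode.

```python
def delta_encode(samples):
    """Store each sample as its difference from the previous sample."""
    encoded, previous = [], 0
    for s in samples:
        encoded.append(s - previous)
        previous = s
    return encoded

def delta_decode(deltas):
    """Rebuild the original samples by accumulating the differences."""
    samples, current = [], 0
    for d in deltas:
        current += d
        samples.append(current)
    return samples

samples = [1000, 1004, 1009, 1011, 1010]
deltas = delta_encode(samples)            # [1000, 4, 5, 2, -1] -- small values
assert delta_decode(deltas) == samples
```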
Audio File Formats
Common Audio File Formats
Game applications, on the other hand, made use of
the PC speaker to produce sounds of varying
frequencies to create good sound effects. Each
format has made its own mark in the field of
multimedia across various platforms. Audio is
one of the most important components of any
multimedia system.
