VoIP and Digital TV Testing

For years, cable operators have been diligently testing their networks to ensure signal quality and to comply with national signal leakage guidelines. But as the industry adds new services to its repertoire, testing the network becomes simultaneously more difficult and more important. The addition of digital video, DOCSIS data channels and, now, voice-over-IP service has ushered in a new list of parameters that have to be monitored, analyzed and even adjusted to ensure that customers are getting what they pay for.
The goal of this chart is to explain several new, emerging testing concepts that relate to voice-over-IP, DOCSIS data and digital video.

MER AND BER
When it comes to testing digital QAM signals, testing experts suggest that cable network operators use their digital video analyzers to test both Modulation Error Ratio (MER) and Bit Error Rate (BER). That's because MER and BER measurements detect different types of impairments.
MER is the ratio, expressed in dB, of the average symbol power to the RMS error power. The higher the error power, the poorer the MER. MER essentially assigns a value to the fuzziness of the symbol cluster (see Figure 1). So, the larger or fuzzier the cluster becomes, the poorer the MER. Likewise, the farther the dots move from their ideal locations, the poorer the MER.
For example, Figure 2 shows a constellation with a good MER of 34 dB, while Figure 3 shows a constellation with a poor MER of 21 dB.
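Because MER is simply this power ratio expressed in dB, it can be computed directly from I/Q samples. The following sketch (Python; the symbol values are invented for illustration and are not from the chart) shows the calculation:

```python
# Minimal sketch: MER computed from ideal constellation points and the
# received (demodulated) I/Q samples. Symbol values are illustrative.
import math

def mer_db(ideal, received):
    """Average symbol power over RMS error power, expressed in dB."""
    signal_power = sum(abs(s) ** 2 for s in ideal)
    error_power = sum(abs(r - s) ** 2 for r, s in zip(received, ideal))
    return 10 * math.log10(signal_power / error_power)

# Three 64-QAM symbols with a little I/Q "fuzz" on each.
ideal = [complex(3, 5), complex(-1, -7), complex(5, 1)]
received = [complex(3.1, 4.9), complex(-1.2, -6.9), complex(4.9, 1.1)]
print(f"MER = {mer_db(ideal, received):.1f} dB")   # ~31 dB here
```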
Each symbol, or dot, on the constellation is framed by decision boundaries (see Figure 4). As long as the carrier falls inside the boundaries, the information is transmitted without errors. In such a case, BER testing is not an effective measurement because the BER is perfect. But that good news could be hiding problems. Using MER instead, it is clear that while each of the constellations in Figures 5, 6 and 7 has a perfect BER, the constellation in Figure 7 has a much better MER, with less noise.
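To see why a perfect BER can hide noise, consider how a receiver makes hard decisions. The sketch below (an illustration, not the chart's method) snaps each received sample to the nearest ideal 64-QAM point on the odd-integer grid shown in the diagrams; no errors occur as long as every sample stays inside its decision boundaries, however fuzzy the cluster:

```python
# Hard-decision "slicer" for a 64-QAM grid with ideal points at odd
# integers -7..7 on each axis, as in the constellation diagrams.
def slice_axis(value):
    # Snap one axis to the nearest odd integer, clamped to [-7, 7].
    level = 2 * round((value - 1) / 2) + 1
    return max(-7, min(7, level))

def symbol_errors(ideal, received):
    errors = 0
    for sent, sample in zip(ideal, received):
        decided = complex(slice_axis(sample.real), slice_axis(sample.imag))
        if decided != sent:
            errors += 1
    return errors

# Noticeably fuzzy samples that still sit inside their boundaries:
ideal = [complex(3, 5), complex(-1, -7), complex(5, 1)]
received = [complex(3.6, 4.5), complex(-1.4, -6.6), complex(5.3, 1.4)]
print(symbol_errors(ideal, received))   # -> 0: perfect BER, degraded MER
```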
So, why measure BER? Because MER is a poor indicator of fast, intermittent transients. Examples of these types of impairments include laser clipping (the most common cause), loose or corroded connections, sweep system interference and microphonics. So, if you have a high MER but errors are present, they are probably being caused by intermittent interference. This shows up on a constellation diagram as a lone dot away from the main cluster.
[Figure 1. Symbol cluster on a constellation diagram]
[Figure 2. Constellation with good MER (34 dB)]
[Figure 3. Constellation with poor MER (21 dB)]
[Figure 4. Decision boundaries: correct locations fall within the decision boundaries; locations in error fall outside them]
[Figure 5. Constellation with good MER and perfect BER]
[Figure 6. Constellation with poor MER and perfect BER]
[Figure 7. Constellation with the best MER and perfect BER]
HOW GOOD SHOULD IT BE?

Targeted performance goals
The charts below outline performance goals for a typical network. Specific system requirements may call for tighter, or tolerate more relaxed, performance.
Scientific notation
BER (bit error rate) measurements are expressed as the number of errored bits divided by the total number of bits transmitted or received. Since the number of errors is very small compared to the number of bits transmitted, the measurement is typically expressed in scientific notation. For example, one error out of one million bits would be expressed as 1/1,000,000, or 1.0E-06.
Confusion often arises when a second measurement is compared. Is 7.0E-07 better or worse? 7.0E-07 means seven errors out of ten million bits, which is actually a little better than one in one million. The chart below may be helpful in interpreting scientific notation.
One important note: many instruments will read 0 (zero) or 0.0E-00 when no errors have been detected. The exponent E-00 by itself means "times 10 to the zero power," which equals 1, but the zero mantissa makes the whole measurement equal to zero.
SCIENTIFIC NOTATION

1.00E+00   1/1                   One
1.00E-01   1/10                  One in Ten
1.00E-02   1/100                 One in One Hundred
1.00E-03   1/1,000               One in One Thousand
1.00E-04   1/10,000              One in Ten Thousand
1.00E-05   1/100,000             One in One Hundred Thousand
1.00E-06   1/1,000,000           One in One Million
1.00E-07   1/10,000,000          One in Ten Million
1.00E-08   1/100,000,000         One in One Hundred Million
1.00E-09   1/1,000,000,000       One in One Billion
1.00E-10   1/10,000,000,000      One in Ten Billion
1.00E-11   1/100,000,000,000     One in One Hundred Billion
1.00E-12   1/1,000,000,000,000   One in One Trillion
0.00E-00   0 x 1                 Zero (no errors)
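A short sketch (illustrative values only) shows how a raw error count turns into these scientific-notation readings and why 7.0E-07 beats 1.0E-06:

```python
# Expressing an error count as a BER and comparing two readings.
errored_bits = 7
total_bits = 10_000_000
ber = errored_bits / total_bits
print(f"BER = {ber:.1E}")      # -> 7.0E-07

# Seven errors in ten million bits is slightly better (smaller) than
# one error in one million bits.
print(7.0e-07 < 1.0e-06)       # -> True

# The instrument quirk noted above: a zero mantissa makes the whole
# reading zero, even though 10**0 by itself equals 1.
print(0.0e-00 == 0.0)          # -> True
```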
Expected MER & BER results

Digital data (headend to cable modem)

Test point   Rating       MER, 64 QAM   MER, 256 QAM   Pre-FEC BER   Post-FEC BER
Headend      Excellent    35 dB         35 dB          0.0E+00       0.0E+00
Headend      Acceptable   34 dB         35 dB          0.0E+00       0.0E+00
Headend      Marginal     32 dB         34 dB          1.0E-08       1.0E-09
Node         Excellent    35 dB         35 dB          0.0E+00       0.0E+00
Node         Acceptable   33 dB         34 dB          1.0E-09       0.0E+00
Node         Marginal     30 dB         32 dB          1.0E-08       1.0E-09
Amp          Excellent    33 dB         35 dB          1.0E-09       0.0E+00
Amp          Acceptable   31 dB         33 dB          1.0E-08       0.0E+00
Amp          Marginal     28 dB         30 dB          1.0E-07       1.0E-09
Tap          Excellent    33 dB         35 dB          1.0E-08       0.0E+00
Tap          Acceptable   29 dB         32 dB          1.0E-07       1.0E-09
Tap          Marginal     25 dB         30 dB          1.0E-06       1.0E-08
Modem        Excellent    32 dB         35 dB          1.0E-08       0.0E+00
Modem        Acceptable   28 dB         32 dB          1.0E-07       1.0E-08
Modem        Marginal     25 dB         28 dB          1.0E-06       1.0E-07

Digital video (headend to set-top)

Test point   Rating       MER, 64 QAM   MER, 256 QAM   Pre-FEC BER   Post-FEC BER
Headend      Excellent    35 dB         35 dB          0.0E+00       0.0E+00
Headend      Acceptable   33 dB         35 dB          1.0E-08       0.0E+00
Headend      Marginal     30 dB         32 dB          1.0E-07       1.0E-08
Node         Excellent    34 dB         35 dB          0.0E+00       0.0E+00
Node         Acceptable   31 dB         34 dB          1.0E-08       0.0E+00
Node         Marginal     28 dB         30 dB          1.0E-07       1.0E-08
Amp          Excellent    33 dB         35 dB          1.0E-09       0.0E+00
Amp          Acceptable   30 dB         32 dB          1.0E-08       1.0E-09
Amp          Marginal     25 dB         27 dB          1.0E-07       1.0E-08
Tap          Excellent    32 dB         35 dB          1.0E-08       0.0E+00
Tap          Acceptable   28 dB         31 dB          1.0E-07       1.0E-09
Tap          Marginal     24 dB         28 dB          1.0E-06       1.0E-08
Set-top      Excellent    32 dB         35 dB          1.0E-08       0.0E+00
Set-top      Acceptable   27 dB         31 dB          1.0E-07       1.0E-08
Set-top      Marginal     23 dB         27 dB          1.0E-06       1.0E-07
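As a hedged illustration of how these goals might be applied in a test script (the function and its defaults are hypothetical, not part of the chart), a measured MER can be mapped to the chart's ratings; the defaults below use the digital video, 64 QAM, set-top row, so other test points need their own thresholds:

```python
# Hypothetical helper: rate a measured MER against chart-style goals.
# Defaults follow the digital video / 64 QAM / set-top row above.
def rate_mer(mer_db, excellent=32.0, acceptable=27.0, marginal=23.0):
    if mer_db >= excellent:
        return "Excellent"
    if mer_db >= acceptable:
        return "Acceptable"
    if mer_db >= marginal:
        return "Marginal"
    return "Failing"

print(rate_mer(34.0))                            # -> Excellent
print(rate_mer(29.0, excellent=33.0,             # amp test point, 64 QAM video
               acceptable=30.0, marginal=25.0))  # -> Marginal
```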
TESTING FOR IP-BASED SERVICES

When it comes to the deployment of IP-based services, it's still so early that there appears to be a lack of consensus regarding which tests need to be run, and how often they should be performed. Some say that if a cable operator builds a strong platform, then the network will run well and services will be pristine. But just what are the key components of a strong platform? According to the experts, a strong platform comes down to controlling the worst of the impairments, namely latency, jitter and packet loss.
Latency
Performing all the functions that are required to process and packetize voice signals and then transport them from the origination point to the receive point in any IP architecture, including PacketCable, takes time. Each function requires a tiny fraction of a second, but the total amount of time varies based on the architecture of the device as well as the amount of traffic that has to be processed. This time delay is known as latency.
Most network latency occurs after the packets leave the endpoint, or gateway. Every time a packet encounters a network router, a few milliseconds or more of additional latency is introduced. Therefore, unless the signal is kept within a carefully managed intranet or similar type of network, there is no control over the number of router-to-router hops a packet takes. Monitoring the total latency a packet experiences is necessary to maintain high-quality signal transmission.
According to International Telecommunication Union (ITU) guidelines, delays below 150 milliseconds are considered acceptable for most communications. Delays ranging between 150 and 400 ms may also be acceptable, depending on the voice quality desired, but anything over 400 ms is deemed unacceptable. Delays on VoIP sessions are measured in two categories: fixed and variable.
Fixed delays can include the following:
• Propagation delay: the time it takes for the packet to be transmitted over the physical link. This delay is usually bound by the physical characteristics of the transmission media (e.g., when using a fiber optic circuit, it is bound by the speed of light).
• Serialization delay: the time it takes to place the bits from the transmission buffer onto the transmission media. The higher the link speed, the lower the serialization delay.
• Processing delay: the time it takes to code, compress, decompress and decode the voice signal, plus the time it takes to collect enough voice samples to fill the payload of a data packet. This varies depending on the algorithm used.
An example of variable delay is queuing delay: the time a packet has to wait in a router before it can be serviced. This delay will occur at every router in the path of a VoIP session. A rough delay budget that sums these components is sketched below.
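To make the arithmetic concrete, here is a minimal sketch (not from the chart) of a one-way delay budget that sums the fixed and variable components just described; the distance, packet size, link rate and queuing figures are all illustrative assumptions.

```python
# Sketch of a one-way VoIP delay budget; every number here is an
# illustrative assumption, not a measured value.
SPEED_IN_FIBER_KM_S = 200_000            # assumed ~2/3 the speed of light

def propagation_ms(distance_km):
    # Fixed: time for the signal to cross the physical link.
    return distance_km / SPEED_IN_FIBER_KM_S * 1000

def serialization_ms(packet_bytes, link_bps):
    # Fixed: time to clock the bits onto the media; faster links mean less.
    return packet_bytes * 8 / link_bps * 1000

total_ms = (
    propagation_ms(1200)                 # assumed 1,200 km path
    + serialization_ms(218, 1_544_000)   # assumed voice packet on a T1 link
    + 25.0                               # assumed codec/processing delay
    + 30.0                               # assumed queuing across all hops
)
# ITU guidance cited above: below 150 ms is acceptable for most calls.
print(f"one-way delay ~ {total_ms:.1f} ms ->",
      "acceptable" if total_ms < 150 else "needs attention")
```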
Jitter
In addition to being sent over an unpredictable number of router hops, packets are also routed from one router to another over different assigned routes, each of which has a different amount of traffic to handle. So, packets from the same voice conversation will experience differing amounts of latency as they head toward their destination. These variable delays produce jitter: a phenomenon that comes from different packets arriving at the destination at different points in time.
Gateways use buffers to collect and hold the packets and put them back in the proper order. But even this process has to be optimized, so as not to introduce its own unacceptable latency. Again, jitter must be effectively monitored to be sure it's being properly dealt with.
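One common way to quantify this is the running interarrival-jitter estimate used for RTP streams (the smoothing defined in RFC 3550). The sketch below uses made-up send and receive timestamps; it is an illustration, not the chart's prescribed method.

```python
# RFC 3550-style running jitter estimate; timestamps are illustrative, in ms.
def running_jitter(send_times, recv_times):
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent          # one-way transit time
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0  # exponential smoothing
        prev_transit = transit
    return jitter

# Packets sent every 20 ms; network delay wanders between 40 and 55 ms.
send = [0, 20, 40, 60, 80]
recv = [40, 65, 85, 115, 120]
print(f"interarrival jitter ~ {running_jitter(send, recv):.2f} ms")
```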
Dropped packets
When traffic rises to a level that overloads a router, the device may intentionally drop packets to relieve the congestion. Error checking has been built into the protocols and is used to maintain data integrity. But this procedure requires additional overhead, and isn't really optimized for voice signals. A certain number of dropped packets (typically less than 3 percent) can be tolerated by the human ear before signal degradation is perceived, but beyond that amount, call quality can degrade to unacceptable levels.
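That 3 percent figure is easy to check from packet counters. A hedged sketch, with made-up per-call counts:

```python
# Flag calls whose packet loss exceeds the ~3 percent tolerance noted above.
# Call names and counts are invented for illustration.
def loss_percent(expected, received):
    return (expected - received) / expected * 100

calls = {"call-1": (5000, 4950), "call-2": (5000, 4790)}
for name, (expected, received) in calls.items():
    pct = loss_percent(expected, received)
    verdict = "tolerable" if pct < 3.0 else "degraded"
    print(f"{name}: {pct:.1f}% loss -> {verdict}")
```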
[Diagram: PacketCable architecture. Standalone MTAs and embedded MTAs (an MTA paired with a cable modem) connect through the HFC access network (DOCSIS) to the CMTS. A managed IP backbone with QoS features (headend, local, regional) links the CMTS to the call management server, media servers, the PSTN and the OSS back office (billing, provisioning, problem resolution, DHCP servers, DNS and TFTP servers), using network-based call signaling and line control signaling architectures.]
MEAN OPINION SCORES

Speech quality is usually evaluated on a five-point scale, known as the mean opinion score (MOS) scale: an average over a large number of speech samples, speakers and listeners. The five points of quality, from one to five, are: bad, poor, fair, good and excellent. Quality scores of 3.5 or higher generally imply high levels of intelligibility, speaker recognition and naturalness.
MOS is a global method used to evaluate a user's acceptance of a transmission channel or speech output system. It reflects the total auditory impression of speech on a listener. For quality ratings, normal test sentences or free conversation are used to obtain the listener's impression, which the listener is asked to rate on subjective scales such as intelligibility, quality, acceptability and naturalness. MOS shows wide variation among listener scores and does not give an absolute measure, since the scales used by the listeners are not calibrated.
Using this method, a score from 4 to 5 is considered toll quality; 3 to 4, communication quality; and less than 3, synthetic quality. But this method is both time consuming and expensive. Objective models that predict human quality judgments have also been developed. These perceptual models transmit an audio file through the network, comparing the received and transmitted files to assess distortions. While perceptual models are useful in laboratory settings, these models are unsuitable for the continuous monitoring of VoIP networks.
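A related objective approach, offered here as a hedged illustration rather than anything from the chart, is the ITU-T G.107 E-model, which scores a connection with a rating factor R (0 to 100) and converts it to an estimated MOS with a standard formula; the R values below are assumptions.

```python
# ITU-T G.107 E-model conversion from rating factor R to estimated MOS.
def r_to_mos(r):
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Assumed R values: a clean call, a fair call, and a badly impaired one.
for r in (93, 80, 60):
    print(f"R = {r} -> estimated MOS ~ {r_to_mos(r):.2f}")
```

Roughly, an R of 93 (about MOS 4.4) corresponds to toll quality, while an R of 60 (about MOS 3.1) lands near the bottom of communication quality.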
P.O. Box 266007, Highlands Ranch, CO 80163-6007
CED magazine, August 2004 www.cedmagazine.com
Tel.: 303-470-4800 Fax: 303-470-4890
The publisher gratefully acknowledges Trilithic Inc., Sunrise Telecom,
Acterna and others for contributing content to this chart.