
Ericsson's User Service Performance framework

Gerd Holm-Öste and Matz Norling

This article shows how Ericsson is securing predictable user service performance by introducing innovative methods and reusing proven tools. Among other things, this entails setting requirements at the system service level using input from user studies; defining system service KPIs that reflect the user experience; verifying system service performance values in a test environment with relevant network load; verifying the system service assurance architecture and reports for the defined system service KPIs; and providing system service audits and benchmarks.

Background
Today, users take telecom-grade service performance for granted and judge operators and service providers by price and the quality of their service portfolios. For this reason, the maturing telecom market must act swiftly to create and deploy new services, a fact that market scenarios confirm. Operators are thus shifting their focus to
user service performance (quality);
best-effort IP services and quality-enabled user services; and
user-experience-based targets.
Ericsson has anticipated this shift in focus and is fully positioned to help by providing systems that perform up to user expectations. To this end, Ericsson has created the User Service Performance framework, which, by means of a handful of performance indicators (PI), enables operators to predict and monitor essential performance values and gives telecom players a common ground for discussing user performance.
The telecom network, with its many hundreds of services, produces a huge volume of
performance data. Determining which indicators one should measure to best gauge
user demands is thus no easy task. This is
why Ericsson has defined a new system service assurance level (quality of system service,
QoSS), which paves the way for estimating
user service performance.1

User and system services


Figure 1 describes Ericsson's adaptation of standards from the TeleManagement Forum (TMF) and the International Telecommunication Union (ITU). By sorting services into user services and system services, one can set performance requirements and specify how user quality is best measured and monitored.
To establish the right user service performance, one must distinguish between user services and system services (Figure 2). In principle, system services are user-independent, which means they can be standardized. End-to-end system-service solutions include the user equipment. Examples of system services are Push-to-Talk, mobile TV, IPTV, and Multimedia Telephony. User services (that is, what operators provide via their portals) are based on one or more system services.

Key performance indicators

To measure, meet, and secure user service performance, Ericsson puts strong emphasis on system services. A service can be made available to a user if it meets the user's quality-related criteria. For system services, the quality of the basic integrity performance is determined by the (basic) quality functions in the service path. For many services, these functions are located in terminals and content-creation products.
ETSI has drafted a standard that defines indicators that report performance in terms of
accessibility (the probability with which users can start a service);
retainability (the service stays up once it is running); and
integrity (the quality of voice, video, and pictures).
Of these, Ericsson has selected a few vital performance indicators that reflect the performance users expect. The criteria for this selection are a strong user focus combined with access and system independence. The selected performance indicators are called system service KPIs (S-KPI) and define quality of system service (QoSS), the new service-assurance level. The associated documentation provides formulas as well as user and system (end-to-end) trigger points. It also recommends suitable tools for measuring the performance of relevant infrastructure.
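The S-KPI formulas and trigger points themselves are specified in the associated documentation and are not reproduced here. As a minimal sketch of the idea, assuming hypothetical per-session fields, accessibility-, retainability-, and integrity-style indicators can be computed from session records roughly as follows (illustration only):

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One hypothetical system-service session record (fields are assumptions)."""
    attempted: bool       # the user tried to start the service
    started: bool         # the service actually started (accessibility)
    completed: bool       # the session ended normally, i.e. was not dropped (retainability)
    media_quality: float  # perceived audio/video quality on a 1-5 scale (integrity)

def s_kpi_summary(sessions: list[Session]) -> dict[str, float]:
    """Aggregate illustrative accessibility, retainability, and integrity values."""
    attempts = [s for s in sessions if s.attempted]
    started = [s for s in attempts if s.started]
    completed = [s for s in started if s.completed]
    return {
        "accessibility": len(started) / len(attempts),    # successful starts / attempts
        "retainability": len(completed) / len(started),   # kept sessions / started sessions
        "integrity": sum(s.media_quality for s in started) / len(started),
    }

if __name__ == "__main__":
    demo = [Session(True, True, True, 4.1),
            Session(True, True, False, 3.2),
            Session(True, False, False, 0.0)]
    print(s_kpi_summary(demo))
```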

Figure 1
Ericsson's adaptation of standards.

[Figure content: quality of user service and user service performance; quality of system service covering system service accessibility, retainability, and integrity; system service performance and availability; trafficability performance; network performance; and network resilience/in-service performance (ISP).]


[Figure content: examples of user services and system services, including e-mail, file transfer (FTP), MMS/SMS, positioning, mobile TV, streaming, web browsing, WeShare, IPTV, Push-to-Talk, telephony, video telephony, payment, multimedia telephony, and multimedia video telephony; some are marked as PS service-independent.]

Figure 2
User and system services.



Verifying system service performance in a test environment
Verifying system service performance entails
verifying S-KPIs and calls for a solid QoSS
level within a defined test environment that
includes typical user equipment and relevant
traffic load. By benchmarking the test results
with the target values, one can determine the
required performance during a specific timeframe. Armed with this information, one can
then stipulate adequate requirements for the
service-assurance architecture.
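As a rough sketch of the benchmarking step, comparing measured values from a loaded test run against target values can be expressed as a simple pass/fail check. The S-KPI names and all figures below are invented for the illustration and are not taken from the article:

```python
# Hypothetical target values and measured results from a test run under load.
TARGETS = {"accessibility": 0.98, "retainability": 0.995, "access_time_s": 2.0}
MEASURED = {"accessibility": 0.987, "retainability": 0.991, "access_time_s": 2.4}

# Ratio-style S-KPIs should be at least the target; time-style S-KPIs at most the target.
HIGHER_IS_BETTER = {"accessibility", "retainability"}

def benchmark(measured: dict, targets: dict) -> dict:
    """Return a pass/fail verdict per S-KPI by comparing measured values with targets."""
    verdict = {}
    for kpi, target in targets.items():
        value = measured[kpi]
        verdict[kpi] = value >= target if kpi in HIGHER_IS_BETTER else value <= target
    return verdict

if __name__ == "__main__":
    for kpi, ok in benchmark(MEASURED, TARGETS).items():
        print(f"{kpi}: {'meets target' if ok else 'misses target'}")
```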

Figure 3
Performance data collection model.

[Figure content: user requirements on system performance expressed as KPI values; system service KPIs grouped into user accessibility, retainability, and integrity views (service access time, service accessibility, completion rate, drop rate, voice, audio, and video quality, service change success ratio, menu interaction); observability and monitoring requirements on user data, traffic data, and infrastructure data.]

Service assurance architecture, end-to-end

Figure 3 shows the shift in focus from purely network-related targets for user service performance to a perspective that also includes user-experience-based targets. The figure also serves as a service model that shows what information is needed to calculate the values of defined S-KPIs. To measure and report the end-to-end service quality of every session, one must combine data from several sources. At present, S-KPIs are collected from three different sources of data (combined per session as sketched after this list):
User data (data on the user experience), which includes access time and duration, quality of audio and video, and so on.
Traffic data, which shows performance (for example, accessibility and retainability) for every user.
Infrastructure data, which includes network performance and information needed for root-cause analysis (for example, PDP context and RAB information).
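A minimal sketch of how the three sources could be combined per session is given below. The session identifier, field names, and values are hypothetical and only illustrate the join; they are not Ericsson's data model:

```python
# Per-session views from the three data sources (hypothetical field names and values).
user_data = {            # user probes: the experience as seen by the user
    "sess-1": {"access_time_s": 1.8, "video_quality": 4.2},
}
traffic_data = {         # traffic probes: outcome of each session
    "sess-1": {"started": True, "dropped": False},
}
infrastructure_data = {  # network counters: context for root-cause analysis
    "sess-1": {"pdp_context_ok": True, "rab_setup_ms": 350},
}

def end_to_end_record(session_id: str) -> dict:
    """Merge the three views of one session into a single S-KPI input record."""
    record = {"session": session_id}
    for source in (user_data, traffic_data, infrastructure_data):
        record.update(source.get(session_id, {}))
    return record

print(end_to_end_record("sess-1"))
```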

Tools

Different services require different sets of tools to guarantee user quality. Ericsson has integrated service assurance into its multimedia reference network in order to evaluate methods and tools. The prime focus of the architecture is on Ericsson system services, but one may also easily include important user services. The tools suite includes
user services. The tools suite includes
OSS-RC and Ericsson Network IQ (ENIQ),
which provide easy access to infrastructure
data and alarms;
OSS Navigator, which provides an intuitive human interface for reporting system
service issues that affect users. An SLA
view, for example, shows areas that require
attention, and a network view shows the
part of the system that is causing the problem;
TEMS Automatic, an intrusive user probe, which generates test traffic (service ping) in the system to provide high-quality user data (active probing of this kind is illustrated in the sketch after this list);
TEMS Handheld Test Unit, a non-intrusive user probe, which monitors ongoing user traffic in terminals and can provide KPI values close to users;
SASN, a non-intrusive traffic probe, which listens to traffic on certain interfaces and reports on traffic flow (for example, success rates and bit rates);
Moniq, which is used for benchmarking
and auditing data traffic; and
MSDP (Mobile Service Delivery Platform)
for monitoring user service performance.
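As a generic illustration of what an intrusive (active) probe does, the sketch below generates one test transaction against a service endpoint and records access time and outcome. It is not the TEMS Automatic product, and the target URL is a placeholder:

```python
import time
import urllib.request

def service_ping(url: str, timeout_s: float = 5.0) -> dict:
    """Run one active test transaction and report accessibility and access time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as response:
            accessible = 200 <= response.status < 300
    except OSError:
        accessible = False
    return {"target": url,
            "accessible": accessible,
            "access_time_s": round(time.monotonic() - start, 3)}

if __name__ == "__main__":
    print(service_ping("http://example.com/"))  # placeholder target
```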
Performance reports

Different users in an operator organization require different kinds of information and reports from the service-assurance system (Figure 5). Customer support, for example, requires real-time information on the availability of every system service. Ideally, even end-users can be provided with this information. Likewise, service operation centers must know the actual values of the S-KPIs. They also use the historical view to monitor service capacity and to adjust alarm threshold values. Network operations centers, in turn, must see the status of the infrastructure that supports the system service, together with GIS information. And chief marketing officers (CMO) require easy access to information on service usage and trends.
Ericsson has captured these needs and used them as input for defining the service model and its associated monitoring tools.
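To make the role-based reporting concrete, the sketch below maps the roles named above to the report content they need and checks S-KPI values against per-role alarm thresholds. The structure and the threshold value are assumptions for illustration, not an OSS Navigator configuration:

```python
# Hypothetical mapping of operator roles to report content and alarm thresholds.
REPORTS = {
    "customer_support":  {"content": "real-time availability of every system service"},
    "service_operation": {"content": "current and historical S-KPI values",
                          "alarm_thresholds": {"accessibility": 0.97}},
    "network_operation": {"content": "infrastructure status and GIS view"},
    "chief_marketing":   {"content": "service usage and trend summaries"},
}

def alarms(role: str, s_kpis: dict) -> list:
    """List an alarm for every S-KPI that falls below the role's configured threshold."""
    raised = []
    for kpi, limit in REPORTS.get(role, {}).get("alarm_thresholds", {}).items():
        value = s_kpis.get(kpi)
        if value is not None and value < limit:
            raised.append(f"{kpi} below threshold: {value} < {limit}")
    return raised

print(alarms("service_operation", {"accessibility": 0.95}))
```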

[Figure content: service layer with standard services and IMS; multi-access edge, wireline access, wireless access, and transport; OSS-RC/Net-IQ and OSS Navigator with a service monitoring network view and an SLA and real-time status web portal; Moniq for user service traffic data; SASN (traffic data), TEMS Automatic (intrusive user data), and TEMS Handheld test unit (non-intrusive user data) delivering S-KPIs; infrastructure data for RCA and impact analysis.]

Figure 4
Service assurance architecture.

Figure 5
Service assurance reports.

[Figure content: real-time system service reports (for example, streaming, web, MMS/SMS) tailored to end users, customer support, the service operation center, the network operation center, and the chief marketing officer.]



Figure 6
Above: Top-level, real-time view of system service quality. Below: Network view of
Push-to-Talk service in the OSS Navigator.

OSS Navigator reports

Figure 6 consists of screenshots of example reports generated by the service assurance testing environment. The upper image shows overall availability of system services. The report gives real-time status and forecasts the service level agreement (SLA) for selected users. Clicking on the gauges opens the next level, which presents S-KPIs and trends.
The network view in OSS Navigator (Figure 6, bottom) shows which data is used to report service quality and which alarms are generated at defined thresholds. Network managers use this report to determine the root cause of service problems.

Customized services
A customized service-assurance solution includes a combination of Ericsson tools, legacy operator tools, and third-party products. Proceeding from the architecture, Ericsson's method describes, step by step, how operators may
select and monitor user services;
combine PIs in a service model (a simple model of this kind is sketched below);
feed the PIs with measurement information;
find the right tools for collecting information; and
build GUIs for their organizations.

One other approach is to have operators join forces with Ericsson via a performance partnership to optimize network performance and service quality. In this case, the process consists of three phases: assessment and evaluation of the current performance and supporting environment, followed by implementation. During implementation, Ericsson consultants guide operators every step of the way.
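The sketch below illustrates the "combine PIs in a service model" step with a deliberately simple rule: a user service is treated as no better than its weakest underlying system service. The service names, values, and the rule itself are assumptions made for the illustration, not Ericsson's method:

```python
# A user service offered on the operator portal is mapped to the system services it uses.
SERVICE_MODEL = {
    "portal_mobile_tv": ["mobile_tv", "payment"],   # hypothetical user service
}

# Illustrative S-KPI values per system service.
SYSTEM_SERVICE_KPIS = {
    "mobile_tv": {"accessibility": 0.985},
    "payment":   {"accessibility": 0.999},
}

def user_service_kpi(user_service: str, kpi: str) -> float:
    """Estimate a user-service KPI as the minimum over its underlying system services."""
    return min(SYSTEM_SERVICE_KPIS[part][kpi] for part in SERVICE_MODEL[user_service])

print(user_service_kpi("portal_mobile_tv", "accessibility"))
```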

BOX A, TERMS AND ABBREVIATIONS

CMO    Chief marketing officer
GIS    Geographic information system
IP     Internet protocol
IPTV   IP television
ITU    International Telecommunication Union
KPI    Key performance indicator
MSDP   Mobile service delivery platform
PDP    Packet data protocol
PI     Performance indicator
QoSS   Quality of system service
RAB    Radio access bearer
S-KPI  System service KPI
SLA    Service level agreement
TMF    TeleManagement Forum
UE     User equipment

Conclusion

Service performance is an increasingly important differentiator in maturing telecom markets.
Thanks to Ericsson's innovative User Service Performance framework, operators can emphasize requirements that affect the user experience.
The separation of services into user services and system services lays a solid foundation
from which to measure and monitor services.
System services (such as mobile TV, IPTV,
MMTel) are technically user-independent,
and can therefore be standardized.
Knowing the performance of system services makes it possible to cost-effectively
monitor all services. By identifying and
monitoring a selection of system-service
KPIs (S-KPI), operators can determine the
availability of service quality for all sessions
24 hours a day. As a result, they can guarantee service performance and thereby meet
user expectations.
The criteria for Ericsson's selection of a vital few KPIs are strong user focus combined with access and system independence.
Infrastructure data shows how nodes support system services and facilitates operator
efforts to find and rectify faults. Traffic data
shows the actual performance of each session.
And user data shows how users perceive service. This data becomes more and more relevant as access networks become less aware of
services in other parts of the system.
Ericsson employs proven service-assurance methods and tools in an innovative way that guarantees that user performance
is predictable and adequate (meets end-user
expectations). In addition, the service-assurance model supports different users in operator organizations. A combination of data
from monitoring tools, such as OSS Navigator and TEMS, is used to calculate the perceived end-to-end performance of user services. Finally, benchmarking tools effectively
close the loop.
Ericsson Review No. 1, 2008
