
Dimensioning of UTRAN Iub Links for Elastic Internet Traffic

Xi Li (a), Richard Schelb (b), Carmelita Görg (a) and Andreas Timm-Giel (a)

(a) Communication Networks, University of Bremen, FB1
Otto-Hahn-Allee NW1, 28359 Bremen, Germany
{xili, cg, atg}@comnets.uni-bremen.de

(b) Siemens AG
Siemensdamm 62, 13627 Berlin, Germany
richard.schelb@siemens.com

Abstract: This paper studies a comprehensive dimensioning approach for designing UMTS
access networks for elastic Internet traffic carried by the TCP protocol. The dimensioning
approach is based on the M/G/R Processor Sharing (M/G/R-PS) queuing model for
application performance. Taking all TCP connection aspects into account, an adjustment of
the extended M/G/R-PS model is proposed in this paper. Specifically, a new expression for
the mean sojourn time is presented, advocating admission control as a necessary means to
maintain goodput in case of traffic overload; the resulting system behaves as an M/G/R/N
processor sharing queue. In addition, the frame protocol (FP) PDU delay is addressed as
another critical dimensioning factor due to the strict delay requirements on the Iub link. Since
an analytical solution considering all lower layer aspects would be intractably complex, the
dimensioning for the FP layer is handled by a simulation approach. The M/G/R-PS model
and related notable results are validated by comparison with simulation results.
Keywords: UMTS, UTRAN, Iub, Dimensioning, M/G/R-PS, Call Admission Control,
M/G/R/N-PS, FP PDU Delay

1. INTRODUCTION

In a public multi-service UMTS network, it is important to provide an appropriate quality
of service (QoS) for a great variety of services as well as effective resource utilization. Each
of these services comes with specific quality of service demands with respect to performance
parameters such as end-to-end delay, jitter, loss, and blocking ratio. This poses new
challenges in the planning of the UMTS network, especially within the radio access network
(RAN) where the transmission resources are scarce and expensive. With the extension of the
UTRAN to cover more and more suburban and rural areas, the narrow-bandwidth links
between Radio Network Controller (RNC) and NodeB (base station) become considerably
costly. For a cost-efficient design of the UMTS access network, it is essential for the network
operator to dimension the Iub links carefully: over-dimensioning wastes precious bandwidth
resources, whereas under-dimensioning generally leads to less satisfactory quality of service
perceived by subscribers.

[Published in: ITC19 / Performance Challenges for Efficient Next Generation Networks, LIANG X.J. and XIN Z.H. (Editors), V.B. IVERSEN and KUO G.S. (Editors), Beijing University of Posts and Telecommunications Press, pp. 415-424]

This paper discusses a comprehensive dimensioning approach for
the Iub link carrying elastic Internet traffic, which resides on top of the TCP/IP protocol
suite, where the rate of each TCP flow adjusts itself to fill the available bandwidth according
to the network traffic conditions via TCP flow control. This dimensioning approach is based
on the M/G/R Processor Sharing (M/G/R-PS) queuing model which characterizes TCP traffic
at the flow level, where mobile users represent flows generated by downloading Internet
objects; the sojourn times represent the object transfer times. The transmission bandwidth of
UMTS user connections is limited by their assigned radio access bearer (RAB) type, e.g. 64
kbps or 384 kbps. In order to guarantee a minimum throughput one must have a CAC (Call
Admission Control) [6] scheme employed in the network to avoid instability and low
goodput in case of overload. The admission control for elastic traffic is based on the
maximum number of allowed flows. Whenever a per-flow access control is employed a
blocking model is used to dimension the network. Thus the original M/G/R-PS method is
extended to an M/G/R/N-PS model, which allows a maximum number of N active user
connections in the system sharing R servers simultaneously. In addition, since the Iub
interface has to fulfill strict delay requirements posed by UMTS specific layers in order to
guarantee each radio frame delivered on time according to the air interface, the FP PDU delay
is also a very critical dimensioning issue to be specifically considered. Due to this, AAL2 was
chosen as the adaptation layer, which is basically focused on real-time traffic, whereas a major
portion of the UMTS upper layer traffic exhibits non-real-time characteristics.
The remaining parts of this paper are organized in 4 sections. The following section
summarizes analytical approaches for UTRAN dimensioning with focus on application
performance. In the third section a description of the simulation model is given and the
simulation results obtained from this model are presented. In the last section, key conclusions
on the dimensioning problems of UMTS and an outlook of future work are given.

2. DIMENSIONING METHODS FOR ELASTIC TRAFFIC

Figure 2-1 shows an overview of a UMTS network sketching its main components.
Populations of mobile subscribers are connected to the NodeB via the radio interface
(Uu-Interface), and then go through the UTRAN and Core Network to the remote Internet
servers. The link between the Base Station (NodeB) and the Radio Network Controller (RNC),
the Iub interface, is a potential bottleneck within the UTRAN. End users generate service
requests to the application servers to download Internet objects, such as Web pages and emails,
over TCP connections established between the end users and the application
servers. In this way, each request generates one or more TCP flows over the Iub link.
It is assumed in this analysis that every elastic traffic flow is generated by one file
transfer and all users are assigned to the same maximum radio access bit rate, e.g. 128kbps.
The file transmission rates are controlled by the TCP feedback algorithm as a network
congestion control function. If TCP works ideally (i.e. instantaneous feedback), all elastic
traffic flows going over the same link will share the bandwidth resources equally, and thus the
system only carrying elastic traffic flows is essentially behaving as an M/G/R-Processor
Sharing queue on the Iub link. Each application is assigned a specific radio access bearer with
a certain peak data rate. This is modeled by assuming R servers inside the system where each
application can (at maximum) utilize the full capacity of a single server. In this paper the
focus of the analysis is placed on a single RAB type, i.e. each application receives the same
maximum data rate.

Figure 2-1: Iub link connecting the access network to the core network
The M/G/R-PS model can be imagined in such a way that several files that need to be
transmitted over one link are broken into little pieces, i.e. individual IP packets of the different
traffic flows, and are processed by the link quasi-simultaneously. In this way, large files do
not delay small ones too much when compared with FIFO scheduling. Recently, a number of
studies ([1], [2] and [3]) show that the M/G/R-PS model provides a simple and accurate
characterization of elastic IP traffic at flow level. The performance criteria of dimensioning
for elastic IP traffic is the average transfer delay and throughput, both of which are related to a
delay factor, in addition other parameters like blocking probability have to be considered.

2.1 M/G/R-PS Model
In this case, the attainable transfer rate for one individual subscriber is limited to a certain
peak rate r_p (i.e. the RAB rate in our context) and C represents the trunk line capacity. Thereby
the link appears like a Processor Sharing queue with R servers (here a server corresponds to a
TCP flow transmitting at its maximum available bandwidth), where R = C/r_p; hence the name
M/G/R-PS queue.
That means, only up to R flows can be served at the same time without rate reduction imposed
by the system. But if individual flows are not subject to any bandwidth restrictions, i.e. a
single elastic flow is able to fully utilize the total link capacity on its own, then the
M/G/R-PS model reduces to the simple M/G/1-PS model. However, this is not a common case
in the real system. An important property of M/G/R-PS queues mentioned in [4] is that the
average sojourn time (average time in system) is insensitive to file size distributions.
If E{x} is the average file size and λ is the average flow arrival rate, then the traffic load
(or utilization) is ρ = λ·E{x}/C. It has been shown in [1] that the expected sojourn time (or
file transfer delay) E{T(x)} for a file of size x is given by:
E_{M/G/R}{T(x)} = (x / r_p) · [ 1 + E_2(R, R·ρ) / (R · (1 − ρ)) ] = (x / r_p) · f_R        (2-1)
where E_2 denotes Erlang's second formula (the Erlang C formula) and f_R is defined as the delay
factor. The delay factor f_R represents the increase of the average file transfer time (and
decrease of the average throughput) due to link congestion. It is a quantitative measure of how
link congestion affects the transaction times, taking into account the economy of scale effect.
An example of calculating the file transfer delay using formula 2-1 can be seen in Figure 3-1.
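As a numerical sketch of formula 2-1 (not part of the original study; the file size, RAB rate, and link capacity below are illustrative values matching the scenarios in this paper), the delay factor and sojourn time can be computed as follows:

```python
import math

def erlang_c(R, a):
    """Erlang's second formula E2(R, a): probability that an arriving
    flow finds all R servers busy, for offered load a (Erlangs)."""
    # Erlang B via the numerically stable recursion, then convert to Erlang C
    b = 1.0
    for k in range(1, R + 1):
        b = a * b / (k + a * b)
    return b / (1.0 - (a / R) * (1.0 - b))

def mgr_ps_delay(x_bits, rp, C, rho):
    """Formula 2-1: expected sojourn time E{T(x)} for a file of x_bits,
    peak (RAB) rate rp and link capacity C (both in bit/s), utilization rho.
    Returns (delay, delay factor f_R)."""
    R = int(C // rp)                       # number of 'servers'
    f_R = 1.0 + erlang_c(R, R * rho) / (R * (1.0 - rho))
    return (x_bits / rp) * f_R, f_R

# Illustrative values: 12 kbyte file, RAB 128 kbps, 1500 kbps Iub VC
for rho in (0.3, 0.6, 0.9):
    delay, f_R = mgr_ps_delay(12_000 * 8, 128_000, 1_500_000, rho)
    print(f"rho={rho}: f_R={f_R:.3f}, E{{T}}={delay:.3f} s")
```

Note that R = C/r_p is truncated to an integer here; the model assumes an integral number of servers.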

2.2 Extended M/G/R-PS Model
However, the M/G/R-PS model introduced above assumes ideal capacity sharing among
active flows. But in practice the TCP flows are not always able to utilize their fair share of
available bandwidth. TCP's effectiveness of capacity sharing is determined by the TCP slow
start and congestion avoidance mechanisms, which are affected by network conditions such as
round-trip times and packet loss probability. Slow start is executed at the beginning of a
TCP transmission, when the link capabilities are unknown to the sender. During the slow start
phase, the TCP congestion window, which limits the number of TCP segments that may be sent
into the network before an acknowledgement from the receiver arrives, starts from one
segment and is then increased exponentially until a slow start threshold value is reached.
Within the slow start phase, the available bandwidth assigned to a connection is not fully
utilized. Thereby, the slow start mechanism leads to the increase of the file transfer time by
not completely utilizing the available bandwidth at the beginning of each transmission process.
Especially in scenarios with low utilization this degrades the achieved throughputs, since the
number of concurrent flows is not sufficient to utilize the bandwidth left unused by TCP flows
still in the slow start phase. To address this problem, [2] proposes a more realistic model
which incorporates the impact of slow start into the M/G/R-PS model.
Given the required delay factor f_R and the maximum rate r_p, the amount of data sent up to
the time when the sender can start utilizing its bandwidth share is calculated as:

x_slow-start = (2^{n*} − 1) · MSS    with    n* = ⌈ log_2( (r_p · RTT) / (f_R · MSS) ) ⌉        (2-2)
where n* represents the time step, in terms of Round Trip Times (RTT), at which the available
shared bandwidth starts to be fully utilized, and MSS (Maximum Segment Size) is the
maximum allowed TCP packet size (in our scenario MSS is set to 1460 bytes). If the total size
x of the file is smaller than x_slow-start, the sender never reaches the state of fair capacity sharing.
Therefore, the approximation of the expected file transfer delay E{T(x)} for files of size x
(including overhead) is more accurately described by formula 2-3 (cf. [2]):
E{T(x)} = ⌈ log_2(x/MSS + 1) ⌉ · RTT + E_{M/G/R}{T(x − x_start)}         for x < x_slow-start
E{T(x)} = n* · RTT + E_{M/G/R}{T(x − x_slow-start)}                      for x ≥ x_slow-start        (2-3)
As seen from formula 2-3, the computation of the expected time to transfer all data is divided
into two parts: the first part gives the sum of all necessary RTTs which are mostly for waiting
for acknowledgements as a consequence of the slow start mechanism. The second term
considers the time of sending the rest of the data with the available capacity. x_start is the
amount of data sent within the first part, as given by formula 2-4; it is only used when the
file size x is smaller than x_slow-start:
x_start = ( 2^{⌈log_2(x/MSS + 1)⌉ − 1} − 1 ) · MSS        (2-4)
Thus (x − x_start) gives the amount of remaining data after the last acknowledgement, which
is sent using the available capacity. E_{M/G/R}{T(x)} is the sojourn time corresponding to
the pure M/G/R-PS model according to formula 2-1. By comparing formula 2-3 with 2-1, it
turns out that the linear relationship between file size x and expected transfer time disappears
when TCP flow control is taken into account. Thus, file size distributions with a substantial
share of files whose transfer is mainly driven by the slow start phase yield higher average
delays than the mere consideration of file size and delay factor would suggest. It has to be
noted that the
system utilization remains the same, i.e. the impact is restricted to the performance
experienced by individual flows.
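The slow-start extension of formulas 2-2 to 2-4 can be sketched as follows (not part of the original study; the MSS and low-load RTT values are the ones assumed in this paper, and the pure M/G/R-PS sojourn time is passed in as a function):

```python
import math

MSS = 1460 * 8   # maximum segment size in bits (1460 bytes)
RTT = 0.2        # round trip time under low load (200 ms)

def extended_delay(x, rp, f_R, t_mgr):
    """Extended M/G/R-PS transfer time (formulas 2-2 to 2-4).
    x: file size in bits, rp: peak (RAB) rate in bit/s, f_R: delay factor,
    t_mgr: function returning the pure M/G/R-PS sojourn time of a size."""
    # formula 2-2: RTT cycles until the fair bandwidth share is reached
    n_star = math.ceil(math.log2(rp * RTT / (f_R * MSS)))
    x_ss = (2 ** n_star - 1) * MSS            # data sent while in slow start
    if x >= x_ss:
        return n_star * RTT + t_mgr(x - x_ss)
    # formula 2-4: file ends before fair sharing is reached
    n_x = math.ceil(math.log2(x / MSS + 1))   # RTT cycles needed for x
    x_start = (2 ** (n_x - 1) - 1) * MSS
    return n_x * RTT + t_mgr(x - x_start)

# Example with an uncongested link: t_mgr is the pure serialization time
f_R = 1.0
t_mgr = lambda bits: bits / 128_000 * f_R
print(extended_delay(12_000 * 8, 128_000, f_R, t_mgr))
```

Even with f_R = 1 the result exceeds the plain serialization time x/r_p, which is exactly the slow-start penalty discussed above.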
However, the above extension does not consider the TCP connection setup and release
delay. During the connection setup, the connection requesting instance of the client sends a
SYN segment to the server and then the server responds to the request by sending its own
SYN segment and at the same time acknowledging the SYN of the client. To conclude the
connection setup, the client has to acknowledge the SYN of the server. In the procedure of
connection release, each side performs TCP half-close to close the TCP connection, i.e. each
sends a FIN segment and then waits for the other side to acknowledge it. Therefore, there are
a total of two RTTs and an additional uplink message sent from NodeB to RNC.
In formula 2-3, RTT is assumed to be constant during one user connection, although in reality
it is influenced by network congestion. Therefore, an adjustment is proposed in this
paper to make the model more realistic:
E{T(x)} = ⌈ log_2(x/MSS + 1) ⌉ · RTT_adjust + E_{M/G/R}{T(x − x_start)}        for x < x_slow-start
E{T(x)} = n* · RTT_adjust + E_{M/G/R}{T(x − x_slow-start)}                     for x ≥ x_slow-start        (2-5)
Here RTT_adjust = RTT · f_R (f_R is the delay factor), while n* (cf. formula 2-2) depends only
on the RTT experienced under low load conditions (200 ms in our simulation scenario). The
transfer delay thus becomes a function of f_R: when the network is heavily congested, the
bandwidth share obtained by each user turns out to be lower as well. By adding the extra delay
for setting up and releasing the TCP connection, the expected file transfer delay E{T(x)}* is:
E{T(x)}* = E{T(x)} + (2 + UL_rtt_ratio) · RTT*_adjust        (2-6)
UL_rtt_ratio is the ratio of a round trip time taken by the uplink. We set UL_rtt_ratio = 0.1,
assuming that a lightly loaded uplink path experiences smaller latency. The distinction between
uplink and downlink is necessary as a consequence of the asymmetric data transfer property in
UMTS. Here RTT*_adjust is the round trip time during the connection setup and release
procedure only; since the transferred data volume is rather low in this phase, the resultant
RTT is much lower.

2.3 Performing the Connection Admission Control
It has been advocated by Roberts et al. [6] that admission control for TCP flows allows a
minimum throughput to be guaranteed. Let r_m denote the minimum rate, which can be used as an
admission control threshold. Then the number of admitted flows is limited by N = C/r_m. This
behaves as an M/G/R/N-PS model. The results for the M/G/R/N-PS queue follow directly
from the M/G/R-PS queue, except that the state space corresponding to the number of flows or
connections is now restricted to a total of N. The probability of each state is given
in formula 2-7. When the number of connections reaches the maximum value N, formula 2-7
gives the blocking probability p(N). When R = N, p(N) reduces to Erlang's first formula.

p(j) = [ (R·ρ)^j / j! ] · [ R! / (R·ρ)^R ] · (1 − ρ) · E_2(R, R·ρ) / ( 1 − E_2(R, R·ρ) · ρ^{N−R+1} )        for 0 ≤ j < R
p(j) = (1 − ρ) · E_2(R, R·ρ) · ρ^{j−R} / ( 1 − E_2(R, R·ρ) · ρ^{N−R+1} )                                    for R ≤ j ≤ N        (2-7)
Here ρ is the offered traffic load. With the probability of each queue state, the average number
of connections (or mean queue size) can be calculated. By applying Little's law, the average
file transfer delay can be obtained by:
E{T} = E{W} / ( λ · (1 − p(N)) )    with    E{W} = Σ_{j=0}^{N} j · p(j)        (2-8)
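The M/G/R/N-PS quantities can be evaluated numerically. The following sketch computes the state probabilities directly from the unnormalized birth-death form (equivalent to the closed-form state probabilities of formula 2-7), then the blocking probability and the mean delay via formula 2-8; the scenario parameters mirror the CAC example of Section 3.3 and the 70% load point is our assumption:

```python
import math

def mgrn_ps(R, N, rho, lam):
    """M/G/R/N-PS: state probabilities (formula 2-7), blocking p(N) and
    mean transfer delay via Little's law (formula 2-8).
    rho: offered load per server, lam: flow arrival rate (flows/s)."""
    a = R * rho
    # unnormalized state probabilities of the birth-death chain
    q = [a ** j / math.factorial(j) for j in range(R)]
    q += [(a ** R / math.factorial(R)) * rho ** (j - R) for j in range(R, N + 1)]
    G = sum(q)
    p = [v / G for v in q]
    blocking = p[N]
    mean_jobs = sum(j * pj for j, pj in enumerate(p))    # E{W}
    mean_delay = mean_jobs / (lam * (1 - blocking))      # Little's law, 2-8
    return p, blocking, mean_delay

# CAC scenario of Section 3.3: C = 1500 kbps, RAB 128 kbps -> R = 11,
# CAC rate 96 kbps -> N = 15; 50 kbyte files at 70 % offered load (assumed)
lam = 0.7 * 1_500_000 / (50_000 * 8)     # flows per second
p, blocking, delay = mgrn_ps(11, 15, 0.7, lam)
print(f"blocking p(N)={blocking:.4f}, mean delay={delay:.3f} s")
```

For N = R the blocking probability reduces to Erlang's first formula, as stated above.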

3. MODEL VALIDATION AND SIMULATION RESULTS

This section validates the M/G/R-PS model for configurations without admission control
and the M/G/R/N-PS model for configurations using admission control. The validation is
performed by comparing the results of the analytical investigation with simulation results
obtained by the UTRAN simulation model developed for this study.

3.1 Simulation Model Set-up
The considered traffic type (web, ftp) is assumed to use the UMTS QoS class "best effort" (cf.
[14]), which allows the UTRAN to select the bearer type that is most suitable for a certain UE
at a certain time (i.e. higher rates for nearby UEs in unloaded radio cells and lower rates for
far-off UEs in loaded cells, etc.). With respect to best effort services, the properly
dimensioned Iub has to achieve two targets. Firstly, the application performance (mainly
response time for object transfers) has to fit to the requirements. Secondly, the interaction of
the radio link control layer (RLC) and the frame protocol layer as a transport resource needs a
timely arrival of the radio frames (i.e. FP PDU), because radio frames arriving (downlink
from RNC to NodeB) later than scheduled for the transmission on-air will be discarded
resulting in a waste of Iub bandwidth (additional load due to re-transmission of discarded
blocks, if RLC operates in acknowledged mode). The percentage of FP PDUs exceeding the
allowed time budget is referred to as delayed FP PDUs within this paper.
For the performance analysis and dimensioning of the UTRAN Iub link, within this work
the simulation model according to UTRAN release 99 was successively extended by using the
OPNET simulator (version 10.0). The UTRAN models were set up based on the OPNET ATM
workstation/server model, which allows a flexible configuration of applications on the user
layer. The following key extensions have been added to the model:
- Modeling of RLC, MAC and FP for UMTS
- Admission control for UMTS connections at the flow level
- Shaping of IP flows according to configured UMTS rates
- AAL2 connection management
- AAL2 scheduling with QoS differentiation
For the setup of the simulation, the application is chosen as FTP running over TCP/IP. Within
this paper, a constant file size distribution is studied, with file sizes of 12 kbyte and 50 kbyte.
In the following examples, all users use the same RAB rate of 128 kbps to transfer data. The
Iub PVC is configured as a CBR PVC with a reserved maximum bandwidth of 1500 kbps; the
remaining bandwidth of the modeled E1 line (420 kbps) is consumed for general signaling
purposes and dedicated control channels. During the simulation, passive monitoring is
applied for determining the amount of delayed FP PDUs, i.e. no RLC retransmission is
applied, as the PDUs are delivered whenever they arrive.

3.2 Basic Scenario (No Admission Control)
Figure 3-1 compares file transfer delay derived from analytical models with simulations.
With a 12 kB file size, the TCP flow largely remains in the slow start phase. The offset between
the basic M/G/R-PS and the extended M/G/R-PS model under low load is due to the basic model
underestimating the delay caused by slow start, during which the available bandwidth cannot be
fully utilized. When the load grows, the shared bandwidth is fully utilized, and thus the two
curves converge. The adjusted and extended M/G/R-PS model proposed in this paper (formulas
2-5 and 2-6) considers the impact of slow start as well as the additional delay for setting up
and releasing the TCP connection. It can easily be seen that this approach matches the
simulation results very well.
[Figure: file transfer delay (s) versus load, comparing M/G/R-PS, Extended M/G/R-PS, Adjusted & Extended M/G/R-PS and Simulation (no CAC)]

Figure 3-1: File transfer delay, delayed FP PDU ratio: constant file size, mean = 12 kB, Iub link = 1 E1 line
[Figure: file transfer delay (s) and delayed FP PDU ratio versus load, for 1 E1 and 2 E1 Iub links]

Figure 3-2: File transfer delay, delayed FP PDU ratio: constant file size, mean = 12 kB
Figure 3-2 presents simulation results for the delayed FP PDU ratio and file transfer
delays over different loads, for both a single E1 line and 2 E1 lines. The ftp connection
performance for load ratios above 0.85 is severely degraded. For dimensioning purposes the
file transfer delay in the high-load case (load ratio > 0.8) can be neglected, because the
requirements on the frame protocol delay become the constraining factor. Without
admission control, for both link capacities the system is overloaded when the load ratio
exceeds 0.9, and the application performance becomes unstable, i.e. results in extremely
high file transfer delays. Figure 3-2 also highlights the achievable multiplexing gain comparing
an Iub with a single E1 line with the 2 E1 line case. The available bandwidth ratio delimited
by the amount of delayed frame protocol PDUs reaching 10% increases from 0.6 to 0.75
when changing the Iub bandwidth from one E1 to 2 E1. Thus the traffic properly served by
the Iub (i.e. expected traffic from the group of radio cells belonging to the corresponding
NodeB) could be increased by 150%, from 0.6 E1 units to 1.5 E1 units (0.75 × 2 E1), although
the line capacity is only doubled. Furthermore, the file transfer delay is also greatly
improved by the additional E1 line. At the load ratio of 0.75 the delay obtained from two E1
lines is 7.8% lower than that from one E1 line case. If the mobile network operator aims for a
traffic load reaching the maximum acceptable utilization of an E1 link, the acceptable traffic
load can be derived from the corresponding results of both application and FP PDU
performance. With a maximum expected delay of 1.2 seconds for a file transfer, the utilization
can reach 80% (1 E1) according to Figure 3-2. With a maximum of 5% delayed FP PDUs,
only 55% utilization is acceptable, i.e. the FP delay is the limiting QoS constraint and the
application QoS requirement is over-fulfilled, as the ftp delay at 55% utilization is close
to that of the unloaded line. It has to be noted that the requirement for the FP PDU delay has to be
carefully aligned with implementation and parameter settings of higher layers, e.g. using RLC
in acknowledged mode with retransmission is more robust against FP PDU delay/losses than
RLC transparent mode.

3.3 Scenario with Admission Control
When admission control is employed, the system can avoid heavy congestion by limiting the
number of user connections simultaneously active on the link; thus each user gets a minimum
throughput which can guarantee the QoS requested by the user.
In the following example, the minimum rate used as the admission control threshold, i.e. the
CAC rate in this study, is set to 96 kbps for a user group using a RAB of 128 kbps. Thus the
system allows a maximum of 15 simultaneous user connections on the 1500 kbps VC. The file
size distribution is set to constant with a mean file size of 12kbyte. Figure 3-3 highlights the
influence of CAC on the file transfer delay and delayed FP PDU ratio.

[Figure: file transfer delay (s) and delayed FP PDU ratio versus load, with and without CAC]

Figure 3-3: File transfer delay, delayed FP PDU ratio: constant file size, mean = 12 kB, Iub link = 1 E1 line
It is obvious that CAC protects the ongoing flows from contending flows, stabilizing the
application performance for load ratios above 0.72. The delayed FP PDU ratio is only slightly
improved: with a 10% delayed FP PDU ratio as maximum threshold, the acceptable link
utilization increases to 65% with CAC (61% without CAC).
[Figure: file transfer delay (s) and blocking ratio versus offered load, comparing the M/G/R/N-PS model with the simulation with CAC (50 kbyte)]

Figure 3-4: File transfer delay, blocking ratio: constant file size, mean = 50 kB, Iub link = 1 E1 line
Figure 3-4 compares the simulation results with the calculation of the M/G/R/N-PS formulas
(2-7 and 2-8), adding an additional 200 ms connection setup and release delay, for the case of
a 50 kbyte constant file size, in terms of file transfer delay and connection blocking ratio.
The x axis is the offered traffic load, including the rejected traffic, as a percentage of the
link capacity. For the 50 kbyte file size, slow start has no significant effect and the
bandwidth efficiency is relatively high; thus the simulated file transfer delays match the
M/G/R/N-PS formula well, as does the connection blocking ratio.

4. CONCLUSION

Within this paper two key constraints (file transfer delay and FP PDU delay) for the
dimensioning of the Iub interface in UMTS access network have been discussed. The
dimensioning approach based on the M/G/R-PS model has been demonstrated to be able to
determine the application performance perceived by a UMTS user. The presented simulation
results also demonstrate the necessity of using admission control to protect against instability,
e.g. in overload situations, and thus to guarantee the quality of service even for the best effort
class. The proposed M/G/R/N-PS model for determining the application performance under
admission control was validated by comparison with the simulation results. Nevertheless, the
importance of the performance on the AAL2 layer is stressed additionally. The related
performance curve was obtained from the simulations in this study. The derivation of an
analytical approach to predict the AAL2 performance based on a given UMTS traffic profile
is subject to further work. Although the focus of this paper lies only on packet switched user
plane traffic, the Iub link of course also has to carry real-time stream traffic such as voice
and video services; in addition, certain control data channels exchange data between the
NodeB and the RNC. Available dimensioning approaches for mixing stream and elastic traffic,
giving higher priority to stream traffic, have been discussed in [7] and [8]. This will be
further investigated. In this paper, the shown simulations emphasize the necessity of careful
network monitoring in evolved UMTS networks.

REFERENCES

1. Z. Fan, "Dimensioning Bandwidth for Elastic Traffic", Marconi Labs, Cambridge, UK.
2. A. Riedl, T. Bauschert, M. Perske, A. Probst, "Investigation of the M/G/R Processor Sharing
Model for Dimensioning of IP Access Networks with Elastic Traffic", Munich University of
Technology (TUM), Siemens AG.
3. A. Riedl, M. Perske, T. Bauschert, A. Probst, "Dimensioning of IP Access Networks with
Elastic Traffic", Siemens AG, TUM.
4. D. P. Heyman, T. V. Lakshman, A. L. Neidhardt, "A New Method for Analysing
Feedback-Based Protocols with Applications to Engineering Web Traffic over the Internet".
5. R. Vranken, R. D. van der Mei, "Performance of TCP with Multiple Priority Classes".
6. L. Massoulié, J. Roberts, "Arguments in Favour of Admission Control for TCP Flows".
7. K. Lindberger, "Balancing Quality of Service, Pricing and Utilization in Multiservice
Networks with Stream and Elastic Traffic", in Proceedings of ITC 16, 1999.
8. R. Nunez, H. van den Berg, M. Mandjes, "Performance Evaluation of Strategies for
Integration of Elastic and Stream Traffic", in Proceedings of ITC 16, Edinburgh, 1999.
9. F. Brichet, "Admission Control in Multiservice Networks".
10. A. Riedl, T. Bauschert, "A Framework for Multi-Service IP Network Planning".
11. J. Cao, M. Andersson, C. Nyberg, M. Kihl, "Web Server Performance Modeling Using an
M/G/1/K*PS Queue", Lund Institute of Technology.
12. T. Bonald, "Insensitivity Results in Statistical Bandwidth Sharing", France Telecom R&D.
13. 3GPP TR 25.853, "Delay Budget within the Access Stratum".
14. 3GPP TS 23.107, "Quality of Service Concept".