
What is Envelope Tracking?

Perhaps you've heard that some mobile devices consume more battery than others.
Or you may have heard something like: "When I put my phone in LTE mode, it drains the battery faster."

Unfortunately, this is true, especially in the case of LTE. It is one reason why the first LTE devices
became known as 'bad' battery drainers.
The 'solution' adopted by some manufacturers was to increase the size of the batteries. And
nowadays, it is not uncommon to find people carrying a spare battery. (The battery was enlarged to
compensate for the 'wasted' energy, as we shall see shortly.)

Analyzing some characteristics of mobile access technologies, we see that their complexity keeps
increasing (shown in red in the table below).

Although this complexity is required to support the increasingly high data rates being demanded, it
also brings some disadvantages.
The main one, and the technical explanation for the higher battery consumption in LTE, is a specific
characteristic of the OFDM(A) RF signal waveform: its "peaks" and "valleys" are more extreme than
those of its predecessors (3G, for example), so that high data throughput can be achieved with
limited spectrum resources.

When we compare typical WCDMA, HSUPA and LTE signals, we clearly see that the peaks in LTE are
more prominent.

Important note: Of course there are other factors that can cause an LTE mobile to drain more battery
than a 3G mobile, but this one is certainly the most striking: the signal's 'peak/average' power ratio.
So let's see what that ratio means. Consider a typical output signal of an OFDMA (LTE) transmitter.
The red line indicates the peak, and the green line the average (RMS) signal power.

The ratio of peak power to average power is referred to as PAPR (Peak-to-Average Power Ratio).
But how can we better understand this ratio?
By means of an example. Suppose that in the signal above the average power is 200 mW, and the
peak reaches 2 W. Then, for the whole signal (whose average is only 200 mW) to be transmitted, the
amplifier must be able to handle a signal ten times stronger (10 dB), that is, a power of 2 W.
Congratulations: you have just seen in practice what a PAPR of 10 dB means!
Practical (typical) values of this ratio vary depending on the technology (or more precisely, on the
shape of its characteristic waveform). For example, for WCDMA it is around 3.5 dB, for HSUPA around
6.5 dB, and for LTE about 8.5 dB.
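The arithmetic above can be checked with a short snippet (not part of the original text) that converts a peak/average power ratio to dB; the 2 W and 200 mW figures are the ones from the example:

```python
import math

def papr_db(peak_w: float, average_w: float) -> float:
    """Peak-to-Average Power Ratio in dB: 10 * log10(peak / average)."""
    return 10 * math.log10(peak_w / average_w)

# Example from the text: 2 W peak over a 200 mW (0.2 W) average.
print(papr_db(2.0, 0.2))  # prints 10.0 (dB)
```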

Let's move on, and consider an LTE RF signal as shown below.

Important note: For simplification purposes, we 'envelope' the RF signal, i.e., the displayed waveform
is the 'envelope' of the actual RF waveform, as shown below.

Having understood the envelope concept, and how we represent waveforms in this tutorial, let's
continue.

Constant Output
To 'support' this whole signal, the battery supply must stay at a constant level, shown in yellow in
the figure below. In orange we have the part that is lost - dissipated as heat.

In other words, the simplest possible power scheme would be a direct connection to the battery, with
a constant supply level (independent of the RF signal). But in this case, it is easy to see that
efficiency would be extremely poor, since it is directly related to the PAPR of the waveform.
In a scheme like this (Constant Output), comparing WCDMA versus LTE consumption, LTE consumes
almost twice as much as WCDMA.

Amplifier Efficiency
We said above that the power amplifier has 'poor efficiency'. But what does that mean?
Taking advantage of this first example, let us understand what the 'efficiency' of RF amplifiers is,
since many other design decisions depend on it, such as which energy source will be used, whether
we need heat sinks or not, etc. In other words, the RF design of all modern mobile systems depends
on the 'efficiency' of RF amplifiers.
In an RF amplifier, DC power is supplied to the circuit, and an output signal is generated (always with
less power than the DC input).
The ratio of output power to input (DC) power is what we call efficiency.

If the signal being processed has a fixed shape (a modulated signal but with fixed amplitude) and the
amplifier does not need to be adjusted, we have high efficiency.
However, in modern cellular systems the waveform and the amplifier's operating mode determine the
efficiency. And that is why alternative techniques emerged, as discussed below.
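As a minimal sketch (the wattage values are illustrative, not from the text), the efficiency definition above is just the ratio of RF output power to DC input power:

```python
def amplifier_efficiency(p_out_rf_w: float, p_in_dc_w: float) -> float:
    """Efficiency = RF output power / DC input power (a value between 0 and 1)."""
    if p_out_rf_w > p_in_dc_w:
        raise ValueError("RF output cannot exceed the DC input power")
    return p_out_rf_w / p_in_dc_w

# Illustrative numbers: 0.5 W of RF out of 2.0 W drawn from the battery.
eff = amplifier_efficiency(0.5, 2.0)
print(f"{eff:.0%}")  # prints 25% - the rest is dissipated as heat
```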

APT (Average Power Tracking) Amplifier


Continuing from the example above, we saw that it is the worst possible case: a system with no
power control at all. In fact, power control does exist (at periodic intervals, the supply level is
varied).
And the most common solution used is the APT (Average Power Tracking) amplifier circuit.
In the APT DC:DC solution, each slot is tracked based on the transmission power control levels.
In low-power cases, this solution actually improves the efficiency, but it does not help much in
high-power cases. Furthermore, the efficiency still depends on the waveform.

In APT mode, at a certain periodic interval, the supply voltage drawn from the battery is adjusted by
the device - however, this value is held until the next control slot! So even though the voltage is
adjusted, much of the energy is still wasted by 'leveling up': the PA supply must be set high enough
to accommodate the peaks as well as the valleys.

In the figure below, we again see the battery power in yellow and the wasted energy in orange. In
other words, we consume more energy than necessary!

Ideally, we should have no orange area, i.e., the device should draw from the battery only the energy
it needs to transmit.
It sounds simple, but it was actually very difficult to accomplish, especially in LTE smartphones
which, as we have seen, consume much more energy in their power amplifiers than 3G or 2G
devices.
Fortunately, this story has changed. The company Nujira developed (and has had commercially
available for a long time, with numerous devices already on the market) a new technique that
eliminates this huge waste in these radios (especially LTE).

ET (Envelope Tracking) Amplifier


The new technique is called Envelope Tracking. Setting aside the everyday notion of an envelope we
use when we mail a letter (although the concept is similar), in engineering and physics the envelope
concept refers to a function: for any oscillating signal, the envelope is a smooth curve that encloses,
or outlines, its limits. The signal is contained within the envelope.
You may not have noticed, but we already talked about this earlier (we gave a hint): the signals we
represent in this tutorial are drawn in envelope form!
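As an illustration of the envelope concept (not part of the original text), a known amplitude can be recovered from a modulated carrier by taking the peak of the rectified signal over each carrier cycle; all signal parameters here are hypothetical:

```python
import math

def envelope(samples, samples_per_cycle):
    """Estimate the envelope as the peak of |signal| over each carrier cycle."""
    return [
        max(abs(s) for s in samples[i:i + samples_per_cycle])
        for i in range(0, len(samples), samples_per_cycle)
    ]

# Hypothetical test signal: a 1 kHz carrier whose amplitude ramps from 1.0 to 2.0.
fs, fc, n = 100_000, 1_000, 10_000          # sample rate, carrier freq, samples
spc = fs // fc                              # samples per carrier cycle
sig = [(1.0 + t / n) * math.cos(2 * math.pi * fc * t / fs) for t in range(n)]

env = envelope(sig, spc)
print(env[0], env[-1])  # first value close to 1.0, last close to 2.0
```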

So it is easy to describe: ET (Envelope Tracking) is the technique that 'matches' the energy powering
the device to the power it transmits.
This scheme is much more dynamic than APT, and allows tracking of the signal amplitude.
With ET, the supply voltage applied to the amplifier is continuously adjusted to ensure that the device
always operates at its peak efficiency for the desired output power.
Unlike APT, it does not depend on the waveform. And with the use of ET, LTE consumption becomes
very close to that of WCDMA (about 1.2 times).
The baseband signal feeds the ET chip, which varies the DC power supply so as to keep the PA at its
compression point (or at least close to it), resulting in far greater efficiency.
Another difference compared to APT is that ET replaces the fixed DC source with a variable (rapidly
changing) DC power supply that dynamically tracks the amplitude (or envelope) of the RF signal.
The signals from the mobile baseband chipset are sent to the ET power supply chip.

Returning to the same figure as before, but now with ET instead of APT, we see that the orange
(waste) area is minimal!

The goal of ET is achieved: efficient power conversion. That is, at every moment only the necessary
energy is provided (for that instant of transmission). Although this exact relationship is practically
impossible to achieve, the continuous adjustment of the PA supply gives us a very good
approximation.
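A toy comparison (all values hypothetical) of the wasted energy under the three supply schemes discussed so far: a constant supply sized for the peak, an APT supply adjusted once per slot, and an envelope-tracked supply with a small headroom margin:

```python
# Hypothetical instantaneous power demand of the PA (watts), two 4-sample slots.
demand = [0.2, 0.5, 2.0, 0.3, 0.2, 1.0, 0.4, 0.2]
slot = 4  # APT adjusts the supply only once per slot

# Constant supply: sized for the overall peak at all times.
constant = [max(demand)] * len(demand)

# APT: per slot, the supply is set to that slot's peak and held until the next.
apt = []
for i in range(0, len(demand), slot):
    apt += [max(demand[i:i + slot])] * slot

# Envelope tracking: the supply follows the demand, with 10% headroom.
et = [p * 1.10 for p in demand]

def waste(supply):
    """Energy supplied but not needed (the 'orange area' in the figures)."""
    return sum(s - d for s, d in zip(supply, demand))

print(round(waste(constant), 2), round(waste(apt), 2), round(waste(et), 2))
# prints 11.2 7.2 0.48 - ET wastes by far the least
```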

Comparisons
Comparing the 3 implementations in an oversimplified way, we can clearly see that energy is best
utilized with the Envelope Tracking technique.

In practice, the efficiency of a conventional (APT) amplifier is around 25%, while with ET it is around
60%.
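Using the ballpark figures above (25% vs. 60%), a quick check of how much DC power each scheme draws from the battery for the same transmitted RF power; the 0.5 W output figure is illustrative:

```python
def battery_draw_w(rf_out_w: float, efficiency: float) -> float:
    """DC power drawn from the battery for a given RF output and efficiency."""
    return rf_out_w / efficiency

apt = battery_draw_w(0.5, 0.25)  # conventional APT amplifier, ~25% efficient
et = battery_draw_w(0.5, 0.60)   # envelope-tracking amplifier, ~60% efficient
print(apt, round(et, 3))  # prints 2.0 0.833 - watts drawn from the battery
```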

Availability
As already mentioned, ET has been available for some years. Samsung, for example, debuted it with
the Qualcomm QFE1100 chip in the Galaxy Note 3; LG with the Nexus 5; Apple with the iPhone 6.
Currently, its use is very widespread - virtually all new models from all manufacturers.

Conclusion
Over time, several techniques have been used to improve the efficiency of power amplifiers in
communications systems, each with its advantages and disadvantages, but all with the same goal: to
improve the efficiency and linearity of power amplifiers and consequently reduce the energy
consumption (which costs money) of BS (Base Stations) and save the battery of mobiles (prolonging
usage time and decreasing heating).
We have seen here some of these techniques, such as APT and ET (Envelope Tracking), the latter
being the most appropriate - and most used - in modern cellular systems such as LTE, with their high
PAPR (Peak-to-Average Power Ratio).
We have also seen that the Envelope Tracking technique is very effective, and introduces a new
input - the amplitude: the 'envelope' of the signal amplitude is now tracked and used by the
amplifier!

What are Modes, States and Transitions in GSM, UMTS and LTE?
Hello friends. Today we will talk about a very important topic for those working with mobile
communications: the different modes and states that a mobile phone can take, as well as how the
transitions occur between each of them.
The concept is simple, but the great amount of detail can end up making the topic extensive or
complex. This causes many people to simply give up trying to understand it, or not even be
interested in knowing such details.
However, the lack of knowledge of these key points of operation (when transitions occur, why they
occur, etc.) ultimately affects the understanding of other areas of the mobile network, since the
operation of the entire network is based on them. It is when we do not really understand this
fundamental basis of operation that we run the risk of thinking everything is too complicated.
So we will try to show, in a very simplified way, all the key concepts involved in the modes, states
and transitions that a mobile can have in a 2G/3G/4G network. We hope that by the end of this
tutorial everything shown in the following figure will be clearer to you.

Note: This tutorial ended up a little long, and could have been divided into 'parts'. However, we
decided to keep the content centralized. Feel free to read it the way you prefer - by parts, or all at
once. All right?
So let's take a deep breath, and begin.

Off Mode (Dead)


To demonstrate (always using our simple way of exemplifying), we start from the most basic mode
the mobile can be in: Off!
In this case, there is not much to talk about, don't you agree? When the mobile is off, it does not
'appear' to the network. It does not waste battery, and does not consume network resources. From
the network's point of view, it serves no purpose.

But it serves at least for us to begin understanding today's concepts: this is one 'mode' that the
mobile can take!

Location
Before moving on to the next modes and states, let's make a short stop - a parenthesis in our
conversation. We need to talk about another important issue, closely related to the theme, and one
that should also be well understood: the location of the mobile, and how the network sees it.
This is because the location of the mobile plays a significant role in the modes, and especially in the
state transitions, that it can take. We must recall, even if very quickly, some basic concepts of
location in mobile systems.
The general rule is that whenever the mobile detects that it has changed cell, it performs a procedure
to inform the network of its new location, i.e., it updates its position, stating its current 'location
identifiers' in specific messages.

The following figure shows the different possible location identifiers, from the point of view of the
RAN (Radio Access Network) and also of the CS (Circuit-Switched) and PS (Packet-Switched) Core.

For example, if the mobile moves from the coverage area of cell 'A' to the coverage area of cell 'D',
it performs a 'cell update' procedure and informs the network that it is now being served by cell 'D'.
This is the general rule, and similar procedures occur whenever there is any change from one area to
another (whether a cell, URA, LAC or RAC area).
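The general rule above can be sketched as a tiny check; the identifier names and values here are hypothetical, just to illustrate the logic:

```python
def changed_location_ids(old_ids: dict, new_ids: dict) -> list:
    """Return which location identifiers changed (cell, URA, LAC, RAC)."""
    return [k for k in ("cell", "ura", "lac", "rac") if old_ids[k] != new_ids[k]]

before = {"cell": "A", "ura": "URA1", "lac": "LAC1", "rac": "RAC1"}
after = {"cell": "D", "ura": "URA1", "lac": "LAC1", "rac": "RAC1"}
print(changed_location_ids(before, after))  # prints ['cell'] -> cell update
```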

Of course, the rule above does not cover everything - there are still many aspects and concepts to
consider (for example, the cell update may also be triggered by events other than location changes).
But that isn't our goal today, as we are only seeking to know the modes and states. So we will
continue, but feel free to extend your study afterwards into the areas that interest you - it will
definitely be worth it.

Idle Mode
So let's continue getting to know the modes.
The next mode the mobile can take is quite intuitive: on. But that is not the mode's name - after
being turned on, and consequently becoming perceived by the network, we say the mobile is in
'Idle' mode.
In Idle mode, besides being seen (known) by the network, the mobile also comes to see (know) the
network and can then interact with it.
As part of this interaction in Idle mode, the mobile can 'camp' on a given cell.

Even without yet knowing what it means for the mobile to 'camp' on a cell, we can say that in Idle
mode the mobile carries out a huge number of operations, depending for example on its available
technologies (2G, 3G and/or 4G) or the network it is on.
And a lot really happens. You can check it on the screen as you turn on your mobile: first comes a
message that the mobile is searching for the network. As soon as it finds one, the antenna bars
appear, followed by some indication of the type of technology it could connect to (GSM, HSDPA,
LTE, etc.). And finally, the name of the operator (or any other message it uses as an ID).
At this point, we say that the mobile is 'camped' on a cell of the network.
We understand that it is 'aware', ready both to originate and to receive a call. It does not have a
dedicated channel allocated, so it cannot actually carry a call yet. It must therefore constantly
monitor the available communication channels, to know what to do when the time comes.
In this mode, the mobile has no active connection to the network, and any data transmission will
require the establishment (or re-establishment) of a control connection before data can flow. It
transmits almost nothing in this mode (only, in some cases, small pieces of information to update its
registration area).
That is, the radio is 'asleep' most of the time and only wakes up when necessary - when instructed
to participate in some activity.
In the specific case of a 4G mobile in Idle mode, it supports activities such as DRX (Discontinuous
Reception), reading System Information (SI) for access, cell reselection and paging.
And in the specific case of a 3G mobile in Idle mode, it keeps listening to the CPICH (Common Pilot
Channel) of the cell where it is camped, and also of the neighbor cells. It also listens to the PICH
(Paging Indicator Channel). On the latter, it looks for its 'Paging Indicator' - a true or false value that
tells it whether it should read the Paging Channel. In other words, its 'Idle DRX' (Discontinuous
Reception) cycle.
To enter this mode, the mobile makes contact with its PLMN, seeking a suitable cell that can provide
it service, and tunes to its control channel. As already mentioned, this choice or tuning is what we
call 'camping' on the cell - the mobile registers its presence in that registration area.
If the mobile loses the coverage of this cell, it selects (searches for) the most suitable cell available,
and camps on that other one - performing a reselection.
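In its simplest form, the reselection idea above is just "camp on the most suitable cell available"; the cell names, signal levels and suitability threshold in this sketch are all hypothetical:

```python
def reselect(cells: dict, current: str, min_level_dbm: float = -110.0) -> str:
    """Stay on the current cell if suitable; otherwise pick the strongest one."""
    if cells.get(current, -999.0) >= min_level_dbm:
        return current
    suitable = {c: lvl for c, lvl in cells.items() if lvl >= min_level_dbm}
    return max(suitable, key=suitable.get) if suitable else current

measured = {"A": -120.0, "B": -95.0, "C": -100.0}  # measured levels in dBm
print(reselect(measured, current="A"))  # prints B (coverage of A was lost)
```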
But let's pause for a moment: although cell selection and reselection are concepts closely related to
modes, states and transitions, we would be delving too deep into a topic that is not the main goal
today. Let us return to Idle mode in general. If there is interest, we can talk more specifically about
this or other topics in another tutorial.
Returning to (and summarizing) the goal of the mobile camping on a cell in Idle mode: it is so that it
can receive information from the network. For calls originated by the mobile, it starts the call on the
corresponding control channel of the cell it is camped on. And in the case of terminated calls, the
network already knows its location information and which area it is in, and then sends a 'paging'
message to it on the control channels of that registration area, from where the mobile answers.
If we look up the literal meaning of Idle, we find something like 'not doing anything'. But that is not
exactly what happens. In addition to the initial procedures described above (the power-on
procedure), the mobile continues to carry out many other activities.

Airplane Mode
Although not illustrated in the figure above, the act of turning on (power-on) is not the only one that
takes the mobile into Idle mode. The mobile can also go into Idle mode when we turn off its
'airplane' mode.

This is a very particular mode and, in network terms, we can consider the act of putting the mobile
in airplane mode as 'disconnecting from the network'. Similarly, turning off airplane mode is
equivalent to 'connecting to the network'.
Airplane mode, as the name suggests, originated from the ban on the use of wireless mobile phones
on airplanes. The 'problem' actually refers only to the device's use of radio frequency. So the option
was created to turn 'off' only the radio part of the device, leaving users free to use other features,
such as games and tools like text editors and spreadsheets.
And of course, it is not necessary to be on a plane to use this mode. Airplane mode can be used
whenever there is a need to 'turn off'/'turn on' the device's radios - without having to wait for a
complete restart.
When the mobile is switched on (or when airplane mode is turned off), it enters the mode we already
know: Idle mode.
Let's continue and get to know another mode that the mobile can take.

Unless you make (for example, call someone) or receive a voice call, or make or receive a data call
(for example, browsing a web page), the mobile will remain in Idle mode.
But if a call comes, then everything changes. The mobile switches to the mode known as Connected
mode.

Connected Mode
Okay, so far we understand how the mobile enters Idle mode, and also that, although the name does
not indicate it, it is a very important mode where much happens.
But the goal of every mobile is to transfer data in the form of calls, either voice or data. And when
the mobile is in one of these calls, we say that it is in Connected mode!
Unlike Idle mode, where we can make roughly the same considerations for 2G, 3G and 4G
technologies, the Connected mode considerations are different for each one.
What is common to all is that when the mobile needs to initiate communication, it needs to establish
a control connection, and then a connection that allows traffic.
In the case of 3G and 4G: when the mobile initiates a call, it first sends a request to establish a
signaling connection. Then the RRC (Radio Resource Control) connection establishment procedure is
initiated. When the RRC connection is established, the mobile enters Connected mode.
Note: In the case of 2G, the idea is the same, but some other concepts appear. As our tutorial would
get a little long because of the number of concepts to be covered, and because the 2G state
transitions require some further explanation (2G-only concepts), we will leave those detailed
explanations for another tutorial if there is interest from our readers. In any case, although not
explained here, 2G state transitions are still represented in the complete image.
At first, we are led to understand Connected mode simply as the 'opposite' of Idle mode.
Unfortunately, the picture is not so simple.
Today, the number of smartphones on the market keeps increasing, and a side effect has been a
greater (mass) adoption of data usage. This reach was actually very large, and keeps growing every
day.
However, it brought a challenge: how to support the signaling 'tsunami' that such massive use of
data requires.
The now numerous users all want to be connected at the same time, in many different types of
applications.
For several reasons and through several mechanisms, each of these smartphones periodically
activates and deactivates its connections.
While the goal is for the user to have the perception of being always connected, the amount of
signaling makes this mission almost impossible.
Fortunately, to minimize this problem, different 'states' were created within Connected mode!
Although in this tutorial we generally seek to understand the states and operating modes of a mobile
in a network, the UTRAN (3G UMTS) and E-UTRAN (4G LTE) modes have some more specific states
and concepts. So let's proceed, but discussing each of them separately.

3G Connected Mode
When a 3G mobile is in Connected mode, its level of connection with the UTRAN is determined by
the QoS requirements of the active RABs and their traffic characteristics.
The challenge in UMTS of keeping a lot of users connected led to mechanisms and implementations
that seek to mitigate this scenario.
For example, some implementations seek to minimize the mobile's battery consumption, and others
seek to reduce signaling. The Fast Dormancy functionality (introduced by 3GPP in Release 8) also
has mechanisms to tackle this challenge. Other features have been developed and improved up to
today.
Ironically, UMTS systems were developed to meet the growing demand for multimedia (data)
foreseen years ago. Since very large data growth was expected, the system was designed to
transport high bit rates, videos, etc. efficiently. Even with a slight delay to start, the system served
well, especially in cases of high rates.
But in recent years, the data explosion observed was larger than expected: increasingly cheap
smartphones, affordable unlimited data plans extended even to prepaid users, and an explosion of all
kinds of applications - especially applications using small volumes of periodic data with frequent
updates.
New applications have emerged or become more heavily used, such as social and messaging
networks (WhatsApp, Twitter, Facebook), stock portfolios, and Email/Calendar/Contacts/RSS sync.
While the UMTS system allows it, it was not designed for this: sending and receiving very small
amounts of data, often less than 1 kB.
Each of these messages needs a connection, with all the associated signaling load!
Many mobile operators keep a higher-power channel up for a longer period of time when they expect
the mobile to transmit or receive more data in the near future. But this ends up consuming more
battery and taking up resources that could be in use by another user.
To help with this problem, for a 3G mobile in Connected mode there are the states: CELL_DCH,
CELL_FACH, CELL_PCH and URA_PCH.

Let's get to know each of them and, to facilitate understanding, we will classify them according to
the following items:

Channel: are the channels the mobile uses in this state dedicated or shared?

Knowledge by the network: does the network know where the mobile is at the cell level or at the URA level?

Data transfers: is the volume of data to be transmitted large or small?

Transitions: when the transfer finishes, or a particular timer expires, where does the mobile go?

Of the classifications above, the one you may not fully understand is the dedicated or shared
channel.
One way to understand the difference between a dedicated and a shared channel is by analogy.
Think of dedicated channels as rooms in a hotel - guaranteed, individual service for the user. The
only problem is that, as in a hotel, the number of channels - rooms - is limited. Anyway, the hotel
always tries to provide the service in the best possible way - just like the network.
Following the same analogy, shared channels would be a conference hall - it serves many more
people, but not in the same way the rooms do.
Let's talk about each of these states, applying the classifications mentioned above.

CELL_DCH (UTRAN) State


If there is one state that embodies the 3G Connected mode, it is CELL_DCH.

Dedicated channel. As the state's name suggests, CELL_DCH (DCH: Dedicated CHannel) uses a
channel dedicated to the mobile in both Uplink and Downlink.
In the CELL_DCH state, the mobile is in Connected mode and utilizes a dedicated R99 channel, or a
shared HS-DSCH (High Speed Downlink Shared Channel) and/or E-DPCH (Enhanced Dedicated
Physical Channel).
Known at cell level. We can also tell from the name that the network knows where the mobile is at
the cell level (according to the current Active Set).
Transfers of large data volumes. When the mobile needs to transfer large volumes of data, this
is the ideal state.
But as we know, the scenario has changed with the increasingly common adoption of applications
requiring small periodic data transfers. And if we used the limited resources of CELL_DCH for all
these establishments and re-establishments, the system would inevitably collapse. In our hotel-room
analogy, there would not be rooms for everyone!
The solution is to create an auxiliary state that supports the extra demand. And that means using
shared channels, which define the state we will see next, CELL_FACH.
Transitions to Idle or CELL_FACH (or the PCH states, as we shall see soon). When the mobile
finishes its transfer, it may return to Idle mode (releasing the RRC connection), or switch to the
CELL_FACH state (if the amount of data in the buffer waiting to be transferred is smaller than a
certain configured threshold - in other words, if there is little data left to transfer).

CELL_FACH (UTRAN) State


Shared channel. The CELL_FACH state keeps the mobile in Connected mode, only instead of a
dedicated physical channel, the mobile uses shared channels.

In the analogy of the dedicated channel as hotel rooms, the shared channel would be the hotel's
conference room.
Small data volume transfers. This makes it the ideal state for transmission and reception of small
data packets:

In the Uplink, the RACH channel (Random Access Channel) is used: the mobile transmits RACH
messages as needed.

In the Downlink, the FACH channel (Forward Access Channel) is used: the mobile is constantly
decoding the FACH channel.

Known at cell level. In the CELL_FACH state, the network also knows where the mobile is at the
cell level (the cell where the mobile made the latest 'cell update').
Transitions to Idle or CELL_DCH (or the PCH states, as we shall see soon). When the mobile has
finished transferring in CELL_FACH, it may return to Idle mode (releasing the RRC connection), or
switch to the CELL_DCH state (if the amount of data in the buffer waiting to be transferred is greater
than a certain configured threshold - in other words, if there is a large volume of data to be
transferred).
But even with the help of CELL_DCH and CELL_FACH (hotel rooms, plus the conference hall),
network capacity may not be enough. Also, if the only exit from these states after the end of the
transfer were Idle mode, we would worsen the growing signaling problem (re-establishing
connections).
But then what is the solution for cases where everything is already occupied? In the case of the
hotel: take each extra user's name, give them a password, and call them as soon as
possible/necessary.

In the case of the 3G network, to minimize this problem there are the PCH states (CELL_PCH and
URA_PCH). These are states to which the mobile can be transferred without losing its RRC
connection (they had their names taken and got a password).
But for now, they cannot take advantage of the hotel's services (sending or receiving data). They
can only stay aware and, when necessary/appropriate, obtain service.
Shall we get to know the PCH states?

CELL_PCH (UTRAN) State


The CELL_PCH state is one of the PCH states, a Connected mode state that the mobile can take, and
it has some interesting features. Starting with the name: PCH refers to paging.
Although not the same as Idle mode, this state closely resembles its behavior, especially from the
mobile's point of view. The big difference here is that the control connection (RRC) is not lost
(although the mobile rarely uses it).

Whenever the mobile camps on a new cell, it informs the network ('cell update'). Remember that in
Idle mode, the mobile informs the network only when there is a change of LA (Location Area) or RA
(Routing Area); that is, in this state we have more updates, as they happen at the cell level.
But in this state, as in Idle, the mobile does not transfer data. And every time the mobile needs to
send the 'cell update' message, it needs to change temporarily to the CELL_FACH state.
The mobile keeps listening to the same channels as in Idle mode - it uses DRX to monitor the
selected PCH channel via the associated PICH. The radio remains inactive most of the time and only
wakes up in the DRX cycle of the CELL_PCH state (Note: the DRX cycle of the CELL_PCH state is
different from the DRX cycle of Idle mode).
As mentioned, the control connection is maintained, so any new data transmission can be performed
more quickly and with much less signaling, because it only requires sending the data that is present.
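The DRX idea mentioned above (the radio sleeps and wakes only briefly each cycle) can be illustrated with a toy duty-cycle estimate; all power and timing figures here are hypothetical:

```python
def drx_avg_power_mw(cycle_ms: float, awake_ms: float,
                     p_awake_mw: float, p_sleep_mw: float) -> float:
    """Average power when the radio wakes for awake_ms out of every cycle_ms."""
    duty = awake_ms / cycle_ms
    return duty * p_awake_mw + (1 - duty) * p_sleep_mw

# Hypothetical: wake 10 ms out of every 1280 ms; 200 mW awake, 2 mW asleep.
print(round(drx_avg_power_mw(1280, 10, 200, 2), 2))
# prints 3.55 - far below the 200 mW of an always-on receiver
```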
In this state there is no downlink activity: whenever the mobile needs to transmit or receive, it goes
to the CELL_FACH state.
Known at cell level. In the CELL_PCH state, the network knows where the mobile is at the cell level
(the cell where the mobile made the last 'cell update'). Remember: the 'cell update' is done in the
CELL_FACH state.

No data transfers. The only objectives of this state are:

Save energy (using a DRX cycle similar to Idle mode);

Allow quick access to the network, since the network knows exactly which cell to send the paging to, and
because there is no need to set up a new RRC connection.

Transitions to Idle or CELL_FACH. If, after a certain time, the mobile continues without data
transfer, it is released. Otherwise, it goes to the CELL_FACH state (data is being transferred).

URA_PCH (UTRAN) State


The fourth and final state (URA_PCH) is virtually identical to the CELL_PCH state. The only difference
is that the updates are sent only when the mobile changes URA (UTRAN Registration Area), instead
of on every cell change.
With this, the mobile transmits even less frequently than in the CELL_PCH state (remembering that
it keeps the control connection active).

Known at URA level. The network knows where the mobile is at the URA (UTRAN Registration
Area) level, according to the URA assigned to the mobile during the last 'URA update' - remembering
that the 'URA update', like the 'cell update' we saw in the CELL_PCH state, is done only in the
CELL_FACH state.
No data transfers. For the reason above, this state is recommended for mobiles that are moving
fast. But it keeps goals similar to those of the CELL_PCH state:

Save even more energy;

Allow quick access to the network, since the network knows which URA to send the paging to, and also
because there is no need for a new RRC connection setup.

Transitions to Idle or CELL_FACH. If, after a certain time, the mobile continues without data
transfer, it is released. Otherwise, it goes to the CELL_FACH state (data is being transferred).
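The buffer-threshold and inactivity-timer transitions described for the four states can be sketched as a toy decision function. The state names are from the text; the threshold and timer values are hypothetical (in practice they are operator-configured parameters):

```python
def next_state(state: str, buffer_bytes: int, idle_seconds: float,
               threshold: int = 1024, timer: float = 10.0) -> str:
    """Toy UTRAN RRC state transition driven by buffer size and inactivity."""
    if state in ("CELL_PCH", "URA_PCH"):
        # Any pending data forces a move to CELL_FACH; long inactivity releases.
        if buffer_bytes > 0:
            return "CELL_FACH"
        return "IDLE" if idle_seconds > timer else state
    if state == "CELL_FACH":
        if buffer_bytes > threshold:
            return "CELL_DCH"            # large volume -> dedicated channel
        return "CELL_PCH" if idle_seconds > timer else state
    if state == "CELL_DCH":
        if buffer_bytes <= threshold and idle_seconds > timer:
            return "CELL_FACH"           # little or no data left -> shared channels
        return state
    return state

print(next_state("CELL_FACH", buffer_bytes=50_000, idle_seconds=0))
# prints CELL_DCH - a large buffer moves the mobile up to a dedicated channel
```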

Comparison between Idle Mode and PCH States (CELL_PCH/URA_PCH)


After getting to know the connected paging states CELL_PCH and URA_PCH, can we say that they are
equivalent to Idle mode?

No. Remember that in Idle mode we do not have any established RRC connection, unlike the
CELL_PCH and URA_PCH states, where this connection still exists.
It is true that in Idle mode and in the CELL_PCH and URA_PCH states the mobile has no radio
resource allocated, and for this reason it cannot initiate any type of data transfer on dedicated
or common channels. But do not let this fact confuse you.
There is a big difference when the mobile tries to initiate communication with the network.
In Idle mode, the mobile needs to send an RRC connection request (via RACH). In the CELL_PCH or
URA_PCH state, the mobile simply moves to CELL_FACH, sends a message such as 'cell update',
and is ready for communication - it does not have to re-establish the signaling connection, and then
the RRC connection, all over again.
Obtaining network service is thus more efficient.

Battery and Signaling


Battery consumption, increased signaling and interference in the network are directly related to
the configuration of some state-transition parameters, such as timers and other settings.
But to really understand how it all works, we need some auxiliary information.
Let's look at some of the data that influence the reduction of mobile battery consumption and of
signaling.
Considering the modes seen so far, we can compare the battery consumption in each of them in
relative units. We then have the approximate consumption of each RRC mode:

OFF = 0

Idle = 1

CELL_DCH = 100 (that is, 100 x Idle)

CELL_FACH = 40 (that is, 40 x Idle)

CELL_PCH < 2 (in this case it depends on the DRX cycle relative to Idle, and on mobility)

URA_PCH ≤ CELL_PCH (in mobility scenarios it is less than the consumption in the CELL_PCH state; in static
scenarios it is the same).
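As a quick sanity check, the relative figures above can be turned into a back-of-the-envelope energy calculation. Below is a minimal Python sketch; the numbers are the illustrative ratios from the list, not measured values:

```python
# Approximate relative battery consumption per 3G RRC state/mode,
# normalized to Idle = 1 (illustrative figures from the list above).
RELATIVE_CONSUMPTION = {
    "OFF": 0,
    "Idle": 1,
    "CELL_DCH": 100,
    "CELL_FACH": 40,
    "CELL_PCH": 2,   # upper bound; actual value depends on DRX and mobility
}

def relative_energy(schedule):
    """Energy in 'Idle-seconds' for a list of (state, seconds) pairs."""
    return sum(RELATIVE_CONSUMPTION[state] * seconds for state, seconds in schedule)

# Example: 5 s in CELL_DCH, 10 s in CELL_FACH, then 60 s in CELL_PCH
session = [("CELL_DCH", 5), ("CELL_FACH", 10), ("CELL_PCH", 60)]
print(relative_energy(session))  # 5*100 + 10*40 + 60*2 = 1020
```

The "area" of each state (relative power × time spent) is exactly what the graphic examples further below represent.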

There is a relationship between energy consumption and the efficiency of communication. The
following figure helps us understand this better, as it shows the workflow of the UMTS states, where
the state with the highest consumption is highest in the figure. Remember, though, that
consumption should not be the only variable taken into account: the more energy the mobile uses,
the more immediately communication occurs.

If the mobile remains in the CELL_DCH state, it has an almost immediate connection and a very high
throughput. But it consumes about 100 times more battery than in Idle mode.
If it remains in the CELL_FACH state, it has a lower throughput, but with 40% of the CELL_DCH
consumption.
If it stays in a paging state (CELL_PCH or URA_PCH), consumption is almost the same as in Idle mode.
The advantage is that both states maintain the control connection, so communication is resumed
faster than from Idle mode.
What Idle mode offers in this trade-off (battery consumption versus communication
efficiency) is that the battery consumption is minimal, as is the load produced on the
network.
Thus, the network always seeks to move the mobiles to the higher-energy states when it is necessary
to transmit or receive, and, as soon as possible, to bring them back to lower-energy states when no
new transmissions are expected.
The radio resource management (RRM) algorithms that take such decisions are implemented by the
network.
Important: The mobile alone cannot change from one state to another; it is always directed by the
network!
Important: we are talking about battery consumption and increased signaling according to the
parameter settings on the network. So far we have kept it short, and could calmly move on to the next
and final mode of our tutorial, 4G Connected mode. However, since we have this very recent subject
fresh in mind, and given the difficulty of finding specific documentation on this topic, we will make an
'extra' and talk some more about it, now in a more detailed way. If you just want to know the
modes and states in general, you can skip to the last item (4G Connected mode). However, if you
want to go a little deeper into the 3G signaling issue, just keep reading.

Battery and Signaling x Timers and Other Adjustments


Let's talk about the timers and triggers that make the mobile go from one state to another, in 3G
Connected mode.

We have seen that when the mobile is in the CELL_DCH state, it transmits/receives
large volumes of data. At a given moment, when there is nothing more to be transmitted/received, the
mobile stops transmitting.
But the network does not immediately remove the mobile from CELL_DCH state, since it may have
more data to transmit/receive soon.
The moment when the network decides to move the mobile from the CELL_DCH state to the CELL_FACH state is
very critical (remember that while the mobile is in the CELL_DCH state, it maintains a dedicated
channel, or occupies a place in the HSDPA (High Speed Downlink Packet Access) scheduling algorithm).
This inactivity time is informally named T1; it is not standardized by 3GPP, but the name is widely used by
manufacturers.
Only after the expiration of the inactivity time set for each state does the network put the mobile in
a more appropriate state.
When a mobile transmitting in the CELL_DCH state stops transmitting, a T1 timer starts
counting. After this period, the RNC sends the mobile to the CELL_FACH or CELL_PCH state.
Now, when the mobile is in the CELL_FACH state, transmitting/receiving small amounts of data (or
simply because it has been redirected from CELL_DCH), a similar timer is used by the network to
trigger sending the mobile to a lower-energy state. Informal like T1, this timer is
called T2. The lower-energy state where the network will send the mobile may be CELL_PCH or
URA_PCH, depending on the availability of these states in the network.
For networks that support CELL_PCH or URA_PCH, we still have a third timer, T3. When the mobile has been
in the CELL_PCH state for a certain time, the RNC triggers the transition to Idle.
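The T1/T2/T3 logic just described can be sketched as a simple decision function. This is a Python sketch; the timer values are hypothetical placeholders, and the T1/T2/T3 names are the informal ones used in this text, not 3GPP-standardized:

```python
# Minimal sketch of the RNC-side inactivity-timer logic described above.
# Timer values (seconds) are illustrative assumptions, not recommendations.

def next_state(state, idle_seconds, t1=10, t2=5, t3=30, pch_supported=True):
    """Return the state the RNC would move the mobile to after
    'idle_seconds' of inactivity, per the T1/T2/T3 timers."""
    if state == "CELL_DCH" and idle_seconds >= t1:
        return "CELL_FACH"
    if state == "CELL_FACH" and idle_seconds >= t2:
        return "CELL_PCH" if pch_supported else "Idle"
    if state == "CELL_PCH" and idle_seconds >= t3:
        return "Idle"
    return state  # timer not yet expired: stay where it is

print(next_state("CELL_DCH", 12))                       # CELL_FACH
print(next_state("CELL_FACH", 6, pch_supported=False))  # Idle
```

Note how a network without PCH states collapses the second step straight to Idle, which is exactly the scenario of the first graphic examples below.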
The purpose of these timers (elapsed times before the state transitions start) is easy to understand if
we try to answer the following question: in a set of mobiles, which of them is most likely to be the
first to send or receive data again?
The answer: those that used data most recently.
For this reason, the network keeps the mobile on a dedicated channel for a few seconds (T1) before
sending it to the common CELL_FACH channels - it may well request more data very soon.
This works well for some types of applications, such as a user navigating through pages in a browser.
However, this algorithm is becoming increasingly inadequate, due to the emergence and increasing
use of applications with regular update schedules, as exemplified earlier in this tutorial: social
networking and instant messengers (WhatsApp, Twitter, Facebook), stock portfolios,
Email/Calendar/Contacts/RSS sync.
This kind of update can happen for example every 2 or 3 minutes.
And what does that mean? Between updates there is enough time for the mobile to go all the way back
from CELL_DCH to Idle! So we will have to re-establish the RRC connection for each of these updates,
and again get a dedicated or shared channel. And all this, often, to transmit only 1 kB, lasting 1 second or less!
The mobile remains a few seconds occupying a high-power-consumption channel, draining battery,
wasting network resources and causing interference to other mobiles!
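To see the size of the waste, we can estimate the relative battery cost of one such background update cycle. This is a rough Python sketch, assuming T1 = T2 = 10 s, a network without PCH states, and the relative consumption figures used earlier; all values are illustrative:

```python
# Back-of-the-envelope: battery cost of a background app that sends ~1 kB
# every 180 s on a network without PCH states (T1 = T2 = 10 s assumed).
# Consumption units are relative to Idle = 1, as in the earlier comparison.

def energy_per_update(tx_time=1, t1=10, t2=10):
    dch = 100 * (tx_time + t1)   # transfer + T1 lingering in CELL_DCH
    fach = 40 * t2               # T2 lingering in CELL_FACH
    return dch + fach

def energy_per_cycle(period=180, tx_time=1, t1=10, t2=10):
    active = energy_per_update(tx_time, t1, t2)
    idle = 1 * (period - (tx_time + t1 + t2))  # rest of the cycle in Idle
    return active + idle

print(energy_per_cycle())  # 1659, versus 180 if the mobile stayed in Idle
```

Under these assumptions, a 1-second, 1 kB transfer makes the whole 3-minute cycle cost roughly nine times the energy of staying in Idle: the timers, not the transfer itself, dominate the bill.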
As we can see, we arrive at a challenging point. In fact, a dilemma.

Regarding battery consumption, it is better for the mobile to go back to Idle mode as quickly as possible,
just after it finishes transmitting - that is, to be 'connected' for the shortest possible time.
Regarding the user experience, it is better that it stays 'connected' as long as possible.
The transition time from Idle mode to CELL_DCH (RAB activation time) takes about 2 seconds.
When the transition occurs from a PCH state to CELL_FACH, the RAB activation time falls to 0.25
seconds. In this case, the network must support some of the PCH states (CELL_PCH and/or URA_PCH).
That is, we have an equation with several variables (reduce battery consumption, improve user
experience, reduce signaling and interference) that depends on several factors (whether the network has
PCH states, the values of T1 and T2, the RAB activation time, the DRX cycles).
Different types of optimization can be done in an attempt to achieve the best result for a given
network configuration.
We will try to show below, graphically, some of the possible combinations, considering the transmission
of a small data packet (~1 kB) in a very short time (~1 s), shown in red (1).

On the vertical axis we have battery consumption compared to the consumption in Idle mode. On
the horizontal axis we have the time the mobile spends in each of the states, which in turn are
identified by colors. The respective areas represent the energy used.

Putting the key combinations together, we have the chart below.

In the first example (1) we have high T1 and T2 times (= 10 seconds), in a network that does not
have PCH states, and that therefore always has a high RAB activation time (= 2 seconds).

In the following example (2) we have the same scenario, only cutting the T1 and T2 times in half (=
5 seconds). It is clear that the configuration of the T1 and T2 timers directly affects the battery
consumption perceived by the user - in this case, it is reduced, since the mobile goes back to Idle
mode much earlier. On the other hand, every time the user resumes the use of data (such as a new
click on a web page after some time spent reading the previous one), it must go through the RAB
(Radio Access Bearer) activation process again, waiting about 2 seconds.

In addition to the waiting time itself being an inconvenience for the user, we still have the
problem of the large amount of signaling involved in this process, adding even more load to the
network (in this case, to the RNC).
To try to solve this problem, we use the PCH states, as in the following example (3). Now we have a
much faster RAB (Radio Access Bearer) activation (0.25 seconds), since in the PCH state we
still maintain the control connection.

The only drawback here is that the battery consumption in the PCH state, while also low, is
still double that of Idle mode (the lowest possible consumption). In the long term, this consumption
also makes a difference in battery life.
To try to minimize this battery consumption in the PCH state, we can adjust the DRX cycle of each of
these states. In the previous example, the configuration was as recommended, with the CELL_PCH
DRX cycle half the Idle-mode DRX cycle (so the mobile wakes up twice as often). Typical values are
Idle DRX = 1280 ms and CELL_PCH DRX = 640 ms, or Idle DRX = 640 ms and CELL_PCH DRX = 320 ms.

But if we adjust the cycles to the same value, as in the following example (4), battery
consumption in the CELL_PCH state becomes almost equal to the consumption in Idle mode.

Note: We'll talk about DRX in another tutorial, but for now know that it affects the way the mobile
keeps 'listening' for paging. The shorter this cycle, the more responsive the mobile is (it wakes up
more often, catching page information sooner), but the higher the battery consumption. The longer
the DRX cycle, the lower the battery consumption, but the less responsive the mobile is to calls
initiated by the network (pagings).
If we increase the DRX cycle of CELL_PCH (to make it equal to Idle mode) and consequently
reduce consumption, we have the disadvantage of slightly decreasing the likelihood of the mobile
responding to pagings.
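The trade-off in this note can be sketched numerically. This Python sketch assumes, for illustration only, that paging-listening consumption scales with how often the receiver wakes up (~1/cycle) and that the average paging delay is half the cycle length:

```python
# Illustrative DRX trade-off: relative consumption ~ wake-up frequency,
# mean paging delay ~ half the cycle. Both are simplifying assumptions.

def drx_tradeoff(cycle_ms, reference_cycle_ms=1280):
    wakeups_ratio = reference_cycle_ms / cycle_ms  # relative consumption
    mean_paging_delay_ms = cycle_ms / 2
    return wakeups_ratio, mean_paging_delay_ms

# CELL_PCH DRX half the Idle cycle -> roughly double the consumption:
print(drx_tradeoff(640))   # (2.0, 320.0)
print(drx_tradeoff(1280))  # (1.0, 640.0)
```

This is why equalizing the CELL_PCH cycle with the Idle cycle (example 4) brings the consumption down to nearly the Idle level, at the cost of a slower reaction to pagings.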
As a last example (5), we have the participation of the terminal manufacturers. In the past,
when the signaling problem was not as common as it is today, they developed their own
mechanisms to automatically save battery.
The basic idea of all of them was the obvious one: if the mobile does not need to transmit any more
after some idle time, it must return to Idle mode. The mobile alone simply decides when to release its
connection (not the network), through the SCRI message (SIGNALLING CONNECTION RELEASE INDICATION),
which has existed since Release 99, without expecting any response from the network.
Here, once again, are the graphics with some examples of optimizations that may be done by setting
timers, using PCH states, configuring DRX cycles, etc.

Important: In the examples, we always initiate transmission in the CELL_DCH state, and then
CELL_FACH. Our aim was to illustrate the T1 and T2 settings. But in our particular example, we
considered a small volume of data in a very short time.
From what we have seen, these are ideal conditions for the CELL_FACH state. That is, in practice, this
example packet transmission would be set to happen in the CELL_FACH state, rather than
CELL_DCH.
But regardless, we have a factor common to all the examples: all mobile transitions so far are
controlled by the network; there is no 'dialogue' with the mobile. (Except in the last example, with
the proprietary inactivity implementations - and even then it is just an arbitrary mobile action,
simply deciding to return to Idle mode.)
That is, we do not have resource optimization. Yet no one knows better than the mobile exactly what
is going on: which applications are being used, and what they will probably demand from
the network.
And even better: no one is in a better position than the mobile to say that it needs nothing else - the
network may terminate its connection!
If we establish this dialogue, the network can immediately move the mobile to the power state most
convenient at the time, as configured by the operator.
This conversation is great for both sides: the mobile saves battery, and the network saves resources
(channels) and reduces interference!
Unfortunately, the importance of this dialogue was perceived a little late (when manufacturers had
already followed their own implementations to release the signaling connection, always to Idle mode). Only
in 2008/2009 did 3GPP Release 8 standardize the FD (Fast Dormancy) mechanism, which defined that
the mobile could communicate with the network using the existing SCRI message, now with the IE
(Information Element) 'Signalling Connection Release Indication Cause' present and set to 'UE
Requested PS Data session end'.

In other words, with FD, the mobile can tell the network that it needs nothing more, and the network can
immediately remove it from a high-energy-consumption channel, sending it to a more appropriate state.
FD allows the state control to bypass the inactivity timers once the mobile has finished transferring all
its data - when receiving the SCRI, the RNC can send the mobile to Idle mode.
We have just seen the main timer configuration scenarios related to state transitions, in a little more
detail (and we can still see that there is much more to be seen and discussed!).
Again, we strayed from the goal of being simple, but this understanding can be quite instructive, even for
the most experienced in the subject, and so we decided to approach it.
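The difference between the legacy SCRI and the Release 8 Fast Dormancy SCRI can be sketched as follows. In this Python sketch, the dictionary message representation and the `rnc_decision` logic are our own illustration; only the message name and the cause IE value come from the text:

```python
# Toy sketch of the Fast Dormancy (3GPP Release 8) interaction described
# above. Message format and decision logic are illustrative assumptions.

def build_scri(release_8=True):
    msg = {"message": "SIGNALLING CONNECTION RELEASE INDICATION"}
    if release_8:
        # Release 8 adds the cause IE, letting the network choose the state
        msg["cause"] = "UE Requested PS Data session end"
    return msg

def rnc_decision(scri, pch_supported=True):
    """Target state the RNC might choose on receiving the SCRI."""
    if scri.get("cause") == "UE Requested PS Data session end" and pch_supported:
        return "CELL_PCH"  # keep the signaling connection, save battery
    return "Idle"          # legacy behavior: full release

print(rnc_decision(build_scri()))       # CELL_PCH
print(rnc_decision(build_scri(False)))  # Idle
```

The key point the sketch captures: with the cause IE present, the final decision stays with the network (as configured by the operator), instead of being an arbitrary release to Idle decided by the mobile alone.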
Let's go back and finish our tutorial, talking a little about the connected mode in 4G.

4G Connected Mode
Finally, we come to the last mode of today's tutorial. Do not worry, we are almost done, and very
soon you will be able to understand the overview figure we showed at the beginning.
Just like the 3G mobile, after being turned on the LTE mobile performs a series of initial access
procedures. This includes, for example, 'Cell Search' and 'Cell Selection', receiving system information,
and the random access procedure. Again, these concepts are not explained here today, because we
just want to understand generally how the modes, states and transitions work.
Compared to 2G and 3G, LTE (4G) states and state transitions are much simpler: either the mobile is
in Idle mode, or it is in Connected mode. This also applies to LTE-A (Advanced).

A concept that should already be clear to you, from what we have seen so far, is the importance of
improving the efficiency of battery and network resource usage. And this is strongly related to
the states, and to how and when their transitions occur - especially in a global scenario of growing
'Always-on' applications with small, often unpredictable data transmissions.
You must be beginning to wonder how to apply some of these concepts in your own network, while at
the same time worrying because you cannot see an efficient solution able to meet so many different
variables.

But we have good news and great news. The first, good, news is that you are not the only one with
these doubts. 3G networks are really not the most appropriate for this scenario, as they were not built
for that purpose. In any case, they can be greatly improved with the use of features such as
'Continuous Packet Connectivity'.
And the great news is that LTE (4G) was created with this kind of problem in mind since its
conception!
An example is DRX (Discontinuous Reception) in Connected mode, which gives the mobile more options:
the ability to periodically turn off its radio. This on-off time can be set with 1 ms granularity!
We know, however, that turning off the radio can bring a negative impact on latency. To minimize
this problem, two stages of DRX were defined.

In the first stage, once a certain time elapses without any more data transfer, the mobile uses the short
DRX cycle: it can 'sleep' (turning off the radio), but only for short periods. The radio 'sleeps' and
'wakes up' more often.
While using the short DRX cycle, the mobile can move to the second stage (or return to
Continuous Reception, if there is data to be transferred). The second stage follows the same
idea: after a further time without data to transfer, the mobile uses a long DRX cycle, i.e., it will now
'sleep' (radio off) for longer periods.
On one hand this saves battery; on the other, it increases latency.
Important: Be careful not to confuse the LTE Connected-mode DRX cycle with the Idle-mode DRX. In Idle
mode, the DRX is more related to paging, and so is often called paging DRX. This Idle-mode DRX
cycle time is much longer than the Connected-mode one, reaching seconds!
In a way, we can consider these stages as 'sub-states' of LTE Connected mode.
When the LTE mobile is in Connected mode, it has an RRC connection, and its information is saved
(known) in the network (eNodeB). The mobile monitors the control channels associated with the shared
data channel, checks whether data is scheduled for it (or not), reports the CQI (Channel Quality
Indicator) after the measurements, and also performs neighbor measurements on all networks
(2G/3G/4G).
Regarding the network's knowledge of its location, as in the 3G CELL_FACH state, the network (eNodeB)
knows where the mobile is at the cell level (the cell where the mobile made its latest update).
Speaking of transitions, we know that LTE has only two basic states: Idle and Connected. So the
LTE mobile goes from Idle mode to Connected, or from Connected to Idle.
To enter Connected mode, the mobile performs the connection setup: RRC setup,
configuration/reconfiguration and security. It then starts a new connection or maintains existing ones.

When the mobile does not request uplink or downlink resources from the network (eNodeB), and
likewise the network (eNodeB) does not receive signaling/traffic intended for the mobile, the mobile
resets/releases all radio resources (including signaling) and tells the network that it is leaving this
state, and why. In other words, when the connection ends, the mobile is released.
Regarding battery, when the mobile is in Connected mode it has a variable consumption. If it is
actively transferring data, we say it is in the Continuous Reception 'sub-state'.
After a certain time (t1) with no more data to be transferred, the mobile switches to the Short DRX
'sub-state' - and waits for more data, obeying a second configured time (t2).
If more data comes, it then returns to the Continuous Reception 'sub-state'; otherwise it goes to the
Long DRX 'sub-state'.
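The sub-state transitions just described can be sketched as a small function. This is a Python sketch; the t1/t2 values are hypothetical placeholders (in LTE they can be configured with millisecond granularity, as mentioned above):

```python
# Minimal sketch of the LTE Connected-mode DRX 'sub-states' described above:
# Continuous Reception -> (t1 idle) -> Short DRX -> (t2 idle) -> Long DRX,
# returning to Continuous Reception whenever data arrives.

def next_substate(substate, idle_ms, data_arrived, t1_ms=100, t2_ms=400):
    """Return the Connected-mode DRX sub-state after 'idle_ms' of
    inactivity, or on data arrival. t1/t2 are illustrative values."""
    if data_arrived:
        return "Continuous Reception"  # any data resets to full activity
    if substate == "Continuous Reception" and idle_ms >= t1_ms:
        return "Short DRX"
    if substate == "Short DRX" and idle_ms >= t2_ms:
        return "Long DRX"
    return substate  # timers not yet expired: stay in the current sub-state

print(next_substate("Continuous Reception", 150, False))  # Short DRX
print(next_substate("Short DRX", 500, False))             # Long DRX
```

Note the structural similarity with the 3G T1/T2 logic seen earlier; the difference is that here the mobile stays in Connected mode throughout, only sleeping for longer and longer periods.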

Unlike 3G, the energy drain in 4G LTE is variable, depending on the throughput. Lower rates
require less energy, but as the rates increase, power consumption also increases.
In a rough comparison with 3G, the LTE radio consumes more power because its communication
states (Short DRX and Long DRX) consume the same high energy, while 3G has CELL_FACH,
which consumes less than half the baseline CELL_DCH energy. But although the consumption is a little
higher, we cannot forget that LTE is much faster than 3G.
All these comparisons and implemented algorithms can be seen, indirectly, in 2G/3G/4G mode and
state transition diagrams, like the one we have below.

Of course, this diagram is not fully complete, but we tried to group at least the key information
necessary for the explanations.
We hope that today's goal has been achieved, and that this summary helps you better understand
the operation of the mobile network from the point of view of the mobile, particularly what this
represents in terms of battery consumption, increased signaling, latency, interference and other
factors that directly affect the quality of the network and hence the user experience.

Conclusion
All the concepts of modes, connected states and transitions of mobile networks seen in this tutorial are
much broader (and more complex) than what was presented. We tried to present them in a simple way,
but the amount of detail (and of auxiliary concepts that need to be well understood) makes this a very
difficult task, we must recognize.
With the understanding of the concepts presented, it is easy to see the large space we have for
techniques and solutions that can improve the efficiency of communication and speed up traffic, while
saving the mobile battery and network resources.
After all, this is what we always seek: to help you understand some complex scenarios, making
them easier to grasp, and thus giving you the means to continue progressing in your studies
and work.
We count on you to remain a loyal reader, and especially to be part of this project with us. If you
liked this tutorial (or others from our website), share it with your colleagues and friends.
Your recognition and participation are what motivate us to keep following the evolution of networks,
and to always bring new tutorials, news and innovations.
Until the next tutorial.

What is CSFB and SRVCC in LTE?

Regardless of the pace of LTE network deployment around the world (faster in some areas, slower in
others), the number of users with 4G devices is growing intensively.
Thanks to factors such as lower costs - due to gains in production scale, and also to the encouragement
of migration to 4G plans offered by operators who already have an available network - more and more
people have access to the new services and benefits that this technology offers.
However, as much as current data services are improved, and as much as progress in the area leads to
the adoption of new services, a basic necessity should still continue to exist, at least for a while: voice
calls!

While making a voice call may seem simple, it largely depends on the scenario where the user is, and
on the alternatives available for its completion. So it is necessary to understand well the
possibilities and the most important concepts of these key scenarios.
In the first generation of cellular networks, communication through voice calls was the main goal,
and it was based on a circuit-switched, or 'channel', topology (CS, Circuit Switched).
Over time, the need for other services (data!) emerged. Voice calls came to coexist with
these new services. As demand increased, the new services were supported by a new domain, the
IP-based packet-switched domain (PS, Packet Switched). The following figure shows how these two
domains work.

And in the LTE (4G) system we had another great change: the CS domain was extinguished! LTE
networks are based exclusively on the PS domain, and voice services must be carried out in other
ways (as we shall see).
But as we mentioned, regardless of network topologies, voice services are still needed. (Of course,
they have decreased slightly compared to a few years ago, but they are still significant; their demand
continues.)
With the continued growth of LTE networks, let's try to understand a little more about the concepts,
alternatives and solutions for a user to make a voice call on an LTE network.
Note: All telecomHall articles are originally written in Portuguese. Then we translate them to English
and Spanish. As our time is short, you may find some typos (sometimes we just use the automatic
translator, with only a final 'quick' review). We apologize, and count on your understanding of our
effort. If you want to contribute by translating/correcting these languages, or even creating and
publishing your own tutorials, please contact us.

How, when and where?


First of all, we need to understand how, when and where voice calls can occur.
In 2G legacy networks, voice calls are made practically only over circuits dedicated to each call (CS
domain).
In 3G legacy networks, voice services can use the CS domain, but can also be made through OTT
(Over The Top) solutions: applications that encapsulate the voice and transport it via the IP domain (PS),
but which lack the QoS guarantees needed to ensure good communication - they use Non-GBR type
services (no guaranteed bit rate). Example: Skype. Note: It is very unusual, but it is also possible to
make OTT voice calls on 2G networks. In fact, there may be OTT solutions in any technology - they can
be used in legacy networks, and also in others such as WiFi, which is already commonly used for
VoIP.

And in LTE networks, voice calls can be fully IP-based, can use OTT solutions via 4G, or can be
transferred to the legacy 2G/3G networks.

As we begin to see, there are many alternatives. As usual, we will look at each one.
Note: In this tutorial, we will always refer to voice calls (originating and/or terminating); however,
SMS services are also included.

Alternative to voice calls in a generic 2G-3G-4G Network Topology


And the best way to understand the alternatives, or possibilities, for making voice calls in an LTE (4G)
network is to start from a very simplified 2G-3G-4G network topology - considering only the main
elements involved.
As we can see in the following figure, the LTE core (EPC) has no direct 'link' to the CS network - as we
have seen, it is designed to take care of purely IP (PS) calls. It has no Media Gateway directly
connected, so no CS call is supported by the MME.

In other words, if the user or UE (User Equipment) is on an LTE network as shown in the topology
above, we cannot make a voice call.
Note: As mentioned before, and according to the topology above, the only way to have voice services
in LTE would be through OTT services such as Skype. However, this solution is not discussed today.
If we understand this, it is also easy to realize that in order for us to have voice services in LTE,
changes need to be made. There are some alternatives, and below we have the main ones:

VoLGA (Voice over LTE via Generic Access): use the legacy 2G/3G as a generic access, 'packaging' voice
services and delivering them via LTE.

CSFB (CS Fallback): whenever the UE needs to place a call, make it revert (fall back) to the legacy
networks.

VoLTE (Voice over LTE): carry voice over LTE itself. In this case, the voice is pure IP - VoIP over LTE.

SRVCC (Single Radio Voice Call Continuity): ensure that purely LTE (VoLTE) calls are transferred (via
handover) to the legacy networks in a transparent manner.

Note: notice that SRVCC is an option only when the voice call has been established in LTE. That is, it
is a conditional alternative - it assumes the VoLTE option has been used.
Even without knowing the presented options very well, it is easy to imagine that the 'best' solution
would be to carry voice over the LTE network itself. But like everything in life, it also has another side:
the pros and cons.
To deliver voice services in an LTE network, it is necessary to have an infrastructure that supports it. In
other words, an IMS (IP Multimedia Subsystem, or IP Multimedia Core Network Subsystem) needs to
exist. If an IMS is available, then voice over LTE can be provided, as long as a minimum set of IMS
functionality and entities is also present.

Note: IMS is much more complete, and has many other purposes beyond voice. Voice is just
another 'application' of IMS, as we'll see soon.
This minimum set of IMS features and entities (called VoLTE, or One Voice) was standardized to
enable LTE operators to provide voice services without having to make very radical changes in the
network (without having to invest in a complete IMS, with all entities and functionality).
In any case, it requires investment.
And that is why the first two alternatives become attractive: they are based on the legacy CS network
infrastructure. But if on the one hand these alternatives require less investment in the LTE network, on
the other they depend on the existing 2G/3G networks.
Let's talk a little more about each of these possibilities, always trying to maintain the overview, in
the simplest possible way. Remember that our goal is to learn the concept, in order to enable a
deeper dive into the subject later, if desired, more easily.

VoLGA
The first implementation alternative that emerged was VoLGA (Voice over LTE via Generic Access),
in other words: try to use what is already available, with minimal changes.
To use the infrastructure of the legacy 2G/3G networks, VoLGA introduces a new network entity, the VNC
(VoLGA Network Controller), which basically functions as a 2G BSC communicating with a GSM MSC
(Mobile Switching Center), and as a 3G RNC communicating with a UMTS MSC.

When we have a new call (be it originated or terminated), it is managed by the MSC of the legacy
network. The VNC is what mediates the voice signal and its related messages between the MSC and the
LTE network.
Although it made it possible to deliver voice and SMS services to LTE users, VoLGA was
unsuccessful. This is because, as we have seen, dedicated investments were needed for this purpose
alone. At the same time, global efforts toward VoLTE increased (e.g., investments in IMS), and so
this alternative eventually fell into disuse.

CSFB
But while on the one hand operators keep seeking a complete LTE infrastructure (with full IMS) to serve
multimedia services and also purely LTE voice, this is not a topology available in the short or
even medium term.
Until that reality arrives, we must use the legacy network whenever there is a need to deliver voice and
SMS to LTE users.
And the most common alternative for this is CSFB (CS Fallback), an interim solution until we have
full support for voice over LTE.
In the CSFB scheme, whenever there is a demand for a new voice call, the LTE user 'falls back' to a
legacy CS network, assuming that it provides overlapping coverage. In other words, with CSFB, a
voice call is never active in LTE, but always in the legacy networks.
At the end of the call in the legacy network, the UE can re-register in the LTE network.
It goes something like this: the UE is registered (also) in the legacy network. When a call arrives, the
legacy network tells the LTE network: 'I have a call for the UE; can you ask it to come here and take
the call?'
For CSFB to be possible, users must have dual-mode devices, i.e., able to operate both in the LTE
network and in the legacy network.
To support CSFB, a new interface is introduced: the SGs, connecting the MME to the legacy MSC.

As CSFB is currently the option most widely used by operators, let's see some basic
scenarios of it.

CSFB - Registration and Location


When the CSFB UE is turned on, it registers itself in the two networks: LTE and legacy network (CS).
And to allow quick transfer to the legacy network (either 2G or 3G) when necessary, the LTE network
needs to know the location of the UE.
For this, the MME, which tracks the location of the UE in the LTE network, continuously provides
location information to the legacy MSC, using the new SGs interface.
The SGs message set thus supports mobility management, paging and SMS.

CSFB - Originated Call


We will continue, assuming that the UE is initially covered by the LTE network and that there is an
active IP connection.
When the UE decides to originate a voice call, it sends an SRM (Service Request Message) to the MME
(more specifically, the ESR - Extended Service Request).
The MME checks whether the UE is CSFB capable, and notifies the eNodeB to transfer the UE to the
legacy network.

Before performing the UE transfer, the eNodeB can ask it to make RF measurements on the neighboring
2G/3G networks. The eNodeB then decides the best network for the UE and performs the transfer.
Once the UE camps on the 2G/3G network, it starts the call procedure as usual - the UE starts the call
control procedures in the legacy network.
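The decision logic described above can be sketched in a few lines (a purely illustrative sketch: the field names and messages below are our own assumptions, not 3GPP-defined structures):

```python
# Illustrative sketch of the MME decision on an Extended Service
# Request (ESR). Field names and return strings are assumptions.
def handle_extended_service_request(ue):
    if not ue.get("csfb_capable"):
        # A UE that cannot fall back has its CS request rejected.
        return "reject: UE is not CSFB capable"
    # The eNodeB may first ask the UE for 2G/3G measurements,
    # then pick the best legacy network for the fallback.
    target = "3G" if ue.get("sees_3g") else "2G"
    return f"notify eNodeB: move UE to {target} for the CS call"

print(handle_extended_service_request({"csfb_capable": True, "sees_3g": True}))
```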

CSFB Call + Data Connection in LTE


And what happens if I have an active data connection in the LTE IP network, and decide to make a
voice call?
There are two options:

The data are also transferred to the legacy network, or

The data are temporarily suspended, until I return to the LTE network.

Although the first option seems the best, we must take into account that when the IP data
transmission is also transferred, it may operate at much lower speeds (legacy systems). In addition,
the legacy network may deny the IP session due to lack of resources or for not being able to process it.
The S3 interface is used to carry out the PS session handover to 3G (in this case, DTM - Dual
Transfer Mode - must exist, but these details are beyond our theme today).
There is no 4G data handover to 2G - in this case, the data is suspended.
The 4G eRABs are released when the UE performs the CSFB.
An important point is that S3 is a 'new' interface between the MME and the SGSN, running GTPCv2. To
support it, the SGSN needs to be upgraded (most carriers do not want to do this without a strong
justification).
The Gn interface, on the other hand, runs GTPCv1, which is the native GTP version for 3G networks. In
this case only the MME needs to be updated, and as it is a relatively new node, this is probably easier.
Not to mention that newer SGSNs may have native support for S3.

CSFB - Terminated Call


Finally, we have the case of a call terminated to an LTE user.
The call request first arrives at the MSC where the UE was previously registered.
When the MSC receives the call request, it sends paging messages to the related MME via the SGs
interface.
This message is forwarded to the UE, which is still connected to the LTE network.
If the user accepts the call, the UE sends an SRM (Service Request Message) to the MME.
The MME then notifies the eNodeB to transfer the UE to the legacy network, and the eNodeB
decides the best network for the UE to make the call.

CSFB What happens after the end of the CS call?


We have seen that the 4G eRABs are released when the UE performs the CSFB. But what happens
when the UE ends the CS call?
There is no specific rule about what should follow next (whether or not the UE should return to LTE
as soon as the CS call ends).
Anyway, the main possibilities are:

The upper layers force 'reselection' to LTE once the UE enters idle mode in the legacy network.

The operator sends LTE 'redirection' information in the RRC connection release message of the legacy 3G network after
the call is finished. This again results in reselection to LTE.

The lower layers (AS - Access Stratum, in this case URRC or GRR) reselect to LTE if the reselection criterion is
satisfied. In most cases, operators set their parameters so that reselection to LTE happens if a good LTE
coverage area overlaps the legacy network.

VoLTE
Everything we have seen so far is based on making the voice call in the legacy network. But as we
have seen, these are 'temporary' solutions until the 'final' solution - VoLTE - is available.
The final LTE voice solution (Voice over IP, or more specifically VoLTE) uses the IMS backbone. An
example of a network topology supporting VoLTE is shown in the following figure.

To make voice calls, LTE networks need to have an IMS. When the first LTE networks appeared, they
had no IMS, and without IMS it was not possible to make any calls to the PSTN or any CS network.
We have spoken of the IMS before, but let's remember.

IMS
IMS is a backbone (network) at the application level, which works on top of wireless networks
other than just LTE (such as 3G, 2G, WiFi and others).

Its concept is quite broad, and to understand it with all its entities, possibilities, interfaces and
protocols is an extremely difficult task, even for the most experienced in the subject.
The IMS is not new: it already existed before LTE (as did other entities, such as the EPC's PCRF,
which is also not new!).
Its complete specification consists of thousands and thousands of 3GPP standards. But let's try to
understand it in a simpler way than that found there.
As its name suggests (IP Multimedia Subsystem), IMS offers several multimedia IP services, including
VoIP (Voice over IP). In IMS, voice is just 'another' service!
IMS brings together voice features such as authentication, service authorization, call control, routing,
interoperability with the PSTN, billing, additional services and VAS. None of these exist in the EPC: this is
the reason why a pure EPC without IMS cannot process a voice call.
In other words, for VoLTE, access is provided by the SAE (eUTRAN + EPC), while the voice service lies in the
IMS.
An analogy we can make is to consider the IMS as being a car, and LTE voice as our shuttle service (to
go from one place to another).

We can buy a very basic car - Basic 1.0 engine, wheels, steering wheel and other minimum parts: yes, we can
go from one place to another.

Or we can buy a 'connected' car - ultra modern, powerful, tetra-fuel, with all the safety features, ABS, air bags,
connected to the Internet, etc.: we also go from one place to another ... but we can do several other things
as well!

That's more or less what happens with the IMS. It is used in conjunction with the LTE network to
support voice: either a full IMS implementation or the minimum IMS implementation suggested for
Voice over LTE.
But the telecommunications industry would rather not invest in a full IMS, or at least did not have
sufficient reason to do so immediately. And for the adoption of a simpler IMS voice solution, the
VoLTE initiative appeared, specifying a minimum set of features and selecting a simple choice when
multiple options exist for a certain feature.

However, not all of these features are required for the delivery of basic voice services by the LTE network.
So let's illustrate with an (extremely simple) diagram the implementation of voice in IMS (VoLTE).

Let's assume that we will make a VoLTE call with any CS network, for example the PSTN (Public
Switched Telephone Network).

And consider in the IMS only two simple elements, one for the control plane (with signaling) and one for the
user plane (with voice).

And the entry being the SAE, or LTE network.

In IMS, the control element would be a SIP server (soon we will talk about SIP - for now just understand that
when a call request reaches this server, it sets up the call); and the user element would be a Media
Gateway.

In comparison with the legacy networks, the SIP server is equivalent to the MSC in the mobile
network topology, and the media gateway is equivalent to a typical Media Gateway in any network
topology, common in virtually any voice network to handle calls.
The above concept is valid, but in practice the IMS consists of many more entities, as seen below.
Note: Not all possible/existing entities and interfaces are shown in the figure.

Let's (quickly) see a little about these key elements.


Note: Do not worry or try to understand everything now about these elements. Remember that our
goal here today is not that. Anyway, it's worth a read.
The MGCF (Media Gateway Controller Function) is the control element that communicates with other
PSTN networks. It is significant because it has the inter-networking function: it can speak SIP, ISUP
and other signaling protocols.
The IM-MGW (IM Media Gateway) is the element that takes care of voice functions, for example
making the protocol translation required to support the call - more specifically, between the Real-time
Transport Protocol (RTP) and the analog or basic PCM format of the CS network, and vice versa.
The HSS (Home Subscriber Server) is an element that also exists in the LTE EPC (although it appeared
first in IMS), and its functions are similar.
The MRF (Media Resource Function) provides many services related to voice, such as conferences,
announcements, voice recognition and so on. It is divided into two parts: the MRFP (Media
Resource Function Processor), which handles the media streams and basically functions as an RTP
'mixer', and the MRFC (Media Resource Function Controller), which controls it.
An important concept worth highlighting here is the Proxy, used for example to apply filters,
identify where users come from, handle roaming cases, etc. Remember that we are talking about
an IP network. Instead of users talking directly to the SIP server, they use the proxy.
The CSCF (Call Session Control Function) has some variations.

The P-CSCF (Proxy CSCF), among other tasks, provides QoS information related to the LTE network. It acts as an
AF for the voice service, providing the 'policy' and 'charging' control information to the PCRF.

The I-CSCF (Interrogating CSCF) is an interrogator.

And the S-CSCF (Serving CSCF): the serving CSCF acts as the central node.

The BGCF (Border Gateway Control Function) functions as a routing table (or 'table B') and acts to help
the S-CSCF. It basically makes routing decisions.

As we said, in IMS voice is a 'service' - the IMS is a service 'enabler'. IMS services are
provided through ASs (Application Servers).
One such application is voice. And there are also video services, conferencing, etc.
In fact, sometimes the ASs are not considered part of the IMS (when we understand the IMS as a
core).
And in IMS, the standard AS for voice is the MMTel (Multimedia Telephony Service), sometimes called
MTAS (Multimedia Telephony Application Server).
The SBC (Session Border Controller) is an element at the edges of the IMS that controls signaling and
often the media streams involved in calls.
The S-CSCF is responsible for routing the call depending on where the other user (the other party)
is:

An SBG (Session Border Gateway) if the other party is in the IP domain;

An MGC/MGW if the other party is in the CS domain (PSTN/PLMN).

IBCF and TrGW are not shown in our figure, but they are respectively the control-plane and user-plane
elements toward other IMS networks, and other SIP networks in general. They are similar to the
MGCF/IM-MGW: the requirements for reaching each type of network are different, so there are
separate parts performing the same functions, but toward different networks.

SIP
To support telephone signaling between the LTE network and telephone networks, the IMS uses SIP
(Session Initiation Protocol). SIP is a standard protocol for establishing voice calls over IP networks.

It is an open standard, and uses the 'request-response' model to allow communication sessions.
There is a set of standard commands that can be used to initiate, manage and terminate calls
between two SIP devices.
The SIP has been adopted by IMS standardization as the protocol to allow signaling between
telephone networks and VoIP networks.
SIP is text-based and was developed - in the 90s - to be simple and efficient, just like the
HTTP protocol (in fact, it was inspired by HTTP and other protocols such as SMTP).
A good analogy is to compare the SIP with HTTP.

You probably understand well the principle of HTTP interaction, which allows audio, text,
video and other elements on a web page. With SIP it is pretty much the same thing: it allows the
establishment, management and ending of calls (or sessions) between multiple IP users without knowing the
content of the call. A session can be a simple telephone call between two users, or a multi-user
multimedia conference.
Both SIP and HTTP put the control of the application at the end user, regardless of the transport
protocol (SIP is a control protocol in the application layer), so there is no need for switching
centers/servers.
SIP, however, is not a resource reservation protocol, and has nothing to do with QoS.
A short break: our tutorial today is already quite extensive, but we'll stay a little longer on this issue
because these concepts are very important, and you'll be hearing a lot about them.
To try to understand it better, let's see a simplified example for a voice call establishment process
using IMS platform and SIP signaling.

Initially, the UE sends a SIP message of type 'Invite', containing the description of one or more media for the
voice session (the initial SDP - Session Description Protocol - Offer).

Then the P-CSCF forwards this same message to the S-CSCF (which has been identified during the registration
process).

All going well, the terminating network will send an 'offer response' message to the S-CSCF, which
forwards it to the P-CSCF, authorizing the allocation of the resources necessary for this session.

Finally, the P-CSCF forwards the 'offer response' message back to the UE, which confirms its receipt,
and the resource reservation is started.

This is a very simplified example of how the origination of a voice service by the UE can happen,
via IMS.
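Since SIP is text-based, the messages are easy to visualize. The sketch below builds a minimal, illustrative INVITE; several mandatory headers (such as Via, Max-Forwards and Contact) are omitted, and all header values are made up for the example:

```python
def build_invite(caller, callee, sdp_body):
    # Minimal illustrative SIP INVITE. Real messages carry more
    # mandatory headers (Via, Max-Forwards, Contact, ...).
    headers = [
        f"INVITE sip:{callee} SIP/2.0",
        f"From: <sip:{caller}>;tag=12345",  # made-up tag
        f"To: <sip:{callee}>",
        "Call-ID: example-call-id-1",       # made-up Call-ID
        "CSeq: 1 INVITE",
        "Content-Type: application/sdp",
        f"Content-Length: {len(sdp_body)}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + sdp_body

# The SDP body is the 'offer' describing the media for the session.
msg = build_invite("alice@lte.example", "bob@pstn.example", "v=0")
print(msg.splitlines()[0])  # INVITE sip:bob@pstn.example SIP/2.0
```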

Several other diagrams exist, with far more complex scenarios, but the basic idea can be seen above,
and extended if necessary.
Let's complete today's tutorial by looking at the case where a call initially established on IMS has to
be 'transferred'.

SRVCC
Finally we come to our last alternative listed at the beginning of this tutorial: SRVCC (Single Radio
Voice Call Continuity).
SRVCC, however, is not an alternative for voice delivery, but rather a handover process for a voice call
previously started in LTE (whether One Voice - VoLTE - or full IMS voice).
It is a call transfer (handover) method, performed in a simplified and reliable way, when an LTE user
has an active voice session in the IMS and moves to an area without LTE coverage, but with legacy
2G/3G coverage.
The main advantage is that the call will not drop - it will only be transferred to the CS domain of the
legacy networks.
If in the above case the UE moves out of the LTE coverage area with an active call (but into legacy
2G/3G coverage), we must maintain the continuity of this active voice call. In this case, SRVCC is
used: the procedure where the context of an active voice call in the IMS is transferred to the legacy CS
network (e.g. IMS node context transferred to the MSC).

The challenge with SRVCC is to perform the handover while the UE is connected to only a single radio
at any given moment.
There are two versions of SRVCC:

SRVCC handover to GSM or UMTS, defined by 3GPP;

SRVCC Handover to 1xRTT networks defined by the 3GPP2.

For SRVCC to be possible, the UE, the LTE network and also the legacy network must support SRVCC. For this, a
new special interface, Sv, is introduced between the MME and the MSC, running the GTPv2 protocol.

To support SRVCC, the IMS network should also include an application server, called the SCC AS (Service
Centralization and Continuity Application Server).
This application server manages the signaling required for the process.
Let's see a simplified example of some SRVCC procedures from LTE to GSM.

When a UE that supports VoLTE is in an LTE coverage area, it starts voice sessions via the IMS network, which
hosts the session and provides applications and session control based on SIP.

When the UE moves from an LTE coverage area to a CS 2G/3G coverage area with an active IMS session, the
IMS switches the session to the CS domain, keeping both parties aware of the session handover.

Example of SRVCC Handover


Realizing that its LTE signal level begins to decrease, the UE with an active IMS voice session signals this
to the eNodeB, initiating the SRVCC handover.
The eNodeB then identifies the best available network to receive the service, and sends the handover
request (specifying that it is of the SRVCC type) to the MME.
The new voice call request is then sent to the IMS, using an STN-SR (Session Transfer Number for
SRVCC) - a unique number associated with each UE, and stored in the HSS.
This unique number is provided to the MME when the UE first comes into contact with the
network.

Upon receiving the STN-SR number, the SCC AS understands that the corresponding call should be
transferred to a different network, and starts the redirection process to the transfer point
(handover) in the legacy network.
After resource preparation is completed, the MME confirms the handover request previously sent
by the eNodeB.
The eNodeB then transmits this acknowledgment to the UE, while still providing the required
information about the target network.
In the final stages, the UE is detected in the legacy network, and the call is re-established on it.
Thus we have the completion of the SRVCC handover.
Voice packets, and also non-voice packets, can be transferred using this method, but the data
rates will be limited by the capabilities of the legacy networks.
Since SRVCC is an inter-RAT handover procedure from the IMS-based LTE network to the legacy 2G/3G CS
network, it is much more complex than handovers between the legacy 3G/2G networks. The
question is how to maintain a handover performance comparable to, or better than, what is acceptable.
In order to improve the performance of the SRVCC handover, a WI (Work Item) called eSRVCC
(SRVCC enhancement) was established in 3GPP SA2 in Release 10. The solution anchors the session in
the IMS, and introduces the new entities ATCF (Access Transfer Control Function) and ATGW
(Access Transfer Gateway).
Again, going deeper into this subject is beyond our goal today.
Finally, we will enumerate some of the main advantages and disadvantages (or pros and cons) of each
alternative.

Advantages and disadvantages of each alternative


Call setup time: when operators use CSFB, one of the biggest problems faced (and one of the major
disadvantages of CSFB) is the increase in call setup time due to retuning procedures to the 2G/3G radios.
An efficient CSFB solution requires that the TAC -> LAC mapping be such that fallback to an external
MSC/LAC is avoided, since this would further increase the call setup time.
Call quality: call quality in LTE is better when compared with equivalent third-party (OTT)
applications. This is due to the specific QoS allocated to the IMS call, which may not be present for
common data applications.
Resource limitations for VoLTE: AMR voice over LTE requires far fewer resources and a lower data rate than
GSM, so we can have many more users in the same bandwidth (spectral efficiency).
Investment vs. current network: if everything is 'working well', what would be the reason for
investment, since such investments surely generate resistance from the commercial and business areas?
The comparison that must be made is: investment versus (all) the benefits of IMS/MGW/BGCF.
Future: either way, this whole discussion will only gain significance from now on. Currently we still have
extensive legacy networks, capable of supporting these voice calls.

In this case, there is no problem in continuing to use this available infrastructure. Resistance will only
decrease when that capacity also decreases. But in an LTE network, if IMS is supported, we can make
a VoIP call. So why would we need to make a CS voice call?
CSFB x SRVCC:

It is not necessary to implement both solutions (CSFB and SRVCC) at the same time, if the network has a wide
LTE coverage and a complete IMS backbone.

If we implement CSFB, it means we will not set up the call using the existing IMS core, which
could handle that call in LTE.

With respect to SRVCC: assume the IMS backbone is available. In this case, if the registration in the
IMS is successful, the user does not need to do CSFB - a voice call can simply be initiated in the LTE network
using the IMS.

CSFB is a service handover procedure while SRVCC is a coverage handover procedure.

Case Studies and Analogies


With all that we have seen today, let's imagine some scenarios.
First, imagine that you are in an LTE network that does not have IMS. Then the only way to make a
voice call, whether originated or terminated, is through the legacy 2G/3G network.
You need to be redirected/released from LTE to the legacy 2G/3G network to make a voice call - like a
'reselection' from the LTE cell to 2G/3G. Once in the legacy network, you can make the call normally, as
you're already used to.
And so, you just saw the CSFB in practice!
Now suppose you are watching a video stream on the 4G network, and receive a voice call. In this case,
you need to go to the 3G network (in idle mode), and get the resources to make that call in 3G.
After you end your voice call, you keep watching the video stream, but now on the 3G network (the
handover from 3G back to 4G is not yet defined).
You just saw the CSFB with an active data call!
Now let's imagine that you are in another LTE network, this time with IMS. In this case, you can make
a voice call using IP packets.
We have just seen a VoLTE call!
Further, imagine that you are in one of these packet voice calls in 4G, and suppose you reach the
edge of your 4G cell coverage. Then the only option to keep your call is to hand it over to 3G
(assuming this is the existing coverage). Your call will then continue on the 3G network, but now as
a CS voice call. SRVCC!
If SRVCC is not supported, the call is dropped as soon as the UE leaves the LTE coverage area.
If SRVCC is supported, a set of messages is exchanged, and the voice call is transferred
(handed over) from the LTE IMS to the CS domain of the 2G/3G network.
And so, we have just seen an example of SRVCC handover!

And that's all for today. We hope this tutorial has been useful for you who are somehow
interested in voice in LTE networks.

Conclusion
We saw in this tutorial, in a very general way, the main ways of delivering voice calls (and SMS) in
LTE networks.
The options or alternatives depend on several factors, such as available network topology and the
operator's strategy.
Depending on the situation, the call can be originated in LTE via data applications (OTT VoIP),
originated purely on LTE IMS (VoLTE), sent to be performed on other networks through mechanisms
developed for this purpose (CSFB), or transferred via handover - if it is an active VoLTE call - to a legacy
network (SRVCC).
So, for a user who is in an LTE coverage area, a number of considerations should be checked, such as the
type of device used (whether it supports CSFB), whether the LTE network has an IMS that allows
originating calls, whether the cells support SRVCC, etc.
Based on the concepts seen here today, we hope you are in a position to fully understand what
happens when a user performs a voice call from an LTE network.

What is CP (Cyclic Prefix) in LTE?


Continuing the study of small (but important) LTE concepts, let's talk today about CP (Cyclic Prefix).

Goal
Our goal today is simple and straightforward: to understand what the Cyclic Prefix (CP) is and how it
is used in LTE systems.

What is CP?
From the CP name itself (Cyclic Prefix), it is intuitive to think of it as a prefix: information that is
periodically repeated - it is cyclic.

But we need a little more information to know exactly what we're talking about.

As we are talking about LTE (although the concept applies to any technology that uses symbols to
convey information), we will start from LTE symbols.

Our tutorial on ISI (InterSymbol Interference) already gives us a very strong clue about the subject:
when we transfer symbols (from a transmitter to a receiver) we are subject to the appearance of
such interference.
If you understand well what ISI is, and now know that the CP has to do with symbol transfer,
you can probably already guess the goal of the CP: to work as a guard interval between LTE
symbols.
We have learned that we greatly minimize ISI when we make the symbol larger (the delay spread
becomes relatively small compared to the larger symbol duration). But however much we increase
the symbol size, the effects of ISI would still be present.

To permanently eliminate this problem, the solution is to find a way in which the 'lost' part of the
symbol can be 'recovered'.
One way to do this is by copying or duplicating the final part of the symbol, and inserting it at the
beginning. Of course, this increases the total 'size' of the symbol, but the gains outweigh the cost.

Considering the same symbols we used to demonstrate the ISI problem, we can see that with the CP this
problem no longer exists.

What the CP does is to copy a small part of the end of each symbol (hence the name cyclic) and insert
it at the beginning (hence the name prefix).
Thus, the receiver can identify the boundaries of each symbol and correctly correlate the information,
thereby eliminating the interference problem.

We understand then why it is a 'prefix', and why it is 'cyclic'. Now we need to understand another
important point.
This 'guard period' at the start of each symbol is a simple but very efficient method against the problems of
multipath reception.

The receiver already "knows" the last part of the symbol at the time it receives the first component of
the signal, from the shortest multipath path.
In this case, it can correlate this with the information from the other multipath components, making
the corresponding correlations and recovering the complete information.
In addition, the CP also helps in making an initial estimate of time and frequency synchronization,
using the same reasoning: correlation of known information arriving over time.
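This correlation idea can be sketched with a toy example (a simplification: real receivers correlate complex baseband samples, here plain integers stand in for them):

```python
def find_symbol_start(rx, sym_len, cp_len):
    # Slide over the received samples and correlate each candidate
    # prefix with the candidate symbol tail; since the CP is a copy
    # of the tail, the true symbol start gives the highest score.
    best_start, best_score = 0, float("-inf")
    for start in range(len(rx) - (sym_len + cp_len) + 1):
        prefix = rx[start:start + cp_len]
        tail = rx[start + sym_len:start + sym_len + cp_len]
        score = sum(p * t for p, t in zip(prefix, tail))
        if score > best_score:
            best_start, best_score = start, score
    return best_start

symbol = [3, 1, 4, 1, 5, 9, 2, 6]
rx = [0, 0] + symbol[-2:] + symbol + [0, 0]  # quiet + CP + symbol + quiet
print(find_symbol_start(rx, sym_len=8, cp_len=2))  # 2: where the CP begins
```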

And what can we conclude about the CP? The longer the CP, the more 'useful' it is for eliminating
ISI, but also the longer the total symbol time - it takes longer to transfer the 'same' information.
Furthermore, depending on the propagation conditions, the part of the symbol that would be 'lost'
may be different.
For example, in an urban environment, the multipath components come from relatively small
distances - and the CP can be smaller.
And there are also locations, such as rural areas, where components can come from greater distances,
up to 4 km.

And what do you do to 'adjust' each case?


The solution found was to create different CP sizes: a shorter one, to be applied in areas with a higher
probability of short-distance multipath, and a complementary longer one, to be used in locations with
a probability of longer-distance multipath.
We then have two sizes of CP defined by 3GPP:

Normal CP: with a duration of 4.7 microseconds. To be used in the first case described above. Equivalent to 7
symbols per slot. Note: with Normal CP the first symbol is a special case: its CP lasts 160 sample periods (5.2
microseconds), while the others last 144 (4.7 microseconds). We shall see in more detail why this is so in a
future tutorial. For now, just know that with the Normal CP setting we have 7 symbols per slot, with a CP of
about 4.7 microseconds.

Extended CP: with a duration of 16.67 microseconds. Used in the second case. Equivalent to 6 symbols per
slot.

Note: in general, the CP in OFDM systems ranges from 1/4 to 1/32 of the symbol period.
As we saw: since we have two possible CP sizes, we can have different CP values within the same
network!
Considering a PRB with 7 symbols (we will not talk about PRB today, but do not worry - it is the
subject of our next tutorial), we have the correspondence between CP and propagation times for Normal CP
(up to 1.4 km).
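These durations and distances follow directly from the LTE numerology. A quick sanity check, assuming the standard 30.72 MHz sample rate (2048-point FFT with 15 kHz subcarrier spacing):

```python
FS = 30.72e6   # sample rate in Hz (assumed 2048-FFT numerology)
TS = 1 / FS    # basic time unit, about 32.55 nanoseconds

useful   = 2048 * TS * 1e6  # useful symbol time: ~66.67 us
cp_first = 160 * TS * 1e6   # Normal CP, first symbol: ~5.2 us
cp_rest  = 144 * TS * 1e6   # Normal CP, other symbols: ~4.7 us
cp_ext   = 512 * TS * 1e6   # Extended CP: ~16.67 us

# A 0.5 ms slot holds exactly 15360 samples in both configurations:
assert 160 + 6 * 144 + 7 * 2048 == 15360  # Normal CP: 7 symbols/slot
assert 6 * (512 + 2048) == 15360          # Extended CP: 6 symbols/slot

# A CP of duration t 'absorbs' up to c * t of multipath path difference:
c = 3e8  # m/s
print(f"{cp_rest:.1f} us CP covers ~{cp_rest * 1e-6 * c / 1000:.1f} km")
```

The last line reproduces the figure mentioned above: a 4.7 µs CP covers roughly 1.4 km of path difference.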

Advantages and Disadvantages


As we saw today, it is quite simple to understand the advantages and disadvantages of CP.
The advantage is that it eliminates ISI, and the disadvantage is that it reduces the number of
symbols that can be transmitted in the same time interval.
But wouldn't there be a more efficient method for dealing with the problem that arises due to
multipath reception?
In an earlier tutorial we saw, for example, the Rake receiver, which has a much better
'efficiency'. But we must always consider the 'cost-benefit' point of view.
The implementation of the Rake receiver is far more complex, and if not well done, it can make the
situation even worse (further degrading the system). Furthermore, the hardware capacity required for
its implementation is much higher (compared to the CP).
As the CP serves the need well, it is the method used in LTE for ISI elimination.

Conclusion
Now we know another concept used in LTE networks, the Cyclic Prefix (CP), also used in other
technologies that rely on the transfer of symbols in their communication, while preserving the
orthogonality of subcarriers in OFDM transmission.
The CP is a set of samples duplicated (copied and pasted) from the end of each transmitted
symbol to its beginning, functioning as a guard interval that allows eliminating intersymbol interference
(ISI), with practically no additional hardware needs.

What is LCS (and LBS)?


You have probably already heard of LBS and LCS, but perhaps did not associate those 'names' with
the topic. And if you still do not work with or deal with LCS/LBS, you certainly will some day, especially
if you work in the IT or Telecommunications area, where this subject is increasingly present.
We're talking about location services.

With the advance of services such as M2M (Machine to Machine) and IoT (Internet of Things), location
services are more present in our lives each day.
For this reason, it's worth having at least a good understanding of their concepts and operation.
So let's get to today's tutorial on this subject.

Introduction
Network location services were originally known as LBS (Location Based Services): services based on
the location of the device. By knowing the location of the mobile device, the network can offer
commercial services in accordance with it.
Currently these services are known as LCS (Location Services), as standardized by 3GPP.
Actually, both refer to the same thing, and can be deployed in GSM (2G), UMTS (3G) and LTE (4G)
networks. From now on, we will refer to these services only as LCS.
Beyond commercial services, LCS allows emergency call location requirements to be met
(such as E112, 190 or 911 services).

Characteristics
Location Services (LCS) are location-based services whose goal is to obtain information on where
the mobile is (location information). With the standardization of the format of the location information
(e.g. latitude and longitude), operators can offer different types of services.
And these services can be used in several ways, such as for pricing, legal requirements such as lawful
interception, location services, emergency call services, among others.
Standardization includes aspects such as reliability, priority, security and privacy.
In addition, it also takes into account the technology used to obtain the information, which can be
network-based or mobile-based.

In the case of network-based technology, the operator must install equipment that can perform this function.

In the case of mobile-based, the location information is obtained by the mobile itself, for example through a GPS
(Global Positioning System) chip in it.

A common way to represent these aspects is through a graph of Accuracy versus Availability.
Depending on the technology used, a certain percentage of calls must be located within a certain
distance.

In the 90s, the US government, through the FCC (Federal Communications Commission), mandated
that all emergency calls (E911) begin to meet specific criteria for location and reliability.
Currently, these requirements are:

For network-based location: 67% of calls should be located with an accuracy of at least 150 meters, and 95%
within 300 meters.

For mobile-based location: the requirement is slightly stricter - 67% of calls must be accurate to at least 50
meters, and 95% within 150 meters.

Over time, however, it became apparent that locating emergency calls via the mobile's GPS was not
as efficient as expected: if the mobile is in an indoor environment, or even in a dense-urban one, it
will have problems getting a direct line of sight to the satellites.
Problems like this are making the FCC in the United States, and other entities around the world,
require higher efficiency (or accuracy) from operators in these situations.
But regardless of the obligation to meet government requirements (which would already be a good
reason), it is also very interesting for the operator to explore other aspects that LCS has to offer
(such as value-added services that generate revenue).
For that, the methods for locating mobile devices keep evolving.

Location Methods
The principle of locating a mobile is quite intuitive: obtain the distance between the mobile device
and a reference. For this, various kinds of measurements can be used, depending on their availability -
for example, cell measurements or measurements from GPS satellites. With these measurements the
location can be calculated (we can even consider the differences between measurements taken by
several elements).
The more measurements, the more accurate the location. Radio signals propagate at a known, fixed
speed (c = speed of light). The propagation time (t) is measured by the network. Then: Distance = c * t.
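The Distance = c * t relation above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not from any standard):

```python
C = 299_792_458  # speed of light in m/s (propagation speed of radio signals)

def distance_m(propagation_time_s: float) -> float:
    """Distance = c * t: how far a radio signal travels in the measured time."""
    return C * propagation_time_s

# Each microsecond of measured propagation time corresponds to roughly 300 m.
print(round(distance_m(1e-6)))  # -> 300
```

This is why timing-based methods need very precise clocks: an error of just 1 microsecond already shifts the estimated position by about 300 meters.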

But to better understand the main existing location methods, let's first analyze a figure with a basic
network (2G, 3G, 4G) in terms of the key elements involved in / necessary for LCS. In red we have the
elements that may be present specifically for this purpose. It will soon become clearer - we'll see
how these new elements and protocols are used.

There are different positioning methods, some based on the mobile and some based on network.
The most common are:

A-GPS

OTDOA

CELLID+RTT

Besides these three most widely known and used methods, there are other methods such as RFPM
(Radio Frequency Pattern Matching).
Note: although we're talking about different methods, applied at different times and in different
scenarios, the methods are more complementary than contradictory to each other.

A-GPS Method (mobile-based)


The mobile-based location is the simplest to understand. The mobile makes a connection with
satellites (of course: it must have an active GPS chip) and obtains the location information.

There is, however, an important detail: in the A-GPS (Assisted GPS) method, the mobile does not
make the initial connection directly to the satellite (which would be time consuming); instead, it gets
the data from its serving cell - which in turn already has the satellite information stored. As a result,
the mobile's connection is much faster.

CELLID Method (network-based)


As the most representative example of network-based location, we have the CELLID method. Through
the communication that the mobile already has with the network (either in Idle state or Connected
state), it is possible to obtain location data.
The solutions based on CELLID have some variations. For example, the CELL-CENTER type is the
simplest to obtain - but also the least precise. The error is the coverage radius, usually from
hundreds of meters to kilometers.

The CELLID + RTT method ensures a little more accuracy. Using the result of two measurements, the
TOA (Time Of Arrival) can be calculated according to the formula:
TOA = ( RTT - UE Rx-Tx time difference ) / 2

With this time, we can calculate the distance between the mobile and the cell. When the mobile is in
soft handover, the mobile position can be calculated more accurately by considering the overlap of
the TOA circles (one per cell).
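A minimal sketch of this CELLID + RTT calculation, TOA = (RTT - UE Rx-Tx time difference) / 2, follows below. The timing values are invented for illustration, not real measurements, and the function names are our own:

```python
C = 299_792_458  # speed of light in m/s

def toa_s(rtt_s: float, ue_rx_tx_diff_s: float) -> float:
    """One-way propagation time: subtract the UE's internal Rx-Tx turnaround
    from the round-trip time, then halve it."""
    return (rtt_s - ue_rx_tx_diff_s) / 2

def cell_to_mobile_distance_m(rtt_s: float, ue_rx_tx_diff_s: float) -> float:
    """Distance between the cell and the mobile, from the one-way TOA."""
    return C * toa_s(rtt_s, ue_rx_tx_diff_s)

# Hypothetical numbers: a 12 us round trip with 2 us of UE turnaround
# leaves 5 us of one-way propagation, i.e. roughly 1.5 km.
print(round(cell_to_mobile_distance_m(12e-6, 2e-6)))  # -> 1499
```

With two or three such distances (circles) from neighboring cells, the intersection narrows the position estimate down.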

TDOA Method (network-based)


Another method based on network measurements, and also widely used, is TDOA, which uses TOA
(Time Of Arrival) measurements of the mobile signal taken by the network.
To measure that signal, it is necessary to install a new network element, the LMU (Location
Measurement Unit).

Current methods comparison


We can update the chart of Availability versus Accuracy requirements that we saw above with the
main methods.

When possible (available), the best option to increase accuracy and reliability is to use a hybrid
approach, ie, combining all methods to increase overall efficiency.
One option, for example, is to first try to get the location with A-GPS. If unsuccessful, use TDOA.
This is a good example of complementary solutions for different areas, since A-GPS is the best
option in remote or rural areas, and TDOA has excellent efficiency in urban and suburban areas.
In addition, other methods have been developed, such as location based on Beacons or Tags, and also
through Wi-Fi.
Another method worth mentioning is the RFPM.

RFPM method (network-based)


The RFPM was developed with the aim of locating mobiles by mapping information from MMRs (Mobile
Measurement Reports) - such as signal strength, signal-to-noise ratio and delay - in a geo-referenced
database.
It requires neither new equipment nor modification of existing equipment.

Comparison of Current and Future methods


Again, we can update the chart of Availability versus Accuracy requirements with the current
methods along with the more recently developed ones.

Each method has its applicability, noting also that some methods can work together, increasing
accuracy and reliability.

Control Plane x User Plane


From the planes point of view (User or Control planes), we have two ways to convey location
information from the mobile.
The first way, an integral part of the network, is to carry it in the Control Plane.

In this case, we have low bandwidth consumption, along with high security and integrity. This makes its use
indicated in cases of emergency calls or network planning/monitoring.

The other way (User Plane) works at a higher level, over the data connectivity provided by the physical
network.

In the User Plane, the bandwidth consumption is higher, and in addition we have integrity issues. But this is
the method more directly associated (remembered) when it comes to providing location information for
value-added location-based services.

Specific LCS elements


Returning to our figure of a basic network (2G, 3G, 4G) highlighting the new elements with specific
location functions, it is easier to understand.

LMU (Location Measurement Unit): equipment required in each cell to enable the calculation of the OTDOA
(network-based location).

SMLC (Serving Mobile Location Centre): server used for the location calculations. It can calculate with
information from the LMU (where it is available), or with measurements from the network itself, such as TA
(Timing Advance).

GMLC (Gateway Mobile Location Centre): server that, as the name implies, serves as gateway to the LCS
services. Although not shown in the figure, it communicates with the HLR (Home Location Register), HSS
(Home Subscriber Server), VMSC (Visited Mobile Switching Centre), SGSN (Serving GPRS Support Node) and
MSC (Mobile Switching Centre), sending location requests and receiving location estimates.

Control Plane communication protocols between the mobile and the SMLC, defined by 3GPP:

RRLP (Radio Resource Location services Protocol): used in GSM networks.

TIA 801: used in cdma2000 networks.

RRC (Radio Resource Control) positioning procedures: used in UMTS networks.

LPP (LTE Positioning Protocol): used in LTE networks.

User Plane protocols:

SUPL (Secure User Plane Location): used in A-GPS.

Finally, an example of an LCS architecture in an LTE network topology, which has an E-SMLC (Evolved
SMLC) directly connected to the MME, and the GMLC.

However, further study of the above figure is beyond the scope of our tutorial today.

Future
The development and use of localization techniques should be increasingly present in future networks.
The requirements that were previously present only in the x-y plane are now already in three
dimensions (including the z axis - height).
Even before that future arrives (with the demand for new applications), we already have plenty of
options to use LCS:

Public and Private Safety: possible applications in various cases of public and private security. For
example, in a car accident, the LCS service could even save a life. And in the case of property security it
represents another extra help.

Value Added Services, Media & Content: according to the location, a number of services can be offered to
users, such as those who are in a certain region (a shopping mall, a stadium).

Network Planning and Optimization: the LCS services can be used to improve efficiency in several areas of
engineering itself, for example as an aid in locating points that require new sites (Planning) or points that
can be improved with optimization.

Internal Network Functions: as information to the internal network algorithms, with dynamic and optimized
resource allocation.

M2M and IoT: increasingly present in applications like Machine to Machine and Internet of Things.

Besides those listed above, the number of possible uses of location information is even greater, the
limit being only the emergence of new applications and solutions.

Conclusion
This was our tutorial on LCS (and LBS), location services, increasingly used and present in everyone's
daily lives.
Knowing the methods and basic workings described here serves as the basis for a more detailed
study on the subject (LCS/LBS).
If you liked this tutorial, feel free to share it. And if there is any topic you want to see explored here,
please get in touch or comment below.
Until the next tutorial.

What does Orthogonal mean in Wireless Networks?


The orthogonality concept is one of the most important and crucial mechanisms that allow the
existence of modern wireless systems such as WCDMA and LTE.
But this concept is not always understood with sufficient clarity, and ends up being just 'accepted'.

An example of this are the orthogonal codes - when it comes to CDMA and UMTS (WCDMA), the very
names of these access technologies already refer to them.
In LTE, orthogonality also plays a fundamental role, only in this case it refers to another access
technology, OFDM transmission - although quite different from the code scheme of CDMA and
WCDMA, it depends entirely on the orthogonality principle.
And that's what we'll study today, trying to clearly understand this principle: orthogonality.

And for this we will demonstrate its practical application in a CDMA system. This analogy can be
further extended by you in the understanding of any system that uses this principle/concept.

Note: All telecomHall articles are originally written in Portuguese. We then translate them to English
and Spanish. As our time is short, you may find some typos (sometimes we just use the automatic
translator, with only a final and 'quick' review). We apologize, and we count on your understanding of
our effort. If you want to contribute by translating/correcting these languages, or even creating and
publishing your own tutorials, please contact us: contact.

CDMA Example
As we will demonstrate using the CDMA technology, it is important to first do a little review of the
three most basic access technologies (FDM, TDM and CDM) with respect to bandwidth allocation and
channel occupation by their users.

In FDM (division by frequency) access: each user has a small PART OF THE SPECTRUM allocated during ALL THE
TIME;

In TDM (division by time) access: each user has ALL SPECTRUM - or nearly all - allocated at a small PART OF
THE TIME;

In CDM (division by code) access: each user will occupy ALL SPECTRUM during ALL THE TIME.

In the latter case, the multiple transmissions are identified using the theory of codes. And therein lies
the great challenge of CDMA: to be able to extract the desired data while excluding all the rest - the
interference, and everything that is unwanted.
These CDMA codes are called 'orthogonal', so let's start with the basic meaning of this word.

What does Orthogonal means?


Seeking the origin of the word, we find that it comes from the Greek, with the first part - 'orthos' -
meaning 'right' or 'straight', and the second part - 'gonia' - meaning 'angle'.
In other words, the most common definition is: perpendicular, forming a right angle of 90 degrees
between the references.
Orthogonal is a commonly used term in Mathematics (in the same way that the term Perpendicular).
But there is an 'informal' rule: when we speak of triangles, we use the term Perpendicular, and when
we refer to Geometric Vectors or Coordinates, we use the term Orthogonal.

Then, according to Mathematics: if two vectors are perpendicular (the angle between them is 90
degrees), then we actually say that they are orthogonal.
Right. But how can we know, through our calculations, that two vectors are orthogonal?
Let's talk a little math, but do not worry, we will be brief.
When we talk about vector calculations, one of the best-known operations is the Dot Product (or
Scalar Product). Simply explained, it is an operation that results in a scalar (a number) obtained by
'multiplying' two vectors. Remember that we are talking about vectors: the Dot Product multiplies
each entry of one vector by the corresponding entry of the other vector, and adds all the results.
To illustrate in a simple way, consider two vectors u and v with only two dimensions.
Vector u = < -3 , 2 > and Vector v = < 1, 5 >
The scalar product is equal to +7 (equivalent to -3*1 + 2*5).
This positive number, although it may not seem so, already gives us useful information: we will not
show it here, but a positive Dot Product indicates that the angle between these vectors is Acute
(smaller than 90 degrees!). Likewise, if the Dot Product is negative, the angle between these vectors
is Obtuse (greater than 90 degrees).
We're almost there, and you may already be 'connecting the dots'...
In Mathematics, another way to write the Dot Product between two vectors u and v is:

u.v = |u| |v| cos(x)


Note that now we have in the formula the information of the angle between the two vectors (the
angle X). We know that orthogonal vectors have an angle of 90 degrees to each other. And we also
know that the cosine of 90 degrees is equal to 0 (zero).
That is, we can conclude: if two vectors have an angle X of 90 degrees to each other - ie they are
orthogonal - their Dot Product is equal to Zero!
u.v = 0
And it is from here that comes a famous definition that you may find in several textbooks: 'Two
vectors are orthogonal if (and only if) their Dot Product is zero.'
Even if you did not fully understand or remember the basics you saw long ago in school, you now
know the algebraic operation that allows us to verify orthogonality between vectors: the Dot Product.
It is an operation between two sequences of numbers of equal length, returning a single number (the
sum of the products of the corresponding entries of the two sequences).
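The check just described can be reproduced in a few lines (a minimal sketch, using the same vectors from the example above; `dot` is our own helper name):

```python
def dot(u, v):
    """Dot Product: multiply corresponding entries of the two sequences
    and add up all the results."""
    return sum(a * b for a, b in zip(u, v))

print(dot((-3, 2), (1, 5)))  # -> 7  (positive: acute angle, not orthogonal)
print(dot((1, 1), (1, -1)))  # -> 0  (zero: orthogonal vectors)
```

The second pair, (1,1) and (1,-1), is exactly the pair of orthogonal vectors we will reuse later as codes.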
But enough math for today! We are not here to deduce or do algebraic or geometric calculations, but
to understand what an 'orthogonal signal' is. Of course, the introduction we just saw was needed.
(Note: as always, we are not concerned with the more complex definitions and calculations involved
in this matter: feel free to extend them in your studies.)
Let's proceed, and try to apply this orthogonality concept to our practice in wireless networks. In this
case, let's try to understand how we can transmit multiple signals in a single frequency band, and
then retrieve them.
The key here is that each of these codes do not interfere with each other. That is, these codes are
orthogonal!
We say that two codes are orthogonal when the result of multiplying the two, bit by bit, over a period
of time, is equal to Zero when added up.
Suppose two vectors as shown below: (1,1) and (1,-1). It is easy to see that they are orthogonal, no?

They have a right angle to each other, and the scalar product is Zero. It is easy to realize that they do
not interfere with each other.
But what if we need more orthogonal codes (mutually exclusive, that do not interfere with each
other)?

In this case, we need to create a set of codes (sequences of numbers) whose Dot Product is Zero.
That is, to generalize this relationship to 'n' dimensions, where we can apply the same simple
mathematical rule and verify the orthogonality.
Fortunately, a mathematician has done all this work. The Frenchman Jacques Hadamard created a
simple rule to obtain a set of codes that are orthogonal - mutually exclusive.

Hadamard Matrix
To generate our mutually exclusive number sequences, we can follow the basic rule that Hadamard
defined in his 'Hadamard Matrix'. In this matrix, whose entries are either +1 or -1, all rows are
mutually orthogonal.
The Hadamard matrix rule is used, for example, in CDMA by the matrix that generates the Walsh
Codes. Let's see how this generation works, in a simple Excel sheet.
Consider the smallest and simplest existing matrix, a 1x1 matrix.

We apply the matrix creation rule: replicate (copy the matrix) to the right, replicate (copy the matrix)
down, and invert (copy the matrix multiplied by -1) on the diagonal.

As a result, we have a 2x2 matrix.

Note: see that in this case, it is still easy to notice the orthogonality in the vectors that represent
these two codes.

Following the same way, now we generate a 4x4 matrix with orthogonal codes.

Again, we repeat the action, and now we get an 8x8 matrix.

At this point, we have a matrix of 8 codes (sequence of numbers) orthogonal to each other.
We can show that the Dot Product of any row (code) with any other row is always Zero. As an
example, we choose code '2' and code '5'. Multiplying each of the corresponding entries, we have
another sequence as a result. The sum of the entries of this new resulting sequence is always Zero!
(Do the test yourself: choose any two codes, multiply the corresponding entries, and sum them up:
the end result is Zero! Interesting, no!?).
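The replicate-right / replicate-down / invert-on-the-diagonal rule can also be sketched in code (a small illustration; `hadamard` and `dot` are our own helper names, and the matrix is built exactly by that rule):

```python
def hadamard(n):
    """Hadamard matrix of order n (n a power of 2): start from [[1]] and, at
    each step, copy the matrix to the right, copy it down, and copy its
    negation on the diagonal."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

H8 = hadamard(8)
# Every pair of distinct rows is orthogonal - for example, codes '2' and '5':
print(dot(H8[1], H8[4]))  # -> 0
```

You can loop over all pairs of rows and confirm that the Dot Product of any two different codes is always Zero, as the text describes.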

Okay. We understand that we can - and how we can - generate a set of orthogonal codes (sequence
of numbers). But in practice, how is it all done?
Let's continue.

Code Spreading (and Despreading)


To apply this concept in practice, we need to see what happens in the transmission of a spread
signal over time: let's see how the transmission of spread data works. In other words: how is it
possible for many concurrent users to use exactly the same frequency, transmitting all at once,
without a total collapse of the system?
Again, the best way to demonstrate is using examples and simple analogies.
Let's begin by considering a signal from a particular user, with its information bits, as shown in the
following figure.

Consider yet another signal, with 8 'carriers' or 'bits'.

If we multiply each user information bit, for these 8 bits, we have a compound resulting signal.

This resulting signal carries the same 'information' as the user bits signal, but in a more 'spread'
way.
When the receiver receives this resulting signal, it knows that a particular sequence represents some
user information bit.

Similarly, when receiving another sequence, the receiver is able to determine that it is another user
information bit!

Note that for the original user bit, we now have 8 corresponding information bits arriving at the
receiver. In other words: 1 bit spread into 8 'chips' *!
Important: * actually, once the signal is spread, we no longer call them bits, but chips - this is
another concept that we will explain another time. When this chipstream arrives at the receiver, it is
properly understood by the receiver. Unfortunately, transmission in systems such as CDMA is much
more complex than that. So we need to continue and bring this practical scenario into our study
(later we can delve deeper).
The user's information, which is above represented by generic bits, can be voice and/or data. Let's
represent it already as a pure digital signal, and 'follow' the transmission of the user bits, from its
generation to its recovery at the receiver.

Transmitting (and receiving) a user's data


We start by considering one bit of one user, first with the value '1'.

Note: in this case the red color identifies user 1 (not whether the bit is equal to +1 or -1).
Consider as the bit value the value shown within it - in this case, equal to 1.
And now we recall what we saw earlier about the orthogonal code matrix. Let's use as an example a
matrix with 8 codes (ie, 8 codes available for use, with guaranteed orthogonality among them - they
do not interfere with each other).

Just to make it easier to follow, we will choose 'Code 1' for use with 'User 1' (we could choose any of
the codes; it doesn't need to be '1').

We then do the spreading: multiply the user 1 bit by each bit of the code we chose (code 1). This
signal can then be transmitted and, considering initially an ideal scenario, this same sequence
reaches the receiver.

Note that our discussion now begins to get 'interesting': how can the receiver recover user 1's
information (in this case, knowing that the bit value is equal to 1)?
The answer is simple: by despreading the received signal, using exactly the same code that was used
to spread it!
Okay, still can not see how? So let's move on.

Multiply the received signal (user 1) by the same code used for spreading (code 1). As you can see,
the result is a sequence of 1's.

But this result is the content that was spread into 8 parts. We need to add up all the information and
divide by the total number of parts. In this case, the sum is equal to 8, over a total of 8 bits.
That is: the information is equal to 8/8 = 1.
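The spreading and despreading steps just described can be sketched like this (bit values mapped to +1/-1, and the code is one row of the 8x8 orthogonal matrix; the helper names are ours):

```python
code1 = [1, -1, 1, -1, 1, -1, 1, -1]  # one row of an 8x8 orthogonal matrix

def spread(bit, code):
    """Multiply the user bit (+1 or -1) by each bit of the code: 1 bit -> 8 chips."""
    return [bit * c for c in code]

def despread(chips, code):
    """Multiply by the same code, add everything up, divide by the number of parts."""
    return sum(ch * c for ch, c in zip(chips, code)) // len(code)

tx = spread(+1, code1)
print(despread(tx, code1))                 # -> 1   (the sum 8, divided by 8)
print(despread(spread(-1, code1), code1))  # -> -1  (the 'bit 0' case, mapped to -1)
```

The same code must be used on both ends: despreading with any other orthogonal code would sum to zero instead of recovering the bit.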

Okay, maybe you are still not following well, and may think that it was a coincidence. Okay, let's
continue.
Repeat the same process, but now with the bit equal to 0. So, surprised?

Even so, you may still not be convinced.


Let us now transmit new data, but for a user 2 (we use dark blue to refer to that user's data in the
figures). According to the theory we have seen, for the codes/signals not to interfere with each other,
they need to be orthogonal. So we choose one of the eight codes of the orthogonal code matrix that
we are using as an example (let's choose a code other than code 1, which we already use for user 1 -
soon you will understand why).
Unlike the first user, who transmitted '10', we now assume that user 2's information is two bits equal
to '11'. Repeating the same calculations, we have the result for user 2. Interesting, no?

Even with the demonstrations above, if you're still not convinced that the division can be done by
code, see an example for a third user (green), sending '01'.

We could stay here with endless demonstrations, but we believe that you have already understood
the idea, haven't you?
Right. So enjoy a short break, because the best is yet to come.
As we are trying to learn today, one of the main functions of these types of codes that we used above
is to preserve orthogonality among different communication channels.
From the set of orthogonal codes obtained from the Hadamard matrix, we can make the spread in
communication systems in which the receiver is perfectly synchronized with the transmitter,
generating codes according to the characteristic of each system.
So far so good. In the example above, we have spreaded, transmitted and recovered the user data.
But individually! In practice, the data of each user are not transmitted separately, but all at the same
time!
The highlight, the great advantage of transmission using codes, is precisely the ability to transmit to
multiple users at the same time (using orthogonal codes) and extract each user's data separately!
Again, through examples is easier to visualize how this works. So let's continue?

Transmitting (and receiving) data of multiple users


Suppose now that all the previously spread signals arrive at the receiver.

We can represent these signals as waveforms, so it is easier to view them.

And the composite signal can be represented in the same way (the incoming signal is always the sum
of all signals of each user).

Take a little break here: the composite signal (shown above) is the sum of all the spread signals
from all users, and apparently doesn't give us any interesting information, right?
Wrong. Actually, that is only how it appears: in fact, we have a lot of information 'carried' in this one
signal.
Let's continue. Looking only at the composite signal, we have no idea what is 'bundled' inside. This is
normal, and it really is how it 'seems'.
But now let's go back to the theory of orthogonal codes that we learned: when we multiply one code
by another, all that is not orthogonal is interference, and is excluded.

That is, if we multiply an orthogonal code by this composite of codes, we can separate, or 'recover',
that information back.
So if we multiply the composite by the orthogonal code used to spread the signal of user 1, we obtain
the original signal of user 1!

Magic? No: simply Engineering, Mathematics and Physics!


We can do the same with user 2: use its unique spreading code, and obtain the original
signal!

Similarly, the same with the third user.

And it doesn't matter how many users there are: if they were spread using orthogonal codes, they
can be despread the same way, with the same unique code for each one!
The following figure illustrates the general summary of everything we saw today (even if the font is
too small, it serves to show the generic scenario).
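The whole multi-user scenario of this section can be condensed into one short sketch: three users, three orthogonal codes, one composite signal on the air, and each bit recovered by despreading with the matching code (the codes and bit values are our own illustrative choices):

```python
# Three mutually orthogonal 8-chip codes (rows of an 8x8 Hadamard matrix)
codes = [
    [1,  1,  1,  1,  1,  1,  1,  1],   # user 1
    [1, -1,  1, -1,  1, -1,  1, -1],   # user 2
    [1,  1, -1, -1,  1,  1, -1, -1],   # user 3
]
bits = [1, -1, 1]  # one bit per user, mapped to +1 / -1

# The air interface carries the SUM of all the spread signals, chip by chip.
composite = [sum(b * code[i] for b, code in zip(bits, codes)) for i in range(8)]

# Despreading the composite with user k's code recovers user k's bit:
# the other users' contributions are orthogonal and sum to zero.
recovered = [sum(ch * c for ch, c in zip(composite, code)) // 8 for code in codes]
print(recovered)  # -> [1, -1, 1]
```

Note that the composite itself looks like meaningless numbers, yet every user's bit comes back intact, exactly as the figures above illustrate.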

Congrats! You have just proven that dividing signals using codes is actually possible, and that CDMA works!

Orthogonality in LTE and GSM


We have understood and applied the concept of orthogonality to the orthogonal codes used in CDMA
and WCDMA systems.
But in other systems, how does it work?

In fact, as mentioned before, all systems rely on some type of orthogonality to work, ie some way of
transmitting information so that it can be retrieved (as shown today).
However, due to the characteristics of each system, this concept is applied differently.
As our tutorial is already quite extensive for today, let's not overextend ourselves; we will cover this
topic in the future, in tutorials that require this prior understanding.

Ideal World x Real World


It is also clear that not everything is perfect: in our example, we considered an ideal world, without
interference or any other problem that could affect our communication. Unfortunately, such problems
exist, and in quantity!
For example, the codes are only orthogonal - ie, do not interfere with each other - if they are
perfectly synchronized. In our example, we did not consider the phase difference between them - in
other words, we considered the speed of light to be infinite.
There is also the effect of the multipath components, which makes the recovery of the signal much
more complex than shown in the simplifications we've seen.
But they all have their solutions. Most refer to common problems, grouped as 'Multi-User Detection',
which in turn has a number of techniques to minimize each of these effects. The Rake Receiver,
already explained in another tutorial, is one of them.
We also have factors like Power Control, which gives different 'weights' to each user. And a number
of other factors and limitations of all types that further complicate this communication.
Naturally, all these complications have solutions, and that is the fantastic task a Telecom Professional
faces in their day-to-day work.
But for today, we believe that what we've seen is enough, at least for a clear understanding of the
basic principle of orthogonality - which was our initial goal, remember?

Physical x Logical Mapping


To conclude: in the demonstrations we used example codes with 8 bits (remembering that, in fact,
once the signal is spread we no longer call them bits, but chips; this is the subject of another
tutorial, so let's not lose our focus today), when in reality they are much bigger, such as the 64
Walsh Codes in CDMA.
We used addition and multiplication operations because we considered the signals at the physical
layer: when signals are mapped to the physical layer, we assign them bipolar values - in our case,
+1 and -1.
But in logical operations we have 0's and 1's. And in this case, we use the binary XOR operation. (We
are already well advanced by now, so let's not extend ourselves on that subject - it would require
another tutorial just to explain it in a simplified way.)
Simply understand that the codes used by systems such as CDMA and WCDMA are also made of 0's
and 1's. In the XOR operation: if the bits are equal, the result is 0. If the bits are different, the result is 1.

When we apply XOR to any two different Walsh codes (or any other set of orthogonal codes), the
result is always half 0's and half 1's (in the Hadamard matrix, for the sum to be 0, half the entries
are equal to +1 and half are equal to -1).
And when we apply XOR to two equal codes, the result is only 0's!
Although it seems simple, this operation allows the system to work perfectly.
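In logical (0/1) form the same check looks like this (the two codes below are rows of an 8-chip binary Walsh matrix, chosen just for illustration; `xor_codes` is our own helper name):

```python
# Two rows of an 8-chip binary Walsh matrix (0/1 form)
w1 = [0, 1, 0, 1, 0, 1, 0, 1]
w4 = [0, 0, 0, 0, 1, 1, 1, 1]

def xor_codes(a, b):
    """Bitwise XOR of two equal-length binary codes."""
    return [x ^ y for x, y in zip(a, b)]

print(xor_codes(w1, w4))  # different codes -> half 0's, half 1's
print(xor_codes(w1, w1))  # equal codes    -> only 0's
```

This mirrors the bipolar case exactly: XOR on 0/1 codes plays the role that entry-by-entry multiplication played on the +1/-1 codes.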
The following is an example of a Walsh code matrix with 64 orthogonal codes used in CDMA.

As an example for UMTS, we can mention the OVSF (Orthogonal Variable Spreading Factor) codes,
virtually identical to the Walsh codes (CDMA IS-95) - only the generation is different.
We'll also talk about these specific codes in other tutorials, explaining how they are used for different
channelization.
Now: enough for today, do you agree?
As you can see, we have plenty of themes and topics to explore and explain in a simple way, so you
can extract the maximum possible from your learning program, and also when applying it to your
work. We've seen many concepts today!
If you managed to understand well at least the concept of orthogonality, and how it makes code
division possible, it is definitely enough. In later tutorials we will continue addressing other concepts,
trying to show them in the most simplified manner.
IMPORTANT: we did not talk much about chips and symbols today, but they are directly related to
the subject. Anyway, they were not essentially today's subject.
Again, as always: all of these concepts that were not covered today will be detailed in other tutorials.

Conclusion
Today we learned one of the most basic concepts, but also one of the most important, which makes
possible the existence of modern wireless networks: the orthogonality concept.
As an example, we saw how the concept applies in practice to CDMA and WCDMA systems, more
specifically in the form of orthogonal codes applied at the physical layer, allowing access division by
codes to happen, and thus achieving multiplexing of different users' chipstreams on the same carrier
without introducing interference between them.
The concept is much broader than what we tried to show here today, and extends to many other
areas and technologies. Anyway, we hope it can be helpful for further development in the subject.
We look forward to your company very soon, in another tutorial. Your opinion is very important to
define new themes you would like to see detailed here. And please share this article with your
friends if you enjoyed it. Thanks in advance.

What is ISI (Inter Symbol Interference) in LTE?


The development of modulation techniques allowed significant advances in mobile communications
through the transmission of symbols carrying a large amount of information.

However, there is a common problem in such systems: inter-symbol interference (ISI).

Let us understand this today.


Goal
Understand what Inter Symbol Interference (ISI) is, present in LTE systems and in any technology
that uses symbols to transport information.

What is ISI?
In an ideal system (theoretical), the transmitted symbols arrive at the receiver without any loss or
interference, as shown in the following figure.

But in a real scenario the transmitted signals are affected in different ways, for example, according to
the propagation environment.

What happens in practice is that the "same" signal arrives via multiple paths ("Multipath") and
consequently with different delays ("Delay Spread").

Although "Multipath" brings some positive benefits, "Multipath" and "Delay Spread" also end up
causing inter symbol interference.
So let's learn about these factors.

Delay Spread
A transmitted symbol can be received multiple times at the receiver, more or less as an "echo" effect.
This echo is what we call "Delay Spread".

In the above figure, the transmitter transmits a single symbol. This symbol propagates along
different paths (A, B and C), eventually reaching the receiver at multiple time instants, and
therefore as multiple "replicas".
The total elapsed time between the first and last arrivals is determined by the environment
(including the structures, how close they are, etc.). For example, in an urban environment, where
reflection is high (many buildings, many vehicles parked and moving), this delay has a typical value
of 5-10 microseconds.

Multipath
And as we mentioned before, in a theoretical ideal scenario, all symbols would propagate to the
receiver along a single path, and arrive without any delay. But in practice, the signal propagates
along different paths from the transmitter to the receiver - this is the "Multipath".
Assuming three different paths (A, B and C), signals arrive at the receiver for example as shown
below.

At the receiver, all these "multipath" components are summed (1). And the practical result is that we
have multiple symbols being received "simultaneously" (Symbols "Overlap") - this is the intersymbol
interference (ISI)!

Symbol Duration
As we can easily conclude, a very important determining factor for ISI is the time duration of the
symbol.
If the symbol period (T) is very short compared to the "Delay Spread" (t) the impact is significant (T
<< t).

But if we can extend the symbol length, most of it will not suffer the impact of ISI (T >> t).

A small part of the symbol will still be impacted, but for most of its duration the symbol will
remain unaffected by the reflections propagated via "Multipath".
That is why ISI is minimized when we use a longer symbol period (or lower symbol rate).
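As a rough sketch of this trade-off, we can compare the delay spread with the symbol period. The numbers below are illustrative, not taken from the figures (66.7 µs is the typical LTE OFDM symbol duration, and `isi_fraction` is our own helper, not a standard function):

```python
# Illustrative comparison of symbol period (T) vs delay spread (t).

def isi_fraction(symbol_period_us, delay_spread_us):
    """Fraction of the symbol duration affected by multipath echoes."""
    return min(delay_spread_us / symbol_period_us, 1.0)

# A short symbol (1 us) against a typical urban delay spread of 5 us: T << t
print(isi_fraction(1.0, 5.0))             # 1.0 -> the whole symbol is corrupted

# A long OFDM symbol (~66.7 us in LTE) against the same 5 us: T >> t
print(round(isi_fraction(66.7, 5.0), 3))  # 0.075 -> only a small part affected
```

This is exactly why OFDM-based systems choose long symbols (plus a cyclic prefix) to absorb the delay spread.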

Conclusion
Today we learned, in a very simple way, what inter symbol interference is, present in systems that
use symbols for communication.
We saw what causes the interference (the "Delay Spread", caused by "Multipath"), its consequences,
and what we can do to minimize it.
In the next tutorials, we'll continue to cover topics directly related to LTE (and similar
technologies), so that you'll be able to understand all the complex concepts of this technology
quite easily.
For questions, suggestions or comments, you can post your comment below.
Thanks for reading and visiting. Until the next tutorial.

What is Splitter and Combiner?


RF Components - RF Power Divider and RF Power Combiner
Knowing the equipment used in your area of work is a basic need for any professional. And
understanding its features and functions (applications) often represents the difference when getting
a new job, or when finding solutions to problems.

In the Telecommunications & IT area we have a wide range of equipment (or components), which varies
according to the specific area of expertise. For each piece of equipment you can find a huge amount
of documentation available, in the form of catalogs, courses, white papers, presentations, etc.
Many times, however, the basics - the most important part - are not fully understood by everyone,
even by those who use the equipment in practice.

With the goal of explaining in a simple way the main features of RF components, we initiate today this
new series here at telecomHall.
However, we will not delve into details such as calculations and more extensive or complex
definitions. Let's just stick to the main objective: to get to know the basic and essential aspects
of each piece of equipment. With this basis, any deepening of the studies can be done more easily,
if you wish.
And to start, let's meet two of these elements: the RF Power Splitter (or 'Splitter') and the RF Power
Combiner.

Goal
Present in a simple way RF components: Splitter and Combiner.

RF Power Divider
Let's start with one of the simplest and most intuitive of these components: the splitter.
Splitter, as the name implies, divides.

In nature we can see an example of a splitter in a river that encounters an obstacle and splits into
two. In this case, part of the water continues along one path, and the rest along another.

In the case of RF Dividers, instead of water, it is the RF signal that is divided: the waveform
remains unchanged, but the power is split. For this reason, RF dividers are known as RF 'Power'
Splitters.
In the following figure, we see a simple illustration of a splitter. The signal (represented by
large red circles) goes in on one side and out through the two others, (B) and (C).

Note that the output signal is the same (has the same form), but each output has 'half' the power of
the original signal (small red circles).

Basically, that's what the divider does. And the next question then would be: 'Why, or where, do I
use a splitter?'
Imagine the following situation: a small rural community was contemplated in your company's RF
planning with the installation of a new BTS. The site for the installation of the tower has already
been acquired: it's on a small hill at the center of 3 small regions, with good line of sight to all
of them, as seen in the figure below.

Unfortunately, for reasons of 'cost reduction', the BTS has only 2 cells.
But there are 3 regions to be served (covered). So, what can we do?

Ok, we know that in the case shown above the ideal solution would be the installation of 3 cells,
but we don't have this configuration available! Given this scenario, the alternatives would be
leaving one of the small communities without coverage or... installing a divider (splitter).
We can minimize the problem by simply using a divider (splitter) to split one cell into 2, serving
all 3 regions of interest, and achieving the satisfaction of a larger number of people (all
potential new customers).

An important observation in the case above is that the cell that is 'NOT' divided (yellow in the
figure) should cover the denser region, because that is the one that will have the greatest traffic.
And the cell that is divided will simultaneously cover the other 2 smaller regions (in blue in the
figure).
In addition, each of the two cells in blue has half the power of the yellow sector (considering the
same transmitter power for each one). This 3 dB difference must be taken into account so that
there's no loss of quality, mainly in 'indoor' regions. Anyway, this can be fixed through
adjustments, if for example it is possible to increase the transmitter power. It will depend, of
course, on the quality in the regions served; usually, in cases like this, we don't have many losses
in practice.
And as already mentioned, this is not the 'final solution', but it surely is the best action to
take, considering the scenario above - covering all the small regions. In the future, with the
development and progress of each of these regions (and consequently greater use of
telecommunications services) we will then have justification for the expansion of a third cell in
the BTS.

Ok, we've seen how an RF Power Divider works, and also a good example of its implementation.
But dividers do not only have 2 outputs. We have, for example, splitters with 4 outputs. In this
case, each output will carry 1/4 of the original signal power (remember that dividers always divide
the input signal 'equally' between all outputs).
Note: one of the most important points when it comes to RF dividers is the insertion loss, i.e. the
loss we add to the system when we insert such elements. The greater the loss inserted in the
system, the smaller the part of the signal that arrives at its destination, which is bad.
So when we say that a 4-output splitter will have 1/4 of the original signal power on each output,
we are 'disregarding' the loss of inserting the component itself, and considering only the loss
resulting from the division of the signal (whose order of magnitude is much larger).
So in practice, what are the losses when using an RF splitter?
Assuming the insertion loss of the element is NULL (i.e. the characteristic impedance of the system
is kept), and taking into account only the loss from dividing the signal into more outputs, we have
the following correspondence table of 'Number of Output Ports' x 'Power Level Reduction' in a
divider (splitter).

For example, if at the input of a 4 output divider we have a signal of -84 dBm, there will be a signal of
-90 dBm in each one of its outputs.
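In general, an ideal N-way splitter reduces each output by 10·log10(N) dB, which is where the ~3 dB (2 outputs) and ~6 dB (4 outputs) figures come from. A minimal sketch, with insertion loss disregarded as in the text (`splitter_output_dbm` is our own helper name):

```python
import math

def splitter_output_dbm(input_dbm, n_outputs):
    """Ideal power at each output of an N-way splitter (insertion loss ignored)."""
    return input_dbm - 10 * math.log10(n_outputs)

print(round(splitter_output_dbm(-84, 2), 1))  # -87.0 -> 2 outputs, ~3 dB down
print(round(splitter_output_dbm(-84, 4), 1))  # -90.0 -> 4 outputs, ~6 dB down
```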

Another important piece of information regarding RF Power Dividers (splitters) is isolation, i.e.
one signal should not interfere with the other. For this, it is important to know the construction
characteristics. They can be built using resistors or transformers, the latter being used in
examples like the one above. But that is beyond our scope today; later we'll explain their
construction and operation in a simple manner, detailing how this isolation works.
For now, just know that all RF splitters are passive elements, i.e. they don't need power.
We are also NOT analysing other aspects, such as different frequencies or technologies, yet. Let's
first understand the most important (main) aspects in their simplest form. In the next tutorials of
the series we'll gradually assimilate the countless possibilities of combination and use of such
equipment.
At this point we already know the RF Power Divider: we understand its basic operation and what it
is for, and we have also seen a practical example of its use.
Let's continue and learn a 'new' RF component.
What do you think would happen if we reversed the use of the equipment that we showed at the
beginning of this tutorial?

RF Power Combiner
If we reverse the use of the equipment shown at the beginning of the tutorial, inputting 2 different
signals on ports (B) and (C), we have the sum, or 'combination', of these signals on the output (A).

You've probably noticed that, actually, the combiner is nothing more than a divider used in the
reverse way, right?
And that's exactly what it is: an RF Power Combiner simply combines (sums) different signals into a
single output. In the case above, the signals input on ports B and C go out through output (A).
In the same way as the divider, the name is suggestive: the combiner combines! At first you may find
it very simple... and it really is, but it is extremely important for all systems where we need to
group (and ungroup) signals with the same or similar characteristics.
RF Power Combiners are then used in applications where it is necessary to transmit/send multiple
signals over a single medium.
We will use the same example above to see how this is done. A user (yellow in the figure)
transmits his conversation, which arrives via antenna (1) at the BTS (2). Another user (red) also
transmits his conversation, via antenna (3), to the same BTS. At the BTS these signals are then
summed (or combined), and the BTS can continue the processing of each one of the calls.

See that the different signals of each user (yellow and red) were summed (or combined) in a
combiner, and both signals travel over a single cable from the antennas to the BTS.

The combiner doesn't make any kind of transformation or change to the signals; it simply combines
them into a single output.
It is also easy to understand that features such as Loss and Isolation of the RF Power Combiner are
the same as we've previously seen for the divider. Like the divider, the combiner is also a passive
element.
Ok, you now know what an RF Power Combiner is!

What we've seen so far applies to signals that have the same characteristics, no matter the
frequency: the splitter and combiner 'don't care' about frequency.
But what about when we need to convey different, specific frequencies via a single broadband
antenna - what do we do?
In this case, we need to 'adjust' RF filters to ensure that interconnection into a single
transmission medium.
But this is already the subject of the next tutorial in this series.

Conclusion

We completed the first tutorial in the series on RF components, understanding in a simple way what
RF Power Combiners and Splitters are, and what they are for. Now we are prepared to meet and
understand other elements (in the next tutorials).
It is very important that these basic concepts are well understood (even in a simple way), because
it is very common for questions to arise in the definitions between these elements and the other
elements that we'll see in sequence.
In fact, there may be doubt even between combiner and splitter, because we have seen that a
combiner can be used as a splitter and vice versa. That is, often the difference is only in how it
is used.
Thank you for your visit, and we are waiting for you in the next tutorial.

Analyzing Coverage with Propagation Delay - PD and Timing Advance - TA (GSM-WCDMA-LTE)


One of the biggest challenges in Planning, Designing and even Optimization of Mobile Networks is to
identify where the users are, or how they are distributed.

Although this information is essential, it is not so easy to obtain. But if we have, and know how
to use, some counters related to this kind of analysis, everything becomes easier.
For GSM, we have seen that we can get a good idea of the location (distribution) of users through
the TA (Timing Advance) measurements, as we detailed in a tutorial about it.

Today we are going a little further, getting to know the equivalent parameters in other
technologies, such as WCDMA (and LTE).

Goal
Learn the performance indicators related to user distribution in a multi-technology mobile network,
and also how to use these indicators together in analyses.

TA in 2G (GSM)
We've already talked about TA in GSM in another tutorial, so let's just recall the most important
concept.
TA (Timing Advance) allows us to identify the distribution of 2G (GSM) users relative to their
serving cell, based on the signal propagation delay between the UE and the BTS. The GSM mobile (from
now on, we will call it UE here too - as in 3G) receives data from the BTS, and 3 time slots later
sends its data. This is sufficient if the mobile is close to the BTS; however, when the UE is far
away, it must take into account the delay that the signal will suffer along the radio path.
So: the UE sends the TA data together with other measurements, so that the necessary time
adjustments can be made.
In this way, we indirectly get a map with the distribution of users, or their probable location area,
corresponding to the coverage area of the cell, with a minimum and maximum radius. The following
figure shows this more clearly, for an antenna with 65 HBW, and maximum (1) and minimum (2)
radius.

And in 3G and 4G (WCDMA, LTE), do we also have TA?


The expected question here is: do we have TA in 3G/4G? The answer is yes, but in WCDMA it has
another name: Propagation Delay. (In LTE, we have both parameters - TA and PD.)
So, let's learn a little more about it.
So, let's learn a little more about it.

Propagation Delay in 3G (WCDMA)


As we've said, the parameter in 3G corresponding to TA in 2G (GSM) is the Propagation Delay. With
this parameter, we can estimate the distance between the UE and the serving cell, in the same way as
we do in GSM.
But in 3G it has some different characteristics. To begin with, the 3G measurements are made by the
RNC, and not by the UE.

In a recent 'RRC and RAB' tutorial we saw how an RRC connection is established, where the UE sends
an 'RRC CONNECTION REQUEST' message. When the RNC receives this message, it sends another message to
the NodeB, to set up a Radio Link ('RADIO LINK SETUP REQUEST') (1). This message contains the
Information Element with the Propagation Delay data, that is, the delay that has already been
checked and adjusted to allow transmission and reception synchronization.

As already mentioned, the information does not come from the UE as in GSM; it is information that
the RNC already has to make the communication possible: this delay, the Propagation Delay
Information Element (IE), is reported with a granularity of 3 chips.
So let's do some math.

We know that WCDMA has a constant chip rate of 3.84 Mcps (3.84 million chips per second).

We also know (we consider) that the speed of light is 300,000 km/s.

If in 1 second I have 3.84 M chips, how long does 1 chip take? Answer: about 0.26 µs
(microseconds).
As the information has a granularity of 3 chips, the total is 3 x 0.26 = 0.78 µs, which is the
Propagation Delay time granularity.
And now let's translate this minimum value into distance: if light travels 300,000 km in 1 second,
what distance does it travel in 0.78 µs? Answer: 234 meters.

In other words, we have the Propagation Delay with a granularity of 234 meters!
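The arithmetic above can be checked in a few lines (values as given in the text; the variable names are ours):

```python
# Check of the Propagation Delay granularity derived in the text.
CHIP_RATE = 3.84e6        # WCDMA chip rate, in chips per second
SPEED_OF_LIGHT = 3e8      # meters per second (300,000 km/s)

chip_time = 1 / CHIP_RATE         # duration of one chip: ~0.26 microseconds
granularity_s = 3 * chip_time     # the PD IE has a granularity of 3 chips
granularity_m = SPEED_OF_LIGHT * granularity_s

print(round(chip_time * 1e6, 2))  # 0.26 (microseconds per chip)
print(round(granularity_m))       # 234 (meters per Propagation Delay step)
```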

Note: it is important to know that this distance information is available to the system not only at
the establishment of the call, but also during its entire existence.

Round Trip Delay - Round Trip Time (RTT)


When we talk about Propagation Delay, there's another very important concept, related to the
subject and used in several other areas that involve communication between two points: the Round
Trip Delay, or Round Trip Time.
Let's understand what it is with an example. Imagine a simple communication between two people,
where the first one says 'Hi', and the second one answers 'Hi' as well.

In an ideal world, the first person's speech travels to the second one, taking a certain amount of
time (t1), and the second person's reply returns in a time (t2). So, we have a total time elapsed
from when the first person said 'Hi' until he received the other person's answer. This is the Round
Trip Time: the time for a signal to travel a route until the response is received back at the
source.
Bringing this analogy to a UE and a NodeB, we have the image below.

:: RTT = (t1 + t2)


In fact, the approach above is very close to reality. But we also have to consider the time the
receiver takes to 'process' the information, i.e. the time it takes to respond after receiving the
information.

Considering then this 'latency' time (TL), the RTT becomes:

:: RTT = (t1 + t2) + TL

So, we now understand what RTT is. But how do we use it?


This information is very important to the system, and can be used for several purposes. One of
them, for example, is to find UE locations. Our goal today is to know all the means of finding the
location information of the UE's, remember?
Well, this is another method (in addition to the counters, as we shall see soon). When the NodeB
sends a message to the UE, it knows exactly what time it is. And when it receives the response from
the UE, it also knows exactly that other time!
So, it just subtracts the times to find the RTT, and calculates the distance! Note: the time used
for the calculation is half of the RTT, as the RTT covers the round-trip path. In this case, the
latency time at the receiver is 'disregarded'.
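A minimal sketch of this calculation (the helper name is ours; the latency term defaults to zero, i.e. 'disregarded' as above):

```python
SPEED_OF_LIGHT = 3e8  # meters per second

def distance_from_rtt(rtt_s, latency_s=0.0):
    """Estimate the UE-NodeB distance from a measured round trip time.

    The one-way travel time is half of the RTT, after removing any
    known processing latency at the receiver."""
    one_way_s = (rtt_s - latency_s) / 2
    return SPEED_OF_LIGHT * one_way_s

# Example: an RTT of 10 microseconds (latency disregarded)
print(round(distance_from_rtt(10e-6)))  # 1500 meters
```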
With this distance information we can draw a circle with the likely area where the UE is. And if it
is being served by several cells, the intersection of the circles of each one gives us a more
accurate position (this is what we call 'Triangulation'). These calculations become even more
accurate when other information is used together, such as 'CellID', MCC, RNC, LAC and Call Logs
(CHR), with much more detailed information.

But let's go back to the case where we only use the Propagation Delay information - which is our
focus today - and which already gives us sufficient input for several very interesting analyses.

TA and PD (Propagation Delay) counters


The Propagation Delay information is (also) available in the simple form of performance counters.
These counters are available in pre-set ranges according to each vendor. The ranges vary from 1
Propagation Delay step to several 'grouped' Propagation Delay steps.
For example, Huawei has some TA ranges in GSM, and other PD ranges in WCDMA (Note: Huawei calls
these propagation delay counters TP instead of PD). In an 'ideal' scenario, we would have counters
for 'each' Propagation Delay.

Actually, that's not what happens, because, as we said before, they may be grouped into ranges.
Note: the reason for this is not important here, but too many ranges may actually even disrupt the
analysis.
TP (Propagation Delay WCDMA in Huawei) has 12 ranges.

In the above figure we have the TP ranges, from 0 to 11.

For TP_0 the UE is between 0 and 234 meters from NodeB;

For TP_1 the UE is between 234 and 468 meters from NodeB;

...

For TP_36_55 the UE is between 8.4 and 13.1 km from NodeB;

And for TP_56_MORE the UE is more than 13.1 km from NodeB.
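The ranges above can be reproduced with a small sketch, assuming each elementary TP bin spans one 234 m Propagation Delay step (`tp_range_m` is our own helper, not a vendor counter name):

```python
GRANULARITY_M = 234  # meters per Propagation Delay step (3 chips, as derived earlier)

def tp_range_m(bin_start, bin_end=None):
    """Distance ring (min_m, max_m) covered by TP_<start> or TP_<start>_<end>."""
    if bin_end is None:
        bin_end = bin_start
    return bin_start * GRANULARITY_M, (bin_end + 1) * GRANULARITY_M

print(tp_range_m(0))       # (0, 234)      -> TP_0
print(tp_range_m(1))       # (234, 468)    -> TP_1
print(tp_range_m(36, 55))  # (8424, 13104) -> TP_36_55, ~8.4 to ~13.1 km
```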

In GSM (Huawei) we have the same concept.

Note: See, however, that the number of ranges here (GSM) is much bigger, and they only begin to be
grouped from 30 onwards (from almost 17 km!).
With the counters organized in such different ways - grouped into different range granularities,
with different distance steps (550 m for GSM and 234 m for WCDMA) - it is very difficult to analyze
the propagation; or rather, it is almost impossible to compare them...
So what do we do, since we need to analyze the distribution of the UE's in a generic way, no matter
if they are using 2G or 3G?
The solution that we found at telecomHall was to make an 'approximation', that is, a way to be able
to see where we have more concentrated UE's, no matter if at that moment they are using 2G or 3G.
Especially because this 'distribution' among technologies and carriers depends on several factors,
such as selection and handover parameters, and also physical adjustments of the radiating system.
But the 'concentration' of users does not depend on these factors: the total amount of users in a
particular area is always the same!

For this, the 'Hunter Propagation Analyzer' module uses a particular methodology and counters,
allowing us to make this approximation: we created a range and called it PDTA. As 3G (Huawei, which
we are using as an example) has fewer ranges - only 12 - we made the initial PDTA definition based
on it. The result can be seen in the table below.

Of course this approximation or 'methodology' is not perfect, but in practice the outcome is very
efficient. In addition, if you need a more detailed analysis (for example, if you need more accuracy
than the approximation presented here), just look at the original table, which contains each counter
in its standard range with the original granularity.
For other vendors, the ranges may be different, but the methodology is always the same.
In Ericsson for example, the Propagation Delay WCDMA counter is 'pmPropagationDelay', and it is
collected by the RNC just like in Huawei.

It has 41 bins: the first indicates the maximum delay in chips (the Cell Range), and the others (1
to 40) inform the number of samples in the period, as a percentage of the maximum Cell Range.
When the UE tries to connect at a point beyond the Cell Range, it will fail.
Regarding the bins, the distribution goes from 0 to 100%, following the rule below:

bin1: samples between 0 and 1% of Cell Range (for example, if the Cell Range is 30 km, bin1 has the samples
between 0 and 300 m from NodeB);

bin2: samples between 1% and 2% of Cell Range;

bin40: samples between 96% and 100% of Cell Range.
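As a sketch, these bin boundaries can be converted to distances in the same way (`bin_edges_m` is our own helper; each bin is defined by the percentage slice of the configured Cell Range it covers):

```python
def bin_edges_m(pct_low, pct_high, cell_range_m):
    """Distance ring (min_m, max_m) for a bin covering a % slice of the Cell Range."""
    return cell_range_m * pct_low / 100, cell_range_m * pct_high / 100

# With a Cell Range of 30 km:
print(bin_edges_m(0, 1, 30_000))     # (0.0, 300.0)       -> bin1, as in the text
print(bin_edges_m(96, 100, 30_000))  # (28800.0, 30000.0) -> bin40
```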

And the PDTA 'adjustment' can be done in the same way, depending on your need.
Conclusion: different vendors have different propagation counters, in different formats - but the
information is always the same! In all cases we can do the calculations that bring the analysis into
the same comparison universe, with the benefits illustrated above.

Distribution of Radio Link Failure (GSM) and EcNo (WCDMA)


Okay, we've seen today how to check the distribution of UE's on 2G and/or 3G networks based on
their counters. But in addition, we also have other equally interesting information!
In GSM, in addition to PDTA, we are able to count Radio Link Failures. And this gives us a great
opportunity to cross this information with the number of Call Drops! The rule is simple: where we
have a lot of Radio Link Failures, very probably we also have a lot of Dropped Calls! The relation
is straightforward.
And in WCDMA, in addition to PDTA, we also have the average value of EcNo, that indicates the
average quality of a given cell/region!
Note: In Huawei, for the average value of Ec/No for each TP, take the counter value and use the
formula: EcNo = (value - 49) / 2.
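A quick sketch of that conversion (assuming the formula exactly as quoted; `ecno_db` is our own helper name):

```python
def ecno_db(counter_value):
    """Average Ec/No in dB from the Huawei per-TP counter value."""
    return (counter_value - 49) / 2

print(ecno_db(49))  # 0.0 dB
print(ecno_db(25))  # -12.0 dB
```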

TA in 4G (LTE)
Just as in 2G and 3G, we are also able to get the UE distribution information in LTE. The concepts
applied are the same as those already seen; we can only point out that in LTE we have both TA and
PD.
As today's tutorial is already quite extensive, we will finish this part here, but with the certainty that if
you assimilated what was presented, without any major problems you will be able to extend this
information to your specific scenario.

Practical Analysis
After having seen - even in a little more detail - the concepts of propagation (including Failures
in GSM and EcNo in WCDMA), let's see some possible analyses that we can do in practice.
We have already said that a professional with experience in this kind of analysis can significantly
improve network indicators such as Retainability and Accessibility. But how is this done?
Simple: with the propagation analysis, it is possible to identify cells whose coverage is much
greater than planned/expected - 'overshooting' cells - especially if they are reaching places where
we have other cells with better signal level!

In this case, we have pilot pollution, interference and high transmit power. As a result, an
increase in Establishment Failures and Call Drops, both in the overshooting cell and in the others
where it is interfering.
In addition, we can discover cells that have their coverage area in the same direction (sector),
but with very different concentrations (for example, in the case of 3 WCDMA carriers, one carrier
can have its highest concentration of users closer to the cell, and another can have this
concentration farther away - don't worry, we will see examples below and it will be easier to
understand).
This difference in distribution/concentration can also be seen between the multiple technologies of
the sector, for example, if the GSM coverage is much smaller than the WCDMA one, or vice versa. In
this case, it serves as a great call for adjustments of tilts and azimuths of the antennas in this
sector.

Practical analysis Worksheets and Charts


Using data from simple counters, we already have excellent means of analysis, such as charts and
tables. For example, the following is a complete view of a particular sector of our network (all
cells of all technologies and all carriers). Note that the simple thematic distribution obtained
with Excel Conditional Formatting already gives us a clear vision of this sector.

Filtering only for the contribution ('PDTA_P') of each cell, we can clearly see that one sector
(Hxxx21) has its coverage beyond what is expected (1).

In addition, we were able to match (1) with failures (now filtering by 'ECNORLFAIL_P'), showing the
immediate need for actions in this sector.

Practical Analysis - Maps


In addition to the simple analyses with charts and tables, we can geo-reference the data, with a
direct relationship to the coverage area. For demonstration, we created some dummy PDTA data for our
network. Note: a real network has many more cells, but with these few sample data we can show the
main points of the analysis.

Continuing, we will then see the PDTA data of 4 example sites plotted.
To analyze the PDTA distribution in Google Earth, we use a report generated by the 'Hunter GE
Propagation Analyzer' module*, and so we need to know the criteria we are using: in this report, the
heights (1) of each region (PDTA from 0 to 11) represent the percentage of samples in that region.
And the colors (2) represent the Quality: EcNo for UMTS, and Radio Link Failure % for GSM.
*Note: you can build your own reports in Google Earth and/or Mapinfo; just follow and apply the
concepts presented here with your own tools/macros.

The data are grouped in 'Folders', with the first level being the sector (1) (a specific direction for all
cells of all technologies and carriers). At the second level, we have the ranges (2) of PDTA percentage
(how many samples from total cell samples we have in each region). And in the third level we have
cells/PDTA (3).

Equally important is the definition of the range used in the generation of the data, and
consequently in the legend. Note that we use the same coloring scale for EcNo and Radio Link
Failure.

So, no matter if the coverage is GSM or UMTS - for example, if a region is Red, we know it's bad!
(Either WCDMA EcNo worse than -16 dB, or GSM Radio Link Failure above 50%!)

Knowing these details, we can do some demonstrations. Zooming into a larger area, we see that we
have multiple cells with coverage in places where they should not be covering. Of course, these
points have few samples, but with very bad quality, as we see in the region shown below (1): ranges
mostly Pink, Red and Orange.

Analyzing specific cells, for example 'AAN', we see that its coverage area is much larger than it
should be (overshooting cell): both GSM (1) and UMTS (2) samples are more than 4 km from the serving
cell.

In this case, we have another interesting point, also seen below: most of the users in region (1)
are served almost exclusively by GSM, while in region (2) almost all users use WCDMA. This is
another point of optimization: these coverages should be, as far as possible, 'proportional'.

Another example: the 'ABU' site is a typical case needing urgent action, for example by increasing
the tilts of the overshooting cells. Too many samples at more than 4 km, and with poor quality. As
these are cells in an urban area, and in addition we have other cells serving those distant
locations, it is recommended to increase the tilt, and later run a new analysis.

The opposite of what we saw above is also possible: we can identify cells that have a very good
coverage area (in this case, a more contained area), and with excellent quality levels (Green and
Blue).

We could go on demonstrating several other analyses that are possible using the data presented here
today. However, the best way is for you to use this incredible resource in your own analyses,
because without doubt it represents a big help.
Many people try to optimize the network based on parameter changes only. But we saw that in many
cases, like the ones above, there may be situations where the most recommended action is physical
intervention (adjusting Antenna, Height, Azimuth, Tilt, etc...).

No doubt the analysis presented in this tutorial are essential to the improvement of any mobile
network, and if you so far haven't used, it's a good time to start.

Conclusion
We learned today an important concept used in many areas of 2G/3G/4G mobile networks: the
propagation delay, used as a tool for assessing the geographical distribution of users.
The measures are the Timing Advance, which in GSM is measured by the UE, and the Propagation Delay,
which in UMTS is calculated by the RNC. Both allow us to estimate the distance from the UE to the
serving cell, consequently allowing the several analyses exemplified above.
The TA in GSM has a granularity of 550 meters, and the Propagation Delay in WCDMA has a granularity
of 234 meters. Using these measures, we can 'see' where network users are distributed at the
level of cell/carrier/technology in each region.
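As a quick sketch of what these granularities mean in practice, the helpers below (function names are ours, purely illustrative) convert the measured steps into approximate distances:

```python
# Illustrative only: convert TA / Propagation Delay steps into approximate
# distance, using the granularities quoted above (550 m and 234 m per step).
def gsm_distance_m(timing_advance):
    """GSM: each Timing Advance step corresponds to roughly 550 meters."""
    return timing_advance * 550

def wcdma_distance_m(propagation_delay):
    """WCDMA: each Propagation Delay step corresponds to roughly 234 meters."""
    return propagation_delay * 234

# A UE reporting TA = 8 is about 4.4 km away - an overshooting-cell candidate:
print(gsm_distance_m(8), wcdma_distance_m(19))  # 4400 4446
```

Samples falling beyond, say, 4 km from an urban cell are exactly the kind that flagged 'AAN' and 'ABU' above.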
In addition, we have other measures, also mapped by region: EcNo for WCDMA and Radio Link Failure
for GSM.
All these measures, together with other network information (Radiating Systems, Azimuths, Tilts, etc.),
give huge help to the telecom professional in analysis and optimization tasks, with significant results
for the improvement of the quality of the entire network.
We hope you enjoyed. Until our next meeting!

What is RRC and RAB?


To work with modern wireless networks such as UMTS and LTE, it is essential that the telecom
professional has a full understanding of their basic concepts, such as those that control call
establishment and maintenance, whether for voice (CS) or data (PS).

In this scenario, RAB and RRC are two of the most important concepts because they are responsible
for all the negotiation involved in those calls.

In addition to RAB and RRC, we still have some other terms directly involved in this context, such as
RB, SRB and TRB, among others. These are also important concepts, since without them RAB and RRC
could not exist.
So let's try to understand today - in the simplest possible way - what the role of RRC and RAB is in the
calls of these mobile networks in practice. As it becomes necessary, we will also talk about other concepts.
Note: All telecomHall articles are originally written in Portuguese. Following we translate to English
and Spanish. As our time is short, maybe you find some typos (sometimes we just use the automatic
translator, with only a final and 'quick' review). We apologize and we have an understanding of our
effort. If you want to contribute translating / correcting of these languages, or even creating and
publishing your tutorials, please contact us: contact.

Introduction
To start, we can divide a call into two parts: signaling (or control) and data (or information).
Anticipating the key concepts, we can understand the RRC as responsible for the control part, and the
RAB as responsible for the information part.
As mentioned, other auxiliary concepts are involved in calls, but our goal today is to learn the most
basic ones - RRC and RAB - allowing us to evolve in our learning later.
Oddly enough, even professionals who already work with UMTS-WCDMA and LTE networks have
trouble fully understanding the concepts of RRC and RAB. And without this initial understanding,
they can hardly evolve with clarity and efficiency in their daily work.
Without further introduction, let's go straight to the point and try to understand, once and for all,
these very important concepts.

Analogy
As always, and as is the telecomHall custom, let's make an analogy that helps us understand the
functioning of the RRC and RAB in practice.
Let's start by imagining the following scenario: two people separated by a cliff. On the left side, a person
(1) wants to buy some things that are for sale in a store or deposit (2) on the right side. On the right
side, in addition to the deposit, we also have a seller (3), who will help the buyer negotiate
with the deposit.
As additional or auxiliary objects (4), we have some iron bars of different sizes, and some cars - some like train wagons, others like remote-control cars.
In short, we have the situation outlined in the image below.

And so, how this situation can be solved?


Let's continue with a possible solution: the buyer on the left writes his request on a note, ties it to a small
stone he found on the ground, and sends (1) it to the seller on the other side. The stone thus carries
the information, or initial request.

The seller receives the request, but she needs to forward it to the deposit in order for the purchases to be
sent. She sends the request on a remote-control car (1), which runs a previously demarcated path to
the deposit.

Some time later, the deposit's response arrives at the seller (1), who then checks whether she will
be able to send the goods or not.

For us to proceed with our call, let's consider a positive response. That is, what the buyer wants
is available - the 'resources' exist.
The seller realizes that, to fulfill the request and send the purchases, she will need to build a
'path' (1) between the two ends of the cliff, so that the wagons can carry the orders/receipts
and purchases back and forth. She then uses some of her iron bars and creates a link between the two sides.

Once the whole path between those involved is established, requests can be sent from both sides, and
the purchases - or any other information - can be transferred over the different paths by the wagons/cars!

If you managed to understand how the above problem was solved, congratulations: you have just
understood how the most common form of UMTS-WCDMA and LTE communication happens!
Although analogies are not perfect, they help us a lot to understand the complex functioning of these
networks, especially regarding new concepts such as RRC and RAB - and also a term used very often,
the 'bearer', so much so that it's worth talking a little about it.

What is Bearer?
If we look up the word 'bearer' in the dictionary, we'll find something like transporter, or carrier. In a
simple way: one who carries or conveys something from one point to another. In a restaurant,
we could compare the 'bearer' to a waiter.

But from the telecommunications point of view, 'bearer' is best understood as a 'pipe' that connects
two or more points in a communication system, through which the data flows.

Technically speaking, it is a channel that carries Voice or Data: a logical connection between different
points (nodes) that ensures the packets traveling through it have the same QoS attributes.
Explaining better: each 'bearer' has several associated parameters, such as the maximum
delay and the packet loss limit, and it is these attributes that make sure every packet going through
the same channel receives the same QoS treatment.
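As a toy illustration of this idea (the class, attribute names and values below are ours, purely hypothetical), a bearer can be modeled as a set of QoS attributes shared by every packet in the pipe:

```python
from dataclasses import dataclass

# Hypothetical sketch: a 'bearer' as a pipe whose packets all share
# the same QoS attributes, as described above.
@dataclass(frozen=True)
class Bearer:
    name: str
    max_delay_ms: int     # maximum tolerated transfer delay
    max_loss_rate: float  # packet loss limit

voice = Bearer("conversational", max_delay_ms=100, max_loss_rate=1e-2)
web = Bearer("interactive", max_delay_ms=300, max_loss_rate=1e-6)

def same_pipe(a, b):
    """Packets travel in the same bearer only if the QoS attributes match."""
    return (a.max_delay_ms, a.max_loss_rate) == (b.max_delay_ms, b.max_loss_rate)

print(same_pipe(voice, voice))  # True
print(same_pipe(voice, web))    # False
```

The point is only that the QoS attributes belong to the pipe, not to individual packets.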

General Flowchart - RRC, RAB and Others


Now that we know what is bearer, let's go back to the analogy presented earlier, but now bringing it
to the real, more technical side.
All that we'll talk can be summarized in a single figure, having all the concepts seen today, and that
will be detailed from now on.
Note: If you manage to understand the concepts that will be explained in the figure below, you will be
with a great base for both WCDMA and LTE networks. This is because, in order to facilitate we use
WCDMA nomenclatures, but the principle is pretty much the same in LTE. Just do the equivalent
replaces, like NodeB for eNB.

In that fictitious scenario, the seller is the UTRAN, responsible for creating and maintaining the
communication between the UE (buyer) and the CN (deposit) so that the QoS requirements of each are
met.

UTRAN: UMTS Terrestrial Radio Access Network

  o NodeB

  o RNC

UE: User Equipment

CN: Core Network

  o MSC: for circuit-switched voice services

  o SGSN: for packet-switched services

The cliff is the Uu Interface, between the UE and the UTRAN; and the path the remote-control
car travels to the deposit is the Iu Interface, between the UTRAN and the CN.
Sending the requests and receipts is the signaling part, or the RRC. The shipment of the purchases is the
data part, or the RAB. In our scenario, the rails are the RRC connection, and the RAB is the full service of
sending data between the UE and the CN.

RRC: Radio Resource Control

RAB: Radio Access Bearer

Note: the RRC is in Layer 3 - control plane - while the RAB occurs between the UE and the CN, in the user
plane.
The railcars are the RBs, and convey the information over the radio path. These wagons define what
type of thing will be transported, and in what quantity. Similarly, the RBs define what type of data will
flow within the RRC connection, which can be Data or Signaling. When the QoS attributes change, the RBs
associated with that RRC connection need to be reconfigured.
The remote-control cars are the Iu bearers, and carry information over the Iu Interface (between the UTRAN
and the CN), whether CS or PS.

RB: Radio Bearer

Iu bearer: the bearer on the Iu Interface

Note: RAB is the combination of RB and Iu bearer.
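The note above ('RAB = RB + Iu bearer') can be illustrated with a tiny, hypothetical object model (the class names are ours, not from any 3GPP API):

```python
# Hypothetical illustration of 'RAB = RB + Iu bearer' from the note above.
class RB:        # Radio Bearer: UE <-> RNC, over the radio path
    def __init__(self, kind):
        self.kind = kind        # 'SRB' (signaling) or 'TRB' (traffic)

class IuBearer:  # bearer on the Iu interface: RNC <-> CN
    def __init__(self, domain):
        self.domain = domain    # 'CS' or 'PS'

class RAB:       # Radio Access Bearer: the end-to-end service, UE <-> CN
    def __init__(self, rb, iu):
        self.rb, self.iu = rb, iu

# A packet-data RAB: a traffic radio bearer plus a PS Iu bearer.
rab = RAB(RB("TRB"), IuBearer("PS"))
print(rab.rb.kind, rab.iu.domain)  # TRB PS
```

Each leg handles its own path (radio or Iu), while the RAB is the composition the CN sees.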


As examples of RAB for some services and different rates we have:

The Conversational RAB and the Interactive RAB can be used together; in this case we have
MultiRAB.
The RB is a Layer 2 connection between the UE and the RNC, and can be used for signaling/control or
for user data. When it is used for Signaling or Control messages it is called an SRB; when it is
used for user data it is called a TRB.

SRB: Signalling Radio Bearer (Control Plane)

TRB: Traffic Radio Bearer (User Plane)

Note: in an optimized network, we can find much of the traffic being handled by HSPA bearers, even
with MultiRAB. This option frees CE (Channel Element) resources, relieving the load on R99 (which can
only use those resources). However, it should be done with caution, because if improperly configured
it can degrade the Performance Indicators with Blocking (Congestion) and Failures.
As you've probably noticed, we're talking about several new technical terms, but these are the terms
you'll find, for example, when reading UMTS or LTE call flowcharts. And if you can understand, at
least in part, the concepts presented today, everything will be much easier.

Let us then take a look again on our figure, and continue our analogy.

As we saw, in telecom we work with the concept of layers. This way of seeing the network brings
us many advantages, mainly because we are able to 'wrap' the physical access. In this way, any
modification or replacement can be made with less complexity.
We don't need to tell you how complex and continuously changing the radio path is, right? This
structure using bearers ensures this simplification: the RNC and CN deal with the QoS requirements on
the path between them (Iu Interface), and only the RNC has to worry about meeting them over the complex
radio path.
Sure, but why do we have two types of carriers - wagons and remote-control cars? The answer lies
in the very characteristics of the two existing paths. Since the Iu is a more robust interface, and since
we have major changes in the RABs during connections, it is normal that the bearers are also
different for each path. It's like using a 4x4 pickup truck to climb a mountain, and a race car for an
asphalt race.

Regardless of the carriers, with the RAB the elements of the CN have the impression of a physical path to
the UE, so they don't need to worry about the complex aspects of radio communication.
For example, a UE can have 3 RABs between it and the RNC, and these RABs may be changing, as in
the case of soft handovers, while the RNC has only 1 Iu bearer for this connection.
From the point of view of the carriers, the main task of the UTRAN is managing these services on
these interfaces. It controls the Uu interface and, along with the CN, controls the provision of
services on the Iu interface.
Remember that in a communication between the UE and the CN, several other elements are involved,
mainly to negotiate the QoS requirements between both parties. These requirements are mapped into the
RABs, which are visible to both (UE and CN), and the UTRAN is responsible for creating and
maintaining these RABs so that everything is served in all aspects.
A little bit more details...

An RRC connection exists when a UE performs the call establishment procedure and gets resources
from the UTRAN. When an RRC connection is established, the UE will also get some SRBs. (If for some
reason the initial request is not accepted, the UE can make a new request after some time.)
Once the SRBs are established between the UE and the CN, the RNC checks a series of information,
such as the UE identity, the reason for the request, and whether the UE is able to handle the
requested service.
It is the RNC that maps the requested RABs into RBs, for transfer between the UE and the UTRAN. In
addition, it also checks the attributes of the RABs: whether they can be met by the available resources, and
even whether to activate or reconfigure radio channels (reconfiguration of lower-layer services), based on
the number of Signaling Connections and RABs to be transferred.
In this way, the impression is created that there is a physical path between the UE and the CN.
Remember again that no matter how many signaling connections and RABs there are between the
UE and the CN, there is only a single RRC connection, used by the RNC for control and transfer
between the UE and the UTRAN.
Now that we have seen a lot about RRC and RAB, let's learn just a few more concepts today - after
all, we already have enough information presented. Let's talk about AS and NAS.

AS (Access Stratum): the group of protocols specific to the access network

NAS (Non-Access Stratum): the other protocols, i.e. those that do not belong to the access network

From this point of view, the AS provides the RAB - an information transfer service - to the NAS.
The UE and the CN need to communicate (events/messages) with each other to perform several
procedures with many purposes. The 'language' of this conversation between them is called a
protocol.
The protocols are then responsible for allowing this conversation between the UE and the CN, and for
ensuring that the CN does not worry about the access method (be it GSM/GPRS, UTRAN or LTE). In our
case the RNC acts as the protocol intermediary between the UTRAN and the CN.
According to what we learned today, the RAB is carried:

Between the UE and the UTRAN: within the RRC connection. The RRC protocol is responsible for negotiating the
(logical) channels of the Uu and Iub interfaces, and for the establishment of dedicated signaling channels such
as SRBs and RBs on these interfaces.

Between the RNC and the CN: after being negotiated and mapped, in the RANAP protocol connection, through
the Iu interface (CS/PS).

  o RANAP: Radio Access Network Application Part

As we have seen above, the RNC maps the requested RABs into RBs using current radio network
resource information, and controls the lower-layer services. To optimize the use of these
resources, as well as the network bandwidth and the sharing of physical resources between different
entities, the UTRAN can also perform the function of CN message distribution.
For this, the RRC protocol transparently transfers messages from the CN to the access network through a
direct transfer procedure. When this occurs, a specific CN indicator is inserted in these messages,
and the entities with the distribution function in the RNC use this same indicator to direct the messages
to the appropriate CN, and vice versa.
But now it is starting to get more complex, and we have already reached our goal for today, which was to
learn the basics of RRC and RAB.
Everything we just talked about can be seen again in the figure below - the same one from the
beginning of the explanations.

RRC and RAB in GSM?


Okay, we understand how RRC and RAB work in UMTS-WCDMA and LTE networks. But in GSM, do
we have these concepts as well?
At first, the answer is NO. However, with what we learned today, we can make a comparison with
some 'equivalent' GSM procedures.
We can compare the SDCCH phase and the TCH phase of a GSM call with RRC and RAB in UMTS.
RRC is the Radio Resource Control, which works as the Control Plane in Layer 3. It is used primarily for
Signaling in UMTS. So we can compare it with the signaling in GSM, such as the Immediate Assignment
process for SDCCH resource allocation.
RAB is the radio access 'transporter' that works as the User Plane, providing data for the services
requested by the user. So we can compare it with the user part in GSM, such as the TCH Assignment.
For each service requested by the user we have only 1 RAB. For example, if the requested service is a
Voice Call (CS-AMR), then 1 CS RAB will be generated and provided to the user. The same is true for
PS.
So our equivalence table would be:

          | UMTS / LTE              | GSM
  --------|-------------------------|----------------------
  Control | RRC Connection          | Immediate Assignment
  User    | RAB Assignment (RNC-CN) | Assignment (BSC-MSC)

RRC Connection and RAB example


To finish for today, let's see (always in simplified form) a simple RRC connection and RAB.
Whenever the UE needs UTRAN resources, it asks for them. For these resources to be allocated, it
establishes an RRC connection with some SRBs.
In this case, a RAB connection is created to enable the transfer of user data. We remind you that the
RAB consists of RB + Iu bearer. The RAB is created by the CN, with a specific QoS request.
For a single UE, there may be multiple RABs per NAS service (CS or PS).
But let's stick to the initial procedure, that is, how the 'RRC Setup' procedure is performed, starting from
the UE's request.
The following figure shows this more clearly.

The RRC setup always has 3 steps:

1. The UE requests a new connection in the Uplink (RRC CONNECTION REQUEST);

2. With sufficient resources available, the 'RRC CONNECTION SETUP' message is sent in the Downlink, including
the establishment cause, along with the SRB configuration; (Note: otherwise, if the RRC connection cannot be
established, the message sent is 'RRC CONNECTION REJECT');

3. If all goes well, the UE sends the message in the Uplink: RRC CONNECTION SETUP COMPLETE.
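The three steps above can be sketched as a minimal flow (plain Python, not 3GPP-accurate; the message strings simply mirror the names in the text):

```python
# Minimal sketch of the three-step RRC setup described above.
# Illustrative only: it mirrors the message sequence, nothing more.
def rrc_setup(resources_available):
    log = ["UL: RRC CONNECTION REQUEST"]              # step 1: UE asks
    if not resources_available:
        log.append("DL: RRC CONNECTION REJECT")       # setup not possible
        return log
    log.append("DL: RRC CONNECTION SETUP")            # step 2: SRB config sent
    log.append("UL: RRC CONNECTION SETUP COMPLETE")   # step 3: UE confirms
    return log

print(rrc_setup(True))
print(rrc_setup(False))
```

Note how the reject path ends the exchange after step 2: the UE can only retry later, as described above.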

After this, MEASUREMENT CONTROL messages are sent in the Downlink, for the continuity of the
communication.
After the RRC connection is established, the UTRAN performs the checks between the CN and the UE, for
example the authentication and security operations.
The CN then informs the UTRAN of the RAB, in accordance with the requirements of the service requested
by the UE. As we have seen, the RAB comes after the RRC: without an RRC connection, no RAB can be
established.

Conclusion
We have seen today a simplified explanation covering a number of concepts involved in the
communication of the most modern mobile networks, primarily RRC and RAB.
With this conceptual base, we will continue to evolve in the next tutorials, with examples that make
the assimilation of these complex concepts a far less exhausting task than usual.
We hope that you have enjoyed it, and we count on your participation - for example, suggesting new
topics, or sharing our site with your friends. If possible, also leave your comments just below.
Until the next tutorial.

What is Retransmission, ARQ and HARQ?


It's very important to use solutions that improve the efficiency of the adopted model in any data
communication system. If the transmission is 'wireless', this need is even greater.
In this scenario we have techniques that basically check, or verify, whether the information sent by the
transmitter arrived correctly at the receiver. In the following example, we have a packet being sent
from the transmitter to the receiver.

If the information arrived properly (complete), the receiver is ready to receive (and process) new
data. If the information arrived with some problem - corrupted - the receiver must request that the
transmitter send the packet again (retransmission).

Shall we understand a little more about these concepts, increasingly used (and required) in current
systems?


Error Checking and Correction


We start by talking about errors. Errors are possible, mainly due to the transmission link. In fact, we
can even 'expect' errors when it comes to wireless data transmission.
If we have errors, we need to take some action. In our case, we can divide it into two steps: error
checking and error correction.
Error checking is required to allow the receiver to verify whether the information that arrived is correct or
not.
One of the most common methods of error checking is the CRC, or 'Cyclic Redundancy Check', in which
check bits (the CRC) are added to a group of information bits. The CRC bits are generated based on the
contents of the information bits. If an error happens to the information bits, the CRC bits are used
to detect it and help recover the degraded information.
The level of protection provided is determined by the ratio of the number of CRC bits to the number of
information bits. Above a certain error level, recovery is no longer possible and the packet is discarded.
CRC protection is used in practically all existing Voice and Data applications.
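As a hedged sketch of this mechanism, using Python's standard `zlib.crc32` (a CRC-32, just one of many CRC variants; the payloads are made up):

```python
import zlib

# Sketch of the CRC idea above: a checksum ('signature') travels with the
# information bits; the receiver recomputes it to detect corruption.
payload = b"information bits"
crc = zlib.crc32(payload)           # transmitter computes and appends this CRC

received_ok = b"information bits"   # arrived intact
received_bad = b"informat1on bits"  # one corrupted byte

print(zlib.crc32(received_ok) == crc)   # True  -> packet accepted
print(zlib.crc32(received_bad) == crc)  # False -> retransmission requested
```

A mismatch tells the receiver only that something went wrong, triggering the retransmission request discussed next.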
The following diagram shows a simplified demonstration of how the CRC is used.

And the CRC is directly connected to the Error Correction methods. There are various forms of Forward
Error Correction (FEC), but the main idea is, given a quality level on the link, to try to achieve the lowest
possible number of required retransmissions.
By minimizing the number of retransmissions we end up with a more efficient data flow,
improving - mainly - the 'Throughput'.
In a simplified way: the CRC lets you know whether a packet arrived 'OK' or 'NOT OK'. Every packet that is
sent carries a CRC, or 'signature'. As an analogy, it's like when we send a letter to someone and, at the
end, sign it: 'My Full Name'. If the letter is corrupted along the way, the other person receives it signed
'My Wrong Name', and tells the messenger: 'I don't know any 'My Wrong Name', this information
has some problem. Please ask the sender to send it again!'.

That is, I check the CRC. If the CRC is 'wrong', the information is 'wrong'. If the CRC is 'correct', the
information is probably 'correct'.

Retransmissions
Retransmissions are, then: sending information again (repeating it) to the receiver, after it makes such a
request. The receiver requests that the information be retransmitted whenever it cannot decode the
packet, or the result of decoding has an error. That is, after checking that the information that
reached the receiver is not 'OK', we request it to be retransmitted.

Of course, when we have a good link (SNR), without interference or problems that may affect data
integrity, we have virtually no need for retransmissions.
In practice, in the real world, this rarely happens, because links can face the most varied
adversities. Thus, an efficient mechanism to enable and manage retransmissions is essential.
We consider such a mechanism efficient when it allows data communication on a link to meet the quality
requirements that the service demands (QoS).
Voice, for example, is a service where retransmission does not apply: if a piece of information is lost
and then retransmitted, it arrives too late, and the conversation becomes unintelligible.
On the other hand, data services practically rely on retransmission, since most have - or allow - a
certain tolerance to delays, some more, some less; the exception is 'Real Time' services.
But it is also important to take into account that the greater the number of needed retransmissions,
the lower the data rate that is effectively reached: if the information has to be
retransmitted several times, it will take a long time for the receiver to obtain the complete, final information.

ARQ
Until now we have talked in a generic way about data retransmission, error checking and error correction.
Let's now see some real and practical schemes.

The simplest (and most common) scheme implementing the control described above is known as ARQ, or
'Automatic Repeat Request'.
In ARQ, when we have a 'bad' packet, the system simply discards it and asks for a retransmission
(of the same packet). For this, it sends a feedback message to the transmitter.

These feedback messages are messages the receiver uses to inform whether the transmission
was successful or not: 'ACKnowledgement' (ACK) and 'Negative ACKnowledgement' (NACK). These
messages are transmitted from the receiver to the transmitter, and respectively inform a good (ACK)
or bad (NACK) reception of the previous packets.
If in the new retransmission the packet keeps arriving with errors, the system requests yet another
retransmission (still of this same packet). That is, it sends another 'NACK' message.

Data packets that are not properly decoded are discarded, and each packet or retransmission
is decoded separately. That is, every time a packet arrives bad, it is discarded, and it is
requested that this same packet be retransmitted.
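A minimal sketch of this discard-and-repeat behavior (the link outcomes are scripted by hand, purely illustrative):

```python
# Toy stop-and-wait ARQ sketch: each bad copy is discarded and the SAME
# packet is retransmitted until one arrives clean.
def send_with_arq(link_outcomes):
    """link_outcomes: one result per transmission attempt, True = arrived OK."""
    feedback = []
    for ok in link_outcomes:
        feedback.append("ACK" if ok else "NACK")
        if ok:
            break  # packet finally decoded; move on to the next one
    return feedback

# Packet corrupted twice, then received correctly on the third try:
print(send_with_arq([False, False, True]))  # ['NACK', 'NACK', 'ACK']
```

Every NACK costs a full retransmission of the identical packet, which is exactly the inefficiency discussed next.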
But note that if there were no retransmissions, the performance of the data flow would be much better.
In the example below, compared with the previous one, we transmit more information - 3 times as much in the
same time interval.

Unfortunately, we can't do much about the link conditions. Or rather, we can improve the link's
performance, for example through configuration parameter optimization, but we'll always be
subject to adverse conditions. In this case, our only way out is to try to minimize
retransmissions.
And that's where other, more 'enhanced' retransmission techniques or schemes arise. The main
one is HARQ.

Hybrid ARQ (HARQ)


HARQ is the use of conventional ARQ along with an Error Correction technique called 'Soft
Combining', which no longer discards the received bad data (data with errors).
With 'Soft Combining', data packets that are not properly decoded are no longer discarded. The
received signal is stored in a 'buffer', and will be combined with the next retransmission.
That is, two or more received packets, each with insufficient SNR to allow individual decoding, can
be combined in such a way that the total signal can be decoded!
The following image explains this procedure. The transmitter sends a packet [1]. Packet [1]
arrives and is 'OK'. Since packet [1] is 'OK', the receiver sends an 'ACK'.

The transmission continues, and packet [2] is sent. Packet [2] arrives, but let's now consider
that it arrives with errors. Since packet [2] arrived with errors, the receiver sends a 'NACK'.

But this (bad) packet [2] is not thrown away, as is done in conventional ARQ. Now it is
stored in a 'buffer'.

Continuing, the transmitter sends another copy, packet [2.1], which (let's consider) also arrives with errors.

We then have in the buffer: the bad packet [2], and another packet [2.1] which is also bad.
By adding (combining) these two packets ([2] + [2.1]), do we obtain the complete information?
If so, we send an 'ACK'.

But if the combination of these two packets still does not give us the complete information, the
process must continue - and another 'NACK' is sent.

And then we have another retransmission. Now the transmitter sends a third copy, packet [2.2].
Let's consider that this time it is 'OK', and the receiver sends an 'ACK'.

Here we can see the following: along with the received packet [2.2], the receiver also has packets
[2] and [2.1], which were not dropped and are stored in the buffer.
In our example, we see that the packet arrived 'wrong' 2 times. And what is the limit for these
retransmissions? Up to 4; i.e., we can have up to 4 retransmissions in each process. This is the
maximum number supported by the 'buffer'.
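A hedged sketch of this soft combining (the SNR values, the decode threshold, and the simple energy-addition model are all made up for illustration): each bad copy stays in the buffer, and its energy is added to the next copy until decoding succeeds:

```python
# Toy soft-combining model: copies individually below the decode threshold
# are buffered and summed until the combined signal can be decoded.
DECODE_THRESHOLD = 10.0  # assumed effective SNR needed to decode (made up)

def harq_receive(copy_snrs, max_retx=4):
    """copy_snrs: SNR of the original packet plus each retransmitted copy."""
    buffered = 0.0
    for i, snr in enumerate(copy_snrs[: max_retx + 1]):  # original + 4 retx
        buffered += snr  # combining adds the stored copy's energy
        if buffered >= DECODE_THRESHOLD:
            return ("ACK", i + 1)  # decoded after this many copies
    return ("FAIL", len(copy_snrs))

# No single copy reaches 10, but together 4.0 + 3.5 + 3.0 = 10.5 decodes:
print(harq_receive([4.0, 3.5, 3.0]))  # ('ACK', 3)
```

In conventional ARQ the first two copies would have been thrown away, and the third copy alone (SNR 3.0) would still fail.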

Different HARQ Schemes


Going back a little to the case of conventional ARQ: whenever we send a packet and it arrives with
problems, it is discarded.
Taking the above example, when we send packet [2] and it arrives with errors, it is discarded.
And this same packet [2] is sent again.
What happens is that we no longer have the concept of a 'packet version' - [2.1], [2.2], etc. We do
not have the 'redundancy version', or the gain we get in HARQ processing.
To understand this, we need to know that the information is divided as follows:
[Information + Redundancy + Redundancy]
When we transmit packet [2], we are transmitting this:
[Information + Redundancy + Redundancy]
When we retransmit the same packet [2], we are retransmitting all of it again:
[Information + Redundancy + Redundancy]

But when we use HARQ, and retransmit packet [2.1] or [2.2], we have two possibilities:

Either retransmit that same information again;

Or retransmit only the redundancy.

And then, if we retransmit less information (only redundancy), we spend less energy, and the process
runs much faster. With this we have a gain!
That is, we work with different 'redundancy versions', which allow us to have a gain in the
retransmission. This is what 'Redundancy Version' means.
The HARQ scheme with 'Soft Combining' can be 'Chase Combining' or 'Incremental Redundancy'.

HARQ Chase Combining


Chase Combining: we combine copies of the same information (each retransmission is an identical
copy of the original packet).
We transmit some information, it arrives wrong, and we need to retransmit. We retransmit
the same information - so here we don't have much gain.

HARQ Incremental Redundancy


Incremental Redundancy: we retransmit only a portion that was not transmitted before.
Thus we retransmit less information; less information means fewer bits and less energy. And this gives a
gain!
Redundancy bits are retransmitted gradually to the receiver, until an ACK is received.
With this, we adapt to changes in the link condition. The first retransmission may, for example,
contain redundancy bits or not. If necessary, a small number of these bits is retransmitted. And so
on.
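The difference between the two schemes can be sketched as follows (the block names 'INFO'/'RED1'/'RED2' are made up labels; real systems send sets of coded bits):

```python
# Toy contrast between the two redundancy-version schemes described above.
SYSTEMATIC = "INFO"          # the information bits
PARITY = ["RED1", "RED2"]    # redundancy blocks

def chase_retransmission(_attempt):
    # Chase Combining: every copy is identical to the original packet.
    return [SYSTEMATIC] + PARITY

def ir_retransmission(attempt):
    # Incremental Redundancy: later copies carry only (different) redundancy,
    # so each retransmission is smaller than the original packet.
    if attempt == 0:
        return [SYSTEMATIC] + PARITY
    return [PARITY[attempt % 2]]

print(chase_retransmission(1))  # ['INFO', 'RED1', 'RED2']
print(ir_retransmission(1))     # ['RED2']
```

The IR retransmission carries one block instead of three: fewer bits, less energy, faster turnaround - the gain described above.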

Finishing for today: what are the 2 key points of HARQ? Why does it give a gain?

First, because from wrong packets 1 and 2 we can get a correct one, since we do not discard erroneous packets
anymore.

Second, because we can - also in the retransmission - send less information, and streamline the process.

The use of HARQ with 'Soft Combining' increases the effective received Eb/Io value with each
retransmission, and therefore also increases the likelihood of correctly decoding the retransmissions, in
comparison to conventional ARQ.

We send a packet and it arrives with errors: we keep this packet, receive the retransmission, and
then add or combine both.

HARQ Processes (Case Study)


What we have seen so far clarifies the concepts involved. In practice, this type of
retransmission protocol is called 'Stop and Wait' (there are other, similar kinds of protocols).
That would be: send the information and stop; wait for the response before sending more information.
Send, wait for the response. Send, wait for the response...

Is that how it works in practice? No! In practice, we work with a number of parallel 'processes', which may
vary, for example 4, 6 or 8. The following image illustrates this more clearly.
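A toy sketch of this idea (the round-robin scheduler below is ours, purely illustrative): while one process waits for its ACK/NACK, the other processes keep the link busy:

```python
# Toy model of parallel stop-and-wait HARQ processes: in each transmission
# interval a different process uses the link, so no interval is wasted
# waiting for feedback.
def schedule(num_processes, num_intervals):
    """Round-robin assignment of transmission intervals to HARQ processes."""
    return [f"proc{t % num_processes}" for t in range(num_intervals)]

# With 4 processes, proc0 transmits again only on interval 4 - by then its
# ACK/NACK for interval 0 has (ideally) arrived:
print(schedule(4, 8))
```

With a single process, intervals 1 to 3 would sit idle waiting for feedback; the parallel processes fill them.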

Other types of HARQ


New schemes are constantly being developed and used, such as Type III HARQ, which uses self-decodable packets.
But going into these variations, terminology and considerations is not the scope of our tutorial, which was
simply to introduce the concepts of Retransmission, ARQ and HARQ.
Based on the key concepts illustrated here today, you can extend your studies the way you want;
however, we believe the most important thing was achieved: understanding how all the cited
concepts work.

JAVA Applet
Below, you can see how some retransmission schemes work. There are several applets available, for
the many possibilities (ARQ, HARQ, with Sliding Windows, Selective, etc.).
The following is a link to a JAVA Applet that simulates a 'Selective Repeat protocol' transmission.
http://media.pearsoncmg.com/aw/aw_kurose_network_4/applets/SR/index.html

Conclusion
This was another tutorial on important issues for those who work with IT and Telecom: data
Transmission and Retransmission techniques, ARQ and HARQ.
ARQ is used for applications that tolerate a certain delay, such as web browsing and audio/video
streaming. It is widely used in WiMAX and WiFi communication systems. However, it cannot be used
for voice transmission, as for example in GSM.
HARQ, for example, is used in HSPA and LTE, and therefore must be a well-understood concept for
those who work, or want to work, with these technologies.
We hope you enjoyed it. And until our next tutorial.

IP Packet switching in Telecom - Part 4


And then we finally get to NGN signaling protocols: SIP and SDP. The picture below was extracted
from RFC 3261 and gives a fairly good example of a SIP dialog between two users, Alice and Bob.

The entities involved in the call setup are called User Agents (UA). UAs which request services are
called User Agent Clients (UAC), and those which fulfill requests are called User Agent Servers (UAS).
Although the basic operating mode is end-to-end, the model supports the use of intermediate
servers, which can act as proxies (relaying requests between users) or as back-to-back User Agents
(B2BUA). In the picture, Alice's softphone and Bob's SIP phone are the end-to-end user agents, while
there are two proxy servers: atlanta.com and biloxi.com. Linking this with what we already know
about IMS, we can identify the P-CSCF as a SIP proxy server, while the communicating UEs are the
end-to-end UAs.

Note: Also visit my blog Smolka et Catervarii (portuguese-only content for the moment)
Quoting RFC 3261:
SIP does not provide services. Rather, SIP provides primitives that can be used to implement
different services. For example, SIP can locate a user and deliver an opaque object to his current
location. If this primitive is used to deliver a session description written in SDP, for instance, the
endpoints can agree on the parameters of a session. If the same primitive is used to deliver a photo
of the caller as well as the session description, a caller ID service can be easily implemented. As this
example shows, a single primitive is typically used to provide several different services.
SIP primitives are:

REGISTER: indicates a UA's current IP address and the Uniform Resource Identifiers (URIs) for which it would
like to receive calls;

INVITE: used to establish a media session between UAs;

ACK: confirms message exchanges with reliable responses (see below);

PRACK (Provisional ACK): confirms message exchanges with provisional responses (see below). This was added
by RFC 3262;

OPTIONS: requests information about the capabilities of a SIP proxy server or UA, without setting up a call;

CANCEL: terminates a pending request;

BYE: terminates a session between two UAs.

Typically a SIP request must have a response. Like HTTP, SIP responses are identified with three-digit numbers. The leftmost digit says to which category the response belongs:

Provisional (1xx): request received and being processed;

Success (2xx): request was successfully received, understood, and accepted;

Redirection (3xx): further action needs to be taken by sender to complete the request;

Client Error (4xx): request contains bad syntax or cannot be fulfilled at the destination server/UA;

Server Error (5xx): the destination server/UA failed to fulfill an apparently valid request;

Global Failure (6xx): The request cannot be fulfilled at any server/UA.
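Since the class of a response is fully determined by its leftmost digit, classifying a status code takes only an integer division. A minimal sketch (the example codes 180, 200 and 486 are standard SIP responses):

```python
# Map a SIP status code to its response class, following the 1xx..6xx
# categories listed above (the same scheme HTTP uses).

SIP_CLASSES = {
    1: "Provisional",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
    6: "Global Failure",
}

def sip_response_class(code):
    if not 100 <= code <= 699:
        raise ValueError(f"not a SIP status code: {code}")
    return SIP_CLASSES[code // 100]   # leftmost digit picks the class

print(sip_response_class(180))  # -> Provisional   (180 Ringing)
print(sip_response_class(200))  # -> Success       (200 OK)
print(sip_response_class(486))  # -> Client Error  (486 Busy Here)
```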

Session Description Protocol (SDP) is described in RFC 4566 (warning: the IETF mmusic working group
is preparing an Internet Draft which will eventually supersede RFC 4566).
As a matter of fact, it should be called Session Description Format, since it's not a protocol as we
usually know them. SDP data can be carried over a number of protocols, and SIP is one of them
(although RFC 3261 says that all SIP UAs and proxy servers must support SDP for session parameter
characterization).
Quoting RFC 4566:
An SDP session description consists of a number of lines of text of the form:
<type>=<value>
where <type> MUST be exactly one case-significant character and <value> is structured text whose
format depends on <type>. In general, <value> is either a number of fields delimited by a single
space character or a free format string, and is case-significant unless a specific field defines
otherwise. Whitespace MUST NOT be used on either side of the = sign.
An SDP session description consists of a session-level section followed by zero or more media-level
sections. The session-level part starts with a "v=" line and continues to the first media-level section.
Each media-level section starts with an "m=" line and continues to the next media-level section or
end of the whole session description. In general, session-level values are the default for all media
unless overridden by an equivalent media-level value. Some lines in each description are REQUIRED
and some are OPTIONAL, but all MUST appear in exactly the order given here (the fixed order greatly
enhances error detection and allows for a simple parser). OPTIONAL items are marked with a "*".

Here's an example of an actual SDP session description:
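The sample session description from RFC 4566 (abridged here) serves the same purpose, and the splitting rule quoted above, session-level section up to the first "m=" line, then one media-level section per "m=" line, can be sketched in a few lines of Python:

```python
# Minimal SDP splitter following the <type>=<value> rule: everything before
# the first "m=" line is the session-level section, and each "m=" line
# starts a new media-level section. Sample abridged from RFC 4566.

def split_sdp(text):
    session, media = [], []
    for line in text.strip().splitlines():
        typ, _, value = line.partition("=")
        if typ == "m":              # a new media-level section begins
            media.append([(typ, value)])
        elif media:                 # still inside the last media section
            media[-1].append((typ, value))
        else:                       # session-level section
            session.append((typ, value))
    return session, media

SAMPLE = """\
v=0
o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
s=SDP Seminar
c=IN IP4 224.2.17.12/127
t=2873397496 2873404696
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 99
a=rtpmap:99 h263-1998/90000
"""

session, media = split_sdp(SAMPLE)
print(len(session), len(media))   # -> 5 2
print(media[1])                   # video section with its rtpmap attribute
```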

Very well. I think that's enough for you to understand how NGN signaling works. Now it's time to get
one step down in the TCP/IP protocol stack, so in our next article we'll start talking about transport
protocols, and will understand how the socket API is used to create separate sessions over the
transport protocols' service.

IP Packet switching in Telecom - Part 3


At the end of the preceding article I've told you that we're going to dig a bit deeper into IMS and NGN
signaling protocols (all this happens at the application layer of the TCP/IP network architecture; see
the first article of this series).

Note: My blog Smolka et Catervarii (portuguese-only content for the moment)

And so we shall. I must warn you, though: you'd better fasten your seat belts, 'cause there's
turbulence ahead. Few things can be more intellectually intimidating than the writing style of telecom
standards. Truth be told, they're getting better, but it's still a hard proposition to read them. Even the
pictures can be daunting. So I urge you: don't let this picture scare you out of reading the rest of this
article.

This picture comes from ITU-T Recommendation Y.2021. Look at the shaded round-cornered
rectangle. There's 'core IMS' written on it, and it really is that. But we're interested in a single entity
in there: the Call Session Control Function (CSCF), and its relationship with the user equipment
(desktop, laptop or handheld computers, smartphones, tablets, whatever), identified by UE in the
picture.
Each line connecting entities is called an interface (the formal terminology is 'reference point', but it
doesn't matter). They're the depiction of logical relationships between the entities, and each interface
uses an application-layer protocol (more than one, sometimes). The signaling interface between CSCF
and UE is identified as Gm in the picture. And the application-layer protocols used in the Gm interface
are SIP and SDP (I'm not explaining some acronyms 'cause they're already explained elsewhere; I
really believe that you're following these articles from the beginning).
And what does the CSCF do? It's the AAA server (and more) that we've talked about in the last article.
Since it looks like most TelecomHall readers have a mobile background, we can explain the CSCF
functionality this way: it's a kind of fusion of HLR (Home Location Register) and AuC (Authentication
Center).

But there are actually three entities called CSCF, differing by a prefix letter: P (proxy), I
(interrogating) and S (serving). These three flavors of CSCF exist because we're talking about telecom
services here. So there are the operator's own subscribers, and there can be roaming users.
Whether the user is local or a roamer, one of the first things he/she has to do when connecting to the
network is to make contact with the P-CSCF. Item 5.1.1 of ETSI TS 123 228 offers two alternative
methods for P-CSCF discovery. I think that the practical way is combining both:

Dynamic Host Configuration Protocol (DHCP, for IPv4 or IPv6 networks) gives the UE the IP addresses (v4 or
v6) of the primary and secondary Domain Name System (DNS) servers, which are capable of resolving the
P-CSCF fully-qualified domain name (FQDN) to its IPv4 and/or IPv6 primary and secondary addresses;

During initial configuration, in the ISIM (IP Multimedia Services Identity Module), or even via over-the-air
(OTA) procedures, the UE receives the FQDN of the P-CSCF.

The I-CSCF forwards all user requests to the S-CSCF that's assigned to serve them. If the user is
local, then that's all. If the user is a roamer, then the S-CSCF of the visited network acts as an I-CSCF
and forwards all user requests to the S-CSCF of the user's home network.
To understand the remaining entities in the core IMS we first have to understand that NGN-based
services won't simply kick the present telecom services out of the market. They'll have to live
together, side by side, for a long time yet. So there's a definite need for NGN and traditional telecom
services to interwork. That is: it should be possible for calls originated in NGN-connected UEs to
terminate on common telephony devices, and vice versa.
Since about ten years ago, operators started to substitute traditional telephony switches with
softswitches.

A softswitch is a distributed system (logically, and possibly also geographically), and can be built
(more or less) with an open architecture. Its main building blocks are:

One Media Gateway Controller (MGC), which handles signaling between the softswitch and the rest of the
network elements;

One or more Media Gateways (MGs), which make the translation of media streams between different physical
interconnections.

The MGC controls the MGs assigned to it through an IP-carried signaling protocol whose specifications
are found in ITU-T Recommendation H.248.1, 'Gateway Control Protocol: version 3'. The picture below
shows how the softswitch elements interconnect with the IP and Public Switched Telephone Network
(PSTN), and the signaling protocols used.

So the Media Gateway Control Function (MGCF) is the IMS element responsible for setting up the
Media Gateway which will bridge the IP data stream to a conventional telephony circuit. Every
IMS-enabled MGC has an instance of the MGCF within it.
And that brings another question: since there can be many instances of MGCF available, in the
operator's network and in other operators' networks which are interconnected, which one is the best
option to bridge between the NGN and the PSTN for each call? This is the job of the Breakout
Gateway Control Function (BGCF).
Last, but not least, there's the Multimedia Resources Function Controller (MRFC). Certain application
servers (see AS-FE in the picture) need help to deliver services to the UEs. Such help can be:

According to ITU-T Recommendation Y.2021: multi-way conference bridges, announcement playback and
media transcoding;

According to ETSI TS 123 228: mixing of incoming media streams (e.g. for multiple parties), media stream
source (for multimedia announcements), media stream processing (e.g. audio transcoding, media analysis),
and floor control (i.e. managing access rights to shared resources in a conferencing environment).

Note that the MRFC only controls these activities. The actual execution is handled by Multimedia
Resources Function Processors (MRFPs) in ETSI parlance, or Multimedia Resources Processor
Functional Entities (MRP-FEs) in ITU-T jargon; both names refer to the same software object.
And something very important to keep in mind: P-CSCF, S/I-CSCF, BGCF, MRFC and MGCF are logical
functions which are implemented in software, so they can exist in one single host machine, or can be
distributed among many host machines. Logically it doesn't matter, but the physical implementations
of each vendor can vary, and can cast doubts if you're not aware of this.

Now, this is getting a bit longer than I anticipated, so let's take a break here and return with the
signaling protocols in the next article, ok? I apologize if this is becoming a little too hard to follow, but
I really don't know how to put this in simpler terms.

IP Packet switching in Telecom - Part 2


So far so good. As promised, let's start our journey through IP networking in the telecom context
from the ceiling and going down. So let's understand what the heck an application is.

Note: My blog Smolka et Catervarii (portuguese-only content for the moment)

Technically, we call an application any program that runs under the control, and takes advantage of
the services, of the operating system. That's a fairly reasonable definition for our purposes, since all
networking architectures are devised to allow communication between applications, not people. Each
application has its own way of handling human-machine interaction (if it exists at all). We're not
concerned with this here. All we want to explain is how applications can reliably exchange data among
themselves.

And here we arrive at the first paradigm-breaking aspect of the change from the circuit-switching-based
plain old telephone service (POTS) to the IP-packet-switching-based next-generation network
(NGN).
POTS networks are organized in such a way that you've got dumb (and reasonably cheap) user
terminals connected through a smart (and very, very expensive) network. Every time the user wants
to use a network service (and for a very long time there'd be only one: telephony) he/she has to ask
the network for it. By means of sound-based network-to-user signaling and key-pressing
user-to-network signaling (see DTMF and ITU-T Recommendations Q.23 and Q.24) the user says 'I
want to talk with this user', and the network makes the arrangements to provide the end-to-end
circuit which the communicating parties will use.
IP-based networks, of which the Internet is the major example, were built assuming the user
terminals are smart (and not overwhelmingly expensive) and the network doesn't have to have more
smartness than necessary to perform a single function: take the data packets from one side to the
other with reasonable reliability. All the aspects of communication that telephony engineers are used
to calling 'call control' are negotiated directly between user applications. This is the function of the
so-called application-layer protocols.
So we have, so to speak, two different philosophies to handle call control (which is another way
to say session control): the network-in-the-middle approach, and the end-to-end principle. The
schematic call-flow diagrams below give an example of the differences between them.

Generally speaking, there are two ways of application interaction, both widely used: peer-to-peer and
client-server. In peer-to-peer sessions the communicating parties have the same status, and any of
them can request or offer services to the other. Client-server sessions, on the other hand, have a
clear role distinction between the parties: one requests services (the client) and the other fulfills the
service requests (the server).

Most Internet applications use the client-server model, and that goes quite well with the end-to-end
principle. NGN telecommunication services, however, go both ways. There are services that are a
clear fit for the client-server model, like video or audio streaming, and there are services that use
peer-to-peer, like voice and video telephony (by the way, videoconferencing can go both ways).

This, and a few other issues (security, mostly), forced the NGN call-control architecture to use
client-server interactions for signaling, and peer-to-peer or client-server for data exchange, according
to service characteristics. The diagram below is an example of this.

The packet routers between the elements are not shown. And this picture is a gross oversimplification
of the NGN architecture. I will not go into details about this, but if you want to get a more rigorous
approach to this subject, I recommend you start reading ITU-T Recommendations Y.2001, 'General
overview of NGN', and Y.2011, 'General principles and general reference model for Next Generation
Networks'.
Roughly speaking, the AAA (authentication, authorization and accounting) server role goes to the IP
Multimedia Subsystem (IMS), which was initially standardized by 3GPP/ETSI (see ETSI TS 123 228
V9.4.0, 'IP Multimedia Subsystem'), and later adopted by ITU (Recommendation Y.2021, 'IMS for
Next Generation Networks'). Actually, it does much more than simply AAA functions. It's the entry
door to all NGN signaling, which is based on the Session Initiation Protocol (SIP) and the Session
Description Protocol (SDP) (see ETSI TS 124 229 V9.10.2, 'IP multimedia call control protocol based
on SIP and SDP'; IETF RFC 3261, 'SIP: Session Initiation Protocol'; and IETF RFC 4566, 'SDP: Session
Description Protocol').
In the next part of this article series we'll take a closer and more formal look at IMS, SIP and SDP.
Hope it'll be soon. Auf Wiedersehen.

IP Packet switching in Telecom - Part 1


Let me start by saying thanks to Leonardo Pedrini for the privilege of writing this series of articles for
TelecomHall. He doesn't do this frequently, that I know of. If you like the articles and want to read
more, then go visit my blog: Smolka et Catervarii (Portuguese-only content for the moment).

I'd better warn you right now that you'll find my writing style quite different from Leonardo's. While
he emphasizes simplicity, I'm a bit more fond of rigorousness. So I'll make a sincere effort to keep
closer to his style than to mine. But there will be some rough spots along the way, and I expect this
won't discourage you.
Very well... You've probably always heard that telecom networks are based on the circuit switching
paradigm. And that was correct up to about 15 years ago. Then a movement started to change
networks to the packet switching paradigm. This has been a long, long way, which will be practically
complete with 4G mobile network deployment. Our first step is to understand why this paradigm
change was deemed necessary.
Circuit switching means that the communication channels between user pairs are rigidly allocated for
the whole duration of the communication session. Although there are statistical formulae for circuit
switching network capacity planning (see this Wikipedia article about the Erlang traffic unit), there's a
capacity waste every time any of the parties isn't using their communication channel (which is usually
full-duplex).

On the other hand, packet switching doesn't allocate full-session circuits. Transmission capacity in
either direction is granted to users just for the time needed to forward a single data packet. This
packet interleaving minimizes the capacity wasted on the transmission media.

Unfortunately, there's no such thing as a free lunch. Packet switching adoption has its trade-offs. The
major one is accepting the possibility of congestion, because any network node can suddenly have
more packets to send through an interface than its transmission capacity allows. Usually that's dealt
with using transmission buffers, so we're in the realm of queuing systems statistics (Erlang C) instead
of the more familiar blocking systems statistics (Erlang B). This and a few other details were the basis
for wrong notions about the unfeasibility of carrier-class telecom services, particularly telephony, over
packet switching networks. And with these articles I expect to bury them at last.
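The two formulas can be compared numerically. Below is a small sketch: Erlang B via its standard iterative recursion (blocking probability of a loss system), and Erlang C derived from it (probability that an arrival has to wait in a queued system). The traffic figures are arbitrary examples.

```python
# E = offered traffic in erlangs, m = number of servers/circuits.

def erlang_b(traffic, servers):
    """Blocking probability of a loss (circuit-switched) system."""
    b = 1.0
    for m in range(1, servers + 1):      # numerically stable recursion
        b = traffic * b / (m + traffic * b)
    return b

def erlang_c(traffic, servers):
    """Probability of waiting in a queued (packet-like) system."""
    if traffic >= servers:
        return 1.0                       # the queue grows without bound
    b = erlang_b(traffic, servers)
    return servers * b / (servers - traffic * (1.0 - b))

# 10 erlangs offered to 15 circuits:
print(round(erlang_b(10, 15), 4))   # blocking probability (calls lost)
print(round(erlang_c(10, 15), 4))   # waiting probability (packets queued)
```

For the same offered traffic and number of servers, the Erlang C waiting probability is always higher than the Erlang B blocking probability: the queued system delays traffic instead of dropping it.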
The next basic question to answer is: why IP and not any other packet switching network
architecture? Why not full-fledged OSI, for instance? The answer is quite simple: other network
architectures were considered and discarded because their adoption would be too difficult, or too
expensive. The Internet Protocol suite, on the other hand, was immediately available, and was
reliable, cheap and simple. With the Internet boom of the 1990s the option for IP became
unquestionable and quite irreversible.

Here in TelecomHall there's a brief explanation of the 7-layer OSI Reference Model. Likewise, the IP
network architecture is structured in 4 layers, which match all the functionalities of the OSI-RM
layers. Look at the diagram below.

The first thing you'll probably say is: wait a minute! You've said four layers, and this diagram shows
five. Why so? The answer is quite simple: the sockets API isn't a real layer; that's why it's shown in a
dotted box. When the TCP/IP architecture was first deployed, there'd been a need for something to
keep different user sessions properly separated. The sockets API was devised to that effect, became a
de facto standard, and was ported to all kinds of operating systems.
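The session-separation job the sockets API was devised for can be seen in a tiny Python sketch: one TCP echo server and two independent client connections, each one its own session over the same service. Binding to port 0 just asks the operating system for any free port; nothing here is specific to any real deployment.

```python
import socket
import threading

def echo_server(srv):
    for _ in range(2):                    # serve exactly two sessions
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024)) # echo whatever the client sent

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                # port 0: let the OS pick one
srv.listen()
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

replies = []
for msg in (b"alice", b"bob"):            # two independent sessions
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)
        replies.append(c.recv(1024))
print(replies)   # each session echoed independently of the other
```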
Talking of operating systems, one of the great advantages of the TCP/IP network architecture is the
simple scheme of work division among hardware (network interface card) and software (operating
system and user application). It's easy, it's simple and, above all, it works.
In the next articles of this series I will talk with you about the working principles and the main
protocols used, with a focus on the use of all this to build the so-called Next-Generation Networks
(NGNs). Unlike the usual explanations that you can find about this, I won't take a bottom-up
approach, but will make a top-down description of this environment.

See ya later. Hope we'll have fun together on this journey.

Goodbye IPv4... Hello IPv6!


You've probably heard of IPv6. These little letters, increasingly well known, will bring a number of
innovations and changes that should occur gradually all over the world.

Changes that for sure will affect you, directly or indirectly - mainly due to the benefits that it
provides.
In Telecommunications area, the universe related to IPv6 is also increasingly in focus, and this subject
for sure will be present to All of us in near future.
So let's talk a little about it?
Note: All telecomHall articles are originally written in Portuguese. Then we translate them to English
and Spanish. As our time is short, you may find some typos (sometimes we just use the automatic
translator, with only a final and 'quick' review). We apologize and ask for your understanding of our
effort. If you want to contribute by translating/correcting these languages, or even creating and
publishing your own tutorials, please contact us: contact.

What is IP?
To begin, let's first remember a little about IP. Simply put, IP (Internet Protocol) is the standard that
controls the routing and structure of data transmitted over the Internet. It is the protocol that
controls the way devices such as computers communicate over a network.
For two devices to communicate, each one must have an identification. In a cellular network, each
phone has a unique number (e.g. 8 digits).

With computers we have the same situation, only the identification 'number' is a little different. Each
'number' has 4 parts, and each part can have up to three digits (each part can vary from 0 to 255).

As in cellular networks, we can't have two devices with the same number: on a computer network,
each one must have a unique IP that identifies it.
It turns out that today we have not only computers using IP. And this finite number of possible
combinations is no longer sufficient to meet the great demand from these new devices.
And that's where the problems of IPv4 start...

IPv4
The 'current' version of IP is version 4, hence IPv4. It has the format shown above, and was
standardized at a time when it was more than enough to connect Research Centers and Universities,
the initial goal of the Internet.
In more technical terms, IPv4 is a sequence of 32 bits (or four sets of 8 bits). Each set of 8 bits can
range from 0 to 255 (from 00000000 to 11111111), which gives us a total of up to 4 billion different
IPs (or, more precisely, 4,294,967,296 IPs).
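The figures above are easy to check: the address space is 2 to the power 32, and the dotted-decimal notation is just a 32-bit number split into four 8-bit groups. A quick sketch:

```python
# The 4,294,967,296 figure is simply 2**32.
total_ipv4 = 2 ** 32
print(total_ipv4)   # -> 4294967296

def ipv4_to_int(addr):
    """Pack 'a.b.c.d' into the underlying 32-bit number."""
    a, b, c, d = (int(part) for part in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(ipv4_to_int("0.0.0.1"))            # -> 1, the second address
print(ipv4_to_int("255.255.255.255"))    # -> 4294967295, the highest one
```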
Although it is a very large number, we know that it is running out.
By the early 90's, for example, most users' connections to the Internet were through dial-up modems.
Currently, with the popularization of the Internet, the picture is quite different. Virtually everybody
uses 'Always-On' broadband connections: address consumption is growing exponentially.
So, what to do?

Extending the life of IPv4


An alternative, which is not really a solution, is to create ways to avoid conflicts.
In this case, it is common to use techniques or tricks to increase the number of addresses and still
allow the traditional client-server setup, such as:

NAT (Network Address Translation)

CIDR (Classless Interdomain Routing)

Temporary Addresses Assignment (such as DHCP - Dynamic Host Configuration Protocol)

However, these techniques do not solve the problem; they only help to temporarily minimize the IPv4
limitation. That's because they do not meet the main requirements of true Network and User Mobility.
Existing applications require an increasing amount of bandwidth, while NAT represents a considerable
impact on the performance of network equipment.
And, as mentioned earlier, we now have equipment that needs to be 'Always On', that is, ensuring
that anyone can be connected at any time. This requirement is an impediment to address translation.
We also have the problem of plug-and-play equipment, ever more numerous, and with even more
complicated protocol requirements.
In short, we end up having a problem: we must choose between 'allowing new services' and
'increasing network size'.
But we need both, so what to do?

IPv6
The solution is quite natural: create a new format, larger than the current one, to meet future
demand. And this new format, or new version, is 6. Hence IPv6, the Next Generation Internet
Protocol; IPv6 is also known as IPng (next generation).
Although the 'solution' is apparently simple, its implementation isn't. Unfortunately, things are not
nearly so easy in making that change. Certainly, much work remains to be done, and a big part of the
problem falls on those responsible for configuration, the Network Administrators.
There is much controversy about when the world will be ready for IPv6, but it is certainly the path
that must be followed. We will probably have an episode like the 'Millennium Bug' of 2000, when
some people predicted chaos in computer networks.
But back to the new format: it is now a sequence of 128 bits. Using the same calculation as above,
we arrive at a total of 340,282,366,920,938,463,463,374,607,431,768,211,456 different
IP combinations.
Now yes, a 'very' large number: to give an idea, it is 2^96 times (about 79 octillion times) the size of
the current IPv4 address space!
To shorten the format somewhat, hexadecimal notation is used instead of the decimal notation of
IPv4. The new format looks like this:
FDEC:239A:BB26:7311:3A12:FFA7:4D88:1AFF
Note that an address is still very long, although the notation provides ways of shortening it.
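A way of shortening has in fact already been defined (RFC 4291): leading zeros in each group may be dropped, and one run of all-zero groups may be collapsed to '::'. Python's standard ipaddress module applies these rules, as this small sketch shows:

```python
import ipaddress

# Full 8-group form with every zero written out:
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")

print(addr.compressed)   # -> 2001:db8::1  (leading zeros and '::' applied)
print(addr.exploded)     # -> 2001:0db8:0000:0000:0000:0000:0000:0001

# The address-count figure quoted above is simply 2**128:
print(2 ** 128)
```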

Advantages of IPv6

To clarify the advantages of IPv6, let's enumerate some of them.

* Much more addresses


The main advantage of IPv6 is the simplest to understand: more addresses available!

* Mobility!
In Mobile IPv4, the transmission of data packets is usually based on triangular routing, where
packets are sent to the user's home agent before reaching their final destination.
In IPv6, the degree of connectivity is improved (since each device has its own unique IP), and each
device can communicate directly with other devices, making this type of communication much more
efficient.

* Auto Configuration
A new feature of the IPv6 standard (non-existent in IPv4) allows IPv6 hosts to configure themselves
automatically: SLAAC.
SLAAC (Stateless Address Autoconfiguration) helps in the design of networks, making remote
configuration far simpler.

* Simpler Packet Format


Although IPv6 is much more complex than IPv4 in many other aspects, the packet format is simpler
in IPv6: the header has a fixed size and fewer fields.
Thus, the process of forwarding packets, for example, turns out to be simpler, which increases the
efficiency of routers.

* Jumbograms
The data flow in a network is not continuous: it is done through the discrete transmission of packets.
Depending on the information being transmitted, several packets are needed. Because each packet
must carry control information besides the data itself, we end up having some 'wastage' with this
traffic control information, such as that used for routing and error checking.
In IPv4, there is a limit of 65,535 octets per packet (recall that an octet is a group of 8 bits, e.g.
11111111).
Today, this 64 kB limit is extremely low compared to the data transmitted. For example, in a simple
video transmission, thousands of packets need to be transmitted, each one with its 'extra traffic'.
In IPv6, this limit is much higher: 4,294,967,295 octets.
That is, we can send up to 4 GB in a single packet: Jumbograms!
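A back-of-the-envelope check of these numbers. The header sizes below are the minimum 20-octet IPv4 header and the 32-bit jumbo payload cap; a real transfer also carries lower-layer overhead, so this is just a sketch:

```python
# IPv4's 16-bit Total Length field caps a packet at 65,535 octets, and that
# includes the header (20 octets minimum), so the payload cap is a bit less.
IPV4_MAX_PAYLOAD = 65_535 - 20
# An IPv6 jumbogram's payload length is a 32-bit field: up to ~4 GB.
IPV6_JUMBO_MAX = 4_294_967_295

transfer = 4 * 10 ** 9                         # say, a 4 GB video

packets_v4 = -(-transfer // IPV4_MAX_PAYLOAD)  # ceiling division
print(packets_v4)              # tens of thousands of packets are needed...
print(packets_v4 * 20)         # ...carrying over a megabyte of IPv4 headers
print(transfer <= IPV6_JUMBO_MAX)   # True: it fits in a single jumbogram
```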

* Native Multicasting, Anycast


The transmission of packets to multiple destinations in a single send operation is one of the basic
specifications of IPv6.
In IPv4, this implementation is optional.

In addition, IPv6 defines a new type of service: Anycast. As with multicast, we have groups that
receive and send packets. The difference is that when a packet is sent to an anycast group, it is
delivered to only one member of the group.

* More Security - Network Layer


In IPv4, IPsec, a Network Layer authentication and encryption protocol, is not required, and is not
always implemented.
In IPv6, we have native support for IPsec, and its implementation is mandatory.

That is, VPNs and secure networks will be much easier to build and manage in the future.
IPv6 also does not rely on, or have any need for, 'checksum' fields to ensure that the information
was transmitted correctly. Error checking is now the responsibility of the transport layers (such as the
UDP and TCP protocols); one reason is that the current infrastructure is more robust and reliable than
several years ago, that is, we have fewer errors during transmission.
The result: easier implementation, greatly facilitating the development of systems such as
network-enabled devices for the home.

IPv4 to IPv6 Transition


The transition from IPv4 to IPv6 must happen slowly and gradually. It will only end when there are no
more IPv4 devices. In other words, this transition will take years.
IPv6 was not designed to replace IPv4, but to solve its problems. It does not have interoperability
with IPv4, that is, they don't 'match', but both will exist in parallel for a long time.

So one of the main challenges will be the communication between these networks, which should take
advantage of the existing IPv4 infrastructure.
Although there is no 'interoperability' between IPv4 and IPv6, they need some way to communicate;
that is, IPv6 needs a certain 'compatibility' with the previous version.
Suppose two IPv6 hosts wish to communicate with each other, but between them there are only IPv4
hosts. What to do then?
One technique that can be used is 'tunneling', as shown in the figure below.

In this case, the IPv6 packets are re-packed in IPv4 format, sent through the IPv4 hosts, and
unpacked when they reach their IPv6 destination.
Of course, in this example, we will not have features such as priority and flow control.
Anyway, this is only one possible technique, and a lot has changed since IPv6 was designed. As more
people come to deal with IPv6, it is possible that better solutions will emerge.

Test your IPv6


If you want to know whether you're ready for IPv6, a good site is the following:
http://test-IPv6.com/
There you can get an idea of your IPv6 connectivity through a series of automatic tests, as shown
below.

Conclusion
Today we saw a brief overview of IPv6, the Next Generation Internet Protocol.
But now, a very important observation: IPv6 is not totally different from IPv4; in other words,
everything you learned about IPv4 will be very useful when dealing with IPv6.
IPv6 brings many new features compared to the current IPv4 protocol.
In summary, IPv6 is much better than IPv4 in addressing, routing, security, network address
translation, administrative tasks and design, and support for mobile devices.
Of course, at first glance, IPv6 seems to be the solution to all problems. But remember that its
implementation will require a lot of work. Anticipating this scenario, IPv6 has one last feature: the
definition of a set of possible plans for migration and transition, one of the biggest challenges to be
coped with in the near future.
The explanations above represent only a simplified summary of this protocol, so you can get an idea
of what lies ahead.
