
UMTS and WLAN interoperability

by Anja Louise Schmidt

Supervisors Henrik Christiansen and Lars Dittmann

Technical University of Denmark, Research Center COM
31 July 2004

Abstract
This research studies different approaches to how interworking between the two network technologies Universal Mobile Telecommunications System (UMTS) and Wireless Local Area Network (WLAN) can best be achieved. The approaches discussed are the network layer mobility protocols Mobile IPv4 and Mobile IPv6, the transport layer mobility protocol mSCTP and the application layer mobility protocol SIP. Conceptual and practical comparisons showed that in the current situation, mSCTP is considered the best approach for achieving interworking between UMTS and WLAN, followed by SIP, Mobile IPv4 and Mobile IPv6, in that order. In an ideal situation where IPv6 has been deployed on a large scale and SCTP is commonly supported, Mobile IPv6 is considered the best approach, followed by mSCTP, SIP and Mobile IPv4, respectively.


Acknowledgments
This master's thesis project was carried out at Research Center COM at the Technical University of Denmark in the period from January 5 to July 31, 2004, with assistance from my two supervisors, Henrik Christiansen and Lars Dittmann. A number of people have helped me during the work process. In particular, I would like to thank Henrik Christiansen for his indispensable help with my numerous questions as well as with OPNET. It is difficult to imagine what I would have done without this help. The various people at COM, employees as well as fellow students, also deserve my thanks for answering my questions and for providing technical and moral support whenever needed. The help was greatly appreciated.

Anja Louise Schmidt, s020271 July 31, 2004


Table of contents
1 INTRODUCTION
2 UNIVERSAL MOBILE TELECOMMUNICATIONS SYSTEM
  2.1 NETWORK ARCHITECTURE
      UE domain
      UTRAN domain
      CN domain
  2.2 USAGE SCENARIOS
      Network attachment
      Circuit-switched connections
      Circuit-switched idle
      Packet-switched connections
      Packet-switched idle
      Network detachment
  2.3 MOBILITY MANAGEMENT
      Location management
      Handover management
3 WIRELESS LOCAL AREA NETWORK
  3.1 NETWORK ARCHITECTURE
      Basic service set
      Distribution system
  3.2 USAGE SCENARIOS
      Network attachment
      Packet-switched connections
      Packet-switched idle
      Network detachment
  3.3 MOBILITY MANAGEMENT
      Location management
      Handover management
4 UMTS AND WLAN COMPARISON
5 HANDOVER
  5.1 HANDOVER REQUIREMENTS
      Terminal requirements
      Network requirements
  5.2 HANDOVER PROCEDURE
      Measurements
      Decision
      Execution
6 MOBILITY PROTOCOLS
  6.1 MOBILE IP
  6.2 DYNAMIC DOMAIN NAME SYSTEM (DDNS)
  6.3 MOBILE STREAM CONTROL TRANSMISSION PROTOCOL (MSCTP)
  6.4 SESSION INITIATION PROTOCOL (SIP)
  6.5 MOBILITY PROTOCOL COMPARISON
7 NETWORK MODELLING
  7.1 PROJECT MODEL
  7.2 NODE MODEL
  7.3 PROCESS MODEL
      Context definition
      Process level decomposition
      Enumeration of events
      Event response table development
      Specification of process actions
  7.4 INPUT VALUES
      Code input values
      Attribute input values
  7.5 NETWORK MODEL OVERVIEW
  7.6 VALIDATION
8 NETWORK SIMULATION
  8.1 AVERAGE APPLICATION RESPONSE TIME PER LOCATION UPDATE INTERVAL
  8.2 AVERAGE APPLICATION RESPONSE TIME PER LOCATION UPDATE DELAY
      No location update
      One location update
      Two location updates
  8.3 NETWORK SIMULATION OVERVIEW
9 DISCUSSION
  FUTURE WORK
10 CONCLUSION
11 REFERENCES
12 APPENDICES
  APPENDIX A: OPNET SOURCE CODE
  APPENDIX B: SEED VALUE AND RUN TIME TEST RESULTS
  APPENDIX C: ADDITIONAL AVERAGE APPLICATION RESPONSE TIMES PER LOCATION UPDATE INTERVAL
  APPENDIX D: ADDITIONAL FUNCTIONS FOR PROBABILITY DENSITY OF APPLICATION RESPONSE TIME
  APPENDIX E: MODIFIED PROCESS MODEL FOR SIMULATION WITH NO LOCATION UPDATE
  APPENDIX F: MODIFIED PROCESS MODEL FOR SIMULATION WITH ONE LOCATION UPDATE
  APPENDIX G: MODIFIED PROCESS MODEL FOR SIMULATION WITH TWO LOCATION UPDATES

1 Introduction
The telecommunication landscape has been dramatically changed during the last two decades by powerful forces, among them the emergence of wireless mobile communication and the growth of wireless networking. There has been an explosive growth in the use of different communication technologies, as mobile telephony has offered mobile communication between people, and wireless networking has provided flexible communication between computers. This change of the technological landscape also means that the radio systems of today are challenged by the increasing amount of capacity-demanding services. The services span from traditional conversational audio to conversational video, voice messaging, streamed audio and voice, fax, telnet, interactive games, web browsing, file transfer, paging and e-mailing. No single radio system can effectively cover all these services from a multi-service point of view if QoS requirements are to be met. Consequently, the development moves towards interworking between different but complementary radio systems that together can provide this unparalleled level of services.

There are many alternatives to an interworking solution, but research [1] has shown that the complementary characteristics of Universal Mobile Telecommunications System (UMTS) and Wireless Local Area Network (WLAN) make them ideal for interworking. UMTS provides a low-bandwidth circuit- and packet-switched service to users with relatively high mobility in large areas, whereas WLAN provides a high-bandwidth packet-switched service to users with low mobility in smaller areas. The WLAN therefore complements UMTS on the packet-switched services. The natural trend today is to utilise the high-bandwidth WLANs in hot spots and switch to UMTS networks when WLAN coverage is not available or the network condition in the WLAN is not good enough. This, however, also implies that some sort of handover mechanism must be in place to ensure that any ongoing connections are handed over to the new network without breaking or deteriorating the connection. At present, no such mechanism exists natively. The only way to switch between the two networks is to sign off the first network, sign on to the next network and start a new connection. The interworking between UMTS and WLAN, where packet-switched services can be used interchangeably across network borders, is therefore far from realised.

The objective of this research is to study the two network technologies UMTS and WLAN and how interworking can best be achieved between the two. This involves describing the two technologies to the extent that is necessary for understanding the basic network dynamics and the mobility management capabilities. It also involves comparing the two network technologies as to how they relate and differ, not only to substantiate why they are a perfect match for interworking but also to uncover the complications of such a match. Moreover, it involves a discussion and comparison of different approaches to achieving interworking as well as a more practical evaluation to determine which approach is the most suited for interworking between UMTS and WLAN.


The research is organised into 10 chapters. Chapter 1 provides an introduction to the research as regards the background of the research area as well as the objective of the research. Chapter 2 presents an overview of UMTS in terms of the basic network architecture, different usage scenarios and mobility management. Chapter 3 presents a similar overview of WLAN in terms of the basic network architecture, different usage scenarios and mobility management. Chapter 4 compares UMTS and WLAN with regard to similarities and differences in order to substantiate why they are ideal for interworking and to uncover the complications of such an interworking solution. Chapter 5 goes into more detail on the aspects of interworking between the two network technologies in terms of handover, more specifically what is required to perform a handover, what the handover procedure is, and which part of the handover procedure is the specific area of interest for this research. Chapter 6 discusses and compares different approaches to interworking in terms of different mobility protocols. Chapter 7 takes a more practical approach and focuses on developing a network model that can evaluate the different approaches to interworking. Chapter 8 continues the efforts from chapter 7 and performs a range of network simulations based on the network model to evaluate and compare the practical performance of the different approaches to interworking. Chapter 9 discusses the conceptual and practical findings with the purpose of addressing the objective of this research. Chapter 10 finally sums up the findings and presents a conclusion as well as suggestions for future work.


2 Universal Mobile Telecommunications System


Universal Mobile Telecommunications System (UMTS), also referred to as Wideband Code Division Multiple Access (WCDMA), is one of the most significant advances in the evolution of telecommunications into third-generation (3G) networks. It represents an evolution in terms of services and data speeds from today's second-generation (2G) mobile networks such as Global System for Mobile Communications (GSM) and the enhanced 2G (2.5G) mobile networks such as General Packet Radio Service (GPRS). The following three sections go into more detail on UMTS in terms of the UMTS network architecture, different UMTS usage scenarios and UMTS mobility management.

2.1 Network architecture


The UMTS network architecture (Release 99) consists of three domains: The User Equipment (UE) domain, the UMTS Terrestrial Radio Access Network (UTRAN) domain and the Core Network (CN) domain. See Figure 1.

Figure 1. UMTS network architecture

The UE domain represents the equipment used by the user to access UMTS services while the UTRAN domain and the CN domain, together known as the infrastructure domain, consist of the physical nodes which perform the various functions required to terminate the radio interface and to support the telecommunication services requirements of the user. The three domains are further described in the following sections.


UE DOMAIN
The UE domain encompasses a variety of equipment types with different levels of functionality such as cellular phones, PDAs, laptops etc. These equipment types are typically referred to as user equipment. The UE domain consists of two parts: the UMTS Subscriber Identity Module (USIM) and the Mobile Equipment (ME). The USIM is a smartcard that contains user-specific information and the authentication keys that authenticate a user's access to a network. The USIM is physically incorporated into a SIM card and linked to the ME over an electrical interface at reference point Cu. The ME is a radio terminal used for radio communication with the UTRAN domain over the Uu radio interface. [2][3][4].

UTRAN DOMAIN
The UTRAN domain handles all radio-related functionality. It consists of one or more Radio Network Sub-systems (RNS), where each RNS consists of one or more Node Bs and one Radio Network Controller (RNC). The Node B, also known as a Base Station and equivalent to the Base Transceiver Station (BTS) from GSM, converts the signals of the radio interface into a data stream and forwards it to the RNC over the Iub interface. In the opposite direction, it prepares incoming data from the RNC for transport over the radio interface. The area covered by a Node B is called a cell. The RNC is the central node in the UTRAN and is equivalent to the Base Station Controller (BSC) from GSM. It controls one or more Node Bs over the Iub interface and is responsible for the management of all the radio resources in the UTRAN. The RNC interfaces the CN domain over the Iu interface. If there is more than one RNC, they can be interconnected via the Iur interface. [2][3][4].

CN DOMAIN
The CN domain is responsible for switching and routing calls and data connections between the UTRAN domain and external packet and circuit switched networks. It is divided into a Packet Switched (PS) network, a Circuit Switched (CS) network and a Home Location Register (HLR).

The PS network consists of a Serving GPRS Support Node (SGSN) and a Gateway GPRS Support Node (GGSN). The SGSN is responsible for routing packets inside the PS network as well as handling authentication and encryption for the users. The GGSN serves as the gateway towards external packet switched networks like the Internet, Local Area Networks (LANs), Wide Area Networks (WANs), General Packet Radio Service (GPRS) networks, Asynchronous Transfer Mode (ATM) networks, Frame Relay networks, X.25 networks etc., and thus completes the routing function of the SGSN.

The CS network consists of a Mobile Services Switching Centre (MSC)/Visitor Location Register (VLR) and a Gateway MSC (GMSC). The MSC/VLR serves as a switch and database. The MSC part is responsible for all signalling required for setting up, terminating and maintaining connections, and for mobile radio functions such as call rerouting, as well as the allocation/deallocation of radio channels, i.e. the switching function. The VLR part is controlled by the MSC part and is used to manage users that are roaming in the area of the associated MSC. It stores information transmitted by the responsible HLR for mobile users operating in the area under its control, i.e. the database function.


The GMSC serves, similar to the GGSN, as the gateway towards external circuit switched networks like other Public Land Mobile Networks (PLMNs), Public Switched Telephone Networks (PSTNs), Integrated Services Digital Networks (ISDNs) etc. The HLR is a database located in the user's home system that stores all important information relevant to the user, e.g. telephone number, subscription basis, authentication key, forbidden roaming areas, supplementary service information etc. The HLR also stores the UE location for the purpose of routing incoming transactions to the UE. [2][3][4].
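To make the division of responsibilities more concrete, the sketch below models the kind of record an HLR might hold for a subscriber, together with the location updates it receives from the serving MSC/VLR and SGSN. It is only an illustration: the field and node names are invented for readability and do not follow the 3GPP data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of an HLR subscriber record; field names are chosen
# for readability and are not taken from the 3GPP specifications.
@dataclass
class HlrRecord:
    imsi: str                    # permanent subscriber identity stored on the USIM
    msisdn: str                  # telephone number
    authentication_key: bytes    # 128-bit secret key shared with the USIM
    subscription: str            # e.g. "prepaid" or "postpaid"
    supplementary_services: List[str] = field(default_factory=list)
    forbidden_roaming_areas: List[str] = field(default_factory=list)
    current_vlr: Optional[str] = None   # serving MSC/VLR for CS services
    current_sgsn: Optional[str] = None  # serving SGSN for PS services

    def update_cs_location(self, vlr_address: str) -> None:
        """Called when a location area update reaches the HLR."""
        self.current_vlr = vlr_address

    def update_ps_location(self, sgsn_address: str) -> None:
        """Called when a routing area update reaches the HLR."""
        self.current_sgsn = sgsn_address


if __name__ == "__main__":
    record = HlrRecord(
        imsi="238011234567890",
        msisdn="+4512345678",
        authentication_key=bytes(16),
        subscription="postpaid",
    )
    record.update_cs_location("msc-vlr.copenhagen.example")
    print(record.current_vlr)
```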

2.2 Usage scenarios


To understand the dynamics of UMTS, it is useful to consider the different usage scenarios. The usage scenarios describe how the different parts of the UMTS network interact before, during and after communication. The usage scenarios for UMTS are derived from the UE service states. The UE exists in three service states: detached, connected and idle. The UE is in the detached state when it is switched off and there is no communication between the UE and the network. The UE cannot send or receive anything. In order for the user to make use of the network, the UE needs to attach to the network by switching on, selecting a cell to which it can attach, and attaching to that cell. When the UE is attached to the network, it moves to the connected state and starts communication, or moves on to the idle state and becomes inactive. [5]. The states differ depending on whether the UE is in CS or PS mode. Thus there are six important usage scenarios to consider: network attachment, CS connection, CS idle, PS connection, PS idle and network detachment. These are all further explained in the following.
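The service states and the transitions between them can be summarised in a small transition table. The sketch below is a simplified illustration of the states described above; the event names are informal labels rather than 3GPP terminology, and it assumes the UE moves to the idle state immediately after attachment.

```python
# A simplified transition table for the UE service states described above.
# Event names are informal labels, not 3GPP terminology.
ALLOWED_TRANSITIONS = {
    ("detached", "attach"): "idle",              # attach completed; the UE may then connect
    ("idle", "setup_connection"): "connected",
    ("connected", "release_connection"): "idle",
    ("connected", "detach"): "detached",
    ("idle", "detach"): "detached",
}

def next_state(state: str, event: str) -> str:
    """Return the next UE service state, or raise if the transition is not allowed."""
    try:
        return ALLOWED_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' is not allowed in state '{state}'") from None

if __name__ == "__main__":
    state = "detached"
    for event in ("attach", "setup_connection", "release_connection", "detach"):
        state = next_state(state, event)
        print(f"{event:20s} -> {state}")
```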

NETWORK ATTACHMENT
The network attachment process starts when the user turns on the UE. The user must then enter a personal PIN code to authenticate to the USIM. If the USIM authentication goes through, the UE starts searching for a cell (Node B) to attach to. The attach procedure is always initiated by the UE. When the UE finds a Node B to attach to, it synchronises with it and then attempts to attach to it by sending an attach request to the network (RNC). The network responds by sending the UE's 15-digit USIM identification number to the HLR to inform the HLR of the UE's network attachment request. The HLR and the USIM share a 128-bit secret key, which the HLR applies to a random number. The result and the random number are then sent to the network. The network subsequently challenges the USIM with the random number. Similarly, the USIM applies the secret key to the random number and returns the result to the network. If the USIM replies with the same result as the one sent by the HLR, the network accepts the UE and attaches it to the network. Finally, the network downloads any existing user data from the HLR to the VLR to prepare for upcoming network connections. [6].
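The core of the attach procedure is the challenge-response exchange between the HLR and the USIM. The sketch below illustrates the principle only: a generic keyed hash (HMAC-SHA256) stands in for the actual UMTS authentication functions, which are not covered here, and key sizes and message formats are illustrative.

```python
import hashlib
import hmac
import os

# Sketch of the challenge-response principle described above. A keyed hash
# (HMAC-SHA256) is a stand-in for the real UMTS authentication algorithms.

def expected_response(secret_key: bytes, challenge: bytes) -> bytes:
    """Result the HLR pre-computes and the USIM must reproduce."""
    return hmac.new(secret_key, challenge, hashlib.sha256).digest()

def network_authenticates_ue(shared_key: bytes, usim_compute) -> bool:
    challenge = os.urandom(16)                             # random number chosen by the HLR
    hlr_result = expected_response(shared_key, challenge)  # sent to the serving network
    usim_result = usim_compute(challenge)                  # UE is challenged over the air
    return hmac.compare_digest(hlr_result, usim_result)

if __name__ == "__main__":
    k = os.urandom(16)                       # 128-bit key shared by HLR and USIM

    def genuine_usim(rand: bytes) -> bytes:  # the USIM-side computation on the UE
        return expected_response(k, rand)

    print(network_authenticates_ue(k, genuine_usim))   # True for a genuine USIM
```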

CIRCUIT-SWITCHED CONNECTIONS
After the network attachment process, the UE can proceed with the CS connection process. The CS connection process covers both setting up a call and receiving a call. Common to both is that a signalling connection must be established between the UE and the CN.


To set up a call, a CS connection must first be established. The UE therefore signals to the MSC that it requires a CS connection to a particular number. The MSC looks up the downloaded user data in the VLR to see if the user has permission to call the number. If the call is permitted, the MSC checks whether circuits are available at the MSC and whether the UTRAN has the resources to support the call. If this is the case, it sets up the circuit connection from the UE, over the air interface, across the UTRAN, to the MSC in the CN. The MSC then switches the call to the GMSC, which switches it into the external CS network. The external CS network then performs the necessary switching functions to direct the call to the destination. When the call terminates, both the MSC and the GMSC produce a Call Detail Record (CDR). The CDR contains information about the called and calling party identity, resources used, time stamps etc., and is forwarded to the billing server to make an appropriate entry on the user's billing record.

To receive a call, the procedure is rather different. First, the call is routed through the external CS network to the GMSC. The GMSC then determines the HLR in which the user data is stored, based on the telephone number. The HLR knows the location area of the UE, i.e. a group of cells throughout which the UE will be paged, and is therefore able to send a query for a roaming number indicating the destination MSC to the VLR responsible for this area. The VLR responds with the number of the MSC, and the HLR then forwards the number to the GMSC. The GMSC is now able to route the call to the MSC. Through the VLR, the MSC knows the RNC responsible for the UE location area and can therefore request that this RNC sets up a channel to the UE. The RNC then pages the UE in the last known location area and sets up a connection to the UE over the Node B when the UE responds to the page. Once the transmission link is established, the UE starts to ring. When the user picks up the phone, the connection is switched through.

When the signalling connection for CS services is released, e.g. at call release or radio link failure, the UE can be triggered to move to the CS idle state. Alternatively, the UE can move to the network detachment state, triggered either by the user or by the network, as explained further on. [4][5][6].
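The lookup chain for an incoming call can be illustrated as follows. The sketch collapses the GMSC, HLR, VLR and MSC interactions into simple table lookups; all node names, numbers and table contents are invented placeholders, and signalling details are omitted.

```python
# Sketch of the lookup chain for an incoming (mobile-terminated) CS call, as
# described above. The tables and node names are illustrative placeholders.

HLR = {"+4512345678": {"vlr": "vlr-1", "location_area": "LA-42"}}   # per-subscriber data
VLR = {"vlr-1": {"msc": "msc-1"}}                                   # each VLR knows its MSC
RNC_FOR_LA = {"LA-42": "rnc-7"}                                     # RNC serving each LA

def route_incoming_call(called_number: str) -> str:
    """Return the RNC that must page the UE, following GMSC -> HLR -> VLR -> MSC."""
    subscriber = HLR[called_number]                # GMSC asks the HLR based on the number
    roaming_info = VLR[subscriber["vlr"]]          # HLR queries the VLR for the serving MSC
    msc = roaming_info["msc"]                      # GMSC can now route the call to this MSC
    rnc = RNC_FOR_LA[subscriber["location_area"]]  # MSC asks this RNC to page the UE
    print(f"call to {called_number}: GMSC -> {msc} -> {rnc} "
          f"(page in {subscriber['location_area']})")
    return rnc

if __name__ == "__main__":
    route_incoming_call("+4512345678")
```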

CIRCUIT-SWITCHED IDLE
If the signalling connection for CS services is released, the UE moves from the CS connected state to the CS idle state. The network stops tracking the UE, and the UE simply listens to the broadcast channel of the cells. As long as the UE remains inside the same location area, the situation remains unchanged. Only if the UE moves into a new location area does it inform the MSC of the change of location. The new location update is stored in the HLR and copied to the VLR. If the user wants to make a call, the UE reverts to the CS connection state and performs the call set-up procedure. If there is an incoming call for the UE, the RNC pages the UE. When the UE responds to the page, the RNC sets up the connection and the phone starts to ring. When the user picks up the phone, the connection is switched through. Common to both is that the UE must move from the CS idle state to the CS connection state and establish a connection. Alternatively, the UE can move to the network detachment state. [4][5][6].

PACKET-SWITCHED CONNECTIONS
An alternative to the CS connection is the PS connection. Like the CS connection process, the PS connection process covers both setting up a call and receiving a call. Common to both is that, as before, a signalling connection has been established between the UE and the CN.

To set up a call, a PS connection must be established. The UE first activates a Packet Data Protocol (PDP) context in the GGSN. A PDP context is a range of settings that defines which packet data networks a user may use for exchanging data. The list of permitted PDP contexts is stored in the HLR. To activate the PDP context, the UE establishes a connection over the RNC to the SGSN and sends a message that the user would like to establish an external PS connection. The SGSN forwards the query to the GGSN, which then sends a query to the HLR to check if the user is authorised to access external PS connections. If the user is authorised, the GGSN activates the context and informs the UE, including an IP address. The activation of the context creates a fixed IP tunnel through which outgoing data packets are sent from the RNC over the SGSN to the GGSN. The GGSN then switches the call into the external PS network, which performs the necessary switching functions to direct the call to the destination. The tunnel is active until the UE deactivates the context, either by closing the application or by disconnecting from the SGSN. The SGSN is continuously informed about the UE's current routing area, i.e. the PS equivalent of the CS location area. If the user changes routing area to an area with a new responsible SGSN, the route in the GGSN is adapted to this. From the HLR query, the SGSN and the GGSN are aware of the Quality of Service (QoS) requested for the packet transfer and are able to set up parts of the packet transfer path in advance. The QoS categories for PS connections are conversational (voice), streaming (streaming video), interactive (web browsing) and background (file transfer, e-mails). When the call terminates, the SGSN generates a billing record from the PDP context (based on e.g. the duration of the call or the amount of data) and sends it to the billing server, which makes an appropriate entry on the user's billing record.

To receive a call, another process is required. First, the incoming call is routed through the external PS network to the GGSN. The GGSN then determines the HLR in which the user data is stored based on the telephone number. The GGSN can next look in the HLR and determine whether the UE is attached to the network and has an active PDP context. If the UE is not attached, the call is rejected. If the UE is attached but does not have an active PDP context, the UE needs to be located and paged to set up an active PDP context. The HLR knows the location of the UE within the accuracy of the routing area. It therefore also knows the destination switching node (SGSN). The GGSN obtains this information at the same time it checks the HLR for UE network attachment and PDP context status. The GGSN is now able to route the call to the SGSN. The SGSN knows the RNC responsible for the UE routing area and requests that this RNC sets up a channel to the UE. The RNC pages the UE in the last known routing area and sets up a connection to the UE over the Node B used by the UE when it responds to the page. Once the transmission link is established, the UE receives the call and the PS connection is switched through. If the UE already has an active PDP context, the packet transfer can be transmitted directly to the UE.

When the signalling connection for PS services is released, e.g.
at release of PS service, because of very low level of activity, or at radio link failure, the UE can be triggered to move to the PS idle state. Alternatively, the UE can move to the network detachment state. [4][5][6].
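The PDP context activation described above can be sketched as a single authorisation check followed by the creation of a context holding the assigned IP address. The sketch below is a simplification under assumed names (APN strings, IMSI, address pool); the actual tunnelling and signalling between RNC, SGSN and GGSN is not modelled.

```python
import itertools
from dataclasses import dataclass

# Sketch of PDP context activation as described above. The behaviour of the
# SGSN, GGSN and HLR is collapsed into one function; all names are illustrative.

@dataclass
class PdpContext:
    imsi: str
    apn: str          # the external packet network the user wants to reach
    ip_address: str   # address handed out by the GGSN on activation

HLR_PERMITTED_APNS = {"238011234567890": {"internet", "corporate"}}
_ip_pool = (f"10.0.0.{i}" for i in itertools.count(1))

def activate_pdp_context(imsi: str, apn: str) -> PdpContext:
    """UE -> SGSN -> GGSN: check the HLR and, if permitted, create the context."""
    if apn not in HLR_PERMITTED_APNS.get(imsi, set()):
        raise PermissionError(f"{imsi} is not authorised for APN '{apn}'")
    context = PdpContext(imsi=imsi, apn=apn, ip_address=next(_ip_pool))
    # From here on, a fixed IP tunnel RNC <-> SGSN <-> GGSN would carry the packets.
    return context

if __name__ == "__main__":
    ctx = activate_pdp_context("238011234567890", "internet")
    print(ctx)
```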


PACKET-SWITCHED IDLE
If the UE has been idle for PS traffic for a while, it goes from the PS connected state to the PS idle state as the network timer expires. The network stops tracking the UE, and the UE simply listens to the broadcast channel of the cells. Only if the UE moves into a new routing area does it inform the SGSN of the change of location. The new location update is stored in the HLR and copied to the VLR. The logical connection between the GGSN and the UE is maintained in the PS idle state as well, if the PDP context has not been de-activated. If the user wants to make a call and the PDP context is still active, the UE simply reverts to the PS connected state and starts the call. If the PDP context is inactive, the UE first needs to revert to the PS connected state and activate the PDP context before proceeding with the call. If there is an incoming call for the UE and the PDP context is still active, the UE automatically reverts to the PS connected state when receiving the call. If the PDP context is inactive at an incoming call, the UE is paged by the RNC in the last known routing area. When the UE responds to the page, the RNC sets up a connection to the UE and the incoming call is directed to the UE. Alternatively, the UE can move to the network detachment state. [4][5][6].

NETWORK DETACHMENT
When the UE no longer requires the services of the UMTS network, it can explicitly move to the network detachment state and detach from the network by sending a detach request. Network detachment can also be initiated by the network either explicitly by requesting detachment or implicitly by the network detaching the UE without notifying the UE a configuration-dependent time after the mobile reachable timer expires or after an irrecoverable radio error causes disconnection of the logical link. When network detachment is invoked all buffered data is removed. [5].

2.3 Mobility management


Mobile communication systems like UMTS are by definition meant to handle mobility management. Mobility management involves two mechanisms: location management and handover management. Location management is the mechanism of keeping track of a users location outside an active connection, while handover management is the mechanism of handing over an active connection from one cell to another. Both mechanisms are further studied in the following two sections.

LOCATION MANAGEMENT
To transfer an incoming connection to an inactive user, the network must continuously be kept up-to-date with the user's location. The location update process is defined for both CS and PS services. In terms of CS services, the network is divided into Location Areas (LA). A LA consists of a number of cells between which the user can move without updating his location. All Node Bs in such a LA beam a specific number, a Location Area Index (LAI), which is intercepted by the UE. The UE becomes aware that it has changed LA when this parameter changes. It consequently executes a Location Area Update (LAU) with the MSC, which then forwards the information to the HLR. In terms of PS connections, the UE will receive short data packets more frequently than is the case with CS connections. This means an increased and often unnecessary amount of paging. Consequently, the location update for PS connections divides the network into
even smaller areas called Routing Areas (RA) to limit the amount of paging. A RA is simply a smaller area of cells completely surrounded by a LA. The principle is the same as with the LA. When the UE becomes aware of a change in RA, it executes a Routing Area Update (RAU) with the SGSN, which, similar to the MSC, forwards the information to the HLR. [4].
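The update logic is essentially a comparison between the area identity currently being broadcast and the one the UE last registered. The sketch below illustrates this for both LA and RA updates; the identifiers and the textual "procedures" it returns are illustrative only.

```python
# Sketch of the location update logic described above: the UE compares the
# area identifiers broadcast by the current cell with the ones it last
# registered, and triggers an update only when one of them has changed.

class LocationTracker:
    def __init__(self):
        self.registered_lai = None   # last Location Area Index registered with the MSC
        self.registered_rai = None   # last Routing Area identity registered with the SGSN

    def on_broadcast(self, lai: str, rai: str) -> list[str]:
        """Return the update procedures the UE would execute for this broadcast."""
        updates = []
        if lai != self.registered_lai:
            updates.append(f"LAU to MSC (new LA {lai}); MSC forwards to HLR")
            self.registered_lai = lai
        if rai != self.registered_rai:
            updates.append(f"RAU to SGSN (new RA {rai}); SGSN forwards to HLR")
            self.registered_rai = rai
        return updates

if __name__ == "__main__":
    ue = LocationTracker()
    print(ue.on_broadcast("LA-1", "RA-1a"))  # first camping: both updates
    print(ue.on_broadcast("LA-1", "RA-1b"))  # RA changed within the same LA
    print(ue.on_broadcast("LA-2", "RA-2a"))  # both changed
```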

HANDOVER MANAGEMENT
To forward an active connection from one cell to another, the network must perform a handover. Similar to location management, the handover process is defined for both CS and PS services. In terms of CS services, handovers can be implemented as soft handovers, softer handovers and hard handovers.

Soft handover can take place in the following scenario. The UE initially only communicates with the Node B in its current cell, cell one. Then the UE starts to move in the direction of another Node B in another cell, cell two, and starts to receive the same information as on its own physical channel. Physically, the UE is in two overlapping sectors from separate Node Bs. The UE can communicate simultaneously with up to three Node Bs. As the UE moves, it continuously monitors the signal quality from the other cell. If the received signal strengths from the Node Bs in cells one and two differ by no more than an amount called the handover margin during a certain period of time, a connection is also established to the Node B in cell two. When the received signal strength from the Node B in cell one is smaller by a certain amount than that of the Node B in cell two, the connection to the first Node B in cell one is cleared. A soft handover has then taken place.

A softer handover functions in principle the same way as a soft handover in that transmissions also run in parallel over different sectors, but of the same Node B. The UE initially communicates with the Node B in sector one of cell one. As the UE starts to move, it also starts to receive a signal from the same Node B in the same cell one but from another sector, sector two. The signal from sector two is a reflected signal of the direct signal. This can happen if, for example, a large building is in the line of the direct signal and thus unintentionally relays the signal at another angle. If the received signal strengths from sectors one and two differ by no more than the handover margin, a connection is established to sector two. When the received signal strength from sector one is smaller than that of sector two, the connection to sector one is cleared and a softer handover has taken place.

Hard handover takes place when the connection to the current cell is broken before a connection to a new cell is made, i.e. from one frame to the next. There are different sub-types of hard handover: inter-frequency, intra-frequency and inter-system. An inter-frequency hard handover is made between two different frequencies within the same cell or adjacent cells. An intra-frequency hard handover is performed in situations where the Iur interface between two RNCs is not available for a soft handover. A hard handover is then performed from one cell belonging to one RNC to the next cell belonging to another RNC using the same frequency. Finally, an inter-system hard handover is performed when it is required to change the radio access technology from UMTS to GSM.

In terms of PS services, there is one type of handover defined for a UMTS network: cell re-selection. Cell re-selection takes place in the following situation. The UE continuously monitors the signal quality from other cells as the user relocates. Typically, the UE is instructed to send a measurement report to the serving RNC when the quality of a neighbouring cell exceeds a given threshold and the quality of the current cell is unsatisfactory. When the RNC receives the measurement report, it initiates a handover, given that all the criteria for handover have been fulfilled. It then asks the drift RNC to reserve resources. The drift RNC returns a handover command message including the details of the allocated resources via the core network and the current air interface to the UE. When the UE receives the handover command, it moves to the new cell and establishes the radio connection in accordance with the parameters included in the handover command message. The UE indicates successful completion of the handover by sending a handover complete message to the drift RNC, after which the drift RNC initiates the release of the old radio connection. Finally, when the cell re-selection has been completed, the UE initiates the routing area update procedure as described before.

Even though the network solely communicates with the UE using one access technology at a time, the UE needs to perform measurements on the new cell while communicating in the current cell. Since UMTS uses continuous transmission and reception in the PS connected state, a regular UE cannot measure other cells while communicating in UMTS if the UE has a single radio receiver. To overcome this obstacle, compressed mode is introduced. Compressed mode is a method that creates short gaps or idle spaces in transmission and reception. To maintain a perceived constant bit rate, the actual transmission bit rate is increased or compressed just before and after the gap. A constant bit rate is required for services such as voice, but for data services a constant bit rate is not necessary; the transmission can therefore simply be delayed to create a gap. The UE makes use of compressed mode to measure other cells if it has a single radio receiver. If the UE, however, contains separate network radio receivers, it can use each receiver in parallel, performing measurements on one network while communicating on the other, without compressed mode in the downlink.

The downside of both hard handover and cell re-selection is that so far they only work for handovers between UMTS and GSM. For handovers between UMTS and WLAN, support from other protocols is required. [3][4].
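The add/drop behaviour of soft handover can be sketched as a simple rule on the UE's active set of Node Bs, driven by the handover margin. The thresholds, units and active-set limit in the sketch below are assumed example values, not parameters taken from the UMTS specifications.

```python
# Sketch of the soft handover add/drop rule described above: a neighbouring
# Node B is added to the active set when its signal is within the handover
# margin of the best cell, and a cell is dropped when it falls below the best
# cell by more than a drop threshold. All values are illustrative.

ADD_MARGIN_DB = 3.0    # "handover margin" for adding a cell
DROP_MARGIN_DB = 5.0   # a cell this much weaker than the best one is released
MAX_ACTIVE_SET = 3     # the UE can communicate with up to three Node Bs

def update_active_set(active_set: set[str], measurements: dict[str, float]) -> set[str]:
    """Return the new active set given received signal strengths in dBm."""
    best = max(measurements.values())
    # Drop cells that have fallen too far below the best measured cell.
    new_set = {cell for cell in active_set
               if measurements.get(cell, float("-inf")) >= best - DROP_MARGIN_DB}
    # Add cells that are within the handover margin of the best cell.
    for cell, level in sorted(measurements.items(), key=lambda kv: -kv[1]):
        if len(new_set) >= MAX_ACTIVE_SET:
            break
        if level >= best - ADD_MARGIN_DB:
            new_set.add(cell)
    return new_set

if __name__ == "__main__":
    active = {"cell-1"}
    # UE moves towards cell-2: both cells end up in the active set (soft handover).
    active = update_active_set(active, {"cell-1": -85.0, "cell-2": -84.0})
    print(active)
    # cell-1 fades: it is dropped and the handover to cell-2 completes.
    active = update_active_set(active, {"cell-1": -95.0, "cell-2": -80.0})
    print(active)
```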


3 Wireless Local Area Network


Wireless networking targeted at computer networks, especially the 802.11b Wireless Local Area Network (WLAN) specification, has gained increasing recognition relative to wired networking over the last few years. This is a result of the world becoming progressively more mobile. The following three sections go into more detail on WLAN in terms of the WLAN network architecture, different WLAN usage scenarios and WLAN mobility management.

3.1 Network architecture


The 802.11b WLAN network architecture basically consists of one or more Basic Service Sets (BSSs) and a Distribution System (DS). See Figure 2 below. Both are further described in the following.

Figure 2. WLAN network architecture [7]

BASIC SERVICE SET


The basic part of the network architecture is the BSS. It consists of a group of Stations (STAs) that are under the direct control of a single coordination function. The STAs are computing devices (laptops, handheld computers etc.) with wireless network interfaces. The STAs communicate through a wireless medium. The geographical area covered by the BSS is known as the Basic Service Area (BSA). The BSA is analogous to a cell in the UMTS network. In an infrastructure network, all STAs communicate by channelling all traffic through a centralised Access Point (AP). The AP controls the communication in the BSS as well as providing network connectivity between other BSSs and thus has a bridging function. The AP is analogous to the Node B in the UMTS network. In contrast, in an independent BSS (IBSS), also known as an ad hoc network, any STA can communicate with any other STA without channelling the traffic through an AP. When interconnecting a wireless network to other networks, an AP is required. Therefore, only the infrastructure wireless network is of interest and considered in this research. [7][8][9].

DISTRIBUTION SYSTEM
A common Distribution System (DS) integrates multiple BSSs. The DS does not specify any particular backbone technology and can be built on a wide range of media. The integration of multiple BSSs using a DS is called an Extended Service Set (ESS). The ESS provides not only access for multiple wireless users but also gateway access for wireless users into a wired network such as the Internet. This is done through a portal device that incorporates functions analogous to a bridge. [7][8][9].


3.2 Usage scenarios


To understand the dynamics of WLAN, it is useful to consider the different usage scenarios, as in the UMTS case. The usage scenarios for WLAN are derived from the service states the STA can exist in. The STA can exist in the three states detached, connected and idle. The detached state is when the STA is switched off and there is no communication between the STA and a network. To make use of a network, the STA must first switch on and attach to a network through a process of scanning for networks, deciding on a network to join, and authenticating and associating with the chosen network. When the STA is connected to a network, it can start sending/receiving packet-switched data frames, or move to the power-saving mode, the idle state. Finally, the STA can choose not to make use of the network resources anymore and detach from the network, or the network can choose to detach the STA for various reasons. [9]. This gives the four usage scenarios network attachment, packet-switched connections, packet-switched idle and network detachment, which are all further explained in the following.

NETWORK ATTACHMENT
In order to attach to a network, the STA must first be switched on by the user. In most cases, this also includes entering a password to authenticate to the STA. If the authentication goes through, the STA can proceed with the network attachment.

The STA must then identify a compatible network. This process of identifying existing networks in the area is called scanning. The scanning procedure is based on several parameters that can be either default values or user specified. The parameters include: BSSType, BSSID, SSID, ScanType, ChannelList, ProbeDelay, MinChannelTime and MaxChannelTime. BSSType scans for ad hoc networks, infrastructure networks, or all networks. BSSID scans for a specific network to join (individual) or for any network (broadcast) that is willing to allow the STA to join. SSID assigns a string of bits to an ESS, most often the network name, allowing the STA to scan for a specific network. ScanType allows for both passive and active scanning (more about this later). ChannelList specifies a list of channels the STA can scan through. ProbeDelay is a specified delay interval before the procedure for active scanning of a channel begins. Finally, MinChannelTime and MaxChannelTime are time values of the minimum and maximum amount of time the scan works with any particular channel.

The scanning can be either passive or active. In passive scanning, the STA saves battery power because it does not transmit. It simply moves from channel to channel on the channel list and waits for beacon frames from nearby APs. The beacons contain the information the STA needs to match with a BSS and begin communication. In active scanning, the STA takes a more active role and attempts to find the network instead of listening for the network to announce itself. Probe request frames are used to solicit responses from a network's AP, which in return sends back a probe response frame. The probe request frame is targeted at all the networks belonging to the STA's own ESS but can also be targeted at all networks in the area by using the broadcast BSSID.

When the scanning procedure has been completed, a scan report is generated. It contains all the BSSs the scanning discovered and their parameters. The parameters include, in addition to the BSSID, SSID and BSSType, the beacon interval, DTIM period, timing parameters, PHY and CF parameters, and BSSBasicRateSet. The beacon interval specifies for each BSS the interval at which it can transmit beacon frames. The DTIM frames are used as part of a power-saving mechanism.
The timing parameters contain timing information used to synchronise the STA's timer to the BSS timer. The PHY and CF parameters contain channel information and contention-free operation information. Finally, the BSSBasicRateSet contains a list of data rates the STA must support in order to join the network.

When the scan report has been generated, the STA can choose to join one of the BSSs. Choosing to join a BSS does, however, not enable network access. The STA must also go through authentication and association. The process of choosing a BSS to join is an implementation-specific decision that can be triggered by network default values or even by user intervention. The most common decision criteria are power level and signal strength.

When the STA has decided to join a BSS, the next step is authentication. There are two major authentication approaches: open-system authentication and shared-key authentication. Open-system authentication is the only required authentication method in wireless networks. It involves the AP accepting the STA without actually verifying the STA's identity. The STA sends an authentication request frame to the AP with its MAC address as unique identifier/source address. The AP then processes the authentication request and returns an authentication response to the STA using the source address. Shared-key authentication is an optional authentication method. If this authentication method is used, the entities involved must implement the Wired Equivalent Privacy (WEP) protocol, which is an 802.11 security protocol that encrypts data. Shared-key authentication requires a shared key to be distributed to the STA before attempting authentication. The STA first sends an authentication request frame to the AP, similar to the open-system authentication approach. The AP then returns either an authentication denied response frame or an authentication challenge response frame. The challenge frame contains a 128-byte challenge text. The STA responds to the challenge frame by encrypting the frame body of the challenge text with its shared key and returning it to the AP. The AP then decrypts the challenge text frame with its shared key and verifies the integrity of the frame. A positive authentication message is returned to the STA if the integrity of the frame is intact.

A STA must authenticate with an AP before associating with it. The authentication is, however, not required to take place immediately before the association. A STA can authenticate with several APs during the scanning process so that the STA is already authenticated when association is required. This is called pre-authentication. Pre-authentication means both time savings and smoother roaming operation relative to authentication at association time.

When a STA has authenticated or pre-authenticated itself to an AP, it can associate or re-associate with the AP to gain full access to the network. Association is a record-keeping procedure of the STA's location, which is used by the DS to forward frames destined for the STA to the correct AP. A STA can only associate with one AP at a time. The association procedure is initiated by the STA, which sends an association request frame to the AP. The AP then processes the request based on some implementation-specific parameters. There are no specifications on how to determine whether an association should be granted. Most often, the amount of space required for frame buffering is considered. If the AP grants the association request, it responds
with a positive association message containing an Association ID (AID). The AID is used to logically identify the STA when buffered frames need to be delivered. Finally, the AP can begin to process frames for the STA, and the association is completed.

Re-association is the process of moving an association from an old AP to a new AP. It is different from association in the sense that the APs interact with each other. Re-association is initiated by the STA. The STA monitors the signal quality from its current AP as well as from other APs in the same ESS. If the STA decides that another AP is a better choice, the STA sets off re-association. The decision to make a re-association is based on product-dependent factors. The STA sends a re-association request to the new AP containing the address of the old AP. The new AP then communicates with the old AP to determine whether a previous association existed. If the old AP does not verify that it authenticated the STA, the new AP returns a de-authentication frame to the STA and ends the procedure. Otherwise, the new AP responds with a positive re-association message containing an AID. The new AP then contacts the old AP and finishes off the re-association procedure. The old AP sends any buffered data frames for the STA to the new AP and terminates its association with the STA. The new AP begins to process frames for the STA and the re-association is completed. Notice that the STA is only associated with one AP at a time during the whole re-association process. [8][9].
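The scanning and join decision described in this section can be sketched as follows: probe every channel on the channel list, collect the probe responses into a scan report, and pick a BSS the STA can actually support. The scan-report fields and the strongest-signal selection rule are simplifications, and the probe function is a stand-in for the radio layer; real implementations apply vendor-specific criteria.

```python
from dataclasses import dataclass

# Sketch of active scanning followed by the join decision described above.
# All names, channels and rate sets are illustrative.

@dataclass
class ScanResult:
    bssid: str
    ssid: str
    signal_dbm: float
    basic_rates_mbps: tuple

def active_scan(channel_list, probe, ssid="campus-net"):
    """Send a probe request on every channel and collect the probe responses."""
    report = []
    for channel in channel_list:
        report.extend(probe(channel, ssid))   # probe() abstracts the radio layer
    return report

def choose_bss(report, supported_rates_mbps=frozenset({1, 2, 5.5, 11})):
    """Pick the strongest BSS whose BSSBasicRateSet the STA can support."""
    candidates = [r for r in report
                  if set(r.basic_rates_mbps) <= supported_rates_mbps]
    return max(candidates, key=lambda r: r.signal_dbm, default=None)

if __name__ == "__main__":
    def fake_probe(channel, ssid):            # stand-in for real probe responses
        aps = {1: [ScanResult("00:11:22:33:44:01", ssid, -70.0, (1, 2))],
               6: [ScanResult("00:11:22:33:44:06", ssid, -55.0, (1, 2, 5.5, 11))]}
        return aps.get(channel, [])

    report = active_scan([1, 6, 11], fake_probe)
    print(choose_bss(report))   # the AP on channel 6: stronger and rates supported
```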

PACKET-SWITCHED CONNECTIONS
Once the STA has associated/re-associated with an AP, it can begin to send and receive data frames, also known as MAC Protocol Data Units (MPDUs). To send data frames in an infrastructure WLAN, all frames must go through the AP, including frames to other STAs in the same service area. The STAs make use of the MAC Service Data Unit (MSDU) delivery service to send the data frames. The MSDU delivery service defines two coordination functions: the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF). The DCF is a compulsory coordination function for both infrastructure and ad hoc WLANs, while the PCF is an optional function for infrastructure WLANs only.

The DCF is a fundamental access method all STAs must support. It supports asynchronous, time-insensitive data transfer (e.g. e-mail, web browsing, file transfers etc.) on a best-effort basis. The DCF is based on the contention principle, which means that all STAs with data queued for transmission must contend for access to the medium. Once a STA has transmitted its data frame, it must recontend for access to the medium for the remaining frames. This ensures fair access to the medium for all STAs. The DCF is based on the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) access protocol. Carrier sensing involves monitoring the medium to determine whether it is idle or busy. Two types of carrier sensing have been specified: physical carrier-sensing and virtual carrier-sensing. Physical carrier-sensing is provided by the physical layer, which detects the presence of other users by analysing all the detected packets and by measuring the relative signal strength from other sources. Virtual carrier-sensing is provided by the Network Allocation Vector (NAV). The NAV is a timer set by the STAs specifying the amount of time the medium will be reserved. When the NAV equals zero, the virtual carrier-sensing function indicates that the medium is idle. The medium is marked busy if either of the two carrier-sensing mechanisms indicates that the medium is busy.

If the medium is idle, the source STA initiates transmission preparations. The access to the medium is controlled through the use of interframe space (IFS) time intervals. The IFS intervals are specified in three different priority classes: short IFS (SIFS), PCF-IFS (PIFS) and DCF-IFS (DIFS).
SIFS is the shortest time interval and thus has the highest priority access to the medium, followed by PIFS and DIFS. For basic access, the STA waits a DIFS time interval and then senses the medium again. If the medium is still idle, the STA sets the duration field in the data frame and then transmits it. The duration field is used to let all STAs in the BSS know how long the medium will be busy so that they can adjust their NAV. The duration field includes the expected transmission time of the data frame, the IFS time interval for the acknowledgment frame and the acknowledgment frame itself. When the receiving STA receives the data frame, it calculates the checksum to see whether the frame was received correctly. For a positive acknowledgment (ACK), the receiving STA waits a SIFS time interval and then transmits the ACK to the source STA. Since the higher-priority SIFS time interval is shorter than the DIFS interval, the ACK frame will not collide with data frames. If the source STA does not receive an ACK, the transmission is considered lost and the STA must retransmit the data frame by contending for the medium again.

If a source STA wants to transmit a data frame but senses the medium to be busy, the STA waits until the medium has been idle for a DIFS time interval and computes a random backoff time to schedule a reattempt. The STA then decrements the backoff timer until the medium becomes busy again or the timer reaches zero. If the medium becomes busy and the timer has not reached zero, the STA freezes its timer. The timer is reactivated when the medium has been idle for a DIFS time interval again. When the timer reaches zero, the STA transmits its data frame. If there are multiple STAs that want to transmit when the medium is busy, they all wait a DIFS time interval and compute a random backoff time. The STA with the lowest backoff time gets to transmit first, and the other STAs freeze their timers. In case the timers of two STAs decrement to zero at the same time, a collision will occur because they start to transmit simultaneously. Both STAs will then have to generate a new backoff time.

WLANs cannot handle long transmissions due to the relatively large error rates. Large data frames exceeding a certain fragmentation threshold are therefore broken into multiple fragments to increase the transmission reliability. A STA transmitting fragmented data frames is required only to wait a SIFS time interval. This STA therefore has a higher priority compared to other STAs that are required to wait a DIFS time interval, and can be assured a continuous transmission of the data frame fragments.

The DCF can be improved by implementing the Request to Send (RTS)/Clear to Send (CTS) control frames. A STA cannot hear if a collision occurs and therefore continues to transmit the complete data frame. Especially in the case of large data frames, a large amount of bandwidth can be wasted. If the source STA instead sends an RTS control frame after a DIFS time interval to the destination STA before it sends any data frame, other STAs can adjust their NAV and fewer collisions will occur. The destination STA responds with a CTS control frame after a SIFS time interval. The source STA then waits another SIFS time interval and proceeds with the transmission. As for any data frame transmission, the destination STA responds with an ACK frame after a SIFS interval.
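Before turning to the trade-offs of RTS/CTS, the basic DCF access procedure described above (wait a DIFS, count down a random backoff, freeze while the medium is busy, transmit at zero) can be summarised in a short sketch. The slot counts and contention window size are arbitrary illustrative values, not the 802.11b parameters.

```python
import random

# Sketch of the basic DCF access procedure described above. Time is modelled
# in abstract "slots" and the medium is a simple callable; both are
# illustrative, not taken from the 802.11 standard tables.

DIFS_SLOTS = 3
CONTENTION_WINDOW = 16

def dcf_transmit(is_idle, send_frame, max_wait_slots=10_000):
    """Contend for the medium and transmit one frame; returns the slot used."""
    backoff = random.randrange(CONTENTION_WINDOW)
    idle_run = 0                            # consecutive idle slots seen so far
    for slot in range(max_wait_slots):
        if is_idle(slot):
            idle_run += 1
            if idle_run > DIFS_SLOTS:       # DIFS has elapsed: count down the backoff
                if backoff == 0:
                    send_frame()
                    return slot
                backoff -= 1
        else:
            idle_run = 0                    # medium busy again: freeze the backoff timer
    raise TimeoutError("gave up contending for the medium")

if __name__ == "__main__":
    random.seed(1)
    busy = {4, 5, 6}                        # another STA occupies these slots
    slot_used = dcf_transmit(lambda s: s not in busy, lambda: print("frame sent"))
    print("transmitted in slot", slot_used)
```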
The advantage of the RTS/CTS implementation is that should a collision occur with an RTS or CTS control frame, far less bandwidth is wasted in comparison with a large data frame.

The PCF is an optional access method. Since the point coordinator (PC) resides in the AP, the PCF is restricted to infrastructure WLANs. In contrast to the DCF, the PCF supports delay-sensitive data transfer (e.g. packetised voice and video, streamed packetised audio and video etc.). The PCF is based on the contention-free principle. The PC controls all traffic by polling the STAs in turn. Only after being polled is a STA allowed to transmit a frame. This way there is no contending for the medium. When the PCF and DCF coexist, the capacity is shared between contention-free traffic and contention traffic in a contention-free period (CFP) and a contention period (CP), respectively. Alternating intervals of contention-free services and contention-based services repeat at regular intervals called repetition intervals. The CFP starts when the medium has been idle for a PIFS time interval. The PC, i.e. the AP, transmits a beacon frame including the maximum duration of the CFP to help the STAs synchronise and update their NAV to the maximum length of the CFP. The PC then starts polling the STAs on its polling list by sending a polling frame. The STAs may only transmit if they have received a polling frame, and then only after a SIFS time interval. The STAs may piggyback an ACK of the received polling frame together with the data frame. The CFP ends when all STAs have been polled. The PC announces this by issuing a CF End frame. The STAs reset their NAV and the CP begins according to the DCF. The CFP can be shortened in case it is lightly loaded, to provide the remaining bandwidth to the CP.

When the AP receives the data frames, it transfers the frames to the destination STA if it is located in the same BSS. If the data frames are destined for a STA in another BSS but in the same DS, it forwards the frames across the DS to the appropriate AP, which then transfers the frames to the destination STA. Finally, if the data frames are destined for a STA in an integrated network outside the DS, the frames are transferred to the AP, across the DS on to the portal and through the Internet, where they are routed to the destination STA using standard routing mechanisms. When the DS receives frames for a STA, either through the portal from an integrated network or from an AP in the DS, it delivers the frames to the right AP. The AP then relays the frames to the intended associated destination STA. If the destination STA is in power saving mode (explained in the next section), the AP buffers the frames for it. [7][8][9].
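Returning to the PCF described above, a minimal sketch of one contention-free period from the point coordinator's side could look as follows. The send_beacon(), poll_sta() and send_cf_end() helpers are assumptions made for illustration; a real AP would additionally handle beacon timing, piggybacked acknowledgments and early termination of a lightly loaded CFP.

SIFS = 10e-6   # short interframe space for 802.11b
PIFS = 30e-6   # PCF interframe space for 802.11b

def contention_free_period(polling_list, send_beacon, poll_sta, send_cf_end, wait,
                           cfp_max_duration):
    """Sketch of one contention-free period as run by the point coordinator (the AP).

    send_beacon(), poll_sta() and send_cf_end() are hypothetical helpers that
    stand in for the real beacon, CF-Poll and CF-End frame transmissions.
    """
    wait(PIFS)                         # the CFP starts after PIFS of idle medium
    send_beacon(cfp_max_duration)      # STAs set their NAV to the maximum CFP length
    for sta in polling_list:
        poll_sta(sta)                  # only a polled STA may transmit, after SIFS;
        wait(SIFS)                     #   it may piggyback an ACK on its data frame
    send_cf_end()                      # STAs reset their NAV and the CP (DCF) begins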

PACKET-SWITCHED IDLE
The main advantage of wireless networks is mobility. Mobility, however, implies that the STAs run on batteries, which are a scarce resource. To deal with this problem, wireless networks power down the transceiver, resulting in considerable power savings. When the transceiver is off it is sleeping, dozing, in power-saving or idle mode. When it is on it is awake, active or simply on. The optimal power saving in wireless networks is obtained by spending the maximum time in power-saving mode and the minimum time in active mode, all without sacrificing connectivity.

The AP plays the key role in power management. First of all, it is assumed to have access to continuous power as it must remain active at all times. Second of all, it is per definition aware of the location of the STAs and has access to the power management state of the STAs through the STAs themselves. This key role has led to two power-management-related tasks for the AP. Since the AP is aware of the power management state of every STA associated with it, it can also determine whether a frame should be delivered to the network because the STA is awake or should be buffered because the STA is in power-saving mode. Also, the AP has to periodically announce which STAs have frames waiting for them. This announcement of buffer status contributes to the power savings: it requires much less power to power up the receiver and listen to the buffer status than to periodically transmit polling frames.

During the association process the STA and AP agree on a listen interval. The listen interval is the number of beacon periods for which the STA may choose to sleep. When the listen interval runs out, the STA must power up to active mode and listen to the buffer status of waiting frames. Failing to do so may result in the AP discarding the frames without notification. When a STA powers up, listens to the buffer status and finds out that there are waiting frames, it uses PS-poll control frames to retrieve the buffered frames. One PS-poll frame retrieves one buffered frame. The retrieved buffered frame must be positively acknowledged before it is removed from the buffer and before the STA can retrieve the next waiting frame in line. The STA must remain awake until the polling transaction has concluded, i.e. until all the waiting frames have been delivered or discarded. [8][9].
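The buffered-frame retrieval just described can be sketched as follows. All parameters are hypothetical callbacks chosen for the example; a real implementation would parse the traffic indication map in the beacon rather than call a buffer_status() helper.

def power_save_cycle(sta_id, listen_interval, sleep_beacons, buffer_status,
                     send_ps_poll, receive_frame, send_ack):
    """Sketch of one power-save cycle for a dozing STA.

    sleep_beacons(n) dozes for n beacon periods, buffer_status() reads the
    AP's announcement of waiting frames, and the remaining helpers model
    the PS-poll/data/ACK exchange; all are assumed stand-ins.
    """
    sleep_beacons(listen_interval)     # doze for the agreed listen interval
    while buffer_status():             # wake up; the AP announces buffered frames
        send_ps_poll(sta_id)           # one PS-poll retrieves one buffered frame
        frame = receive_frame()
        send_ack(frame)                # the AP removes the frame from its buffer
                                       #   only after a positive acknowledgment
    # no more waiting frames: the STA may return to power-saving mode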

NETWORK DETACHMENT
When a STA no longer requires the services of the AP, it can terminate the existing association by using the disassociation service. Disassociation is a courtesy; the network is, however, designed to cope with STAs leaving the network without proper disassociation. When the STA invokes the disassociation service, any buffered data in the AP is removed. The disassociation service can also be used by the AP to inform the STA that the AP no longer provides the link, either because of resource restraints or because the AP is shutting down or being removed from the network for a variety of reasons. Disassociation is a one-frame notification and can be invoked by either associated party, i.e. the STA or the AP. Neither party can refuse termination of the association. If a STA wishes to be removed from a BSS, it can send a de-authentication frame to the AP to notify the AP of the removal from the network. Once a STA is de-authenticated, it no longer has access to the network, since de-authentication terminates any current association. To make use of the network resources, it must therefore perform the authentication function again. The de-authentication service can also be used by the AP to eliminate a previously authorised user from any further use of the network. De-authentication is a one-frame notification that can be invoked by either associated party. Neither party can refuse the de-authentication. [8][9].
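The attachment and detachment services can be summarised as a small state machine. The sketch below captures only the three station states and the transitions named in this chapter, with simplified event names chosen for the example.

# Simplified 802.11 station states and the transitions discussed above.
TRANSITIONS = {
    ("unauthenticated", "authenticate"):   "authenticated",
    ("authenticated",   "associate"):      "associated",
    ("associated",      "reassociate"):    "associated",      # move to another AP in the ESS
    ("associated",      "disassociate"):   "authenticated",   # buffered frames are dropped
    ("associated",      "deauthenticate"): "unauthenticated",
    ("authenticated",   "deauthenticate"): "unauthenticated",
}

def next_state(state, event):
    """Return the new station state, or the old one if the event does not apply."""
    return TRANSITIONS.get((state, event), state)

# Example: de-authentication also terminates any current association.
assert next_state("associated", "deauthenticate") == "unauthenticated"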

3.3 Mobility management


The WLAN specification handles mobility management in a very simple way. There are no distinct definitions of location management and handover management, but to illustrate the differences from UMTS mobility management, this section looks more closely at how WLAN actually deals with location management and handover management.

LOCATION MANAGEMENT
WLAN location management is quite different from UMTS location management. Once a STA has associated with an AP, the AP continually knows the location of the STA, as the STA is required to be within reach of the AP, and no further mechanisms are required in order for the AP to be able to forward frames destined for the STA.

HANDOVER MANAGEMENT
WLAN also handles handover management but in terms of transitions. There are three different types of transitions defined: no transition, BSS transition and ESS transition.

No transition occurs when the STA does not move out of the AP's service area. There are two sub-states of this state: no movement and local movement. When the STA is static there is no movement, and when the STA only moves within the AP's BSS there is local movement. BSS transition occurs if the STA moves from one BSS in one ESS to another BSS within the same ESS. What basically happens is that the STA moves its current association from one AP to another within the same ESS without losing the connection. This is also known as re-association, described in an earlier section. Finally, ESS transition occurs when the STA moves from a BSS in one ESS to a BSS in another ESS. The WLAN specification supports this transition in the sense that the STA may move, but no guarantees are made as to whether the connection is maintained. In fact, the connection must be expected to break. This means that higher-layer connections are almost guaranteed to be interrupted. In order to maintain higher-layer connections and provide seamless ESS transition, support from other protocols is required. [8][9].


4 UMTS and WLAN comparison


The two previous chapters described some fundamental characteristics of UMTS and WLAN networks in terms of network architecture, usage scenarios and mobility management. This chapter will be more specific as to how the two network technologies relate and differ by comparing them. Table 1 presents an overview of comparative differences followed by explanatory remarks.
Characteristic | UMTS | WLAN
Services | Circuit- and packet-switched services | Packet-switched services
Data rates | 144 kbps (satellite and rural areas, min 120 km/h, max 500 km/h); 384 kbps (urban outdoor environments, max 120 km/h); 2 Mbps (indoor and low-range outdoor, min 0 km/h, max 10 km/h) | 1 Mbps (max 100 m indoor, max 450 m outdoor); 2 Mbps (max 90 m indoor, max 300 m outdoor); 5.5 Mbps (max 70 m indoor, max 150 m outdoor); 11 Mbps (max 30 m indoor, max 100 m outdoor)
Coverage | Cellular, national/international coverage | Non-cellular, local coverage
Power control | Flexible power control | Max. effect of 100 mW required
Mobility | High, global (UMTS, UMTS-GSM) | Low, local (WLAN)
Deployment costs | Expensive | Cheap
Standardisation bodies | Closed standardisation body | Open standardisation body
Technological origin | Telecommunication | Data communication
Air interface | WCDMA | HR-DS
Channel bandwidth | 5 MHz | 5 MHz
Frame length | 10 ms | Variable
Chip rate | 3.84 Mcps | 11 Mcps
Frequency regulations | Regulated frequency spectrum | Unregulated frequency spectrum
Frequency band | WCDMA FDD: 1920-1980 MHz (up link), 2110-2170 MHz (down link), 12 channels; WCDMA TDD: 1900-1920 MHz, 2010-2025 MHz, 7 channels; new band: 2500-2690 MHz | 2.412-2.472 GHz, 13 channels

Table 1. UMTS and WLAN comparison

The first difference between the two network technologies, especially from the user's point of view, is the range of supported services. The UMTS standard supports a variety of circuit- and packet-switched services such as voice, video telephony, video games, video conferencing, streamed voice and video, SMS, MMS, email, fax, telnet, interactive games, web browsing, ftp etc., while the WLAN specification only supports the corresponding packet-switched services. [1][8].

Another and more significant difference is the data rates. UMTS supports data rates ranging from 144 kbps up to 2 Mbps according to the specific environment and speed of travelling. High mobility users, classed as users travelling over 120 km/h and at most 500 km/h in satellite and rural areas, can expect data rates of 144 kbps. Full mobility users, those travelling at less than 120 km/h in urban outdoor environments, can expect 384 kbps. Finally, low mobility users, those based indoors or at low range outdoors travelling at less than 10 km/h or stationary, can expect data rates of up to 2 Mbps. [1][10]. In comparison, WLAN supports data rates ranging from 1 Mbps up to 11 Mbps, also according to the specific environment. For 1 Mbps data transmission the estimated maximum range indoors is 100 meters, while the estimated maximum range outdoors in line-of-sight is 450 meters. For 2 Mbps the estimated maximum ranges for indoor and outdoor are 90 and 300 meters, respectively, while the estimated maximum indoor and outdoor ranges for 5.5 Mbps are 70 and 150 meters, correspondingly. Finally, for 11 Mbps the estimated maximum ranges for indoor and outdoor are 30 and 100 meters, respectively. [8][11].

A fundamental difference between the two network technologies is the coverage. UMTS builds on the cellular concept, which means that instead of covering a large area with one Node B, the large area is divided into a number of smaller areas or cells, each covered by a separate Node B. By splitting the area up into a number of smaller cells, the same frequency can be reused over relatively small distances, thus enabling coverage for a greater number of users. Also, by reducing the area to be covered by a Node B, the transmitter power can be lowered. By the use of handover mechanisms and roaming agreements, the network can moreover provide national and even international coverage. The advantages of cellular networking are therefore increased capacity, reduced transmitter power and better coverage. [4]. In comparison, WLAN incorporates a non-cellular concept in terms of much smaller, locally situated network islands, the so-called hot spots or WLAN cells, not tailored to large coverage. Usually these WLAN cells cover homes, small enterprises, campuses, hotels, hospitals, airports, restaurants etc. Since WLAN only covers smaller, limited areas, the coverage is local. [7].

Another aspect closely connected with coverage is power control. UMTS communication has the flexibility to optimise the range of communication with a suitable effect, while WLAN requires a maximum effect of 100 mW. This means that while a typical WLAN cell has a range of approximately 50 meters, a UMTS cell can reach up to 35 kilometres. [12].

Mobility is also a related aspect. UMTS handles mobility by performing handover/cell re-selection. The handover/cell re-selection mechanisms work well as long as the client stays within the same network technology. For mobility between network technologies, inter-system hard handover and cell re-selection also work as long as it is between UMTS and GSM networks. There are, however, no mechanisms defined for handover between UMTS and other networks, e.g. between UMTS and WLAN. UMTS is therefore defined to provide high and global mobility within UMTS networks and between UMTS and GSM networks. [3][4]. In comparison, WLAN handles mobility quite differently. WLAN supports user relocation within the BSS and even between BSSs in the same ESS by means of the transition mechanism, also known as re-association. There are, however, no mechanisms defined for transition from a BSS in one ESS to a BSS in another ESS, which is why transitions between WLAN and other networks, or even between independently operated WLANs, simply are not possible. This means that WLAN only provides low and local mobility. [8][9].

The costs of deploying the two network technologies also differ.
The UMTS deployment requires an expensive Node B in every network cell as well as an expensive license and frequency use fee. The license additionally comes with obligations. In contrast, the WLAN deployment requires inexpensive access points and involves no license, frequency use fee or obligations. [12].


The standardisation bodies that handle the development of the two network technologies are also very different. UMTS is defined by a closed standardisation body consisting of the International Telecommunication Union (ITU), the European Telecommunications Standards Institute (ETSI), and national radio frequency administrations and national license providers, while WLAN is defined by an open standardisation body consisting of the Institute of Electrical and Electronics Engineers (IEEE) and national radio frequency administrators. [12].

A fundamental difference is the dissimilar technological origin. UMTS originates from the traditional, large, monopoly-encumbered telecommunication sector, where implementations tend to follow a traditional top-down oriented development approach, as with e.g. its predecessor GSM. In comparison, WLAN has its roots in the more dynamic, younger and fragmented data communication sector, where the innovation and diffusion patterns may be characterised as much more un-coordinated, as with e.g. the development of the Internet. [12].

Finally, there are also some more technical differences. UMTS utilises the air interface standard WCDMA, which is based on Code Division Multiple Access (CDMA). UMTS implements the special CDMA technique called Direct Sequence (DS-CDMA), which spreads the user data bit stream over a wide bandwidth by multiplying each user data bit with a (chipping) sequence of 8 code bits called chips, derived from CDMA spreading codes. A chip is mathematically a bit, which means that the chipping sequence is basically a bit sequence, but to distinguish the original user data bit stream from the spread signal the term chip is used. The result of multiplying each bit of the user data bit stream with a chipping sequence is a chip stream with flattened amplitude across a relatively wide frequency band. This flattening of the amplitude over a wide band means that fairly large channels are required. While other systems use a smaller channel bandwidth of about 1 MHz, the UMTS system consequently uses a channel bandwidth of 5 MHz, hence the name Wideband CDMA. The chip stream is transmitted simultaneously with other chip streams in the same frequency range in frame lengths of 10 ms and at a transmission rate of 3.84 Mcps. This translates into actual data rates ranging from 144 kbps up to 2 Mbps. In the receiver, the bits of the user data stream are recovered from the chip stream with a correlator, which simply inverts the spreading process.

UMTS operates within a regulated frequency spectrum around 2 GHz. Since UMTS supports two basic duplex modes of operation, Frequency Division Duplex (FDD) and Time Division Duplex (TDD), the European spectrum allocation has been reserved for WCDMA FDD in the bands 1920-1980 MHz (up link) and 2110-2170 MHz (down link) and for WCDMA TDD in the bands 1900-1920 MHz and 2010-2025 MHz. A new frequency spectrum in the frequency band 2500-2690 MHz has additionally been identified but has not yet been taken into use. For the paired channels in the frequency bands 1920-1980 and 2110-2170 MHz a total of 12 channels are available. This implies that for each 5 MHz channel in the up link band, another channel in the down link band exists. For the unpaired channels in the frequency bands 1900-1920 and 2010-2025 MHz a total of 7 channels are available. [3][4]. In contrast, the 802.11b WLAN specification utilises the High Rate Direct Sequence (HR-DS) air interface standard for data transmission.
HR-DS is developed from the 802.11 DS encoding method, where the user data bit stream is spread by applying an 11-bit Barker word, a specially defined bit sequence, via a modulo-2 adder to each bit or pair of bits in the user data bit stream. As with UMTS, this flattening of the amplitude calls for fairly large channels, which is why WLAN also operates with a 5 MHz channel bandwidth. The outcome of the multiplication procedure is a chip stream, which is transmitted in variable frame lengths at a transmission rate of 11 Mcps. This translates into actual data rates of 1 or 2 Mbps depending on whether the 11-bit Barker word is applied to each bit or to two bits. To provide for higher data rates, HR-DS was released as an extension to the 802.11 specification, the 802.11b specification. Achieving higher data rates requires that each code symbol carries more information than a bit or two. The 802.11 DS encoding process did not prove suitable for carrying more bits, which led to the use of an alternate encoding method, Complementary Code Keying (CCK), for the 802.11b HR-DS. CCK uses 8-bit sequences to encode 4 or even 8 bits per code word. This translates into actual data rates of 5.5 Mbps or 11 Mbps according to the specific environment. Furthermore, WLAN operates within an unregulated frequency spectrum around 2.4 GHz. A total of 13 channels, each 5 MHz wide, are located in the frequency range 2412-2472 MHz. [8][9][13][14].

Based on the listed differences there are some characteristics that are striking. First of all, the range of offered services at first seems unequal. However, since it has never been the intention that WLAN should complement UMTS on the circuit-switched services, WLAN in fact matches up with UMTS on the packet-switched service level. Also, in the long run the development is expected to proceed towards all-IP networks where all services are delivered via packet-switched networks, which eventually will equalise any differences in supported services. Next, the difference in data rates is also significant. UMTS supports data rates from 144 kbps to 2 Mbps, of which the average user is expected to receive data rates of 384 kbps. WLAN, in contrast, supports much higher data rates in the range from 1 Mbps to 11 Mbps and thus unquestionably outdistances UMTS. Also important is the level of coverage and mobility. UMTS offers wide coverage and high mobility, whereas WLAN only offers very local and therefore low coverage and has limited mobility. Common for both is, however, that they have limited mobility in terms of relocating beyond the network boundaries. UMTS handles mobility only within UMTS networks and between UMTS and GSM networks, and WLAN handles mobility only within the BSS and between BSSs in the same ESS. To extend mobility to other networks, e.g. between UMTS and WLAN, the use of additional mobility protocols is therefore required.

These complementary characteristics have meant that UMTS and WLAN are considered just the right candidates for interworking to provide wide-spread multi-service wireless access by utilising high-bandwidth WLANs in hot spots and switching to UMTS networks when WLAN coverage is not available or the network condition in the WLAN is not good enough. One complication is, however, the shortcomings of both UMTS and WLAN with regards to mobility between the two network technologies.


5 Handover
The shortcomings of mobility between the two network technologies must first be resolved in order to realise any interworking. Since UMTS handles mobility in terms of handovers and cell re-selections, and WLAN handles mobility in terms of transitions, the mobility handling between the two networks is henceforth referred to simply as handover for consistency. The following two sections go into more detail on the requirements for handling handovers as well as the actual handover procedure.

5.1 Handover requirements


In order to perform handovers there are a few basic requirements that must be in place. The requirements can largely be divided into two: terminal and network requirements. These are further discussed in the following.

TERMINAL REQUIREMENTS
The mobile terminal, in the following referred to as the client instead of UMTS UE or WLAN STA, must be a dual-mode terminal set up for mobile radio communication and wireless networking in terms of an incorporated USIM and 802.11b wireless LAN card, the necessary software, and adjusted configurations, and it must be signed up with one or more network operators. There are no requirements for it to be able to operate on both networks at the same time, but it must support basic one-at-a-time UMTS and WLAN operation as well as handover between the two networks. This evidently leads to modifications of existing mobile terminals in the direction of some hybrid device incorporating features from both cellular phones and laptops. Going into details with the design and functionality of this hybrid device is, however, outside the scope of this research.

NETWORK REQUIREMENTS
The network interworking involves two possible types of ownership/management configurations: the cellular operator configuration and the wireless Internet service provider (WISP) configuration. The first configuration is the case in which the cellular (UMTS) operator owns and manages the WLAN. The cellular operator can enhance its data service capabilities with high-speed data connectivity in strategic locations such as airports and hotels by augmenting its cellular data system with WLAN. This allows the users to roam, i.e. to access another operator's network (nationally as well as internationally) at the price of a local call or at a charge considerably less than the regular long-distance charges, outside the UMTS network. The cellular operator has the advantage (over the WISP) of an established customer base to which it can market such capabilities. Additionally, the cellular operator has authentication and billing mechanisms in place for its users, which can be reused in the WLAN. The second configuration is the case in which a WISP or enterprise owns and manages the WLAN. Although the WISP or enterprise cannot offer the same service set, the same user experience may be achieved by multilateral roaming agreements between the WISP/enterprise and cellular operators. The authentication and billing mechanisms can be provided by the cellular operator in case of a WISP-owned configuration, whereas the authentication mechanism would be limited and the billing mechanism non-existent in case of an enterprise-owned configuration.

Irrespective of network ownership/management configuration, roaming between the networks is assumed to be possible and in effect. Billing is closely related to roaming in the sense that all operators involved in establishing communication expect to receive a share of the earnings. How and how much to bill depends entirely on what the operators have agreed on beforehand. Billing models can be comprehensive and very complex and are outside the scope of this research. As a result, billing issues are not considered any further but are assumed to be in place and functioning.

5.2 Handover procedure


There are many parameters to take into account when dealing with handovers, as the handover procedure is both complex and comprehensive. This section will be much more specific as to how a handover is performed and which aspects of the handover procedure are the focus of this research. The handover procedure is threefold. First, some measurements are performed according to a set of parameters and the results are gathered in a measurement report. Then a handover decision is taken based on the measurement report. Finally, the handover is executed if the handover decision is positive. The following considers each of the three steps in detail.

MEASUREMENTS
The first step of the handover procedure is the measurement of several parameters that are required to assess the current status of the existing connection between the client and the serving cell and the quality of other available cells. The measurements can in theory be performed by one of two entities: the client or the network. In practice, it is, however, only the client that performs the measurements, because it is the only entity that has full and exact access to and overview of the measurements. The measurements are usually carried out continuously. [15].

The measurements include both some static user preferences and profiles and some dynamic measurements. [16] suggests an approach which organises the static and dynamic parameters into four classifications: application, user preference and user context, terminal, and network. The static input parameters of the application classification contain a list of the services the user has subscribed to, and a preference list indicating the priority of the services in case the resources are scarce. The dynamic input parameters contain a list of services that are supported/unsupported in the network, a list of services that are active/suspended, and an indication of the delivered service quality. The static input parameters of the user preference and user context classification contain user preferences for specified contexts in terms of role-dependence as on business or in leisure time, time-dependence as in time-of-day or day-of-week, situation-dependence as in driving a car or waiting for a plane, and finally location-dependence as in a particular city or in the countryside etc. Other static input parameters are cost related preferences (often related to the context, in particular the role of the user), reliability related preferences (also often related to the context, in particular the role of the user), provider related preferences (taking into account e.g. good/bad experiences or low/high prices), radio access technology preferences (preferred/excluded radio access technologies), terminal related preferences (request for preferred terminal behaviour or configuration, e.g. the power consumption of different modes), and user notification preferences (automatic or manual user notification). The dynamic input parameters include automatic context determination and manual context determination. The static input parameters of the terminal classification contain general terminal capabilities in terms of type of CPU, performance of CPU, total size of memory space, total capacity of battery, display size, colour depth of the display etc. The static input parameters also contain capabilities of the transceiver in terms of number of transceivers, supported radio access technologies, terminal capabilities for specific radio access technologies, maximum power consumption per mode etc. The dynamic input parameters include transceiver usage and available terminal resources in terms of e.g. available processing power, free memory space, available battery charge etc. Finally, the static input parameters of the network classification contain available modes (UMTS/FDD, UMTS/TDD, WLAN etc.) and a list of available QoS bearer services. The dynamic input parameters include available modes obtained from mode monitoring, quality of available modes obtained via radio access stratum parameters (signal level, signal quality, bit error rate, block error rate, etc.), and load information obtained from monitoring of cell loads for local and neighbouring cells. [16].

The area of measurements is a complex and comprehensive field of study. Despite being a very important part of the handover procedure, its complexity and comprehensiveness speak against further discussion in this research. When the measurements have been performed, they are gathered in a measurement report and sent to the entity that decides on the handover, the handover decision entity. The client is typically instructed to send the measurement report to the handover decision entity when the quality of a neighbouring cell exceeds a given threshold and the quality of the current cell is unsatisfactory. The measurements allow for analysing the actual situation as well as predicting the future situation in order to determine if it is possible or necessary to realise a handover. The decision-making process is the second step of the handover procedure.
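As an illustration of how the four classifications could be gathered into a single measurement report, consider the sketch below. The field names are assumptions chosen for the example, not a structure defined by [16], and a real report would contain far more parameters.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MeasurementReport:
    """Sketch of a measurement report combining static and dynamic parameters."""
    # application classification
    subscribed_services: List[str] = field(default_factory=list)
    active_services: List[str] = field(default_factory=list)
    # user preference and user context classification
    preferred_networks: List[str] = field(default_factory=list)   # e.g. ["WLAN", "UMTS"]
    cost_preference: str = "lowest"
    # terminal classification
    battery_level: float = 1.0                                     # fraction of full charge
    supported_rats: List[str] = field(default_factory=lambda: ["UMTS", "WLAN"])
    # network classification (dynamic radio measurements per available cell)
    signal_level_dbm: Dict[str, float] = field(default_factory=dict)
    cell_load: Dict[str, float] = field(default_factory=dict)

# Example report sent by the client to the handover decision entity.
report = MeasurementReport(
    active_services=["web browsing"],
    signal_level_dbm={"UMTS-cell-1": -95.0, "WLAN-ap-1": -60.0},
    cell_load={"UMTS-cell-1": 0.7, "WLAN-ap-1": 0.2},
)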

DECISION
Based on the measurement report from the measuring entity, the handover decision entity decides whether a handover is required or not. The handover decision can be made by one of two entities: the client (mobile terminal-controlled mode) or the network (network-controlled mode).

In the mobile terminal-controlled mode, the client measures the signal strength and quality from the serving base station and candidate base stations. If it finds a better candidate than the current one, e.g. in terms of signal strength, it initiates a handover. The network can broadcast various parameters to influence this process, but it is the client that makes the handover decision. The mobile terminal-controlled mode is therefore a completely decentralised mode without any centralised handover controller logical entity. All handover intelligence is distributed at the network edge in terms of base stations and clients. This makes the overall network architecture simple, highly scalable, and fault tolerant. The mobile terminal-controlled mode is ideally suited for pure packet-switched networks with distributed control and intelligent end devices. Additionally, it is harmonious with IP-based mobility management protocols such as Mobile IP, and consistent with multiple handover scenarios, e.g. inter-technology handovers. [15][17][18].

In the network-controlled mode, the network measures the signal strength and quality from the client and explicitly orders the client to connect to a specific cell if required. The client makes no measurements. This method results in intense network signalling and strained radio resources, which leads to latency and eventually long handover times. Furthermore, it is a centralised approach that implicitly assumes dumb mobiles and a smart network. This does not fit well with IETF decentralised mobility management protocols such as Mobile IP and is best suited for circuit-switched networks. The centralised structure also means limitations on scalability and flexibility, since the network entities can only handle up to a certain amount of traffic. Finally, in terms of inter-technology handovers, the network needs some kind of information from the client to base its decision on, since the client is the only entity that has knowledge of multiple networks. This is where the network-controlled mode fails. This is, however, compensated for in the mobile-assisted network-controlled mode. [15][17][18].

A variant of the network-controlled mode is the mobile-assisted network-controlled mode. In mobile-assisted network-controlled mode, the network measures the signal strength and quality analogous to the network-controlled mode. The network measurements are additionally supplemented with measurements from the client in order to make a more competent decision and compensate for the shortcomings of the network-controlled mode in terms of inter-technology handovers. Even though the network entity is influenced by information from the client, it controls the decision about a handover itself as well as the execution of a handover. The result is a system which reacts more quickly to changes in the radio environment and brings down the latency and the derived long handover times. [19].

In UMTS networks the handover decision is based on the mobile-assisted network-controlled mode, where the network entity, the RNC, decides on the handover based on measurement results from the client. [19]. In contrast, in WLAN networks the handover decision is based on the mobile terminal-controlled mode, where the client decides on the handover. [15].

There are many static and dynamic parameters to take into account in the handover decision procedure, as seen in the measurements section. Not all are as important as others, and as a result not all parameters are applied in every network configuration. It is the network decision algorithm that decides which parameters to include as well as their priority relative to one another. As there are many different algorithms in use, practically one for every network configuration, it is beyond the scope of this research to go into the details of the algorithms.

Given changes in the parameters, the decision entity decides whether a handover is required or not. The decision process may be initiated by one of the following handover triggers: signal quality, terminal mobility, cell load, and applications. If the signal quality of the serving cell lies below a given threshold, and the signal quality of an alternative cell is higher than the current one, then a handover may be triggered. Also, if the client moves faster or slower than what is optimal for the serving cell, a handover may be triggered. This could be the case where the client moves at high speed into a WLAN cell that innately does not handle high speeds very well. Alternatively, the client can move out of the coverage of the radio system and lose the radio connection. A handover to another radio system will then be required to maintain the radio connection. Additionally, if the overall load of the serving cell is high and the available bandwidth thus becomes too small, the network can trigger a handover to a less loaded cell to balance the load.
Finally, if the serving cell cannot support a starting application, e.g. in terms of bandwidth, or if the application running on the client downgrades the offered QoS, e.g. in terms of guaranteed bit rate, then the network can trigger a handover to another network that can support the application. The user can also implicitly trigger a handover by switching from e.g. a low-bandwidth application to a high-bandwidth application. [19].
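A much simplified decision rule based on the signal-quality and cell-load triggers could look like the sketch below. The threshold and hysteresis values are assumptions, and a real decision algorithm would weigh all the static and dynamic parameters listed in the measurements section.

SERVING_THRESHOLD_DBM = -90.0   # assumed minimum acceptable signal level
HYSTERESIS_DB = 5.0             # assumed margin to avoid ping-pong handovers

def handover_required(serving_level_dbm, candidate_level_dbm,
                      serving_load=0.0, load_limit=0.9):
    """Sketch of a signal-quality/load handover trigger.

    Returns True if the serving cell is too weak (or too loaded) and a
    candidate cell is sufficiently better than the serving cell.
    """
    weak_signal = serving_level_dbm < SERVING_THRESHOLD_DBM
    better_candidate = candidate_level_dbm > serving_level_dbm + HYSTERESIS_DB
    overloaded = serving_load > load_limit
    return (weak_signal or overloaded) and better_candidate

# Example: weak UMTS signal, strong WLAN signal -> hand over to the WLAN cell.
print(handover_required(serving_level_dbm=-97.0, candidate_level_dbm=-60.0))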


Incited by the handover triggers, the handover decision entity can decide to perform a handover. A handover should be avoided if the expected gain, e.g. QoS, is too low. However, if a handover is required, the handover decision entity orders the handover execution entity to perform the handover, which is the third and final step of the handover procedure.

EXECUTION
When the handover decision has been made, the handover execution entity is notified by the handover decision entity. The handover execution entity can be one of two entities: the client or the network. In UMTS networks the handover execution is performed by the network in terms of the drift RNC, whereas in WLAN networks the handover is performed by the client. [20]. Usually the handover execution entity initiates the handover once it has been notified by the handover decision entity. However, since neither of the two network technologies supports handover execution between UMTS and WLAN, alternatives are required in order to accomplish the handover execution. This is the subject matter for the next chapter, which looks much more into how handover between UMTS and WLAN can be executed.


6 Mobility protocols
As mentioned in previous chapters, neither UMTS nor WLAN has any inherent functions for performing inter-technology handovers between the UMTS and WLAN network technologies. Consequently, some sort of mobility protocol is required in order to execute such handovers. Mobility protocols exist at different layers of the Internet protocol stack. Each layer has its distinct responsibilities and thus exhibits individual behaviour. Likewise, the mobility protocols also exhibit individual behaviour when compared across the layers. Among the most discussed mobility protocols are the network layer protocols Mobile IP and Dynamic Domain Name System (DDNS), the transport layer protocol Mobile Stream Control Transmission Protocol (mSCTP), and the application layer protocol Session Initiation Protocol (SIP). These are further discussed and compared in the subsequent sections.

6.1 Mobile IP
The Internet Protocol version 4 (IPv4) is a fundamental network layer protocol that contains addressing information and some control information that enables data packets to be routed. The addressing information consists of a 32-bit unique identifier called the IP address. Every client on the Internet is identified by a unique IP address. The routing information contains the IP destination address plus some extra information used for finding a route between the source and destination. This dual function of the IP address creates a mobility conflict. In order for a client to be stably identifiable to others, it needs a stable IP address. However, if the IP address is stable, the routing to the client is also stable and the routing path is essentially always the same. This means no mobility.

Mobile IP version 4 (Mobile IPv4) is a supplementary network layer mobility protocol that addresses this problem by allowing the client to effectively utilise two IP addresses: one for unique identification, and the other for routing in terms of specifying the current location in the network. The home address is a static address used for identification, and the care-of-address is an address that changes at each new point of attachment. The upper transport and application layers keep using the home address, allowing them to remain ignorant of any mobility taking place, and thereby keeping the TCP connection alive. This means that Mobile IPv4 can provide transparency to the upper layers while providing seamless mobility using the care-of-addresses. [21][22][23].

When the client is in its home network, standard IP routing mechanisms deliver incoming and outgoing data packets to and from the client, respectively, using the home address. If the client changes location during the communication, from its home network to another (foreign) network, standard IP routing mechanisms do not suffice, as the client can no longer be reached by its home address. Instead it must obtain and register a care-of-address using Mobile IPv4 in order to continue communication unaffected. [21]. Figure 3 illustrates Mobile IPv4 mobility management.


Figure 3. Mobile IPv4 mobility management

The client is first located in its home network at location A, where it has established communication with a server in a foreign network through the Internet (1). The client then changes position from location A in the home network to location B in a foreign network (2). In order to sustain communication, Mobile IPv4 now enters the picture.

Mobile IPv4 employs two new network elements: a home agent in the home network and a foreign agent in the visited network. These two types of agents are new network elements in terms of software updates to existing routers, usually the border routers, or alternatively new routers with the agent software incorporated in the different networks. The foreign agents as well as the home agent advertise their availability by attaching a special extension to the router advertisement, the so-called agent advertisements. The advertisements are usually broadcast at regular intervals, e.g. once a second or once every few seconds. Alternatively, the client can send a router solicitation to ask the router to send router advertisements if impatient. When the client receives an agent advertisement, it determines whether it is in its home network or a foreign network (3). If it finds out that it is in a foreign network, it registers its new care-of-address, found in the agent advertisement or alternatively obtained by contacting Dynamic Host Configuration Protocol (DHCP) or Point-to-Point Protocol (PPP), with its home agent through the foreign agent (4)(5). Whenever the client moves to a new foreign network, it registers its new care-of-address. The home agent responds to this request by updating its routing table with the new care-of-address (typically performed though optional), approving and authorising the request, and finally returning a registration reply to the client through the foreign agent (6)(7). The reply includes a registration lifetime, which specifies how long the registration will be honoured by the home agent. The home agent associates the home address of the client with the care-of-address until the registration lifetime expires. The triplet of a home address, a care-of-address, and a registration lifetime is called a binding for the client. A registration request is therefore considered a binding update sent by the client, and the registration reply a binding acknowledgment.

After successful registration, the communication between the client and server can continue unaffected. When the client has data packets destined for the server, it sends the packets to the foreign agent, which then forwards them directly to the server (8)(9). In the reverse direction, when the home network receives data packets destined for the client, the home agent intercepts the packets and encapsulates them by preceding each packet with a new IP header, so-called IP-within-IP encapsulation (10). The new IP header contains among other things the destination address, which is the care-of-address. The home agent then tunnels the encapsulated data packets to the foreign agent using the care-of-address (11). The foreign agent receives the data packets, decapsulates them, and forwards them to the ultimate destination (12). This asymmetric way of routing data packets to and from the client has given it the name triangular routing. [21][22][23].

Mobile IPv4 was originally defined as an add-on for IPv4. For the emerging IP version 6 (IPv6), the mobility support (Mobile IPv6) has been an integrated feature from the beginning. This means that some of the identified problems from Mobile IPv4 have been addressed in Mobile IPv6. The major problems with Mobile IPv4 are deployment, triangular routing, tunnelling overhead, and security. [22][24]. Each of them is described separately below, as is the Mobile IPv6 approach to solving the problems.

A Mobile IPv4 deployment requires the implementation of foreign agents in each potential foreign network. However, the requirement for foreign agents means extensive network configuration. Mobile IPv6 deals with this problem by completely eliminating the foreign agents. It retains the ideas of a home network, a home agent, and the use of encapsulation to deliver packets from the home network to the client's current point of attachment. Figure 4 illustrates Mobile IPv6 mobility management.

Figure 4. Mobile IPv6 mobility management

The scenario for Mobile IPv6 is similar to the Mobile IPv4 scenario. The client is first located in its home network at location A, where it has established communication with a server in a foreign network through the Internet using standard IP routing mechanisms (1). The client then changes position from location A in the home network to location B in a foreign network (2). Instead of listening to the foreign agents advertising their availability through agent advertisements, the client listens to the router advertisements. The router advertisements in IPv6 have been extended with some extra bits, of which the prefix information option format has received one extra bit, allowing the router to efficiently advertise its global IPv6 address instead of the link local address. The client can determine whether it is in its home network or a foreign network from the network prefix in the router advertisement. If the network prefix equals the network prefix of the home address of the client, it is on its home network. If the client finds out that it is in a foreign network, it obtains a care-of-address and registers it with its home agent. The care-of-address is obtained using either stateful or stateless address auto-configuration. In the first situation, the client obtains the care-of-address by contacting e.g. a DHCPv6 server. In the latter, the client extracts the network prefix from the router advertisement and adds a unique interface identifier of the client to form the care-of-address. When the client has obtained a care-of-address, it sends a binding update to its home agent (3). The home agent responds with a binding acknowledgement (4). The Mobile IPv6 registration process thus mainly differs from the Mobile IPv4 registration process by the lack of foreign agents. [22][24][25].

Triangular routing means that all data packets sent to the client are routed through the home agent, adding delay to the traffic towards the client (not from the client) and increasing the load on the network. This problem is addressed in Mobile IPv6 by implementing route optimisation. Route optimisation was initially specified as a non-standard extension for Mobile IPv4. In Mobile IPv6, however, route optimisation is a fundamental part of the protocol and not just an extension. In route optimisation, the client first registers its care-of-address with the home agent as described above. It then sends a binding update directly to the server to notify the server of its new care-of-address (7), see Figure 4. The server responds with a binding acknowledgment and the two nodes can continue their communication unaffected (8)(9). The home agent can also receive data packets from the server before the client has registered its care-of-address with the server (5). In this case, the home agent receives the data packets, encapsulates them, and forwards them to the client (6). When the client receives the first encapsulated packet from the home agent, it sends a binding update to the server, which is answered with a binding acknowledgement from the server (7)(8). After this, the server and client can continue communication on a direct path without any interaction with the home agent. By bypassing the home agent, and thereby also triangular routing, the transport delay is reduced and the network capacity preserved. [22][24][25].

When the server sends packets to the client, the packets go via the home agent, which intercepts and encapsulates them, and on to the foreign agent through a tunnel. The tunnelling means that an overhead of typically 20 bytes is added to each packet (IP-within-IP encapsulation). Mobile IPv6 approaches the tunnelling overhead problem simply by removing the tunnelling function. [22][24][25].

Finally, there are some security concerns. When the client registers a care-of-address with its home agent, the home agent must be certain that the request originated from the client and not from some malicious node pretending to be the client. A malicious node could cause the home agent to alter its routing table in such a way that the client would be unreachable for all incoming communications, and in the worst case that these communications would be directed to the malicious node. Mobile IPv4 employs a security association between the home agent and the client by using the Message Digest 5 algorithm with 128-bit keys to create unforgeable digital signatures for registration requests. Mobile IPv4 does not, however, require that the foreign agents authenticate themselves to the client or home agent.
Mobile IPv6, in contrast, implements strong authentication and encryption features in all nodes using IP Security (IPsec). [22][24][25].
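The IP-within-IP encapsulation and its roughly 20-byte overhead discussed above can be made concrete with the sketch below, which builds the outer IPv4 header (protocol number 4) that the home agent prepends before tunnelling a packet towards the care-of-address. The addresses are example values, and identification and checksum handling are omitted for brevity, so this is an illustration of the overhead rather than a complete tunnelling implementation.

import socket
import struct

def encapsulate_ip_in_ip(original_packet: bytes, home_agent_ip: str,
                         care_of_address: str) -> bytes:
    """Sketch of Mobile IPv4 IP-within-IP encapsulation.

    A new 20-byte outer IPv4 header, with the care-of-address as
    destination, is prepended to the intercepted packet.
    """
    total_length = 20 + len(original_packet)      # outer header + inner packet
    outer_header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,                                     # version 4, header length 5 words
        0,                                        # type of service
        total_length,
        0, 0,                                     # identification, flags/fragment offset
        64,                                       # time to live
        4,                                        # protocol 4 = IP-in-IP
        0,                                        # header checksum (omitted in this sketch)
        socket.inet_aton(home_agent_ip),          # tunnel entry point: the home agent
        socket.inet_aton(care_of_address),        # tunnel exit point: the care-of-address
    )
    return outer_header + original_packet

# Example: the home agent tunnels an intercepted packet to the foreign agent.
tunnelled = encapsulate_ip_in_ip(b"...inner IP packet...", "192.0.2.1", "198.51.100.7")
print(len(tunnelled))   # 20 bytes longer than the original packet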


It might seem obvious simply to implement Mobile IPv6 instead of Mobile IPv4 with its inherent problems. However, it is not as uncomplicated as it may initially look. The main reason is that Mobile IPv6 has not yet been implemented other than on a small scale in test networks and the like. Implementing Mobile IPv6 on a full scale will still take some years. Nevertheless, many people foresee the arrival of Mobile IPv6 sooner or later, which is why this research will investigate both.

6.2 Dynamic Domain Name System (DDNS)


Dynamic Domain Name System (DDNS) is an alternative network layer mobility protocol, which has its roots in the Domain Name System (DNS). The DNS is a network architecture that was built to support the use of domain names, e.g. studpc1.staff.com.dtu.dk, as an alternative to IP addresses, e.g. 192.38.69.57. Humans prefer to work with names rather than numbers. All nodes on the Internet are assigned both an IP address and a corresponding domain name. Just as the IP addresses are unique worldwide, so are the domain names. The IP addresses and the corresponding domain names of all registered nodes on the Internet are gathered in the DNS database. The DNS database is a distributed system of many servers. No single server holds the entire database, out of capacity concerns. The DNS is organised in a hierarchical system. The top of the DNS hierarchy consists of top-level, non-purchasable domains maintained by a set of 13 servers called name servers. Below are the second-level domains, which are purchasable and administered by organisations. The second-level domains are maintained by a number of servers. And finally, below the second level are the local domains that are defined and administered by the overall domain owner.

The way DNS works is that when a client wants to communicate with a server, it needs the IP address of the server. It therefore creates a query for the IP address based on the domain name of the server. The query begins at the client, where it is passed to the client resolver, i.e. client-based software used by the client to retrieve information, for resolution. The client resolver first tries to answer the query locally using cached information obtained from a previous query. If this does not provide an answer, the query is passed on to the next level in the hierarchy, the second level. At the second level, the server tries to answer the query using its own cache of resource record information. Should the server not be able to answer the query, the query is passed on to the top level of the hierarchy, the name server. Similarly, should the name server not be able to answer the query, it can contact other name servers on behalf of the requesting client to fully resolve the name. These name servers can then pass the query down their hierarchies, trying to resolve it. When the query has been resolved, the IP address of the server is returned to the client. The client is then able to initiate a connection with the server based on the newly acquired IP address.

DDNS is a DNS extension that provides another approach for resolving dynamic IP addressing. The client needs to install a small piece of software called a dynamic domain name daemon, whose role is to dynamically update the client's new information in the DNS. The daemon runs in the background and is activated only when needed. When the client gets a new IP address, the daemon contacts the DNS and updates it with the new IP address. DDNS then uses the normal DNS procedures to resolve the IP addresses. In this way, even though a domain name's IP address changes often, other clients do not have to know the changed IP address in order to connect to the client. They can simply use the fixed domain name to look up the always-updated dynamic IP address of the client. [26][27][28].
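The behaviour of such a daemon can be sketched as a simple loop: whenever the locally observed IP address changes, an update for the fixed domain name is pushed to the authoritative DNS server. The current_ip() and send_dns_update() helpers are hypothetical stand-ins for the platform-specific address lookup and the actual DNS dynamic update message.

import time

def ddns_daemon(domain_name, current_ip, send_dns_update, poll_interval=5.0):
    """Sketch of a dynamic DNS daemon keeping the record for a fixed name up to date.

    current_ip() returns the address currently assigned to the client and
    send_dns_update(name, ip) pushes the new mapping to the authoritative
    DNS server; both are assumed helpers for this illustration.
    """
    registered_ip = None
    while True:
        ip = current_ip()
        if ip != registered_ip:                  # the point of attachment has changed
            send_dns_update(domain_name, ip)     # update the record for the fixed name
            registered_ip = ip
        time.sleep(poll_interval)                # the daemon otherwise stays idle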


DDNS provides excellent dynamic location update functions. It lacks, however, the functions for providing mobility in terms of seamless handover. The session inevitably breaks when the IP address changes if DDNS is not combined with another seamless handover mechanism such as Mobile IP. DDNS is therefore not a mobility solution in itself, as it requires help from other supporting protocols, and consequently it will not be considered any further in this research.

6.3 Mobile Stream Control Transmission Protocol (mSCTP)


Transport layer mobility is proposed as an alternative to network layer mobility for seamless mobility. Mobility management in the transport layer is solely accomplished by use of the Stream Control Transmission Protocol (SCTP) and its currently proposed Dynamic Address Reconfiguration (DAR) extension. SCTP with its DAR extension is called Mobile SCTP (mSCTP).

mSCTP is a transport layer protocol similar to the Transmission Control Protocol (TCP) that operates on top of the unreliable, connection-less packet network. It provides unicast end-to-end communication between two or more applications running in separate hosts and offers connection-oriented, reliable transportation of independently sequenced message streams. The biggest differences to TCP are multi-homing, the concept of several streams within a connection (multi-streaming) and the transportation of sequences of messages instead of a sequence of bytes. mSCTP is capable of handling multiple IP addresses at both endpoints while keeping the end-to-end connection intact. These addresses are considered as logically different paths between the endpoints. During initiation of the connection, lists of addresses are exchanged between the endpoints. Both endpoints must be able to receive messages from any of the IP addresses related to the endpoints. One address is chosen as the primary address and is used as the destination for normal transmission. The other addresses are used for retransmissions only. The SCTP DAR extension enables the endpoints to add, delete and change the primary address dynamically in an active connection without affecting the established connection. [29][30][31][32][33][34].

Similar to previously, the mSCTP scenario starts with a client located in its home network at location A, where it has established communication with a server in a foreign network through the Internet. Figure 5 illustrates the mSCTP connection initialization.

Figure 5. mSCTP connection initialization

In order to set up a transport layer connection with the server, the client first sends an init request to the server including a list of IP addresses and port number that will be used by the client (1). The server responds with an init acknowledgment including a state cookie and a list of IP addresses and port number that will be used by the server if it accepts the request (2). It can also contain indication about which is the primary IP address. The client must then return the state cookie from the init acknowledgement in what is known as a cookie echo (3). When the server receives the cookie echo, it moves to established state,


When the server receives the cookie echo, it moves to established state and responds with a cookie acknowledgment (4). The cookie exchange is a security enhancement. Finally, the client moves to established state (5). A connection has now been set up and a link has been established so that the two nodes can start communicating. Similar to TCP, mSCTP automatically produces an acknowledgement for each sequence of messages. [32].

The client then changes its position from location A in the home network to location B in a foreign network during the communication. To maintain the connection, the mSCTP handover procedure is now used. Figure 6 illustrates mSCTP mobility management.

Figure 6. mSCTP mobility management

During ongoing communication between the client and server (1), the client moves from location A to location B (2). As the client moves into the coverage area of the foreign network, it receives an IP address from the local address space at location B, either by contacting DHCP or by IPv6 address auto-configuration. The client is now able to establish a link with the server with its second IP address and thus become multi-homed, i.e. reachable by way of two different networks. The client therefore tells the server via the first link that it is reachable by a second IP address (3). In other words, it adds the newly assigned IP address to the association identifying the connection to the server. The server responds to this by returning an acknowledgment (4). As the client leaves the coverage area of the home network, the client tells the server to set the newly assigned IP address as the primary IP address (5), which the server responds to with an acknowledgment (6). The new primary IP address now becomes the destination address for further communication, and the communication can continue unaffected over the new link (7). Finally, the client tells the server to delete the first IP address, i.e. remove it from the association (8), which produces an acknowledgment from the server (9). [29][30][32].
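The message sequence of Figures 5 and 6 can be mimicked with a small, self-contained model, shown below. It is not a real SCTP/DAR implementation; endpoint names and addresses are illustrative, and the four-way association setup is reduced to a single call.

```python
# A minimal model of the mSCTP handover signalling described above.
class Endpoint:
    def __init__(self, name, addresses):
        self.name = name
        self.addresses = list(addresses)   # addresses offered at INIT time
        self.peer_addresses = []
        self.primary_peer = None

    def associate(self, peer):
        # Stands in for the INIT / INIT ACK (cookie) / COOKIE ECHO / COOKIE ACK exchange.
        self.peer_addresses = list(peer.addresses)
        peer.peer_addresses = list(self.addresses)
        self.primary_peer = self.peer_addresses[0]
        peer.primary_peer = peer.peer_addresses[0]
        print(f"{self.name}<->{peer.name}: association established")

    # DAR primitives as seen from the peer's point of view.
    def asconf_add(self, peer, new_addr):
        peer.peer_addresses.append(new_addr)
        print(f"{peer.name}: added {new_addr} for {self.name} (now multi-homed)")

    def asconf_set_primary(self, peer, addr):
        peer.primary_peer = addr
        print(f"{peer.name}: primary path to {self.name} is now {addr}")

    def asconf_delete(self, peer, old_addr):
        peer.peer_addresses.remove(old_addr)
        print(f"{peer.name}: removed {old_addr} for {self.name}")

client = Endpoint("client", ["10.0.0.5"])       # address in the home WLAN
server = Endpoint("server", ["192.0.2.10"])
client.associate(server)
client.asconf_add(server, "172.16.0.7")         # new address obtained in UMTS
client.asconf_set_primary(server, "172.16.0.7")
client.asconf_delete(server, "10.0.0.5")        # old WLAN address released
```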

6.4 Session Initiation Protocol (SIP)


An alternative to network and transport layer mobility is application layer mobility. A viable application layer mobility protocol is the IETF-developed Session Initiation Protocol (SIP). SIP is a signalling protocol mainly used to establish, modify and terminate multimedia sessions consisting of multiple media streams, unicast as well as multicast. The media streams include audio, video and Internet-based mechanisms such as distributed games, shared applications, shared text editors etc.

SIP users are addressed using email-like addresses of the form user@host, where user is the user name and host is the domain name or numerical address. The SIP address changes when e.g. the user changes network provider, moves to another job or changes organisation, not necessarily when the user changes location. For temporary changes of location, the user can have multiple SIP addresses and redirect calls to the current location. A SIP user can thus be represented by multiple SIP addresses, each of which can furthermore point to multiple devices. A SIP address that concurrently relates to multiple devices is something no other signalling protocol currently offers.

SIP defines four logical entities, namely user agents, registrars, redirect servers and proxy servers, and an abstract service known as the location service. The user agent has two roles: a user agent client that issues requests and receives responses, and a user agent server that receives requests directed to it and issues responses by accepting, rejecting or redirecting the request. The registrar is responsible for maintaining user agent access information based on incoming modification requests from the user agent. The registrar only manages requests targeted at SIP addresses within its managed domain. Typically, such requests concern the change of location of the user. All incoming requests are communicated onward to the location service that maintains this information. The redirect server keeps track of the user's location and manages redirecting contacts to user agents that are outside the registrar's domain. The redirect server returns only the location of the user; it does not relay any messages. The proxy server is responsible for relaying the messages.

The proxy servers are classified in two ways. The first classification is by the location of the proxy in the path from the source user agent to the destination user agent. The proxy closest to the source user agent is the outbound proxy, while the proxy closest to the destination user agent is the inbound proxy. All proxies in between these two are intermediate proxies. The second classification is statefulness. Stateless proxies forward requests and responses without actively generating new types of requests and responses and thus without ensuring the reliability of the requests. Stateful proxies respond to user agent client requests with the response closest to the user agent client's requirements and maintain state for the transaction. Finally, the location service is a database that contains location information about the user agents. The location service is used by the proxy and redirect servers to locate the user agent clients and user agent servers. Typically, a physical SIP server implements a redirect and proxy server with information provided by a built-in registrar. The location service can be stored either locally at the SIP server or in a dedicated location server. [35][36][37].

The SIP scenario starts with a client located in its home network at location A where it has established communication with a server in a foreign network through the Internet. Figure 7 illustrates the SIP connection initialization.


Figure 7. SIP connection initialization

As the client attaches to the home network, its user agent sends a location update to the registrar in the home SIP server (1). The registrar processes the update message and forwards it to the location service, which stores the information. The home SIP server in return sends an acknowledgement (2). The client now wants to communicate with a server located in a foreign network, so its user agent sends an invite request to its home SIP server. The home SIP server recognises that the request is not meant for it and forwards it to the SIP server belonging to the server's domain (3). The redirect server in the server's SIP server receives the request and consults the location service to find the location of the server. Most often the location service is able to find the address in the registration table; in some cases it can, however, only return the address of another redirect server. The location service returns the address to the redirect server, which returns it to the client's user agent (through its home SIP server) (4). The client's user agent confirms the response with an acknowledgement (5). The client's user agent now has the latest address of the server and is able to send its invite request to the server's user agent (6). Alternatively, instead of redirecting the request (4-6), a proxy server could forward the request to the server (7). The server's user agent acknowledges the request (8) regardless of whether it has gone through a redirect or a proxy server, and the two nodes can begin communicating (9). [35].

The client now changes its position from location A in the home network to location B in a foreign network during the communication. To maintain the connection, SIP implements a two-fold location update. Figure 8 illustrates SIP mobility management.

Figure 8. SIP mobility management


As the client moves from location A in the home network to location B in a foreign network during communication with the server (1)(2), it must update its location. Its user agent therefore sends a location update to the home SIP server so that new invite requests can be redirected correctly (3). The registrar processes the update message and forwards it to the location service, which stores the information. The home SIP server responds with an acknowledgment (4). Then the client sends a new invite request to the server's user agent using the same call identifiers as in the original connection setup (5). The request contains the new address, which tells the server's user agent where the client wants to receive future SIP messages. The server's user agent acknowledges the request (6) and the communication continues unaffected (7). [35].

Mobile IP has some shortcomings when it comes to delay-sensitive multimedia applications. The triangular routing adds handover delays and the tunnelling overhead adds extra bytes to the packet header. It is better suited for long-lived TCP connections like telnet, ftp, etc. In comparison, SIP is much better suited for real-time communication over UDP. It is, however, less suited for TCP-based applications. SIP is therefore often used either as a partial replacement for Mobile IP or as a complement, where SIP handles UDP connections and Mobile IP handles TCP connections. [35][36].
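To make the two-fold location update concrete, the sketch below assembles the SIP requests involved: the registration with the home SIP server, the initial invite and the mid-session re-invite that reuses the original call identifier but advertises the new address. All addresses and tags are hypothetical, and a real user agent would add further headers (Via, Max-Forwards, SDP bodies, authentication etc.).

```python
# Illustrative construction of the SIP messages used in the scenario above.
def sip_request(method, request_uri, headers):
    lines = [f"{method} {request_uri} SIP/2.0"]
    lines += [f"{name}: {value}" for name, value in headers]
    return "\r\n".join(lines) + "\r\n\r\n"

CALL_ID = "a84b4c76e66710@client.home.example"   # hypothetical call identifier

register = sip_request("REGISTER", "sip:home.example", [
    ("From", "<sip:user@home.example>;tag=1928301774"),
    ("To", "<sip:user@home.example>"),
    ("Call-ID", "reg-001@client.home.example"),
    ("CSeq", "1 REGISTER"),
    ("Contact", "<sip:user@10.0.0.5>"),          # current location (WLAN)
])

invite = sip_request("INVITE", "sip:server@foreign.example", [
    ("From", "<sip:user@home.example>;tag=1928301774"),
    ("To", "<sip:server@foreign.example>"),
    ("Call-ID", CALL_ID),
    ("CSeq", "1 INVITE"),
    ("Contact", "<sip:user@10.0.0.5>"),
])

# After the move, the re-invite keeps the same Call-ID but carries the new
# contact address obtained in the UMTS network.
re_invite = sip_request("INVITE", "sip:server@foreign.example", [
    ("From", "<sip:user@home.example>;tag=1928301774"),
    ("To", "<sip:server@foreign.example>;tag=8321234356"),
    ("Call-ID", CALL_ID),
    ("CSeq", "2 INVITE"),
    ("Contact", "<sip:user@172.16.0.7>"),
])

print(re_invite)
```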

6.5 Mobility protocol comparison


The previous sections have discussed each of the four mobility protocols. This section examines the comparative differences in more detail. It is important to notice that there is no straightforward answer to the problem of choosing the right layer and thus protocol for mobility. Approaches used at different layers often complement rather than exclude each other, and the environment and markets greatly dictate which aspects are most appreciated at a given time. Nevertheless, each layer and hence mobility protocol has benefits and drawbacks that can help point in a direction. Table 2 lists the most distinct differences between the mobility protocols. The table relates to the situation referred to previously, where a client and server have established communication and where the client moves from its home network to a foreign network during the ongoing communication.
Mobile IPv4
  Layer:              Network
  Transparency:       Yes
  Transport services: TCP/UDP
  Deployment:         Home agent in home network; foreign agent in foreign network;
                      Mobile IPv4 supported by home agent, foreign agent and client

Mobile IPv6
  Layer:              Network
  Transparency:       Yes
  Transport services: TCP/UDP
  Deployment:         Home agent in home network;
                      Mobile IPv6 supported by home agent and client

mSCTP
  Layer:              Transport
  Transparency:       Yes
  Transport services: SCTP
  Deployment:         mSCTP supported by client and server

SIP
  Layer:              Application
  Transparency:       Yes
  Transport services: UDP
  Deployment:         SIP server in home network;
                      SIP supported by SIP server, client and server

Table 2. Comparison of mobility protocols


The mobility protocols first and foremost differ by the layer they operate at. As mentioned in the beginning of this chapter, each layer has its distinct responsibilities and acts based on these.

Mobile IPv4 and Mobile IPv6 are network layer protocols. The primary function of the network layer is to transport packets from a sending host through a number of intermediate routers to a receiving host. Thus both end hosts as well as every intermediate router must implement network layer protocols. Routing algorithms determine which route the packets must take from source to destination. As a result, the routers need not worry about anything other than forwarding packets, i.e. moving the packets from the router's input port to an appropriate output port. This also means that the routers keep no state about end-to-end connections.

mSCTP is a transport layer protocol. The primary function of the transport layer is to provide logical communication between application processes running in the end hosts. No third party other than the end hosts participates in providing the transport layer services; it is end-to-end controlled. Therefore only the end hosts need to implement transport layer protocols. This means that no additional network entities are required, nor are modifications of existing network entities.

Finally there is SIP, which is an application layer protocol. The primary function of the application layer is to define messages exchanged between processes running in different hosts. The types of messages exchanged are e.g. request and response messages. As a result, application layer protocols need only be implemented in the end hosts.

As can be seen, there are quite some differences between the layers and their protocols. In general, the lower the level in the Internet protocol stack, the more network entities are involved. This means that if a network layer mobility protocol were chosen in preference to a higher layer mobility protocol, more network entities would be influenced. Also, choosing a lower layer mobility protocol would mean a hop-by-hop approach as opposed to a higher layer mobility protocol that implements an end-to-end approach.

The mobility protocols also differ by transparency. Transparency is the ability to appear unaltered to the upper layers so they do not notice a difference in receiving services from a new lower layer protocol in comparison with a traditional one. All the protocols provide transparency, though at different levels. Mobile IPv4 and Mobile IPv6 provide transparency to both the transport and the application layer. One level up, mSCTP also provides transparency, although only to the application layer. Finally, SIP provides a different type of transparency, namely transparency relative to the user. Consequently, transparency in different shapes is provided by all protocols.

Another difference between the mobility protocols is their transport services. The network layer protocols provide a common service for both TCP and UDP at the upper transport layer. The transport layer protocol mSCTP, however, differs by providing a new transport service, SCTP. SCTP is a transport service designed eventually to replace TCP and, in the longer run, possibly also UDP. It is in many aspects better than the two; however, it suffers from almost all current software implementations relying on TCP or UDP. Choosing SCTP as transport service will evidently cause some problems and reconfigurations.
Finally, the application layer protocol SIP relies on UDP as the main transport service. SIP is not very well suited for TCP-based services, which is why it is often used together with Mobile IP. As a result of all this, the choice of mobility protocol needs to take into consideration what type of traffic is to be expected.


Mobile IP accommodates both TCP and UDP well, while mSCTP and SIP handle SCTP and UDP the best, respectively. Mobile IP might therefore seem the best choice initially. There are, however, other aspects that must also be taken into consideration, e.g. deployment.

The deployment of the mobility protocols also differs significantly. Mobile IPv4 is probably the most demanding mobility protocol when it comes to deployment. To provide complete client mobility, a home agent in the home network as well as a foreign agent in every foreign network the client visits is required. Furthermore, the home and foreign agent(s) as well as the client are required to support Mobile IPv4. As a result, Mobile IPv4 involves quite some network modifications. The client's home network operator is responsible for setting up and maintaining the home agent including the Mobile IPv4 support, while the network operator(s) of the foreign network(s) is responsible for the foreign agent. The home network operator does not necessarily have to set up a foreign agent itself; however, in order to provide client mobility the operator must have some agreement with other network operators to use their foreign agents. So far it is the user that must make sure to download a Mobile IPv4 add-on to the client and set it up, but in the near future Mobile IPv4 is expected to be an innate part of the client operating system.

Mobile IPv6 is similar to Mobile IPv4, however on a smaller scale. It requires a home agent in the home network as well as support for Mobile IPv6 in the home agent and the client. Evidently, it does not require as extensive network modifications as Mobile IPv4. Similar to Mobile IPv4, it is the home network operator that is responsible for the home agent in terms of set-up, maintenance and Mobile IPv6 support, and the user that is responsible for the client in terms of downloading Mobile IPv6 and setting it up.

mSCTP, in contrast, does not require any additional entities or modifications of intermediate entities. The only thing required is support for mSCTP in the client and server. This makes the network architecture simple and easier to deploy. As with Mobile IP, the users of the client and server are responsible for downloading mSCTP and setting it up on the client and server, respectively. Like Mobile IP, mSCTP can be expected to be an innate part of the client/server operating system in some years. It is, however, not as likely as it is with Mobile IP, since it has a strong competitor in the established TCP/UDP transport protocols.

Finally, SIP requires an additional SIP server in the home network in order to provide mobility. This comparison targets a scenario where communication between the client and server has already been established; to establish communication, a SIP server in the server's network would also be required. Also, the SIP server as well as the client and server are required to support SIP. The home network operator is responsible for set-up, maintenance and SIP support of the SIP server, while the users of the client and server are responsible for downloading SIP and setting it up on the client and server, respectively. It is also expected that SIP in a few years will be an innate part of the operating systems; however, at present it must be downloaded and set up manually.

Summing up on the findings, mSCTP seems to have the simplest deployment, followed by SIP. SIP, however, does not provide mobility for all types of traffic and is therefore used in conjunction with Mobile IP.
This brings the deployment complexity roughly into line with that of Mobile IP. As can be seen, choosing the right layer and, as a consequence, mobility protocol is not straightforward. In some situations one protocol may look better than the others, while in other situations it certainly does not. To make a rational decision, a more practical approach is valuable; a practical approach that evaluates and compares the performance of the four mobility protocols.


To evaluate and compare the performance of the four mobility protocols in question, there are a number of alternatives available. One apparent alternative is to build a complete physical network and measure on site. Physical measurements are, however, not feasible if the network is large, a large number of input variables can make the system difficult to study, and the costs of setting up the network and conducting the measurements can be significant. Alternatives to physical measurements are emulations, analytical models and simulations. These are not as expensive to conduct and have the advantage of being repeatable over and over again without breaking anything and without adding to the costs. Of the three alternatives, simulations have the advantage of working even for very large systems. Consequently, simulation became the preferred approach to assess the value of the mobility protocols in terms of handover. Simulation is a two-fold process of network modelling and network simulation. The outcome of the simulation depends entirely on the network modelling, which consequently takes up an important part of the process. Network modelling and network simulation are therefore treated separately in the two following chapters.


7 Network modelling
Network modelling is the process of defining a feasible network topology, which can then be used in the network simulation for executing a simulation and analysing and evaluating the performance of the network. Basic network modelling involves three stages: project modelling, node modelling and process modelling.

Project modelling is where the general network topology is defined in terms of e.g. the scale of the network (e.g. world, enterprise, campus, office etc.), the size of the network (x and y span in degrees, metres, kilometres etc.), the technologies to be used (e.g. ATM, Ethernet, UMTS etc.), and nodes and links. Node modelling defines the behaviour of each network object defined in the project model. Behaviour is defined using different modules, each of which models some internal aspect of node behaviour such as data creation, data storage etc. Modules are connected through packet streams or statistic wires. A network object is typically made up of multiple modules that define its behaviour. Finally, process modelling defines the underlying functionality of the node model. It is represented by Finite State Machines (FSMs) and is created with icons that represent states and lines that represent transitions between states.

The following sections go into more detail with each of these three stages, resulting in a network model that can simulate the performance of the four mobility protocols. Also, some reasonable input values are defined before the network model is finally validated.

7.1 Project model


As mentioned, the objective of project modelling is to define the general network topology. To define the network topology it is useful to start from the network scenario. The network scenario of this research consists of two communicating nodes. The two nodes could in practice be different combinations, such as two clients, two servers or a client and a server, but for practical reasons the client-server combination was chosen. The client is initially located in a WLAN network where it has established communication with the server. The client could equally have been located in a UMTS network, but for practical reasons WLAN was chosen as the initial network. During the ongoing communication with the server, the client changes location from the WLAN to a nearby UMTS network. Both nodes are theoretically movable, but for this scenario only the client's movements are considered. When the client changes location, it triggers a location update.

To ensure that the two nodes can continue their communication unaffected by the client's change of location, some sort of mobility protocol is consequently implemented in the network. The degree of successful continued communication depends on the performance of the mobility protocol. A good mobility protocol ensures that the ongoing communication can continue unaffected without the user noticing anything. An average mobility protocol can also ensure ongoing communication; however, the user may experience some delays in the communication. Finally, a poor mobility protocol cannot even ensure ongoing communication, as the connection may fail during the change of location.

The usual way to implement this kind of scenario would be to set up a network consisting of a client and a server and implement the mobility protocol in the two nodes. This would nevertheless entail substantial work just to get the protocol implemented and would also add to the complexity of the network modelling.


Instead, this network modelling pursues another and much simpler approach: implementing the effects of the mobility protocol rather than the protocol itself. By implementing only the effects, the network model is kept simple and results with regard to development, validation and simulation are provided much faster. The network modelling and simulation tool OPNET Modeler, educational version, release 10.5.A PL3, build 2570 (henceforth OPNET) was chosen as the preferred tool to implement such a network model. The resulting project model is therefore two communicating nodes, a client and a server, that communicate influenced by the effects of an intermediate mobility protocol. Figure 9 shows the project model as implemented in OPNET.

Figure 9. Project model

The client is an Ethernet workstation set up for client-server applications running over TCP/IP and UDP/IP at data rates of up to 100 Mbps. The client was set to the profile type engineer, which is one of the five types defined in the profile definition. The engineer type was defined to make use of four different applications: heavy HTTP 1.1 web browsing (heavy browsing), light email (medium load), light telnet (low load) and heavy file transfer (high load). The server is an Ethernet server set up for server applications running over TCP/IP and UDP/IP at data rates of up to 100 Mbps. The three nodes are interconnected by a 100BaseT Ethernet connection operating at 100 Mbps.

7.2 Node model


The next stage of the network modelling process is the node model. The three network objects in the project model are all described by underlying node models. The underlying node models for the client and server are OPNET-defined node models, whereas the underlying node model for the mobility protocol has to be defined. Figure 10 shows the resulting node model implemented in OPNET.


Figure 10. Node model

The node model consists of different modules that together define the behaviour of the overlying mobility protocol network object. The modules are connected through packet streams. The upper branch is the downlink connection (server-client), where the data is received from the server at the right receiver (pr_0) and forwarded through a queue (server-client) to the left transmitter (pt_0) and on to the client. The lower branch is consequently the uplink connection (client-server), where the data is received from the client at the left receiver (pr_1) and forwarded through a queue (client-server) to the right transmitter (pt_1) and on to the server. The two branches are interconnected by two logical connections.
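A toy rendering of the two branches is sketched below purely to mirror the structure of the node model (receiver, queue and transmitter per direction); the actual forwarding behaviour, including delays and packet drops, is defined by the process model in the next section.

```python
# Structural sketch of the mobility-protocol node: two unidirectional pipes.
from collections import deque

class Branch:
    """One unidirectional branch of the mobility-protocol node."""
    def __init__(self, label):
        self.label = label
        self.queue = deque()

    def receive(self, packet):        # corresponds to pr_0 / pr_1
        self.queue.append(packet)

    def transmit(self):               # corresponds to pt_0 / pt_1
        return self.queue.popleft() if self.queue else None

downlink = Branch("server-client")    # upper branch in Figure 10
uplink   = Branch("client-server")    # lower branch in Figure 10

downlink.receive("data packet from server")
print(downlink.transmit())            # forwarded on towards the client
```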

7.3 Process model


The last stage of the network modelling process is the process model. The process model defines the underlying functionality of the two server-client and client-server queue modules in the node model. The process model is represented by an FSM, which defines the states of the modules and the criteria for changing states. To build an FSM it is useful to follow the FSM Modelling Methodology (FMM), which is a systematic approach to building FSMs. The FMM involves five steps [38]: context definition, process level decomposition, enumeration of events, event response table development, and specification of process actions. Each step is considered further in the following.

CONTEXT DEFINITION
The context definition is the first step of the FMM. The purpose of this step is to identify the different modules of the network scenario and how they interact, as well as developing a system diagram. The modules of the outlined scenario are narrowed down to a client, a server and a mobility protocol. The client and the server implement two-way communication where the client sends packets to the server and the server sends packets to the client. In between exists some mobility protocol that ensures that the packets arrive at the destination also during location updates. The mobility protocol relates to both the client and the server, although the server in most situations is not likely to move about. However, the process model is defined also to fit a client-client combination where both nodes are movable. The resulting system diagram is illustrated in Figure 11.


Figure 11. System diagram

PROCESS LEVEL DECOMPOSITION


The second step of the FMM is the process level decomposition. This step helps to provide an overview of the complexity of the processes in terms of a single process or multiple processes. The process level decomposition of this research consists of two processes: one process from the client to the server and another process from the server to the client. Both processes involve the sending and arrival of packets supported by some mobility protocol. In fact, the two processes differ only in source and destination, which makes them mirror images of each other and thus reduces the complexity. Further work takes advantage of this and develops only one process model, which applies to both processes.

ENUMERATION OF EVENTS
The third step of the FMM is the enumeration of events. The purpose of this step is to define a list of all the logical events that may occur during the process. For each event, the event implementation method (interrupt type) is furthermore defined. Table 3 lists all possible events for the outlined scenario. The events represent all the actions that can occur in the mobility protocol module as packets are sent from either the client or server to either the server or client, respectively. The events only cover events introduced to the mobility protocol module and not events introduced by the mobility protocol module.
Event name                       Event description                                                       Interrupt type
Power up                         Initialization                                                          Begsim
Packet arrival                   A packet has arrived from the source (client/server)                    Stream
Send packet                      The packet is sent to the destination (server/client)                   Self
Incoming registration request    A registration request (location update) has arrived (client/server)   Self
Registration request complete    The registration request (location update) is completed                Self

Table 3. Enumeration of events

EVENT RESPONSE TABLE DEVELOPMENT


The fourth step of the FMM is the event response table development. This step goes into more detail about the different states of the process as well as the events (and possibly conditions) that trigger an action that may lead to a new state (or back to the same). Table 4 lists the resulting event response table.


State   Event                            Condition   Action                                              Final state
init    Power up                         Always      Schedule event (first location update)              idle
idle    Packet arrival                   Always      Queue packet; schedule event                        idle
                                                     (client/server - server/client delay)
idle    Packet from queue                Always      Take packet from queue; send packet                 idle
idle    Incoming registration request    Always      Schedule event (location update delay);             loc
                                                     schedule event (next location update)
loc     Packet arrival                   Always      Destroy packet                                      loc
loc     Packet from queue                Always      Take packet from queue; send packet                 loc
loc     Registration request complete    Always      None                                                idle

Table 4. Event response table
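Before turning to the OPNET implementation, the event response table can be read directly as a small state machine. The sketch below renders that reading in Python for illustration only: packets are queued and forwarded in the idle state and destroyed while a location update is in progress. It abstracts away the OPNET interrupts, delays and statistics; the authoritative implementation is the Proto-C code in Appendix A.

```python
# Illustrative state machine corresponding to Table 4 (idle/loc behaviour).
from collections import deque

class MobilityEffectFSM:
    def __init__(self, send_fn):
        self.state = "idle"            # reached immediately after the forced init state
        self.queue = deque()
        self.send = send_fn
        self.dropped = 0

    def packet_arrival(self, packet):
        if self.state == "idle":
            self.queue.append(packet)  # queue packet (delay event not modelled here)
        else:                          # loc state: location update in progress
            self.dropped += 1          # incoming packets are destroyed

    def packet_from_queue(self):
        if self.queue:
            self.send(self.queue.popleft())

    def incoming_registration_request(self):
        self.state = "loc"             # location update starts

    def registration_request_complete(self):
        self.state = "idle"            # location update finished

fsm = MobilityEffectFSM(send_fn=lambda p: print("sent", p))
fsm.packet_arrival("pkt-1"); fsm.packet_from_queue()     # normal forwarding
fsm.incoming_registration_request()
fsm.packet_arrival("pkt-2")                              # dropped during the update
fsm.registration_request_complete()
print("dropped:", fsm.dropped)
```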

SPECIFICATION OF PROCESS ACTIONS


The fifth and final step of the FMM is the specification of process actions. This step visualises the outcome of the event response table in a process model and implements it in OPNET. Figure 12 shows the process model that corresponds to the event response table, followed by explanations of the OPNET implementation.


Figure 12. Process model

The first state is the init state. The init state is a forced (green) state, so it immediately executes its enter executives and transitions to the next state, the idle state. The enter executives of the init state define a number of attribute values: the client-server and server-client delays, the location update delay, the location update delay parameter, fixed/random location update, the location update interval, and some end point values for calculating location update interval and delay distributions.

The client-server and server-client delays denote the time it takes to send a packet between client and server and between server and client, respectively. Similarly, the location update delay represents the time it takes to perform a location update, i.e. from the moment the client relocates until it has informed the relevant parties of its new location. The location update delay parameter is a parameter used later on to calculate new location update delay values. When the client relocates, the distances between the client and the other network objects change, which means that the different delay values change. The client-server and server-client delay values and the location update delay values are proportional, which means that when the client-server and server-client delay values change, the location update delay value also changes by some fixed parameter, the location update delay parameter.

Furthermore, the fixed/random location update attribute indicates whether the location update interval is set to a fixed or a random value.


If it is set to fixed, the location updates occur with the same fixed interval, and if set to random the location updates occur with a random interval. Next, the location update interval denotes the time between the location updates. If the location update interval is set to a fixed value, the location update interval attribute value represents the fixed interval value. Conversely, if the location update interval is set to a random value, the location update interval attribute value represents the first interval value, which is then changed by some distribution value. Finally, the end point values designate input parameters for calculating location update interval and delay distributions that are used later on for calculating new (random) location update interval values and new client-server and server-client delay values. These attribute values remain the same until the first location update takes place.

The enter executives furthermore schedule an event for the first location update based on the location update interval value. Also, a technology distribution function returning uniformly distributed values between 0 and 1 is implemented to allow the client to change network technology when it performs a location update. The change of network technology can be from UMTS to WLAN, WLAN to UMTS, UMTS to UMTS or WLAN to WLAN. What is characteristic about these changes of network technology is that an extra delay is added to or subtracted from the client-server, server-client and location update delay values. The network technology is initially set to WLAN, while the probability of changing network technology from WLAN to UMTS is set to 0.8 (and thus 0.2 from UMTS to WLAN). Finally, a range of statistics is defined. The statistics include the start time for each location update, the value of the location update delay, the end time for each location update, the location update delay values for the network technology changes, the packet delays, the packet counts and the packet drops.

When the enter executives have been executed, the system transitions to the next state, the idle state. There is no condition, which means that the transition is always performed. The idle state is an unforced (red) state, which means that it executes its enter executives and then pauses, allowing the simulation to turn its attention to other entities and events in the model. The idle state can be interrupted by three different events at any time: packet arrival, packet from queue and incoming registration request (location update).

If a packet arrives, the first event, packet arrival, takes place and the system transits to the forced state arrival. There is no condition, which means that the action is always performed. The action involves queuing the packet and scheduling a delay event. The delay event corresponds to the delay that is introduced when sending a packet from the client to the server or from the server to the client. Also, the statistics for packet delays and packet counts are updated. When the scheduled event times out, the system returns to the idle state. Back in the idle state, the second event, packet from queue, then takes place as the previous event queued a packet, and the system transits to the forced state send. Since there is no condition here either, the action is performed right away. The packet is taken out of the queue and sent on to the destination. Similar to before, the system then returns to the idle state.

Finally, the last event, incoming registration request, takes place when the event first location update scheduled in the init state expires. There is also no condition, so the action is always performed. The system responds to the event by transitioning to the unforced state loc. During the transition the system executes the function location update from the function block. The location update function involves the following steps.


First, a new location update interval value is set. If the location update interval is chosen to be a fixed value, the new location update interval is set to the same value and the following location updates take place at regular intervals. If the location update interval is chosen to be a random value, a delta location update interval value is found from a random distribution function and added to the previous location update interval value, and the following location updates then take place at variable intervals. Irrespective of a fixed or random interval, the location update interval is checked against the location update delay value to ensure that the next location update is not scheduled before the ongoing location update is done.

Next, the client-server and server-client delay values are altered to reflect the change of location. A positive or negative delta delay value is found from a random delay distribution function and added to the previous delay values. After that, the client-server and server-client delay values are further changed to reflect the change of network technology. From the technology distribution function defined in the init state, it is also decided what type of network technology change has occurred. The init state sets the first network technology to WLAN. Based on the theory, it was estimated that the probability of a client being in a UMTS network amounts to 0.8, as UMTS has a wider coverage. If the outcome of the technology distribution function therefore falls within 80%, i.e. is less than 0.8, the network technology has changed. As the initial network technology is WLAN, the delay value increases by 31 ms because the network technology changes to UMTS.

The delay value increase of 31 ms comes from the maximum Ethernet frame length of 1526 bytes, which equals 12,208 bits. Transmitting a maximum Ethernet frame on a UMTS network with the average bit rate of 384 kbps thus takes 0.0318 s (12,208 bits divided by 384,000 bps) or approximately 32 ms. Conversely, transmitting a maximum Ethernet frame on a WLAN with the average bit rate of 11 Mbps takes 0.0011 s (12,208 bits divided by 11,000,000 bps) or approximately 1 ms. The UMTS network is therefore estimated to add an extra 31 ms delay (32 ms minus 1 ms) compared to the WLAN.

The network technology is now UMTS and the probability of changing network technology, i.e. to WLAN, is set to 0.2. At the next location update, the same procedure is repeated. If the outcome of the technology distribution function falls within 20%, the network technology has changed from UMTS to WLAN and the delay value decreases by 31 ms. The network technology is consequently set to WLAN and the probability of changing network technology, i.e. to UMTS, is set to 0.8. Finally, if the outcome of the technology distribution function does not fall below the probability of changing network technology, there is no network technology change. Instead, the client has moved from a UMTS network to another UMTS network or from a WLAN to another WLAN. The delay values are still adjusted for the relocation itself, but no technology-related delay is added or subtracted.

The next step of the location update function is to update the location update delay value. As the location update delay value follows the client-server and server-client delay values by some fixed location update delay parameter, the location update delay value is additionally altered proportionally to the client-server and server-client delay values. After that, two events are scheduled. The first event schedules how long the current location update takes, i.e. the location update delay, based on the updated location update delay value. When the location update delay has been simulated, the system transits to the idle state. The second event schedules the time for the next location update based on the fixed/random location update interval value. When the next location update arrives, the system performs a new location update and returns to the loc state after executing the location update function. Finally, the statistics for location update start time, location update delay value and location update delay values for the network technology changes are updated.
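The technology-dependent part of the delay adjustment can be illustrated with the short sketch below. It re-derives the 31 ms figure from the frame-size argument above and applies the 0.8/0.2 change probabilities; it is a simplified stand-in for the OPNET code, not a copy of it.

```python
# Sketch of the technology-change delay adjustment at a location update.
import random

FRAME_BITS = 1526 * 8                          # maximum Ethernet frame, 12208 bits
UMTS_MS = FRAME_BITS / 384_000 * 1000          # ~32 ms at 384 kbps
WLAN_MS = FRAME_BITS / 11_000_000 * 1000       # ~1 ms at 11 Mbps
TECH_DELTA_S = (UMTS_MS - WLAN_MS) / 1000      # ~0.031 s extra delay in UMTS

def technology_change(current_tech, delay_s, rng=random.random):
    """Possibly switch network technology and adjust the one-way delay."""
    p_switch = 0.8 if current_tech == "WLAN" else 0.2
    if rng() < p_switch:
        if current_tech == "WLAN":
            return "UMTS", delay_s + TECH_DELTA_S
        return "WLAN", delay_s - TECH_DELTA_S
    # Same technology: only the random delta_delay (not shown here) applies.
    return current_tech, delay_s

tech, delay = "WLAN", 0.050                    # e.g. the Mobile IPv4 client-server delay
for _ in range(3):                             # three consecutive location updates
    tech, delay = technology_change(tech, delay)
    print(tech, round(delay, 3))
```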


Having executed the location update function, the system arrives at the loc state. Similar to the idle state, the loc state executes its enter executives, in this case none, and then pauses while waiting for events to interrupt. The system remains in the loc state for the duration of the location update delay scheduled in the location update function and then returns to the idle state. During this period three events can take place: packet arrival, packet from queue and registration request complete.

If a packet arrives during the location update, the event packet arrival takes place and the system transits to the forced state loc_arrival. There is no condition, which means that the action is always performed. The action involves destroying the packet. Also, the statistic for packet drops is updated. When that is done, the system returns to the loc state. If there are any packets in the queue during the location update, the event packet from queue takes place and the system transits to the forced state loc_send. There is no condition, so the action is always performed. The system takes the first packet out of the queue and sends it on to the destination. It then returns to the loc state. When the scheduled event location update delay from the location update function comes to an end, the event registration request complete takes place. There is no condition, so the action is always performed. The system consequently transits to the idle state. During the transition the system executes the function location update complete from the function block. The location update complete function involves updating the location update end time statistic.

Back in the idle state, the cycle is complete and the pattern described above repeats continuously as packets arrive and location updates are performed. Appendix A displays the OPNET source code. Having visualised and implemented the process model in OPNET, the network model is more or less set for the actual network simulation. Before proceeding with the simulation, two things are however necessary in order to obtain reliable results: realistic input values and validation of the network model. These are dealt with in the following sections.

7.4 Input values


In order to get reliable results out of the network simulation, realistic input values are required. This section goes into more detail about how the implemented input values are determined. There are two types of input values: the code input values and the attribute input values. The code input values are the values implemented in the actual network model code, while the attribute input values are the values the user can decide on and adjust. Both types of input values are explained in more detail in the following.

CODE INPUT VALUES


The code input values include the client-server and server-client delays (delay), the location update delays (loc_update) and the location update parameters (loc_param). Each is described further in the following. Table 5 lists the estimated client-server and server-client delay values as well as the location update (loc_update) delay values employed by the four mobility protocols, followed by explanatory remarks.


Mobile IPv4
  Client-server      Client - foreign agent - server                                        50 ms
  Server-client      Server - home agent - foreign agent - client                           75 ms
  Location update    Client - foreign agent - home agent - foreign agent - client          100 ms

Mobile IPv6
  Client-server      Client - server                                                        25 ms
  Server-client      Server - client                                                        25 ms
  Location update    Client - home agent - client and client - server - client
                     (client-initiated)                                                    (50 ms)
                     Server - home agent - client - server - client (server-initiated)    (100 ms)
                     Average                                                                75 ms

mSCTP
  Client-server      Client - server - client                                               50 ms
  Server-client      Server - client - server                                               50 ms
  Location update    Client - server - client - server - client - server - client          150 ms

SIP
  Client-server      Client - server                                                        25 ms
  Server-client      Server - client                                                        25 ms
  Location update    Client - home SIP server - client and client - server - client         50 ms

Table 5. Delay values

To estimate the different kinds of delay values, the paths used for client-server and server-client packet transmission as well as for location update were first identified from the theory in chapter 6. Based on examples from the literature and rational assumptions, the delay values could then be obtained. [35] estimates the typical one-way delay between a client and a home agent in Mobile IPv4 to be about 20-50 ms. As a worst case assumption, the one-way delay between the client and home agent was therefore set to 50 ms including processing time etc. Based on this, the delays between the home agent and foreign agent and between the foreign agent and client were assumed to be half the client-home agent delay, i.e. 25 ms each. Also, the delays between the server and home agent and between the foreign agent and server were equally assumed to be half the client-home agent delay, i.e. 25 ms each. These estimates are illustrated in Figure 13 below.

Figure 13. Mobile IPv4 delay values

By adding together these estimated intermediate delay values, the client-server, server-client and location update delay values in Table 5 were obtained.

The delay values for Mobile IPv6 were similarly estimated based on the previous findings. Since Mobile IPv6 eliminates the use of foreign agents, the one-way delay between the client and home agent was assumed to be half the delay of that in Mobile IPv4, i.e. 25 ms. The delay between the server and home agent was assumed to be similar to Mobile IPv4, i.e. 25 ms, and the delay between the client and server was assumed to be similar to the delay between the foreign agent and server in Mobile IPv4, i.e. 25 ms. The estimates are illustrated by Figure 14.

Figure 14. Mobile IPv6 delay values

Similar to before, the client-server and server-client delay values in Table 5 were obtained by adding the estimated intermediate delay values. The location update delay value was obtained by taking into account that Mobile IPv6 supports client-initiated and server-initiated route optimisation. In client-initiated route optimisation, the client registers its care-of-address with its home agent as well as with the server. This is assumed to be done simultaneously, which means that the location update delay value totals 50 ms. In server-initiated route optimisation, the server sends data packets to the home agent before the client has registered its care-of-address with the server. When the client receives the first data packet from the home agent, it therefore immediately registers its care-of-address with the server. The location update delay value consequently totals 100 ms. To allow for both situations, an average location update delay value of 75 ms was estimated. Based on the Mobile IPv6 delay value, the delay introduced between the client and server in mSCTP was also assumed to take on a value of 25 ms. Figure 15 illustrates the estimate.


Figure 15. mSCTP delay value

Finally, [35] estimates that the location update delay introduced by SIP is approximately equal to the delay introduced between the client and home agent in Mobile IPv4. As a worst case assumption, the location update delay was therefore set to 50 ms. The SIP location update procedure involves two processes, a home SIP server location update (client-home SIP server-client) and a server location update (client-server-client), which are assumed to be carried out simultaneously as in the case with Mobile IPv6. The home SIP server and the server location update therefore equally take on a delay value of 50 ms. Based on this, the delay between the client and home SIP server was assumed to be half of the home SIP server location update delay, i.e. 25 ms, and the delay between the client and the server was assumed to be half of the server location update delay, i.e. 25 ms. Figure 16 illustrates the estimates.

Figure 16. SIP delay values
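The one-way delays in Table 5 can be re-derived from the intermediate link estimates above, i.e. roughly 25 ms per intermediate link. The sketch below simply performs that bookkeeping; the path decompositions are the assumptions made in this chapter, not measured values.

```python
# Re-deriving the Table 5 delay values from the 25 ms per-link assumption (ms).
LINK = 25

table5 = {
    "Mobile IPv4": {
        "client-server":   2 * LINK,                 # client - FA - server
        "server-client":   3 * LINK,                 # server - HA - FA - client
        "location update": 4 * LINK,                 # client - FA - HA - FA - client
    },
    "Mobile IPv6": {
        "client-server":   LINK,
        "server-client":   LINK,
        # average of client-initiated (50 ms) and server-initiated (100 ms)
        "location update": (2 * LINK + 4 * LINK) / 2,
    },
    "mSCTP": {
        "client-server":   2 * LINK,                 # message plus acknowledgement
        "server-client":   2 * LINK,
        "location update": 6 * LINK,                 # add, set-primary, delete exchanges
    },
    "SIP": {
        "client-server":   LINK,
        "server-client":   LINK,
        "location update": 2 * LINK,                 # the two updates run in parallel
    },
}

for protocol, values in table5.items():
    print(protocol, values)
```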

The last code input values are the location update loc_param parameters. Table 6 lists the derived parameters followed by explanatory remarks.


                   Delay [ms]   Proportional   Location update            Location update
                                parameter      client-server parameter    server-client parameter
Mobile IPv4
  Client-server        50           2X
  Server-client        75           3X
  Location update     100           4X                  2                          4/3
Mobile IPv6
  Client-server        25            X
  Server-client        25            X
  Location update      75           3X                  3                           3
mSCTP
  Client-server        50           2X
  Server-client        50           2X
  Location update     150           6X                  3                           3
SIP
  Client-server        25            X
  Server-client        25            X
  Location update      50           2X                  2                           2

Table 6. Location update parameters

As mentioned in the previous section describing the process model, the location update delay is proportional to the client-server and server-client delay values by some fixed parameter. By studying the delay values in Table 6 it can be seen that the values can be represented by a proportional parameter X, where X equals 25 ms. This means that e.g. the Mobile IPv4 client-server delay value of 50 ms can be represented by 2X (2 times 25 ms). The subsequent parameters are derived similarly. The Mobile IPv4 location update client-server parameter then equals the location update proportional parameter divided by the client-server proportional parameter (4X divided by 2X), i.e. 2. Similarly, the Mobile IPv4 location update server-client parameter equals the location update proportional parameter divided by the server-client proportional parameter (4X divided by 3X), i.e. 4/3. The remaining location update parameters are derived analogously. Whenever the client-server or server-client delay changes, the location update delay can then be updated proportionally by multiplying the client-server or server-client delay by the location update parameter.
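The parameter derivation can be checked with a few lines of arithmetic, shown below with X = 25 ms; the helper function is purely illustrative.

```python
# Checking the location update parameters in Table 6 (X = 25 ms).
X = 0.025   # seconds

def loc_params(client_server, server_client, location_update):
    return (location_update / client_server,    # location update client-server parameter
            location_update / server_client)    # location update server-client parameter

print(loc_params(2 * X, 3 * X, 4 * X))          # Mobile IPv4 -> (2.0, 1.33...)
print(loc_params(X, X, 3 * X))                  # Mobile IPv6 -> (3.0, 3.0)
print(loc_params(2 * X, 2 * X, 6 * X))          # mSCTP       -> (3.0, 3.0)
print(loc_params(X, X, 2 * X))                  # SIP         -> (2.0, 2.0)
```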

ATTRIBUTE INPUT VALUES


The second type of input values are the attribute input values that the user can decide on and adjust. The attribute input values include the delay distribution end points (delay_p1 and delay_p2), the location update interval (location_update_interval), the location update type (location_update_type), the mobility protocol (mobility_protocol) and the random location update interval distribution end points (reg_req_p1 and reg_req_p2). Each is described further in the following.

The delay distribution end points delay_p1 and delay_p2 are used as minimum and maximum outcome arguments, respectively, for a loaded uniform distribution function that generates uniformly distributed streams of stochastic values. The values returned are stored in a state variable delay_dist that is passed on to a related function that generates a floating-point numeric outcome delta_delay. The delta_delay outcome is then added to the delay value to reflect a change in the delay value as a result of a location update. It is assumed that the relocation of the client only imposes minor changes to the delay values. Since the delay values, i.e. the client-server, server-client and location update delay values, are relatively small values of some tens of milliseconds, the delta_delay outcome should for that reason only be a fraction of the delay values.


As a result, the end points delay_p1 and delay_p2 were estimated to -0.005 and 0.005, respectively, reflecting a ±5 ms variation of the delay values.

The location update interval location_update_interval attribute input value specifies the amount of time between the location updates for the fixed location updates as well as for the first random location update. The location updates must be expected to vary greatly depending on the user's individual mobility pattern and are consequently difficult to estimate. Nevertheless, the location updates were initially estimated to take place on average every five minutes, or every 300 seconds. The location update intervals are, however, expected to be changed during the network simulation.

The location update type location_update_type attribute input value indicates whether the location update interval is fixed or random. A random location update interval would most likely reflect a real life scenario with a user relocating at variable intervals. However, simulation-wise there is no difference in the outcome whether the interval is fixed or random, as it is the exact time of the location update that is interesting, not the interval in between. To ease the evaluation of the simulation results, the location update type was set to fixed. This way the location updates are more easily identified, as the exact time for each location update is known.

The mobility protocol mobility_protocol attribute input value specifies what kind of mobility protocol is being simulated and therefore also indirectly what input values to use. In this case, four scenarios were created, one for each mobility protocol.

Finally, there are the random location update interval distribution end points reg_req_p1 and reg_req_p2, which are defined very similarly to the delay_p1 and delay_p2 end points but with a different purpose. The end points reg_req_p1 and reg_req_p2 are used as minimum and maximum outcome arguments, respectively, for a loaded uniform distribution function that generates uniformly distributed streams of stochastic values. The values returned are stored in a state variable reg_req_dist that is passed on to a related function that generates a floating-point numeric outcome delta_next_reg_req. The delta_next_reg_req outcome is then added to the location_update_interval value to reflect a change in the location update interval value as a result of a random location update. It is assumed that the relocation of the client only imposes minor changes to the random location update interval value. Since the location update interval values are around five minutes, the delta_next_reg_req outcome should thus only be a fraction of the location update interval values. As a result, the end points reg_req_p1 and reg_req_p2 were estimated to -60 and 60, respectively, reflecting a ±1 min variation of the location update interval values.
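The sketch below shows how these end points translate into per-update perturbations: a uniformly drawn delta_delay of at most ±5 ms and a delta_next_reg_req of at most ±60 s around the 300 s interval. Python's standard uniform distribution is used here as a stand-in for OPNET's loaded uniform distribution function.

```python
# Sampling the delay and interval perturbations from their end points.
import random

DELAY_P1, DELAY_P2 = -0.005, 0.005          # seconds
REG_REQ_P1, REG_REQ_P2 = -60.0, 60.0        # seconds
LOCATION_UPDATE_INTERVAL = 300.0            # seconds

def next_delay(current_delay):
    delta_delay = random.uniform(DELAY_P1, DELAY_P2)
    return max(0.0, current_delay + delta_delay)

def next_interval(current_interval):
    delta_next_reg_req = random.uniform(REG_REQ_P1, REG_REQ_P2)
    return current_interval + delta_next_reg_req

print(next_delay(0.050))                    # e.g. the Mobile IPv4 client-server delay
print(next_interval(LOCATION_UPDATE_INTERVAL))
```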

7.5 Network model overview


Table 7 provides an overview of the network model configuration and input values derived from the previous four sections.


Configuration

Client: 100 Mbps Ethernet workstation, Profile: Engineer
  Application: Heavy HTTP 1.1 web browsing (heavy browsing)
    HTTP Specification: HTTP 1.1
    Page Interarrival Time (seconds): exponential (60)
    Page Properties: constant (1000), constant (1), HTTP Server, Not Used, Not Used
                     Medium Image, constant (5), HTTP Server, Not Used, Not Used
    Server Selection: Initial Repeat Probability:
                      Browse Pages Per Server: exponential (10)
    RSVP Parameters: None
    Type of Service: Best Effort (0)
  Application: Light email (medium load)
    Send Interarrival Time (seconds): exponential (720)
    Send Group Size (seconds): constant (3)
    Receive Interarrival Time (seconds): exponential (720)
    Receive Group Size: constant (3)
    E-Mail Size (bytes): constant (1000)
    Symbolic Server Name: Email Server
    Type of Service: Best Effort (0)
    RSVP Parameters: None
    Back-End Custom Application: Not Used
  Application: Light telnet (low load)
    Inter-Command Time (seconds): normal (120, 10)
    Terminal Traffic (bytes per command): normal (10, 4)
    Host Traffic (bytes per command): normal (5, 2.778)
    Symbolic Server Name: Remote Login Server
    Type of Service: Best Effort (0)
    RSVP Parameters: None
    Back-End Custom Application: Not Used
  Application: Heavy file transfer (high load)
    Command Mix (Get/Total): 50%
    Inter-Request Time (seconds): exponential (360)
    File Size (bytes): constant (50000)
    Symbolic Name Server: FTP Server
    Type of Service: Best Effort (0)
    RSVP Parameters: None
    Back-End Custom Application: Not Used
Server: 100 Mbps Ethernet server
Link: 100BaseT 100 Mbps Ethernet link

Input values                              Mobile IPv4   Mobile IPv6   mSCTP      SIP
Client-server delay                       0.050 s       0.025 s       0.050 s    0.025 s
Server-client delay                       0.075 s       0.025 s       0.050 s    0.025 s
Client-server loc_update                  0.100 s       0.075 s       0.150 s    0.050 s
Server-client loc_update                  0.100 s       0.075 s       0.150 s    0.050 s
Client-server loc_param                   2             3             3          2
Server-client loc_param                   4/3           3             3          2
Client-server delay_p1                    -0.005 s      -0.005 s      -0.005 s   -0.005 s
Client-server delay_p2                    0.005 s       0.005 s       0.005 s    0.005 s
Server-client delay_p1                    -0.005 s      -0.005 s      -0.005 s   -0.005 s
Server-client delay_p2                    0.005 s       0.005 s       0.005 s    0.005 s
Client-server location_update_interval    300 s         300 s         300 s      300 s
Server-client location_update_interval    300 s         300 s         300 s      300 s
Client-server location_update_type        Fixed         Fixed         Fixed      Fixed
Server-client location_update_type        Fixed         Fixed         Fixed      Fixed
Client-server mobility_protocol           Mobile IPv4   Mobile IPv6   mSCTP      SIP
Server-client mobility_protocol           Mobile IPv4   Mobile IPv6   mSCTP      SIP
Client-server reg_req_p1                  -60 s         -60 s         -60 s      -60 s
Client-server reg_req_p2                  60 s          60 s          60 s       60 s
Server-client reg_req_p1                  -60 s         -60 s         -60 s      -60 s
Server-client reg_req_p2                  60 s          60 s          60 s       60 s

Table 7. Network model overview

7.6 Validation
As mentioned, it is necessary to validate the network model in order to obtain reliable results. This section goes into more detail with that before proceeding with the actual network simulation. Validating the network model is a two-fold process: the first step is to set up some goals as to how the model should perform, and the second step is to verify that the model actually performs according to those goals. The developed network model has two main functions: packet transmission and location update. Under normal conditions the model should perform simple packet transmission in terms of receiving packets from both the client and the server and then sending them on to the appropriate destinations. Occasionally, location updates can, however, occur as a result of the client's relocation. When that happens, the model should respond by changing the delay value, not only to reflect a change of location but also a possible change of network technology. If the location update occurs during packet transmission, the model should furthermore drop the incoming packets to reflect the changeover between networks. The goals as to how the model should perform can therefore be summarised as:

- Receive and send packets
- Perform location updates
- Change the delay as a result of the location update
- Drop incoming packets during the location update
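To make the packet-handling goals concrete, a schematic sketch of the intended behaviour is shown here. It is not the actual process model code (the real model is listed in Appendix A and uses separate FSM states for the idle and location update phases); the flag in_location_update is a hypothetical name introduced only for this illustration.

    if (packet_arrival)                             /* stream interrupt: a packet has arrived      */
    {
        pk_ptr = op_pk_get (op_intrpt_strm ());     /* fetch the packet from the incoming stream   */
        if (in_location_update)                     /* hypothetical flag: changeover in progress   */
        {
            op_pk_destroy (pk_ptr);                 /* goal 4: drop packets during location update */
        }
        else
        {
            op_pk_send_delayed (pk_ptr, downlink, delay);   /* goals 1 and 3: forward with the     */
        }                                                   /* current (possibly updated) delay    */
    }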

To verify that the model performs according to these goals, three different tests were carried out. The first test consisted of two scenarios: one scenario implementing the developed network model and a duplicate scenario not implementing it. By simulating these two scenarios and comparing the throughputs in terms of packets received at the client, the outcome should show a difference in throughput. The scenario not implementing the network model is expected to show an accurate picture of the throughput, while the other scenario is expected to show a somewhat similar but on average lower throughput as a result of the packets dropped during location updates. Before running the simulation, the client application definition was changed to handle only one type of application, namely file transfer, in order to make it easier to compare the results. Also, the file transfer application definition was changed to increase the traffic load by reducing the inter-request time (seconds) from exponential (360) to exponential (60) and increasing the file size (bytes) from constant (50,000) to constant (3,000,000). The simulation run time was set to 30 minutes. The outcome of the simulation is illustrated in Figure 17 below.


Figure 17. Validation of throughput

The blue graph represents the average throughput for the scenario not implementing the network model, while the red graph represents the average throughput for the scenario implementing the network model. The result clearly shows that the average throughput for the scenario implementing the network model is slightly lower than the actual throughput. This indicates that the network model works in terms of receiving and sending packets as well as dropping packets. The test does, however, not indicate whether the packet drops relate to the location updates. The second test goes into more detail with the aspects of location updates and packet drops. The network model is expected to perform location updates occasionally. When this happens, the delay value should change to reflect the change of location. Furthermore, if the location update happens during packet transmission, the incoming packets are expected to be dropped. This test examines whether the network model actually performs location updates and whether packets are dropped if they come in during a location update. The next and final test then examines whether the delay value changes as a result of the location update. The second test employed a scenario similar to the previous scenario implementing the network model, specified for Mobile IPv4. The increased file transfer load was maintained in order to ensure plenty of traffic before, during and after the location updates. The location update interval attribute was changed from 300 seconds to 120 seconds to ensure that location updates take place even within short simulation runs. Location updates usually take place within milliseconds, so in order to register anything, the location update delay was stretched to 90 seconds. Finally, the simulation run time was set to 10 minutes. The outcome of the simulation is illustrated in Figure 18.


Figure 18. Validation of location update and packet drop

The yellow graph, well hidden behind the blue graph, represents the packets sent by the server, while the blue graph represents the packets received by the client. The green and red dots represent the start and end times of the location updates, respectively. Finally, the light blue dots represent the packet drops. The result shows that the sending and receiving of packets take place unaffected before the first location update and in between some of the following location updates. This once more verifies that the network model works in terms of receiving and sending packets under normal conditions. The green and red dots furthermore verify that location updates actually take place. What is interesting about the result is what happens during the location updates. From the first location update at 120 s and until the end of that location update, the client does not receive any packets. The same goes for the following location updates. Instead, packet drops start to occur. The packet drops are the result of the server still sending packets to the client. This can be seen more clearly in a close-up of the first two location updates, as illustrated in Figure 19.


Figure 19. Close-up of location updates and packet drops

For every packet drop, small escalating fluctuations can be observed on the yellow graph representing packets sent by the server. The result clearly shows that not only does the network model perform location updates, it also drops any packets coming in during the location updates. Finally, the third and last test examines whether the delay values change as a result of the location updates. When a location update takes place, the network model is expected to respond to this change of location by changing the delay value. First, a minor delta_delay is calculated and added to the delay value in order to reflect the physical change of location. Then an extra delay of 31 ms is added to or subtracted from the delay value in order to reflect a change of network technology, as mentioned in section 7.3. This last operation is of such significance that it can be used both to identify whether the network model performs the location update and changes the delay value, and to identify what type of network the client has relocated to. The third test employed a scenario implementing the network model specified for Mobile IPv4. The simulation run time was set to one hour. Figure 20 illustrates the result.
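As background for reading Figure 20, the delay adjustment performed at each location update is condensed below from the location_update() function in Appendix A:

    delta_delay = op_dist_outcome (delay_dist);   /* minor random change of location: +/- 5 ms         */
    delay += delta_delay;

    tech_value = op_dist_outcome (tech_dist);     /* stochastic outcome compared against change_prob   */
    if (tech_value < change_prob)
    {
        if (tech == 0) { delay -= 0.031; tech = 1; change_prob = 0.8; }   /* UMTS -> WLAN: 31 ms less   */
        else           { delay += 0.031; tech = 0; change_prob = 0.2; }   /* WLAN -> UMTS: 31 ms more   */
    }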


Figure 20. Validation of location update and delay

The blue vertical lines represent the delay values after the location updates, while the red dots represent the location updates. The result clearly shows that for every location update, the delay value also changes. It even shows a remarkable difference between certain delay values. This noteworthy difference is due to the change of network technology and the extra 31 ms. It is therefore possible to identify the network in use. Due to its relatively low bit rate, UMTS implies an extra added delay, while WLAN with its relatively higher bit rate implies just the opposite. The result therefore shows that the client changes from the initially set WLAN to a UMTS network after the first location update, and then on to another UMTS network. The delay value then drops considerably, which indicates a change to WLAN, on to another WLAN, and then back to different UMTS networks for the remaining location updates. The overall result also matches the probability ratio for changing network technology, which was set to 80/20 for UMTS/WLAN, respectively. From the three tests carried out above, it therefore seems reasonable to conclude that the network model performs according to the set goals and that the model is valid. As a result, the network model is now ready to be used for the network simulation, which is the focus of the next chapter.


8 Network simulation
Network simulation is the process of setting up a number of scenarios and running simulations to see how the different scenarios perform. In this research, the scenarios were focused on the four mobility protocols and their performance during location updates, i.e. in terms of mobility. To evaluate the performance of the mobility protocols, different approaches were at hand depending on what layer to evaluate from. However, since the user's perception of the communication is the determining factor in terms of network survivability, it seemed reasonable to measure the performance from the user's perspective in terms of application layer evaluation, specifically the application response times for http web browsing, file transfer and emailing. The applications can be influenced by different parameters during location updates. The two most important are the location update interval and the location update delay. To evaluate the performance of the four mobility protocols as a function of these parameters, two sets of simulations were performed: the average application response time per location update interval and the average application response time per location update delay. These simulations are further treated in the two following sections.

8.1 Average application response time per location update interval


One parameter that can have an effect on the applications is the interval between location updates. The larger the interval between the location updates, the fewer packets are dropped and thus the better the throughput. This means that the larger the interval between the location updates, the better the application response time, and conversely, the smaller the interval, the worse the application response time. What can be read from these kinds of measurements is therefore which mobility protocol handles location updates best, in terms of the lowest application response time as the user relocates. Before proceeding with the simulation setup, it is important to consider whether the results will be reliable. The network model was tested and validated in the previous chapter with reliable results. What has not been tested and validated are the externally set parameters. The simulation result can be influenced by two such parameters: the seed value and the run time. The seed value initialises the random number generator and can be configured as desired. Because a particular random seed selection can potentially result in anomalous or non-representative behaviour, it is important to run some tests to determine a seed value that exhibits standard or typical behaviour. In terms of average application response time, the typical behaviour would be some large fluctuations in the beginning of the simulation turning into a linear graph. The run time also has an influence on the typical behaviour in the sense that a certain simulation time is required in order to claim a representative average value. The basic principle applied is that if a typical behaviour exists, and if many independent trials are performed, it is likely that a significant majority of these trials will fall within a close range of the standard. Therefore, to ensure a reliable result, some test simulations on seed value and run time were performed to find a stable average application response time. The scenario setup was the standard configuration and standard Mobile IPv4 input values from the network model as listed in Table 7. A number of simulations with random seed values (97, 112, 128, 129, 132, 147, 159 and 178) were then performed with a run time of 10 hours; see Appendix B for all the results. The test simulations showed that a seed value of 129 returned a typical behaviour, see Figure 21 below.

Figure 21. Seed value and run time test simulation (seed value = 129, run time = 10 hours)
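Figure 21 shows the average response time settling into a near-constant value after the initial fluctuations. Purely as an illustration (this is not taken from the thesis or from OPNET), the same "stable average" judgement could be formalised by tracking the running mean of the collected response-time samples and checking when a new sample no longer moves it noticeably:

    #include <math.h>

    /* returns 1 when the last sample changed the running mean by less than tol (e.g. 0.01 = 1 %) */
    static int mean_is_stable (const double *samples, int n, double tol)
    {
        double mean = 0.0, prev_mean = 0.0;
        int i;

        for (i = 0; i < n; i++)
        {
            prev_mean = mean;
            mean += (samples[i] - mean) / (i + 1);   /* incremental running mean */
        }
        return (n > 1) && (fabs (mean - prev_mean) <= tol * fabs (prev_mean));
    }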

From the test simulation it can also be seen that a run time of 2 hours is sufficient to provide a stable result. The conclusion of this test of seed value and run time was therefore that with a seed value of 129 and a run time of 2 hours, a reliable simulation result can be expected. Having also validated the external parameters, the simulation of the average application response time per location update interval proceeded. The scenario setup was the standard network model configuration and input values from Table 7. The location update interval and mobility protocol attributes were promoted to a higher level in the edit attributes menu of the mobility protocol node. In the choose statistics (advanced) menu, two attribute probes named location update interval and mobility protocol were then created, as were some application response time node statistic probes, in order to collect data in a scalar file. Then the simulation was run with the seed value 129 and a run time of 2 hours for each mobility protocol with location update intervals from 0 to 600 seconds in steps of 30 seconds, a total of 84 simulation runs (21 interval values for each of the 4 protocols). The results from the simulation were collected in a scalar file. By loading the output scalar file, it became possible to create a graph of two scalars with a third parameter, in this case with the location update interval as the horizontal scalar, the average application response time as the vertical scalar and the mobility protocol as the third parameter. Figure 22 below displays the average http page response time for each mobility protocol as a function of the location update interval.


Figure 22. Average http page response time per location update interval

The blue graph represents the average http page response time (in seconds) per location update interval (in seconds) for Mobile IPv4, while the red, green and light blue graphs represent the corresponding graphs for Mobile IPv6, mSCTP and SIP, respectively. From the result it can be seen that Mobile IPv6 and SIP are very alike, and so are Mobile IPv4 and mSCTP. Of the four graphs, Mobile IPv6 and SIP, however, distinguish themselves from the other two with a somewhat lower average http page response time and thus relatively better performance. Nevertheless, the comparative difference between the two sets of matching graphs is less than 0.5 seconds (a maximum average http page response time of approximately 1.75 seconds minus a minimum of approximately 1.25 seconds), which the user most likely will not notice while web browsing. From a user perspective, there is therefore no noticeable difference as to what mobility protocol is in use. This suggests that the performance of the mobility protocols measured at application level does not, after all, have the conclusive say in this matter, and that the choice of mobility protocol must also be determined from an implementation perspective. To have a reliable background to conclude on, two similar graphs were created for the average ftp download response time and the average email download response time for each mobility protocol as a function of the location update interval, see Appendix C. The two graphs showed more or less the same result, with only minor deviations, in terms of Mobile IPv6 and SIP as two matching graphs with relatively better performance than the two other matching graphs. The graphs are not completely reliable with regard to the absolute values, since TCP can affect the results in terms of retransmissions caused by location updates, as can the application settings, e.g. the type of web browser. However, they provide valuable testimony of the relative values.


An alternative way to illustrate the application response time per mobility protocol is by a probability density function. Such a function describes the probability of the application response time taking on different values. To create such a function, four different scenarios were prepared, one for each mobility protocol. The scenario setups were based on the standard network model configuration and input values from Table 7. The four scenarios were then simulated with a run time of 2 hours and a seed value of 129. Figure 23 below displays the compared results of the four probability density functions for the http page response time.
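For reference, the principle behind such an empirical density is sketched below in plain C (an illustration only, not the OPNET post-processing itself): the response-time samples are binned and each bin count is normalised so that the bars integrate to one.

    #include <stdio.h>

    #define NBINS 50

    /* prints an empirical probability density of the response-time samples rt[0..n-1] */
    static void response_time_pdf (const double *rt, int n, double rt_min, double rt_max)
    {
        double bin_width = (rt_max - rt_min) / NBINS;
        int    hist[NBINS] = {0};
        int    i, b;

        for (i = 0; i < n; i++)
        {
            b = (int) ((rt[i] - rt_min) / bin_width);
            if (b < 0)       b = 0;
            if (b >= NBINS)  b = NBINS - 1;
            hist[b]++;                                   /* count samples per bin */
        }
        for (b = 0; b < NBINS; b++)                      /* density = count / (n * bin width) */
            printf ("%8.3f  %g\n", rt_min + (b + 0.5) * bin_width, hist[b] / (n * bin_width));
    }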

Figure 23. Probability density of http page response time

As before, the blue graph represents Mobile IPv4, the red Mobile IPv6, the green mSCTP and the light blue SIP. The last graph is not visible, as it lies exactly behind the graph for Mobile IPv6. The graphs show that Mobile IPv6, and thus also SIP, has the highest probability at http page response times of approximately 1.2 seconds, followed by mSCTP at approximately 1.3 seconds and Mobile IPv4 at approximately 1.5 seconds. As before, these values are only relative and not absolute values. This means that Mobile IPv6 and SIP in general have the best performance when it comes to the smallest http page response time, followed by mSCTP and Mobile IPv4 in that order. Another thing that can be read from the result is the distribution of the functions, i.e. the interval within which the probability of the different http page response times is spread. Mobile IPv6 and SIP are distributed in an interval from approximately 0.5 seconds to 1.5 seconds, a total of 1 second, while Mobile IPv4 is distributed from approximately 0.7 seconds to 1.9 seconds, a total of 1.2 seconds. In contrast, mSCTP is distributed from approximately 0.6 seconds to 4.9 seconds, a total of 4.3 seconds. This rather large distribution for mSCTP indicates that there is a fairly small probability of http page response times of up to 4.9 seconds. The large distribution for mSCTP is, however, most likely caused by a single atypical representation and not a general one. The typical result would probably be three (four with SIP) rather similar distributions in the range of Mobile IPv4, Mobile IPv6 and SIP. As before, two additional probability density functions for the ftp and email download response times were created for each mobility protocol to have a more reliable background to conclude on; see Appendix D for the graphs. The two graphs showed a similar result, with Mobile IPv6 and SIP having the best performance in terms of the smallest response times, followed by mSCTP and Mobile IPv4. The graphs also showed somewhat similar distributions, supporting the idea that the distribution for mSCTP in general is similar to the other distributions and that the large spread observed for http is atypical. From the first set of simulations it was therefore concluded that Mobile IPv6 and SIP equally have the best performance in terms of relative average application response times per location update interval, followed by mSCTP and Mobile IPv4 in that order. The simulations, however, also indicated that the performance of the mobility protocols measured at application level does not have the conclusive say in choosing the right mobility protocol, and that the performance of the mobility protocols must also be determined from an implementation perspective.

8.2 Average application response time per location update delay


Another parameter that can affect the applications, besides the interval between location updates, is the duration of the location update, i.e. the location update delay. The longer the location update, the more packets are dropped. This means that the larger the location update delay, the worse the application response time, and conversely, the smaller the location update delay, the better the application response time. To measure the average application response time per location update delay for the four mobility protocols, three different scenarios were set up. All three scenarios were configured to transmit a single ftp file. The first scenario was furthermore set up not to schedule any location updates, while the second and third scenarios were set up to schedule one and two location updates, respectively. The following describes the three scenarios in more detail.

NO LOCATION UPDATE
The first scenario was configured according to the standard network model configuration and input values in Table 7, however only for one application type, namely file transfer. The file transfer application definition was furthermore changed to a command mix (get/total) of 100%, an inter-request time (seconds) of exponential (7200) and a file size (bytes) of constant (50000000). Then the profile definition was changed to a start time offset (seconds) of uniform (0, 10), an inter-repetition time of exponential (7200) and a start time (seconds) of uniform (0, 10). Finally, the process model was altered slightly for the purpose of this test; see Appendix E (red markers). First, the scheduling of a location update event was removed. Then a location update delay attribute was created in order to be able to create a scalar file including different location update delay values. Finally, the related location update delay state variable was divided by a factor of 1000 to compensate for OPNET's inability to handle values in microseconds. To compensate for this manoeuvre, the location update delay attribute values must subsequently be multiplied by a factor of 1000. After configuring the scenario, four test scenarios, one for each mobility protocol, were first simulated with a location update delay attribute of 0, a run time of 2 hours and a seed value of 129. Figure 24 displays the result of the test simulation for the Mobile IPv4 scenario.
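Before turning to the figure, the scaling described above can be sketched as follows (an illustration only; location_update_delay is the attribute created for this test, while loc_update_attr is a hypothetical helper variable introduced here):

    op_ima_obj_attr_get (id, "location_update_delay", &loc_update_attr);   /* attribute value as entered */
    loc_update = loc_update_attr / 1000.0;                                 /* value used internally      */
    /* when post-processing the scalar file, the location update delay axis must therefore */
    /* be multiplied by 1000 again to recover the intended values                          */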

Figure 24. Mobile IPv4 test simulation for a single ftp file without location update

The test simulation results verify that the entire ftp file is transmitted within the simulation time without the occurrence of any location updates, and that the run time in fact can be reduced to one hour. Similar test simulations were performed for Mobile IPv6, mSCTP and SIP, and all showed the same result. Next, the actual simulation followed. Since the run time for the simulations was reduced to one hour, the setup for the scenario with no location updates was changed accordingly. The inter-request time (seconds) in the application definition was changed to exponential (3600), while the inter-repetition time (seconds) in the profile definition was changed to exponential (3600). Next, the location update delay and mobility protocol attributes were promoted to a higher level. Finally, two attribute probes named location update delay and mobility protocol were created in the choose statistics (advanced) menu, as were some application response time node statistic probes, in order to collect data in a scalar file. Then the simulation was run with the seed value 129 and a run time of one hour for each mobility protocol with location update delays from 0 to 500 milliseconds in steps of 500 milliseconds, a total of 8 simulation runs (2 delay values for each of the 4 protocols). Since there were no location updates scheduled, it was not necessary to use more delay values. The results from the simulation were collected in a scalar file from which the average ftp download response time for each mobility protocol as a function of the location update delay could be created. Figure 25 below displays the result.

Figure 25. Average ftp download response time per location update delay (no location update)

The blue graph represents the average ftp download response time (in seconds) per location update delay (in milliseconds) for Mobile IPv4, while the red graph represents the corresponding graph for Mobile IPv6. The green graph for mSCTP is hidden behind the graph for Mobile IPv4, while the light blue graph for SIP is hidden behind the graph for Mobile IPv6. From the result it can be seen that the average ftp download response times for Mobile IPv6 and SIP are similar, and so are the average ftp download response times for Mobile IPv4 and mSCTP. This is consistent with the average http page response time per location update interval result in Figure 22, which also displays two sets of somewhat similar graphs. The outcomes within each set of graphs are not necessarily entirely identical but differ by milliseconds; on a larger scale, as in Figure 25, these differences tend to fade out. The result also indicates that there is a significant difference in the average ftp download response times, with Mobile IPv6 and SIP having a relatively better performance than Mobile IPv4 and mSCTP. Again, the values are only relative and not absolute values due to TCP retransmissions and application settings.


ONE LOCATION UPDATE


Next, the scenario was changed to schedule exactly one location update during the file transfer. The process model was altered slightly further to schedule exactly one location update after 300 seconds; see Appendix F (red markers). To test that the scenario transmits the ftp file and schedules one location update, a test simulation for Mobile IPv4 was performed with a run time of one hour and a seed value of 129. Figure 26 below displays the result.
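A minimal sketch of how such a one-off update can be scheduled is given here; the actual modification is the one marked in red in Appendix F, so the exact code may differ.

    /* init state: schedule a single reg_req self-interrupt at an absolute time of 300 s */
    op_intrpt_schedule_self (300.0, reg_req);

    /* and in location_update() the rescheduling line                                    */
    /*     op_intrpt_schedule_self (op_sim_time() + location_update_interval, reg_req);  */
    /* is left out, so no further location updates are generated                         */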

Figure 26. Mobile IPv4 test simulation for a single ftp file with one location update

The test simulation result verifies that the entire ftp file is transmitted and that one location update occurs during the file transfer. Similar test simulations were performed for Mobile IPv6, mSCTP and SIP, and all showed the same result. Next, the actual simulation followed with the seed value 129 and a run time of one hour for each mobility protocol with location update delays from 0 to 500 milliseconds in steps of 10 milliseconds, a total of 204 simulation runs (51 delay values for each of the 4 protocols). The results from the simulation were collected in a scalar file from which the average ftp download response time for each mobility protocol as a function of the location update delay could be created. Figure 27 below displays the result.


Figure 27. Average ftp download response time per location update delay (one location update)

The blue graph represents the average ftp download response time (in seconds) per location update delay (in milliseconds) for Mobile IPv4, while the red graph represents the corresponding graph for Mobile IPv6. As in the simulation with no location update, the green graph for mSCTP is hidden behind the graph for Mobile IPv4, while the light blue graph for SIP is hidden behind the graph for Mobile IPv6. The result is almost identical to the previous result with no location updates in terms of a relative difference between the two sets of graphs, for Mobile IPv6 and SIP and for Mobile IPv4 and mSCTP. The result, however, differs in terms of a small deviation on all the graphs. To ensure that this deviation is not caused by faults in the network model, some safety checks were made on the graph for Mobile IPv4, with the assumption that if the check turns out reasonable, the other graphs must also be correct. To safety check the graph for Mobile IPv4, one point on each side of the deviating area was inspected more carefully. In this way, different parameters such as the delay from changing network technology and TCP retransmissions could be studied to show whether there are any differences explaining the deviation. Figure 28 below displays a close-up of the area in focus, from which it can be seen that location update delays of 300 and 325 milliseconds represent two good points for inspection.


Figure 28. Close-up of the average ftp download response time per location update delay for Mobile IPv4

Two additional simulations were subsequently performed for these two location update delays to collect the data in vector files, resulting in Figure 29 and Figure 30 on the next page. The blue dots represent the location updates, while the red graphs represent the throughputs from the mobility protocol to the client. The green graphs show the delays as a result of changing network technology, while the light blue and yellow graphs hidden behind the green graphs show the packet drops and TCP retransmissions, respectively. Figure 31 and Figure 32 on the following page furthermore display a close-up of the area around the location update for both simulations. The two similar results first of all show a general drop in throughput after the location update. This may be explained by the extra delay added as a result of changing network technology from WLAN to UMTS. Another contributing factor could be the delay added as a result of TCP congestion control. TCP uses a congestion window on the sender side for congestion avoidance. The congestion window indicates the maximum amount of data that can be sent out on a connection without being acknowledged. TCP detects congestion when it fails to receive an acknowledgement for a packet within the estimated timeout; the inherent assumption is that the lack of an acknowledgement is due to network congestion. In such a situation, it decreases the congestion window to one maximum segment size, which can then cause extra delay. From the close-ups it can be seen that packet drops occur right after the location update, as a result of it, and nowhere else during the file transfer. Also, the packet drops are equivalent between the first simulation with a location update delay of 300 ms and the second simulation with a location update delay of 325 ms, and thus do not indicate any extra packet drops.
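The timeout reaction described above can be summarised in a few lines of illustrative C (generic TCP behaviour as in the classic congestion control algorithms, not code from the OPNET model; MSS, cwnd and ssthresh are generic TCP variables):

    #define MSS 1460.0                       /* maximum segment size in bytes (typical value) */

    static double cwnd     = 10 * MSS;       /* congestion window                             */
    static double ssthresh = 65535.0;        /* slow start threshold                          */

    static void on_retransmission_timeout (void)
    {
        ssthresh = cwnd / 2.0;               /* remember half of the window in use             */
        if (ssthresh < 2.0 * MSS)
            ssthresh = 2.0 * MSS;
        cwnd = MSS;                          /* collapse the window to one segment; the sender */
                                             /* re-enters slow start, which is what causes the */
                                             /* temporary drop in throughput after the update  */
    }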

Figure 29. Throughput for Mobile IPv4 at a location update delay of 300 ms

Figure 30. Throughput for Mobile IPv4 at a location update delay of 325 ms


Figure 31. Close-up of the location update for Mobile IPv4 at a location update delay of 300 ms

Figure 32. Close-up of the location update for Mobile IPv4 at a location update delay of 325 ms


The close-ups also show that there are TCP retransmissions briefly after the packet drops, as a result of the packet drops, and nowhere else during the file transfer. Also, the TCP retransmissions are equivalent between the first simulation with a location update delay of 300 ms and the second simulation with a location update delay of 325 ms, and therefore do not indicate any extra retransmissions. Finally, the close-ups show that there is no relative difference in the network technology change delay between the first simulation with a location update delay of 300 ms and the second simulation with a location update delay of 325 ms. All this leads to the belief that the deviation is not caused by faults and that the network model is reliable after all. The conclusion of this safety check is therefore that no evident faults have been detected and that the result from the simulation, however peculiar it may seem, is assumed to be correct. The result of the actual simulation with one location update therefore indicates that there is a significant difference in the average ftp download response times, with Mobile IPv6 and SIP having a relatively better performance than Mobile IPv4 and mSCTP.

TWO LOCATION UPDATES


Finally, the scenario was changed to schedule exactly two location updates during the file transfer. The process model was altered even further to schedule exactly two location updates, after 300 and 600 seconds, respectively; see Appendix G (red markers). To test that the scenario transmits the ftp file and schedules two location updates, a test simulation for Mobile IPv4 was performed with a run time of one hour and a seed value of 129. Figure 33 below displays the result.
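Analogous to the single-update case, a sketch of the scheduling could look as follows (the actual modification is the one marked in red in Appendix G, so the exact code may differ):

    /* init state: schedule exactly two reg_req self-interrupts, at 300 s and 600 s */
    op_intrpt_schedule_self (300.0, reg_req);
    op_intrpt_schedule_self (600.0, reg_req);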

Figure 33. Mobile IPv4 test simulation for a single ftp file with two location updates


The test simulation verifies that the entire ftp file is transmitted and that two location updates occur during the file transfer. Similar test simulations were performed for Mobile IPv6, mSCTP and SIP, and all showed the same result. Next, the actual simulation followed with the seed value 129 and a run time of one hour for each mobility protocol with location update delays from 0 to 500 milliseconds in steps of 10 milliseconds, a total of 204 simulation runs. The results from the simulation were collected in a scalar file from which the average ftp download response time for each mobility protocol as a function of the location update delay could be created. Figure 34 below displays the result.

Figure 34. Average ftp download response time per location update delay (two location updates)

The blue graph represents the average ftp download response time (in seconds) per location update delay (in milliseconds) for Mobile IPv4, while the red, green and light blue graphs represent the corresponding graphs for Mobile IPv6, mSCTP and SIP, respectively. The light blue graph for SIP is hidden behind the graph for Mobile IPv6. The result shows that there is a relative difference between the four mobility protocols in terms of average ftp download response time. What is different compared to the two previous simulations is that mSCTP this time distinguishes itself from Mobile IPv4. This indicates that mSCTP handles multiple location updates relatively better than Mobile IPv4, but still not as well as Mobile IPv6 and SIP. From the result it can also be seen that the graphs have some deviations, as in the case with only one location update. To ensure that these deviations are not caused by faults in the network model, a safety check for Mobile IPv4, similar to the previous case with only one location update, was performed. Figure 35 displays a close-up of the area in focus on the graph for Mobile IPv4. From the close-up it can be seen that location update delays of 200 and 225 milliseconds as well as 300 and 325 milliseconds represent good points for inspection.

Figure 35. Close-up of the average ftp download response time per location update delay for Mobile IPv4

Four additional simulations were subsequently performed for these four location update delays, resulting in four virtually identical figures. Figure 36 on the next page displays the throughput for Mobile IPv4 at a location update delay of 200 ms. The blue dots represent the location updates, while the red graph represents the throughput from the mobility protocol to the client. The green graph shows the delay as a result of changing network technology, while the light blue and yellow graphs hidden behind the green graph show the packet drops and TCP retransmissions, respectively. Figure 37, also on the next page, furthermore displays a close-up of the area around the location updates. Similar to the previous safety check, the close-up shows no signs of extraordinary packet drops or TCP retransmissions, nor does it show signs of extra delay being added as a result of changing network technology from WLAN to UMTS. All this leads to the belief that the deviations are not caused by faults and that the network model is reliable. The result of the actual simulation with two location updates therefore indicates that there is a significant difference in the average ftp download response times, with Mobile IPv6 and SIP having a relatively better performance than mSCTP, followed by Mobile IPv4.


Figure 36. Throughput for Mobile IPv4 at two location update delays of 200 ms

Figure 37. Close-up of the location updates for Mobile IPv4 at two location update delays of 200 ms


From the second set of simulations it was therefore concluded that Mobile IPv6 and SIP equally have the best performance in terms of relative average application response times per location update delay followed by mSCTP and Mobile IPv4 in that order.

8.3 Network simulation overview


From the two performed sets of simulations, the overall conclusion was that Mobile IPv6 and SIP equally have the best performance in terms of relative average application response times, followed by mSCTP and Mobile IPv4 in that order. The simulations, however, indicated that measuring the performance of the mobility protocols from the application level does not necessarily provide the answer to which mobility protocol should be chosen; the comparative differences are simply too insignificant for the user to notice any difference and care about it. The choice of mobility protocol should therefore be a compromise between the network deployment parameters and the application level performance parameters. This is further discussed in the next chapter.


9 Discussion
Choosing the right mobility protocol is a compromise between the network deployment parameters and the application level performance parameters. In the following, all four mobility protocols are discussed in terms of their strengths and weaknesses, from both a network deployment perspective and an application level performance perspective, to identify the mobility protocol with the generally best performance and thus the mobility protocol best suited for realising the idea of a multi-service UMTS and WLAN interworking solution.

Mobile IPv4 proved to be the most extensive protocol in terms of network deployment. It requires a home agent in the home network, a foreign agent in every visited foreign network, as well as support for Mobile IPv4 in the client, home agent and foreign agents. Also in terms of application level performance, Mobile IPv4 proved to be the protocol with the poorest performance relative to the three other protocols. On a scale of general performance, Mobile IPv4 therefore does not rank high.

In contrast, Mobile IPv6 proved to be less extensive in terms of network deployment. Mobile IPv6 only requires a home agent in the home network and support for Mobile IPv6 in the client and home agent. The network deployment is therefore characterised as more moderate. In terms of application level performance, Mobile IPv6 proved to have the best performance, at level with SIP. All in all two good signs. The major obstacle for Mobile IPv6 is its dependence on IPv6. IPv6 is still not implemented other than on a small scale of test networks, so in order for Mobile IPv6 to work, IPv6 must start being implemented on a large scale. Based on the situation today, with practically no sign of IPv6, Mobile IPv6 does not rank very high on the general scale of performance. Should this change with a general changeover from IPv4 to IPv6, Mobile IPv6 would nevertheless be at the high end of the general performance scale.

mSCTP proved to be the simplest protocol in terms of network deployment. No additional entities or modification of intermediate entities are required; the only thing required is support for mSCTP in the client and the server. In terms of application level performance, mSCTP proved to have a medium performance, somewhere in between Mobile IPv4 and Mobile IPv6. In general, mSCTP therefore shows signs of a medium ranking on the general performance scale. However, like Mobile IPv6, mSCTP also struggles with a major obstacle, namely the dependence on SCTP transport services. Almost all current software implementations rely on TCP or UDP. Implementing mSCTP would therefore require reconfiguration of at least the client and server to support SCTP. The problem is, however, not as extensive as with Mobile IPv6 and IPv6, since it is limited to the client and server. Nevertheless, based on the situation today with limited support for SCTP, mSCTP would probably rank somewhere in the middle on the general scale of performance.

Finally, SIP proved to be moderate in network deployment, requiring a SIP server in the home network as well as support for SIP in the SIP server, the client and the server. Not as simple as mSCTP, but in range with Mobile IPv6. Also, SIP turned out to have the best application level performance, at level with Mobile IPv6. The major obstacle of SIP is, however, that it supports UDP transport services only; to support TCP services, the implementation of Mobile IPv4 is also required. On a scale of general performance, SIP therefore ranks somewhere close to, and between, mSCTP and Mobile IPv4.

As can be seen, there is no straightforward solution as to which mobility protocol to implement. Is the choice to be made from the situation here and now, or is it to be made from an ideal situation where IPv6 is implemented on a large scale and SCTP is commonly supported? In a here and now situation, Mobile IPv6 would probably be considered the least ideal despite its relatively superior performance, simply because IPv6 is not implemented on a large scale. Mobile IPv4 would probably rank somewhat higher than Mobile IPv6 but lower than mSCTP and SIP because of its extensive network deployment and relatively bad application level performance. Whether mSCTP or SIP should then be considered the best solution is difficult to say. They both rank somewhere in the middle on the general scale of performance. mSCTP struggles with the obstacle of being dependent on SCTP transport services, while SIP struggles with the lack of support for TCP transport services. Also, mSCTP provides the simplest network deployment, while SIP provides the best application level performance. mSCTP would probably be slightly better than SIP on the general scale, since it is fairly easy to reconfigure the client and server to support SCTP, while it is slightly more extensive to implement Mobile IPv4 alongside SIP in order to also support TCP. In an ideal situation, Mobile IPv6 would probably be considered the most ideal mobility protocol because of its superior performance, while Mobile IPv4 would be considered the least ideal. In between would be mSCTP and SIP, of which mSCTP would probably be the better of the two because of its simple network deployment and fair medium application level performance, and because of SIP's support for UDP transport services only.


10 Conclusion
Different approaches to how interworking can best be achieved between the two network technologies UMTS and WLAN have been presented in this research. More specifically, the network layer mobility protocols Mobile IPv4 and Mobile IPv6, the transport layer mobility protocol mSCTP and the application layer mobility protocol SIP have been laid out. By examining and comparing the four protocols, it became evident that there is no straightforward way to decide on one. A more practical approach in terms of network modelling and simulation was therefore introduced to evaluate and compare the performance of the four protocols at application level. The simulation results showed that Mobile IPv6 and SIP equally have the best performance, followed by mSCTP and Mobile IPv4 in that order. The simulations, however, also indicated that measuring the performance of the mobility protocols at application level does not necessarily give the answer to which mobility protocol to choose, as the comparative differences are too insignificant for the user to notice. The choice of mobility protocol therefore became a compromise between the network deployment parameters and the application level performance parameters. Comparing the two sets of parameters showed that, depending on the situation, different solutions were at hand. In a here and now situation, mSCTP was considered the best alternative, followed by SIP, Mobile IPv4 and Mobile IPv6 in that order. In an ideal situation where IPv6 has been implemented on a large scale and SCTP is commonly supported, Mobile IPv6 was considered the best alternative, followed by mSCTP, SIP and Mobile IPv4, respectively.

Future work
The work presented in this research has uncovered some interesting possibilities for improvements and future work. The network simulations showed some deviations that were explained as actual behaviour, since none of the tests showed signs of faults. A different approach to finding the explanation for these deviations could be to go into more detail with the TCP settings. TCP congestion control automatically reduces the congestion window in case of congestion, which then causes a drop in throughput. By running tests for different TCP settings, an optimal setting producing the largest possible throughput could perhaps reduce the deviations. Furthermore, the application level performance simulations indicated that conclusions should not be made on the application level performance results alone, as the comparative differences are too insignificant for the user to notice. Nevertheless, it could be interesting to go into more detail with some additional tests to see whether this is a general trend. This research focuses on one client and one server, but what happens if the setup is changed to, e.g., 50 clients and one server instead? Also, more location updates within the same file transfer and other types of applications could serve as potential tests.




12 Appendices

Appendix A: OPNET source code
Appendix B: Seed value and run time test results
Appendix C: Additional average application response times per location update interval
Appendix D: Additional functions for probability density of application response time
Appendix E: Modified process model for simulation with no location update
Appendix F: Modified process model for simulation with one location update
Appendix G: Modified process model for simulation with two location updates


Appendix A: OPNET source code


================================================================================
Process Model Attributes
================================================================================

Attribute: location_update_interval      Data Type: double
Attribute: delay_p1                      Data Type: double
Attribute: delay_p2                      Data Type: double
Attribute: reg_req_p1                    Data Type: double
Attribute: reg_req_p2                    Data Type: double
Attribute: mobility_protocol             Data Type: integer
Attribute: location_update_type          Data Type: integer


================================================================================ Process Model Interface Attributes ================================================================================ -------------------------------------------------------------------------------Interface Attribute: begsim intrpt -------------------------------------------------------------------------------Assign Status: set Initial Value enabled Data Type: toggle Comments: YES This attribute specifies whether a 'begin simulation interrupt' is generated for a processor module's root process at the start of the simulation. -------------------------------------------------------------------------------Interface Attribute: doc file -------------------------------------------------------------------------------Assign Status: set Initial Value nd_module Data Type: string Comments: YES This attribute defines the name of the product help file which will be displayed when the user invokes help for this object. -------------------------------------------------------------------------------Interface Attribute: endsim intrpt -------------------------------------------------------------------------------Assign Status: set Initial Value disabled Data Type: toggle Comments: YES This attribute specifies whether an 'end simulation interrupt' is generated for a processor module's root process at the end of the simulation. -------------------------------------------------------------------------------Interface Attribute: failure intrpts -------------------------------------------------------------------------------Assign Status: set Initial Value disabled Data Type: enumerated Comments: YES This attribute specifies whether failure interrupts are generated for a processor module's root process upon failure of nodes or links in the network model. -------------------------------------------------------------------------------Interface Attribute: intrpt interval -------------------------------------------------------------------------------Assign Status: set Initial Value disabled Data Type: toggle double Comments: YES This attribute specifies how often regular interrupts are scheduled for the root process of a processor module. -------------------------------------------------------------------------------Interface Attribute: priority -------------------------------------------------------------------------------Assign Status: set Initial Value 0 Data Type: integer Comments: YES A-2

This attribute is used to determine the execution order of events that are scheduled to occur at the same simulation time. -------------------------------------------------------------------------------Interface Attribute: recovery intrpts -------------------------------------------------------------------------------Assign Status: set Initial Value disabled Data Type: enumerated Comments: YES This attribute specifies whether recovery interrupts are scheduled for the processor module's root process upon recovery of nodes or links in the network model. -------------------------------------------------------------------------------Interface Attribute: subqueue -------------------------------------------------------------------------------Assign Status: set Initial Value (...) Data Type: compound Comments: YES This operation attribute permits the addition and deletion of subqueues within the queue module. -------------------------------------------------------------------------------Interface Attribute: super priority -------------------------------------------------------------------------------Assign Status: set Initial Value disabled Data Type: toggle Comments: YES This attribute is used to determine the execution order of events that are scheduled to occur at the same simulation time. --------------------------------------------------------------------------------


================================================================================
Header Block
================================================================================
#define reg_req 42
#define send_pk 43
#define go_idle 44
#define packet_arrival    (op_intrpt_type() == OPC_INTRPT_STRM)
#define packet_from_queue ((op_intrpt_type() == OPC_INTRPT_SELF) && (op_intrpt_code() == send_pk))
#define incoming_reg_req  ((op_intrpt_type() == OPC_INTRPT_SELF) && (op_intrpt_code() == reg_req))
#define reg_req_complete  ((op_intrpt_type() == OPC_INTRPT_SELF) && (op_intrpt_code() == go_idle))
--------------------------------------------------------------------------------
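The three integer codes above tag self interrupts, and the remaining macros classify an incoming interrupt; they are used as the transition conditions listed further on. Purely as an illustrative sketch (reusing only kernel procedures that already appear in this model), a self interrupt scheduled with the send_pk code is later matched by the packet_from_queue condition:

/* illustration only: schedule a self interrupt carrying the send_pk code */
op_intrpt_schedule_self (op_sim_time () + delay, send_pk);

/* when it fires, packet_from_queue expands to the test below, which is the
   condition on the idle -> send transition                                */
if ((op_intrpt_type () == OPC_INTRPT_SELF) && (op_intrpt_code () == send_pk))
	{
	/* the queued packet can now be removed from subqueue 0 and forwarded */
	}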


================================================================================
State Variable Block
================================================================================
double         \delay;
Packet *       \pk_ptr;
Stathandle     \pkt_delay;
Stathandle     \pkt_count;
int            \count;
Distribution * \delay_dist;
double         \delta_delay;
int            \id;
double         \delta_reg_req;
double         \location_update_interval;
double         \next_loc_update_time;
Distribution * \reg_req_dist;
double         \delta_next_reg_req;
double         \delay_end_point_p1;
double         \delay_end_point_p2;
double         \reg_req_end_point_p1;
double         \reg_req_end_point_p2;
double         \update;
double         \loc_update;
double         \loc_param;
int            \mobility_protocol_values;
Stathandle     \loc_update_time;
double         \loc_time;
int            \loc_update_time_status;
int            \uplink;
int            \downlink;
int            \tech;
Stathandle     \tech_change;
Distribution * \tech_dist;
double         \tech_value;
double         \change_prob;
int            \drop;
Stathandle     \pkt_drop;
double         \loc_time_complete;
Stathandle     \loc_update_time_complete;
Stathandle     \loc_update_value;
--------------------------------------------------------------------------------


================================================================================
Function Block
================================================================================
void location_update() {
	switch (loc_update_time_status){
		case 1:
			op_ima_obj_attr_get (id , "location_update_interval", &delta_reg_req);
			location_update_interval = delta_reg_req;
			break;
		case 2:
			delta_next_reg_req = op_dist_outcome (reg_req_dist);
			location_update_interval += delta_next_reg_req;
			if (location_update_interval < 0) {location_update_interval = location_update_interval * -1;}
			break;
	}
	//ensure next location update is not scheduled before ongoing location update is done
	if (location_update_interval <= loc_update) {location_update_interval += loc_update;}
	//set random delay
	delta_delay = op_dist_outcome (delay_dist);
	delay += delta_delay;
	if (delay < 0) {delay = delay * -1;}
	//change delay according to change of technology
	tech_value = op_dist_outcome(tech_dist);
	if (tech_value < change_prob) {
		//change from UMTS->WLAN
		if (tech == 0) {
			delay -= 0.031;
			tech = 1;
			change_prob = 0.8;
		}
		//change from WLAN->UMTS
		else {
			delay += 0.031;
			tech = 0;
			change_prob = 0.2;
		}
	}
	//set random location update delay according to delay value
	loc_update = loc_param * delay;
	//return to idle when location update is done, schedule next location update
	op_intrpt_schedule_self (op_sim_time() + loc_update, go_idle);
	op_intrpt_schedule_self (op_sim_time() + location_update_interval, reg_req);
	//write values to statistics
	op_stat_write(loc_update_time, loc_time);
	op_stat_write(loc_update_value, loc_update);
	//delay and location update graph
	op_stat_write(tech_change, delay);
}

void location_update_complete() {
	//write values to statistics
	op_stat_write(loc_update_time_complete, loc_time_complete);
}
--------------------------------------------------------------------------------
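A worked reading of the code above (an observation on the listing, not extra model functionality): the location update delay is computed as loc_update = loc_param * delay. With the case 1 uplink settings assigned in the "init" state below (delay = 0.05 s, loc_param = 2), this gives 2 * 0.05 s = 0.1 s, matching the loc_update value initialised there; each change between UMTS and WLAN then shifts delay by 0.031 s (downwards towards WLAN, upwards towards UMTS) before the product is recomputed at the next location update.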


================================================================================
Enter Execs for the forced state "init"
================================================================================
//set client-server/server-client delays, location update delays and
//location update delay parameters
id = op_id_self();
op_ima_obj_attr_get (id , "mobility_protocol", &mobility_protocol_values);
uplink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "client-server"));
downlink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "server-client"));
switch (mobility_protocol_values){
	case 1:
		if (uplink) { delay = 0.05;  loc_update = 0.1;   loc_param = 2; }
		else        { delay = 0.075; loc_update = 0.1;   loc_param = 1.333; }
		break;
	case 2:
		if (uplink) { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		else        { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		break;
	case 3:
		if (uplink) { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		else        { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		break;
	case 4:
		if (uplink) { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		else        { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		break;
}
//set fixed/random location update time
id = op_id_self();
op_ima_obj_attr_get (id , "location_update_type", &loc_update_time_status);
//set and schedule first location update
op_ima_obj_attr_get (id , "location_update_interval", &delta_reg_req);
next_loc_update_time = (op_sim_time () + delta_reg_req);
op_intrpt_schedule_self (next_loc_update_time, reg_req);
//set end points' value for location update distribution
op_ima_obj_attr_get (id , "reg_req_p1", &reg_req_end_point_p1);
op_ima_obj_attr_get (id , "reg_req_p2", &reg_req_end_point_p2);
//location update distribution value
reg_req_dist = op_dist_load ("uniform", reg_req_end_point_p1, reg_req_end_point_p2);
//set end points' value for delay distribution
op_ima_obj_attr_get (id , "delay_p1", &delay_end_point_p1);
op_ima_obj_attr_get (id , "delay_p2", &delay_end_point_p2);
//delay distribution value
delay_dist = op_dist_load ("uniform", delay_end_point_p1, delay_end_point_p2);
//time for location update statistics
loc_time = 0;
loc_update_time = op_stat_reg ("Location update time", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//location update value statistics
loc_update_value = op_stat_reg ("Location update value", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//time for location update complete statistics
loc_time_complete = 0;
loc_update_time_complete = op_stat_reg ("Location update complete", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//change of delay according to change between UMTS and WLAN technologies
tech_dist = op_dist_load ("uniform", 0, 1);
tech = 1;
change_prob = 0.8;
//delay and location update statistics
tech_change = op_stat_reg ("Change of technology", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet delay statistics
pkt_delay = op_stat_reg ("Packet delay", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet count statistics
count = 0;
pkt_count = op_stat_reg ("Packet count", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet drop statistics
drop = 0;
pkt_drop = op_stat_reg ("Packet drop", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the forced state "init"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition init -> idle
================================================================================
name: tr_0      condition:                executive:
color: RGB000   drawing style: spline     doc file: pr_transition
--------------------------------------------------------------------------------


================================================================================
Enter Execs for the unforced state "idle"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the unforced state "idle"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition idle -> send
================================================================================
name: tr_6      condition: packet_from_queue     executive:
color: RGB000   drawing style: spline            doc file: pr_transition
--------------------------------------------------------------------------------
================================================================================
transition idle -> loc
================================================================================
name: tr_8      condition: incoming_reg_req      executive: location_update()
color: RGB000   drawing style: spline            doc file: pr_transition
--------------------------------------------------------------------------------
================================================================================
transition idle -> arrival
================================================================================
name: tr_11     condition: packet_arrival        executive:
color: RGB000   drawing style: spline            doc file: pr_transition
--------------------------------------------------------------------------------


================================================================================
Enter Execs for the forced state "arrival"
================================================================================
//acquire packet, queue packet
pk_ptr = op_pk_get(op_intrpt_strm());
op_subq_pk_insert(0, pk_ptr, OPC_QPOS_TAIL);
//schedule client-server/server-client delay
op_intrpt_schedule_self (op_sim_time () + delay, send_pk);
//write values to statistics
++count;
op_stat_write(pkt_delay, delay);
op_stat_write(pkt_count, count);
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the forced state "arrival"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition arrival -> idle
================================================================================
name: tr_10     condition:                executive:
color: RGB000   drawing style: spline     doc file: pr_transition
--------------------------------------------------------------------------------


================================================================================
Enter Execs for the forced state "send"
================================================================================
//take packet out of queue
pk_ptr = op_subq_pk_remove(0, OPC_QPOS_HEAD);
//send packet
op_pk_send(pk_ptr, 0);
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the forced state "send"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition send -> idle
================================================================================
name: tr_7      condition:                executive:
color: RGB000   drawing style: spline     doc file: pr_transition
--------------------------------------------------------------------------------


================================================================================
Enter Execs for the unforced state "loc"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the unforced state "loc"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition loc -> idle
================================================================================
name: tr_9      condition: reg_req_complete      executive: location_update_complete()
color: RGB000   drawing style: spline            doc file: pr_transition
--------------------------------------------------------------------------------
================================================================================
transition loc -> loc_arrival
================================================================================
name: tr_16     condition: packet_arrival        executive:
color: RGB000   drawing style: spline            doc file: pr_transition
--------------------------------------------------------------------------------
================================================================================
transition loc -> loc_send
================================================================================
name: tr_21     condition: packet_from_queue     executive:
color: RGB000   drawing style: spline            doc file: pr_transition
--------------------------------------------------------------------------------


================================================================================
Enter Execs for the forced state "loc_arrival"
================================================================================
//destroy packet if incoming during location update
pk_ptr = op_pk_get(op_intrpt_strm());
op_pk_destroy(pk_ptr);
//write values to statistics
++drop;
op_stat_write(pkt_drop, drop);
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the forced state "loc_arrival"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition loc_arrival -> loc
================================================================================
name: tr_19     condition:                executive:
color: RGB000   drawing style: spline     doc file: pr_transition
--------------------------------------------------------------------------------


================================================================================
Enter Execs for the forced state "loc_send"
================================================================================
//take packet out of queue
pk_ptr = op_subq_pk_remove(0, OPC_QPOS_HEAD);
//send packet
op_pk_send(pk_ptr, 0);
--------------------------------------------------------------------------------
================================================================================
Exit Execs for the forced state "loc_send"
================================================================================
NONE
--------------------------------------------------------------------------------
================================================================================
transition loc_send -> loc
================================================================================
name: tr_22     condition:                executive:
color: RGB000   drawing style: spline     doc file: pr_transition
--------------------------------------------------------------------------------


Appendix B: Seed value and run time test results


Seed value 97, run time 10 hours

Seed value 112, run time 10 hours


Seed value 128, run time 10 hours

Seed value 129, run time 10 hours


Seed value 132, run time 10 hours

Seed value 147, run time 10 hours


Seed value 159, run time 10 hours

Seed value 178, run time 10 hours


Appendix C: Additional average application response times per location update interval
Average ftp download response time per location update interval


Average email download response time per location update interval


Appendix D: Additional functions for probability density of application response time


Probability density of ftp download response time

Mobile IPv4:     Highest probability for ftp download response times of approximately 3.9 seconds.
                 Distribution approximately from 3.8 seconds to 4.1 seconds, a span of 0.3 second.
Mobile IPv6/SIP: Highest probability for ftp download response times of approximately 2.7 seconds.
                 Distribution approximately from 2.7 seconds to 3.4 seconds, a span of 0.7 second.
mSCTP:           Highest probability for ftp download response times of approximately 3.4 seconds.
                 Distribution approximately from 3.4 seconds to 3.8 seconds, a span of 0.4 second.


Probability density of email download response time

Mobile IPv4:     Highest probability for email download response times of approximately 1.3 seconds.
                 Distribution approximately from 1.3 seconds to 1.7 seconds, a span of 0.4 second.
Mobile IPv6/SIP: Highest probability for email download response times of approximately 0.9 second.
                 Distribution approximately from 0.9 second to 1.3 seconds, a span of 0.4 second.
mSCTP:           Highest probability for email download response times of approximately 1.0 second.
                 Distribution approximately from 1.0 second to 1.4 seconds, a span of 0.4 second.
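The figures summarised above were read off the simulation result graphs. Purely as a hypothetical post-processing sketch (not part of the thesis tool chain; the function name, bin count and sample values below are illustrative assumptions), a probability density of sampled response times can be estimated by binning the samples and normalising so that the bin areas sum to one:

/* hypothetical post-processing sketch: estimate a probability density
   from n sampled response times                                        */
#include <stdio.h>

#define N_BINS 20

static void pdf_estimate (const double *samples, int n, double min, double max)
{
	int    bins[N_BINS] = {0};
	double width = (max - min) / N_BINS;
	int    i, b;

	for (i = 0; i < n; i++)
	{
		b = (int) ((samples[i] - min) / width);
		if (b < 0)        b = 0;
		if (b >= N_BINS)  b = N_BINS - 1;
		bins[b]++;
	}
	/* density value per bin: count / (n * bin width), so the areas sum to 1 */
	for (i = 0; i < N_BINS; i++)
		printf ("%5.2f s - %5.2f s : %.3f\n",
		        min + i * width, min + (i + 1) * width,
		        (double) bins[i] / (n * width));
}

int main (void)
{
	/* a few made-up ftp download response times in seconds */
	double samples[] = {3.8, 3.9, 3.9, 4.0, 4.0, 4.1};
	pdf_estimate (samples, 6, 3.8, 4.1);
	return 0;
}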


Appendix E: Modified process model for simulation with no location update


================================================================================
Enter Execs for the forced state "init"
================================================================================
//set client-server/server-client delays, location update delays and
//location update delay parameters
id = op_id_self();
op_ima_obj_attr_get (id , "mobility_protocol", &mobility_protocol_values);
uplink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "client-server"));
downlink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "server-client"));
switch (mobility_protocol_values){
	case 1:
		if (uplink) { delay = 0.05;  loc_update = 0.1;   loc_param = 2; }
		else        { delay = 0.075; loc_update = 0.1;   loc_param = 1.333; }
		break;
	case 2:
		if (uplink) { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		else        { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		break;
	case 3:
		if (uplink) { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		else        { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		break;
	case 4:
		if (uplink) { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		else        { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		break;
}
//set fixed/random location update time
id = op_id_self();
op_ima_obj_attr_get (id , "location_update_type", &loc_update_time_status);
//set and schedule first location update
//op_ima_obj_attr_get (id , "location_update_interval", &delta_reg_req);
//next_loc_update_time = (op_sim_time () + delta_reg_req);
//op_intrpt_schedule_self (next_loc_update_time, reg_req);
//set end points' value for location update distribution
op_ima_obj_attr_get (id , "reg_req_p1", &reg_req_end_point_p1);
op_ima_obj_attr_get (id , "reg_req_p2", &reg_req_end_point_p2);
//location update distribution value
reg_req_dist = op_dist_load ("uniform", reg_req_end_point_p1, reg_req_end_point_p2);
//set end points' value for delay distribution
op_ima_obj_attr_get (id , "delay_p1", &delay_end_point_p1);
op_ima_obj_attr_get (id , "delay_p2", &delay_end_point_p2);
//delay distribution value
delay_dist = op_dist_load ("uniform", delay_end_point_p1, delay_end_point_p2);
//set location update delay
op_ima_obj_attr_get (id , "location_update_delay", &loc_update_delay);
loc_update_delay /= 1000;
//time for location update statistics
loc_time = 0;
loc_update_time = op_stat_reg ("Location update time", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//location update value statistics
loc_update_value = op_stat_reg ("Location update value", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//time for location update complete statistics
loc_time_complete = 0;
loc_update_time_complete = op_stat_reg ("Location update complete", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//change of delay according to change between UMTS and WLAN technologies
tech_dist = op_dist_load ("uniform", 0, 1);
tech = 1;
change_prob = 0.8;
//delay and location update statistics
tech_change = op_stat_reg ("Change of technology", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet delay statistics
pkt_delay = op_stat_reg ("Packet delay", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet count statistics
count = 0;
pkt_count = op_stat_reg ("Packet count", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet drop statistics
drop = 0;
pkt_drop = op_stat_reg ("Packet drop", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
--------------------------------------------------------------------------------


Appendix F: Modified process model for simulation with one location update
================================================================================
Enter Execs for the forced state "init"
================================================================================
//set client-server/server-client delays, location update delays and
//location update delay parameters
id = op_id_self();
op_ima_obj_attr_get (id , "mobility_protocol", &mobility_protocol_values);
uplink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "client-server"));
downlink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "server-client"));
switch (mobility_protocol_values){
	case 1:
		if (uplink) { delay = 0.05;  loc_update = 0.1;   loc_param = 2; }
		else        { delay = 0.075; loc_update = 0.1;   loc_param = 1.333; }
		break;
	case 2:
		if (uplink) { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		else        { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		break;
	case 3:
		if (uplink) { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		else        { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		break;
	case 4:
		if (uplink) { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		else        { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		break;
}
//set fixed/random location update time
id = op_id_self();
op_ima_obj_attr_get (id , "location_update_type", &loc_update_time_status);
//set and schedule first location update
op_ima_obj_attr_get (id , "location_update_interval", &delta_reg_req);
//next_loc_update_time = (op_sim_time () + delta_reg_req);
next_loc_update_time = (op_sim_time () + 300);
op_intrpt_schedule_self (next_loc_update_time, reg_req);
//set end points' value for location update distribution
op_ima_obj_attr_get (id , "reg_req_p1", &reg_req_end_point_p1);
op_ima_obj_attr_get (id , "reg_req_p2", &reg_req_end_point_p2);
//location update distribution value
reg_req_dist = op_dist_load ("uniform", reg_req_end_point_p1, reg_req_end_point_p2);
//set end points' value for delay distribution
op_ima_obj_attr_get (id , "delay_p1", &delay_end_point_p1);
op_ima_obj_attr_get (id , "delay_p2", &delay_end_point_p2);
//delay distribution value
delay_dist = op_dist_load ("uniform", delay_end_point_p1, delay_end_point_p2);
//set location update delay
op_ima_obj_attr_get (id , "location_update_delay", &loc_update_delay);
loc_update_delay /= 1000;
//time for location update statistics
loc_time = 0;
loc_update_time = op_stat_reg ("Location update time", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//location update value statistics
loc_update_value = op_stat_reg ("Location update value", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//time for location update complete statistics
loc_time_complete = 0;
loc_update_time_complete = op_stat_reg ("Location update complete", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//change of delay according to change between UMTS and WLAN technologies
tech_dist = op_dist_load ("uniform", 0, 1);
tech = 1;
change_prob = 0.8;
//delay and location update statistics
tech_change = op_stat_reg ("Change of technology", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet delay statistics
pkt_delay = op_stat_reg ("Packet delay", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet count statistics
count = 0;
pkt_count = op_stat_reg ("Packet count", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet drop statistics
drop = 0;
pkt_drop = op_stat_reg ("Packet drop", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
--------------------------------------------------------------------------------
================================================================================
Function Block
================================================================================
void location_update() {
	switch (loc_update_time_status){
		case 1:
			op_ima_obj_attr_get (id , "location_update_interval", &delta_reg_req);
			location_update_interval = delta_reg_req;
			break;
		case 2:
			delta_next_reg_req = op_dist_outcome (reg_req_dist);
			location_update_interval += delta_next_reg_req;
			if (location_update_interval < 0) {location_update_interval = location_update_interval * -1;}
			break;
	}
	//ensure next location update is not scheduled before ongoing location update is done
	if (location_update_interval <= loc_update) {location_update_interval += loc_update;}
	//set random delay
	delta_delay = op_dist_outcome (delay_dist);
	delay += delta_delay;
	if (delay < 0) {delay = delay * -1;}
	//change delay according to change of technology
	tech_value = op_dist_outcome(tech_dist);
	if (tech_value < change_prob) {
		//change from UMTS->WLAN
		if (tech == 0) {
			delay -= 0.031;
			tech = 1;
			change_prob = 0.8;
		}
		//change from WLAN->UMTS
		else {
			delay += 0.031;
			tech = 0;
			change_prob = 0.2;
		}
	}
	//set random location update delay according to delay value
	loc_update = loc_param * delay;
	//return to idle when location update is done, schedule next location update
	//op_intrpt_schedule_self (op_sim_time() + loc_update, go_idle);
	op_intrpt_schedule_self (op_sim_time() + loc_update_delay, go_idle);
	//op_intrpt_schedule_self (op_sim_time() + location_update_interval, reg_req);
	//write values to statistics
	op_stat_write(loc_update_time, loc_time);
	op_stat_write(loc_update_value, loc_update);
	//delay and location update graph
	op_stat_write(tech_change, delay);
}

void location_update_complete() {
	//write values to statistics
	op_stat_write(loc_update_time_complete, loc_time_complete);
}
--------------------------------------------------------------------------------


Appendix G: Modified process model for simulation with two location updates
================================================================================
Enter Execs for the forced state "init"
================================================================================
//set client-server/server-client delays, location update delays and
//location update delay parameters
id = op_id_self();
op_ima_obj_attr_get (id , "mobility_protocol", &mobility_protocol_values);
uplink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "client-server"));
downlink = (op_id_self() == op_id_from_name (op_topo_parent(id), OPC_OBJTYPE_QUEUE, "server-client"));
switch (mobility_protocol_values){
	case 1:
		if (uplink) { delay = 0.05;  loc_update = 0.1;   loc_param = 2; }
		else        { delay = 0.075; loc_update = 0.1;   loc_param = 1.333; }
		break;
	case 2:
		if (uplink) { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		else        { delay = 0.025; loc_update = 0.075; loc_param = 3; }
		break;
	case 3:
		if (uplink) { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		else        { delay = 0.05;  loc_update = 0.15;  loc_param = 3; }
		break;
	case 4:
		if (uplink) { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		else        { delay = 0.025; loc_update = 0.05;  loc_param = 2; }
		break;
}
//set fixed/random location update time
id = op_id_self();
op_ima_obj_attr_get (id , "location_update_type", &loc_update_time_status);
//set and schedule first location update
op_ima_obj_attr_get (id , "location_update_interval", &delta_reg_req);
//next_loc_update_time = (op_sim_time () + delta_reg_req);
next_loc_update_time = (op_sim_time () + 300);
op_intrpt_schedule_self (next_loc_update_time, reg_req);
next_loc_update_time = (op_sim_time () + 600);
op_intrpt_schedule_self (next_loc_update_time, reg_req);
//set end points' value for location update distribution
op_ima_obj_attr_get (id , "reg_req_p1", &reg_req_end_point_p1);
op_ima_obj_attr_get (id , "reg_req_p2", &reg_req_end_point_p2);
//location update distribution value
reg_req_dist = op_dist_load ("uniform", reg_req_end_point_p1, reg_req_end_point_p2);
//set end points' value for delay distribution
op_ima_obj_attr_get (id , "delay_p1", &delay_end_point_p1);
op_ima_obj_attr_get (id , "delay_p2", &delay_end_point_p2);
//delay distribution value
delay_dist = op_dist_load ("uniform", delay_end_point_p1, delay_end_point_p2);
//set location update delay
op_ima_obj_attr_get (id , "location_update_delay", &loc_update_delay);
loc_update_delay /= 1000;
//time for location update statistics
loc_time = 0;
loc_update_time = op_stat_reg ("Location update time", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//location update value statistics
loc_update_value = op_stat_reg ("Location update value", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//time for location update complete statistics
loc_time_complete = 0;
loc_update_time_complete = op_stat_reg ("Location update complete", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//change of delay according to change between UMTS and WLAN technologies
tech_dist = op_dist_load ("uniform", 0, 1);
tech = 1;
change_prob = 0.8;
//delay and location update statistics
tech_change = op_stat_reg ("Change of technology", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet delay statistics
pkt_delay = op_stat_reg ("Packet delay", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet count statistics
count = 0;
pkt_count = op_stat_reg ("Packet count", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
//packet drop statistics
drop = 0;
pkt_drop = op_stat_reg ("Packet drop", OPC_STAT_INDEX_NONE, OPC_STAT_LOCAL);
--------------------------------------------------------------------------------

