
MODULE IV

PART I

Transport Layer: Transport Service - Elements of Transport Protocols - Internet Transport Protocols:
UDP and TCP.

Transport Layer

The transport layer is not just another layer. It is the heart of the whole protocol hierarchy. Its
task is to provide reliable, cost-effective data transport from the source machine to the destination
machine, independently of the physical network or networks currently in use. Without the transport
layer, the whole concept of layered protocols would make little sense. In this chapter we will study the
transport layer in detail, including its services, design, protocols, and performance.
Transport Service

It includes:

Services Provided to the Upper Layers
Transport Service Primitives

Services Provided to the Upper Layers

The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective service
to its users, normally processes in the application layer. To achieve this goal, the transport layer makes
use of the services provided by the network layer. The hardware and/or software within the transport
layer that does the work is called the transport entity.

Figure 1. The network, transport, and application layers.

The transport entity can be located in the operating system kernel, in a separate user process, in a
library package bound into network applications, or conceivably on the network interface card. The
(logical) relationship of the network, transport, and application layers is illustrated in Fig 1.
Just as there are two types of network service, connection-oriented and connectionless, there are
also two types of transport service. The connection-oriented transport service is similar to the
connection-oriented network service in many ways. In both cases, connections have three phases:
establishment, data transfer, and release. Addressing and flow control are also similar in both layers.
Furthermore, the connectionless transport service is also very similar to the connectionless network
service.
Transport Service Primitives

To allow users to access the transport service, the transport layer must provide some operations
to application programs, that is, a transport service interface. Each transport service has its own
interface.
The transport service is similar to the network service, but there are also some important differences.
The main difference is that the network service is intended to model the service offered by real
networks, warts and all. Real networks can lose packets, so the network service is generally unreliable.
The (connection-oriented) transport service, in contrast, is reliable. Of course, real networks are not
error-free, but that is precisely the purpose of the transport layer: to provide a reliable service on top of
an unreliable network.

A second difference between the network service and transport service is whom the services are
intended for. The network service is used only by the transport entities. Few users write their own
transport entities, and thus few users or programs ever see the bare network service. In contrast, many
programs (and thus programmers) see the transport primitives. Consequently, the transport service must
be convenient and easy to use.
Consider the five primitives listed in Fig. 2.

Figure 2. The primitives for a simple transport service.

To see how these primitives might be used, consider an application with a server and a number of
remote clients. To start with, the server executes a LISTEN primitive, typically by calling a library
procedure that makes a system call to block the server until a client turns up. The transport service
primitives can be implemented as calls to library procedures in order to make them independent of the
network service primitives.
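As an illustration, the five primitives map naturally onto the Berkeley socket API. The sketch below (in Python; the port number is an arbitrary example, and the two halves would run in separate processes) shows one plausible correspondence:

```python
import socket

# Server side: LISTEN blocks until a client turns up.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 9000))              # 9000 is an arbitrary example port
server.listen(1)                     # LISTEN primitive
conn, addr = server.accept()         # blocks until a CONNECT arrives
conn.sendall(conn.recv(1024))        # RECEIVE, then SEND (echo the data back)

# Client side (run in a separate process):
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("localhost", 9000))  # CONNECT primitive
client.sendall(b"hello")             # SEND primitive
reply = client.recv(1024)            # RECEIVE primitive (blocks for data)
client.close()                       # DISCONNECT primitive
```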
When a client wants to talk to the server, it executes a CONNECT primitive. The transport entity
carries out this primitive by blocking the caller and sending a packet to the server. Encapsulated in the
payload of this packet is a transport layer message for the server's transport entity. A quick note on
terminology is now in order. For lack of a better term, we will reluctantly use the somewhat ungainly
acronym TPDU (Transport Protocol Data Unit) for messages sent from transport entity to transport
entity. Thus, TPDUs (exchanged by the transport layer) are
contained in packets (exchanged by the network layer). In turn, packets are contained in frames
(exchanged by the data link layer). When a frame arrives, the data link layer processes the frame header
and passes the contents of the frame payload field up to the network entity. The network entity processes
the packet header and passes the contents of the packet payload up to the transport entity. This nesting is
illustrated in Fig. 3.

Figure 3. Nesting of TPDUs, packets, and frames.

Elements of Transport Protocols
The transport service is implemented by a transport protocol used between two transport entities.
Transport protocols resemble data link protocols: both must deal with error control, sequencing, and
flow control.

Figure 4. (a) Environment of the data link layer. (b) Environment of the transport layer.

There are major dissimilarities between DLL protocol and transport layer protocol.

In the data link layer, a router need not explicitly specify which router it wants to talk to: each
outgoing line leads directly to a particular router. In the transport layer, explicit addressing of the
destination is required.
Initial connection setup is much more complicated in the transport layer (shown in Fig. 4).
The potential existence of storage capacity in the subnet requires special transport protocols.
Buffering and flow control are needed in both layers, but the presence of a large and dynamically
varying number of connections in the transport layer may require a different approach than the
data link layer approach (e.g., sliding window buffer management).
Addressing
When an application (e.g., a user) process wishes to set up a connection to a remote application
process, it must specify which one to connect to. The method normally used is to define transport
addresses to which processes can listen for connection requests. In the Internet, these end points are
called ports and take the form of (IP address, local port) pairs. The generic term TSAP
(Transport Service Access Point) is used to refer to these end points.
Figure 5 illustrates the relationship between the NSAP, TSAP and transport connection.
Application processes, both clients and servers, can attach themselves to a TSAP to establish a
connection to a remote TSAP. These connections run through NSAPs on each host, as shown. The
purpose of having TSAPs is that in some networks, each computer has a single NSAP, so some way is
needed to distinguish multiple transport end points that share that NSAP.

Figure 5. TSAPs, NSAPs, and transport connections.

A possible connection scenario:

1. A time-of-day server process on host 2 attaches itself to TSAP 1522 to wait for an incoming call. How
a process attaches itself to a TSAP is outside the networking model and depends entirely on the local
operating system. A call such as our LISTEN might be used, for example.

2. An application process on host 1 wants to find out the time-of-day, so it issues a CONNECT request
specifying TSAP 1208 as the source and TSAP 1522 as the destination. This action ultimately results in
a transport connection being established between the application process on host 1 and server 1 on host
2.

3. The application process then sends over a request for the time.
4. The time server process responds with the current time.
5. The transport connection is then released.
In this model, services have stable TSAP addresses that are listed in files in well-known places,
such as the /etc/services file on UNIX systems, which lists which servers are permanently attached to
which ports. While stable TSAP addresses work for a small number of key services that never change
(e.g. the Web server), user processes, in general, often want to talk to other user processes that only exist
for a short time and do not have a TSAP address that is known in advance. Furthermore, if there are
potentially many server processes, most of which are rarely used, it is wasteful to have each of them
active and listening to a stable TSAP address all day long. In short, a better scheme is needed.
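The stable-address model is visible directly in the sockets API; for instance (a small sketch using Python's standard library, which consults the local services database):

```python
import socket

# Look up well-known (stable) ports of standard services by name.
print(socket.getservbyname("http", "tcp"))    # 80
print(socket.getservbyname("domain", "udp"))  # 53 (DNS)
```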
One such scheme is shown in Fig. 6 in a simplified form. It is known as the initial connection
protocol. Instead of every conceivable server listening at a well-known TSAP, each machine that wishes
to offer services to remote users has a special process server that acts as a proxy for less heavily used
servers. It listens to a set of ports at the same time, waiting for a connection request. Potential users of a
service begin by doing a CONNECT request, specifying the TSAP address of the service they want. If
no server is waiting for them, they get a connection to the process server, as shown in Fig. 6(a).

After it gets the incoming request, the process server spawns the requested server, allowing it to inherit
the existing connection with the user. The new server then does the requested work, while the process
server goes back to listening for new requests, as shown in Fig. 6(b).
While the initial connection protocol works fine for those servers that can be created as they are
needed, there are many situations in which services do exist independently of the process server. A file
server, for example, needs to run on special hardware (a machine with a disk) and cannot just be created
on-the-fly when someone wants to talk to it.

Figure 6. How a user process in host 1 establishes a connection with a time-of-day server in host 2.

To handle this situation, an alternative scheme is often used. In this model, there exists a special
process called a name server or sometimes a directory server. To find the TSAP address corresponding
to a given service name, such as ''time of day,'' a user sets up a connection to the name server (which
listens to a well-known TSAP). The user then sends a message specifying the service name, and the
name server sends back the TSAP address. Then the user releases the connection with the name server
and establishes a new one with the desired service.
In this model, when a new service is created, it must register itself with the name server, giving
both its service name (typically, an ASCII string) and its TSAP. The name server records this
information in its internal database so that when queries come in later, it will know the answers.
The function of the name server is analogous to the directory assistance operator in the telephone
system: it provides a mapping of names onto numbers. Just as in the telephone system, it is essential
that the address of the well-known TSAP used by the name server (or the process server in the initial
connection protocol) is indeed well known. If you do not know the number of the information operator,
you cannot call the information operator to find it out. If you think the number you dial for information
is obvious, try it in a foreign country sometime.

Connection Establishment
Connection establishment begins when a Transport Service user (TS-user) issues a T.CONNECT
request primitive to the transport entity. The transport entity issues a CR-TPDU (Connection Request
Transport Protocol Data Unit) to its peer transport entity who informs its user with a T.CONNECT
indication primitive. The user can respond with a T.CONNECT response primitive if it is prepared
to accept the connection, or with a T.DISCONNECT request primitive if it is not. The peer transport
entity then issues a CC-TPDU (Connection Confirm) or a DR-TPDU (Disconnect Request),
respectively. The outcome is conveyed to the initiating user by means of a T.CONNECT confirm
primitive (for success) or a T.DISCONNECT indication primitive (for rejection) containing the reason
for the failure.

The CR-TPDU and the CC-TPDUs both contain information relating to the connection being
established. The purpose of CC is to establish a connection with agreed-upon characteristics. These
characteristics are defined in the CC-TPDU.

A transport connection has three types of Identifiers:

TSAP - Transport Service Access Point
NSAP - Network Service Access Point
Transport Connection I.D.
As there may be more than one user of the transport entity, a TSAP allows the transport entity to
multiplex data between users. This identifier must be passed down from the transport user, and included
in CC- and CR-TPDUs. The NSAP identifies the system on which the transport entity is located. Each
transport connection is given a unique identifier used in all TPDUs. It allows the transport entity to
multiplex multiple transport connections on a single network.

The issue here is how to deal with the problem of delayed duplicates and establish connections in a
reliable way. The methods suggested are

a. Use throwaway TSAP addresses - Each time a TSAP address is needed, a new, unique address is
generated, typically based on the current time. When a connection is released, the addresses are
discarded forever.
b. Each connection is assigned a connection identifier (i.e., a sequence number incremented for
each connection established), chosen by the initiating party, and put in each TPDU, including the
one requesting the connection. After each connection is released, each transport entity could
update a table listing obsolete connections as (peer transport entity, connection identifier) pair.
Whenever a connection request comes in, it could be checked against the table, to see if it
belonged to a previously released connection.
c. Kill off aged packets - here we ensure that no packet lives longer than some known time, so what
we need is a mechanism to kill off very old packets that are still wandering about. Methods are:
Restricted subnet design - prevents packets from looping
Putting a hop counter in each packet.
Time stamping each packet
d. When a host crashes and comes up again, the transport entity will remain idle for T seconds
(where T is the life time of a packet) so that all old TPDUs will die off.

Figure 7. Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION
REQUEST. (a) Normal operation. (b) Old duplicate CONNECTION REQUEST appearing out of nowhere. (c) Duplicate
CONNECTION REQUEST and duplicate ACK.
The clock-based method solves the delayed duplicate problem for data TPDUs, but for this method to be
useful, a connection must first be established. Since control TPDUs may also be delayed, there is a
potential problem in getting both sides to agree on the initial sequence number. Suppose, for example,
that connections are established by having host 1 send a CONNECTION REQUEST TPDU containing
the proposed initial sequence number and destination port number to a remote peer, host 2. The receiver,
host 2, then acknowledges this request by sending a CONNECTION ACCEPTED TPDU back. If the
CONNECTION REQUEST TPDU is lost but a delayed duplicate CONNECTION REQUEST suddenly
shows up at host 2, the connection will be established incorrectly.
To solve this problem, Tomlinson (1975) introduced the three-way handshake. This
establishment protocol does not require both sides to begin sending with the same sequence number, so
it can be used with synchronization methods other than the global clock method. The normal setup
procedure when host 1 initiates is shown in Fig. 7(a). Host 1 chooses a sequence number, x, and sends a
CONNECTION REQUEST TPDU containing it to host 2. Host 2 replies with an ACK TPDU
acknowledging x and announcing its own initial sequence number, y. Finally, host 1 acknowledges host
2's choice of an initial sequence number in the first data TPDU that it sends.
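The exchange itself is easy to express in code. The sketch below (Python, message flow only, no network I/O) mirrors Fig. 7(a), with x and y standing for the randomly chosen initial sequence numbers:

```python
import random

# Host 1 picks an initial sequence number and sends CR (CONNECTION REQUEST).
x = random.randrange(2**32)
cr = {"type": "CR", "seq": x}

# Host 2 acknowledges x and announces its own initial sequence number y.
y = random.randrange(2**32)
ack = {"type": "ACK", "seq": y, "ack": cr["seq"]}

# Host 1 confirms y in the first data TPDU it sends, completing the handshake.
data = {"type": "DATA", "seq": x, "ack": ack["seq"]}

# Host 2 accepts only if the ack field matches the y it actually sent; a delayed
# duplicate from an old connection fails this check (Fig. 7(b) and 7(c)).
assert data["ack"] == y
```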
Now let us see how the three-way handshake works in the presence of delayed duplicate control
TPDUs. In Fig. 7(b), the first TPDU is a delayed duplicate CONNECTION REQUEST from an old
connection. This TPDU arrives at host 2 without host 1's knowledge. Host 2 reacts to this TPDU by
sending host 1 an ACK TPDU, in effect asking for verification that host 1 was indeed trying to set up a
new connection. When host 1 rejects host 2's attempt to establish a connection, host 2 realizes that it was
tricked by a delayed duplicate and abandons the connection. In this way, a delayed duplicate does no
damage.
The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating
around in the subnet. This case is shown in Fig. 7(c). As in the previous example, host 2 gets a delayed
CONNECTION REQUEST and replies to it. At this point it is crucial to realize that host 2 has proposed
using y as the initial sequence number for host 2 to host 1 traffic, knowing full well that no TPDUs
containing sequence number y or acknowledgements to y are still in existence. When the second delayed
TPDU arrives at host 2, the fact that z has been acknowledged rather than y tells host 2 that this, too, is
an old duplicate. The important thing to realize here is that there is no combination of old TPDUs that
can cause the protocol to fail and have a connection set up by accident when no one wants it.
Connection Release
Releasing a connection is easier than establishing one. There are two types of connection termination.

Asymmetric - Either party issues a DISCONNECT, which results in a DISCONNECT TPDU
being sent and the transmission ends in both directions.
Symmetric - Both parties issue DISCONNECT, closing only one direction at a time.
Asymmetric release is abrupt and may result in data loss. Consider the scenario of Fig. 8. After
the connection is established, host 1 sends a TPDU that arrives properly at host 2. Then host 1 sends
another TPDU. Unfortunately, host 2 issues a DISCONNECT before the second TPDU arrives. The
result is that the connection is released and data are lost.

Figure 8. Abrupt disconnection with loss of data.

Clearly, a more sophisticated release protocol is needed to avoid data loss. One way is to use
symmetric release, in which each direction is released independently of the other one. Here, a host can
continue to receive data even after it has sent a DISCONNECT TPDU.
Symmetric release does the job when each process has a fixed amount of data to send and clearly
knows when it has sent it. In other situations, determining that all the work has been done and the
connection should be terminated is not so obvious. One can envision a protocol in which host 1 says: I
am done. Are you done too? If host 2 responds: I am done too. Goodbye, the connection can be safely
released.
Figure 9 illustrates four scenarios of releasing using a three-way handshake. While this protocol
is not infallible, it is usually adequate. In Fig. 9(a), we see the normal case in which one of the users
sends a DR (DISCONNECTION REQUEST) TPDU to initiate the connection release. When it arrives,
the recipient sends back a DR TPDU, too, and starts a timer, just in case its DR is lost. When this DR
arrives, the original sender sends back an ACK TPDU and releases the connection. Finally, when the
ACK TPDU arrives, the receiver also releases the connection. Releasing a connection means that the
transport entity removes the information about the connection from its table of currently open
connections and signals the connection's owner (the transport user) somehow.
This action is different from a transport user issuing a DISCONNECT primitive. If the final ACK
TPDU is lost, as shown in Fig. 9(b), the situation is saved by the timer.
When the timer expires, the connection is released anyway.

Figure 9. Four protocol scenarios for releasing a connection. (a) Normal case of three-way handshake. (b) Final ACK lost.
(c) Response lost. (d) Response lost and subsequent DRs lost.

Now consider the case of the second DR being lost. The user initiating the disconnection will not
receive the expected response, will time out, and will start all over again. In Fig. 9(c) we see how this
works, assuming that the second time no TPDUs are lost and all TPDUs are delivered correctly and on
time.
Our last scenario, Fig. 9(d), is the same as Fig. 9(c) except that now we assume all the repeated
attempts to retransmit the DR also fail due to lost TPDUs. After N retries, the sender just gives up and
releases the connection. Meanwhile, the receiver times out and also exits. While this protocol usually
suffices, in theory it can fail if the initial DR and N retransmissions are all lost. The sender will give up
and release the connection, while the other side knows nothing at all about the attempts to disconnect
and is still fully active. This situation results in a half-open connection.
We could have avoided this problem by not allowing the sender to give up after N retries but
forcing it to go on forever until it gets a response. However, if the other side is allowed to time out, then
the sender will indeed go on forever, because no response will ever be forthcoming. If we do not allow
the receiving side to time out, then the protocol hangs in Fig. 9(d). One way to kill off half-open
connections is to have a rule saying that if no TPDUs have arrived for a certain number of seconds, the
connection is then automatically disconnected. That way, if one side ever disconnects, the other side will
detect the lack of activity and also disconnect.
Of course, if this rule is introduced, it is necessary for each transport entity to have a timer that is
stopped and then restarted whenever a TPDU is sent. If this timer expires, a dummy TPDU is
transmitted, just to keep the other side from disconnecting. On the other hand, if the automatic
disconnect rule is used and too many dummy TPDUs in a row are lost on an otherwise idle connection,
first one side, then the other side will automatically disconnect. We will not belabor this point any
more, but by now it should be clear that releasing a connection without data loss is not nearly as simple
as it at first appears.
Flow Control and Buffering

For flow control, a sliding window is needed on each connection to keep a fast transmitter from
overrunning a slow receiver (the same as the data link layer). Since a host may have numerous
connections, it is impractical to implement the same data link buffering strategy (using dedicated buffers
for each line). The sender should always buffer outgoing TPDUs until they are acknowledged. The
receiver may not dedicate specific buffers to specific connections. Instead, a single buffer pool may be
maintained for all connections. When a TPDU comes in, if there is a free buffer available, the TPDU is
accepted; otherwise, it is discarded. However, for high-bandwidth traffic (e.g., file transfers), it is better
if the receiver dedicates a full window of buffers, to allow the data to flow at maximum speed.
If the network service is unreliable, the sender must buffer all TPDUs sent, just as in the data link
layer. However, with reliable network service, other trade-offs become possible. In particular, if the
sender knows that the receiver always has buffer space, it need not retain copies of the TPDUs it sends.
However, if the receiver cannot guarantee that every incoming TPDU will be accepted, the sender will
have to buffer anyway. In the latter case, the sender cannot trust the network layer's acknowledgement,
because the acknowledgement means only that the TPDU arrived, not that it was accepted. We will
come back to this important point later.
Even if the receiver has agreed to do the buffering, there still remains the question of the buffer
size. If most TPDUs are nearly the same size, it is natural to organize the buffers as a pool of identically-
sized buffers, with one TPDU per buffer, as in Fig. 10(a). However, if there is wide variation in TPDU
size, from a few characters typed at a terminal to thousands of characters from file transfers, a pool of
fixed-sized buffers presents problems. If the buffer size is chosen equal to the largest possible TPDU,
space will be wasted whenever a short TPDU arrives. If the buffer size is chosen less than the maximum
TPDU size, multiple buffers will be needed for long TPDUs, with the attendant complexity.

Figure 10. (a) Chained fixed-size buffers. (b) Chained variable-sized


buffers. (c) One large circular buffer per connection.

Another approach to the buffer size problem is to use variable-sized buffers, as in Fig. 10(b). The
advantage here is better memory utilization, at the price of more complicated buffer management. A
third possibility is to dedicate a single large circular buffer per connection, as in Fig. 10(c). This system
also makes good use of memory, provided that all connections are heavily loaded, but is poor if some
connections are lightly loaded.
The optimum trade-off between source buffering and destination buffering depends on the type
of traffic carried by the connection. For low-bandwidth bursty traffic, such as that produced by an
interactive terminal, it is better not to dedicate any buffers, but rather to acquire them dynamically at
both ends. Since the sender cannot be sure the receiver will be able to acquire a buffer, the sender must
retain a copy of the TPDU until it is acknowledged. On the other hand, for file transfer and other high-
bandwidth traffic, it is better if the receiver does dedicate a full window of buffers, to allow the data to

flow at maximum speed. Thus, for low-bandwidth bursty traffic, it is better to buffer at the sender, and
for high-bandwidth smooth traffic, it is better to buffer at the receiver.
A reasonably general way to manage dynamic buffer allocation is to decouple the buffering from
the acknowledgements. Dynamic buffer management means, in effect, a variable-sized window. Initially,
the sender requests a certain number of buffers, based on its perceived needs. The receiver then grants as
many of these as it can afford. Every time the sender transmits a TPDU, it must decrement its allocation,
stopping altogether when the allocation reaches zero. The receiver then separately piggybacks both
acknowledgements and buffer allocations onto the reverse traffic.
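A toy model of this credit scheme (an illustrative sketch only; a real transport entity would track credit per connection and per TPDU size) might look like this:

```python
class CreditSender:
    """Sender side of dynamic buffer allocation (credit-based flow control)."""
    def __init__(self, credit):
        self.credit = credit          # number of buffers granted by the receiver

    def send(self, tpdu):
        if self.credit == 0:
            return False              # allocation exhausted: must stop sending
        self.credit -= 1              # each TPDU sent consumes one buffer
        return True

    def on_ack(self, new_credit):
        # Acks piggyback a fresh buffer allocation from the receiver.
        self.credit = new_credit

s = CreditSender(credit=4)
sent = [s.send(i) for i in range(6)]   # fifth and sixth sends are refused
print(sent)                            # [True, True, True, True, False, False]
s.on_ack(new_credit=2)                 # receiver grants two more buffers
```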
Multiplexing

The reasons for multiplexing:

To share the price of a virtual circuit connection: mapping multiple transport connections to a
single network connection (upward multiplexing).
To provide a high bandwidth: mapping a single transport connection to multiple network
connections (downward multiplexing).
For example, if only one network address is available on a host, all transport connections on that
machine have to use it. When a TPDU comes in, some way is needed to tell which process to give it to.
This situation, called upward multiplexing, is shown in Fig. 11(a). In this figure, four distinct transport
connections all use the same network connection (e.g., IP address) to the remote host.

Figure 11. (a) Upward multiplexing. (b) Downward multiplexing.


Multiplexing can also be useful in the transport layer for another reason. Suppose, for example, that a
subnet uses virtual circuits internally and imposes a maximum data rate on each one. If a user needs
more bandwidth than one virtual circuit can provide, a way out is to open multiple network connections
and distribute the traffic among them on a round-robin basis, as indicated in Fig. 11(b). This modus
operandi is called downward multiplexing. With k network connections open, the effective bandwidth
is increased by a factor of k. A common example of downward multiplexing occurs with home users
who have an ISDN line. This line provides for two separate connections of 64 kbps each. Using both of
them to call an Internet provider and dividing the traffic over both lines makes it possible to achieve an
effective bandwidth of 128 kbps.
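The round-robin distribution itself is trivial to express; a sketch with k = 3 hypothetical network connections:

```python
from itertools import cycle

# k open network connections; TPDUs are spread over them round-robin.
connections = ["vc0", "vc1", "vc2"]        # k = 3 hypothetical virtual circuits
rr = cycle(connections)
for tpdu in ["t0", "t1", "t2", "t3", "t4"]:
    print(tpdu, "->", next(rr))            # t0->vc0, t1->vc1, t2->vc2, t3->vc0, ...
```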
Crash Recovery

In case of a router crash, the two transport entities must exchange information after the crash to
determine which TPDUs were received and which were not; the connection can then be recovered by
retransmitting the lost ones. Recovering from a host crash is much more difficult. Suppose that the sender is
sending a long file to the receiver using a simple stop-and-wait protocol. Part way through the
transmission the receiver crashes. When the receiver comes back up, it might send a broadcast TPDU to
all other hosts, requesting the other hosts to inform it of the status of all open connections before the
crash. The sender can be in one of two states: one TPDU outstanding or no TPDUs outstanding.

Internet Transport Layer Protocols


The Internet has two main protocols in the transport layer, a connectionless protocol and a
connection-oriented one. In the following sections we will study both of them. The connectionless
protocol is UDP. The connection-oriented protocol is TCP.
The two primary protocols in this layer are

Transmission Control Protocol (TCP)


User Datagram Protocol (UDP)

User Datagram Protocol (UDP)

The Internet protocol suite supports a connectionless transport protocol, UDP (User Datagram
Protocol). UDP transmits segments consisting of an 8-byte header followed by the payload. The header
is shown in Fig. 12. The two ports serve to identify the end points within the source and destination
machines. When a UDP packet arrives, its payload is handed to the process attached to the destination
port. Without the port fields, the transport layer would not know what to do with the packet. With them,
it delivers segments correctly.

Figure 12. The UDP header.

The source port is primarily needed when a reply must be sent back to the source. By copying
the source port field from the incoming segment into the destination port field of the outgoing segment,
the process sending the reply can specify which process on the sending machine is to get it.
The UDP length field includes the 8-byte header and the data. The UDP checksum is optional
and stored as 0 if not computed (a true computed 0 is stored as all 1s).
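Because the header is just four 16-bit fields, it can be built and parsed by hand. A sketch using Python's struct module (the port numbers are arbitrary examples, and the checksum is left as 0, i.e., not computed):

```python
import struct

# UDP header: source port, destination port, length, checksum (each 16 bits).
payload = b"hello"
src_port, dst_port = 5000, 53            # arbitrary example ports
length = 8 + len(payload)                # 8-byte header plus data
checksum = 0                             # 0 means "checksum not computed"
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

# Parsing is the inverse operation.
sp, dp, ln, ck = struct.unpack("!HHHH", datagram[:8])
assert (sp, dp, ln) == (5000, 53, 13)
```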
UDP Applications
Client-server (request-reply) interactions (a sketch follows below)
E.g., Domain Name System (DNS), Remote Procedure Call (RPC)
Real-time multimedia applications
E.g., Real-time Transport Protocol (RTP)
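The request-reply pattern needs no connection setup at all. The sketch below illustrates it (the port number is an arbitrary example; server and client would run as separate processes):

```python
import socket

# Server: bind to a port and answer each request datagram with a reply.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("", 9999))                     # 9999 is an arbitrary example port
data, client_addr = srv.recvfrom(512)    # one request datagram
srv.sendto(b"reply to " + data, client_addr)

# Client (in another process): no connection setup, just send and wait.
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"what time is it?", ("localhost", 9999))
reply, _ = cli.recvfrom(512)
```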

Remote Procedure Call


When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is
suspended and execution of the called procedure takes place on 2. Information can be transported from
the caller to the callee in the parameters and can come back in the procedure result. No message passing
is visible to the programmer. This technique is known as RPC (Remote Procedure Call) and has
become the basis for many networking applications. Traditionally, the calling procedure is known as the
client and the called procedure is known as the server, and we will use those names here too.
The idea behind RPC is to make a remote procedure call look as much as possible like a local
one. In the simplest form, to call a remote procedure, the client program must be bound with a small
library procedure, called the client stub, that represents the server procedure in the client's address
space. Similarly, the server is bound with a procedure called the server stub. These procedures hide the
fact that the procedure call from the client to the server is not local.
The actual steps in making an RPC are shown in Fig. 13. Step 1 is the client calling the client
stub. This call is a local procedure call, with the parameters pushed onto the stack in the normal way.
Step 2 is the client stub packing the parameters into a message and making a system call to send the
message. Packing the parameters is called marshaling. Step 3 is the kernel sending the message from
the client machine to the server machine. Step 4 is the kernel passing the incoming packet to the server

stub. Finally, step 5 is the server stub calling the server procedure with the unmarshaled parameters. The
reply traces the same path in the other direction.

Figure 13. Steps in making a remote procedure call. The stubs are
shaded.
The key item to note here is that the client procedure, written by the user, just makes a normal (i.e.,
local) procedure call to the client stub, which has the same name as the server procedure. Since the client
procedure and client stub are in the same address space, the parameters are passed in the usual way.
Similarly, the server procedure is called by a procedure in its address space with the parameters it
expects. To the server procedure, nothing is unusual. In this way, instead of I/O being done on sockets,
network communication is done by faking a normal procedure call.
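The essence of a client stub is marshaling plus a blocking send and receive. A sketch under simplifying assumptions (two integer parameters, UDP transport, and a hypothetical server address; this is not any particular RPC system's API):

```python
import socket
import struct

def add_stub(a, b, server=("localhost", 7000)):   # hypothetical RPC server
    """Client stub for a remote add(a, b): marshals, sends, awaits the reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    request = struct.pack("!ii", a, b)       # step 2: marshal the parameters
    sock.sendto(request, server)             # step 3: kernel sends the message
    reply, _ = sock.recvfrom(64)             # caller stays blocked until the reply
    (result,) = struct.unpack("!i", reply)   # unmarshal the procedure result
    return result

# To the caller this looks exactly like a local call: total = add_stub(2, 3)
```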
Despite the conceptual elegance of RPC, there are a few snakes hiding under the grass. A big one
is the use of pointer parameters. Normally, passing a pointer to a procedure is not a problem. The called
procedure can use the pointer in the same way the caller can because both procedures live in the same
virtual address space. With RPC, passing pointers is impossible because the client and server are in
different address spaces.
In some cases, tricks can be used to make it possible to pass pointers. Suppose that the first
parameter is a pointer to an integer, k. The client stub can marshal k and send it along to the server. The
server stub then creates a pointer to k and passes it to the server procedure, just as it expects. When the
server procedure returns control to the server stub, the latter sends k back to the client where the new k is
copied over the old one, just in case the server changed it. In effect, the standard calling sequence of
call-by-reference has been replaced by copy restore.
Unfortunately, this trick does not always work, for example, if the pointer points to a graph or
other complex data structure. For this reason, some restrictions must be placed on parameters to
procedures called remotely.
A second problem is that in weakly-typed languages, like C, it is perfectly legal to write a
procedure that computes the inner product of two vectors (arrays), without specifying how large either
one is. Each could be terminated by a special value known only to the calling and called procedure.
Under these circumstances, it is essentially impossible for the client stub to marshal the parameters: it
has no way of determining how large they are.
A third problem is that it is not always possible to deduce the types of the parameters, not even
from a formal specification or the code itself. An example is printf, which may have any number of
parameters (at least one), and the parameters can be an arbitrary mixture of integers, shorts, longs,
characters, strings, floating-point numbers of various lengths, and other types. Trying to call printf as a
remote procedure would be practically impossible because C is so permissive. However, a rule saying
that RPC can be used provided that you do not program in C (or C++) would not be popular.
A fourth problem relates to the use of global variables. Normally, the calling and called
procedure can communicate by using global variables, in addition to communicating via parameters. If
the called procedure is now moved to a remote machine, the code will fail because the global variables
are no longer shared.
These problems are not meant to suggest that RPC is hopeless. In fact, it is widely used, but
some restrictions are needed to make it work well in practice. Of course, RPC need not use UDP
packets, but RPC and UDP are a good fit and UDP is commonly used for RPC. However, when the
parameters or results may be larger than the maximum UDP packet or when the operation requested is
not idempotent (i.e., cannot be repeated safely, such as when incrementing a counter), it may be
necessary to set up a TCP connection and send the request over it rather than use UDP.

The Real-Time Transport Protocol


UDP is widely used in real-time multimedia applications. In particular, as Internet radio, Internet
telephony, music-on-demand, videoconferencing, video-on-demand, and other multimedia applications
became more commonplace, people discovered that each application was reinventing more or less the
same real-time transport protocol. Thus was RTP (Real-time Transport Protocol) born.
The position of RTP in the protocol stack is somewhat strange. It was decided to put RTP in user
space and have it (normally) run over UDP. It operates as follows. The multimedia application consists
of multiple audio, video, text, and possibly other streams. These are fed into the RTP library, which is in
user space along with the application. This library then multiplexes the streams and encodes them in
RTP packets, which it then stuffs into a socket. At the other end of the socket (in the operating system
kernel), UDP packets are generated and embedded in IP packets. If the computer is on an Ethernet, the
IP packets are then put in Ethernet frames for transmission. The protocol stack for this situation is shown
in Fig. 14(a). The packet nesting is shown in Fig. 14(b).

Figure 14. (a) The position of RTP in the protocol stack. (b) Packet
nesting.
As a consequence of this design, it is a little hard to say which layer RTP is in. Since it runs in
user space and is linked to the application program, it certainly looks like an application protocol. On the
other hand, it is a generic, application-independent protocol that just provides transport facilities, so it
also looks like a transport protocol. Probably the best description is that it is a transport protocol that is
implemented in the application layer.
The basic function of RTP is to multiplex several real-time data streams onto a single stream of
UDP packets. The UDP stream can be sent to a single destination (unicasting) or to multiple destinations
(multicasting). Because RTP just uses normal UDP, its packets are not treated specially by the routers
unless some normal IP quality-of-service features are enabled. In particular, there are no special
guarantees about delivery, jitter, etc.
Each packet sent in an RTP stream is given a number one higher than its predecessor. This
numbering allows the destination to determine if any packets are missing. If a packet is missing, the best
action for the destination to take is to approximate the missing value by interpolation. Retransmission is
not a practical option since the retransmitted packet would probably arrive too late to be useful. As a
consequence, RTP has no flow control, no error control, no acknowledgements, and no mechanism to
request retransmissions.

[ RTP is intended for real-time multimedia applications, like radio, telephony, music-on-demand,
videoconferencing, video-on-demand, etc. Its basic function is to multiplex several real-time data
streams into a single stream of UDP packets, sent to a single destination or to multiple destinations. A
stream may contain, for example, a video stream and two audio streams, for stereo or for sound tracks
in two languages. The packets receive no special treatment from routers unless some quality-of-service
features of the IP packets are enabled. RTP packets have a sequence number. If a receiver misses one,
the best action is probably to approximate missing values by interpolation, since a retransmission would
probably arrive too late. Each payload may contain multiple samples, and they may be coded any way
the user wants. The payload type field in the header indicates which coding is used.

The time of each sample relative to the first sample in the stream can be indicated in the header
by the sender. The receiver can use this to buffer incoming samples and to play each sample at the right
moment, reducing jitter effects. RTCP (Real-time Transport Control Protocol) can be used to handle
feedback, synchronization, and the user interface. The feedback can be used to provide information on
delay, jitter, bandwidth, congestion, and other network properties to the source. The encoding process
can use it to increase the data rate and give better quality if the network is functioning well; otherwise,
it can cut back on quality and data rate. By providing continuous feedback, the best quality under the
current circumstances can be provided. RTCP also provides for synchronizing multiple data streams,
e.g., video and sound. Further, it provides names (e.g., in ASCII text) for the various data streams, to be
shown to the user. ]

The RTP header is illustrated in Fig. 15.

Figure 15. The RTP header.

version (V): 2 bits


This field identifies the version of RTP. The version defined by RFC 1889 is 2.

padding (P): 1 bit


If the padding bit is set, the packet contains one or more additional padding octets at the end
which are not part of the payload. The last octet of the padding contains a count of how many
padding octets should be ignored. Padding may be needed by some encryption algorithms with
fixed block sizes or for carrying several RTP packets in a lower-layer protocol data unit.

extension (X): 1 bit
If the extension bit is set, the fixed header is followed by exactly one header extension.

CSRC count (CC): 4 bits


The CSRC count contains the number of CSRC identifiers that follow the fixed header.

marker (M): 1 bit


The marker bit is used by specific applications to serve purposes of their own. We will discuss this in
more detail when we study Application Level Framing.

payload type (PT): 7 bits


This field identifies the format (e.g. encoding) of the RTP payload and determines its
interpretation by the application. This field is not intended for multiplexing separate media.

sequence number: 16 bits


The sequence number increments by one for each RTP data packet sent, and may be used by the
receiver to detect packet loss and to restore packet sequence. The initial value of the sequence
number is random (unpredictable).

timestamp: 32 bits
The timestamp reflects the sampling instant of the first octet in the RTP data packet. The
sampling instant must be derived from a clock that increments monotonically and linearly in time
to allow synchronization and jitter calculations.

SSRC: 32 bits
The SSRC field identifies the synchronization source. This identifier is chosen randomly, with
the intent that no two synchronization sources within the same RTP session will have the same
SSRC identifier.

CSRC list: 0 to 15 items, 32 bits each


The CSRC list identifies the contributing sources for the payload contained in this packet. The
number of identifiers is given by the CC field. If there are more than 15 contributing sources,
only 15 may be identified. CSRC identifiers are inserted by mixers, using the SSRC identifiers of
contributing sources.
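With CC = 0 (no CSRC list), the fixed header is exactly 12 bytes and can be assembled directly from these fields. A sketch (payload type 0, PCMU audio in the standard static table, is used here purely as an example):

```python
import struct

def rtp_header(seq, timestamp, ssrc, pt, marker=0):
    """Build a fixed 12-byte RTP header with V=2, P=0, X=0, CC=0."""
    byte0 = (2 << 6) | (0 << 5) | (0 << 4) | 0   # version 2, no padding/extension, CC=0
    byte1 = (marker << 7) | (pt & 0x7F)          # marker bit plus 7-bit payload type
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=160, ssrc=0x12345678, pt=0)
assert len(hdr) == 12
```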

Transmission Control Protocol (TCP)

Introduction to TCP

TCP (Transmission Control Protocol) provides a reliable byte stream over an unreliable
internetwork. Each machine supporting TCP has a TCP transport entity, either a user process or part of
the kernel, that manages TCP streams (connections) and interfaces to the IP layer. A TCP entity accepts
user data streams from local processes, breaks them up into pieces not exceeding 64K (usually about
1460 bytes to fit in a single Ethernet frame) and sends each piece as a separate IP datagram. The receiving
side gives IP datagrams containing TCP data to its TCP entity, which reconstructs the original byte
streams. IP gives no guarantees that datagrams will be delivered properly, so it is up to TCP to time out
and retransmit. Also, IP datagrams might be delivered in the wrong order; it is up to TCP to rearrange
them into the proper sequence.

The TCP service model

The TCP service is obtained by having both the sender and receiver create end points, called
sockets. Each socket has a socket number (address) consisting of the IP address of the host and a 16 bit
number local to that host, called a port. A port is the TCP name for a TSAP. A connection must then
explicitly be established between a socket on the sending machine and a socket on the receiving
machine. Two or more connections may terminate at the same socket. Connections are identified by the
socket identifiers at both ends, that is (socket1, socket2). No VC numbers or other identifiers are used.
Port numbers below 1024 are called well-known ports and are reserved for standard services, like FTP
or TELNET.
All TCP connections are full-duplex (traffic can go in both directions) and point-to-point (each
connection has exactly 2 end points). Multicasting or broadcasting are not supported. A TCP connection
is a byte stream, not a message stream. For example, if the sending process does four 512 byte writes to
a TCP stream, these data may be delivered to the receiving process as four 512 byte chunks, two 1024
bytes chunks, one 2048 byte chunk or some other way. This is analogous to UNIX files.
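A practical consequence is that applications needing message boundaries must impose their own framing on the byte stream. A common sketch is a length prefix (hypothetical helper functions, not part of TCP itself):

```python
import struct

def send_msg(sock, payload):
    # Prefix each message with its 4-byte length; TCP may split or merge
    # the bytes in transit, but the receiver can reassemble exact messages.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:                  # recv may return fewer bytes than asked for
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```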

TCP may send user data immediately or buffer it, in order to send larger IP datagrams. Applications
might use the PUSH flag to indicate that TCP should send the data immediately. The application can
also send urgent data (e.g., when an interactive user hits the CTRL-C key); TCP then puts control
information in its header, and the receiving side interrupts the application program using the connection
(gives it a signal, in UNIX terms). The end of the urgent data is marked, but not its beginning;
the receiving application has to figure that out. This provides a crude signaling mechanism.
The TCP protocol
Every byte on a TCP connection has its own unsigned 32-bit sequence number. Sequence numbers are
used both for acks, which use a 32-bit header field, and for the window mechanism, which uses a
separate 16-bit header field.
The sending and receiving TCP entities exchange data in the form of segments. A segment
consists of a fixed 20-byte header (plus an optional part) followed by 0 or more data bytes. The TCP
software decides how big segments should be; it can accumulate bytes from several writes into one
segment or split data from one write over multiple segments. Each segment (including the header) must
fit into the 64K IP payload. Each network has a MTU (maximum transfer unit) and a segment must fit in
it. In practice, the MTU is generally a few thousand bytes and thus defines the upper bound on segment
size. If a segment passes through a sequence of networks and hits one whose MTU is smaller than the
segment, the router at the boundary fragments the segment. Each fragment is a new segment having its
own TCP and IP header, thus fragmentation increases the total overhead.
TCP basically uses a sliding window protocol. When a segment is sent, a timer is started; when it
arrives, the receiver sends back a segment (with data, if any exist) bearing an ack equal to the next
sequence number it expects to receive. This sounds simple, but segments can be fragmented and the
parts can be lost or delayed so much that a retransmission occurs. If a retransmitted segment takes a
different route and is fragmented differently, bytes and pieces of both the original and the duplicate can
arrive sporadically, requiring careful administration to achieve a reliable byte stream.

The TCP header


URG is set to 1 if the urgent pointer is in use; the urgent pointer indicates a byte offset from the current
sequence number at which urgent data are to be found. An ACK of 1 indicates a valid ack number. The PSH bit
indicates pushed data, requesting the receiver to deliver the received data directly to its application
program and not buffer it. The RST bit is used to reset a connection that has become confused due to a
host crash or some other reason.

The SYN bit is used to establish a connection and the FIN bit to release one. The latter tells that the
sender has no more data to send, but it may continue to receive data. Both SYN and FIN segments have
sequence numbers and are thus guaranteed to be processed in the correct order.

The window field tells how many bytes may be sent starting at the byte acknowledged. A
window field of 0 is valid, it tells the sender to be quiet for a while. Permission to send can be granted
later by sending a segment with the same ack number and a nonzero window field.

Figure 16. The TCP header.

Figure 17. The pseudoheader included in the TCP checksum.

A checksum is also provided for extra reliability. It checksums the header, the data, and the
conceptual pseudoheader. The checksum is a simple one: just add up the 16-bit words (using padding if
needed) in 1's complement and then take the 1's complement of the sum. The receiving side should thus
find a checksum of 0. The pseudoheader contains IP addresses and thus violates the protocol hierarchy,
but it helps to detect misdelivered packets.
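The calculation is short enough to state in full; a sketch of the standard Internet checksum (odd-length data padded with a zero byte, carries folded back in end-around):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a whole number of words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# The receiver sums the entire segment including the transmitted checksum;
# a correct segment yields 0 (i.e., 0xFFFF before the final complement).
```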
The options field was designed to provide a way to add extra facilities. An important option is to
allow each host to specify the maximum TCP payload it is willing to accept (small host might not be
able to accept very large segments). During connection, each side can announce its maximum and the
smallest is taken. If a host does not use this option, it defaults to a 536 byte payload, which all Internet
hosts are required to accept.
For lines with high bandwidth, high delay, or both, the 64-KByte window is often a problem. On
a T3 line (44.736 Mbps), it takes only 12 msec to output a full 64-KByte window. If the round-trip
propagation delay is 50 msec (typical for a transcontinental fiber), the sender will be idle 75% of the
time waiting for acks. The window scale option allows both sides to negotiate a scale factor for the
window field, allowing windows of up to 4 GBytes. Most TCP implementations now support this option.
Another option (proposed in RFC 1106 and widely implemented) is the use of selective repeat
instead of go-back-n. It introduces NAKs, allowing the receiver to ask for specific bytes it has not (yet)
received after later bytes have already arrived. Once it gets these, it can ack all its buffered data, thus
reducing the amount of retransmitted data. This is important nowadays, as memory is cheap while
bandwidth is still limited or expensive.
TCP connection management
To establish a connection, one side, say the server, passively waits for an incoming connection by
executing the listen and accept primitives, either specifying a particular other side or nobody in
particular. The other side executes a connect primitive, specifying the IP and port to which it wants to
connect, the maximum TCP segment size, possible other options and optionally some user data (e.g. a
password). The connect primitive sends a TCP segment with the SYN bit on and the ACK bit off.

The receiving server checks to see if there is a process that has done a listen on the port given in the
destination field. If not, it sends a reply with the RST bit on to reject the connection. Otherwise it gives
the TCP segment to the listening process, which can accept or refuse (e.g. if it does not like the client)

the connection. If it accepts, a SYN is sent; otherwise, a RST. Note that a SYN segment occupies 1 byte
of sequence space, so it can be acked unambiguously.

Figure 18. (a) TCP connection establishment in the normal case. (b)
Call collision.

Releasing a TCP connection is symmetric. Either party can send a TCP segment with the FIN bit
set, meaning that it has no more data to send. When the FIN is acked, that direction is shut down, but
data may continue to flow indefinitely in the other direction. If a response to a FIN is not received
within two maximum packet lifetimes, the sender of the FIN releases the connection. The receiver will
eventually notice that it is receiving no more data and time out as well.

In the event that two hosts simultaneously attempt to establish a connection between the same two
sockets, still just one connection is established, because connections are identified by their end points.
For the initial sequence number, a clock-based scheme is used, with a clock tick every 4 μsec. For
additional safety, when a host crashes, it may not reboot for 120 sec (the maximum packet lifetime).

The sequence of TCP segments sent in the normal case is shown in Fig. 18(a). In the event that two hosts
simultaneously attempt to establish a connection between the same two sockets, the sequence of events is as
illustrated in Fig. 18(b).

Figure 19. The states used in the TCP connection management finite state machine.

There are 11 states (listed in Fig. 19) used in the TCP connection management finite state
machine. Data can be sent in the ESTABLISHED and CLOSE WAIT states and received in the
ESTABLISHED and FIN WAIT 1 states.


Each transition is labeled by an event/action pair. The event can either be a user-initiated system call
(CONNECT, LISTEN, SEND, or CLOSE), a segment arrival (SYN, FIN, ACK, or RST), or a timeout.
In the TIMED WAIT state, the only possible event is a timeout of twice the maximum packet lifetime.
The action is the sending of a control segment (SYN, FIN, or RST) or nothing. The timeouts that guard
against lost packets (e.g., in the SYN SENT state) are not shown here.

Figure 20. TCP connection management finite state machine. The heavy solid line is the normal path for a client. The heavy
dashed line is the normal path for a server. The light lines are unusual events. Each transition is labeled by the event causing
it and the action resulting from it, separated by a slash.

TCP transmission policy

Window management in TCP is decoupled from acks. When the window is 0, the sender may not
normally send segments, with two exceptions. First, urgent data may be sent, e.g., to allow the user to
kill the process running on the other machine. Second, the sender may send a 1-byte segment to make
the receiver reannounce the next byte expected and the window size. This is used to prevent deadlock if
a window announcement ever gets lost.

Senders are not required to transmit data as soon as they come in from the application. Usually
Nagle's algorithm is used. When data come into the sender one byte at a time (e.g., on a Telnet
connection), just the first byte is sent and the rest are buffered until the outstanding byte is acked.
Sometimes it is better to disable this, e.g., when mouse movements are sent in X-Windows applications.
Receivers are also not required to send acks and window updates immediately. Many implementations
delay them for 500 msec in the hope of acquiring some data on which to hitch a free ride. Another
problem is the silly window syndrome, which occurs when the sender transmits data in large blocks, but
an interactive application on the receiver side reads data 1 byte at a time.

Figure 21. Window management in TCP.

The receiver continuously gives 1-byte window updates, and the sender transmits 1-byte
segments. Clark's solution is to let the receiver send updates only when it can handle the maximum
segment size it advertised when the connection was established, or when its buffer is half empty. The
receiver usually uses selective repeat, but go-back-n can also be used.
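Nagle's rule itself fits in a few lines; a sketch (send immediately when nothing is outstanding, otherwise buffer until an ack arrives or a full segment accumulates):

```python
class NagleSender:
    def __init__(self, mss, transmit):
        self.mss = mss                  # maximum segment size
        self.transmit = transmit        # callback that actually sends a segment
        self.buffer = b""
        self.unacked = False            # is a segment currently outstanding?

    def write(self, data):
        self.buffer += data
        self.flush()

    def flush(self):
        # Send if nothing is outstanding, or if a full segment has accumulated.
        if self.buffer and (not self.unacked or len(self.buffer) >= self.mss):
            segment, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
            self.unacked = True
            self.transmit(segment)

    def on_ack(self):
        self.unacked = False
        self.flush()                    # all buffered bytes go out in one segment
```

Real stacks let applications disable this behavior (the TCP_NODELAY socket option) for cases such as the X-Windows mouse traffic mentioned above.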

TCP congestion control

All Internet TCP algorithms assume that timeouts are caused by congestion in the network or at
the receiver, since lost packets due to noise on the transmission lines are rare these days. Each sender
maintains two windows: the window the receiver has granted (indicating the receiver's capacity) and the
congestion window (indicating the network's capacity). The number of bytes that may be sent is the
minimum of the two windows. Initially, the congestion window is one maximum segment. It is doubled
on each burst successfully sent (an ack received before the timeout). This exponential increase (called
slow start) continues until the threshold (initially 64 KB) is reached, after which the increase is linear,
by one maximum segment per burst. When a timeout occurs, the threshold is set to half the current
congestion window and slow start is repeated. If an ICMP source quench packet comes in, it is treated
the same way as a timeout.

Figure 22. An example of the Internet congestion algorithm.

The first step in managing congestion is detecting it. In the old days, detecting congestion was
difficult. A timeout caused by a lost packet could have been caused by either (1) noise on a transmission
line or (2) packet discard at a congested router. Telling the difference was difficult. Nowadays, packet
loss due to transmission errors is relatively rare because most long-haul trunks are fiber (although
wireless networks are a different story). Consequently, most transmission timeouts on the Internet are
due to congestion. All the Internet TCP algorithms assume that timeouts are caused by congestion and
monitor timeouts for signs of trouble the way miners watch their canaries.

Before discussing how TCP reacts to congestion, let us first describe what it does to try to
prevent congestion from occurring in the first place. When a connection is established, a suitable
window size has to be chosen. The receiver can specify a window based on its buffer size. If the sender
sticks to this window size, problems will not occur due to buffer overflow at the receiving end, but they
may still occur due to internal congestion within the network.
The Internet solution is to realize that two potential problems exist (network capacity and
receiver capacity) and to deal with each of them separately. To do so, each sender maintains two
windows: the window the receiver has granted and a second window, the congestion window. Each
reflects the number of bytes the sender may transmit. The number of bytes that may be sent is the
minimum of the two windows. Thus, the effective window is the minimum of what the sender thinks is
all right and what the receiver thinks is all right.
The congestion window keeps growing exponentially until either a timeout occurs or the

receiver's window is reached. The idea is that if bursts of size, say, 1024, 2048, and 4096 bytes work fine
but a burst of 8192 bytes gives a timeout, the congestion window should be set to 4096 to avoid
congestion. As long as the congestion window remains at 4096, no bursts longer than that will be sent,
no matter how much window space the receiver grants. This algorithm is called slow start, but it is not
slow at all (Jacobson, 1988). It is exponential. All TCP implementations are required to support it.
Now let us look at the Internet congestion control algorithm. It uses a third parameter, the
threshold, initially 64 KB, in addition to the receiver and congestion windows. When a timeout occurs,
the threshold is set to half of the current congestion window, and the congestion window is reset to one
maximum segment. Slow start is then used to determine what the network can handle, except that
exponential growth stops when the threshold is hit. From that point on, successful transmissions grow
the congestion window linearly (by one maximum segment for each burst) instead of one per segment.
In effect, this algorithm is guessing that it is probably acceptable to cut the congestion window in half,
and then it gradually works its way up from there.
As an illustration of how the congestion algorithm works, see Fig. 22. The maximum segment
size here is 1024 bytes. Initially, the congestion window was 64 KB, but a timeout occurred, so the
threshold is set to 32 KB and the congestion window to 1 KB for transmission 0 here. The congestion
window then grows exponentially until it hits the threshold (32 KB). Starting then, it grows linearly.
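The initial trajectory in Fig. 22 can be reproduced in a few lines; a sketch using the figure's numbers (1-KB maximum segment, threshold halved from 64 KB to 32 KB after the timeout):

```python
MSS = 1           # maximum segment size, in KB
threshold = 32    # half of the 64-KB window in use when the timeout occurred
cwnd = MSS        # congestion window restarts at one segment

for transmission in range(12):
    print(f"transmission {transmission}: cwnd = {cwnd} KB")
    if cwnd < threshold:
        cwnd *= 2                  # slow start: exponential growth per burst
    else:
        cwnd += MSS                # congestion avoidance: linear growth
    # On a timeout we would set: threshold = cwnd // 2; cwnd = MSS
```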

TCP timer management

The retransmission timer has to handle the large variation in round-trip time seen on a TCP
connection; Fig. 23 contrasts the broad, shifting distribution seen by TCP with the tightly clustered
acknowledgement arrival times typical of the data link layer.

Figure 23. (a) Probability density of acknowledgement arrival times in the data link layer.
(b) Probability density of acknowledgement arrival times for TCP.

For each segment, the round-trip time M is measured, and the estimates of the mean and the mean
deviation are updated as:

RTT = αRTT + (1 − α)M
D = αD + (1 − α)|RTT − M|

with α a smoothing parameter, typically 7/8. The timeout is then set to: Timeout = RTT + 4D.
With Karn's algorithm, RTT and D are not updated for retransmitted segments. Instead, the timeout is
doubled on each failure until the segments get through the first time.
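In code, Jacobson's estimator together with Karn's rule is only a handful of lines; a sketch (α = 7/8 as in the text; the initial RTT of 3 seconds is an arbitrary assumption):

```python
ALPHA = 7 / 8                              # smoothing parameter from the text

class RttEstimator:
    def __init__(self, rtt=3.0, dev=0.0):  # 3.0 s initial RTT is an assumption
        self.rtt, self.dev = rtt, dev      # smoothed RTT and mean deviation

    def update(self, m, retransmitted=False):
        if retransmitted:
            return                          # Karn: ignore samples from retransmissions
        self.dev = ALPHA * self.dev + (1 - ALPHA) * abs(self.rtt - m)
        self.rtt = ALPHA * self.rtt + (1 - ALPHA) * m

    def timeout(self):
        # On each retransmission failure, this value would be doubled instead.
        return self.rtt + 4 * self.dev      # Timeout = RTT + 4D

est = RttEstimator()
est.update(m=2.0)                           # one measured round-trip sample
print(est.timeout())
```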

When a window size of 0 is received, the persistence timer is used to guard against the loss of the next
window update.
Some implementations also use a keepalive timer. When a connection has been idle for a long
time, the timer expires and a packet is sent to see if the other side is still alive. If it fails to respond, the
connection is terminated. This feature is controversial because it adds overhead and may terminate an
otherwise healthy connection due to a transient network problem.
The last timer is the one used in the TIMED WAIT state while closing, running for twice the
maximum packet lifetime to make sure that when a connection is closed, all packets created by it have
died off.

Wireless TCP and UDP

In theory, transport protocols should be independent of the technology of the underlying network
layer. In particular, TCP should not care whether IP is running over fiber or over radio. Of course, TCP
implementations work correctly over wireless networks, but the performance is poor.

Figure 24. Splitting a TCP connection into two connections.
In practice, it does matter because most TCP implementations have been carefully optimized
based on assumptions that are true for wired networks but that fail for wireless networks. The principal
problem is the congestion control algorithm. Nearly all TCP implementations nowadays assume that
timeouts are caused by congestion, not by lost packets. Consequently, when a timer goes off, TCP slows
down and sends less vigorously (e.g., Jacobson's slow start algorithm). The idea behind this approach is
to reduce the network load and thus alleviate the congestion. Unfortunately, wireless transmission links
are highly unreliable. They lose packets all the time. The proper approach to dealing with lost packets is
to send them again, and as quickly as possible.
On a wireless network if a packet is lost, it is usually not due to congestion, but due to "noise" on
the transmission. TCP should not slow down, but retransmit as soon as possible. That can be done on a
host which knows it sends over a wireless network, but what if the first part of a connection from a
sender to a receiver is over a wired network, and the last part over a wireless link?
Using an indirect TCP solution is a possibility. But the acknowledgment, returned from the base
station to the sender, does not indicate that the mobile host has received the data. Another possibility is
to add a snooping agent on the base station. It watches the interchange between base station and mobile
host and performs retransmissions (and interception of duplicated acknowledgments) on that part alone.
However, there is still the possibility that the sender times out and starts its congestion control.
