7. What are the two multiplexing strategies used in transport layer? (M/J-2012)
At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a many-to-
one relationship and requires multiplexing. The protocol accepts messages from
different processes, differentiated by their assigned port numbers. After adding the
header, the transport layer passes the packet to the network layer.
1. What are the network support layers and the user support layers?
Network support layers:
The network support layers are the Physical layer, Data link layer and
Network layer. These deal with the electrical specifications, physical connections,
transport timing and reliability.
User support layers: The user support layers are the Session layer, Presentation layer
and Application layer. These allow interoperability among unrelated software systems.
2. With a neat diagram explain the relationship of IEEE Project to the OSI
model?
[Diagram: the IEEE 802 model beside the OSI model. The upper layers, the network
layer and the physical layer correspond directly; the OSI data link layer is divided
into the Logical Link Control (LLC) and Media Access Control (MAC) sublayers.]
The IEEE has subdivided the data link layer into two sublayers: * Logical link
control (LLC) * Medium access control (MAC). LLC is non-architecture specific.
The MAC sublayer contains a number of distinct modules, each carrying proprietary
information specific to the LAN product being used.
6. What are headers and trailers and how do they get added and removed?
The control data added to the beginning of the data is called a header. The
control data added to the end of the data is called a trailer. At the sending machine,
when the message passes through the layers, each layer adds the headers or trailers. At
the receiving machine, each layer removes the data meant for it and passes the rest to
the next layer.
10. What are the two types of implementation formats in virtual circuits?
Virtual circuit transmission is implemented in two formats:
• Switched virtual circuit
• Permanent virtual circuit
18. Write the keys for understanding the distance vector routing.
The three keys for understanding the algorithm are
• Knowledge about the whole network
• Routing only to neighbours
• Information sharing at regular intervals
19. Write the keys for understanding the link state routing.
The three keys for understanding the algorithm are
• Knowledge about the neighbourhood
• Flooding of information to all routers
• Information sharing when there is a change
20. How is the packet cost referred to in distance vector and link state routing?
In distance vector routing, cost refers to the hop count, while in link state
routing, cost is a weighted value based on a variety of factors such as security levels,
traffic or the state of the link.
• Routers
• Gateway
• The bytes in the IP address that correspond to 0s in the mask become 0 in the
subnetwork address
• For the other bytes, use the bit-wise AND operator (see the sketch below)
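A small Python sketch of this masking rule; the address and mask are example values:

    # Illustrative sketch: applying a subnet mask with the bit-wise AND operator.
    # The address and mask below are example values, not from the text.
    ip   = [192, 168, 5, 200]   # IP address, one byte per element
    mask = [255, 255, 255, 0]   # subnet mask

    # A 0 byte in the mask forces the corresponding address byte to 0;
    # for the other bytes, AND keeps the network portion unchanged.
    subnet = [a & m for a, m in zip(ip, mask)]
    print(subnet)               # [192, 168, 5, 0] -> subnetwork address 192.168.5.0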
PART-B
1. Explain the User Datagram Protocol (UDP) in detail. (N/D-2012, 2011)
UDP Segment
The format of the UDP segment is shown in Figure 8.4. The segment starts
with the source port, followed by the destination port. These port numbers are used
to identify the ports of applications at the source or the destination, respectively. The
source port identifies the application that is sending the data. The destination port
helps UDP to demultiplex the packet and directs it to the right application. The UDP
length field indicates the length of the UDP segment, including both the header and
the data. UDP checksum specifies the computed checksum when transmitting the
packet from the host. If no checksum is computed, this field contains all zeroes.
When this segment is received at the destination, the checksum is recomputed; if there
is a mismatch, the segment is discarded.
UDP takes messages from the application process, attaches source and
destination port number fields and two other fields, and makes this segment
available to the network layer. The network layer encapsulates the segment into an
IP datagram (packet) and finds the best path to deliver the segment to the other end
host.
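The four header fields described above can be sketched with Python's struct module; the port numbers and payload are made-up example values:

    import struct

    # Illustrative sketch of the 8-byte UDP header: source port, destination
    # port, length (header + data), checksum. All values are placeholders.
    src_port, dst_port = 5000, 53
    data = b"example payload"
    length = 8 + len(data)          # UDP length covers header and data
    checksum = 0                    # all zeroes means "no checksum computed"

    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    segment = header + data
    print(len(segment), segment[:8])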
The stop-and-wait protocol is the simplest and the least expensive technique
for link-overflow control. The idea behind this protocol is that the transmitter waits
for an acknowledgement after transmitting one frame (see Figure 4.10). The essence
of this protocol is that if the acknowledgment is not received by the transmitter after
a certain agreed period of time, the transmitter retransmits the original frame.
Figure 4.10. A simple timing chart of a stop-and-wait flow control of data links
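A minimal sketch of this retransmission rule, assuming hypothetical link primitives send_frame and wait_for_ack (not defined in the text):

    TIMEOUT = 2.0   # the agreed waiting period, in seconds (example value)

    def stop_and_wait(frames, send_frame, wait_for_ack):
        """Transmit one frame at a time; retransmit when no ACK arrives.

        send_frame(frame) transmits a frame; wait_for_ack(timeout)
        returns True if an acknowledgment arrived within the timeout.
        Both are assumed link primitives, not real library calls.
        """
        for frame in frames:
            while True:
                send_frame(frame)            # transmit a single frame
                if wait_for_ack(TIMEOUT):    # ACK received in time?
                    break                    # move on to the next frame
                # timeout expired: retransmit the original frame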
The shortcoming of the stop-and-wait protocol is that only one frame at a time
is allowed for transmission. This flow control can be significantly improved by
letting multiple frames travel on a transmission link. However, allowing a sequence
of frames to be in transit at the same time requires a more sophisticated protocol for
the control of data overflow on links. The well-known sliding-window protocol is
one technique that efficiently controls the flow of frames.
Figure 4.11 shows an example of sliding-window flow control. With this
protocol, a transmitter (T) and a receiver (R) agree to form identical-size sequences
of frames. Let the size of a sequence be w. Thus, a transmitter allocates buffer space
for w frames, and the receiver can accept up to w frames. In this figure, w = 5. The
transmitter can then send up to w = 5 frames without waiting for any
acknowledgment frames. Each frame in a sequence is labeled by a unique sequence
number: the transmitter attaches a k-bit sequence-number field to each frame, so as
many as 2^k sequence numbers exist.
First, a transmitter opens a window of size w = 5, as the figure shows. The
receiver also opens a window of size w to indicate how many frames it can expect. In
the first attempt, let's say that frames 1, 2, and 3 as part of sequence i are transmitted.
The transmitter then shrinks its window to w = 2 but keeps the copy of these frames
in its buffer just in case any of the frames is not received by the receiver. At the
receipt of these three frames, the receiver shrinks its expected window to w = 2 and
acknowledges the receipt of frames by sending an acknowledgment ACK1,2,3 frame
to the transmitter.
At the release of ACK1,2,3, the receiver changes its expected window size
back to w = 5. This acknowledgment also carries information about the sequence
number of the next frame expected and informs that the receiver is prepared to
receive the next w frames. At the receipt of ACK1,2,3, the transmitter maximizes its
window back to w = 5, discards the copies of frames 1, 2, and 3, and continues the
transmission of the remaining frames.
Figure 4.11. Timing chart of a sliding-window flow control for two sequences of
frames
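A simplified sketch of the sender side of this scheme with w = 5; send_frame and recv_ack are assumed primitives, and recv_ack is taken to return a cumulative acknowledgment (the next expected sequence number):

    W = 5   # window size (w in the text)

    def sliding_window_sender(frames, send_frame, recv_ack):
        """Simplified sliding-window sender: up to W unacknowledged frames.

        recv_ack() is an assumed primitive returning the sequence number of
        the next frame the receiver expects (a cumulative acknowledgment).
        """
        base = 0            # oldest unacknowledged frame
        next_seq = 0        # next frame to transmit
        while base < len(frames):
            # Window open: keep transmitting until W frames are outstanding.
            while next_seq < len(frames) and next_seq < base + W:
                send_frame(next_seq, frames[next_seq])
                next_seq += 1
            # Window full (or all frames sent): wait for an acknowledgment.
            base = recv_ack()   # slide the window; buffered copies of frames
                                # before 'base' can now be discarded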
The expected rate is estimated as E[r] = w_g / r_m, where r_m is the minimum of
all the measured round-trip times, and w_g is the congestion window size. We define
Ar as the actual rate and (E[r] - Ar) as the rate difference. We also denote the
maximum and minimum thresholds to be max and min, respectively. When the rate
difference is very small (less than min), the method increases the congestion window
size to keep the amount of extra data at a nominal level. If the rate difference is
between min and max, the congestion window size is unaltered. When the rate
difference is greater than max, there is too much extra data, and the congestion
window size is reduced. The decrease in the congestion window size is linear. The
TCP normalized method attempts to maintain the traffic flow such that the difference
between the expected and actual rates lies in this range.
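A sketch of this adjustment rule; the expected rate E[r] = w_g / r_m follows the definition above, while the threshold and step values are illustrative assumptions:

    def adjust_window(w_g, r_m, actual_rate, min_th, max_th, step=1):
        """Sketch of the normalized-method window adjustment.

        w_g: congestion window size; r_m: minimum measured round-trip time;
        min_th/max_th: the min and max rate-difference thresholds.
        The step size and thresholds are illustrative assumptions.
        """
        expected_rate = w_g / r_m           # E[r] = w_g / r_m
        diff = expected_rate - actual_rate  # rate difference (E[r] - Ar)
        if diff < min_th:
            return w_g + step               # too little extra data: grow window
        elif diff > max_th:
            return w_g - step               # too much extra data: shrink (linear)
        return w_g                          # in range: leave window unaltered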
Sequence number is a 32-bit field that TCP assigns to the first data
byte in the segment. The sequence number restarts from 0 after the
number reaches 2^32 - 1.
Header length (HL) is a 4-bit field indicating the length of the header in
32-bit words.
Urgent (URG) is a 1-bit field implying that the urgent-pointer field is
applicable.
Acknowledgment (ACK) shows the validity of an acknowledgment.
Push (PSH), if set, directs the receiver to immediately forward the data
to the destination application.
Reset (RST), if set, directs the receiver to abort the connection.
Synchronize (SYN) is a 1-bit field used as a connection request to
synchronize the sequence numbers.
Finished (FIN) is a 1-bit field indicating that the sender has finished
sending the data.
Window size specifies the advertised window size.
Checksum is used to check the validity of the received packet.
Urgent pointer is used when URG is set; the receiver adds the value of
the urgent-pointer field to the value of the sequence-number field to
obtain the number of the last byte of data to be delivered urgently to
the destination application.
Figure 8.2. TCP segment format
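A sketch of how the fixed 20-byte header described above is laid out, using Python's struct module; every field value here is an illustrative placeholder:

    import struct

    # Sketch of the 20-byte fixed TCP header; all values are placeholders.
    src_port, dst_port = 5000, 80
    seq, ack = 100, 0
    hl = 5                          # header length: five 32-bit words
    flags = 0x02                    # SYN bit set (bits: URG ACK PSH RST SYN FIN)
    offset_flags = (hl << 12) | flags
    window, checksum, urgent = 65535, 0, 0

    header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                         offset_flags, window, checksum, urgent)
    print(len(header))              # 20 bytes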
One of the possible options is maximum segment size (MSS). A receiver uses
this option to specify the maximum segment size it can receive. A total of 16 bits are
provided to specify this option. Thus, the maximum segment size is limited to
65,535 bytes minus 20 bytes of TCP header and minus 20 bytes of IP header,
resulting in 65,495 bytes. The default MSS for TCP is 536 bytes. Another possible
option is the window scale. This option is used to increase the size of the advertised
window beyond the specified 2^16 - 1 in the header. The advertised window can be
scaled to a maximum of 2^14.
Connection Setup
As a connection-oriented protocol, TCP requires an explicit connection set-up
phase. Connection is set up using a three-step mechanism, as shown in Figure 8.3
(a). Assume that host A is a sender and host B a destination. First, the sender sends a
connection request to the destination. The connection request comprises the initial
sequence number indicated by seq(i), with the SYN bit set. Next, on receipt of the
connection request, the destination sends an acknowledgment, ack(i + 1), back to the
source, indicating that the destination is waiting for the next byte. The destination
also sends a request packet comprising the sequence number, seq(j), and the SYN bit
set. Finally, at the third step, the sender returns an acknowledgment segment, ack(j +
1), specifying that it is waiting for the next byte. The sequence number of this next
segment is seq(i + 1). This process establishes a connection between the sender and
the receiver.
Figure 8.3. TCP signaling: (a) establishment of a connection; (b) collision of two
connection requests
The connection process uses different initial sequence numbers between the
sender and the receiver to distinguish between the old and new segments and to
avoid the duplication and subsequent deletion of one of the packets. In Figure 8.3 (b),
both end-point hosts simultaneously try to establish a connection. In this case, since
they recognize the connection requests coming from each other, only one connection
is established. If one of the segments from a previous connection arrives late, the
receiver accepts the packet, presuming that it belongs to the new connection. If the
packet from the current connection with the same sequence number arrives, it is
considered a duplicate and is dropped. Hence, it is important to make sure that the
initial sequence numbers are different.
When one of the end points decides to abort the connection, a segment with
the RST bit set is sent. Once an application has no data to transmit, the sender sends
a segment with the FIN bit set. The receiver acknowledges receipt of this segment by
responding with an ACK and notifies the application that the connection is
terminated. Now, the flow from the sender to the receiver is terminated. However, in
such cases, the flow from the receiver to the sender is still open. The receiver then
sends a segment with the FIN bit set. Once the sender acknowledges this by
responding with an ACK, the connection is terminated at both ends.
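In practice the operating system performs these exchanges; with Python's socket API, connect() triggers the three-step setup and close() initiates the FIN sequence. The host and port below are example values:

    import socket

    # The kernel performs the three-step handshake (SYN, SYN+ACK, ACK)
    # when connect() is called, and the FIN/ACK exchange on close().
    # The address, port, and request are illustrative placeholders.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))      # three-way handshake happens here
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(200))
    s.close()                           # sends FIN; the peer ACKs and later FINs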
Scheduling
Packets from different flows arrive at a switch or router for processing. A
good scheduling technique treats the different flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the quality of service.
We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than
the average processing rate, the queue will fill up and new packets will be discarded.
A FIFO queue is familiar to those who have had to wait for a bus at a bus stop.
Figure 24.16 shows a conceptual view of a FIFO queue.
Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority
class has its own queue. The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last. Note that the system does not
stop serving a queue until it is empty. Figure 24.17 shows priority queuing with two
priority levels (for simplicity).
A priority queue can provide better QoS than the FIFO queue because
higher-priority traffic, such as multimedia, can reach the destination with less delay.
However, there is a potential drawback. If there is a continuous flow in a high-
priority queue, the packets in the lower-priority queues will never have a chance to
be processed. This is a condition called starvation.
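A small sketch of strict priority queuing using Python's heapq; the class numbers and packet names are illustrative. The starvation risk is visible: class-0 packets are always served before class-1 packets.

    import heapq
    from itertools import count

    # Strict priority queuing: lower class number = higher priority.
    # The tie-breaking counter preserves FIFO order within a class.
    pq, order = [], count()
    for prio, pkt in [(1, "data1"), (0, "voice1"), (1, "data2"), (0, "voice2")]:
        heapq.heappush(pq, (prio, next(order), pkt))

    while pq:
        prio, _, pkt = heapq.heappop(pq)
        print(prio, pkt)    # serves voice1, voice2, then data1, data2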
Weighted Fair Queuing
In weighted fair queuing, packets are still assigned to different classes and admitted
to different queues, but the queues are weighted based on their priority; the system
processes packets from each queue in round-robin fashion, taking from each queue a
number of packets proportional to the weight of that queue. If the system does not
impose priority on the classes, all weights can be equal. In this way, we have fair
queuing with priority. Figure 24.18 shows the technique with three classes.
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent
to the network. Two techniques can shape traffic: leaky bucket and token bucket.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water leaks
does not depend on the rate at which the water is input to the bucket unless the
bucket is empty. The input rate can vary, but the output rate remains constant.
Similarly, in networking, a technique called leaky bucket can smooth out bursty
traffic. Bursty chunks are stored in the bucket and sent out at an average rate. Figure
24.19 shows a leaky bucket and its effects.
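A minimal sketch of the leaky-bucket behaviour: bursty arrivals are queued in the bucket and leave at a constant rate. The arrival pattern and rate are example values:

    from collections import deque

    def leaky_bucket(arrivals, rate):
        """arrivals[i] = packets arriving at tick i; at most `rate` leave per tick."""
        bucket = deque()
        for tick, n in enumerate(arrivals):
            bucket.extend([tick] * n)           # bursty input is stored
            sent = [bucket.popleft() for _ in range(min(rate, len(bucket)))]
            print(f"tick {tick}: sent {len(sent)} packet(s), {len(bucket)} queued")

    leaky_bucket([5, 0, 0, 3, 0], rate=2)       # bursts leave at a constant rate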
Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For
example, if a host is not sending for a while, its bucket becomes empty. Now if the
host has bursty data, the leaky bucket allows only an average rate. The time when
the host was idle is not taken into account. On the other hand, the token bucket
algorithm allows idle hosts to accumulate credit for the future in the form of tokens.
For each tick of the clock, the system sends n tokens to the bucket. The system
removes one token for every cell (or byte) of data sent. For example, if n is 100 and
the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can
consume all these tokens in one tick with 10,000 cells, or the host takes 1000 ticks
with 10 cells per tick. In other words, the host can send bursty data as long as the
bucket is not empty. Figure 24.21 shows the idea.
The token bucket can easily be implemented with a counter. The counter is
initialized to zero. Each time a token is added, the counter is incremented by 1. Each
time a unit of data is sent, the counter is decremented by 1. When the counter is zero,
the host cannot send data.
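The counter implementation just described, sketched with illustrative values of n and of the burst pattern:

    def token_bucket(n, arrivals):
        """Counter sketch of the token bucket: n tokens are added per tick,
        and one token is removed per unit of data sent; data waits while
        the counter is zero. The arrival pattern is an example value."""
        counter, backlog = 0, 0
        for tick, data in enumerate(arrivals):
            counter += n                    # each tick adds n tokens
            backlog += data                 # data waiting to be sent
            sent = min(backlog, counter)    # one token per unit of data
            counter -= sent
            backlog -= sent
            print(f"tick {tick}: sent {sent}, tokens left {counter}")

    token_bucket(n=100, arrivals=[0, 0, 0, 500])  # idle ticks accumulate credit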
Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so
on. The quality of service is improved if these resources are reserved beforehand.
We discuss in this section one QoS model called Integrated Services, which depends
heavily on resource reservation to improve the quality of service.
Admission Control
Admission control refers to the mechanism used by a router, or a switch, to
accept or reject a flow based on predefined parameters called flow specifications.
Before a router accepts a flow for processing, it checks the flow specifications to see
if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.
In the RED scheme, a temporary drop probability is first computed from the
average queue length,
p = P_max (N_avg - N_min) / (N_max - N_min),
and the final drop probability P is then obtained as
P = p / (1 - c p)    (Equations 7.3 through 7.5),
where coefficient c is set by the router to determine how quickly it wants to reach a
desired P. In fact, c can be thought of as the number of arriving packets that have
been queued.
In essence, when the queue length is below the minimum threshold, the packet
is admitted into the queue. Figure 7.15 shows the variable setup in RED congestion
avoidance. When the queue length is between the two thresholds, the packet-drop
probability increases as the queue length increases. When the queue length is above
the maximum threshold, the packet is always dropped. Also, as shown in Equation
(7.4), the packet-drop probability depends on a variable that represents the number
of arriving packets from a flow that has been queued. When the queue length
increases, all that is needed is to drop one packet from the source. The source then
halves its congestion window size.
Once a small number of packets are dropped, the associated sources reduce
their congestion windows if the average queue length exceeds the minimum
threshold, and therefore the traffic to the router drops. With this method, any
congestion is avoided by an early dropping of packets.
RED also has a certain amount of fairness associated with it; for larger flows,
more packets are dropped, since P for these flows could become large. One of the
challenges in RED is to set the optimum values for Nmin, Nmax, and c. Typically,
Nmin has to be set large enough to keep the throughput at a reasonably high level
but low enough to avoid congestion. In practice, for most networks on the Internet,
the Nmax is set to twice the minimum threshold value. Also, as shown by guard
space in Figure 7.15, there has to be enough buffer space beyond Nmax, as Internet
traffic is bursty.
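A sketch of the RED drop decision for one arriving packet, following the thresholds and probabilities described above; all parameter values are illustrative:

    import random

    def red_drop(avg_qlen, n_min, n_max, p_max=0.1, c=0):
        """RED sketch: returns True if the arriving packet should be dropped.

        avg_qlen: average queue length; n_min/n_max: the two thresholds;
        c: count of arriving packets queued since the last drop (see text).
        p_max and all example values are illustrative assumptions.
        """
        if avg_qlen < n_min:
            return False                      # below Nmin: always admit
        if avg_qlen >= n_max:
            return True                       # above Nmax: always drop
        # Between the thresholds: drop probability grows with queue length.
        p = p_max * (avg_qlen - n_min) / (n_max - n_min)
        p = p / (1 - c * p) if c * p < 1 else 1.0
        return random.random() < p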
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion.
The warning goes from the congested router directly to the source station; the
intermediate nodes through which the packet has traveled are not warned. We have
seen an example of this type of control in ICMP. When a router in the Internet is
overwhelmed with IP datagrams, it may discard some of them; but it informs the
source host, using a source quench ICMP message. The warning message goes
directly to the source station; the intermediate routers do not take any action.
Figure 24.7 shows the idea of a choke packet.
Multiplexing
At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a many-to-
one relationship and requires multiplexing. The protocol accepts messages from
different processes, differentiated by their assigned port numbers. After adding the
header, the transport layer passes the packet to the network layer.
Demultiplexing
At the receiver site, the relationship is one-to-many and requires
demultiplexing. The transport layer receives datagrams from the network layer. After
error checking and dropping of the header, the transport layer delivers each message
to the appropriate process based on the port number.
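Port-based demultiplexing can be observed with two UDP sockets bound to different ports; the addresses and port numbers are example values:

    import socket

    # Two processes (here, two sockets) bound to different ports: the
    # transport layer delivers each datagram by destination port number.
    # Ports and addresses are illustrative placeholders.
    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    a.bind(("127.0.0.1", 9001))     # process A listens on port 9001
    b.bind(("127.0.0.1", 9002))     # process B listens on port 9002

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"to A", ("127.0.0.1", 9001))
    tx.sendto(b"to B", ("127.0.0.1", 9002))
    print(a.recvfrom(100)[0])       # b"to A" -- demultiplexed by port
    print(b.recvfrom(100)[0])       # b"to B"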