
SENGUNTHAR COLLEGE OF ENGINEERING, TIRUCHENGODE

PART-A (UNIVERSITY QUESTIONS)


1. Define congestion. (N/D-2012, 2011)
Congestion in a network occurs if users send data into the network at a rate
greater than that allowed by the network resources.

2. List any four QOS parameters. (M/J-2013)


• Bandwidth
• Reliability
• Delay
• Jitter

3. Draw the datagram format of UDP. (A/M-2011)

The UDP datagram consists of a header of four 16-bit fields (source port, destination port, total length, and checksum) followed by the data.
4. Define flow control. (A/M-2011)


Flow control refers to a set of procedures used to restrict the amount of data
the sender can send before waiting for acknowledgment.

5. What is queuing? (N/D-2011)


Packets wait in a buffer (queue) until the node (router or switch) is ready to process
them.

6. What is client process? (M/J-2012)


When the user client process sends a message, the proxy firewall runs a server
process to receive the request. The server opens the packet at the application level
and finds out if the request is legitimate. If it is, the server acts as a client process
and sends the message to the real server in the corporation. If it is not, the message is
dropped and an error message is sent to the external user.

7. What are the two multiplexing strategies used in transport layer? (M/J-2012)

At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a many-to-
one relationship and requires multiplexing. The protocol accepts messages from
different processes, differentiated by their assigned port numbers. After adding the
header, the transport layer passes the packet to the network layer. At the receiver
site, the relationship is one-to-many and requires demultiplexing.

PART-A (IMPORTANT QUESTIONS)

1. What are the network support layers and the user support layers?
Network support layers:
The network support layers are Physical layer, Data link layer and
Network layer. These deal with the electrical specifications, physical connections,
transport timing, and reliability.
User support layers: The user support layers are the Session layer, Presentation layer,
and Application layer. These allow interoperability among unrelated software systems.

2. With a neat diagram explain the relationship of IEEE Project 802 to the OSI
model?

(Diagram: the OSI stack of other layers, network, data link, and physical, shown
beside the IEEE 802 stack, in which the data link layer is split into Logical Link
Control and Media Access Control above the physical layer.)

The IEEE has subdivided the data link layer into two sublayers:
• Logical link control (LLC)
• Medium access control (MAC)
LLC is non-architecture-specific. The MAC sublayer contains a number of distinct
modules, each carrying proprietary information specific to the LAN product being used.

3. What are the functions of LLC?


The IEEE project 802 model takes the structure of an HDLC frame and
divides it into 2 sets of functions. One set contains the end user portion of the HDLC
frame - the logical address, control information, and data. These functions are
handled by the IEEE 802.2 logical link control (LLC) protocol.


4. What are the functions of MAC?


The MAC sublayer resolves the contention for the shared media. It contains
synchronization, flag, flow and error control specifications necessary to move
information from one place to another, as well as the physical address of the next
station to receive and route a packet.

5. What is protocol data unit?


The data unit at the LLC level is called the Protocol Data Unit (PDU). It contains four
fields:
DSAP | SSAP | Control | Information
• Destination Service Access Point (DSAP)
• Source Service Access Point (SSAP)
• Control field
• Information field

6. What are headers and trailers and how do they get added and removed?
The control data added to the beginning of a data unit is called a header. The
control data added to the end of a data unit is called a trailer. At the sending machine,
when the message passes through the layers, each layer adds the headers or trailers. At
the receiving machine, each layer removes the data meant for it and passes the rest to
the next layer.

7. What are the responsibilities of network layer?


The network layer is responsible for the source-to-destination delivery of a packet
across multiple network links. The specific responsibilities of network layer include
the following:
• Logical addressing.
• Routing.

8. What is a virtual circuit?


A logical circuit made between the sending and receiving computers. The
connection is made after both computers do handshaking. After the connection, all
packets follow the same route and arrive in sequence.


9. What are datagrams?

In the datagram approach, each packet is treated independently from all others.
Even when one packet represents just a piece of a multipacket transmission, the
network treats it as though it existed alone. Packets in this technology are referred to
as datagrams.

10. What are the two types of implementation formats in virtual circuits?
Virtual circuit transmission is implemented in 2 formats:
• Switched virtual circuit
• Permanent virtual circuit

11. What is meant by switched virtual circuit?


Switched virtual circuit format is comparable conceptually to a dial-up line in
circuit switching. In this method, a virtual circuit is created whenever it is needed
and exists only for the duration of the specific exchange.

12. What is meant by Permanent virtual circuit?


Permanent virtual circuits are comparable to leased lines in circuit switching.
In this method, the same virtual circuit is provided between two users on a
continuous basis. The circuit is dedicated to the specific users.

13. Define Routers.


Routers relay packets among multiple interconnected networks. They route
packets from one network to any of a number of potential destination networks on an
internet. Routers operate in the physical, data link, and network layers of the OSI model.

14. What is meant by hop count?


Routing along the pathway that requires the smallest number of relays is called
hop-count routing; every link is considered to be of equal length and given the value
one.

15. How can the routing be classified?


The routing can be classified as,
• Adaptive routing
• Non-adaptive routing.


16. What is time-to-live or packet lifetime?


As the time-to-live field is generated, each packet is marked with a lifetime,
usually the number of hops that are allowed before a packet is considered lost and,
accordingly, destroyed. The time-to-live determines the lifetime of a packet.
17. What is meant by brouter?
A brouter is a single-protocol or multiprotocol router that sometimes acts as a
router and sometimes acts as a bridge.

18. Write the keys for understanding the distance vector routing.
The three keys for understanding the algorithm are
• Knowledge about the whole network
• Routing only to neighbours
• Information sharing at regular intervals

19. Write the keys for understanding the link state routing.
The three keys for understanding the algorithm are
• Knowledge about the neighbourhood.
• Routing to all neighbours.
• Information sharing when there is a change.

20. How the packet cost referred in distance vector and link state routing?
In distance vector routing, cost refers to hop count, while in link state
routing, cost is a weighted value based on a variety of factors such as security levels,
traffic, or the state of the link.

21. How the routers get the information about neighbour?


A router gets its information about its neighbours by periodically sending
them a short greeting packet. If the neighbour responds to the greeting as
expected, it is assumed to be alive and functioning. If it does not, a change is
assumed to have occurred, and the sending router then alerts the rest of the network
in its next LSP.

22. What are the four internetworking devices?


The four internetworking devices are,
• Repeaters
• Bridges
• Routers
• Gateway

23. Define IP address.


An IP address is a 32-bit number for representing a host or system in the network.
One portion of the IP address indicates the network and the other represents the
host in the network.

24. What is Token Bus?


Token Bus is a physical bus that operates as a logical ring using tokens. Here
stations are logically organized into a ring. A token is passed among stations. If a
station wants to send data, it must wait and capture the token. Like Ethernet, stations
communicate via a common bus.

25. What is token passing?


Stations may attempt to send data multiple times before a transmission
makes it onto a link. This redundancy may create delays of indeterminable length if
the traffic is heavy. Token ring resolves this uncertainty by requiring that stations
take turns sending data. Each station may transmit only during its turn and may send
only one frame during each turn. The mechanism that coordinates this rotation is
called token passing.

26. Define Masking?


Masking is the process that extracts the address of the physical network
from an IP address.

27. What are the rules of boundary-level masking?


The rules of boundary-level masking are:
• The bytes in the IP address that correspond to 255 in the mask will be repeated in
the subnetwork address
• The bytes in the IP address that correspond to 0 in the mask will change to 0 in the
subnetwork address

28. What are the rules of nonboundary-level masking?


• The bytes in the IP address that correspond to 255 in the mask will be repeated in
the subnetwork address
• The bytes in the IP address that correspond to 0 in the mask will change to 0 in the
subnetwork address
• For other bytes, use the bit-wise AND operator
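
A minimal Python sketch of these rules, using hypothetical addresses; a single bit-wise AND per byte covers both the boundary-level and nonboundary-level cases:

def apply_mask(ip: str, mask: str) -> str:
    """Extract the subnetwork address by ANDing the IP address
    with the mask, byte by byte."""
    ip_bytes = [int(b) for b in ip.split(".")]
    mask_bytes = [int(b) for b in mask.split(".")]
    # AND with 255 repeats the byte; AND with 0 zeroes it;
    # any other mask byte keeps only the masked bits.
    return ".".join(str(i & m) for i, m in zip(ip_bytes, mask_bytes))

print(apply_mask("192.168.45.78", "255.255.240.0"))  # -> 192.168.32.0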

29. Define Gateway.


A device used to connect two separate networks that use different communication
protocols.

30. What is LSP?


In link state routing, a small packet containing routing information is sent by
a router to all other routers; this packet is called a link state packet (LSP).


PART-B
1. Explain the User Datagram Protocol (UDP) in detail. (N/D-2012, 2011)

The user datagram protocol (UDP) is another transport-layer protocol that is
placed on top of the network layer. UDP is a connectionless protocol, as no
handshaking between sending and receiving points occurs before sending a segment.
UDP does not provide a reliable service. The enhancement provided by UDP over
IP is its ability to check the integrity of flowing packets. IP is capable of delivering a
packet to its destination but stops short of delivering it to an application. UDP fills this
gap by providing a mechanism to differentiate among multiple applications and
deliver a packet to the desired application. UDP can perform error detection
to a certain extent, but not to the level that TCP can.

UDP Segment

The format of the UDP segment is shown in Figure 8.4. The segment starts
with the source port, followed by the destination port. These port numbers are used
to identify the ports of applications at the source or the destination, respectively. The
source port identifies the application that is sending the data. The destination port
helps UDP to demultiplex the packet and directs it to the right application. The UDP
length field indicates the length of the UDP segment, including both the header and
the data. UDP checksum specifies the computed checksum when transmitting the
packet from the host. If no checksum is computed, this field contains all zeroes.
When this segment is received at the destination, the checksum is recomputed; if
there is a mismatch, the segment is discarded.
UDP takes messages from the application process, attaches source and
destination port number fields and two other fields, and makes this segment
available to the network layer. The network layer encapsulates the segment into an
IP datagram (packet) and finds the best path to deliver the segment to the other end
host.
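
The four 16-bit header fields can be sketched with Python's struct module; the port numbers and payload below are hypothetical, and the real checksum computation over the pseudo-header is omitted:

import struct

def build_udp_segment(src_port: int, dst_port: int, data: bytes) -> bytes:
    length = 8 + len(data)     # UDP length covers header (8 bytes) plus data
    checksum = 0               # all zeroes means no checksum was computed
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + data

segment = build_udp_segment(5000, 53, b"query")
print(struct.unpack("!HHHH", segment[:8]))   # (5000, 53, 13, 0)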


2. What is flow control? Explain its methodology and techniques. (N/D-2012, 2011)

Communication systems must use flow-control techniques on their transmission
links to guarantee that a transmitter does not overwhelm a receiver with data.
Several protocols guarantee the control of link flows.

Two widely used flow-control protocols are

• Stop and wait flow-control
• Sliding window flow-control

Stop and wait flow-control

The stop-and-wait protocol is the simplest and the least expensive technique
for link-overflow control. The idea behind this protocol is that the transmitter waits
for an acknowledgement after transmitting one frame (see Figure 4.10). The essence
of this protocol is that if the acknowledgment is not received by the transmitter after
a certain agreed period of time, the transmitter retransmits the original frame.

Figure 4.10. A simple timing chart of a stop-and-wait flow control of data links


In this figure, we assume two consecutive frames i and i + 1. Frame i is ready
to enter the link and is transmitted at t = 0. Let tf be the time required to enter all the
bits of a frame and tp the propagation time of the frame between the transmitter (T)
and the receiver (R). It takes as long as t = tf + tp to transmit a frame. At the arrival
of frame i, the receiver processes the frame for as long as tr and generates an
acknowledgment. For the same reason, the acknowledgment packet takes t = ta + tp
to be received by the transmitter, where ta is the time required to enter all the bits of
an acknowledgment frame and tr is the time for processing it at the receiver. The
total time to transmit a frame and receive its acknowledgment is therefore
tf + tr + ta + 2tp.
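
The retransmit-on-timeout rule can be sketched as follows in Python; the lossy link is simulated, and the 30% loss rate is an arbitrary assumption:

import random

def ack_arrives_in_time() -> bool:
    # Stand-in for a real link: the ACK is lost 30% of the time.
    return random.random() > 0.3

def stop_and_wait(num_frames: int) -> None:
    for i in range(num_frames):
        while True:
            print(f"T: transmitting frame {i}")    # frame enters the link
            if ack_arrives_in_time():              # ACK within agreed period
                break                              # proceed to the next frame
            print(f"T: timeout, retransmitting frame {i}")

stop_and_wait(3)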

Sliding window flow-control

The shortcoming of the stop-and-wait protocol is that only one frame at a time
is allowed for transmission. This flow control can be significantly improved by
letting multiple frames travel on a transmission link. However, allowing a sequence
of frames to be in transit at the same time requires a more sophisticated protocol for
the control of data overflow on links. The well-known sliding-window protocol is
one technique that efficiently controls the flow of frames.
Figure 4.11 shows an example of sliding-window flow control. With this
protocol, a transmitter (T) and a receiver (R) agree to form identical-size sequences
of frames. Let the size of a sequence be w. Thus, a transmitter allocates buffer space
for w frames, and the receiver can accept up to w frames. In this figure, w = 5. The
transmitter can then send up to w = 5 frames without waiting for any
acknowledgment frames. Each frame in a sequence is labeled by a unique sequence
number. The transmitter attaches a k-bit sequence-number field to each frame;
therefore, as many as 2^k sequence numbers exist.
First, a transmitter opens a window of size w = 5, as the figure shows. The
receiver also opens a window of size w to indicate how many frames it can expect. In
the first attempt, let's say that frames 1, 2, and 3 as part of sequence i are transmitted.
The transmitter then shrinks its window to w = 2 but keeps the copies of these frames
in its buffer just in case any of the frames is not received by the receiver. At the
receipt of these three frames, the receiver shrinks its expected window to w = 2 and
acknowledges the receipt of frames by sending an acknowledgment ACK1,2,3 frame
to the transmitter.
At the release of ACK1,2,3, the receiver changes its expected window size
back to w = 5. This acknowledgment also carries information about the sequence
number of the next frame expected and informs that the receiver is prepared to
receive the next w frames. At the receipt of ACK1,2,3, the transmitter maximizes its
window back to w = 5, discards the copies of frames 1, 2, and 3, and continues the


procedure. In this protocol, the window is imagined to be sliding on coming frames.


Similarly, we can derive an expression for this protocol as we did for the stop-and-
wait protocol: for the transmission of a frame as an integral portion of a w-frame
sequence, the total time includes all the acknowledgment processes.

Figure 4.11. Timing chart of a sliding-window flow control for two sequences of
frames
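
A sketch of the transmitter-side bookkeeping, with w = 5 as in Figure 4.11 and an assumed k = 3-bit sequence-number field:

from collections import deque

W, K = 5, 3                       # window size and sequence-number bits

buffer = deque()                  # copies of frames awaiting acknowledgment
next_seq = 0

def send(frame) -> bool:
    global next_seq
    if len(buffer) >= W:          # window closed; transmitter must wait
        return False
    buffer.append((next_seq, frame))
    next_seq = (next_seq + 1) % (2 ** K)
    return True

def ack_received(up_to_seq: int) -> None:
    # Cumulative ACK: discard stored copies up to up_to_seq and
    # slide the window forward.
    while buffer and buffer[0][0] != (up_to_seq + 1) % (2 ** K):
        buffer.popleft()

for f in ["frame1", "frame2", "frame3"]:
    send(f)
print(len(buffer))    # 3 unacknowledged frames; window shrunk to 2
ack_received(2)       # ACK1,2,3 acknowledges sequence numbers 0, 1, 2
print(len(buffer))    # 0; window back to w = 5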


3. Explain TCP congestion control technique. (N/D-2012, M/J-2013, 2012, A/M-2011)

Network congestion is a traffic bottleneck between a source and a destination.


Congestion avoidance uses precautionary algorithms to avoid possible congestion in
a network. Otherwise, TCP congestion control is applied once congestion occurs in a
network. TCP increases the traffic rate to a point where congestion occurs and then
gradually reduces the rate. It would be better if congestion could be avoided. This
would involve sending some precautionary information to the source just before
packets are discarded. The source would then reduce its sending rate, and congestion
could be avoided to some extent.

Source-Based Congestion Avoidance


Source-based congestion avoidance detects congestion early from end-hosts.
An end host estimates congestion in the network by using the round-trip time and
throughput as its measures. An increase in round-trip time can indicate that routers'
queues on the selected routing path are increasing and that congestion may happen.
The source-based schemes can be classified into four basic algorithms:

1. Use of round trip time (RTT) as a measure of congestion in the network. As


queues in the routers build up, the RTT for each new packet sent out on the network
increases. If the current RTT is greater than the average of the minimum and
maximum RTTs measured so far, the congestion window size is reduced.
2. Use of RTT and window size to set the current window size. Let w be the current
window size, wo be the old window size, r be the current RTT, and ro be the old RTT.
A window-RTT product is computed based on (w - wo)(r - ro). If the product is
positive, the source decreases the window size by a fraction of its old value. If the
product is negative or 0, the source increases the window size by one packet.
3. Use of throughput as a measure to avoid congestion. During every RTT, a source
increases the window size by one packet. The achieved throughput is then compared
with the throughput when the window size was one packet smaller. If the difference
is less than half the throughput at the beginning of the connection when the window
size was one packet, the window size is reduced by one packet.
4. Use of throughput as a measure to avoid congestion. But this time, the algorithm
uses two parameters: the current throughput and the expected throughput to avoid
congestion.
TCP normalized is presented next, as an example for the fourth algorithm.
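
A sketch of the second algorithm (the window-RTT product); the decrease fraction of 1/8 is an assumed value:

def adjust_window(w: float, w_old: float, r: float, r_old: float) -> float:
    """Adjust the congestion window from the sign of (w - wo)(r - ro)."""
    product = (w - w_old) * (r - r_old)
    if product > 0:
        return w * (1 - 1 / 8)   # positive: shrink by a fraction (assumed 1/8)
    return w + 1                 # negative or zero: grow by one packet

print(adjust_window(12, 10, 0.30, 0.25))   # RTT rising: 10.5
print(adjust_window(12, 10, 0.25, 0.25))   # RTT flat:   13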


TCP Normalized Method


In the TCP normalized method, the congestion window size is increased in the
first few seconds, but the throughput remains constant, because the capacity of the
network has been reached, resulting in an increase in the queue length at the router.
Thus, an increase in the window size does not result in any increase in the throughput.
This traffic over and above the available bandwidth of the network is called extra data. The
idea behind TCP normalized is to maintain this extra data at a nominal level. Too
much of extra data may lead to longer delays and congestion. Too little extra data
may lead to an underutilization of resources, because the available bandwidth
changes owing to the bursty nature of Internet traffic. The algorithm defines the
expected value of the rate E[r] as

E[r] = wg / rm (Equation 8.4)

where rm is the minimum of all the measured round-trip times, and wg is the
congestion window size. We define Ar as the actual rate and (E[r] - Ar) as the rate
difference. We also denote the maximum and minimum threshold to be max and
min, respectively. When the rate difference is very small (less than min), the method
increases the congestion window size to keep the amount of extra data at a nominal
level. If the rate difference is between min and max, the congestion window size is
unaltered. When the rate difference is greater than max, there is too much extra data,
and the congestion window size is reduced. The decrease in the congestion window
size is linear. The TCP normalized method attempts to maintain the traffic flow such
that the difference between expected and actual rates lies in this range.
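
One step of this adjustment can be sketched as follows, with hypothetical threshold values; E[r] follows Equation 8.4:

def normalized_step(w_g: int, r_m: float, actual_rate: float,
                    d_min: float, d_max: float) -> int:
    expected = w_g / r_m            # E[r] = wg / rm (Equation 8.4)
    diff = expected - actual_rate   # rate difference: the "extra data"
    if diff < d_min:
        return w_g + 1              # too little extra data: grow
    if diff > d_max:
        return w_g - 1              # too much extra data: linear decrease
    return w_g                      # between thresholds: unchanged

print(normalized_step(w_g=20, r_m=0.1, actual_rate=180, d_min=5, d_max=15))  # 19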

4. Discuss in detail the TCP segment header. Discuss about connection
management in TCP. (M/J-2013, A/M-2011, N/D-2011)
TCP Segment
A TCP segment is a TCP session packet containing part of a TCP bytestream
in transit. The fields of the TCP header segment are shown in Figure 8.2. The TCP
segment contains a minimum of 20 bytes of fixed fields and a variable-length
options field. The details of the fields are as follows.
 Source port and destination port specify, respectively, the user's port
number, which sends packets and the user's port number, which
receives packets.

 Sequence number is a 32-bit field that TCP assigns to the first data
byte in the segment. The sequence number restarts from 0 after the
number reaches 2^32 - 1.

 Acknowledgment number specifies the sequence number of the next


byte that a receiver waits for and acknowledges receipt of bytes up to
this sequence number. If the SYN field is set, the acknowledgment
number refers to the initial sequence number (ISN).

 Header length (HL) is a 4-bit field indicating the length of the header in
32-bit words.
 Urgent (URG) is a 1-bit field implying that the urgent-pointer field is
applicable.
 Acknowledgment (ACK) shows the validity of an acknowledgment.
 Push (PSH), if set, directs the receiver to immediately forward the data
to the destination application.
 Reset (RST), if set, directs the receiver to abort the connection.
 Synchronize (SYN) is a 1-bit field used as a connection request to
synchronize the sequence numbers.
 Finished (FIN) is a 1-bit field indicating that the sender has finished
sending the data.
 Window size specifies the advertised window size.
 Checksum is used to check the validity of the received packet.
 Urgent pointer is valid only if URG is set; the receiver adds the value
in the urgent-pointer field to the sequence number to determine the
last byte number of the data to be delivered urgently to the destination
application.
Figure 8.2. TCP segment format


One of the possible options is maximum segment size (MSS). A receiver uses
this option to specify the maximum segment size it can receive. A total of 16 bits are
provided to specify this option. Thus, the maximum segment size is limited to
65,535 bytes minus 20 bytes of TCP header and minus 20 bytes of IP header,
resulting in 65,495 bytes. The default MSS for TCP is 536 bytes. Another possible
option is the window scale. This option is used to increase the size of the advertised
window beyond the specified 2^16 - 1 in the header. The advertised window can be
scaled to a maximum of 2^14.
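
A sketch that packs and parses the 20 fixed header bytes with Python's struct module; the field values are hypothetical:

import struct

def parse_tcp_header(segment: bytes):
    (src, dst, seq, ack_no, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    hl = (off_flags >> 12) * 4          # HL field: 32-bit words -> bytes
    flags = {n: bool(off_flags & b) for n, b in
             [("URG", 0x20), ("ACK", 0x10), ("PSH", 0x08),
              ("RST", 0x04), ("SYN", 0x02), ("FIN", 0x01)]}
    return src, dst, seq, ack_no, hl, window, flags

# A hypothetical SYN segment: header length of 5 words, SYN bit set.
syn = struct.pack("!HHIIHHHH", 1025, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))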

Connection Setup
As a connection-oriented protocol, TCP requires an explicit connection set-up
phase. Connection is set up using a three-step mechanism, as shown in Figure 8.3
(a). Assume that host A is a sender and host B a destination. First, the sender sends a
connection request to the destination. The connection request comprises the initial
sequence number indicated by seq(i), with the SYN bit set. Next, on receipt of the
connection request, the destination sends an acknowledgment, ack(i + 1), back to the
source, indicating that the destination is waiting for the next byte. The destination
also sends a request packet comprising the sequence number, seq(j), and the SYN bit
set. Finally, at the third step, the sender returns an acknowledgment segment, ack(j +
1), specifying that it is waiting for the next byte. The sequence number of this next
segment is seq(i + 1). This process establishes a connection between the sender and
the receiver.

Figure 8.3. TCP signaling: (a) establishment of a connection; (b) collision of two
connection requests
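
The sequence/acknowledgment arithmetic of the three steps can be traced with a short sketch; the initial sequence numbers i and j are hypothetical:

def three_way_handshake(i: int, j: int) -> None:
    print(f"A -> B: SYN, seq={i}")                   # step 1: request, seq(i)
    print(f"B -> A: SYN, seq={j}, ack={i + 1}")      # step 2: ack(i+1) + seq(j)
    print(f"A -> B: ACK, seq={i + 1}, ack={j + 1}")  # step 3: ack(j+1)

three_way_handshake(i=4000, j=7000)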


The connection process uses different initial sequence numbers between the
sender and the receiver to distinguish between the old and new segments and to
avoid the duplication and subsequent deletion of one of the packets. In Figure 8.3,
both end-point hosts simultaneously try to establish a connection. In this case, since
they recognize the connection requests coming from each other, only one connection
is established. If one of the segments from a previous connection arrives late, the
receiver accepts the packet, presuming that it belongs to the new connection. If the
packet from the current connection with the same sequence number arrives, it is
considered a duplicate and is dropped. Hence, it is important to make sure that the
initial sequence numbers are different.
When one of the end points decides to abort the connection, a segment with
the RST bit set is sent. Once an application has no data to transmit, the sender sends
a segment with the FIN bit set. The receiver acknowledges receipt of this segment by
responding with an ACK and notifies the application that the connection is
terminated. Now, the flow from the sender to the receiver is terminated. However, in
such cases, the flow from the receiver to the sender is still open. The receiver then
sends a segment with the FIN bit set. Once the sender acknowledges this by
responding with an ACK, the connection is terminated at both ends.

5. Briefly explain the techniques to improve QOS. (A/M-2011)


There are several techniques that can be used to improve the quality of service. We
briefly discuss four common methods: scheduling, traffic shaping, admission control,
and resource reservation.

Scheduling
Packets from different flows arrive at a switch or router for processing. A
good scheduling technique treats the different flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the quality of service.
We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.

FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than
the average processing rate, the queue will fill up and new packets will be discarded.
A FIFO queue is familiar to those who have had to wait for a bus at a bus stop.
Figure 24.16 shows a conceptual view of a FIFO queue.


Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority
class has its own queue. The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last. Note that the system does not
stop serving a queue until it is empty. Figure 24.17 shows priority queuing with two
priority levels (for simplicity).

A priority queue can provide better QoS than the FIFO queue because
higher-priority traffic, such as multimedia, can reach the destination with less delay.
However, there is a potential drawback. If there is a continuous flow in a high-
priority queue, the packets in the lower-priority queues will never have a chance to
be processed. This is a condition called starvation.

Weighted Fair Queuing


A better scheduling method is weighted fair queuing. In this technique, the
packets are still assigned to different classes and admitted to different queues. The
queues, however, are weighted based on the priority of the queues; higher priority
means a higher weight. The system processes packets in each queue in a round-robin
fashion with the number of packets selected from each queue based on the
corresponding weight. For example, if the weights are 3, 2, and 1, three packets are
processed from the first queue, two from the second queue, and one from the third
queue. If the system does not impose priority on the classes, all weights can be
equal. In this way, we have fair queuing with priority. Figure 24.18 shows the
technique with three classes.

Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic
sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.
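
A round-robin sketch of weighted fair queuing with the weights 3, 2, and 1 from the example above; queue contents are hypothetical:

from collections import deque

def weighted_fair_round(queues, weights):
    """Serve one round: take up to 'weight' packets from each class queue."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                served.append(q.popleft())
    return served

classes = [deque(["a1", "a2", "a3", "a4"]), deque(["b1", "b2"]), deque(["c1"])]
print(weighted_fair_round(classes, [3, 2, 1]))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']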

Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water leaks
does not depend on the rate at which the water is input to the bucket unless the
bucket is empty. The input rate can vary, but the output rate remains constant.
Similarly, in networking, a technique called leaky bucket can smooth out bursty
traffic. Bursty chunks are stored in the bucket and sent out at an average rate. Figure
24.19 shows a leaky bucket and its effects.


In the figure, we assume that the network has committed a bandwidth of 3


Mbps for a host. The use of the leaky bucket shapes the input traffic to make it
conform to this commitment. In Figure 24.19 the host sends a burst of data at a rate
of 12 Mbps for 2 s, for a total of 24 M bits of data. The host is silent for 5 s and then
sends data at a rate of 2 Mbps for 3 s, for a total of 6 M bits of data. In all, the host
has sent 30 M bits of data in 10 s. The leaky bucket smooths the traffic by sending
out data at a rate of 3 Mbps during the same 10 s. Without the leaky bucket, the
beginning burst may have hurt the network by consuming more bandwidth than is
set aside for this host. We can also see that the leaky bucket may prevent congestion.
As an analogy, consider the freeway during rush hour (bursty traffic). If, instead,
commuters could stagger their working hours, congestion on our freeways could be
avoided.
A simple leaky bucket implementation is shown in Figure 24.20. A FIFO
queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in
ATM networks), the process removes a fixed number of packets from the queue at
each tick of the clock. If the traffic consists of variable-length packets, the fixed
output rate must be based on the number of bytes or bits.

The following is an algorithm for variable-length packets:


1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the
counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
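
A direct sketch of this algorithm for variable-length packets; the packet sizes and the per-tick budget n are illustrative:

from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    counter = n                            # step 1: initialize counter to n
    sent = []
    while queue and counter >= queue[0]:   # step 2: send while packets fit
        size = queue.popleft()
        sent.append(size)
        counter -= size
    return sent                            # step 3: counter resets next tick

packets = deque([200, 400, 700, 100])      # packet sizes in bytes
print(leaky_bucket_tick(packets, n=1000))  # [200, 400]; 700 exceeds the rest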


Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For
example, if a host is not sending for a while, its bucket becomes empty. Now if the
host has bursty data, the leaky bucket allows only an average rate. The time when
the host was idle is not taken into account. On the other hand, the token bucket
algorithm allows idle hosts to accumulate credit for the future in the form of tokens.
For each tick of the clock, the system sends n tokens to the bucket. The system
removes one token for every cell (or byte) of data sent. For example, if n is 100 and
the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can
consume all these tokens in one tick with 10,000 cells, or the host takes 1000 ticks
with 10 cells per tick. In other words, the host can send bursty data as long as the
bucket is not empty. Figure 24.21 shows the idea.
The token bucket can easily be implemented with a counter. The token is
initialized to zero. Each time a token is added, the counter is incremented by 1. Each
time a unit of data is sent, the counter is decremented by 1. When the counter is zero,
the host cannot send data.
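
The counter-based implementation can be sketched as follows, reproducing the n = 100, 100-idle-tick example from the text:

class TokenBucket:
    def __init__(self, n: int):
        self.n = n
        self.tokens = 0                 # counter initialized to zero

    def tick(self, cells_to_send: int = 0) -> int:
        self.tokens += self.n           # n tokens credited per tick
        sent = min(cells_to_send, self.tokens)
        self.tokens -= sent             # one token removed per cell sent
        return sent

tb = TokenBucket(n=100)
for _ in range(100):                    # host idle for 100 ticks
    tb.tick()
print(tb.tokens)                        # 10000 accumulated tokens
print(tb.tick(20000))                   # burst: 10100 cells (saved + new credit)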


Combining Token Bucket and Leaky Bucket


The two techniques can be combined to credit an idle host and at the same
time regulate the traffic. The leaky bucket is applied after the token bucket; the rate
of the leaky bucket needs to be higher than the rate of tokens dropped in the bucket.

Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so
on. The quality of service is improved if these resources are reserved beforehand.
We discuss in this section one QoS model called Integrated Services, which depends
heavily on resource reservation to improve the quality of service.

Admission Control
Admission control refers to the mechanism used by a router, or a switch, to
accept or reject a flow based on predefined parameters called flow specifications.
Before a router accepts a flow for processing, it checks the flow specifications to see
if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.

6. Write a note on congestion avoidance mechanisms. (N/D-2011)

Random Early Detection (RED)


Random early detection (RED) avoids congestion by detecting and taking
appropriate measures early. When packet queues in a router's buffer experience
congestion, they discard all incoming packets that could not be kept in the buffer.
This tail-drop policy leads to two serious problems: global synchronization of TCP
sessions and prolonged congestion in the network. RED overcomes the
disadvantages of the tail-drop policy in queues by randomly dropping the packets
when the average queue size exceeds a given minimum threshold.
From the statistical standpoint, when a queueing buffer is full, the policy of
random packet drop is better than multiple-packet drop at once. RED works as a
feedback mechanism to inform TCP sessions that the source anticipates congestion
and must reduce its transmission rate. The packet-drop probability is calculated
based on the weight allocation on its flow. For example, heavy flows experience a
larger number of dropped packets. The average queue size is computed, using an
exponentially weighted moving average so that RED does not react to spontaneous
transitions caused by bursty Internet traffic. When the average queue size exceeds
the maximum threshold, all further incoming packets are discarded.


RED Setup at Routers


With RED, a router continually monitors its own queue length and available
buffer space. When the buffer space begins to fill up and the router detects the
possibility of congestion, it notifies the source implicitly by dropping a few packets
from the source. The source detects this through a time-out period or a duplicate
ACK. Consequently, the router drops packets earlier than it has to and thus
implicitly notifies the source to reduce its congestion window size.
The "random" part of this method suggests that the router drops an arriving
packet with some drop probability when the queue length exceeds a threshold. This
scheme computes the average queue length, E[Nq], recursively by

E[Nq] = (1 - a)E[Nq] + aNq (Equation 7.3)

where a is a small weighting coefficient, making E[Nq] an exponentially weighted
moving average of the instantaneous queue length Nq. A preliminary drop
probability P is computed from the two thresholds as
P = Pmax(E[Nq] - Nmin)/(Nmax - Nmin) (Equation 7.4). We can then obtain the
actual drop probability from

Pd = P/(1 - cP) (Equation 7.5)

where coefficient c is set by the router to determine how quickly it wants to reach a
desired P. In fact, c can be thought of as the number of arriving packets that have
been queued.

In essence, when the queue length is below the minimum threshold, the packet
is admitted into the queue. Figure 7.15 shows the variable setup in RED congestion
avoidance. When the queue length is between the two thresholds, the packet-drop
probability increases as the queue length increases. When the queue length is above
the maximum threshold, the packet is always dropped. Also, as shown in Equation
(7.5), the packet-drop probability depends on a variable that represents the number
of arriving packets from a flow that has been queued. When the queue length
increases, all that is needed is to drop one packet from the source. The source then
halves its congestion window size.


Figure 7.15. Variables setup in the RED congestion-avoidance method

Once a small number of packets are dropped, the associated sources reduce
their congestion windows if the average queue length exceeds the minimum
threshold, and therefore the traffic to the router drops. With this method, any
congestion is avoided by an early dropping of packets.
RED also has a certain amount of fairness associated with it; for larger flows,
more packets are dropped, since P for these flows could become large. One of the
challenges in RED is to set the optimum values for Nmin, Nmax, and c. Typically,
Nmin has to be set large enough to keep the throughput at a reasonably high level
but low enough to avoid congestion. In practice, for most networks on the Internet,
the Nmax is set to twice the minimum threshold value. Also, as shown by guard
space in Figure 7.15, there has to be enough buffer space beyond Nmax, as Internet
traffic is bursty.
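
The drop decision of Equations 7.3-7.5 can be sketched as follows; the weight a, Pmax, and the threshold values are assumed parameters:

import random

def update_avg(avg: float, qlen: int, a: float = 0.02) -> float:
    return (1 - a) * avg + a * qlen            # EWMA of queue length (Eq. 7.3)

def red_drop(avg: float, n_min: int, n_max: int,
             p_max: float = 0.1, c: int = 0) -> bool:
    if avg < n_min:
        return False                           # below Nmin: always admit
    if avg >= n_max:
        return True                            # above Nmax: always drop
    p = p_max * (avg - n_min) / (n_max - n_min)   # Eq. 7.4
    return random.random() < p / (1 - c * p)      # Eq. 7.5

avg = 0.0
for qlen in [20, 60, 120, 180, 240]:           # sampled instantaneous lengths
    avg = update_avg(avg, qlen)
    print(round(avg, 1), red_drop(avg, n_min=50, n_max=150))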

7. Explain choke packet method of congestion control. (M/J-2012)


A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke packet
methods. In backpressure, the warning is from one node to its upstream node,
although the warning may eventually reach the source station. In the choke packet
method, the warning is from the router, which has encountered congestion, to the
source station directly. The intermediate nodes through which the packet has
traveled are not warned. We have seen an example of this type of control in ICMP.
When a router in the Internet is overwhelmed with IP datagrams, it may discard
some of them, but it informs the source host, using a source-quench ICMP message.
The warning message goes directly to the source station; the intermediate routers
do not take any action. Figure 24.7 shows the idea of a choke packet.

8. Discuss how multiplexing and demultiplexing are done in the transport layer. (M/J-2012)
The addressing mechanism allows multiplexing and demultiplexing by the
transport layer, as shown in Figure 23.6.

Multiplexing
At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a many-to-
one relationship and requires multiplexing. The protocol accepts messages from
different processes, differentiated by their assigned port numbers. After adding the
header, the transport layer passes the packet to the network layer.


Demultiplexing
At the receiver site, the relationship is one-to-many and requires
demultiplexing. The transport layer receives datagrams from the network layer. After
error checking and dropping of the header, the transport layer delivers each message
to the appropriate process based on the port number.
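
Port-based demultiplexing can be sketched as a dictionary dispatch; the port numbers and handlers are hypothetical:

def demultiplex(messages, processes):
    """Deliver each (destination-port, data) pair to the process
    registered on that port; unmatched segments are discarded."""
    for dst_port, data in messages:
        handler = processes.get(dst_port)
        if handler:
            handler(data)

processes = {
    80: lambda m: print("web process got:", m),
    53: lambda m: print("DNS process got:", m),
}
demultiplex([(80, "GET /index.html"), (53, "lookup example.com")], processes)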
