
UNIT II Medium Access Sub layer:

1. Medium access sublayer channel allocations
2. LAN protocols
3. ALOHA protocols
4. Overview of IEEE standards
5. FDDI
6. Data link layer elementary data link protocols
7. Sliding window protocols
8. Error handling

Lecture - 01 Medium access sub layer channel allocations:

Medium Access Control Sublayer: the ``data link layer for LANs''. Networks can be divided into point-to-point and broadcast; here we look at broadcast networks and their protocols. When many stations compete for a channel (e.g., a broadcast channel such as an Ethernet), an algorithm must arbitrate access to the shared channel. We need a way of ensuring that when two or more stations wish to transmit, they all wait until doing so won't interfere with other transmitters. Broadcast links include LANs, satellites (WAN), etc. LANs:

- diameter not more than a few kilometers
- data rate of at least several Mbps
- complete ownership by a single organization

MANs cover a city-wide area with LAN technology (for example, cable TV). LANs can have higher-speed, lower-error-rate lines than WANs.

Channel Allocation Methods

How do we allocate the broadcast channel to multiple users (nodes, stations)?

ALOHA--(Hawaii, 1970s). Packet radio. Simple: users transmit whenever they have data to send. On a collision, frames are destroyed. Senders can detect collisions because of the broadcast property, and resend after a random amount of time. What is the problem?

Defn: frame time--amount of time to transmit a standard fixed-length frame (frame size divided by transmission rate).

Slotted ALOHA--divide time into discrete intervals of width one frame time (slots). A station can only begin sending at the start of a time slot. (This needs a means of synchronization--a station emits a pip at the start of each interval.) This doubles the throughput (number of successful frames per frame time) compared with pure ALOHA.
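The standard throughput formulas for the two variants (assuming the usual Poisson model of frame arrivals) can be checked with a small sketch; the function names and sample loads below are ours, not from the notes:

```python
import math

def pure_aloha_throughput(G):
    """Throughput (successful frames per frame time) of pure ALOHA
    for offered load G (attempted frames per frame time)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Throughput of slotted ALOHA: the vulnerable period shrinks
    from two frame times to one, doubling the peak throughput."""
    return G * math.exp(-G)

# Peak throughputs: pure ALOHA at G = 0.5, slotted ALOHA at G = 1.0
print(pure_aloha_throughput(0.5))    # ~0.184
print(slotted_aloha_throughput(1.0)) # ~0.368
```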

Carrier Sense Protocols The problem with ALOHA is that frames are sent blindly--collisions are bound to occur. Instead, stations listen for a transmission before trying to send data--carrier sense--and only send if the channel is idle.

1-Persistent CSMA (Carrier Sense Multiple Access). Sense the channel; if idle, then send; if busy, wait until idle and then send. It is called 1-persistent because it sends with probability one when it senses the channel is idle. Collisions? Consider the effect of propagation delay (it takes time for a signal to propagate from one station to another). Even with zero propagation delay, collisions can still occur.

Nonpersistent CSMA--less greedy. Sense the channel. If idle, then send. If busy, wait a random amount of time before repeating the same routine. Do collisions go away? No, stations can still send at the same time.

p-persistent CSMA--applies to slotted channels. If the channel is idle, send with probability p; with probability q = 1 - p, defer to the next time slot. Delay for a random time if the channel is busy. Note: p = 1 means we transmit immediately; p = 0.1 means we transmit with probability 0.1. The primary advantage of p-persistent protocols is that they reduce the number of collisions under heavy load. The primary disadvantage is that they increase the average delay before a station transmits a frame; under low loads, the increased delay reduces efficiency.
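As a rough illustration of the p-persistent rule described above, here is a minimal decision sketch; the function name, return values, and default p are hypothetical, not part of any standard:

```python
import random

def p_persistent_decision(channel_idle, p=0.1):
    """One slot's decision for p-persistent CSMA (illustrative sketch).
    Returns 'send', 'defer' (to the next slot), or 'wait' (random backoff)."""
    if not channel_idle:
        return "wait"      # channel busy: delay a random time, then retry
    if random.random() < p:
        return "send"      # transmit in this slot with probability p
    return "defer"         # with probability 1 - p, defer to the next slot
```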

Protocol Efficiency The following table gives maximum efficiency percentages for some of the protocols we have studied so far:

CSMA/CD Another way to reduce the time wasted on collisions is to abort transmissions as soon as a collision is detected. CSMA networks with Collision Detection (CSMA/CD) do just that. How long does it take to detect a collision?

At least twice the propagation delay: in the worst case, the signal takes one propagation delay to travel from one end of the cable to the other, and the collision indication takes another to travel back. We'll call this interval the contention period. What does this say about building broadcast networks that span large distances? (Larger spans increase the propagation delay.)

What about small frames? A station could finish sending an entire frame before detecting a collision, so the standard pads short frames out to a minimum length.
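A back-of-the-envelope sketch of why short frames must be padded: a frame must occupy the channel for at least one contention period (2 x propagation delay) so the sender is still transmitting when a collision report returns. The numbers below are illustrative assumptions, not taken from the notes:

```python
def min_frame_bits(bit_rate_bps, one_way_prop_delay_s):
    """A frame must last at least one contention period (twice the
    one-way propagation delay). Rough sketch; real standards add
    margin for repeaters and other overheads."""
    return 2 * one_way_prop_delay_s * bit_rate_bps

# Illustrative numbers only: 10 Mbps and ~25.6 us end-to-end delay
# give a 512-bit (64-byte) minimum frame, matching classic Ethernet.
print(min_frame_bits(10_000_000, 25.6e-6))  # 512.0
```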

Lecture - 02 Overview of IEEE standards: 802 for LANs The IEEE has produced a set of LAN protocols known as the IEEE 802 protocols. These protocols have been adopted by ANSI and ISO:

802.2: logical link control standard (device driver interface)
802.3: CSMA/CD
802.4: token bus
802.5: token ring

Ethernet is a specific product implementing (or nearly so) the IEEE standard. Interesting to note that having an Ethernet port on a machine has become a standard (certainly for workstations). The 802.3 protocol is described as follows:

It is a 1-persistent CSMA/CD LAN. Its history is as follows:

1. started with ALOHA
2. continued at Xerox, where Metcalfe & Boggs produced a 3 Mbps LAN version
3. Xerox, DEC, and Intel standardized a 10 Mbps version
4. IEEE standardized a 10 Mbps version (with slight differences from the Xerox standard)

Physical constraints:

- the maximum length of a segment of cable is 500 meters
- segments can be separated by repeaters, devices that regenerate or ``amplify'' signals (not frames); a single repeater can join multiple segments
- maximum distance between two stations: 2.5 km; maximum number of repeaters along any path: 4 (why these limits?)

802.3 Frame Layout:

At the Medium Access Control (MAC) sublayer, frames consist of the following:

1. Frames begin with a 56-bit preamble consisting of alternating 1s and 0s. Purpose? Similar to the start bit in RS-232: it allows the receiver to detect the start of a frame.
2. The start of the frame is designated by the byte ``10101011''. That is, two consecutive 1 bits flag the end of the preamble.
3. 48-bit destination address (16-bit for the lower-speed version).
4. 48-bit source address (16-bit for the lower-speed version).
5. 16-bit data length field; maximum data size 1500 bytes.
6. 32-bit checksum field. The checksum is a number dependent on every bit in the frame. The sender computes a checksum and appends it to the frame. The receiver recalculates the checksum and compares the result with the value stored in the frame. If the two checksums differ, some of the bits in the frame must have changed and the packet is discarded as having errors.

There are two types of addresses:

1. Unicast addresses start with a high-order bit of 0. Unicast addresses refer to a single machine, and every Ethernet address in the world is guaranteed to be unique (e.g., the address is sort of like a serial number).
2. Multicast (group) addresses start with a high-order bit of 1. Multicast addresses refer to a set of one or more stations. A broadcast address (all 1's) is a special case of multicasting; all machines process broadcast frames. The management of multicast addresses (e.g., entering or leaving a group) must be handled by some outside mechanism (e.g., higher-layer software).

802.3 LANs use a binary exponential backoff algorithm:

If a station wishes to send a frame and the channel is idle, transmission proceeds immediately.

When a collision occurs, the sender generates a noise burst to ensure that all stations recognize the condition, and aborts its transmission. After the collision:

- wait 0 or 1 contention-period (slot) times before attempting to transmit again
- if another collision occurs, wait 0, 1, 2, or 3 slot times before attempting transmission again
- in general, wait between 0 and 2^r - 1 slot times, where r is the number of collisions (retransmission attempts) so far
- finally, freeze the interval at 1023 slot times after 10 attempts, and give up altogether after 16 attempts
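A minimal sketch of the backoff rule just described (not a full MAC implementation); the 51.2 microsecond slot time is the classic 10 Mbps figure and is assumed here only for illustration:

```python
import random

SLOT_TIME = 51.2e-6  # seconds; classic 802.3 contention slot at 10 Mbps (assumed)

def backoff_slots(collision_count):
    """Binary exponential backoff as described above: after the r-th
    collision wait a random number of slots in [0, 2**r - 1], freezing
    the interval at 1023 slots after 10 collisions and giving up
    entirely after 16 attempts. Sketch only."""
    if collision_count > 16:
        raise RuntimeError("too many collisions; abort transmission")
    r = min(collision_count, 10)
    return random.randint(0, 2 ** r - 1)

# e.g. after the 3rd collision, wait between 0 and 7 slot times
print(backoff_slots(3) * SLOT_TIME)
```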

Note: our goal is to keep delays low at low loads, but avoid collisions under high load. Also note that there are no acknowledgements; a sender has no way of knowing whether a frame was successfully delivered.

Switched 802.3 LANs Hosts can be connected to a switching hub. The advantage is that stations use the same network interface card, but they can run at higher speeds.

Lecture - 03

802.4: Token Bus Physically a bus, logically a ring, with each station having a number. It uses a special control frame called a token. A station can only send a message by first capturing the token, which passes between stations. There are no collisions because only one station at a time may hold the token. Each station supports different priority levels, with each priority class receiving a fraction of the station's token-holding time. Special tokens are broadcast periodically to allow new stations to enter the ring.

802.5: Token Ring The token is passed around the ring. It is seized to send a message, then regenerated and passed to the next station.

Comparison of LANs The non-deterministic nature of the CSMA/CD protocol makes real-time computing people nervous.

802.3:

- most widely used of the three
- low delay under light loads
- simple to install and maintain
- analog circuitry required to handle collision detection
- minimum frame size of 64 bytes is large when carrying single-byte keystrokes
- delay characteristics are non-deterministic
- originally limited to 10 Mbps and short distances (using hubs to get fast Ethernet, 100 Mbps)

- no support for multiple priorities

802.4:

- uses standard analog technology
- more deterministic access time (but not 100% either, due to ring maintenance overhead)
- supports priorities
- works well under heavy load
- relatively high delay at low loads
- can be mixed with TV and voice equipment

802.5:

- all digital
- can be used with fiber optic technology over long distances
- supports both large and small frames
- high efficiency at high loads
- low delays at low loads, bounded delay in any case
- ring monitor is a single point of failure

FDDI The Fiber Distributed Data Interface (FDDI) is a standard for token rings using fiber for the individual links:

1. supports data rates of 100 Mbps over distances of up to 200 km
2. uses LEDs rather than lasers (lasers are potentially dangerous)
3. has a very low bit error rate (on the order of 1 error in 2.5 x 10^10 bits)
4. cabling consists of a pair of rings (one running in each direction); the second ring can be used to reconfigure the network in the event of a station or link failure

The protocol itself is similar to 802.5, but has the following differences:

- 802.5 requires that a sender receive its own frame back before regenerating the token; because of FDDI's ring lengths, the sender instead regenerates the token as soon as the frame is out of its interface, allowing the ring to carry multiple token/data frames simultaneously
- FDDI can also carry synchronous frames for PCM or ISDN traffic

We are waiting for applications to catch up. This is similar to Ethernet, which people said was too complicated when it came out.

802.2: Logical Link Control Sits on top of the MAC layer to provide the data link layer equivalent.

Service options:

- unreliable datagram service
- acknowledged datagram service
- reliable connection-oriented service

Lecture - 04 Data link layer elementary data link protocols: Data Link Layer DLL purpose? The goal of the data link layer is to provide reliable, efficient communication between adjacent machines connected by a single communication channel. Specifically:
1. Group the physical layer bit stream into units called frames. Note that frames are nothing more than ``packets'' or ``messages''. By convention, we'll use the term ``frames'' when discussing DLL packets.
2. The sender checksums the frame and sends the checksum together with the data. The checksum allows the receiver to determine when a frame has been damaged in transit.
3. The receiver recomputes the checksum and compares it with the received value. If they differ, an error has occurred and the frame is discarded.
4. Perhaps return a positive or negative acknowledgment to the sender. A positive acknowledgment indicates the frame was received without errors, while a negative acknowledgment indicates the opposite.
5. Flow control. Prevent a fast sender from overwhelming a slower receiver. For example, a supercomputer can easily generate data faster than a PC can consume it.
6. In general, provide service to the network layer. The network layer wants to be able to send packets to its neighbors without worrying about the details of getting them there in one piece.

At least, the above is what the OSI reference model suggests. As we will see later, not everyone agrees that the data link layer should perform all these tasks.

Design Issues

If we don't follow the OSI reference model as gospel, we can imagine providing several alternative service semantics:

Reliable Delivery: Frames are delivered to the receiver reliably and in the same order as generated by the sender. Connection state keeps track of sending order and which frames require retransmission. For example, receiver state includes which frames have been received, which ones have not, etc.

Best Effort: The receiver does not return acknowledgments to the sender, so the sender has no way of knowing whether a frame has been successfully delivered. When would such a service be appropriate?

1. When higher layers can recover from errors with little loss in performance. That is, when errors are so infrequent that there is little to be gained by the data link layer performing the recovery. It is just as easy to have higher layers deal with the occasional lost packet.
2. For real-time applications requiring ``better never than late'' semantics. Old data may be worse than no data. For example, should an airplane bother calculating the proper wing flap angle using old altitude and wind speed data when newer data is already available?

Acknowledged Delivery: The receiver returns an acknowledgment frame to the sender indicating that a data frame was properly received. This sits somewhere between the other two in that the sender keeps connection state, but may not necessarily retransmit unacknowledged frames. Likewise, the receiver may hand received packets to higher layers in the order in which they arrive, regardless of the original sending order.

Typically, each frame is assigned a unique sequence number, which the receiver returns in an acknowledgment frame to indicate which frame the ACK refers to. The sender must retransmit unacknowledged (e.g., lost or damaged) frames.

Framing

The DLL translates the physical layer's raw bit stream into discrete units (messages) called frames. How can the receiver detect frame boundaries? That is, how can the receiver recognize the start and end of a frame?

Length Count: Make the first field in the frame's header be the length of the frame. That way the receiver knows how big the current frame is and can determine where the next frame ends. Disadvantage: the receiver loses synchronization when bits become garbled. If the bits in the count become corrupted during transmission, the receiver will think that the frame contains fewer (or more) bits than it actually does. Although the checksum will detect such incorrect frames, the receiver will have difficulty resynchronizing to the start of a new frame. This technique is not used anymore, since better techniques are available.

Bit Stuffing: Use reserved bit patterns to indicate the start and end of a frame. For instance, use the 4-bit sequence 0111 to delimit consecutive frames. A frame consists of everything between two delimiters. Problem: what happens if the reserved delimiter happens to appear in the frame itself? If we don't remove it from the data, the receiver will think that the incoming frame is actually two smaller frames! Solution: use bit stuffing. Within the frame, replace every occurrence of two consecutive 1's with 110; that is, append a zero bit after each pair of 1's in the data. This prevents 3 consecutive 1's from ever appearing inside the frame. Likewise, the receiver converts two consecutive 1's followed by a 0 into two 1's, but recognizes the 0111 sequence as the end of the frame. Example: the frame ``1011101'' would be transmitted over the physical layer as ``0111101101010111''.

Note: when using bit stuffing, locating the start/end of a frame is easy, even when frames are damaged. The receiver simply scans arriving data for the reserved patterns. Moreover, the receiver will resynchronize quickly with the sender as to where frames begin and end, even when bits in the frame get garbled. The main disadvantage of bit stuffing is the insertion of additional bits into the data stream, wasting bandwidth. How much expansion? The precise amount depends on the frequency with which the reserved patterns appear as user data.

Character Stuffing: Same idea as bit stuffing, but operating on bytes instead of bits. Use reserved characters to indicate the start and end of a frame. For instance, use the two-character sequence DLE STX (Data-Link Escape, Start of TeXt) to signal the beginning of a frame, and the sequence DLE ETX (End of TeXt) to flag the frame's end. Problem: what happens if the two-character sequence DLE ETX happens to appear in the frame itself? Solution: use character stuffing; within the frame, replace every occurrence of DLE with the two-character sequence DLE DLE. The receiver reverses the process, replacing every occurrence of DLE DLE with a single DLE.

Example: if the frame contained ``A B DLE D E DLE'', the characters transmitted over the channel would be ``DLE STX A B DLE DLE D E DLE DLE DLE ETX''. Disadvantage: a character is the smallest unit that can be operated on; not all architectures are byte oriented.

Encoding Violations: Send a signal that doesn't conform to any legal bit representation. In Manchester encoding, for instance, 1-bits are represented by a high-low sequence and 0-bits by low-high sequences. The start/end of a frame could be represented by the signal low-low or high-high. The advantage of encoding violations is that no extra bandwidth is required, as it is in bit stuffing. The IEEE 802.4 standard uses this approach.

Finally, some systems use a combination of these techniques. IEEE 802.3, for instance, has both a length field and special frame start and frame end patterns.
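The following sketch implements the simplified bit-stuffing scheme described above (delimiter 0111, stuff a 0 after every pair of consecutive 1s); note that real HDLC uses a different rule (flag 01111110, stuff after five 1s). Function names are ours:

```python
FLAG = "0111"  # frame delimiter used in the simplified scheme above

def bit_stuff(data):
    """Insert a 0 after every pair of consecutive 1s so that the
    pattern 0111 can never appear inside the frame body."""
    out, run = [], 0
    for bit in data:
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 2:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(stuffed):
    """Reverse the process: drop the 0 that follows each pair of 1s."""
    out, run, i = [], 0, 0
    while i < len(stuffed):
        bit = stuffed[i]
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 2:
            i += 1   # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

frame = "1011101"
wire = FLAG + bit_stuff(frame) + FLAG
print(wire)   # 0111101101010111, matching the example above
assert bit_unstuff(bit_stuff(frame)) == frame
```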

Lecture - 05 Error Control

Error control is concerned with ensuring that all frames are eventually delivered (possibly in order) to a destination. How? Three items are required.

Acknowledgements: Typically, reliable delivery is achieved using the ``acknowledgments with retransmission'' paradigm, whereby the receiver returns a special acknowledgment (ACK) frame to the sender indicating the correct receipt of a frame. In some systems, the receiver also returns a negative acknowledgment (NACK) for incorrectly received frames. This is nothing more than a hint to the sender so that it can retransmit a frame right away without waiting for a timer to expire.

Timers: One problem that simple ACK/NACK schemes fail to address is recovering from a frame that is lost and, as a result, fails to solicit an ACK or NACK. What happens if an ACK or NACK becomes lost? Retransmission timers are used to resend frames that don't produce an ACK. When sending a frame, schedule a timer to expire at some time after the ACK should have been returned. If the timer goes off, retransmit the frame.

Sequence Numbers: Retransmissions introduce the possibility of duplicate frames. To suppress duplicates, add sequence numbers to each frame, so that a receiver can distinguish between new frames and old copies.

Flow Control

Flow control deals with throttling the speed of the sender to match that of the receiver. Usually this is a dynamic process, as the receiving speed depends on such changing factors as the load and the availability of buffer space. One solution is to have the receiver extend credits to the sender. For each credit, the sender may send one frame. Thus, the receiver controls the transmission rate by handing out credits.

Link Management

In some cases, the data link layer service must be ``opened'' before use:

- The data link layer uses open operations for allocating buffer space, control blocks, agreeing on the maximum message size, etc.
- It synchronizes and initializes send and receive sequence numbers with its peer at the other end of the communications channel.

Error Detection and Correction

In data communication, line noise is a fact of life (e.g., signal attenuation, natural phenomena such as lightning, and the telephone repairman). Moreover, noise usually occurs as bursts rather than as independent single-bit errors. For example, a burst of lightning will affect a set of bits for a short time after the lightning strike.

Detecting and correcting errors requires redundancy--sending additional information along with the data. There are two ways of attacking errors:

Error Detecting Codes: Include enough redundancy bits to detect errors and use ACKs and retransmissions to recover from the errors.

Error Correcting Codes: Include enough redundancy to detect and correct errors.

To understand errors, consider the following:
1. Messages (frames) consist of m data (message) bits and r redundancy bits, yielding an n = (m+r)-bit codeword.
2. Hamming distance. Given any two codewords, we can determine how many of the bits differ: simply exclusive-or (XOR) the two words and count the number of 1 bits in the result. Significance? If two codewords are d bits apart, d errors are required to convert one to the other.
3. A code's Hamming distance is defined as the minimum Hamming distance between any two of its legal codewords (taken over all pairs of legal codewords).
4. In general, all possible data words are legal. However, by choosing the check bits carefully, the resulting codewords will have a large Hamming distance.
5. The larger the Hamming distance, the better the code can detect errors.

To detect d single-bit errors requires a Hamming distance of at least d+1. Why? With distance d+1, d errors cannot turn one legal codeword into another legal codeword. To correct d errors requires a Hamming distance of at least 2d+1. Intuitively, after d errors, the garbled message is still closer to the original message than to any other legal codeword.

Parity Bits

For example, consider parity: a single parity bit is appended to each data block (e.g., each character in ASCII systems) so that the number of 1 bits always adds up to an even (odd) number.

1000000(1)
1111101(0)

The Hamming distance of a parity code is 2; it can detect single-bit errors, but it cannot correct even single-bit errors.

As another example, consider a 10-bit code used to represent 4 possible values: ``00000 00000'', ``00000 11111'', ``11111 00000'', and ``11111 11111''. Its Hamming distance is 5, and we can correct up to 2 single-bit errors. For instance, ``10111 00010'' becomes ``11111 00000'' by changing only two bits. However, if the sender transmits ``11111 00000'' and the receiver sees ``00011 00000'', the receiver will not correct the error properly (it will ``correct'' to the nearer codeword ``00000 00000''). Finally, in this example we are guaranteed to catch all 2-bit errors, but we might do better: if ``00111 00111'' contains 4 single-bit errors, we will still reconstruct the block correctly.
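A small sketch of the Hamming-distance computation described in point 2 above, applied to the 10-bit code from the example; helper names are ours:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length codewords differ:
    XOR the words and count the 1 bits."""
    return bin(a ^ b).count("1")

def code_distance(codewords):
    """A code's Hamming distance is the minimum distance between any
    two of its legal codewords."""
    return min(hamming_distance(x, y)
               for i, x in enumerate(codewords)
               for y in codewords[i + 1:])

# The 10-bit code from the example above has Hamming distance 5,
# so it detects up to 4-bit errors and corrects up to 2-bit errors.
code = [0b0000000000, 0b0000011111, 0b1111100000, 0b1111111111]
print(code_distance(code))  # 5
```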

Lecture - 06 Single-Bit Error Correction

What is the fewest number of check bits needed to correct single-bit errors? Let us design a code containing n = m+r bits that corrects all single-bit errors (remember, m is the number of message (data) bits and r is the number of redundant (check) bits):

1. There are 2^m legal messages (i.e., legal bit patterns).
2. Each of the 2^m messages has n illegal codewords at a distance of 1 from it. That is, if we systematically invert each bit in the corresponding n-bit codeword, we get n illegal codewords at a distance of 1 from the original. Thus, each message requires n+1 bit patterns dedicated to it (the n that are one bit away plus the message itself).
3. The total number of bit patterns is 2^n, so we need (n+1) 2^m <= 2^n. That is, all encoded messages must be unique, and there can't be more required patterns than there are possible codewords.
4. Since n = m+r, we get (m+r+1) 2^m <= 2^(m+r), or (m+r+1) <= 2^r. This formula gives the absolute lower limit on the number of check bits required to detect (and correct!) single-bit errors.
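The lower bound in step 4 can be evaluated directly; this small sketch (function name ours) finds the smallest r for a given m:

```python
def min_check_bits(m):
    """Smallest r satisfying (m + r + 1) <= 2**r, the lower bound derived
    above on the number of check bits needed to correct single-bit errors
    in an m-bit message."""
    r = 0
    while (m + r + 1) > 2 ** r:
        r += 1
    return r

print(min_check_bits(7))     # 4  (the 11-bit Hamming code for ASCII below)
print(min_check_bits(1000))  # 10 (matches the figure quoted later for 1000-bit messages)
```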

Hamming developed a code that meets this lower limit:


Bits are numbered left-to-right starting at 1. Bit numbers that are powers of two (e.g., 1, 2, 4, 8, etc.) are check bits; the remaining bits are the actual data bits. Each check bit acts as a parity bit for a set of bits (both data and check).

To determine which parity bits in the codeword cover bit k of the codeword, rewrite bit position k as a sum of powers of two (e.g., 19 = 1+2+16). A bit is checked by only those check bits in the expansion (e.g., check bits 1, 2, and 16). When a codeword arrives, examine each check bit k to verify that it has the correct parity. If not, add k to a counter. At the end of the process, a zero counter means no errors have occurred; otherwise, the counter gives the bit position of the incorrect bit.

For instance, consider the ASCII character ``a'' = ``1100001''. We know that:

check bit 1 (position 1) covers all odd-numbered bits: 1, 3, 5, 7, 9, 11, ...
check bit 2 (position 2) covers bits 2, 3, 6, 7, 10, 11, ...
check bit 3 (position 4) covers bits 4, 5, 6, 7, 12, 13, 14, 15, ...
check bit 4 (position 8) covers bits 8, 9, 10, 11, 12, ...

Thus:

check bit 1 = 1 XOR 1 XOR 0 XOR 0 XOR 1 = 1 (data bits at positions 3, 5, 7, 9, 11)
check bit 2 = 1 XOR 0 XOR 0 XOR 0 XOR 1 = 0 (data bits at positions 3, 6, 7, 10, 11)
check bit 3 = 1 XOR 0 XOR 0 = 1 (data bits at positions 5, 6, 7)
check bit 4 = 0 XOR 0 XOR 1 = 1 (data bits at positions 9, 10, 11)

giving the 11-bit codeword ``10111001001''.

Note: Hamming codes correct only single-bit errors. To correct burst errors, we can send b blocks, distributing the burst over each of the b blocks. For instance, build a b-row matrix, where each row is one block. When actually sending the data, send it one column at a time. If a burst error occurs, each block (row) will see only a fraction of the errors, and may be able to correct its block.

Error correction is most useful in three contexts:

1. Simplex links (e.g., those that provide only one-way communication).
2. Long-delay paths, where retransmitting data leads to long delays (e.g., satellites).
3. Links with very high error rates, where there are often one or two errors in each frame. Without forward error correction, most frames would be damaged, and retransmitting them would result in the frames becoming garbled again.

Error Detection

Error correction is relatively expensive, both computationally and in bandwidth. For example, 10 redundancy bits are required to correct 1 single-bit error in a 1000-bit message. Detection? In contrast, detecting a single-bit error requires only a single bit, no matter how large the message. The most popular error detection codes are based on polynomial codes or cyclic redundancy codes (CRCs). These allow us to acknowledge correctly received frames and to discard incorrect ones.

CRC Checksums

The most popular error detection codes are based on polynomial codes, also called cyclic redundancy codes. Idea:

Represent a k-bit frame as the coefficients of a polynomial with terms ranging from x^(k-1) to x^0, with the high-order bit corresponding to the coefficient of x^(k-1). For example, represent the string ``11011'' as the polynomial x^4 + x^3 + x + 1.

Perform modulo 2 arithmetic (e.g. XOR of the bits)

Sender and receiver agree on a generator polynomial G(x). (The degree of G(x) must be smaller than the number of bits in the message.) Append a checksum to the message; let's call the message M(x) and the combination T(x). The checksum is computed as follows:

1. Let r be the degree of G(x); append r zeros to M(x). Our new polynomial becomes x^r M(x).
2. Divide x^r M(x) by G(x) using modulo 2 arithmetic.
3. Subtract the remainder from x^r M(x), giving us T(x).

When the receiver gets T(x), it divides T(x) by G(x); if it divides cleanly (i.e., no remainder), no error has occurred.
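A minimal sketch of this checksum computation using modulo-2 division on bit strings; the message and generator used below are illustrative values, not taken from the notes:

```python
def crc_remainder(message_bits, generator_bits):
    """Modulo-2 (XOR) polynomial division: append r zeros to the message
    (r = degree of G) and return the r-bit remainder. XORing this remainder
    onto the padded message yields T(x)."""
    r = len(generator_bits) - 1
    work = list(message_bits + "0" * r)
    for i in range(len(message_bits)):
        if work[i] == "1":                       # leading bit set: cancel it
            for j, g in enumerate(generator_bits):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-r:])

# Illustrative example: M(x) = 1101011011, G(x) = x^4 + x + 1 ("10011")
M, G = "1101011011", "10011"
rem = crc_remainder(M, G)
T = M + rem                                       # transmitted frame T(x)
# Appending zeros does not change divisibility by G, so a zero remainder
# here means T(x) divides cleanly at the receiver.
assert crc_remainder(T, G) == "0" * (len(G) - 1)
print(rem)  # 1110
```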

The presence of a remainder indicates an error. What sort of errors will we catch? Assume:

- the receiver gets T(x) + E(x), where each 1 bit in E(x) corresponds to a bit that was inverted in transit; k 1 bits in E(x) indicate k single-bit errors
- the receiver computes [T(x) + E(x)]/G(x); since T(x) is divisible by G(x), the remainder is determined by E(x)/G(x)

Will detect:

- Single-bit errors: E(x) = x^i. G(x) will detect the error if it contains more than one term (a polynomial with two or more terms cannot divide the single term x^i). If G(x) contains only one term, it may or may not detect the error, depending on E(x) and G(x).
- Two isolated single-bit errors: E(x) = x^i + x^j = x^j (x^(i-j) + 1). Note: x^(i-j) + 1 is not divisible by G(x) if G(x) contains two or more terms. Thus, we can detect double-bit errors if G(x) does not divide x^k + 1 for any k up to the maximum message size. Satisfactory generator polynomials can be found; x^15 + x^14 + 1, for instance, does not divide x^k + 1 for any k below 32,768.
- Errors consisting of an odd number of bits, provided G(x) contains the factor (x + 1).
- Burst errors of length less than or equal to the degree of G(x). Note: a polynomial with r check bits will detect all burst errors of length <= r.

What transmitted message will be in error but still generate a zero remainder at the receiving end? Since the receiver computes (T(x) + E(x))/G(x), any error with E(x) divisible by G(x) (e.g., E(x) = G(x)) goes undetected.

CRC Standards

There are currently three international standards:

CRC-12: x^12 + x^11 + x^3 + x^2 + x + 1
CRC-16: x^16 + x^15 + x^2 + 1
CRC-CCITT: x^16 + x^12 + x^5 + 1

Note: 16-bit CRCs detect all single and double errors, all errors with an odd number of bits, all burst errors of length <= 16 bits, and 99.997% of 17-bit error bursts. CRC computation is usually done in hardware!

MD5 is a message-digest algorithm for compressing a large message: take a message and produce a 128-bit message digest. This compact form can be used to validate the received copy of the message.
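For illustration, Python's standard hashlib module can produce an MD5 digest; the input string here is arbitrary:

```python
import hashlib

# 128-bit digest, rendered as 32 hex characters
digest = hashlib.md5(b"The quick brown fox").hexdigest()
print(digest, len(digest) * 4, "bits")
```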

Lecture - 07 Data Link Layer Protocols: The data link layer provides service to the Network Layer above it:

- The network layer is interested in getting messages to the corresponding network layer module on an adjacent machine.
- The remote network layer peer should receive the identical message generated by the sender (e.g., if the data link layer adds control information, the header information must be removed before the message is passed to the network layer).
- The network layer wants to be sure that all messages it sends will be delivered correctly (e.g., none lost, no corruption). Note that arbitrary errors may result in the loss of both data and control frames.
- The network layer wants messages to be delivered to the remote peer in the exact same order as they are sent.

Note: it is not always clear that we really want our data link layer protocol to provide this type of service. What if we run real-time applications across the link? Nonetheless, the ISO reference model suggests that the data link layer provide such a service, and we now examine the protocols that provide it.

Motivation

Look at successive data link protocols of increasing complexity that provide reliable, in-order message delivery to the network layer.

Environment

Assume the DLL executes as a process with routines to communicate with the network layer above and the physical layer below. It deals with messages (packets)--bit strings;

frames are the unit of transmission; a frame consists of data plus control bits (header information).

procedure wait(var event: EvType); {wait for an event; return the event type in 'event'}

Look at the data structures in Fig. 3-8.

Utopia/Unrestricted Simplex Protocol

Assumptions:


- data transmission in one direction only (simplex)
- no errors take place on the physical channel
- the sender/receiver can generate/consume an infinite amount of data, i.e., is always ready for sending/receiving

Simplex Stop-and-Wait Protocol

Drop the assumption that the receiver can process incoming data infinitely fast. Stop-and-wait: protocols where the sender sends one frame and then waits for an acknowledgement. In this protocol, the contents of the acknowledgement frame are unimportant. Data transmission is one-directional, but we must have a bidirectional line; a half-duplex (one direction at a time) physical channel would suffice.

Simplex Protocol for a Noisy Channel

What if the channel is noisy and we can lose frames (incorrect checksum)? Simple approach: add a timeout to the sender so that it retransmits the frame after a certain period. Scenario of what could happen:

- A transmits frame one
- B receives A1
- B generates an ACK
- the ACK is lost
- A times out, retransmits
- B gets a duplicate copy (and passes it on to the network layer)

Use a sequence number. How many bits? One bit is sufficient because we are only concerned with two successive frames. Positive Acknowledgement with Retransmission (PAR)--the sender waits for a positive acknowledgement before advancing to the next data item (Figure 3-11). How long should the timer be? What if it is too long? (Inefficient.) What if it is too short? Then there is a problem, because the ACK does not contain the sequence number of the frame being ACKed. Scenario:

- A sends frame zero
- A times out, resends frame A0
- B receives A0, ACKs
- B receives A0 again, ACKs again (does not accept the duplicate)
- A gets the first A0 ACK, sends frame A1
- A1 gets lost
- A gets the second A0 ACK, sends A2
- B gets A2 (rejects it: not the correct sequence number)

The sender will lose two packets before getting back on track (with A3, sequence number 1).

Lecture - 08 One Bit Sliding Window Protocol

Two-way communication (one-way is not realistic). Have two kinds of frames (distinguished by a kind field):

1. Data
2. Ack (carrying the sequence number of the last correctly received frame)

Piggybacking--add the acknowledgement to data frames going in the reverse direction, for better use of bandwidth. How long should we wait for an outgoing data frame before sending the ACK on its own? This is an example of a sliding window protocol. It contains a sequence number whose maximum value MaxSeq is 2^n - 1. For the stop-and-wait sliding window protocol, n = 1.

Essentially protocol 3, except ACKs are numbered, which solves the early-timeout problem.

- the protocol works; all frames are delivered in the correct order
- requires little buffer space
- poor line utilization (see the example below)

The problem with stop-and-wait protocols is that the sender can have only one unACKed frame outstanding. Example:

- 1000-bit frames
- 1 Mbps channel (satellite)
- 270 ms propagation delay

A frame takes 1 ms to send. With the propagation delay, the ACK is not seen at the sender until time 541 ms. Very poor channel utilization. Solution:

1. We can use larger frames, but the maximum size is limited by the bit error rate of the channel. The larger the frame, the higher the probability that it will be damaged during transmission.
2. Use pipelining: allow multiple frames to be in transmission simultaneously.

Pipelining

The sender does not wait for each frame to be ACKed. Rather, it sends many frames on the assumption that they will arrive. It must still get back ACKs for each frame.

Example 1: Use a 3-bit sequence number (0-7). Now we can transmit 7 frames (seq. nr. 0-6) before receiving an ACK.

Example 2: What if we allow the sender to send 8 (0-7) instead of 7 (0-6) frames? Potential problem with window sizes (receiver window size of one): MaxSeq is 7, so sequence numbers 0 through 7 are valid. How big can the sender window be?

1. Send frames 0-7.
2. The receiver receives 0-7 (one at a time) and sends ACKs.
3. All ACKs are lost.
4. Frame 0 times out and is retransmitted.
5. The receiver accepts frame 0 (why? because that is the next frame it expects) and passes it to the network layer -- a duplicate!

Window Size Rule: the sender window size (number of buffers) plus the receiver window size (number of buffers) must be at most 2^n, where n is the number of bits in the sequence number.

Example 3: This arrangement provides more efficient use of the channel and works well if we follow the window size rule and do not get errors. What if an error occurs? Suppose 7 frames are transmitted (seq. nr. 0-6) and the frame with sequence number 1 has an error. Will frames 2-6 be ignored at the receiver side, forcing the sender to retransmit them? That depends on the strategy chosen. Two strategies for the window size:

1. Go-back-n: receiver's window size of one (protocol 5).
2. Selective repeat: receiver's window size larger than one; generally the sender and receiver have the same window size (protocol 6).

There is a tradeoff between bandwidth and data link layer buffer space.
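The window size rule can be stated as a one-line check; this sketch (function name ours) simply restates the rule and the scenarios above:

```python
def windows_valid(sender_window, receiver_window, seq_bits):
    """Window size rule from above: sender window + receiver window
    must not exceed 2**n distinct sequence numbers."""
    return sender_window + receiver_window <= 2 ** seq_bits

print(windows_valid(7, 1, 3))  # True  - go-back-n with 3-bit sequence numbers
print(windows_valid(8, 1, 3))  # False - the failure scenario in Example 2
print(windows_valid(4, 4, 3))  # True  - selective repeat, Example 4
```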

In either case, we will need buffer space on the sender side; a frame cannot be released until an ACK for it is received. Use a timer for each unACKed frame that has been sent. We can now enable/disable the network layer, because we no longer assume the network layer is always ready (the NetworkLayerReady event).

Protocol 6, Selective Repeat (summarized, not in detail).

Example 4: Look at the same example using protocol 6 with sender and receiver window sizes each equal to 4. Now if 4 frames are transmitted (seq. nr. 0-3) and the frame with sequence number 1 has an error, frames 2-3 will not be ignored at the receiver side; they can be buffered. What happens when frame 1 times out and is retransmitted? Upon its reception, the receiver can pass frames 1, 2 and 3 up to the network layer and update its receiver window.

Protocol Performance

What is the channel efficiency of a stop-and-wait protocol?

- F = frame size = D + H = data + header bits
- C = channel capacity (bps)
- I = propagation delay and IMP service time (seconds)
- A = ack size (bits)

Draw a picture:

time between frames: F/C + 2I + A/C
time spent sending data: D/C
efficiency: (D/C) / (F/C + 2I + A/C)
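Plugging the satellite example into the efficiency formula above reproduces the poor utilization mentioned earlier; header and ACK sizes are not given in the notes, so zero is assumed here for illustration:

```python
def stop_and_wait_efficiency(D, H, A, C, I):
    """Channel efficiency of stop-and-wait using the symbols above:
    useful transmission time divided by the total time per frame."""
    F = D + H
    return (D / C) / (F / C + 2 * I + A / C)

# The satellite example: 1000-bit frames, 1 Mbps, 270 ms one-way delay.
print(stop_and_wait_efficiency(D=1000, H=0, A=0, C=1_000_000, I=0.270))
# ~0.0018, i.e. under 0.2% utilization: 1 ms of sending per 541 ms cycle
```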

Lecture - 09 Examples of the Data Link Layer

High-level Data Link Control (HDLC): adopted as part of X.25.

- bit oriented (uses bit stuffing and bit delimiters)
- 3-bit sequence numbers
- up to 7 unACKed frames can be outstanding at any time (how big is the receiver's window? one)
- ACKs the ``frame expected'' rather than the last frame received (any difference between the two? no, as long as both sides use the same convention)

Basically, a Go-Back-N protocol.

Data Link Layer in the Internet

Point-to-point lines:

1. between routers over leased lines
2. dial-up to a host via a modem

Two protocols are used:

SLIP--Serial Line IP. An older protocol that just adds a framing byte at the end of an IP packet (with appropriate character stuffing). Some problems:

1. no error detection/correction
2. supports only IP
3. each side must have its own IP address (not enough for all homes), although some versions assign addresses dynamically
4. no authentication
5. not an approved standard; many versions exist

PPP--Point-to-Point Protocol

A standard (RFCs 1661-1663)! Can be used for dial-up and leased router-to-router lines. Provides:

1. A framing method to delineate frames; also handles error detection.
2. A Link Control Protocol (LCP) for bringing lines up, negotiating options, and bringing them down. These are distinct PPP packets.
3. A Network Control Protocol (NCP) for negotiating network layer options.

PPP is similar to HDLC, but is character-oriented. It does not provide reliable data transfer using sequence numbers and acknowledgements by default; reliable data transfer can be requested as an option (as part of LCP).

Data Link Layer in ATM

Transmission Convergence (TC) sublayer (refer back to the ATM reference model). The physical layer is T1, T3, SONET, or FDDI. This sublayer does header checksumming and cell reception.

Header Checksum

The 5-byte header consists of 4 bytes of virtual circuit and control information. Checksum the 4 bytes of header information and store the result in the 5th byte. Use a CRC checksum and add a constant 01010101 bit string. There is a low probability of error (given the reliability of fiber), so keep the checksum cheap; upper layers can checksum the payload if they like. The 8-bit checksum field is called Header Error Control (HEC).

Transmission

We may have to output dummy cells in a synchronous medium (cells must be sent at periodic times); idle cells are used for this. There are also operation and maintenance (OAM) cells, which exchange control and other information.

Cell Reception

Drop idle cells, pass along OAM cells. We need to generate framing information for the underlying technology, but there are no framing bits! Instead, use a probabilistic approach of matching up valid headers and checksums in a 40-bit window. Use a state-transition diagram in which we look for consecutive valid headers; if a bad cell is received (a flipped bit), do not immediately give up on synchronization. Important: the TC sublayer must know about the header information above it, violating the layering principle in order to make things work.
