
Study Guide for the Cisco CCIE Routing & Switching Written Exam

Brad Ellis CCIE#5796

Study Guide for the 2006 Cisco CCIE R&S Written Exam
Author: Brad Ellis
Contributing Author: Rob Webber
Editor: Raymond Young
Copyright 2006 EnableMode Expert Inc.
Published by: EnableMode Expert Inc, in conjunction with Network Learning Inc (Cisco Learning Partner), 1997 Whitney Mesa Dr, Henderson, NV 89014 USA

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review.

Printed in the United States of America
First printing January 2006
ISBN: 1-931881-08-1

Warning and Disclaimer
This book is designed to provide information about the Cisco CCIE R&S written exam. Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied. The information is provided on an "as is" basis. The authors, editors, and EnableMode Expert Inc. shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the discs or programs that may accompany it. The opinions expressed in this book belong to the author and are not necessarily those of EnableMode Expert Inc.

Trademark Acknowledgements
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. EnableMode Expert Inc. cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

UPC: 82251881081

Feedback Information
At EnableMode Expert Inc, our goal is to create advanced technical material of the highest quality and value. Each book is authored with attention to detail, undergoing strenuous development that involves input from a variety of technical experts. Readers' feedback is a natural part of this process. If you have any comments on how we could improve the quality of our materials, or otherwise change them to better suit your needs, you can contact us through e-mail at sales@enablemode.com. Please make sure to include the book title and ISBN number in your message. Also, feel free to visit our websites, www.ccbootcamp.com and www.enablemode.com, for information on many more great products. Thank you for your input.

About the contributors:


Author - Brad Ellis
Brad Ellis (CCIE #5796, CCSI #30482, CSS1, CCDP, CCNP, MCNE, MCSE) works as a network engineer. Brad is co-owner of Network Learning Inc and owner of EnableMode Expert Inc. He has worked in the networking industry for over 12 years. He has worked on large-scale security assessments and infrastructure projects, and is currently focusing his efforts in the security and voice fields. Brad is a dual CCIE (R&S / Security) #5796.

Contributing Author - Rob Webber
Rob Webber (CCIE #6922) is a Senior Network Consultant with SBC/Callisma. He has over 17 years of experience in the data networking industry and has spent the last seven as a consultant. Rob specializes in complex network design and implementations in the financial, medical, manufacturing, and service provider industries. He is a Cisco Voice Design Specialist and was a contributing author on the Syngress books Cisco AVVID and IP Telephony Design and Implementation and IPv6 for Cisco IOS.

Editor - Raymond Young
Ray Young has an electrical engineering degree, has worked as an administrator in industry for many years, and has supported the implementation of a number of IT and networking projects in the field of supply chain management. He is currently pursuing his doctorate in engineering.


Table of Contents

Chapter 1: General Networking Theory
    OSI Model
    TCP/IP Model
    Routing and Switching Concepts
    Cisco Hierarchical Internetworking Model
    Distance-Vector Routing Protocols
    Link-State Routing Protocols
    Hybrid Routing Protocols
    Routing Loops
    Route Summarization
    Tunnels
    Networking Standards
    Protocol Mechanisms
    Transmission Control Protocol (TCP)
    User Datagram Protocol (UDP)
    Address Resolution Protocol (ARP)
    Commands
    Cisco Discovery Protocol (CDP)
    Chapter 1 Questions
    Chapter 1 Answers

Chapter 2: Bridging & LAN Switching
    General Bridging Rules
    Concurrent Routing and Bridging (CRB)
    Integrated Routing and Bridging (IRB)
    LAN Switching
    Trunking
    Gigabit Ethernet
    Virtual LAN (VLAN)
    VLAN Trunk Protocol (VTP)
    Spanning-Tree Protocol (STP) 802.1d
    Multi-Layer Switching (MLS)
    Catalyst Configuration Commands
    Chapter 2 Questions
    Chapter 2 Answers

Chapter 3: Internet Protocol (IP)
    IP Addressing
    Subnetting
    Network Address Translation (NAT)
    CIDR and VLSM
    Hot Standby Router Protocol (HSRP)
    Network Time Protocol (NTP)
    IP Services
    Domain Name Service (DNS)
    Internet Control Message Protocol (ICMP)
    IP Applications
    Simple Network Management Protocol (SNMP)
    Transport
    MPLS Overview
    Internet Protocol Version 6 (IPv6)
    Chapter 3 Questions
    Chapter 3 Answers

Chapter 4: IP Routing Protocols
    Routing Protocol Concepts
    Administrative Distance
    Distribution Lists
    Route-maps
    Prefix Lists
    Filter Lists
    Routing Information Protocol (RIP) & RIP v2
    Interior Gateway Routing Protocol (IGRP)
    Open Shortest Path First (OSPF)
    Enhanced Interior Gateway Routing Protocol (EIGRP)
    Border Gateway Protocol (BGP)
    Policy Routing
    The use of SHOW and DEBUG commands
    Chapter 4 Questions
    Chapter 4 Answers

Chapter 5: Quality of Service (QoS)
    QoS Overview
    Call Admission Control Functionality
    Integrated Services vs. Differentiated Services
    Configure QoS Policy using Modular QoS CLI
    Classification and Marking
    Difference Between Classification and Marking
    Class of Service, IP Precedence and DiffServ Code Points
    Congestion Management
    Congestion Avoidance
    Link Efficiency Tools
    Real Time Protocol Header Compression (CRTP)
    Policing and Shaping
    Chapter 5 Questions
    Chapter 5 Answers

Chapter 6: Wide Area Networking (WAN)
    Leased Line Protocols
    High-Level Data Link Control (HDLC)
    Point-to-Point Protocol (PPP)
    Modems and Async
    Frame Relay
    Physical Layer
    Alarms
    Dynamic Packet Transport / Spatial Reuse Protocol (DPT/SRP)
    Data Compression
    Chapter 6 Questions
    Chapter 6 Answers

Chapter 7: IP Multicast
    Benefits of IP Multicast
    IGMP and CGMP
    Multicast Protocols
    IGMP Versions 1, 2, and 3
    Administrative Scoped Addresses
    Distribution Trees (Shared Trees, Source Trees)
    Rendezvous Points (Auto-RP, BSR)
    Protocol Independent Multicast (PIM)
    Reverse Path Forwarding (RPF)
    Benefits and Drawbacks of PIM
    Source Specific Multicast (SSM)
    Internet Standard Multicast (ISM)
    Multicast Forwarding
    RPF Check
    Multicast Source Discovery Protocol (MSDP)
    IP Multicast Terms
    Chapter 7 Questions
    Chapter 7 Answers

Chapter 8: Security
    Bridge/Switch Security
    Understanding How Port Security Works
    Private VLANs
    802.1x
    Access Lists
    Security Criteria for Deploying Wireless VLANs
    RADIUS and TACACS+
    AAA Security Services
    Unicast RPF
    SMURF Attack
    Internet Key Exchange (IKE)
    IP Security Protocol (IPSec)
    IP Spoofing
    Chapter 8 Questions
    Chapter 8 Answers

Chapter 9: Enterprise Wireless Mobility
    Wireless Standards
    Wireless/802.11b
    Wireless Networking Terms
    Radio Frequency (RF) Terms
    Wireless Deployment Issues
    802.11b Framing Information
    802.1x Authentication
    Cisco Compatible Extensions (CCX)
    Related Cisco Products
    Antenna Types
    Wireless Handsets (Other Than Cisco)
    Cisco Structured Wireless-Aware Network (SWAN)
    802.11 On Its Own is Inherently Insecure
    Wireless Networks Are Targets for Intruders
    802.11 Wired Equivalent Privacy (WEP)
    IPsec in a WLAN Environment
    802.1x/EAP
    EAP Authentication Protocols
    EAP Authentication Summary
    RF Troubleshooting
    VoWLAN IP Telephony Network Sizing
    Multicast and Wireless Voice
    Server and Switch Recommendations
    Enhanced Distribution Channel Access (EDCA)
    Cisco 7920 Wireless IP Phone
    Wireless Domain Services (WDS)
    Cisco Unified Wireless Network
    Cisco Unified Wireless Network Products
    Cisco Unified Wireless Network Deployment
    Radio Management
    Public WLAN (PWLAN)
    PWLAN Security
    The Cisco PWLAN Solution
    CiscoWorks Wireless LAN Solution Engine (WLSE)
    Cisco Fast Secure Roaming
    Cisco AVVID Design
    Chapter 9 Questions
    Chapter 9 Answers

Introduction
By Brad Ellis

YOUR JOURNEY IS JUST BEGINNING!

I got my first CCIE certification back in April of 2000. That was quite an exciting time! There were no lab boot camps, no training classes, and there was VERY little training material available to use for practice (except, of course, www.ccbootcamp.com). Since then, I've spent many hours answering e-mails, phone calls, and posts on message forums in an attempt to help other CCIE candidates find their shortest path to becoming a CCIE.

The CCIE process is twofold: the written exam, which qualifies you for your lab exam attempt, and the actual lab exam itself. Both of these steps need to be broken down into smaller steps and appropriately attacked. The first big step, the written exam, is all theory, meaning you need to understand the meat and bones behind the Cisco networking technologies that are going to be presented to you in this book. Having a good understanding of these technologies will enable you to properly prepare for the second big (or shall we say, HUGE) step, the lab exam.

When I prepared for my written exam, I was forced to use many books and try to ascertain what was applicable to the exam and what wasn't. I probably overstudied by a couple of weeks for the exam, but I'm sure it paid off when I prepared for my lab exam! My goal, in authoring this book, was to collect ALL of the relevant information and put it together in a one-stop-shop book. While this book is enough for you to pass the written exam, it is certainly not enough for you to use to pass the lab exam. There are many other books available for the lab exam which go into much greater detail on the subjects covered in this book. We'll save the lab exam preparation for another time (and my next book!).

Why are there wireless questions on the written exam while wireless is not present in the lab exam? Could this be a hint of things to come? Only time will tell!

One of the biggest reasons why folks tend to start studying for the CCIE exams and never actually accomplish their goal is due to (1) time and (2) focus. Set your goals appropriately, try to finish reading this book within four to six weeks, and plan on taking your written exam after that. Then, schedule your lab exam for six to nine months out, and start studying for that. Put aside plenty of time to tackle the written and lab exams. Set the proper expectations with your family members. Let's get this first step out of the way together, and then you can focus on overcoming the lab exam!

Good luck on the first step of your networking journey!

Chapter 1

General Networking Theory
OSI Model

If you've been through the CCNA and CCNP exams, I'm sure you have a solid grasp on the OSI model; but it does deserve review while preparing for the CCIE Written exam. The OSI model is a common tool for conceptualizing how network traffic is handled. In the CCIE track, we will be interested primarily in the lower three levels. Just a reminder, you can use the old mnemonic "All People Seem To Need Data Processing" to help remember the sequence. The seven layers of the OSI model are:

Application - The application layer provides services directly to applications. This can include identifying communication partners, determining resource availability, and synchronizing communication. Some examples of application layer implementations include Telnet, FTP, SMTP, and SNMP.

Presentation - The presentation layer provides many coding and conversion functions. These ensure that information sent from the application layer of one system will be readable by the application layer of another. Examples of presentation layer coding and conversion schemes include ASCII, EBCDIC, JPEG, GIF, TIFF, MPEG, QuickTime, and codecs (G.711, G.728, and G.729).

Session - The session layer establishes, manages, maintains, and terminates communication sessions between applications. Some examples of session layer implementations include Remote Procedure Call (RPC), Zone Information Protocol (ZIP), and Session Control Protocol (SCP).

Transport - The transport layer segments and reassembles data into data streams, and is also responsible for both reliable and unreliable end-to-end data transmission. Transport layer functions typically include flow control, multiplexing, virtual circuit management, and error checking and recovery. Some examples of transport layer implementations include Transmission Control Protocol (TCP), Sequenced Packet Exchange (SPX), Name Binding Protocol (NBP), and the Real-Time Transport Protocol (RTP).

Network - The network layer uses logical addressing to provide routing and related functions to allow multiple data links to be combined into an internetwork. It supports both connection-oriented and connectionless service from higher-layer protocols. Network layer protocols are typically routing protocols; however, other types of protocols, such as the Internet Protocol (IP), are implemented at the network layer as well. Some common routing protocols include Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), and Routing Information Protocol (RIP).

Data Link - The data link layer provides reliable transmission of data across a physical medium. It specifies network and protocol attributes, including physical addressing, network topology, error notification, sequencing of frames, and flow control. Data link layer implementations can be categorized as either LAN or WAN types. LAN data link layer implementations include Ethernet, FDDI, and Token Ring. WAN data link layer implementations include Frame Relay, Link Access Procedure, Balanced (LAPB), and Synchronous Data Link Control (SDLC). The data link layer consists of two sub-layers, known as the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.

LLC - The LLC sub-layer manages communications between devices over a single link of a network. It supports both connectionless and connection-oriented services used by higher-layer protocols.

MAC - The MAC sub-layer manages protocol access to the physical network. The IEEE MAC specification defines 48-bit MAC addresses, which allow multiple devices to uniquely identify one another at the data link layer. The MAC layer manages addressing and access to the physical layer.

Physical - The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between communicating network systems. It defines such characteristics as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and the physical connectors to be used. Physical layer implementations can be categorized as either LAN or WAN types. LAN physical layer implementations include Ethernet, FDDI, and Token Ring. WAN physical layer implementations include High-Speed Serial Interface (HSSI), SMDS Interface Protocol (SIP), and X.21bis.

TCP/IP Model

The TCP/IP model is a four-layer model:

Application - The top layer is the application layer. This layer is for applications and processes that use the network. This layer maps to the OSI Application, Presentation, and Session layers.

Transport - This layer is responsible for end-to-end data integrity and reliable communication. This layer maps to the OSI Transport layer.

Internetwork - This layer is responsible for data routing. Datagrams are used at this layer, consisting of header, data, and trailer. This layer defines the datagram and the addressing scheme. This layer maps to the OSI Network layer.

Network - This is the lowest layer and defines how data is transferred across the physical connection. Data is exchanged between end stations and the physical network. Data is delivered between devices on the same network, using physical addresses. Datagrams are encapsulated into frames and transmitted. This layer corresponds to the OSI Data Link and Physical layers.

Routing and Switching Concepts

Routing is handled at Layer 3 of the OSI model. Bridging is handled at Layer 2 of the OSI model. Switching is handled at Layer 2 of the OSI model.

Switches have separate collision domains for each port, but one broadcast domain for all ports that belong to the same VLAN. By default, all ports on a Cisco switch belong to VLAN 1. Hub ports are all in the same collision and broadcast domains. Routers have separate collision and broadcast domains for each port.

Cisco Hierarchical Internetworking Model

There will be ongoing references to this model during the course of our discussions. Cisco uses the layers of this model to describe the function of various pieces of equipment and how they fit into a larger picture.

Access - The point at which users are allowed into the network. Local user ports, remote office WAN connections, and RAS services are all found at the Access layer. Cheap, reliable, but not particularly powerful high-port-density switches and hubs meet the need of connecting users to the network at the Access layer. The Access layer is where MAC address filtering and Virtual LANs (VLANs) function.

Distribution - The Distribution layer is the demarcation point between the Access and Core layers. Its purpose is to provide boundary definitions, and it is where packet manipulation takes place. Typically this is where aggregation of traffic, access lists, compression, and encryption take place. Think of this as where servicing of network data takes place, on the more powerful switches and routers you would find here.

Core - The Core layer is a high-speed switching backbone and should be designed to switch packets as fast as possible. This layer of the network should not perform any packet manipulation, such as access lists and filtering, which would slow down the switching of packets. Big workhorse routers and switches maximize speed in the Core.

Every novice network architect I have ever known has tried to describe to me why they have a better way of building networks, but whenever I've run across them years later, they have always come to see the light. If you've already been converted, please skip to the next section. Most people would logically want to put the two devices that are most closely related next to each other on a network, like putting a workgroup server next to the users that need to access it. While you're on a small network, that works fine. The host makes one hop to get to the device; you're done, go home. The problem begins when the new manager in that group wants to grant access to people on other floors of the building, or moves his people so they're not sitting near each other. You can begin to create VTP domains and put these people into the same VLAN, but pretty soon you take a day off and somebody else hooks up the new user in the department. Another day, somebody in another building wants to get on the server and the network admin does it without letting you know (you know how server-guys are).

Pretty soon you've got a company-wide resource, being accessed by important people from who-knows-where on a slow, congested, non-redundant access layer switch, with people making six or seven hops to reach the destination. If you don't believe me, take a cynical look at your user community, and the other engineers in your group, and tell me I'm wrong. Just my $.02, but it represents years of experience.

Distance-Vector Routing Protocols

DV protocols periodically pass full copies of their routing tables to all of their immediate neighbors. Each recipient then increments the metric values, updates its routing table, and forwards that table out through all of its interfaces to its neighboring routers. These routing protocols understand the direction and distance to any network connection on the internetwork. These updates are sent at specific increments, usually every 30 to 90 seconds, and contain the entire routing table.

Once this information has made the rounds, each router will have built a routing table with information about the "distances" to networked resources. It does not learn anything specific about other routers, or the network's actual topology. Because the "distance" usually depends on the number of "hops" to a destination, network distances and costs are rough estimates that do not relate directly to either physical distances or speed of the intermediate links.

The primary benefit of DV protocols is that they are easy to configure and maintain. For this reason they are quite common in small networks that have few redundant paths and no stringent network performance requirements. The most common DV routing protocol is Routing Information Protocol (RIP), which uses a single distance metric (hops) to determine the best next path to take for any given packet. Cisco's proprietary Interior Gateway Routing Protocol (IGRP) is another example of a DV routing protocol.

In any internetwork with redundant routes, using a dynamic routing protocol is better than using static routes, because the routing protocol will have the flexibility to automatically detect and correct failures in the network. The problems associated with DV protocols include slow convergence, routing loops, counting to infinity, and excessive bandwidth utilization. RIP version 1 and RIP version 2 are examples of distance-vector routing protocols.
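As a quick point of reference, here is a minimal sketch of how a DV protocol such as RIP version 2 might be enabled on a router; the network statement and addressing are hypothetical and would be adjusted to match your own topology:

router rip
 version 2
 ! advertise the directly connected networks that fall within 10.0.0.0/8
 network 10.0.0.0
 no auto-summary

Once at least one interface falls within a configured network statement, the periodic full-table updates described above begin on that interface.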

Link-State Routing Protocols

Link-state routing protocols develop and maintain a full knowledge of the network's routers as well as how they connect to one another. This information is gathered through the exchange of link-state advertisements (LSAs) between routers. The LSAs are used to develop a topological database, which the shortest path algorithm then uses to compute reachability to networked destinations. This process allows quick discovery of changes in the network topology, either because of a component failure, or as a result of changes by the network engineer.

One of the biggest advantages to link-state protocols is that they avoid the problem of wasted bandwidth that comes from DV routing protocols sending out their full routing tables several times a minute. On a properly configured network, this will leave more bandwidth available for passing user traffic. Other advantages to link-state routing protocols include:

- Faster convergence
- Greater scalability, allowing bigger, more robust networks
- Changes in topology can be sent out immediately, so convergence can be quicker
- They take bandwidth into account when determining routes

The concerns with link-state protocols include:

- During the initial discovery process, link-state routing protocols can flood the network, decreasing the network's capability to transport data.
- Link-state routing is both memory and processor intensive.

OSPF and ISIS are examples of link-state protocols.

Hybrid Routing Protocols

Hybrid routing protocols take into account basic distance-vector metrics, but also incorporate other, more accurate metrics in their calculations. They converge more rapidly than distance-vector protocols, while avoiding the processing overhead associated with link-state updates. Also, they are event driven rather than using a timer to decide when to send updates; this conserves bandwidth for the transmission of user data.

Cisco's proprietary Enhanced Interior Gateway Routing Protocol (EIGRP) is the most common hybrid routing protocol. It was designed to combine the best aspects of distance-vector and link-state routing protocols without incurring any of the performance limitations specific to either. Remember that one of the major limitations to EIGRP is that it only runs on Cisco equipment.
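For comparison with the RIP sketch earlier, a minimal EIGRP configuration might look like the following; the autonomous system number (100) and the network statement are hypothetical:

router eigrp 100
 ! the AS number must match on neighboring routers
 network 10.0.0.0
 no auto-summary

Because EIGRP is event driven, no periodic full-table updates are sent once the initial topology exchange with each neighbor completes.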

Routing Loops

Routing loops are created in several ways:

- Improperly configured protocol redistribution, especially mutual redistribution.
- Disabling split horizon, poison reverse, and other techniques that were introduced to inhibit the creation of loops.
- Not fixing a flapping route.

The elimination of routing loops has been one of the driving factors in the evolution of routing protocols. Having loops in your network is like driving around an unfamiliar neighborhood with conflicting directions from two different mentally deranged people, and trying to follow both at once. A field in the IP header called the Time-to-Live field will prevent IP packets from traversing the network forever, but the result will be that the data never reaches its destination.

Methods for Avoiding Routing Loops

Holddowns - Learned routes are held incommunicado for a period of time to prevent updates advertising networks that are misbehaving.

Triggered updates - Configuring routing updates to occur after a triggering event, such as a topology change. This allows quicker convergence.

Split horizon - If a router has received a route advertisement from a specific interface, it will not re-advertise it back to that same interface. Think of this as a sphincter; things are not sent back to where they came from (gross, but you won't forget it, and that's the point).

Poison reverse - Similar to split horizon, but instead of suppressing the advertisement, the route is advertised back to the originating interface as a poison reverse update with an unreachable (infinite) metric. The originating router gets its own route back marked as unreachable, so the route is removed from the table. This helps to more quickly clear bad routes from the list being passed back and forth between the routers.
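Split horizon can be toggled per interface when a design such as a Frame Relay hub-and-spoke requires it. The lines below are a sketch only; the interface name and EIGRP AS number are hypothetical, and disabling split horizon should be done with care precisely because it removes one of the loop-prevention safeguards listed above:

interface Serial0
 ! disable split horizon for RIP/IGRP on this interface
 no ip split-horizon
 ! EIGRP keeps a separate, per-AS split-horizon setting
 no ip split-horizon eigrp 100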

Route Summarization

Route summarization condenses routing information by consolidating like routes, and collapsing multiple subnet routes into a single network route. Where summarization is not applied, each router in a network must retain a route to every subnet in the network. This means as the network grows, the routing table becomes larger and larger. Routers that have had their routes summarized can reduce some sets of routes to a single advertisement, which reduces the load on the router and simplifies the network design.

For instance, let's consider a router that has several interfaces that have the following addresses:

Interface s0 - 172.16.215.0/24
Interface s1 - 172.16.126.0/24
Interface s2 - 172.16.227.0/24
Interface s3 - 172.16.218.0/24
Interface s4 - 172.16.219.0/24
Interface s5 - 172.16.129.0/24
Interface s6 - 172.16.119.0/24
Interface s7 - 172.16.117.0/24

Provided this address sequence was not used elsewhere on the network, an upstream neighbor could summarize these addresses as 172.16.0.0/16 and have only a single route in its table. For another example, consider that you had a router with interfaces configured as follows:

Interface s0 - 172.108.168.0/24
Interface s1 - 172.108.169.0/24
Interface s2 - 172.108.170.0/24
Interface s3 - 172.108.171.0/24
Interface s4 - 172.108.172.0/24
Interface s5 - 172.108.173.0/24

The entire range of subnets could be summarized as 172.108.168.0/21, and an upstream neighbor would only have to maintain a single route in its table. Let's take one more example, but this time review the actual bits involved:

a. 172.16.25.0/24
b. 172.16.26.0/24
c. 172.16.27.0/24
d. 172.16.28.0/24
e. 172.16.29.0/24
f. 172.16.30.0/24

First let's translate the decimal values of the IP addresses to binary:

a. 10101100.00010000.00011001.00000000
b. 10101100.00010000.00011010.00000000
c. 10101100.00010000.00011011.00000000
d. 10101100.00010000.00011100.00000000
e. 10101100.00010000.00011101.00000000
f. 10101100.00010000.00011110.00000000

Now let's compare and determine how many leading bits remain identical across all of the addresses (the common bits are shown to the left of the gap):

a. 10101100.00010000.00011 001.00000000
b. 10101100.00010000.00011 010.00000000
c. 10101100.00010000.00011 011.00000000
d. 10101100.00010000.00011 100.00000000
e. 10101100.00010000.00011 101.00000000
f. 10101100.00010000.00011 110.00000000

The first 21 bits are common to all six addresses.

You have just discovered the summary address and subnet: 172.16.24.0/21.

Some important reasons to take advantage of summarization:

- The larger the routing table, the more memory is required, because every entry takes up some of the available memory.
- The routing decision process may take longer to complete as the number of entries in the table increases.
- An added benefit of reducing the IP routing table size is that it requires less bandwidth and time to advertise the network to remote locations, thereby increasing network performance.

For large networks, the reduction in route propagation and routing information overhead can be significant. Route summarization is of minor concern in production networks until their size gets considerable. However, if summarization has not been taken into account during the initial design phase, it is very difficult to implement later.

Some routing protocols, such as EIGRP, summarize automatically. Other routing protocols, such as OSPF, require manual configuration to support route summarization. A routing protocol can summarize on a bit boundary only if it supports variable-length subnet masks (VLSMs). Remember that when redistributing routes from a routing protocol that supports VLSM (such as EIGRP or OSPF) into a routing protocol that does not (such as RIPv1 or IGRP) you might lose some routing information. The most specific network match is used first when a router running multiple protocols learns how to reach a destination network/host.

Some important requirements exist for summarization:

- Multiple IP addresses must share the same high-order bits. Since the summarization takes place on the low-order bits, the high-order bits must have commonality.
- Routing tables and protocols must use classless addressing to make their routing decisions; in other words, they are not restricted by the Class A, B, and C designations to indicate the boundaries for networks.
- Routing protocols must carry the prefix length (subnet mask) with the IP address.
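To make this concrete, here is a brief sketch of how the 172.16.24.0/21 summary from the worked example might be configured; the interface name, EIGRP AS number, and OSPF process and area numbers are hypothetical:

! EIGRP: advertise only the summary out a specific interface
interface Serial0
 ip summary-address eigrp 100 172.16.24.0 255.255.248.0
!
! OSPF: summarize the component /24s at an area border router
router ospf 1
 area 1 range 172.16.24.0 255.255.248.0

In both cases, 255.255.248.0 corresponds to the /21 prefix length derived from the 21 common high-order bits.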

Tunnels
A tunnel is a software interface on a Cisco router that is used to transport non-routable protocols across an IP network. AppleTalk, SNA, and NetBIOS are popular protocols to be tunneled. The encapsulated protocol and user data are carried as normal data in the encapsulating protocol. At the far end of the transmission, the encapsulating protocol is stripped off, and the encapsulated materials are processed as normal. Tunneling is often used to:

- Pass serial network traffic through a packet-switched IP network
- Pass a non-routable protocol by putting it inside a routable one
- Pass IPX or some other protocol through an IP network or link without adding the extra overhead of processing a second protocol
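A minimal GRE tunnel sketch is shown below for illustration; the interface names, tunnel addressing, and endpoint address are hypothetical, and GRE over IP is shown simply because it is the default tunnel mode on Cisco routers:

interface Tunnel0
 ip address 192.168.100.1 255.255.255.252
 ! source can be a local interface or a local IP address
 tunnel source Serial0
 ! the far-end router's reachable IP address
 tunnel destination 10.1.1.2
 ! optional; GRE/IP is already the default mode
 tunnel mode gre ip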

Networking Standards

IEEE 802.x Protocols

IEEE - The Institute of Electrical and Electronics Engineers, a professional organization that, among other things, develops communications and network standards.

IEEE 802.1 - IEEE specification that describes an algorithm that prevents bridging loops by creating a spanning tree. The algorithm was invented by Digital Equipment Corporation (DEC). The DEC algorithm and the IEEE 802.1 algorithm are not exactly the same, nor are they compatible.

IEEE 802.2 - IEEE LAN protocol that specifies an implementation of the LLC sublayer of the data link layer. IEEE 802.2 handles errors, framing, flow control, and the network layer (Layer 3) service interface. The maximum transmit value for LLC flow control is 127. Used in IEEE 802.3 and IEEE 802.5 LANs.

IEEE 802.3 - This is the specification that describes Ethernet. It is an IEEE LAN protocol that specifies an implementation of the physical layer and the MAC sublayer of the data link layer. IEEE 802.3 uses CSMA/CD access at different speeds over different physical media. Extensions to the IEEE 802.3 standard specify Fast Ethernet.

IEEE 802.4 - IEEE LAN protocol that specifies an implementation of the physical layer and the MAC sublayer of the data link layer. IEEE 802.4 uses token-passing access over a bus topology and uses a token bus LAN architecture. This specification describes the Token Ring Bus.

IEEE 802.5 - IEEE LAN protocol that specifies an implementation of the physical layer and MAC sublayer of the data link layer. IEEE 802.5 uses token-passing access at 4 or 16 Mbps over STP cabling and is similar to IBM Token Ring.

IEEE 802.6 - IEEE MAN specification that builds on DQDB technology. IEEE 802.6 supports data rates of 1.5 to 155 Mbps. This is the specification that describes Metropolitan Area Networks (MANs).

More 802.x standards:

802.7  - Broadband
802.8  - Fiber-optic LANs
802.9  - Integrated Voice & Data LAN
802.10 - LAN/MAN Security
802.11 - Wireless
802.12 - Demand Priority Access LAN, 100VG-AnyLAN (HP's answer to Fast Ethernet)

Cabling and connector standards

Crossover cable (RJ-45 to RJ-45):

Pin 1 (Rx+)  -  Pin 3 (Tx+)
Pin 2 (Rx-)  -  Pin 6 (Tx-)
Pin 3 (Tx+)  -  Pin 1 (Rx+)
Pin 6 (Tx-)  -  Pin 2 (Rx-)

Straight-through cable (RJ-45 to RJ-45):

Pin 1 (Tx+)  -  Pin 1 (Rx+)
Pin 2 (Tx-)  -  Pin 2 (Rx-)
Pin 3 (Rx+)  -  Pin 3 (Tx+)
Pin 6 (Rx-)  -  Pin 6 (Tx-)

(RJ-45 connector diagram showing the location of pin 1, top view and front view)

Protocol Mechanisms

Connection-Oriented and Connectionless Service

Connection-oriented services require connection establishment and termination. Packets are sequenced so that they can be reassembled in the proper order. Connection-oriented services also include a sliding window for flow control, built-in error recovery, and acknowledgment of data delivery. Connectionless services supply no message sequencing or guarantees of data delivery. The higher layers of the OSI model are responsible for error recovery, flow control, and reliability. Transmission Control Protocol (TCP) is considered connection-oriented, because the transmission of packets has the built-in ability to ensure delivery. User Datagram Protocol (UDP) would be considered a connectionless service.

Maximum Transmission Unit (MTU)

The Maximum Transmission Unit (MTU) that can be transmitted between two nodes on a local network generally is fixed for an entire network. The default values for several common technologies are:

Network Type        MTU
ARPANET             1,007 bytes
FDDI                4,352 bytes
4 MB Token Ring     2,002 bytes
16 MB Token Ring    4,352 bytes
Ethernet            1,500 bytes
X.25                128 bytes

If the path between two devices includes segments with varying MTU values, traffic may need to be fragmented into smaller sections, in order to support the smallest MTU. This can also become an issue when dealing with encapsulated traffic, like a GRE tunnel. If there is a GRE tunnel over Ethernet, the MTU for traffic inside the tunnel needs to be smaller, so the GRE encapsulation will not exceed the Ethernet MTU.
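As an illustration of the GRE point above, the sketch below lowers the IP MTU on a hypothetical tunnel interface so that the roughly 24 bytes of GRE plus outer IP overhead still fit inside a 1,500-byte Ethernet frame; the ip tcp adjust-mss line is an optional extra, available on IOS releases that support it, that clamps TCP sessions passing through the tunnel. Interface names and values are illustrative only:

interface Tunnel0
 ! 1500 bytes minus about 24 bytes of GRE + outer IP header
 ip mtu 1476
 ! optional: clamp TCP MSS so TCP flows avoid fragmentation (1476 - 40)
 ip tcp adjust-mss 1436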

Transmission Control Protocol (TCP)

TCP is a connection-oriented Layer-4 (transport layer) protocol designed to provide reliable end-to-end transmission of data in an IP environment.

TCP groups bytes into sequenced segments, and then passes them to IP for delivery. These sequenced bytes have forward acknowledgment numbers that indicate to the destination host what byte it should see next. Bytes not acknowledged to the source host within a specified time period are retransmitted, which allows devices to deal with lost, delayed, duplicate, or misread packets. This time-out mechanism allows devices to detect lost packets and request retransmission. The receiving TCP process indicates the highest sequence number it can receive without overflowing its internal buffers.

TCP hosts establish a connection-oriented session with one another through a "three-way handshake" mechanism, which synchronizes both ends of a connection by allowing both sides to agree upon initial sequence numbers. Each host first randomly chooses a sequence number to use in tracking bytes within the stream it is sending and receiving. Then, the three-way handshake proceeds in the following manner:

- The initiating host (Host-A) initiates a connection by sending a packet with the initial sequence number ("X") and SYN bit (or flag) set to make a connection request of the destination host (Host-B).
- Host-B receives the SYN bit, records the sequence number "X", and replies by acknowledging the SYN (with an ACK=X+1). Host-B includes its own initial sequence number ("Y"). As an example: an ACK of "20" means that Host-B has received bytes 0 through 19 and expects byte 20 next. This technique is called forward acknowledgment.
- Host-A then acknowledges all bytes from Host-B with a forward acknowledgment indicating the next byte Host-A expects to receive (ACK=Y+1). Data transfer can now begin.

TCP Sliding Window (Data Transfer)

The receiver specifies the size of the window, meaning the number of data bytes the sender is allowed to send before receiving an acknowledgment. The initial window sizes are established during setup, but can change depending on flow control requirements. Consider this example sequence:

- The sender (Host-A) has a sequence of ten bytes ready to send (numbered 1 to 10) to a recipient (Host-B) who has a defined window size of five.
- Host-A will place a window around the first five bytes, transmit them together, and wait for an acknowledgment.
- Host-B will respond with an "ACK = 6", indicating that it has received bytes 1 to 5 and is expecting byte 6 next.
- Host-A then moves the sliding window five bytes to the right and transmits bytes 6 to 10.
- Host-B will respond with an "ACK = 11", indicating that it is expecting sequenced byte 11 next. In this packet, the receiver might indicate that its window size is 0 (because, for example, its internal buffers are full). Host-A won't send any more bytes until Host-B sends a subsequent packet with a window size greater than 0.

TCP has mechanisms for expanding and contracting the window size depending on flow control needs, starting with a small window size and increasing over time as the link proves to be reliable. When TCP sees that packets have been dropped (ACKs are not received for packets sent), it tries to determine the rate at which it can send traffic through the network without dropping packets. Once data starts to flow again, it slowly begins the process again. This may create oscillating window sizes if the main problem has not been resolved, so the window size is slowly expanded after each successful ACK is received. This phase of increasing the window size slowly is called "slow start."

TCP Flags (Control Bits)

Here are the flags used in the TCP handshaking and data transfer processes:

URG: Urgent Pointer field significant
ACK: Acknowledgment field significant
PSH: Push function; tells the receiver to send the data as soon as possible
RST: Reset the connection
SYN: Synchronize sequence numbers to initiate a connection
FIN: No more data from sender

The SYN and ACK flags are used for connection establishment. The FIN and RST flags are used for connection termination.

User Datagram Protocol (UDP)

UDP is a connectionless Layer-4 (transport layer) protocol designed for the transmission of data in an IP environment. It is basically an interface between IP and upper-layer processes, and uses protocol ports to distinguish between applications. Unlike TCP, UDP does not have the ability to ensure reliability, or provide flow-control and error-recovery functions. UDP's primary benefit is its simplicity, since its headers contain fewer bytes and consume less network overhead than TCP. UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in cases where a higher-layer protocol might provide error and flow control. An application that benefits from the lack of error correction is VoIP; packets received out of order are better dropped than rebroadcast. UDP is the transport protocol for several well-known application-layer protocols, including Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name System (DNS), and Trivial File Transfer Protocol (TFTP).

Address Resolution Protocol (ARP)

When two machines on a given network need to communicate, they usually have the IP address of their destination, but need the MAC address to talk. Address Resolution Protocol (ARP) provides the ability for a host to dynamically discover the Layer-2 MAC address of a particular Layer-3 IP address. When a workstation attempts to communicate with an IP address it follows this process:

- A host compares the IP address and subnet mask of the desired destination with its own IP address and subnet mask to determine if the traffic is local, or if it is to be sent to the default gateway. Remember that if the MAC address of the default gateway is not known, it will follow the ARP process to obtain it.
- If the traffic is local, the host looks at the ARP cache, an index of recently acquired IP-to-MAC address combinations. If the appropriate address is there, communication is established.
- If the IP address is not in the local ARP table, the source host will send an ARP request packet containing the network-layer address, seeking to resolve it to a MAC address for the desired destination. All hosts on the network receive this request, but only the host with the specified network address will respond.
- If present and functioning, the host with the specified address responds with an ARP Reply packet containing its MAC address. The originating device receives the ARP Reply packet, stores the MAC address in its ARP cache for future use, and begins exchanging packets with the destination host.
- If the host is not on the local network, AND the originating device doesn't realize that to be the case, AND the router on the local network is configured to perform Proxy ARP (the Cisco default setting), the router will look up the network address in its route table; if it finds it, it will return the MAC address of its local interface to the ARP-ing source station. While unusual, that can happen, depending on the host's OS version, if the user forgets to configure a default gateway, or configures the device with its own address as the default gateway. This can cause a station to ARP for every unknown address, local or not.
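Two related knobs worth remembering are shown in the brief sketch below; the interface name, IP address, and MAC address are purely hypothetical. Proxy ARP can be turned off per interface when you do not want the router answering ARP requests on behalf of remote hosts, and a static ARP entry can be added for a device that cannot answer ARP itself:

interface FastEthernet0/0
 ! stop this interface from answering ARP for non-local addresses
 no ip proxy-arp
!
! static IP-to-MAC binding (ARPA encapsulation for Ethernet)
arp 10.1.1.50 0000.0c12.3456 arpa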

The show arp exec command will display the contents of a Cisco router's ARP cache, while the show ip arp command will show the IP-specific entries.

Reverse Address Resolution Protocol (RARP) functions in the opposite direction, and is used to map MAC-layer addresses to IP addresses. It is most commonly used by diskless workstations that do not know their IP addresses when they boot. RARP requires a RARP server with table entries of MAC-layer-to-IP address mappings.

Passive Interface

When enabled on an interface, this command allows the interface to receive routing updates, but does not allow it to forward routes out of this interface. For routing protocols such as OSPF and EIGRP, this will prevent any updates being received because those protocols rely on forming adjacencies with neighbor routers; with passive interfaces these adjacencies cannot occur. With protocols such as RIP and IGRP, the routes will still be received, just not sent back out.

Example 1-1:
RouterA(config)# router rip
RouterA(config-router)# passive-interface serial 0
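For link-state and hybrid protocols, a common pattern is to make every interface passive by default and then re-enable only the interfaces where adjacencies are actually wanted. The sketch below is illustrative; the OSPF process number, interface, and network statement are hypothetical:

router ospf 1
 ! suppress hellos (and thus adjacencies) everywhere by default
 passive-interface default
 ! re-enable OSPF neighbor relationships only where wanted
 no passive-interface Serial0
 network 10.0.0.0 0.255.255.255 area 0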

Figure 1-1. Connectivity (Ethernet Segment A - Device A - Serial Segment A - Device B - Ethernet Segment B)

Bridged Environment

Refer to the above diagram, and assume the devices are bridges or switches. A packet will retain the actual source and destination MAC addresses when crossing a bridge or switch, provided they are in the same VLAN. Cisco switches and bridges will not modify the Layer 2 MAC address of a frame when switching (the only exception being SR/TLB). In the above diagram, if Devices A and B were two switches or two bridges, packets sent from Host A to Host B would have the source MAC address of Host A's Ethernet adapter and the destination MAC address of Host B's Ethernet adapter, and would not be changed by the switches. If a packet were to be lost anywhere between Host A and Host B, the originator would retransmit the missing packet.

Routed Environment

Continuing to refer to the above diagram, for a routed environment, the source MAC address begins as the address of the sending host. The destination MAC address is that of the router's local interface. Once the router processes the packet, the source MAC address is changed to the address of the router's forwarding port, and the destination is changed to the packet's next hop. This process is repeated as the packet traverses the network, each hop changing the source and destination MAC addresses. A host sending a frame to the router for processing to a remote destination will use the router's local interface as the destination MAC address. The host receiving the frame from the router will see a source MAC address of the local router interface. If the devices in the diagram are routers:

- When a packet from Host A to Host B first leaves Host A, it will have a source MAC address of Host A and a destination MAC address of Router A's local Ethernet port on Ethernet Segment A.
- After being received and processed by Router A, the packet from Host A to Host B will have a source MAC address of Router A's Serial port and a destination MAC address of Router B's local Serial port on Serial Segment A.
- After being received and processed by Router B, the packet will have a source MAC address of Router B's Ethernet port and a destination MAC address of Host B on Ethernet Segment B.

Commands

The Cisco command-line interface is what you will become all too familiar with. Remember, the IOS is your friend. You should learn to know the IOS like your own backyard. You should know the official Cisco IOS documentation like your own autobiography.

Show Commands

The primary way to extract information from a Cisco router is through the use of a long list of show commands. These exec commands provide a snapshot of the conditions under evaluation. The show interface command is a good example: it provides information about the interface specified, and each interface type will have information unique to that type of interface. In the examples section, you will see output from the show running-config command and the show interface commands for Serial and Token Ring interfaces.

Determining the Hardware Configuration

The EXEC commands show hardware and show version will display the hardware detail of the router.

snmp-server user

Configures users and defines authentication and privacy for those users.

Syntax:

snmp-server user username groupname {snmpv1 | snmpv2 | snmpv3 [encrypted] [auth {md5 | sha} auth-password [priv des56 priv-password]]} [access access-list-name]
[no] snmp-server user username groupname {snmpv1 | snmpv2 | snmpv3}

username - User's name. May contain 1 to 32 alphabetic and/or numeric characters.

groupname - Group name with which the user is associated.

snmpv1 - SNMP Version 1. The least secure of the security models.

snmpv2 - SNMP Version 2. The second least secure of the security models. It allows for the transmission of informs and Counter64, which allows for integers twice the width of what is normally allowed.

snmpv3 - SNMP Version 3. The most secure.

encrypted - Specifies whether the auth-password and priv-password appear in encrypted format (hiding the true characters of the string). An encrypted string must be entered as a hexadecimal string. This keyword can only be specified when using SNMPv3.

auth {md5 | sha} - If auth is specified, messages sent by this user will be authenticated using the authentication protocol (HMAC-MD5-96 or HMAC-SHA-96) and auth-password. The auth-password string must not exceed 64 characters. If specifying encrypted, then auth-password should be the digest string using the selected message digest algorithm as specified in RFC 2574; MD5 has 16 octets in the digest and SHA has 20. If not specifying encrypted, the user enters the authentication password string for MD5 or SHA, and the agent converts the password to a digest as described in RFC 2574. This argument can be specified only when using SNMPv3. If auth is not specified, authentication is not done.

priv des56 priv-password - If priv is specified, messages sent by this user will be encrypted using CBC-DES and priv-password. The priv-password must not exceed 64 characters. If specifying encrypted, the user must enter a 16-octet DES key in hexadecimal format for the priv-password. If the encrypted keyword is not used, the user enters a password string and the agent will generate a suitable 16-octet DES key from the password string. This argument can only be used with SNMPv3 and when authentication is specified. If priv is not specified, then messages are not encrypted.

access access-list-name - Configures a standard access list name. The name of the access list may be up to 24 characters. By default, all IP addresses are permitted.

Description: The snmp-server user command configures a user, assigns the user to a group, and defines authentication and privacy for this user. The ACL associated with a user overrides the ACL assigned to the user's group. If no user ACL is defined, the group's ACL takes precedence. Use the no snmp-server user command to delete users.

Reverse Telnet Configuration

The no exec command is an optional command for reverse telnet configuration. Adding this line reduces contention over the asynchronous port. By default, the switch starts EXECs on all lines. When you want to allow only an outgoing connection on a line, use the no exec command. At times, an EXEC spawned on the line can make it difficult to use a reverse telnet session; the command no exec will fix this. When a user tries to use Telnet to access a line with the no exec command configured, the user gets no response when pressing the Return key at the login screen. A brief line-configuration sketch appears after the file copy list below.

File Copy

The following copy commands are valid:

/erase      Erase destination file system.
bootflash:  Copy from bootflash: file system
flash:      Copy from flash: file system
ftp:        Copy from ftp: file system
null:       Copy from null: file system
nvram:      Copy from nvram: file system
rcp:        Copy from rcp: file system
system:     Copy from system: file system
tftp:       Copy from tftp: file system
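Returning to the reverse Telnet discussion above, here is a minimal sketch of an async line block prepared for reverse Telnet; the line range and password are hypothetical and would match your own access server:

line 1 16
 ! do not start an EXEC; these lines are for outgoing (reverse) connections only
 no exec
 ! allow reverse Telnet connections into the lines
 transport input telnet
 login
 password cisco

A user would then reach the device attached to, say, line 3 by telnetting to one of the router's own IP addresses on TCP port 2003 (2000 plus the line number).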

Command Line Interface Shortcuts

Ctrl-A                          Jumps to the first character of the command line.
Ctrl-B or the left arrow key    Moves the cursor back one character.
Ctrl-C                          Escapes and terminates prompts and tasks.
Ctrl-D                          Deletes the character at the cursor.
Ctrl-E                          Jumps to the end of the current command line.
Ctrl-F or the right arrow key   Moves the cursor forward one character.
Ctrl-K                          Deletes from the cursor to the end of the command line.
Ctrl-L; Ctrl-R                  Repeats the current command line on a new line.
Ctrl-N or the down arrow key    Enters the next command line in the history buffer.
Ctrl-P or the up arrow key      Enters the previous command line in the history buffer.
Ctrl-U; Ctrl-X                  Deletes from the cursor to the beginning of the command line.
Ctrl-W                          Deletes the last word typed.
Esc B                           Moves the cursor back one word.
Esc D                           Deletes from the cursor to the end of the word.
Esc F                           Moves the cursor forward one word.
Delete key or Backspace key     Erases a mistake when entering a command; re-enter the command after using this key.

Redistribution

When a network is subnetted, use the subnets keyword to redistribute protocols into OSPF. Without subnets, OSPF redistributes only the major network boundaries. A brief sketch follows below.

Reachability

For EIGRP and OSPF, the router ID of every router does not necessarily need to be reachable by every other router in the network.
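As a short illustration of the subnets keyword mentioned above, the sketch below redistributes EIGRP into OSPF; the OSPF process number and EIGRP AS number are hypothetical:

router ospf 1
 ! without the subnets keyword, only major (classful) networks are redistributed
 redistribute eigrp 100 subnets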

Cisco Discovery Protocol (CDP)


CDP is a proprietary Data Link layer protocol used between Cisco devices to pass information about local conditions. CDP uses data-link multicast addresses with no protocol ID or Network layer field. This means two systems with different Network layer protocols can still learn about each other through CDP.

The way to prevent the packets being passed is to configure either the no cdp enable command on those interfaces on which you do not want to run CDP, or no cdp run at the global level. You can configure a MAC-layer filter to deny the multicast address as an alternative method to block these packets. By default, CDP packets are sent every 60 seconds. This can be changed using the cdp timer command.

CDP is both media- and protocol-independent. It runs on all media that support the Subnetwork Access Protocol (SNAP), including Ethernet, Token Ring, and Frame Relay. It comes enabled by default on all supported interfaces to both send and receive CDP information. Some interfaces, such as ATM, do not support CDP.

Network management applications can use CDP to learn the device type and the SNMP agent address of neighboring devices, enabling applications to send SNMP queries to neighboring devices. On Demand Routing (ODR) uses information from CDP.
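The commands mentioned above look like the following in practice; the interface name and timer value are hypothetical:

! change the advertisement interval from the 60-second default
cdp timer 30
!
interface Serial0
 ! stop sending and receiving CDP on this one interface
 no cdp enable
!
! alternatively, disable CDP on the entire device
no cdp run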

21 Chapter 1: General Networking Theory Chapter 1 Questions 1-1. In a bridged environment Host 1 sends a packet to Host 2. The packet is lost and needs to be retransmitted. Which device will handle the retransmission? a) Host 1 b) Host 2 c) Bridge 1 d) Bridge 2 1-2. How would you prevent routing information being forwarded out of Ethernet interface 0? a) Use the command 'passive interface ethernet 0' in the router config b) Use the command 'passive-interface ethernet 0' in the router config c) Use the command 'passive interface' in the interface config d) Use the command 'passive-interface' in the interface config e) Use the command 'no router rip' f) You cannot disable sending routing information on an interface 1-3. Which of the following are not examples of a Layer-1 specification? a) RS-232 b) V.36 c) V.35 d) RJ-45 e) V.24 1-4. Which of the following represents 802.3? a) Token Ring b) FDDI c) Ethernet d) LLC 1-5. Which of the following represents 802.5? a) Token Ring b) Token Ring bus c) FDDI d) lOOVG-AnyLAN 1-6. How many bits are in a MAC address? a) 12 b) 24

Chapter 1: General Networking Theory c) 48 d) 36 1-7. Which of the following are TCP flags: a) URG b) ACK c) SYN d) FIN e) PSH f) 3 of the above g) All of the above 1-8. Error checking and recovery occur at what layer of the OSI model? a) Application b) Presentation c) Session d) Transport e) Network f) Data Link g) Physical 1-9. Which layer of the OSI model defines the electrical, mechanical, procedural, and functional specifications for the link between communicating network systems? a) Application b) Presentation c) Session d) Transport e) Network f) Data Link g) Physical 1-10. Which layer of the OSI model is divided into LLC and MAC sub-layers? a) Application b) Presentation c) Session d) Transport e) Network f) Data Link

22

23 Chapter 1: General Networking Theory g) Physical 1-11. IP and IPX run at what layer of the OSI model? a) Application b) Presentation c) Session d) Transport e) Network f) Data Link g) Physical 1-12. What is the MTU size on a Cisco Ethernet interface?

a) 4,352 bytes
b) 1,500 bytes

c) 2,002 bytes
d) 1,007 bytes

e) 128 bytes
1-13. Which of the following are not useful for blocking network loops? a) Split horizon b) Holddowns c) Bridges d) Triggered updates e) Switches f) Poison Reverse 1-14. Which of the following are examples of DV routing protocols? a) RIP b) ISIS c) RIP v2 d) EIGRP e) IGRP f) OSPF g) Static routes 1-15. Which of the following are examples of Link State routing protocols? a) RIP b) ISIS c) RIP v2 d) EIGRP

Chapter 1: General Networking Theory e) IGRP f) OSPF g) Static routes 1-16. Which of the following are examples of a Hybrid routing protocol? a) RIP b) ISIS


c) RIP v2
d) EIGRP e) IGRP f) OSPF g) Static routes

1-17. Which of the following would be considered disadvantages of Link State routing protocols? (Choose all that apply) a) Slow Convergence b) Memory and CPU utilization c) Routing loops d) Ongoing excessive bandwidth utilization e) Flooding the network during the initial discovery process f) Heat

1-18. Which of the following would be considered a disadvantage of Distance Vector routing protocols? (Choose all that apply) a) Slow Convergence b) Memory and CPU utilization c) Routing loops d) Ongoing excessive bandwidth utilization e) Flooding the network during the initial discovery process f) Heat

1-19. Which of the following are connection-oriented? (Choose all that apply) a) UDP b) IP c) TFTP d) FTP

1-20. The RIP Poison Reverse technique advertises a route back on the interface it was learned from with a metric of what? a) 1

b) 0

c) 14 d) 15 e) 16 f) 17

1-21. What would be a summary address for the following? 172.16.25.0/24 172.16.26.0/24 172.16.27.0/24 172.16.28.0/24 172.16.29.0/24 172.16.30.0/24 a) 172.16.24.0/20 b) 172.16.24.0/21 c) 172.16.24.0/22 d) 172.16.24.0/23 e) 172.16.24.0/24 f) 172.16.24.0/25

1-22. What would be a summary address for the following? 172.108.169.0/24 172.108.170.0/24 172.108.171.0/24 172.108.172.0/24 172.108.173.0/24 172.108.174.0/24 a) 172.108.168.0/18 b) 172.108.168.0/19 c) 172.108.168.0/20 d) 172.108.168.0/21 e) 172.108.168.0/22 f) 172.108.168.0/23 g) 172.108.168.0/24 h) 172.108.168.0/25

1-23. Which of these could be contained in an IP datagram? (Choose all that apply) a) TCP data b) Ethernet frame c) Physical bits d) UDP data e) An ARP packet

1-24. What would be the most efficient supernet mask for the addresses 192.168.4.0/24, 192.168.5.0/24, 192.168.6.0/24?


a) 192.168.4.0/22 b) 192.168.0.0/18 c) 192.168.1.0/24


d) None of the above

1-25. Routers R4, R5, and R6 are connected via Frame Relay. Router R5 is a hub router, and Routers R4 and R6 are spoke routers. If R5 has split-horizon enabled, which of the following are you most likely to see? a) Router R4 cannot see routes from R5 b) Router R4 cannot see routes from R6 c) Router R5 cannot see routes from R4 or R6 d) Router R6 cannot see routes from R5 e) Router R6 cannot see routes from R4

1-26. Layer 6 of the 7-Layer OSI model is responsible for: a) Common Data Compression and Encryption Schemes b) Establishing, managing, and terminating communication sessions c) Synchronizing communication d) Determining resource availability e) None of the above

1-27. Which of the following is a component of the Data Link Layer of the OSI model?

a) NIC
b) Repeater c) Multiplexer d) Hub e) Router

1-28. Which statement is true regarding the use of TFTP? a) TFTP lies at the Transport layer and runs over IP. b) TFTP lies at the Application layer and runs over FTP. c) TFTP lies at the Transport layer and runs over ICMP. d) TFTP lies at the Application layer and runs over TCP.

e) TFTP lies at the Application layer and runs over UDP.

1-29. In a data communication session between two hosts, the session layer in the OSI model generally communicates with what other layer of the OSI model? a) The Physical layer of the peer b) The data link layer of the peer c) The peer's presentation layer d) The peer's application layer e) The peer's session layer

1-30. Which layers do the OSI model and the TCP/IP models share in common? (Choose all that apply) a) Application b) Presentation c) Session d) Transport

e) Data link
f) Physical

1-31. Under the OSPF process of your router's configuration, you type in redistribute igrp 25 metric 35 subnets in order to redistribute your OSPF and IGRP routing information. What effect did the subnets keyword have in your configuration change? a) It resulted in OSPF recognizing non-classful networks. b) It had no effect since IGRP will summarize class boundaries by default. c) It forced IGRP into supporting VLSM information. d) It caused OSPF to accept networks with non-classful masks.

1-32. Which routing protocols do not need to have their router ID reachable by other routers within any given network in order to maintain proper network connectivity? (Choose all that apply)

a) EIGRP
b) OSPF c) BGP d) LDP e) TDP f) None of the above

1-33. Which of the following does On Demand Routing use to transport ODR information from router to router?

a) RIP
b) BGP

c) CDP d) UDP e) LSP


1-34. A router running multiple protocols learns how to reach a destination through numerous different methods. Which of the following information will the router use first to determine the best way to reach the given destination? a) The length of the network mask of a route. b) The administrative distance of a route. c) The metric of a route. d) None of the above.

1-35. You are deciding which routing protocol to implement on your network. When weighing the different options, which of the following are valid considerations? a) Distance vector protocols have a finite limit of hop counts whereas link state protocols place no limit on the number of hops. b) Distance vector protocols converge faster than link state protocols. c) RIP is a distance vector protocol. RIP v2 and OSPF are link state protocols. d) Distance vector protocols only send updates to neighboring routers. Link state protocols depend on flooding to update all routers within the same routing domain.

1-36. Which of the following are Distance Vector routing protocols? (Choose all that apply) a) OSPF b) BGP c) RIP version 1 d) ISIS e) EIGRP f) RIP version 2

1-37. As the administrator of the EnableMode network, you are planning to use a dynamic routing protocol to replace the static routes. When comparing link state and distance vector routing protocols, what set of characteristics best describes Link-State routing protocols? a) Fast convergence and lower CPU utilization b) High CPU utilization and prone to routing loops c) Slower convergence time and average CPU utilization d) Fast convergence and greater CPU utilization e) None of the above

1-38. A customer has a router with an interface connected to an OSPF network, and an interface connected to an EIGRP network. Both OSPF and EIGRP have been configured on the router. However, routers in the OSPF network do not have route entries in the route table for all of the routers from the EIGRP network. The default-metric under OSPF is currently set to 16. Using this information, what is the most likely cause of this problem? a) The 'subnets' keyword was not used under the OSPF process when redistributing EIGRP into OSPF b) EIGRP is configured as a Stub area, and therefore routes will not be redistributed unless a route-map is used to individually select the routes for redistribution. c) The 'subnets' keyword was not used under the EIGRP process when redistributing OSPF into EIGRP. d) The default metric for OSPF is set to 16, and therefore all EIGRP routes that are redistributed are assigned this metric, and are automatically considered unreachable by EIGRP. e) A metric was not assigned as part of the redistribution command for EIGRP routes redistributing into OSPF, and the default behavior is to assign a metric of 255, which is considered unreachable by OSPF.

1-39. You need to make a new cable that is to be used for connecting a switch directly to another switch using Ethernet ports. What pinouts should be used for this cable?

a) 1->3, 2->6, 3->1, 6->2 b) 1->1, 2->2, 3->3, 6->6 c) 1->4, 2->5, 4->1, 5->2 d) 1->5, 2->4, 4->2, 5->1 e) 1->6, 2->3, 3->2, 6->1

1-40. What is the maximum transmit value for LLC flow control, as defined formally in the IEEE 802.2 LLC standard? a) 15 b) 127 c) 256 d) 1023

e) 4096
1-41. Which is the proper signal for pin 6 of a PHY without an internal crossover MDI signal according to the IEEE 802.3 CSMA/CD specification? a) Receive + b) Transmit + c) Receive - d) Transmit -

e) Contact 6 is not used.


1-42. What protocols are considered to be UDP small servers? (Choose all that apply) a) Echo b) Daytime c) Chargen d) Discard e) DHCP f) Finger 1-43. Which protocols are considered to be TCP small servers? (Choose all that apply). a) Echo b) Time c) Daytime d) Chargen e) Discard f) Finger g) DHCP 1-44. Which of the following statements are NOT true regarding the TCP sliding window protocol? (Choose all that apply) a) It allows the transmission of multiple frames before waiting for an acknowledgement. b) The size of the sliding window can only increase or stay the same. c) The initial window offer is advertised by the sender. d) The receiver must wait for the window to fill before sending an ACK. e) The sender need not transmit a full window's worth of data. f) The receiver is required to send periodic acknowledgements. 1-45. With regard to TCP headers, what control bit tells the receiver to reset the TCP connection? a) ACK b) SYN c) SND d) PSH e) RST f) CLR 1-46. Router A is running BGP as well as OSPF. You wish to redistribute all OSPF routes into BGP. What command do you need to change to ensure that ALL available OSPF networks are in the BGP routing table?

a) redistribute ospf 1 match external b) redistribute ospf 1 match external 1 c) redistribute ospf 1 match external all internal all d) redistribute ospf 1 match internal all external 1 external 2 e) redistribute ospf 1 match internal external 1 external 2 f) None of the above

1-47. You wish to copy a file from a server into router NL1. Which of the following IOS copy commands is NOT valid? a) copy tftp: flash: b) copy tftp flash c) copy tftp\\flash\\ d) copy tftp: //flash:\\

1-48. On your Terminal Server you are seeing spurious signals on line 6 of an asynchronous port due to contention issues. What command will fix this issue? a) flowcontrol hardware b) transport input none c) no exec d) exec-timeout 0 0

1-49. From the IOS command line interface, you accidentally press the Esc key while typing in a configuration line. What is the result of this action? a) The cursor will move to the beginning of the entire command b) The cursor will move back one character. c) The cursor will move back one word d) The cursor will remain in the same location. e) Nothing, this is not a valid shortcut.

1-50. Which command will display both the local and all remote SNMP engine identification information? a) Show SNMP ID b) Show engine c) Show SNMP engineID d) Show SNMP engine ID e) Show SNMP stats f) Show SNMP mib g) Show SNMP users

1-51. What would occur as a result of the clear ip route * command being issued? (Choose two)



a) A router would recalculate its entire table and re-establish its neighbor relationships. b) A router would recalculate its entire routing table but its neighbor relationship would not be affected. c) Only link state routing protocols would be recalculated and only those neighbor relationships re-established. d) Only the routing table would be recalculated. e) Only its neighbor relationship would be re-established.

1-52. Which IOS example will configure a remote user in a group called remotegroup to receive traps at the v3 security model and the authNoPriv security level?
a) snmp engineid remote 16.20.11.14 000000100a1ac151003 snmp enable traps config snmp manager
b) snmp-server group remotegroup v3 noauth snmp-server user remote remotegroup remote 16.20.11.14 v3 snmp-server host 16.20.11.14 inform version 3 noauth remoteuser config
c) snmp-server group remotegroup v3 noauth snmp-server user remoteAuthUser remoteAuthGroup remote 16.20.11.14 v3 auth md5 password1
d) snmp-server group remotegroup v3 priv snmp-server user remotePrivUser remotePrivGroup remote 16.20.11.14 v3 auth md5 password1 priv des56 password2


Chapter 1 Answers

1-1 1-2 1-3 1-4 1-5 1-6 1-7 1-8 1-9 1-10 1-11 1-12 1-13 1-14 1-15 1-16 1-17 1-18 1-19 1-20 1-21 1-22 1-23 1-24 1-25 1-26 1-27 1-28 1-29 1-30 1-31 1-32 1-33 1-34 1-35 1-36 1-37
a b b c a c g d g f b c, a, c, b, f d b, a, c, d d b d a, d a b, a a a a a, a a a, b, c c a a c, f d

1-38 1-39 1-40 1-41 1-42 1-43 1-44 1-45 1-46 1-47 1-48 1-49 1-50 1-51 1-52
a a b c a, c, d a, c, d, b, c c c c c b, d b

Chapter 2

Bridging & LAN Switching


General Bridging Rules

Common Cisco wisdom dictates, "If you cannot route, you bridge." Usually this is a result of using a protocol that does not support layer-3 routing, such as LAT, MOP, NetBIOS or SNA. Bridging can take a congested Token Ring or Ethernet segment and separate it into multiple collision domains, although it will remain a single broadcast domain. Bridges, like switches, are layer-2 devices. In any bridged environment, the source or destination MAC addresses are never changed, although with Translational Bridging addresses will sometimes be bit-swapped.

In the bridging section we will be looking at:
Concurrent Routing and Bridging (CRB)
Integrated Routing and Bridging (IRB)
Multi-Layer Switching (MLS)

Concurrent Routing and Bridging (CRB)

Normally, a networking device either bridges or routes protocols across all of its interfaces. With CRB, you can bridge protocols on some interfaces and route different protocols on other interfaces. CRB is applied to transparent bridging only, not source-route bridging. Interfaces are placed in transparent bridge-groups. An interface can be a member of only one bridge-group. Different bridge-groups on the same router cannot exchange bridged traffic. The major limitation to CRB is that you cannot receive a bridgeable frame and route it, or inversely, receive a routed packet and bridge the frame. The two are separate and cannot be forwarded to each other. Other limitations of CRB include:
CRB cannot route and bridge the same protocol on an interface.
An interface can only be in one transparent bridge-group.


The basic CRB configuration begins with bridge X protocol ieee, or whatever your protocol is. Next, you would define bridge crb. Then, you would assign each interface to a bridge-group with the bridge-group X command, where X is the bridge-group number. If two interfaces are in the same bridge-group, they would share the same bridge-group number. A common "gotcha" in configuring CRB is: if a bridge-group is assigned to an interface and you issue the bridge crb command, the IOS will generate a bridge X route ip command if an IP address is configured on the interface that has the bridge-group X assigned. This is because the IP address represents a protocol that has been configured for routing. If you apply the bridge-group X command after the bridge crb command, then it does not automatically create a bridge X route ip command. You must manually configure the bridge X route ip command globally so that the protocol specified is routed, instead of bridged, on that interface. Remember: With CRB, you can bridge or route the same protocol, but not on the same interface, nor can the bridged and routed domains interconnect.

Integrated Routing and Bridging (IRB)

Unlike CRB, IRB allows the router to route and bridge the same protocol and for these bridged and routed packets to communicate. In order to use IRB, you create a Bridged Virtual Interface (BVI). After the BVI is configured, the router can send routable protocols that were bridged to the BVI to be routed. For example, an IP packet arrives on a router's interface as a bridged protocol and the destination is out another interface that is not configured for bridging. The router will send the packet to the appropriate interface to be routed. With IRB, you must configure the protocols that you want the BVI to be able to route. IRB can be especially useful as a means of connecting bridged and routed networks during network migrations when the two types of networks must communicate. It provides a border checkpoint for the two networks to pass through. The basic IRB configuration begins with bridge X protocol ieee, or whatever your protocol is. Next, you would define bridge irb. Then, you would create the bridge virtual interface with the command interface bvi X, where the X represents your bridge-group number. Assign an IP address to your BVI interface. Then, assign each interface to a bridge-group with the bridge-group X command, where X is the bridge-group number. If two interfaces are in the same bridge-group, they would share the same bridge-group number. Finally, the bridge X route ip command is entered. This enables routing between interfaces with IP addresses which are not in the bridge-group, and bridging between interfaces that are in the bridge-group. Traffic can then pass between the bridged and routed networks.
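To tie the steps above together, here is a minimal configuration sketch; the bridge-group number (1), interface names, and IP addressing are illustrative only:

! CRB sketch: bridge IP between Ethernet0/0 and Ethernet0/1, route IP elsewhere
bridge crb
bridge 1 protocol ieee
no bridge 1 route ip
interface Ethernet0/0
 bridge-group 1
interface Ethernet0/1
 bridge-group 1

! IRB sketch: bridge IP within bridge-group 1 and route it through interface BVI1
bridge irb
bridge 1 protocol ieee
bridge 1 route ip
interface Ethernet0/0
 bridge-group 1
interface Ethernet0/1
 bridge-group 1
interface BVI1
 ip address 10.1.1.1 255.255.255.0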

A common behavior of IRB is this: if bridge irb is configured and no interface has the appropriate bridge-group X command configured, the BVI will shut down. Once the bridge-group X command is applied to an interface, the BVI will come back up. Remember that, just as with CRB, if an IP address is configured on an interface on which the bridge-group command is configured, applying bridge irb will create the bridge X route ip command automatically. If you want that interface to bridge IP instead, you need to enter the no bridge X route ip command. The default behavior of IRB is to bridge all protocols. The BVI interface is a normally routed interface that does not support bridging, but does represent its corresponding bridge-group to the routed interfaces. Remember: IRB allows routing and bridging in whatever combination you need.

LAN Switching

All nodes on an Ethernet network can transmit at the same time, so the more nodes you have, the greater the possibility of collisions happening, which can slow down data transmission. LAN segmentation translates into breaking up collision domains by decreasing the number of workstations per segment, using bridges or switches. Switches are sometimes called micro-segmentation devices, because they commonly have as few as one host per collision domain. Switching is normally a layer-2 data manipulation that forwards through the network by destination MAC address. High-end Cisco gear is capable of multi-layer switching. These are the common Cisco switching techniques:
Store-and-forward: Receives the complete frame before forwarding. Copies the entire frame into the buffer and then checks for CRC errors. Higher latency than the other techniques. This technique is used on Cat5000s.
Cut-through: Checks the destination address as soon as the header is received and immediately forwards the frame, lowering latency.
Fragment-free: Waits for the first 64 bytes before forwarding. Unable to verify the checksum.
Fast switching: The default switching type on Cisco routers. It can be configured manually through use of the ip route-cache command. The first packet is copied into packet memory, while the destination network or host information is stored in the fast-switching cache.
Process switching: This technique does not use route caching, so it runs slowly; however, slow usually means safe. To enable it, disable the route cache with the no ip route-cache command (or the equivalent command for the protocol in question).
Optimum switching: From its name you can understand what it is: high performance! This is the default on 7500s.
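For the router-based switching paths (fast switching versus process switching), a brief sketch of how they are toggled per interface follows; the interface name is only an example:

interface Ethernet0
 ! Force process switching for IP on this interface
 no ip route-cache
 ! Restore the default fast switching behavior
 ip route-cache
! Verify which switching path is in use
show ip interface Ethernet0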

Trunking


Trunks are implemented to transport the packets of multiple VLANs over a single network link. They can be either IEEE 802.1Q or Cisco's proprietary InterSwitch Link (ISL). If you are using another vendor's switch in your network and require a trunk, you will need to use the IEEE method. This has also become common in pure Cisco networks because of the reduced overhead when compared to ISL. The 802.1Q trunking method is the industry standard for trunk links, and can be used as an alternative to inter-switch linking (ISL). Using 802.1Q or ISL alone will not cause any bridging loops. Trunks are configured for a single Fast-Ethernet, Gigabit Ethernet, or Fast- or Gigabit EtherChannel bundle and another network device, such as a router or second switch. Notice that I specifically excluded 10Mb Ethernet ports, which cannot be used for trunking. For trunking to be enabled on EtherChannel bundles, the speed and duplex settings must be configured the same on all links. If part of an EtherChannel bundle fails, traffic will still be passed, but at a reduced rate. For trunking to be auto-negotiated on Fast Ethernet and Gigabit Ethernet ports, the ports must be in the same VTP domain. Fast EtherChannel simply provides a way to bond multiple Ethernet links into one larger channel. It will not introduce STP loops in the network. ISL is Cisco's proprietary protocol for interconnecting multiple switches and maintaining VLAN information as traffic goes between switches. It operates in a point-to-point configuration, at full- or half-duplex, and will support up to 1000 VLANs. The Ethernet frame is encapsulated with a 26-byte header that transports VLAN IDs between switches and routers. Not all switches support all encapsulation methods; for instance the Cat2948G and Cat4000 series switches support only 802.1Q encapsulation. In order to determine whether a switch supports trunking, and what trunking encapsulations are supported, look to the hardware documentation or use the show port capabilities command. There are five trunking modes:
On: Forces the port to become a trunk port, even if the neighboring port does not agree with the change.
Off: Forces the port to become non-trunking, even if the neighboring port does not agree with the change.
Desirable: Causes the port to actively seek to convert the link to a trunk. The port becomes trunked if the neighboring port is set to "on", "desirable", or "auto" mode.
Auto: Makes the port available to serve as a trunk link. The port becomes a trunk port if the neighboring port is set to "on" or "desirable" mode. This is the default mode for both Fast and Gigabit Ethernet ports.

Nonegotiate: Puts the port into trunking mode permanently, but does not pass Dynamic Trunk Protocol (DTP) frames, which is what allows the trunk to negotiate ISL and 802.1Q links. This means the neighboring port must be manually configured as a trunk port in order to establish a trunk.

The available trunking encapsulation settings for trunks:
Inter-Switch Link (ISL): A Cisco-proprietary trunking encapsulation that adds a 26-byte header and 4-byte trailer to the frame. ISL supports the processing of untagged frames. ISL supports up to 1024 VLANs.
IEEE 802.1Q (dot1q): An industry-standard trunking encapsulation that does not re-encapsulate the original frame; it inserts a tag into it instead. Because multiple vendors support dot1q, it is becoming more common in newer switched networks. 802.1Q uses a tag protocol ID of 0x8100 (Ethertype 8100). 802.1Q allows the encapsulation of multiple trunks within a single trunk. The 802.1Q tag is only 4 bytes in length. 802.1Q supports up to 4096 VLANs.
Negotiate: The port negotiates with its neighbor port to mirror its encapsulation configuration, either ISL (preferred) or 802.1Q trunk. This configuration option is only available in switch software release 4.2 and later.

Trunking Facts:
For trunking to be auto-negotiated on Fast Ethernet and Gigabit Ethernet ports, the ports must be in the same VTP domain.
For trunking to be enabled over EtherChannel bundles, the speed and duplex settings must be configured the same on all links.
If part of an EtherChannel bundle fails, traffic will still be passed, but at a slower rate.
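As a point of reference, here is a hedged sketch of trunk configuration on an IOS-based switch; the interface number and VLAN list are examples only, and the exact syntax varies by platform and software release:

interface FastEthernet0/1
 ! Choose the encapsulation explicitly (dot1q or isl) where the platform supports both
 switchport trunk encapsulation dot1q
 ! Force the port to trunk rather than relying on DTP negotiation
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
! Verify trunking status and allowed VLANs
show interfaces trunk

On a set-based (CatOS) switch, the equivalent would be along the lines of set trunk 2/1 desirable dot1q.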

Catalyst 6500 SPAN Information

SPAN sends packets for analysis to a destination port (also known as a monitor port) on a switch. If the trunking mode of a SPAN destination port is "on" or "nonegotiate" during SPAN session configuration, SPAN packets forwarded by the destination port are encapsulated as specified by the trunk type; however, the destination port stops trunking, and the show trunk command reflects the trunking status for the port prior to the SPAN session configuration.

Gigabit Ethernet

Gigabit Ethernet is a standard described under IEEE 802.3z, defined in 1998. The 802.3ab document described the 1000BASE-T standard in 1999. Both describe Gigabit speed implementations, the difference being that 802.3z uses fiber and 802.3ab uses copper.

Dynamic Inter-Switch Link Protocol (DISL)

Dynamic Inter-Switch Link Protocol is only used when you have two Cisco devices connected together by a Fast Ethernet link. DISL will ease the configuration burden because only one end of the ISL link needs to be configured.

Fast EtherChannel (FEC)


EtherChannel is a Cisco proprietary method for aggregating the bandwidth of up to eight Fast Ethernet links per channel (or eight Gigabit Ethernet links per channel) on a switch and having them appear to be one logical connection. The requirements are that all the ports be in the same VLAN; have the same speed and duplex settings; and, if the switch is not a Cat6000, that contiguous ports be used. Besides increasing the bandwidth available between devices, this also adds a level of protection, because if one of the links within the EtherChannel were to go down, the traffic would continue to pass at a reduced rate without interruption.

Using PAgP to Configure Fast EtherChannel

The Port Aggregation Protocol (PAgP) supports automatic creation of Fast EtherChannel links. PAgP packets are sent between FEC-capable ports to negotiate the forming of a channel. The protocol learns the capabilities of port groups dynamically and informs neighboring ports. PAgP first identifies paired channel-capable links. It then groups ports into a channel. The channel is added to the spanning tree as a single bridge port. An outbound broadcast or multicast packet is transmitted from one port in the channel only, and not from every port in the channel. Broadcast and multicast packets sent from one port in a channel are blocked from returning on any other port of the channel. There are four channel modes: on, off, auto, and desirable. The four channel modes are user configurable. PAgP packets are exchanged only between ports in auto and desirable mode. Only auto-desirable, desirable-desirable, and on-on configurations will allow formation of a channel. Ports configured in on or off mode do not exchange PAgP packets. The default channel mode is auto. The most robust configuration for an EtherChannel is to set both switches to desirable mode; this remains robust when one side or the other encounters an error or is reset. Either auto or desirable mode will allow connected ports to negotiate with each other to determine if they can form a channel. The determination depends on trunking state, port speed, and native VLAN. Ports in different channel modes can form an EtherChannel as long as the modes are compatible. Examples are:
A port in desirable mode can form an EtherChannel with a port in desirable or auto mode.

A port in auto mode can form an EtherChannel with a port in desirable mode.
A port in auto mode cannot form an EtherChannel with a port that is also in auto mode, since neither port initiates negotiation.
A port in on mode can form a channel only with a port in on mode, because ports in on mode do not exchange PAgP packets.
A port in off mode cannot form a channel with any other port.

All ports in a channel must have the same speed settings. You cannot mix and match different types of Ethernet ports (such as 10M, 100M, GigE) in the same channel. Similarly, all ports need to have identical duplex settings. Only the combinations of auto-desirable, desirable-desirable, and on-on will allow a channel to be formed. If a device on one side of the channel, such as a router, does not support PAgP, the other device must have PAgP set to on. The PAgP modes are:
off: PAgP will not run. The channel is forced to remain down.
on: PAgP will not run. The channel is forced to come up.
auto: PAgP runs passively. Channel formation is desired, but not initiated.
desirable: PAgP runs actively. Channel formation is desired and initiated.
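For illustration, an IOS-based sketch of bundling two ports with PAgP in desirable mode might look like the following; the interface range and channel-group number are examples:

interface range FastEthernet0/23 - 24
 ! PAgP desirable on both switches is the most robust combination
 channel-group 1 mode desirable
! Verify that the bundle has formed
show etherchannel summary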

If the output of the show interface status command shows a status of errdisable, the LED associated with the port on the front panel will be solid orange. Some possible reasons for the interface going into err-disable state are:
port-channel misconfiguration
duplex mismatch
EtherChannel guard detects a misconfigured EtherChannel
Bridge Protocol Data Unit (BPDU) Guard violation
UniDirectional Link Detection (UDLD) condition
late-collision detection
link-flap detection
security violation
Port Aggregation Protocol (PAgP) flap
Layer 2 protocol tunneling (l2ptguard)
DHCP snooping rate-limiting
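When a port does land in err-disable state, a quick way to confirm and clear it (after fixing the underlying cause) is sketched below; the interface name is an example:

! List ports currently in err-disable state
show interfaces status err-disabled
! After correcting the cause, bounce the port to bring it back up
interface FastEthernet0/10
 shutdown
 no shutdown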

To detect the unidirectional links before the forwarding loop is created, Cisco uses the UDLD protocol. UDLD is a Layer 2 protocol that works with the Layer 1 mechanisms to determine the physical status of a link. At Layer 1, autonegotiation takes care of physical signaling and fault detection. UDLD does some things that auto-negotiation cannot do, such as detecting the identities of neighbors and shutting down misconnected ports.

UDLD works by exchanging protocol packets between the neighboring devices. In order for UDLD to work, both devices on the link must support UDLD and have it enabled on their respective ports. Enabling both auto-negotiation and UDLD allows the Layer 1 and Layer 2 detection mechanisms to work together to prevent physical and logical unidirectional connections and the malfunctioning of other protocols.
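A minimal sketch of enabling UDLD on an IOS-based switch follows; whether to use normal or aggressive mode is a design choice, and the interface name is only an example:

! Enable UDLD globally on all fiber-optic interfaces
udld enable
! Or enable it per interface (aggressive mode also err-disables the port on failure)
interface GigabitEthernet0/1
 udld port aggressive
! Check neighbor echo status
show udld GigabitEthernet0/1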


Each switch port configured for UDLD will send UDLD protocol packets containing the port's own device/port ID, and the neighbor's device/port IDs seen by UDLD on that port. Neighboring ports should see their own device/port ID (echo) in packets received from the other side. If the port does not see its own device/port ID in any incoming UDLD packets for a specific length of time, the link is considered unidirectional.

Virtual LAN (VLAN)

A VLAN is an extended logical network configured independent of the physical network layout. Each port on a switch can be defined to join a specific VLAN. VLAN ports on a switch can be assigned statically using a VLAN management application or by working directly within the switch. Dynamic VLANs are a more convenient approach, in which ports on a switch can automatically determine their VLAN assignments. Each hub segment connected to a switch port can be assigned to only one VLAN. Unlike a physical subnet, devices do not need to be connected to a single physical cable segment. Devices can be part of a subnet and still be connected to different switches in different locations. However, since each VLAN is a separate broadcast domain, routing between them must be enabled if data is to be passed. Most VLANs use frame filtering (frame tagging) with user-defined offsets to examine particular information about each frame, and uniquely assign a user-defined ID to each frame header.

VLAN Trunk Protocol (VTP)

The name VLAN Trunk Protocol (VTP) is misleading, since it doesn't really have much to do with actual trunking, except that switches use the VTP information to know what VLANs to carry over a trunk line. VTP is really a way to update the VLANs that a switch recognizes as valid on multiple switches by updating a single "server" switch. All of the rest of the switches in the VTP domain get a copy of the list of valid VLANs from the server switch. In a switched environment a subnet corresponds to a VLAN, and a VLAN may map to a single Layer 2 switch, or it may span several switches, especially at the access layer. Also, it is likely that one or more VLANs may be present on any particular switch. VLAN Trunk Protocol (VTP) is a layer-2 messaging protocol that centralizes the management of VLAN additions, deletions and changes on a network-wide basis. This simplifies the management of large switched networks with many VLANs. When you add a new VLAN to the network you only have to define it on a single switch. The rest of the switches are updated automatically through their VTP membership. If you decide not to use VTP, then each VLAN will need to be configured manually on each individual switch, which is a lot of work and can easily lead to problems if you mistype a single character. If you have more than one group of switches, and each group has a different set of VLANs that it has to recognize, you should assign a separate domain to each group of switches. VTP domains consist of one or more interconnected switches that share the same VLAN configuration. A switch can only be configured as a member of a single VTP domain. You can specify the global VLAN configuration for the domain using either the CLI or an SNMP session. Early switching design specifications promoted the ability of VTP to create global VLAN groups that could span vast networks. In reality all this does is generate unnecessary and expensive wide area traffic for little gain. Most network designers want to limit VTP advertisements sent over slow and expensive wide area links, so the recommendation is not to set up VTP domains which span WAN links. Switches which are part of VTP domains can be configured to operate in one of three VTP modes:
Server: Advertises VLAN configuration to other switches in the same VTP domain and synchronizes with other switches in the domain. Can create, modify, and delete VLANs as well as modify VLAN configuration parameters such as VTP version and VTP pruning for the whole domain. This is the default mode for a switch.
Client: Advertises VLAN configuration to other switches in the same VTP domain and synchronizes its VLAN configuration with other switches using advertisements received over trunk links. Unable to create, change, or delete VLAN configurations.
Transparent: Does not advertise its VLAN configuration and does not synchronize its VLAN configuration with other switches. A switch running VTP version 2 will forward VTP advertisements, but will not act on them.
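Here is a brief, hedged sketch of VTP configuration on an IOS-based switch; the domain name and password are placeholders, and on older software these commands live in VLAN database mode instead of global configuration:

! On the switch that will own the VLAN database
vtp domain ENABLEMODE
vtp password cisco123
vtp mode server
! On the other switches in the domain
vtp domain ENABLEMODE
vtp password cisco123
vtp mode client
! Verify the mode and the configuration revision number
show vtp status

Checking the revision number with show vtp status before adding a switch to a production network helps avoid the database-overwrite problem described later in this section.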

Advertisement types include: requests from clients, summary advertisements and subset advertisements. Advertisements carry two types of information:
Global Information: Includes VTP Domain Name, VTP Configuration Revision Number, Update Identity, Update Timestamp, MD5 Digest
VLAN Information: Includes VLAN ID, VLAN Name, VLAN Type, VLAN State

VTP advertisements carry configuration revision numbers that are incremented every time a change is made. This allows identification of the most recent updates to the network topology. When a switch finds an advertisement with a

higher configuration revision number, it will save the new VTP database, writing over the old one. A VLAN that does not exist in the new database is automatically deleted from the switch. Any ports that were in the VLAN will be orphaned. A common mistake is to add a switch that has been used on a separate or test network to a production network, without being aware of the revision number. Since test networks change much more frequently than production networks, the new switch is likely to have a higher configuration revision number than the production VTP domain. The result is that the entire production domain's VTP database gets overwritten and any ports assigned to the lost VLANs lose their VLAN membership and become unavailable to users. If you ever receive a call that all the switched ports on a network have suddenly locked up and no traffic is being passed, one of the first places to look is a new switch added to the network. Good documentation and control over physical access to network devices are probably your best defense against this. Also, to prevent this from happening, the command clear config all (on a set-based switch) or write erase (on an IOS-based switch) should always be used before any new switch is added to a production network. Note that all switches in the VTP domain must run the same VTP version. VTP version 2 has features not supported in VTP version 1, including Token Ring LAN switching and VLANs, unrecognized Type Length Values, Version Dependent Transparent Mode and Consistency Checks. In general, don't enable VTP version 2 in the VTP domain unless you are ready to migrate all the switches to that version. However, if the network is Token Ring, you must use VTP version 2. VTP pruning increases available bandwidth by restricting flooded traffic to the trunk links that need it, and blocking flooded traffic to VLANs in the pruning-eligible list. VTP pruning can only be enabled on a VTP server, and that will enable it for the entire management domain. VLAN 1 is always pruning-ineligible, and VLANs 2 through 1000 are pruning-eligible. VTP pruning is disabled by default.

Spanning-Tree Protocol (STP) 802.1D

Spanning-Tree Protocol (STP) is a Layer 2 link management protocol using the Spanning Tree Algorithm (STA) to calculate the best loop-free path through a switched network. STP is designed to run on bridges and switches to provide path redundancy and prevent undesirable loops from forming in the network. Switches send and receive spanning-tree frames at regular intervals. These spanning-tree frames are used to construct a loop-free path, forcing redundant data paths into a standby (blocked) state. This is operationally transparent to the network hosts.

Types of STP on Cisco Switches

Common Spanning Tree (CST)

When connecting a Cisco switch to a non-Cisco device through an 802.1Q trunk, the Cisco switch combines the spanning-tree instance of the 802.1Q native VLAN of the trunk with the spanning-tree instance of the non-Cisco 802.1Q switch. The primary advantages of CST are that only one set of BPDUs is used; it is only necessary to track changes for a single instance of STP, and non-Cisco switches can be added to the mesh. However, with only one STP algorithm running, sub-optimal paths are more likely to be selected than under other methods. With CST, less bandwidth will be used to negotiate a root bridge, although with only one root bridge for the entire network, it may take longer for STP to recalculate when a change occurs.

Per-VLAN Spanning Tree (PVST)

A Cisco proprietary method of connecting through ISL VLAN trunks, in which the switches maintain one instance of spanning tree for each VLAN allowed on the trunk. This is the default STP used on ISL trunks. Since each VLAN has its own instance of STP, there is more granular control of the path selection process, and fewer sub-optimal paths may be invoked. Because the size of the STP topology is reduced, this can have the effect of reducing convergence time and increasing scalability and stability.

Per-VLAN Spanning Tree Plus (PVST+)

A Cisco proprietary method of connecting through 802.1Q VLAN trunks, in which the switches maintain one instance of spanning tree for each VLAN allowed on the trunk, versus non-Cisco 802.1Q switches which maintain one instance for ALL VLANs. This is the default STP used on 802.1Q trunks. Since each VLAN has its own instance of STP, there is more granular control of the path selection process, and fewer sub-optimal paths may be invoked. Because the size of the STP topology is reduced, this can have the effect of reducing convergence time and increasing scalability and stability. PVST+ has replaced PVST and is now the default spanning tree protocol used on Cisco switches. Besides the features offered by PVST, PVST+ also offers Layer 2 load balancing.

Multiple Spanning Trees (MST) 802.1s

When using 802.1D (spanning tree), 802.1Q (VLANs), and multiple switches, you can run into issues with spanning tree choosing unexpected paths and/or disabling desired paths. To solve this problem, 802.1s (MST) has been introduced by the IEEE. MST runs one instance of spanning tree for each group of VLANs that shares the same network topology. Thus, the optimum paths will be chosen for each VLAN.

Multiple Instance STP (MISTP)

MISTP is Cisco's proprietary multiple spanning tree protocol. MISTP inspired MST (covered above). On Cisco switches, MISTP is an alternative to PVST+.

MISTP-PVST+ Mode

This spanning tree protocol option is available on the newer Cisco switches and is used to support older switches that can only support PVST+. This mode will allow you to use MISTP on your switch but still communicate with switches running PVST+.

Rapid Spanning Tree Protocol (RSTP) 802.1w

The IEEE 802.1w standard, Rapid Spanning Tree Protocol (RSTP), is an enhancement to 802.1D. RSTP offers more rapid spanning tree convergence, and industry-standard alternatives to Cisco's PortFast, UplinkFast and BackboneFast. With RSTP, you can reduce the convergence time of a port from the 50-second default of 802.1D to as low as 1 second. This fast convergence is critical for latency-sensitive applications like voice and video.

Redundancy Without Loops


STP runs on bridges and switches in order to prevent loops in the network. There are different flavors of STP, with IEEE 802.1D being the most common. It is found in situations where you want to allow redundant links, but not loops. Redundant links are important as backups in case of a link failure in a network. If a primary link fails, the backup links are activated so that users can continue using the network with minimal service interruptions. Without STP running on the bridges and switches, such a situation could result in loops. To provide this desired path redundancy, as well as to avoid a loop condition, STP defines a series of paths that span all switches in an extended network. It forces less desirable links into a standby (blocked) state, while leaving others in a forwarding state. If a link in forwarding state becomes unavailable, STP recognizes the topology change in the network, and recalculates the best paths. STP allows the redundant links, but with all but the best blocked, so that the switched fabric looks logically (but not physically) like this diagram:

[Figure 2-2. STP redundant links]

Root Bridges and Switches

The key to STP is the selection of a root bridge, which becomes the focal point in the network. All other decisions in the network, such as which ports are blocked and which ports are put in forwarding mode, are made from the perspective of this root bridge. When implemented in a switched network, the root bridge is usually referred to as the "root switch." Depending on the type of spanning tree enabled, each VLAN may have its own root bridge/switch. In this case, the roots for the different VLANs may all reside in a single switch, or they can reside in different switches, depending on the judgment of the network architect. You should remember that selection of the root switch for a particular VLAN is extremely important. You can allow the network to decide the root using arbitrary criteria, or you can define it yourself. With either option there is risk; the switches are unlikely to select the optimal root by themselves, but a bad network architect can make an even worse choice. You should control the selection of the root whenever possible, but great care must be taken to understand what the consequences of a bad decision can be.

Bridge Protocol Data Units (BPDUs)

All switches exchange information to use in the selection of the root switch, as well as for subsequent configuration of the network. This information is carried in Bridge Protocol Data Units (BPDUs). The BPDU contains parameters that the switches use in the selection process. Each switch compares the parameters in the BPDU that it is sending to its neighbor with the ones that it is receiving from its neighbor. BPDUs are multicast frames sent out periodically by switches to announce their existence, resources and recent changes to a switch's configuration. They:
Propagate bridge IDs in order for the selection of the root switch to take place.
Are used to determine loop locations within a network.
Provide notification of network topology changes.
Remove loops by placing redundant switch ports in a backup state.

As a BPDU leaves a port, it applies the root port cost. Path Cost is the total sum of all of the port costs, and is what STP uses to determine which ports should forward and which ports should block. If the path cost is the same for several ports, STP will use the lowest port ID. The thing to remember in the STP root selection process is that "smaller is better." If the Root ID advertised on Switch A is a lower value than the Root ID

that its neighbor (Switch B) is advertising, then Switch A's information is better. Switch B stops advertising its Root ID, and instead accepts that of Switch A. In a bridged network environment running under IEEE 802.1D, a bridge takes the maximum age, forward delay, and hello time parameters from the root bridge BPDU. The BPDUs are in the following format:
Field (length in octets):
Protocol ID (2) | Version (1) | Message Type (1) | Flags (1) | Root ID (8) | Root Path Cost (4) | Bridge ID (8) | Port ID (2) | Message Age (2) | Max Age (2) | Hello Time (2) | Forward Delay (2)
Protocol ID: indicates the packet is a BPDU.
Version: BPDU version used.
Message Type: the stage of the negotiation.
Flags: two bits used to indicate a change in topology and to acknowledge a TCN BPDU.
Root ID: root bridge priority (2 bytes) followed by the MAC address (6 bytes).
Root Path Cost: total cost from this bridge to the designated root bridge.
Bridge ID: bridge priority (2 bytes) followed by the MAC address (6 bytes), where smaller is better, and the lowest value wins! The default bridge priority is 0x8000 (32768 decimal).
Port ID: ID of the port from which BPDUs are sent (a root port), made up of the configured port priority and the bridge MAC address.
Message Age: timer for aging messages (only has an effect on the network if the root bridge is configured with this parameter).
Maximum Age: maximum message age before information from a BPDU is dropped as being too old when no BPDUs are being received (only has an effect on the network if the root bridge is configured with this parameter). The default value is 20 seconds.
Hello Time: time between BPDU configuration messages sent by the root bridge (only has an effect on the network if the root bridge is configured with this parameter). The default value is 2 seconds.
Forward Delay: stops a bridge from forwarding data temporarily to allow information about a topology change to disseminate to all parts of the network. This allows ports which need to be turned off in the new topology to be switched off before the new ports are turned on (only has an effect on the network if the root bridge is configured with this parameter).

Operation Under STP

Selection of the root switch is the most important STP decision. Before deciding to configure STP, you should determine which switch will be the root of the spanning tree. It does not necessarily have to be the most powerful switch, but it should be the most central switch on the network. The network will be logically laid out from the perspective of this root device. You should not change the root switch configuration if you can possibly avoid it, since reconfiguration triggers spanning tree recalculation, which will affect network performance. Backbone switches, since they rarely have their configuration changed, are often defined as root switches. Backbone switches are usually more powerful than other switches, and are centrally placed within the network. They are also less likely to be disturbed during moves and changes within the network. Selecting a backup root bridge is recommended as a matter of good practice. The backup will have a bridge priority in between the default and the primary root bridge. If the primary root switch were to fail, human planning would still be the determining factor in recalculating the spanning tree. "Bridge priority" is the main variable that defines the root bridge. The switch with the lowest bridge priority will be selected as the root switch. After determining the selection of a root switch, one of the two commands set spantree root {vlan_id} (for set-based switches) or spanning-tree vlan {vlan_id} root primary (for IOS-based switches) is used to set the "bridge priority" of the switch. The default switch priority is 32768. This command sets the switch to run using a low priority in the tree, making it the root switch. For set-based switches and IOS-based switches, using this command will change the priority to 8192 (one fourth of 32768). On IOS-based switches with an IOS of 12.1(8)EA1 or later, the priority will be set to 24576. Here is an example of this command on an IOS-based switch. In this example the root switch for VLAN 100 will be set:

cat(config)# spanning-tree vlan 100 root primary
vlan 100 bridge priority unchanged at 24576
vlan 100 bridge max aging time unchanged at 20
vlan 100 bridge hello time unchanged at 2
vlan 100 bridge forward delay unchanged at 15
cat(config)#

You could also issue a command specifying a priority:

cat(config)# spanning-tree vlan 100 priority 24576

STP Step-by-Step

The root switch selection process starts with each switch transmitting BPDUs to its directly connected switch neighbors on a per-VLAN basis. As the BPDUs go

through the network, each switch compares the BPDU it sent out to the ones it has received from its neighbors. From this comparison of transmitted and received BPDUs, the switches determine the root switch. The switch with the lowest priority in the network wins this election process. (Remember, there may be one root switch identified for each VLAN, depending on the type of STP selected.) Any time STP is calculated or recalculated, the switches use these rules:

Rule One for STP: All ports of the root switch must be in forwarding mode.

Next, each switch determines its best path to get to the root. The switches compare the information in all the BPDUs received on all their ports. The port with the lowest-valued information contained in its BPDU is used to get to the root switch; that port is called the root port. After a switch figures out its root port, it proceeds to Rule Two.


Rule Two for STP: The selected root ports need to be set to forwarding mode.

For each LAN segment, the switches communicate with each other to determine which switch on that LAN segment is best to use for moving data from that segment to the root bridge. This switch is called the designated switch.

Rule Three for STP: In any given LAN segment, the port of the designated switch that connects to the LAN segment must be placed in forwarding mode.

Rule Four for STP: All other ports in all the switches within a VLAN must be placed in blocking mode. This applies only to ports that are connected to other bridges or switches. Ports connected to workstations or PCs should not be affected by the STP calculation; they remain in forwarding mode. Users on your network may, however, see a delay during the recalculation process, as all data transmissions are halted.

STP Timers

Forward delay timer: Amount of time a port will remain in the listening and learning states before going into the forwarding state.
Hello timer: How often the switch broadcasts Hello messages to other switches.
Maximum age timer: How long protocol information received on a port is stored by the switch.
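If these timers ever need to be tuned, it should be done on the root bridge, since the root's values are the ones carried in its BPDUs. A hedged IOS sketch is shown below; the VLAN number and the values (simply the defaults) are examples:

spanning-tree vlan 100 hello-time 2
spanning-tree vlan 100 forward-time 15
spanning-tree vlan 100 max-age 20
! Confirm the timers currently in use
show spanning-tree vlan 100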

Port State Progression in STP

STP domain ports will progress through the following states:
Blocking: Listens for BPDUs from other bridges, but does not forward them or any traffic.
Listening: An interim state while moving from blocking to learning. Listens for frames and detects available paths to the root bridge, but will not collect host MAC addresses for its address table.
Learning: Examines the data frames for source MAC addresses to populate its address table, but no user data is passed.
Forwarding: Once the learning state is complete, the port will begin its normal function of gathering MAC addresses and passing user data.
Disabled: Either there has been an equipment failure, a security issue, or the port has been disabled by the system administrator.

Notes about STP Port States:
A port in blocking state does not participate in frame forwarding.
A port always goes into blocking state immediately following switch initialization.
When a port changes from the listening state to the learning state, it is preparing to participate in frame forwarding.
A port in the forwarding state actually forwards frames (user data, BPDUs, etc.).

STP Broadcast Domain Characteristics

Where redundant links exist, all but the one with the least distance from the root switch are blocked.
STP convergence can take upwards of 50 seconds.
Broadcast traffic within the layer-2 domain (VLAN) interrupts every host.
Broadcast storms within the layer-2 domain affect the whole domain.
Isolating problems can be time consuming.
Network security options within the layer-2 domain (VLAN) are limited.

Topology Changes: TCN, TCA, and TC


Normally, BPDU traffic is received from the root bridge on the root port, but is never sent to the root bridge. If a bridge has a topology change, it sends a TCN (Topology Change Notification) BPDU out its root port towards the root bridge. The next upstream bridge acknowledges receipt of the TCN by replying with a BPDU which has the TCA (Topology Change Acknowledgement) bit set. The TCN/TCA process is repeated hop-by-hop, until the root bridge receives the TCN BPDU. When the root is aware of a change, it sends out BPDUs with the TC (Topology Change) bit set.

STP Enhancements

There are three major enhancements available for Spanning Tree, as it is applied on Cisco devices:
PortFast: By default, all ports on a switch are assumed to have the potential to have bridges or switches attached to them. Since each of these ports must be included in the STP calculations, they must go through the four different states whenever the STP algorithm runs (when a change occurs to the network). Enabling PortFast on the user access ports is basically a commitment between the network architect and the switch, agreeing that the specific port does not have a switch or bridge connected, and therefore this port can be placed directly into the forwarding state; the port avoids being unavailable for 50 seconds while it cycles through the different bridge states, and this also simplifies the STP recalculation and reduces the time to convergence. PortFast suppresses TCN generation whenever the port goes up or down.
UplinkFast: Convergence time on STP is 50 seconds. Part of this is the need to determine alternative paths when a link between switches is broken. This is unacceptable on networks where real-time or bandwidth-intensive applications are deployed (basically any network). If the UplinkFast feature is enabled (it is not by default) AND there is at least one alternative path whose port is in a blocking state AND the failure occurs on the root port of the actual switch, not an indirect link, then UplinkFast will allow switchover to the alternative link without recalculating STP, usually within 2 to 4 seconds. STP can skip the listening and learning states before unblocking the alternate port.
BackboneFast: BackboneFast is used at the Distribution and Core layers, where multiple switches connect together, and is only useful where multiple paths to the root bridge are available. This is a Cisco proprietary feature that speeds recovery when there is a failure with an active link in the STP. Usually when an indirect link fails, the switch must wait until the maximum aging time (max-age) has expired before looking for an alternative link. This delays convergence in the event of a failure by 20 seconds (the max-age value). When BackboneFast is enabled on all switches, and an inferior BPDU arrives at the root port, indicating an indirect link failure, the switch rolls over to a blocked port that has been previously calculated. The primary difference between UplinkFast and BackboneFast is that BackboneFast can detect indirect link failures, and is used at the Distribution and Core layers; while UplinkFast is aware of only directly connected links, and is used primarily on Access layer switches. If UplinkFast is turned on for the root switch, the switch will automatically disable it. BackboneFast can be enabled on both set-based and IOS-based switches.
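The following is a minimal IOS-based sketch of the three enhancements just described; the interface number is an example, and UplinkFast and BackboneFast are enabled globally:

! PortFast on an access port that connects only to a host
interface FastEthernet0/5
 spanning-tree portfast
! UplinkFast on an access-layer switch with a blocked uplink
spanning-tree uplinkfast
! BackboneFast on every switch in the domain
spanning-tree backbonefast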

Multicast

Cisco switches use two protocols, IGMP and CGMP, to manage multicast by allowing directed switching of multicast traffic, and to dynamically configure switch ports so that IP multicast traffic is forwarded only to appropriate ports:
Internet Group Management Protocol (IGMP): IGMP is a standard protocol to manage the multicast transmissions passed to routed ports. One problem with this protocol is that if a VLAN on a switch is set to receive, all the workstations on that VLAN will get the multicast stream.
Cisco Group Management Protocol (CGMP): CGMP is a Cisco-proprietary protocol to control the flow of multicast streams to individual VLAN port members. CGMP requires IGMP to be running on the router. This solves the problem cited above of all workstations receiving the multicast stream.

The globally assigned 48-bit multicast address 01-80-C2-00-00-01 is reserved for use in MAC Control PAUSE frames for inhibiting transmission of data frames from a DTE in a full-duplex IEEE 802.3 LAN. IEEE 802.1D-conformant bridges will not forward frames sent to this multicast destination address, regardless of the state of the bridge's ports, or whether or not the bridge implements the MAC Control sub-layer. You will find more on Multicasting in Chapter 7.

Multi-Layer Switching (MLS)

The need for improved network performance keeps increasing. To keep connections fast and reliable, networks must adjust rapidly to changes and failures to find the best paths. Networks must also be as transparent as possible to end users. Determining the best path is the job of routing protocols. This can be a CPU-intensive process. Using switching hardware for finding best paths can increase performance significantly. An MLS goal is to achieve this performance increase. Multi-layer switching is the ability of a switch to support packet switching and routing in hardware, with optional support for Layers 4 through 7 switching in hardware as well. Hardware switching and routing offers high throughput at near line rates even with all ports simultaneously sending traffic. The basic purpose of MLS is for switch hardware to bypass a router in setting up a communication path for packets between two devices in different VLANs. With multilayer switches, the focus is performance. Multilayer switches have packet-switching throughputs in the millions of packets per second, compared to general-purpose routers, which reach their limit at just above a million packets per second. Hardware switching requires a Layer 3 engine (the route processor) to load packet processing information from software (switching information, access lists, QoS, and other information) to the hardware. Cisco Catalyst switches use either standard multilayer switching (traditional MLS) or Cisco Express Forwarding (CEF-based MLS). All Catalyst switches support CEF-based MLS. The components of MLS are the MLS switching engine (MLS-SE) and the MLS route processor (MLS-RP), with the Multilayer Switching Protocol (MLSP) used to communicate:
The MLS-SE is an MLS-enabled switch, which normally requires a router to route between subnets/VLANs. With special hardware and software, the MLS-SE can handle the rewrite of the packet.

The MLS-RP is the MLS-enabled router, which performs the conventional function of routing between subnets/VLANs.


Multilayer Switching Protocol (MLSP), used to communicate between MLS-SE and MLS-RP.

A packet passing through a routed interface has its non-data portions rewritten as the packet heads to its destination, hop by hop. This may seem confusing because a Layer 2 device appears to take on a Layer 3 task. In reality, the switch only rewrites Layer 3 information and "switches" between subnets/VLANs; the router is still responsible for route calculations and best-path determination. This becomes clearer if you think of the routing and switching functions as separate, especially when they are within the same chassis (as with an internal MLS-RP). Think of MLS as a more advanced form of route cache, one that separates the cache from the router functions and places it on the switch. MLS requires both the MLS-RP and the MLS-SE. The MLS-RP can be internal (installed in the switch chassis) or external (connected by cable to a switch trunk port):

Internal MLS-RP examples are the Route Switch Module (RSM) and Route Switch Feature Card (RSFC). You install the RSM in a slot, or the RSFC in the supervisor engine, of a Catalyst 5500/5000 series switch. You do the same with the Multilayer Switch Feature Card (MSFC) for Catalyst 6500/6000 series switches.

External MLS-RP examples include Cisco 7500, 7200, 4700, 4500, and 3600 series routers. To support MLS IP, all MLS-RPs require Cisco IOS release 11.3WA, 12.0WA, or a subsequent release. Also, remember that you must enable MLS for a router to become an MLS-RP.

The MLS-SE is a switch with special hardware. For a Catalyst 5500/5000 series switch, you need to install a NetFlow Feature Card (NFFC) on the supervisor engine; Supervisor Engines IIG and IIIG have an NFFC already installed. You also need at least Catalyst OS (CatOS) 4.1.1 software. The third component of MLS is the Multilayer Switching Protocol (MLSP). You will need to understand the basics of MLSP to perform effective MLS troubleshooting. MLSP is used to communicate between the MLS-RP and the MLS-SE. MLSP tasks include:

Enabling MLS
Installation of MLS flows (cache information)
Update or deletion of flows
Management and export of flow statistics

MLSP also allows the MLS-SE to:

Learn the Layer 2 MAC addresses of MLS-enabled router interfaces
Check the MLS-RP flowmask
Confirm that the MLS-RP is operating

MLS-RP sends out multicast "hello" packets at 15 second intervals using MLSP. If MLS-SE misses three of these, then MLS-SE assumes that MLS-RP has failed or that the connection to the MLS-RP has been lost.

[Figure 2-3. MLSP essentials: the candidate packet, the enable packet, and the resulting Layer 3 switched path]

Figure 2-3 illustrates the three steps in creating a shortcut with MLSP: the candidate, enable, and cache steps. The MLS-SE checks for an MLS cache entry. If the MLS cache entry and the packet information match, the switch rewrites the packet header locally. This rewrite is a shortcut that bypasses the router; the packet is not forwarded to the router. Packets that do not match are forwarded to the MLS-RP as candidate packets, meaning they may eventually be switched locally. A candidate packet that has passed through the MLS flowmask has the information in its packet header rewritten (without changing the data portion). The router then sends the packet toward the next hop along the destination path; the packet is now an enabler packet. If the packet returns to the same MLS-SE from which it left, an MLS shortcut is created and placed into the MLS cache. From then on, the switch hardware (instead of the router) locally rewrites that packet and all similar packets that follow (a "flow"). Using the flowmask (essentially an access list), a network administrator can adjust the scope of these flows according to:

Destination address
Destination and source addresses
Destination, source, and Layer 4 information

You should note that the first packet of a flow always passes through the router; after that, the flow is locally switched. Each flow is unidirectional, so communication between two devices, for example, requires the setup and use of two shortcuts. To emphasize, the main purpose of MLSP is to set up, create, and maintain these shortcuts.
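For illustration only, the commands below sketch how a legacy external MLS-RP is typically prepared before the MLS-SE can build shortcuts; the VTP domain name, VLAN number, and interface are hypothetical, and the exact commands depend on the IOS release.

! Enable MLS on the router (the MLS-RP)
mls rp ip
!
! Per-interface settings for the VLAN the router serves
interface FastEthernet1/0.10
 encapsulation isl 10
 mls rp vtp-domain ENABLEMODE
 mls rp management-interface
 mls rp ip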


To create an MLS shortcut, the same MLS-SE must see both the candidate and enabler packets for a particular flow. This is why network topology is important to MLS. Remember, the purpose of MLS is to improve network performance significantly by bypassing the router: the switch handles the communication path between two devices in different VLANs. For some network topologies and configurations, MLS provides a simple and effective method for increasing performance. The three MLS components, the MLS-SE, the MLS-RP, and MLSP, offload vital router resources by allowing other network components to perform router functions.

Catalyst Configuration Commands

PortFast causes a spanning tree port to enter the forwarding state immediately, bypassing the listening and learning states. You should use PortFast only when connecting a single end station to a switch port; otherwise, you could create a network loop. You can use PortFast on switch ports connected to a single workstation or server so the device connects to the network immediately, instead of having to wait for the spanning tree to converge.

UplinkFast allows uplink groups to load balance between redundant links and offers fast convergence after a spanning tree topology change. An uplink group is a set of ports for a VLAN, with only one port forwarding at any one time. An uplink group is made up of a root port (which is forwarding) and a set of blocked ports (but not self-looped ports). The uplink group offers an alternate path if the currently forwarding link fails.

BackboneFast initiates when a root port or blocked port on a switch receives inferior BPDUs from its designated bridge. An inferior BPDU identifies one switch as both the root bridge and the designated bridge. When a switch receives an inferior BPDU, it indicates that a link that is not directly connected (an indirect link) has failed; in other words, the designated bridge has lost its connection to the root bridge. Under normal spanning tree rules, the switch ignores inferior BPDUs until the configured maximum aging time is reached and then attempts to determine whether there is an alternate path to the root bridge. If the inferior BPDU arrives on a blocked port, the root port and other blocked ports on the switch become alternate paths to the root bridge. (Self-looped ports are not treated as alternate paths to the root bridge.) If the inferior BPDU arrives on the root port, all blocked ports become alternate paths to the root bridge. If the inferior BPDU arrives on the root port and there are no blocked ports, the switch assumes that it has lost connectivity to the root bridge, causes the maximum aging time on the root to expire, and becomes the root switch according to normal spanning tree rules. If the switch finds alternate paths to the root bridge, it uses these alternate paths to transmit a new kind of PDU called the Root Link Query PDU. The switch sends the Root Link Query PDU out via all alternate paths to the root bridge. If the switch determines that it still has an alternate path to the root, it causes the maximum aging time on the ports on which it received the inferior BPDU to expire. If all alternate paths to the root bridge indicate that the switch has lost connectivity to the root bridge, the switch causes the maximum aging times on the ports on which it received an inferior BPDU to expire. If one or more alternate paths can still connect to the root bridge, the switch makes all ports on which it received an inferior BPDU its designated ports and moves them out of the blocking state (if they were in the blocking state), through the listening and learning states, and into the forwarding state.

Other Catalyst Configuration Considerations

A port in auto-sensing mode will have its speed and duplex configuration determined by autosensing. An error message is generated if you attempt to set the transmission type of auto-sensing ports. On a 10/100 module, if a port's speed is set to auto, its transmission type (duplex) is also set to auto automatically. The only two configurable duplex settings are full and half.

Network designers can use STP PortFast BPDU guard to enforce the STP domain borders and maintain a predictable active topology. With BPDU guard enabled, devices behind PortFast-enabled ports are not allowed to influence the STP topology. This is done by disabling a port configured with PortFast upon reception of a BPDU: the port is transitioned into the errdisable state, and a message is printed on the console. This behavior is enabled with the spanning-tree portfast bpduguard enable command.

Weighted Round Robin (WRR)

WRR supports improved bandwidth sharing at the egress port. The wrr-queue bandwidth command is used to define the bandwidths for egress WRR through scheduling weights. Four queues participate in WRR unless you set up an egress expedite queue. The expedite queue is a strict-priority queue that is serviced until empty before any of the WRR queues are serviced. WRR weights are used to partition the bandwidth between the different queues when all queues are nonempty. For example, entering weights of 1:3 means that one queue is allocated 25 percent of the bandwidth and the other queue is allocated 75 percent, as long as both queues have data. There is no dependency order for the wrr-queue bandwidth command. If you enable egress priority, the weight ratio is calculated using the first three parameters; otherwise, all four parameters are used. Using weights of 1:3 does not necessarily lead to the same results as entering weights of 10:30; weights of 10:30 mean more data is serviced from each queue, and the latency of packets serviced from the other queue increases. You should set weights so that at least one maximum-size packet at a time can be serviced from the lower-priority queue. For the higher-priority queue, set weights so that multiple packets can be serviced at any one time.

To map CoS values to the drop thresholds of a queue, use the wrr-queue cos-map command. Use the no form of this command to return to the default settings.

wrr-queue cos-map queue-id threshold-id cos-1 ... cos-n

Syntax description:
queue-id - Queue number; valid value is 1
threshold-id - Threshold ID; valid values are from 1 to 4
cos-1 ... cos-n - CoS value; valid values are from 0 to 7

Defaults - The default settings are these:
Receive queue 1/drop threshold 1, transmit queue 1/drop threshold 1: CoS 0 and 1
Receive queue 1/drop threshold 2, transmit queue 1/drop threshold 2: CoS 2 and 3
Receive queue 2/drop threshold 3, transmit queue 2/drop threshold 1: CoS 4 and 6
Receive queue 2/drop threshold 4, transmit queue 2/drop threshold 2: CoS 7
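The following is a hedged example modeled on the Catalyst 2950-style syntax that appears later in this chapter (no threshold ID); the weights and CoS assignments are arbitrary illustrations, not recommended values.

! Relative egress scheduling weights for queues 1 through 4
wrr-queue bandwidth 1 2 3 4
! Map CoS values to the four egress queues
wrr-queue cos-map 1 0 1
wrr-queue cos-map 2 2 3
wrr-queue cos-map 3 4 5
wrr-queue cos-map 4 6 7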

59 Chapter 2: Bridging and LAN Switching Chapter 2 Questions 2-1 Which bridging method uses a BVI? a) IRB b) CRB c) SRB d) Transparent e) SR/TLB 2-2 Which bridging method allows you to both bridge and route the same protocol on a router? a) IRB b) CRB c) SRB d) Transparent e) SR/TLB 2-3 Which bridging method allows you to bridge and route different protocols on a router, but not perform both operations for the same protocol? a) IRB b) CRB c) SRB d) Transparent e) SR/TLB 2-4 When configuring CRB, which of the following is true? a) All protocols are routed by default b) All protocols are bridged by default, except IP c) All protocols are bridged by default d) BVI interface must be configured before you can route protocols, and after enabling CRB 2-5 When configuring IRB, which of the following is true? a) All protocols are routed by default b) All protocols are bridged by default, except IP c) All protocols are bridged by default d) BVI interface must be configured before you can route protocols, and after enabling IRB 2-6 Which of the following are true about bridging on Cisco routers? a) By default, bridging is disabled on all Cisco routers.

Chapter 2: Bridging and LAN Switching b) Bridging techniques are no longer seen in modern network configurations. c) Bridging techniques are broadcast intensive, and this can flood slower WAN links. d) By default, bridging is enabled on all Cisco routers. e) Many non-routable protocols, most importantly SNA, are very time sensitive, and delays can cause loss of data or session connectivity. f) Bridging is no longer a current technology, and is not currently supported. 2-7 Which of the following would be considered the limitations of CRB? a) CRB does not pass traffic between the routed and bridged domains. b) CRB cannot route and bridge the same protocol on an interface. c) CRB cannot route and bridge different protocols d) An interface can only be in one Transparent bridge-group. e) An interface cannot be in a Transparent bridge-group. 2-8 What is the purpose of Spanning-Tree? (Choose all that apply.) a) To allow for redundant paths b) To adjust path costs c) To alter the root bridge d) To prevent loops in a multi-path bridged network 2-9 What is a BPDU? a) Bridge Protocol Datagram Unit b) Bridge Protocol DEC Unit c) Bridge Protocol Data Unit d) None of the above are correct 2-10 What causes Spanning-Tree to calculate on a switch? a) Topology change b) Bridge stop receiving BPDUs from a neighbor bridge c) Bridge powers on d) All of the above 2-11. How do you make a switch be the root bridge for VLAN 2? a) set spantree priority 64 2 b) set spantree priority 32768 2 c) bridge 2 priority 126 d) bridge 2 priority 32768


61 Chapter 2: Bridging and LAN Switching 2-12. What is the default bridge priority for DEC? a) 128 b)32768 c) 32786 d)65535 e) 16 2-13. What is the default bridge priority for IEEE? a) 128 b) 32768 c) 32786 d) 65535 2-14. Which of the following is an end station that performs data forwarding? a) LEC b) LECS c) LES d) BUS 2-15. Which command will show you if a switch supports ISL, 802.IQ or both? a) show port capabilities b) show port trunk c) show port trunk capabilities d) show port isl e) show port 802.lq 2-16. What does VTP stand for? a) Virtual Trunking Protocol b) VLAN Trunking Protocol c) VLAN Trunking Packet d) Virtual Trunking Packet 2-17. If two switches are connected via a Gigabit Ethernet port, and both sides are set for auto, which of the following is true? a) The trunk will auto negotiate and come up b) The trunk will never come up unless both sides are set to desirable or on c) The trunk will never come up unless one side is set to desirable or on d) Gigabit Ethernet links do not support trunking


2-18. Which of the following are NOT valid modes for trunking? (Choose all that apply.) a)ISL b) 802.1Q

c) 802.1
d) LANE e) STA 2-19. What is CGMP used for? a) Group Multicasting b) Group Messaging c) Gateway member protocol d) Gigabit multicasting protocol 2-20. How often are CDP packets sent by default? a) 10 seconds b) 30 seconds c) 60 seconds d) 120 seconds 2-21. Which of the following are not valid trunking modes? a) Nonegotiate b) Auto c) Desirable d) Off e) On f) Forwarding 2-22. If a trunk is enabled on an EtherChannel bundle and part of the bundle fails, what will happen to the trunk? a) Traffic will still be passed, but a percentage of packets will be lost b) Traffic will still be passed, but at a slower rate c) Traffic will still be passed, but queuing will be disabled d) Traffic will still be passed at the same rate as before the failure e) Traffic will not be passed f) The trunk will fail 2-23. For trunking to be enabled on EtherChannel bundles, what must be configured the same on all links? (Choose all that apply.)

a) MAC addressing b) Speed settings c) Line encoding d) DLCIs e) Duplex settings f) LMIs 2-24. What are the two types of STP that can be configured on a Cisco switch? a) STPv1 b) ML_STP c) STA d) PVST e) BPDU f) CST g) STPv2 2-25. Which of the following types of STP maintains one instance of the spanning tree for each VLAN allowed on the trunk? a) STPv1 b) ML_STP c) STA d) PVST e) BPDU f) CST g) STPv2 2-26. What are the two primary differences between UplinkFast and BackboneFast? (Choose all that apply.) a) UplinkFast is for Distribution and Core layer switches, while BackboneFast is for Access layer switches. b) UplinkFast is for Access layer switches, while BackboneFast is for Distribution and Core layer switches.

c) UplinkFast can detect indirect link failures, while BackboneFast is aware of only directly connected links. d) BackboneFast can detect indirect link failures, while UplinkFast is aware of only directly connected links. 2-27. What is the purpose of SSRP? a) Provides the ability to have a redundant default gateway in an IP network

Chapter 2: Bridging and LAN Switching b) Provides the ability to have a redundant LEC/BUS service in an ATM LANE network c) Provides the ability to have a redundant LES/BUS service in an ATM LANE network d) Provides the ability to have a redundant LECS/BUS service in an ATM LANE network e) Provides the ability to have a redundant LEC/LECS service in an ATM Lane network


f) Provides the ability to have a redundant DHCP server in an IP network g) None of the available options 2-28. How many VTP domains can a single switch belong to? a) Zero b) One c) Two d) Three e) Four f) Five g) None of the available options 2-29. What is the result of implementing VTP pruning? a) Hiding VLANs from Access layer switches b) Eliminating VLANs from the domain c) Enhanced network bandwidth use by reducing unnecessary flooded traffic d) Flattening the network to a single VLAN e) Flattening the network to a limited number of VLANs 2-30. Which of the following statements best describes CGMP? a) Allows switches to discriminately forward multicasts b) Replaces IGMP c) Predates IGMP d) It is a Cisco proprietary protocol e) Allows routers to discriminately forward multicasts f) Is supported by multiple vendors 2-31. CDP works at which OSI Layer? a) Layer 1 b) Layer 2 c) Layer 3

65 Chapter 2: Bridging and LAN Switching d) Layer 7 2-32. You have a network with 3550 switches running 12.1(13)EA and Catalyst 4000 switch running Catalyst software version 7.6. You want to add a new 3550 to the VTP domain, but you cannot remember your VTP password. Which of the following commands will show you the password? (More than 1 answer) a) 'Show config' on the Cat 4000 switch b) 'Show run' on the Cat 4000 switch c) 'Show run' on a 3550 switch d) 'Show vtp password' on a 3550 switch e) 'Show vtp status' on a 3550 switch 2-33. In a bridged LAN, the number if BPDU's with the TCA bit set is incrementing rapidly. What could be the cause of this? (Choose all that apply). a) BPDU's with the TCA bit set is part of the normal operation of a bridged LAN. b) Improper cabling is being used in the network. c) There is no spanning tree portfast configured on the ports connecting 2 workstations. d) The root switch is experiencing problems due to high CPU utilization and is not sending any BPDUs. e) None of the above) 2-34. The EnableMode LAN is a bridged network running the 802.ID spanning tree protocol. Which of the following are paramenters that a bridge will receive from the root bridge. a) Maxage b) Root Cost c) Forward delay d) A,B, and C e) None of the above 2-35. A small office LAN contains only one switch, which was put in place without any of the default configurations changed. You have noticed that somebody in the office has looped a cable by connecting one end to port 4/37 and the other to port 4/38 as shown below. Which of the following statements is true? a) Port 4/38 will be blocked. b) Both ports will be forwarding. c) Port 4/37 will be blocking. d) Both ports will be blocked) e) Port 4/38 will continuously move between the listening and learning states.

Chapter 2: Bridging and LAN Switching f) Port 4/37 will be stuck in the learning state. 2-36. Which of the following statements regarding Transparent Bridge tables are FALSE? (Choose all that apply.) a) Decreasing the bridge table aging time would reduce flooding. b) Increasing the bridge table aging time would reduce flooding. c) Bridge table entries are learned by way of examining the source MAC address of each frame.


d) Bridge table entries are learned by examining destination MAC addresses of each frame. e) The bridge aging time should always be more than the aggregate time for detection and recalculation of the spanning tree) 2-37. What spanning-tree protocol timer determines how often the root bridge send configuration BDPUs? a) STP Timer b) Hold Timer c) Hello Timer d) Max Age Timer e) Forward Delay Timer 2-38. Troubleshooting STP convergence errors reveals that a switched network has multiple bridging loops, which is periodically causing problems. What Cisco IOS switching feature, if used improperly, would most likely cause these errors? a) Port Fast b) Uplink Fast c) Backbone Fast d) Dotlq Trunking e) Fast EtherChannel 2-39. The speed and duplex settings are being configured for each port in a Catalyst switch. When trying to set the duplex mode on Port 1/1, what does the following message mean: "Port 1/1 is in auto-sensing mode"? a) Port 1/1 has auto-negotiated the duplex correctly. b) An error has occurredthe duplex setting of auto is not valid. c) CDP has detected that both sides are set for auto-negotiating. d) An error has occurredthe duplex is now mismatched 2-40, What is the Cisco recommended best practice PaGP setting for ports Etherchannel trunks? a) on - on b) auto - auto c) desirable - on

67 Chapter 2: Bridging and LAN Switching d) desirable - auto e) desirable - desirable 2-41. You wish to implement Ethernet Channels in your switched LAN. Which of the following are valid statements that should be kept in mind before this implementation? (Choose all that apply) a) Ports within a Fast Ether Channel need to have identical duplex and speed settings. b) Port Aggregation Protocol (PAGP) facilitates the automatic creation of Fast Ether channels links. c) Ports within a Fast Ether Channel may be assigned to multiple VLANs. d) Fast Ethernet Channels can not be configured as a trunk. e) Only Fast Ethernet ports can be channeled. 2-42. A new Catalyst switch is added to the switched LAN. Users attached to the new switch are having connectivity problems. Upon troubleshooting, you realize that the new switch is not dynamically learning any VLAN information via VTP from the other switches. What could be causing this problem? a) The other switches are different Catalyst models. b) There are no users on one of the existing switches. c) The other upstream switches are VTP clients. d) The VTP domain name is not properly configured) e) The native VLAN on the trunk is VLAN 1. 2-43. The network is implementing a new Layer 3 Switching architecture. When an IP packet is Layer 3-switched from a source in one VLAN to a destination in another VLAN, what field in a packet will be rewritten? a) Layer 3 destination address b) Layer 3 source address c) Layer 2 TTL d) Layer 3 TTL e) Layer 3 Transport Protocol 2-44. By default, which of the following VLANs are eligible for pruning in a Catalyst 6509 switch? (Choose all that apply)

a) VLAN 1 b) VLAN 2 c) VLAN 999 d) VLAN 1000 e) VLAN 1001 f) VLAN 4094

Chapter 2: Bridging and LAN Switching 2-45. You have ISL trunks configured between two Catalyst switches, and you wish to load share traffic between them. Which method of load sharing can you utilize? a) Load sharing of traffic over parallel ISL trunks on a per flow basis. b) Load sharing of traffic over parallel ISL trunks on a per VLAN basis. c) Load sharing of traffic over parallel ISL trunks on a per packet basis. d) Automatic round robin load sharing of VLAN traffic.


2-46. You are trying to bring up an ISL trunk link between two switches. The trunk mode on the local end is set to auto. However, the ISL trunk never comes up. What is the probable cause of this problem? (Choose all that apply.) a) The trunk mode on the remote end is set to on. b) The trunk mode on the remote end is set to off. c) The trunk mode on the remote end is set to auto. d) The trunk mode on the remote end is set to desirable. e) The trunk mode on the remote end is set to nonegotiate) 2-47. The LAN consists of numerous Catalyst switches and a large number of VLANs. You are seeing an excessive amount of broadcasts across your trunk links. In an effort to reduce unnecessary traffic, VLAN Trunk Protocol (VTP) pruning is enabled. Which of the following statements is true regarding this change? a) Traffic on VLAN 1 can be pruned. b) Pruning eligibility is determined by the amount of ports assigned to a VLAN. c) VTP pruning is a way to detect the removal of a VLAN within a VTP domain. d) VTP version 2 is backward compatible with VTP version 1. e) VTP pruning only affects traffic from VLANs that are pruning eligible) 2-48. After performing some testing on a Catalyst switch in a lab, it is connected to the production network to another Catalyst switch via the supervisor Gigabit Ethernet port. Soon after this, users complain that they have lost all connectivity to the network. What could have caused this to happen? a) You did not issue the set spantree uplinkfast enable 1/1 command before connecting to the corporate switch. b) You did not make the trunk mode set to on or desirable for the trunk to the supervisor of the other switch. c) You did not make the VTP mode transparent in the new switch. d) The dynamic CAM entries were not cleared after the new switch was connected to the network. e) The new switch had the wrong VTP domain name)

2-49. A switch is configured for an ISL trunk, with the trunk mode set to on. A new switch is added to the network, but the trunk will not come up. What is the probable cause of this problem? a) The native VLANs are not the same. b) The trunks need to be set to "on" or "auto". c) The trunks need to be set to "desirable" or "nonegotiate". d) The VTP domain names carried in the Dynamic Inter-Switch Link (DISL) messages are not the same. e) The Unidirectional Link Detection timers are shorter than the Spanning Tree Protocol (STP) timers. 2-50. You are designing a new switched LAN and VLAN information will need to be shared between switches. What VLAN trunking protocol contains the following features? - 26 byte header and a 4 byte frame check sum - Supports up to 1024 VLANs a) ISL b) 802.1d c) 802.1q d) 802.1v e) 802.10
2-51. What is a key advantage to configuring all switches in an enterprise network to VTP transparent mode? a) Ensures consistency between VLAN numbering for all switches in the switched network. b) Prevents system administrators from accidentally deleting VLAN information from all switches. c) Allows for more rapid deployment of VLANs throughout the enterprise. d) Reduces the size of the spanning tree network, and as a result, improves STP convergence time. e) Reduces the total number of VLANs required in the enterprise network. 2-52. A new switch has been configured as a VTP client, and added to the existing VTP domain. Shortly after the ISL link is brought up to the rest of the network, the whole network goes down. What could have caused this to happen? (Choose the most likely option). a) The configuration revision of the switch inserted was higher than the configuration revision of the VTP domain. b) This is not an issue that could be related to the inserted switch since it was configured as a VTP client. c) The inserted switch was incorrectly configured for VTP v2 and caused an unstable condition.

d) VLAN 1 was incorrectly deleted on the switch before insertion, causing an unstable condition. 2-53. The network is bonding some of the Ethernet connections via PAgP in order to increase the backbone bandwidth. In PAgP, what mode combination will allow a channel to be formed? a) Auto-auto b) Desirable-on c) On-auto d) Auto-desirable 2-54. EnableMode is using extended VLANs (VLAN IDs 1006-4094) on their switches. What should the VTP mode be set to before configuring extended-range VLANs? a) Client b) Server c) Transparent d) Client or Server e) Client or Transparent f) Server or Transparent 2-55. Both ISL and 802.1Q are being used in the network. When comparing the differences between ISL and 802.1Q, which of the following are true? (Select three) a) 802.1q allows the encapsulation of multiple trunks within a single trunk. b) 802.1q supports fewer VLANs than ISL. c) ISL is more efficient than 802.1q due to its smaller header size. d) ISL supports the processing of untagged frames.


e) 802.lq uses a tag protocol ID of 0x8100 2-56. Your Catalyst switch is configured to support Multi Layer Switching (MLS). The switch contains an access list designed to prevent certain users from using ports 20 and 21 to reach the Internet. Because of this, which flow mask will be needed to create each MLS shortcut? a) Destination flow mask b) Full flow mask c) Source flow mask d) Partial flow mask e) Destination-source mask f) Session flow mask 2-57. While looking through the log files of your Catalyst switch, you notice that the following two messages are displayed somewhat infrequently:

71 Chapter 2: Bridging and LAN Switching %MLS-4-MOVEOVERFLOW:Too many moves, stop MLS for 5 sec(2u000000) %MLS-4-RESUMESC:Resume MLS after detecting too many moves What is the most likely cause of this problem? a) A transitory Spanning Tree loop b) A permanent Spanning Tree loop c) A faulty cable d) Faulty switch port e) A Pinnacle sync failure 2-58. You have just recently implemented the Multilayer switching feature on your Catalyst Switch. How will this change affect your network? a) The MLS Switching Engine will forward all traffic b) The MLS Route Processor will forward all traffic. c) The MLS Switching Engine will forward the first packet in every flow. d) The MLS Route Processor will forward the first packet in every flow. 2-59. A workstation has been connected to the LAN using a Category 5e cable. The workstation can connect to the rest of the network through the switch (i.e has full connectivity), but is suffering from much slower than expected performance. Looking at the interface statistics on the switch, many "runts" are being detected. Using software to read the counters on the workstation NIC, many FCS and alignment errors are occurring. What is the most likely cause of these errors? a) Bad Network Interface Card on the workstation b) Bad cable between the workstation and the switch c) The port has erroneously been configured as an 802.lq trunk port d) Mismatched speed settings between the workstation and the switch e) Mismatched duplex setting between the workstation and the switch f) None of the above. 2-60. How much data can be carried in a standard Ethernet frame? a) Up to 4096 bytes b) No limit c) Up to 1500 bytes d) Up to 1518 bytes e) Up to 4400 bytes 2-61. Which of the following will cause a switch port to go into the err-disable state? (Choose all that apply) a) Duplex mismatch. b) Unidirectional Link Detection.

Chapter 2: Bridging and LAN Switching c) AN incorrect VTP domain name is configured on the switch. d) Ethernet channeling is configured on the port. e) VLANs on the trunk were not matching on both sides. 2-62. 802.1Q trunking uses which Ethertype to identify itself? a)8100 b) 8021 c) 802A d) 2020 e) None of the above


2-63. In an 802.3 LAN, PAUSE frames are used for inhibiting data transmissions for a period of time. Which MAC address does this PAUSE mechanism use in order to accomplish this? a) 00-00-00-00-00-00 b) 00-00-OC-OO-OO-OF c) 01-04-0C-07-AC-3C d) 01-80-C2-00-00-01 e) FF-FF-FF-FF-FF-FF 2-64. A single end station failure can be prevented from disrupting the Spanning Tree algorithm in a LAN according to the 802.ID specification. 802.ID recommends preventing this by: a) Clearing the Topology Change flag. b) Re-setting the Topology Change flag to one (1). c) Configuring the Bridge Forward Delay to less than 1/2 of the Bridge Maxage. d) Disabling the 801.ID Change Detection Parameter. e) Disabling the Topology Change Notifications. 2-65. Which of the following are true regarding Unidirectional Link Detection? (Choose all that apply.) a) UDLD uses auto-negotiation to take care of physical signaling and fault detection. b) Both devices on the link need to support Unidirectional Link Detection. c) It works by exchanging protocol packets between the neighboring devices. d) It performs tasks that auto negotiation cannot perform. e) UDLD is a layer one protocol. 2-66. The BootCamp switched LAN network is upgrading many of the switch links to Gigabit Ethernet. Which of the following IEEE standards are used for Gigabit Ethernet? (Choose all that apply) a) 802.3z

73 Chapter 2: Bridging and LAN Switching b) 802.3ab c) 802.3ad d) 802.3af e) All of the above 2-67. Which of the following statements regarding the use of SPAN on a Catalyst 6500 are true? (Choose all that apply.) a) With SPAN an entire VLAN can be configured to be the source. b) If the source port is configured as a trunk port, the traffic on the destination port will also be tagged, irrespective of the configuration on the destination port. c) In any active SPAN session, the destination port will not participate in Spanning Tree. d) It is possible to configure SPAN to have a Gigabit port as the destination port. e) In one SPAN session it is possible to monitor multiple ports that do not belong to the same VLAN. 2-68. The network includes a Full Duplex Gigabit link between a Router and a Switch. Periodically, you notice the collision counter incrementing slowly. What could be the cause of this problem? a) The Router is receiving too much traffic and is asserting the Collision signal to be slow down the rate that the switch is sending traffic. b) Both the Router and the Switch are transmitting at the same time. c) The switch and the router might be running an ISL trunk. d) A bug or faulty equipment. e) A few collisions are normal. 2-69. The network is experiencing network connectivity problems soon after an end-user disconnected her PC and connects a switch with an unknown configuration into an access layer switch port, which has spanning-tree portfast configured. What should be configured on the access layer switch to prevent the network connectivity problems? a) Cisco2950(config-if)# spanning-tree portfast bpdufilter enable b) Cisco2950(config-if)# spanning-tree portfast bpduguard enable c) Cisco2950(config-if)# no spanning-tree portfast d) Cisco2950(config-if)# spanning-tree link-type point-to-point e) Cisco2950(config-if)# spanning-tree link-type shared f) Cisco2950(config-if)# no spanning-tree backbonefast g) Cisco2950(config-if)# no spanning-tree uplinkfast 2-70. A LAN switch has been configured as shown below; Cisco2950(config)# wrr-queue bandwidth 10 20 70 1 Cisco2950(config)# no wrr-queue cos-map

Chapter 2: Bridging and LAN Switching Cisco2950(config)# wrr-queue cos-map 10 1 Cisco2950(config)# wrr-queue cos-map 2 2 4 Cisco2950(config)# wrr-queue cos-map 3 3 6 7 Cisco2950(config)# wrr-queue cos-map 4 5 What does the IOS configuration displayed in the above configuration accomplish on a Catalyst 2900 switch? a) It enables frames with a CoS 0 or CoS 1 marking to be serviced by WRR (Weight Round Robin) queing with a weighting value of 1.


b) It enables frames with a CoS 5 marking to be serviced by the expedite queue. c) It guarantees 10% of the link bandwidth to Queue 1 and 20% to queue 2 and 70% to queue 3. Queue 4 is not used. d) It sets up the 3 CoS-to-DSCP mappings and DSCP-to-CoS mappings. e) It sets up the WRR queueing where frames with a CoS of 3 or 6 or 7 will have the highest priority.

75 Chapter 2: Bridging and LAN Switching Chapter 2 Answers 2-1 2-2 2-3 2-4 2-5 2-6 2-7 2-8 2-9 2-10 2-11 2-12 2-13 2-14 2-15 2-16 2-17 2-18 2-19 2-20 2-21 2-22 2-23 2-24 2-25 2-26 2-27 2-28 2-29 2-30 2-31 2-32 2-33 2-34 2-35 2-36 2-37 a a b c d a, c, a, b, d a, d c d a a b a a b c c, d, a c f b b, d, f d b, d c b c a, d b a, b, d b, c d a a, d c

Chapter 2: Bridging and LAN Switching 2-38 2-39 2-40 2-41 2-42 2-43 2-44 2-45 2-46 2-47 2-48 2-49 2-50 2-51 2-52 2-53 2-54 2-55 2-56 2-57 2-58 2-59 2-60 2-61 2-62 2-63 2-64 2-65 2-66 2-67 2-68 2-69 2-70 a b a, b d d b, c, d b b, c, c d a b a d c a, d, b a d c a, b a d d a, b, c, d a, b a, c, d, d b


Chapter 3

Internet Protocol (IP)

IP Addressing

IP is the routed protocol of the Internet, and is the default protocol in most networks today. This Layer 3 routed protocol has two primary responsibilities: providing connectionless, best-effort delivery of datagrams; and providing fragmentation and reassembly of datagrams to support data links with different maximum transmission unit (MTU) sizes. Addresses are 32 bits long, with the most significant bits specifying the network, as determined by a subnet mask. This subnet mask is either derived from the first few bits of the address or specified directly, depending on whether you are using classful (conforming to major address boundaries) or classless (further subnetting classful addresses) addressing. IP addresses are written in dotted-decimal format, with each set of eight bits separated by a period. The IP packet header is between 20 bytes (no options) and 60 bytes (maximum options) long.

Though a long discussion on subnet masks is possible, for our purposes let us just discuss the major classes: A, B, C, D, and E. Only the first three are available for commercial use; the others are special-purpose address ranges. The left-most (high-order) bits indicate the network class. Here are the basics about the different classes of IP addresses:

Class A - High-order bit 0, default mask 255.0.0.0, range 1.0.0.0 to 127.255.255.255, used by a few large organizations
Class B - High-order bits 10, default mask 255.255.0.0, range 128.0.0.0 to 191.255.255.255, used by medium-size organizations
Class C - High-order bits 110, default mask 255.255.255.0, range 192.0.0.0 to 223.255.255.255, used by relatively small organizations
Class D - High-order bits 1110, no default mask, range 224.0.0.0 to 239.255.255.255, multicast groups (RFC 1112)
Class E - High-order bits 1111, no default mask, range 240.0.0.0 to 254.255.255.255, experimental

Note that, under RFC1918, the following IP addresses are reserved for private networks and are not routable on the Internet:

10.0.0.0 to 10.255.255.255 /8 (within Class A)
172.16.0.0 to 172.31.255.255 /12 (within Class B)
192.168.0.0 to 192.168.255.255 /16 (within Class C)

Also, note that the network 127.0.0.0/8 is reserved for loopback use, with 127.0.0.1 being the localhost. Remember that the default subnet mask is only a default; it can be adjusted as necessary (depending on the routing protocol) by the network designer.

Subnetting


IP addresses are made up of two pieces of information: the network that the host can be found on, and the unique address of the host on that network. The network portion is on the left and the host portion on the right, but where the delineation occurs depends on the definition of the subnet mask. The subnet mask provides the ability to have an extended network prefix by taking bits from the host portion of the address and adding them to the network prefix. For example, a classful Class C network prefix consists of the first 24 bits of the IP address (three octets), but the network prefix can be extended into the fourth octet to provide more granularity to the configuration. The key to subnet masking is to understand binary notation instead of dotted decimal. Translating:

255.255.255.224 = 11111111.11111111.11111111.11100000
255.255.248.0   = 11111111.11111111.11111000.00000000

As you can see, a bit is set to 1 if the corresponding bit in the IP address is included in the network part of the address, and set to 0 if the bit is part of the host number. It is also common to designate the subnet mask in the /bits ("slash bits") format. This is simply the number of bits dedicated to the network part of the IP address. In the two examples above, the /bits designations would be /27 and /21.

Subnetting Tricks

I have found the following easy-to-remember chart helps me to do quick subnet mask calculations. If you take a few seconds at the beginning of the test session and write this out from memory on a piece of scratch paper, it can be a useful timesaver during any exam that requires subnetting and binary conversion.

Line #1  Bits      1    2    3    4    5    6    7    8
Line #2  Binaries  128  64   32   16   8    4    2    1
Line #3  Subnet    128  192  224  240  248  252  254  255
Line #4  Hosts     126  62   30   14   6    2    0    0
Line #5  Nets      2    4    8    16   32   64   128  256

How to create the chart:
Line #1: Write the numbers one through eight from left to right. Besides being a handy column header, this provides the number of bits added to the mask.
Line #2: Starting with 1 and working from right to left, double each number. This gives you the column values for decimal-to-binary conversion.
Line #3: Write out your subnet mask values. You can derive each value by adding the number above it to the number on its left (example: 64+128=192).
Line #4: The number of hosts per subnet is derived by subtracting two from the values in line #2 (if the value is less than 0, round up to 0).
Line #5: Start with two in the left-most column and double each number going across. This gives you the number of networks for each subnet.

Example #1: What would be the subnet mask, and the number of subnets and hosts, of 192.168.8.0/29? The natural subnet mask for 192.168.8.0 is /24, so we move our finger along the chart to column five because we've added five bits to the mask. From this we find the following values:
Subnet mask - 255.255.255.248
Number of subnets - 32
Number of hosts per subnet - 6

Example #2: You have a network that will never require more than 60 host addresses; what is the most efficient subnet to use? Looking at line four of the chart, we find that the next largest value over 60 is 62, and we find the following values in that column:
Subnet mask - 255.255.255.192
Number of hosts per subnet - 62
Number of subnets - 4
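To make Example #2 concrete, the sketch below applies the resulting /26 mask to a router interface; the interface and addresses are hypothetical.

interface FastEthernet0/0
 ! 192.168.10.64/26 subnet: 62 usable hosts, .65 through .126
 ip address 192.168.10.65 255.255.255.192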

Route Summarization Reducing the number of networks being advertised between routers simplifies the routing table, reduces memory and CPU requirements, and makes the network more logical. Route summarization enhances network performance and reclaims bandwidth that would otherwise be used to pass routes back and forth. You will find a major part of Chapter 2 dedicated to this topic.

Ports and Sockets

Port numbers are used by IP to pass information to the upper OSI layers; they provide the mechanism for cooperating applications to communicate. Numbers below 1024 are well-known ports, and numbers above 1024 are dynamically assigned ports. You will usually find registered ports for vendor-specific applications in the range above 1024. Here are some common IP ports and their associated applications:
20/21 TCP - FTP
23 TCP - Telnet
25 TCP - SMTP
49 TCP/UDP - TACACS+/TACACS
53 TCP/UDP - DNS
68 UDP - BootP Client
67 UDP - BootP Server
69 UDP - TFTP
79 TCP - Finger
110 TCP - POP3
137 TCP - NetBIOS Name Server
161 TCP/UDP - SNMP Polling
162 TCP/UDP - SNMP Traps
179 TCP - BGP
389 TCP - LDAP
443 TCP - HTTPS
514 UDP - Syslog
1741 TCP - CW2000 HTTP Server
7500 UDP - ESS Service
7500 TCP - ESS Listening
7580 TCP - ESS HTTP
7588 TCP - ESS Routing
42340 TCP - CW2000 Daemon Mgr
42342 UDP - Osagent
42343 TCP - JRun
42344 TCP - ANI HTTP Server
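These port numbers are what you match on when filtering or classifying traffic. As a sketch (the server address and access-list number are arbitrary), an extended access list permitting only Telnet and DNS to a particular server might look like this:

access-list 101 permit tcp any host 10.1.1.10 eq 23
access-list 101 permit udp any host 10.1.1.10 eq 53
access-list 101 deny ip any host 10.1.1.10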


IPX sockets are part of the IPX stack, and are used much like port numbers in IP. They direct data encapsulated in the IPX header to the appropriate upper-layer protocols. There are well-known ones, others that are assigned to proprietary applications, and a series of numbers used randomly by clients, just like in IP. Also like IP ports, they identify the process on the server or client that needs to get the data in the packet. Here are some common IPX sockets and their associated applications:
0x451 - NCP
0x452 - SAP
0x453 - RIP
0x455 - NetBIOS
0x456 - Diagnostic
0x457 - Serialization
0x85be - IPX EIGRP
0x9001 - NLSP
0x9004 - IPXWAN
0x9086 - IPX Ping

Network Address Translation (NAT)

NAT operates on a router connecting two networks that use different addressing schemes. The translation operates in conjunction with routing: internal and external address numbers are associated in pairs and translated by the NATing device. Overloading uses port numbers to allow multiple internal addresses to share a smaller number of external (global) addresses. The most common use of NAT is to provide Internet connectivity to internal networks that use private addressing. It is also commonly used when an organization uses different addressing schemes internally, perhaps during an upgrade, or when one company acquires another and the networks must be merged. These are the different types of addresses used by NAT:

Inside local address: Addresses assigned for use on the local network, usually taken from the private address pools. These are the normal inside addresses.

Inside global address: A legitimate IP address representing one or more inside local IP addresses to the outside world. This is the "real" address that inside local addresses are translated into; with overloading, many inside local addresses can share a smaller number of inside global addresses.

Outside local address: The IP address of an outside host as it appears to the inside network. Not necessarily a legitimate address, it is allocated from address space routable on the inside. This is the inside-the-network address that corresponds to the "real" address the outside host presents to the outside world.

Outside global address: A legitimate IP address assigned to a host on the outside network by the host's owner. The address is allocated from globally routable address or network space.
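A minimal sketch of the most common case, NAT overload (PAT) for a privately addressed LAN reaching the Internet; the interfaces and addresses are assumptions.

interface FastEthernet0/0
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface Serial0/0
 ip address 203.0.113.2 255.255.255.252
 ip nat outside
!
! Inside local addresses matched by ACL 1 share the Serial0/0 address
access-list 1 permit 10.1.1.0 0.0.0.255
ip nat inside source list 1 interface Serial0/0 overload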

CIDR and VLSM

Classless Inter-Domain Routing (CIDR) and Variable Length Subnet Masking (VLSM) are used to consolidate addresses with identical high-order bits to reduce the size of the routing table. What's the difference between CIDR and VLSM? The simple answer is that CIDR tends to be associated with external protocols like BGP, while VLSM is used with internal protocols, like OSPF. In practice, you will often find the two terms used interchangeably. Here are some clues as to how these two standards are commonly used (a worked summarization example follows below):

CIDR is often called supernetting and is referred to as route aggregation, using "supernet networks" to reduce the number of entries in the global routing table. In most cases this will be associated with BGP.

VLSM is usually referred to in connection with route summarization, "subnetting a subnet" to create more subnet prefixes. In most cases, this will be associated with OSPF and other Interior Gateway Protocols (IGPs).
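A small worked example of the idea: 192.168.0.0/24 through 192.168.3.0/24 share their first 22 bits, so all four networks can be advertised as the single aggregate 192.168.0.0/22 (mask 255.255.252.0). How the aggregate is injected depends on the protocol; the OSPF inter-area summary below is one hedged illustration, with an assumed process and area number.

router ospf 1
 ! At the ABR, advertise one summary for area 1's four /24 subnets
 area 1 range 192.168.0.0 255.255.252.0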

Both of these standards reduce the size of routing tables by creating aggregate routes, minimizing the significance of network classes, and supporting the advertising of IP prefixes. Remember that CIDR is primarily used by BGP.

Hot Standby Router Protocol (HSRP)

A potential failure point in an IP network is the default gateway device that leads to the external network. When configured for IP over Ethernet, FDDI, and Token Ring local-area networks, HSRP provides an automatic backup router to replace a failed primary. HSRP is also compatible with IPX, AppleTalk, and DECnet. HSRP is designed for networks that require continuous access to resources off the local network. The default HSRP priority value is 100, and the higher-valued priority determines which router is designated as the primary (active) router. HSRP routers exchange three types of multicast messages:

Hello: The hello message passes HSRP priority and state information. It also acts as a heartbeat from the active router, making sure the others know it is alive. By default, hello messages are sent at three-second intervals.

Coup: When a standby router takes over the function of an active router, a coup message is sent.

Resign: When the active router is about to shut down, or when a router with a higher priority sends its hello message, the active router sends out a resign message.

At any time, HSRP-configured routers are in one of the following states:

Active: The router is doing what it does: routing.
Standby: Waiting, waiting, waiting.
Speaking and listening: The router is sending and receiving hello messages.
Listening: The router is receiving hello messages.
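A minimal two-router sketch of HSRP; the group number, addresses, and priority are assumptions. Router A wins the active role because its priority of 110 beats Router B's default of 100.

! Router A (intended active gateway)
interface FastEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 110
 standby 1 preempt
!
! Router B (standby) keeps the default priority of 100
interface FastEthernet0/0
 ip address 10.1.1.3 255.255.255.0
 standby 1 ip 10.1.1.1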

83 Chapter 3: Internet Protocol (IP) Network Time Protocol (NTP) Introduction to NTP Synchronizing timekeeping among distributed time servers and clients is done through NTP, using Coordinated Universal Time (UTC), which is also known as Greenwich Mean Time. NTP runs over UDP, using port 123 as both the source and destination. UDP, in turn runs over IP. RFC 1305 defines architectures, entities, protocols and algorithms used by NTP Version 3. Typical Internet paths involve multiple gateways, highly dispersive delays and unreliable nets. NTP is specifically designed to maintain accuracy and robustness under these conditions. In the NTP model a number of primary reference sources, synchronized by wire or radio to national standard timekeepers, are connected to backbone gateways, and operated as primary time servers. A set of nodes on a network are identified and configured using NTP. The nodes form a synchronization subnet, often referred to as an overlay network. While multiple masters (primary servers) may exist, there is no requirement for an election protocol. The purpose of NTP is to convey timekeeping information from these primary servers to other time servers via the Internet and also to cross-check clocks and reduce errors due to equipment or propagation failures. A number of secondary time server local-net hosts or gateways, run NTP with one or more of the primary servers. To reduce the protocol overhead, the secondary servers distribute time via NTP to other local-net hosts. A client sends an NTP message to other servers and processes replies as they are received. The server interchanges addresses and ports, overwrites certain fields in the message, recalculates the checksum and returns the message immediately. Information included in the NTP message allows clients to determine the server time with respect to local time and adjust their local clocks accordingly. The message includes information to calculate expected timekeeping accuracy and reliability, as well as select the best from several servers. The accuracy of NTP depends on the precision of the local-clock hardware and accurate measurement of device and process latencies. The primary reference source under NTP is usually a radio clock or an atomic clock attached to a time server. NTP then distributes this time across the network. An NTP client polls its server over its polling interval (between 64 and 1024 seconds). This interval changes dynamically depending on network conditions between NTP server and client. No more than one NTP transaction per minute is required to synchronize two machines to within a millisecond of each another. Server is defined by a number called the stratum, with the topmost level (primary servers, with directly attached radio or atomic clocks) assigned as one


and each level downwards (secondary servers) in the hierarchy assigned as one greater than the preceding level. The stratum describes how many NTP hops away a machine is from a primary server. A client running NTP automatically chooses the NTP server with the lowest stratum number that it is configured to communicate with. This effectively builds a self-organizing tree of NTP speakers. NTP works well over non-deterministic path lengths of packet-switched networks, because it makes robust estimates of the following three key variables: Clock offset represents the amount to adjust the local clock to bring it into correspondence with the reference clock. Roundtrip delay provides the capability to launch a message to arrive at the reference clock at a specified time. Dispersion represents the maximum error of the local clock relative to the reference clock.

With NTP, you can routinely achieve clock synchronization levels of 10 milliseconds over long distance (2,000+ km) wide-area networks, and of 1 millisecond for local-area networks. The issue that NTP deals with is maintaining accurate time in a community where some clocks cannot be trusted. A "truechimer" is a clock that maintains timekeeping accuracy to a published and trusted standard. A "falseticker" is a clock that does not. There are two ways NTP avoids synchronization to a server whose clock cannot be trusted. First, NTP never synchronizes with a server that is not itself synchronized. Second, NTP compares the time reported by several servers, and will not synchronize with a server whose time differs significantly from the others, even if its stratum is lower.

Associations between machines running NTP are usually statically configured. Each machine is given the IP address of all other machines with which it should form associations. Accurate timekeeping is made possible by exchanging NTP messages between each pair of machines within an association. In a LAN environment, NTP can instead be configured to use IP broadcast messages. This reduces configuration complexity because each machine need be configured to send or receive broadcast messages. However, the accuracy of timekeeping is marginally reduced because the information flow is only one-way. You should use the security features of NTP to avoid accidental or malicious setting of incorrect time. The two security features available are: An access list-based restriction scheme An encrypted authentication mechanism.
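A hedged IOS example that combines both protections; the server address, key number, key string, and access-list number are placeholders.

! Authenticate the time source
ntp authentication-key 1 md5 MYSECRET
ntp authenticate
ntp trusted-key 1
ntp server 192.0.2.10 key 1
!
! Restrict which systems this router will exchange NTP with
access-list 20 permit host 192.0.2.10
ntp access-group peer 20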

If a network is isolated from the Internet, and time is determined within the network, Cisco's implementation of NTP allows a machine to be configured so that it acts as though it is synchronized through NTP. Other machines then synchronize to that machine through NTP. The Cisco implementation of NTP supports stratum 1 service in certain Cisco IOS software releases. If a release supports the ntp refclock command, it is possible to connect a radio or atomic clock. Some Cisco IOS releases support either the Trimble Palisade NTP Synchronization Kit (Cisco 7200 series routers only) or the Telecom Solutions GPS device.

NTP Design Criteria

A synchronization subnet synchronizes its clients with an available NTP server, usually selected from among the lowest stratum servers. However, this is not always desirable, since NTP operates under the premise that each server's time is distrusted to some degree. NTP prefers to have access to several lower stratum time sources (at least three) and then applies an algorithm to detect disagreement on the part of any one source. When all servers are in agreement, NTP selects the best server in terms of lowest stratum, closeness (shortest network delay), and claimed precision. Ideally, each client should have three or more sources of lower stratum time, but several of these may only be providing backup service or represent lesser quality in terms of network delay and stratum. For example, a same-stratum peer can provide good backup service if that peer receives time from lower stratum sources the local server cannot access directly.

Association Modes

The following sections describe the association modes used by NTP servers:

Client/Server
Symmetric Active/Passive
Broadcast

Client/Server Mode Clients or dependent servers can be synchronized to a group member, but no group member can synchronize to the client or dependent server. This provides protection against protocol attacks or malfunctions. Dependent clients and servers commonly operate in client/server mode. Client/server mode is the most common Internet configuration. It operates under a remote-procedure-call (RPC) protocol with stateless servers. Under this method, a client sends a request to the server and expects a reply in the future. This could be described as a poll operation, in that the client polls the time and authentication data from the server. The server requires no prior configuration.


A client is configured in client mode by using the ntp server command and specifying the DNS name or IP address of the server. The most common client/server model is for a client to send an NTP message to one or more servers and process the replies as they are received. The server interchanges addresses and ports, overwrites certain fields in the message, recalculates the checksum and returns the message to the client. The NTP message allows the client to determine the server time with respect to local time and adjust its local clock accordingly. The message also includes information to calculate the expected timekeeping accuracy and reliability, as well as select the best from among several servers. The client/server model is adequate for local networks involving a public server and workstation clients, but to use the full power of NTP requires client/servers or peers to be hierarchically distributed and dynamically reconfigurable. Full use of NTP also requires sophisticated algorithms for association management, as well as data manipulation and local-clock control. Normally, NTP servers with a sizeable population of clients operate as a group of three or more mutually redundant servers, each operating with three or more stratum 1 or stratum 2 servers in client/server modes, with all other members of the group operating in symmetric modes. This provides protection against failures in which one or more servers fail to operate or provide incorrect time. NTP algorithms are designed to resist attacks when some configured synchronization sources accidentally or purposely provide incorrect time. Under these conditions, a special voting procedure identifies spurious sources and discards incorrect data. In the interest of reliability, designated hosts can be equipped with external clocks and used for backup in case the primary and/or secondary servers fail, or communication paths between them fail. An association configured in client mode, usually through a declaration in the server configuration file, indicates that time is to be obtained from the remote server, but that time will not be provided to the remote server.

Symmetric Active/Passive Mode

In this mode, a group of low stratum peers operate as mutual backups. Each peer references one or more primary sources, such as a radio clock, or a subset of reliable secondary servers. Should one peer lose its reference sources or cease operation, the other peers automatically reconfigure so that time values can flow from surviving peers to others in the group. In some contexts this is described as a push-pull operation: a peer either pushes or pulls the time depending on the specific configuration. Symmetric-active mode is usually configured by a configuration file peer declaration. This indicates to the remote server that a member of the association needs to obtain time from the remote server and that the member is also willing to supply time to the remote server if necessary. Symmetric-active mode is

appropriate where a number of redundant time servers are interconnected through multiple network paths. This is true for most stratum 1 and stratum 2 servers on the Internet today. Use of symmetric modes requires two (or more) servers operating as a redundant group. The servers in the group arrange synchronization paths for maximum performance, depending on network jitter and propagation delay. When a group member fails, the remaining members automatically reconfigure. Peers are configured in symmetric active mode by using the ntp peer command, specifying the DNS name or address of the other peer. The other peer is similarly configured in symmetric active mode. You should note that if the other peer is not specifically configured in this way, a symmetric passive association is activated upon arrival of a symmetric active message. An intruder could impersonate a symmetric active peer and inject false time values. This means that symmetric mode should always be authenticated.

Broadcast and Multicast Mode

When accuracy and reliability requirements are modest, clients can be configured to use broadcast or multicast modes. The advantage is that clients do not need to be configured for a specific server, allowing the same configuration file to be used for all operating clients. Broadcast mode requires a broadcast server to be on the same subnet. Since routers do not propagate broadcast messages, only broadcast servers on the same subnet are used. Normally, servers with dependent clients don't use these modes. The choice of broadcast mode is made for configurations with one or several servers and a potentially large client population. Server configuration uses the ntp broadcast command and a local subnet address. Client configuration uses the ntp broadcastclient command, allowing the broadcast client to respond to broadcast messages received on any interface.
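A minimal Cisco IOS sketch of broadcast mode, assuming the server and its clients share the same LAN segment; the interface name is illustrative:

  ! On the broadcast time server:
  interface FastEthernet0/0
   ntp broadcast
  !
  ! On each broadcast client:
  interface FastEthernet0/0
   ntp broadcastclient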

Since an intruder can impersonate a broadcast server and inject false time values, broadcast mode should always be authenticated.

NTP Architecture

Three structures can be employed in NTP architecture:

- Flat peer: All routers peer with each other, with a few geographically separate routers configured to point to external systems. The time convergence gets longer with each new NTP mesh member.
- Hierarchical: The routing hierarchy is followed down through the NTP hierarchy. Core routers have a client/server relationship with external time sources, internal time servers have a client/server relationship with core routers, internal customer (non time server) routers have a client/server relationship with internal time servers, and so on down the tree. These relationships are called hierarchy scales.
- Star: All routers have a client/server relationship with a few time servers at the core. Dedicated time servers are the center of the star. These are most often UNIX systems synchronized with external time sources or their own GPS receiver.


A hierarchical structure is a preferred technique because it provides consistency, stability, and scalability. Figure 3-1 shows an example of NTP architecture with a hierarchical structure:

[Figure 3-1 depicts a four-level hierarchy: Internet primary servers (stratum 1) at the top, campus secondary servers (stratum 2) below them, department servers (stratum 3) next, and workstations (stratum 4) at the bottom. Servers at a given level also peer with a "buddy" server in another subnet for redundancy.]

Figure 3-1. NTP architecture with a hierarchical structure

Clock Technology and Public Time Servers

NTP is a utility for synchronizing system clocks, providing a precise time base for networked workstations and servers. In the NTP model, primary and secondary servers pass timekeeping information through the Internet to cross-check clocks and correct errors arising from equipment or propagation failures. This synchronization allows events to be correlated when system logs are created and other time-specific events occur.

There are over 50 public primary servers synchronized directly to UTC by radio, satellite, or modem in the Internet NTP subnet. About 100 public secondary servers are synchronized to primary servers, providing synchronization to in excess of 100,000 clients and servers on the Internet. Public NTP Time Server lists are updated frequently. There are also many private primary and secondary servers which are not normally available to the public. Client workstations and servers with a relatively small number of clients normally do not synchronize to primary servers. When highly accurate time services are required, such as one-way metrics for Voice over IP (VoIP) measurements, network designers may choose to employ private external time sources. Figure 3-2 shows the relative accuracy of the current technologies.

[Figure 3-2 plots timing accuracy against core clock technology: quartz oscillators are the least accurate (on the order of 10^-8), followed by rubidium, cesium, and Loran-C/GPS/CDMA references, with hydrogen masers the most accurate (on the order of 10^-15, roughly 1 second of drift over 32 million years).]

Figure 3-2. Time service accuracy

Until recently, high quality external time sources have not been deployed in enterprise networks due to their high cost. However, as Quality of Service (QoS) requirements increase and the cost of time technology decreases, external time sources for enterprise networks are becoming an affordable option.

The ntp server Global Configuration Command

An NTP association can be a peer association (in which a system either synchronizes to another system or allows the other system to synchronize to it), or it can be a server association (the system synchronizes to another system, and not the other way around). An NTP server can only be configured as a peer to another NTP server. To allow the system clock to be synchronized by a time server, use the ntp server global configuration command.
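A minimal Cisco IOS sketch of the association commands discussed above; the addresses and the stratum value are illustrative:

  ! Server association: synchronize to a time server
  ntp server 10.1.1.1
  ! Peer association: both routers are configured to peer with each other
  ntp peer 10.1.1.2
  ! On an isolated network, optionally act as an authoritative source
  ntp master 3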

IP Services

Many different IP services are required to make even the most basic IP transaction work. Among these are DNS, ICMP, and ARP. Although most people never think about these services in their daily Web surfing, they are an extremely important part of IP networks.

Domain Name Service (DNS)


DNS provides a way to map an alphanumeric name to a numeric IP address (and, through PTR records, the reverse). This is done for human convenience. IP addresses work well for routers and hosts (and some network engineers), but even the best technophiles have an easier time remembering text strings than network addresses. DNS is hosted either locally or through a service on the Internet. NS (Name Server), PTR (Pointer), MX (Mail Exchange), and A records are all DNS resource record types.

Internet Control Message Protocol (ICMP)

Internet Control Message Protocol (ICMP) is a network-layer protocol that provides messages about IP packet processing, including errors, to a source station. The most common methods for sending out ICMP packets are through the ping and traceroute commands. ICMP provides some useful messages, including:

- Destination Unreachable: The router is unable to send the packet to its destination.
- Echo Request: Sent by a ping from the host to test node reachability across an internetwork.
- Echo Reply: Indicates that the node can be reached successfully.
- Redirect: Sent by the router to the source host to stimulate more efficient routing.
- Time Exceeded: Sent by the router if an IP packet's time-to-live field (expressed in hops or seconds) reaches zero. The TTL field is in the IP header; its primary purpose is to prevent loops by allowing packets to die naturally over time.

The ping exec command (which stands for Packet InterNetwork Groper) is a common method for troubleshooting device accessibility. It uses a series of ICMP echo messages to determine whether a remote host is active and how much round-trip delay there is between the source and destination. The characters in the ping output have the following meanings:

Character   Description
!           Reply received
.           Time out
U           Destination unreachable
Q           Source quench (destination too busy)
M           Could not fragment
?           Unknown packet type
&           Packet lifetime exceeded

The traceroute exec command is used to discover the specific route packets take in reaching their destination.

BOOTP

The BOOTstrap Protocol was developed to load diskless workstations and was then adapted to provide hosts with the TCP/IP information required for Internet access. A BOOTP client (host) broadcasts a request onto the network. The BOOTP server, which listens for these requests, generates responses from a predefined configuration database. BOOTP differs from DHCP in that it does not use leases; IP information assigned by a BOOTP server does not expire.

BOOTP Procedure. A client uses a BOOTP request to retrieve address and filename information from a server, then a file transfer occurs. The file transfer typically uses TFTP, since both phases are normally implemented in PROM on the client. BOOTP can also work with other protocols such as SFTP or FTP. As an example of the BOOTP procedure, a diskless workstation broadcasts a BOOTP request to server port 67, and the server responds to the request on client port 68. The server provides the client with two pieces of information:

- The IP address of the client and the host name of the server
- The file name required by the client to boot

The client then uses TFTP to get this file from the server and boot.

Dynamic Host Configuration Protocol (DHCP)

A DHCP server can be configured to allocate addresses dynamically to workstations, eliminating the need for static IP addresses. DHCP requests use the BOOTP packet format, so in order for a router to forward these broadcasts toward a DHCP server on another segment, an ip helper-address must be configured on the router. The DHCP server can be configured to provide much more information than just an IP address, including:

- Subnet mask
- Default gateway
- Name resolution servers

A minimal configuration sketch follows this list.
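The following Cisco IOS sketch shows both a router-based DHCP pool and BOOTP/DHCP relay using ip helper-address; all addresses, the pool name, and the lease length are illustrative:

  ! Router acting as a DHCP server
  ip dhcp excluded-address 10.1.1.1 10.1.1.10
  ip dhcp pool LAN-USERS
   network 10.1.1.0 255.255.255.0
   default-router 10.1.1.1
   dns-server 10.1.1.5
   lease 8
  !
  ! Router relaying client broadcasts to a DHCP server on another segment
  interface FastEthernet0/0
   ip helper-address 10.2.2.100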

DHCP information is provided on the basis of a timed lease that allows the server to reclaim and reassign the information for future use.

IP Applications

The Internet Protocol suite includes several application-layer tools, including:


- Telnet: This application provides a remote terminal session to a networked device. Telnet is insecure as it passes all information across the connection, including username and password, in plaintext. This is a consideration when using telnet across the Internet. There are better and more secure remote access applications, such as Secure Shell (SSH).
- File Transfer Protocol (FTP): This protocol provides the ability to move files to and from computers around the world. FTP servers and clients are used to host the data and provide the interface for users to access them. FTP can also perform operations other than file transfers, such as directory listings or file deletions. One of the major limitations is that usernames and passwords are passed in clear text. There are two different FTP modes: passive mode and active mode. With passive mode FTP, both the control and data TCP sessions are initiated from the client. In FTP active mode, the client uses the PORT command to tell the server on which port it expects the server to send the data.
- Trivial File Transfer Protocol (TFTP): This Internet-standard protocol is used to transfer files between local and remote hosts, but it is more limited in scope than FTP because it cannot perform operations other than file transfers, such as directory listings or file deletions. It does not perform any authentication when transferring files, so a username and password on the remote host are not required. TFTP is typically used to load software and configuration files to a network device, and you will often use it on Cisco routers to load new versions of IOS and configuration files, as well as to back up these files for later recovery. TFTP uses UDP port 69 (see the sketch after this list).
- Simple Network Management Protocol (SNMP): This protocol provides information about SNMP-enabled network devices.
- Simple Mail Transfer Protocol (SMTP): This protocol provides electronic mail services. The SMTP protocol does NOT validate a sender's address.
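As a hedged illustration of typical TFTP use on a Cisco router, the following copy commands back up the running configuration and load a new image; the server address and file names are hypothetical:

  copy running-config tftp://10.1.1.50/router1-confg
  copy tftp://10.1.1.50/c2600-ipbase-mz.bin flash: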

Simple Network Management Protocol (SNMP)

SNMP is a standard Application layer (layer-7) protocol for Network Management Stations (such as CiscoWorks) to gather information about networked devices. SNMP uses UDP as its transport. SNMP requires the following three components:

1. SNMP Manager: The server platform that is used to request and set parameters and has overall control of the network. CiscoWorks is Cisco's product for this function.
2. SNMP Agent: Software running on a device that can provide information about the status of the unit.

3. MIB: A Management Information Base (MIB) is a collection of information that is organized hierarchically. MIBs have a logical tree structure, are comprised of managed objects, and use object identifiers for each specific unit and service. The root of the tree is unnamed and splits into three main branches, belonging to different standards organizations, while lower-level object IDs are allocated by associated organizations.

The primary SNMP commands on a Cisco router are under the snmp-server config-mode command. So that you have an idea of the different options you can configure with the snmp-server command, here are the options to that command:

router(config)#snmp-server ?
  chassis-id        String to uniquely identify this chassis
  community         Enable SNMP; set community string and access privs
  contact           Text for mib object sysContact
  context           Create/Delete a context apart from default
  enable            Enable SNMP Traps or Informs
  engineID          Configure a local or remote SNMPv3 engineID
  group             Define a User Security Model group
  host              Specify hosts to receive SNMP notifications
  ifindex           Enable ifindex persistence
  inform            Configure SNMP Informs options
  location          Text for mib object sysLocation
  manager           Modify SNMP manager parameters
  packetsize        Largest SNMP packet size
  queue-length      Message queue length for each TRAP host
  system-shutdown   Enable use of the SNMP reload command
  tftp-server-list  Limit TFTP servers used via SNMP
  trap              SNMP trap options
  trap-source       Assign an interface for the source address of all traps
  trap-timeout      Set timeout for TRAP message retransmissions
  user              Define a user who can access the SNMP engine
  view              Define a SNMPv2 MIB view
router(config)#

To enable SNMP on a Cisco router, enter the command snmp-server community.
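A minimal sketch of basic SNMP configuration on a Cisco router, assuming an NMS at 10.1.1.50 and using illustrative community strings:

  ! Read-only and read-write community strings
  snmp-server community public RO
  snmp-server community private RW
  ! Populate the sysContact and sysLocation MIB objects
  snmp-server contact NOC
  snmp-server location Henderson NV
  ! Send notifications to the NMS as traps (unacknowledged) or informs (acknowledged)
  snmp-server enable traps
  snmp-server host 10.1.1.50 traps version 2c public
  snmp-server host 10.1.1.50 informs version 2c public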

SNMPv1 Operations

Get: Allows the Network Management Server (NMS) to retrieve an object variable from the agent.


GetNext: Allows the NMS to retrieve the next object variable from a table or list within an agent. In SNMPv1, when an NMS wants to retrieve all elements of a table from an agent, it initiates a Get operation, followed by a series of GetNext operations.

Set: Allows the NMS to set values for object variables within an agent.

SNMPv2 Additional Operations (SNMPv2 also supports the SNMPv1 operations)

GetBulk: Allows efficient retrieval of large blocks of data.

Inform: Allows one NMS to send trap information to another NMS and to then receive a response.

In SNMPv2, if the agent responding to GetBulk operations cannot provide values for all the variables in a list, it provides partial results.

SNMPv3

SNMPv3 adds security and remote communication capabilities but otherwise operates in the same way as SNMPv2 and SNMPv1. The main security enhancements defined in SNMP version 3 are authentication, privacy, and access control. SNMPv3 introduces the User-based Security Model (USM) for message security and the View-based Access Control Model (VACM) for access control.

SNMP Communities

SNMPv1 and SNMPv2 both use the concept of communities to establish trust between managers and agents. An agent has three community names (strings): read-only, read-write, and trap. These community names act like a password. The three community strings control different activities:

- The read-only community string provides read-only access to data values. For example, it allows you to read the number of packets that have been transferred through the ports on your router, but doesn't let you reset the counters.
- The read-write community string allows data values to be read and modified. You can read the counters, reset their values, reset the interfaces or change the router's configuration.
- The trap community string allows you to receive traps (asynchronous notifications) from the agent.

SNMPv3 can encrypt all transmissions. SNMPv3 enables the responder (usually an SNMP agent) to:

- Authenticate the user generating the request
- Guarantee the integrity of the message using a digital signature
- Apply complex and granular access-control rules to each request

A minimal SNMPv3 configuration sketch follows this list.
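A sketch of SNMPv3 configuration at the authenticated-and-encrypted level, assuming an IOS release that supports the des56 privacy option; the group name, user name, and key strings are illustrative:

  ! Group requiring authentication and privacy (authPriv)
  snmp-server group ADMIN v3 priv
  ! User in that group with MD5 authentication and DES encryption
  snmp-server user nmsuser ADMIN v3 auth md5 AuthPass123 priv des56 PrivPass123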

Under SNMPv3, the network administrator can specify combinations of these levels of protection (unsecured, authenticated, and authenticated with encryption). An unlimited number of access-control rules can be defined through the SNMP agent or manager. This level of security was impractical in the 1990s, but today's hardware devices can support firmware with advanced SNMP security and full Web management services.

SNMP Traps and Notifications

Either traps or inform requests can be used to send SNMP notifications. Traps are unreliable, since the receiver does not send acknowledgments and the sender cannot determine if the traps were received. Inform requests are acknowledged with an SNMP response PDU. If the sender never receives the response, the inform request can be sent again.

Thus, informs are more likely to reach their intended destination. The disadvantage is that informs consume more resources in the agent and in the network. A trap is discarded as soon as it is sent. An inform request must be held in memory until a response is received or the request times out. Traps are sent only once, while an inform message may be retried several times. The retries increase traffic and contribute to a higher network overhead. GRE Tunnel and Internet Browsing In Figure 3-3, the client establishes a TCP session with the Web server when the client needs access to a page on the Internet. In the process, the client and Web server announce their maximum segment size (MSS), indicating to each other that they will accept TCP segments up to this size. On receipt of the MSS option, each device calculates the size of the segment that can be sent. This is called the Send Max Segment Size (SMSS), and is set to the smaller of the two MSSs.

[Figure 3-3 shows a client at 10.10.10.10 behind router R1, a GRE tunnel between R1 and R2, a gateway router beyond R2, and a Web server at 10.1.3.4 whose interface MTU is 1500 bytes.]

Figure 3-3. GRE tunnel and Internet browsing

Assume that the Web server in Figure 3-3 decides that it can send packet sizes up to 1500 bytes. The server sends a 1500 byte packet to the client. In the IP header, it sets the "don't fragment" (DF) bit. When the packet arrives at R2, the router tries to encapsulate the packet into the tunnel packet. In the case of the GRE tunnel interface, the IP maximum transmission unit (MTU) is 24 bytes less than the IP MTU of the real outgoing interface. For an Ethernet outgoing interface that means the IP MTU on the tunnel interface would be 1500 minus 24, or 1476 bytes. Router R2 is trying to send a 1500 byte IP packet into a 1476 byte IP MTU interface. This isn't possible, so R2 would need to fragment the packet, creating one packet of 1476 bytes (data and IP header) and one packet of 44 bytes (24 bytes of data and a new IP header of 20 bytes). GRE encapsulation on R2 would then turn these into 1500 byte and 68 byte packets, respectively. These packets could now be sent out the real outbound interface, which has a 1500 byte IP MTU. But the server set the DF bit in the packet received by R2. Router R2 can't fragment the packet, and instead, it must instruct the Web server to send smaller packets. It does this by sending an Internet Control Message Protocol (ICMP) type 3 code 4 packet (Destination Unreachable; Fragmentation Needed and DF set). This ICMP message contains the correct MTU to be used by the server. The server receives this message and adjusts the packet size to 1476 bytes.
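A minimal sketch of the GRE tunnel interface on R2, including two optional commands that help avoid the fragmentation problem just described; the addressing is illustrative, and the ip tcp adjust-mss command is only present in IOS releases that support it:

  interface Tunnel0
   ip address 192.168.100.2 255.255.255.252
   ip mtu 1476
   ip tcp adjust-mss 1436
   tunnel source FastEthernet0/0
   tunnel destination 172.16.1.1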

Blocked ICMP Messages


ICMP messages are often blocked along the path to the Web server. When this happens, the ICMP packet never reaches the Web server, and consequently data cannot be passed between client and server.

There are four possible solutions to resolve this problem:

1. Find out where along the path the ICMP message is blocked, and determine if you can get it allowed.
2. Set the MTU on the Client's network interface to 1476 bytes, forcing the SMSS to be smaller, so packets won't have to be fragmented when they reach R2. Note that, if you change the MTU for the Client, you should also change the MTU for all devices that share the network with this Client. On an Ethernet segment, this could be a large number of devices.
3. Use a proxy-server (or, better, a Web cache engine) between R2 and the Gateway router, and let the proxy-server request all Internet pages.
4. If the GRE tunnel runs over links that can have an MTU greater than 1500 bytes plus the tunnel header, then another possible solution is to increase the MTU to 1524 (1500 plus 24 for the GRE overhead) on all interfaces and links between the GRE endpoint routers.

Transport IP/Multiprotocol Label Switching (IP/MPLS)

IP/MPLS has become mainstream in many network backbones because of its efficiency and cost savings. Many mobile operators are adopting IP/MPLS as a technology that will allow them to cap investment in ATM and other Layer 2 infrastructure elements while deploying and supporting 2.5G and 3G services. The move to IP/MPLS includes mobile operators using two main mobile technology roadmaps:

- Global System for Mobile Communications (GSM), running data applications using Enhanced Data Rates for GSM Evolution (EDGE), GPRS, and wireless LANs, and moving to the all-IP network via the Universal Mobile Telecommunications Service (UMTS) architecture
- CDMA, with data services enabled using solutions such as CDMA2000, 1x Radio Transmission Technology (1xRTT), Evolution Data Optimized Overlay (EV-DO), and WLANs

Other mobile operators using different technologies, such as Fast Low-Latency Access with Seamless Handoff-Orthogonal Frequency Division Multiplexing (FLASH-OFDM) and iBurst, are evaluating IP/MPLS for efficient transport of voice, data, and video. IP/MPLS provides a common network backbone to encapsulate and transport mobile traffic. Multiservice IP networking solutions and products for mobile networks from Cisco are helping to transform the design, profitability, and cost-effectiveness of evolving mobile GSM and CDMA networks around the world. Cisco, a pioneer of IP/MPLS, provides important traffic engineering and QoS technologies so mobile operators can deploy IP/MPLS backbones, as thousands of service providers around the world have done, with confidence and operational integrity.


Mobile operators are at different stages of migrating to 3G/4G mobile network services, architectures, and applications. Standards bodies such as 3GPP are recommending IP for mobile network traffic. The development of 4G standards is moving toward IP-addressable mobile devices. Until now, mobile operators have been deploying mobile services for voice, data, and multimedia in disparate parts of their existing networks. Many are now actively engaged in researching or deploying both existing and new mobile services using an IP/MPLS backbone. With its global leadership in IP/MPLS and its broad networking experience, products, and solutions, Cisco is helping mobile operators take advantage of the many compelling benefits of a migration to a converged wireless network backbone using IP/MPLS. Cisco IP/MPLS products and solutions, proven successful through deployments by most wireline service providers worldwide, provide mobile operators the end-to-end traffic engineering, QoS, security, scalability, resiliency, and management enhancements for deploying data-, voice-, and video-based mobile services. These carrier-class, industry-leading features run on the broad Cisco line of powerful hardware, ranging from the Cisco CRS-1 Carrier Routing System, the world's most powerful router; to Cisco 12000 Series edge routers, which can scale from digital signal level 0 (DS0) on channelized interfaces up to multiple STM-64 or 10 Gigabit Ethernet interfaces; to Cisco 7600 Series routers, the best-performing provider edge and enterprise metropolitan-area network (MAN) and WAN routers. Cisco IP/MPLS Traffic Engineering Management and QoS technologies provide these benefits:

Enhanced applications
- Support voice, data, and video traffic types
- Apply various QoS attributes

Optimization
- Optimize the routing of multiple classes and types of traffic
- Optimize the use of available network bandwidth by using alternate paths to move traffic efficiently
- Manage hard bandwidth classes and delay constraints to support strong service levels

End-to-end quality of service
- Enable rapid recovery from disruptions to help ensure fault transparency to users and network applications
- Effectively manage delay and delay variation, bandwidth, and packet loss parameters
- Create greater network resilience to withstand link or node failures

Improved network performance
- More efficiently use transmission capacity
- Monitor service-level agreements (SLAs) with end-to-end service assurance agents
- Monitor overall network performance
- Maintain high network availability

Cisco IP/MPLS Traffic Engineering brings to an IP/MPLS backbone the same traffic engineering capabilities as conventional Layer 2 ATM and Frame Relay. Traffic engineering and QoS features in Cisco IOS Software provide network tools to manage end-to-end mobile voice, data, and video services.

MPLS Overview

Originally developed by Cisco as tag switching in the 1990s, MPLS is now an industry standard, combining connection-oriented and connectionless networking. MPLS does this by intelligent IP routing based on available bandwidth, latency, and utilization information. MPLS complements IP technology, combining intelligent IP routing with the switching paradigm associated with ATM. MPLS has become a core technology for next-generation networks. MPLS allows easier introduction of new IP services, allows for the consolidation of multiple networks, and provides a cost-effective way to use multiple Layer 2 technologies. Mobile operators have been leaders in designing core packet networks based on MPLS. MPLS works by superimposing labels on IP packets at the edge of the MPLS network; the labels are later removed at the destination edge or penultimate hop. In a packet environment, MPLS labels are added between the Layer 2 and the Layer 3 header. In ATM networks, MPLS labels are carried in the virtual path identifier or virtual channel identifier (VPI/VCI) field. One function of MPLS labels is to assign packets to Forwarding Equivalence Classes (FECs). Packets belonging to the same FEC get similar treatment, providing QoS for different traffic types. MPLS has the effect of imposing a connection-oriented framework on a connectionless IP network. This offers a foundation for the same reliable QoS and bandwidth utilization associated with an ATM network, but without ATM's equipment expense and processing overhead. MPLS labels summarize information about how to direct a packet, including:

- Precedence
- Destination
- Packet route specified by traffic engineering
- VPN membership

- QoS information from Resource Reservation Protocol (RSVP)

A minimal sketch of enabling MPLS label switching on a Cisco router follows.
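The following Cisco IOS sketch enables basic MPLS forwarding; the interface and addressing details are illustrative, and MPLS TE or VPN features would require additional configuration:

  ip cef
  !
  interface Serial0/0
   ip address 10.0.0.1 255.255.255.252
   mpls ip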


Labels are used by MPLS to forward traffic across an MPLS-enabled backbone. A packet entering the MPLS domain has a label superimposed on it, and the label (instead of the IP header) determines the next hop. The MPLS label is stripped at the MPLS domain exit. When a labeled packet arrives at a label switch router (LSR), the incoming label determines the packet's path inside the MPLS network. MPLS label forwarding then swaps this label with the appropriate outgoing label and sends the packet on to the next hop. The MPLS backbone can be configured to accept Layer 2 virtual LAN (VLAN) traffic by configuring label edge routers (LERs) at each MPLS backbone end. Packets are assigned MPLS labels by grouping or forwarding class. This MPLS lookup and forwarding system allows explicit control of routing, using source and destination.

Cisco MPLS Traffic Engineering

Cisco MPLS Traffic Engineering's task is to compute the best path from one given node to another without the path violating other constraints, such as latency and available bandwidth. Once a path is computed, traffic engineering then becomes responsible for establishing and maintaining the path forwarding state. The end-to-end MPLS path is called the label switched path (LSP). For a backbone network, the LSP begins at the MPLS ingress router and ends at the MPLS egress router. Cisco IOS Software extends MPLS Traffic Engineering to select the optimal paths across an MPLS network, using bandwidth and administrative decision rules. Each Label Switch Router (LSR) in an IP/MPLS backbone network maintains a traffic engineering link-state database specifying the current network topology. Link-state database changes are distributed by each router forwarding them to all devices to which it is attached (flooding). Flooding is the technique used by the Open Shortest Path First (OSPF) Protocol for link-state database distribution and synchronization between network backbone routers. Older Interior Gateway Protocols (IGPs) make routing decisions using shortest-path algorithms, without considering bandwidth availability or traffic characteristics. MPLS Traffic Engineering, however, makes a virtual topology, constructed from virtual links, appear to the routing protocol to be physical links. This allows constraint-based routing, traffic shaping, traffic policing, and failover. Traffic engineering, in effect, allows you to bypass IGP routing information to send traffic over better paths. Cisco MPLS Traffic Engineering considers resources required by traffic as well as network availability when making routing decisions. Using the routing and signaling capabilities of LSPs across a backbone, Cisco MPLS Traffic Engineering takes into account link bandwidth and traffic flow rates when determining routes for LSPs across the backbone network. MPLS constraint-based routing determines the

shortest path for traffic flows. Recovery from link or node failures is possible because the network adapts to new constraints.

Direct Delivery vs. Indirect Delivery

In direct delivery, both devices are on the same network segment (IP subnet) and no router is required to communicate between them. In indirect delivery the devices are on different network segments (IP subnets) and a router is necessary for the two to communicate.

Internet Protocol Version 6 (IPv6)

IPv6 address types are distinguished by the value of the high-order octet of the address: a value of 0xFF (binary 11111111) identifies an address as a multicast address; 0x00 indicates loopback or unassigned addresses; any other value identifies an address as a Unicast address. Anycast addresses are taken from the Unicast address space, and are not syntactically distinguishable from Unicast addresses. IPv6 addresses can be written in a compressed format by using a double colon to summarize one or more contiguous groups of zeros. Anycast can be understood best by comparing it with Unicast and Multicast. IP Unicast allows a source node to transmit IP datagrams to a single destination node. The destination node is identified by a Unicast address. IP Multicast allows a source node to transmit IP datagrams to a group of destination nodes. A multicast group identifies the destination nodes, and we use a multicast address to identify the multicast group. IP Anycast allows a source node to transmit IP datagrams to a single destination node out of a group of destination nodes. The IP datagram will reach the closest destination node in the set of destination nodes, using the routing protocol's measure of distance. The source node does not need to care about how to pick the closest destination node, as the routing system will figure it out (in other words, the source node has no control over the selection). The set of destination nodes is identified by an Anycast address.

Valid IPv6 Unicast or Anycast addresses:

1080:0:0:0:8:800:200C:417A
1080::8:800:200C:417A

Valid IPv6 Multicast addresses:

FF01:0:0:0:0:0:0:101
FF01::101

Valid IPv6 Loopback addresses:

0:0:0:0:0:0:0:1
::1
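As a hedged Cisco IOS sketch of putting these address types to use, the following enables IPv6 routing and assigns a global and a link-local address to an interface; the prefix is from the 2001:DB8::/32 documentation range and is illustrative:

  ipv6 unicast-routing
  !
  interface FastEthernet0/0
   ipv6 address 2001:DB8:0:1::1/64
   ipv6 address FE80::1 link-local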

Unicast IPv6

A unicast address identifies a single interface within the scope of the unicast address type. The following list shows the types of IPv6 addresses:

- Aggregatable global unicast addresses
- Link-local addresses
- Site-local addresses
- Special addresses, including unspecified and loopback addresses
- Compatibility addresses, including 6to4 addresses

With the appropriate unicast routing topology, packets addressed to a unicast address are delivered to a single interface.

Aggregatable global unicast addresses

Aggregatable global unicast addresses, also known as global addresses, are identified by the Format Prefix of 001. Addresses of this type are designed to be aggregated or summarized to produce an efficient routing infrastructure. They are equivalent to public IPv4 addresses. Unlike the current IPv4-based Internet, which has a mixture of both flat and hierarchical routing, the IPv6-based Internet has been designed from its foundation to support efficient, hierarchical addressing and routing. Aggregatable global unicast addresses are globally routable and reachable on the IPv6 portion of the Internet. The region of the Internet over which the aggregatable global unicast address is unique (the scope) is the entire IPv6 Internet. The following illustration shows the fields of the aggregatable global unicast address:
[The aggregatable global unicast address format consists of, in order: the Format Prefix 001 (3 bits), the TLA ID (13 bits), a Res field (8 bits), the NLA ID (24 bits), the SLA ID (16 bits), and the Interface ID (64 bits).]

Table 3-1 shows the fields in the aggregatable global unicast address.

Table 3-1. Aggregatable Global Unicast Fields

Format Prefix: Aggregatable global unicast addresses are identified by the Format Prefix of 001.

TLA ID: Indicates the Top Level Aggregator (TLA) and identifies the highest level in the routing hierarchy. TLAs are administered by the Internet Assigned Numbers Authority (IANA) and allocated to local Internet registries that, in turn, allocate individual TLA IDs to large, global Internet service providers (ISPs). A 13-bit field allows up to 8,192 different TLA IDs. Routers in the highest level of the IPv6 Internet routing hierarchy do not have a default route. These default-free routers have routes with 16-bit prefixes that correspond to the allocated TLAs.

Res: Reserved for future use in expanding the size of either the TLA ID or the Next Level Aggregator (NLA) ID.

NLA ID: Indicates the Next Level Aggregator (NLA) for the address, and is used to identify a specific customer site. The NLA ID allows an ISP to create multiple levels of addressing hierarchy, to organize addressing and routing, and to identify sites. The structure of the ISP's network is not visible to default-free routers.

SLA ID: Indicates the Site Level Aggregator (SLA) for the address, and is used by an organization to identify subnets within its site. The organization can use the 16 bits within its site to create 65,536 subnets or multiple levels of addressing hierarchy and an efficient routing infrastructure. With 16 bits of subnetting flexibility, an aggregatable global unicast prefix assigned to an organization is equivalent to that organization being allocated an IPv4 Class A network ID, assuming that the last octet is used for identifying nodes on subnets. The structure of the customer's network is not visible to the ISP.

Interface ID: Indicates the interface of a node on a specific subnet.

The following illustration shows how the fields within the aggregatable global unicast address create a three-level topological structure.
[In this structure, the Format Prefix (001), TLA ID, Res, and NLA ID (48 bits in total) make up the public topology; the SLA ID (16 bits) makes up the site topology; and the Interface ID (64 bits) is the interface identifier.]

Table 3-2 shows the topological Internet routing structure and the associated definitions.

Table 3-2. Topological Internet Routing Structure

Public: The collection of larger and smaller ISPs that provide access to the IPv6 Internet.

Site: The collection of subnets within an organization's site.

Interface identifier: Identifies a specific interface on a subnet within an organization's site.

For more information about aggregatable global unicast addresses, see RFC 2374.

Link-local addresses


Link-local addresses are used by nodes when communicating with neighboring nodes on the same link. For example, on a single link IPv6 network with no router, link-local addresses are used to communicate between hosts on the link. Link-local addresses are equivalent to Automatic Private IP Addressing (APIPA) IPv4 addresses using the 169.254.0.0/16 prefix. The scope of a link-local address is the local link. An IPv6 router never forwards link-local traffic beyond the link. A link-local address is required for Neighbor Discovery processes and is always automatically configured, even in the absence of all other unicast addresses. Link-local addresses are identified by the Format Prefix of 1111 1110 10. The address always begins with FE80. With the 64-bit interface identifier, the prefix for link-local addresses is always FE80::/64. Site-local addresses Site-local addresses are used between nodes that communicate with other nodes in the same site. Site-local addresses are identified by the Format Prefix of 1111 1110 11. They are equivalent to the IPv4 private address space, 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. For example, private intranets that do not have a direct, routed connection to the IPv6 Internet can use site-local addresses without conflicting with aggregatable global unicast addresses. Site-local addresses are not reachable from other sites, and routers must not forward site-local traffic outside of the site. Site-local addresses can be used in addition to aggregatable global unicast addresses. The scope of a site-local address is the site, which is the organization internetwork. Unlike link-local addresses, site-local addresses are not automatically configured and must be assigned through the stateless address configuration process. The first 48-bits are always fixed for site-local addresses, beginning with FEC0::/48. After the 48 fixed bits is a 16-bit subnet identifier (Subnet ID field) that provides 16 bits with which you can create subnets within your organization. With 16 bits, you can have up to 65,536 subnets in a flat subnet structure, or you can subdivide the high-order bits of the Subnet ID field to create a hierarchical and aggregatable routing infrastructure. After the Subnet ID field is a 64-bit Interface ID field that identifies a specific interface on a subnet. The aggregatable global unicast address and site-local address share the same structure beyond the first 48 bits of the address. In aggregatable global unicast addresses, the SLA ID identifies the subnet within an organization. For site-local addresses, the Subnet ID performs the same function. Because of this, you can

assign a specific subnet number to identify a subnet that is used for both site-local and aggregatable global unicast addresses.

Special addresses

Table 3-3 shows the special IPv6 addresses.

Table 3-3. Special IPv6 Addresses

Unspecified address: The unspecified address, 0:0:0:0:0:0:0:0 or ::, indicates the absence of an address, and is typically used as a source address for packets that are attempting to verify the uniqueness of a tentative address. It is equivalent to the IPv4 unspecified address of 0.0.0.0. The unspecified address is never assigned to an interface or used as a destination address.

Loopback address: The loopback address, 0:0:0:0:0:0:0:1 or ::1, identifies a loopback interface, enabling a node to send packets to itself. It is equivalent to the IPv4 loopback address of 127.0.0.1. Packets addressed to the loopback address are never sent on a link or forwarded by an IPv6 router.

Compatibility addresses

IPv6 provides 6to4 addresses to aid in the migration from IPv4 to IPv6 and to facilitate the coexistence of both types of hosts. The 6to4 address is used for communicating between two nodes running both IPv4 and IPv6 over an IPv4 routing infrastructure. The 6to4 address is formed by combining the prefix 2002::/16 with the 32 bits of the public IPv4 address of the node, forming a 48-bit prefix. For example, for the IPv4 address of 131.107.0.1 (hexadecimal 836B:0001), the 6to4 address prefix is 2002:836B:1::/48.

Multicast IPv6

A multicast address identifies multiple interfaces, and is used for one-to-many communication. With the appropriate multicast routing topology, packets addressed to a multicast address are delivered to all interfaces that are identified by the address. IPv6 multicast addresses have the Format Prefix of 1111 1111. An IPv6 address is simple to classify as multicast because it always begins with FF. Multicast addresses cannot be used as source addresses. Multicast addresses include additional structure to identify their flags, scope, and multicast group, as shown in the following illustration.

[The IPv6 multicast address format: Format Prefix 1111 1111 (8 bits) | Flags (4 bits) | Scope (4 bits) | Group ID (112 bits)]
The fields in the multicast address are shown in Table 3-4.

Table 3-4. IPv6 Multicast Address Fields

Format Prefix: IPv6 multicast addresses are identified with the Format Prefix of 1111 1111.

Flags: Indicates flags that are set on the multicast address. As of RFC 2373, the only flag defined is the Transient (T) flag, which uses the low-order bit of the Flags field. If 0, the multicast address is a permanently assigned, well-known multicast address allocated by the Internet Assigned Numbers Authority (IANA). If 1, the multicast address is not permanently assigned (transient).

Scope: Indicates the scope of the IPv6 internetwork for which the multicast traffic is intended. In addition to information provided by multicast routing protocols, routers use the multicast scope to determine whether multicast traffic can be forwarded. The scopes defined in RFC 2373 include node-local (scope field value 1), link-local (2), site-local (5), organization-local (8), and global (E). For example, traffic with the multicast address of FF02::2 has a link-local scope. An IPv6 router never forwards this traffic beyond the local link.

Group ID: Identifies the multicast group and is unique within the scope. Permanently assigned group IDs are independent of the scope, while transient group IDs are only relevant to a specific scope. Multicast addresses from FF01:: through FF0F:: are reserved, well-known addresses. It is possible to have 2^112 group IDs. However, because of the way in which IPv6 multicast addresses are mapped to Ethernet multicast MAC addresses, RFC 2373 recommends assigning the Group ID from the low-order 32 bits of the IPv6 multicast address and setting the remaining original group ID bits to 0. By using only the low-order 32 bits, each group ID maps to a unique network interface multicast MAC address.

To identify all nodes for the node-local and link-local scopes, the following multicast addresses are defined:

- FF01::1 (node-local scope all-nodes address)
- FF02::1 (link-local scope all-nodes address)

To identify all routers for the node-local, link-local, and site-local scopes, the following multicast addresses are defined:

- FF01::2 (node-local scope all-routers address)
- FF02::2 (link-local scope all-routers address)
- FF05::2 (site-local scope all-routers address)

Solicited-node address The solicited-node address facilitates efficient querying of network nodes during address resolution. IPv6 uses the Neighbor Solicitation message to perform address resolution. In IPv4, the ARP Request frame is sent to the MAC-level broadcast, disturbing all nodes on the network segment regardless of whether a node is running IPv4. For IPv6, instead of disturbing all IPv6 nodes on the local link by using the local-link scope all-nodes address, the solicited-node multicast address is used as the Neighbor Solicitation message destination. The solicited-node multicast address consists of the prefix FF02::1:FF00:0/104 and the last 24-bits of the IPv6 address that is being resolved. The following steps show an example of how the solicited-node address is handled for the node with the link-local IPv6 address of FE80::2AA:FF:FE28:9C5A, and the corresponding solicited-node address is FF02::1:FF28:9C5A:

Chapter 3: Internet Protocol (IP)

108

1. To resolve the FE80::2AA:FF:FE28:9C5A address to its link layer address, a node sends a Neighbor Solicitation message to the solicited-node address of FF02::1:FF28:9C5A.
2. The node using the address of FE80::2AA:FF:FE28:9C5A is listening for multicast traffic at the solicited-node address FF02::1:FF28:9C5A. For interfaces that correspond to a physical network adapter, it has registered the corresponding multicast address with the network adapter.

As shown in this example, by using the solicited-node multicast address, address resolution that commonly occurs on a link can occur without disturbing all network nodes. In fact, very few nodes are disturbed during address resolution. Because of the relationship between the network interface MAC address, the IPv6 interface ID, and the solicited-node address, in practice, the solicited-node address acts as a pseudo-unicast address for efficient address resolution.

Multicast Listener Discovery

Multicast Listener Discovery (MLD) enables you to manage subnet multicast membership for IPv6. MLD is a series of three Internet Control Message Protocol for IPv6 (ICMPv6) messages that replaces the Internet Group Management Protocol (IGMP) that is used for IPv4. The purpose of Multicast Listener Discovery (MLD) is to enable each IPv6 router to discover the presence of multicast listeners (that is, nodes wishing to receive multicast packets) on its directly attached links, and to discover specifically which multicast addresses are of interest to those neighboring nodes. This information is then provided to whichever multicast routing protocol is being used by the router, in order to ensure that multicast packets are delivered to all links where there are interested receivers. MLD is an asymmetric protocol, specifying different behaviors for multicast listeners and for routers. For those multicast addresses to which a router itself is listening, the router performs both parts of the protocol, including responding to its own messages. If a router has more than one interface to the same link, it need perform the router part of MLD over only one of those interfaces. Listeners, on the other hand, must perform the listener part of MLD on all interfaces from which an application or upper-layer protocol has requested reception of multicast packets. The use of multicasting in IP networks is defined as a TCP/IP standard in RFC 1112. This RFC defines addresses and host extensions for the way IP hosts support multicasting. The same concepts, originally developed for IP version 4 (IPv4), also apply to IPv6. MLD is defined in RFC 2710. MLD messages are used to determine group membership on a network segment, also known as a link or subnet. MLD messages are sent as ICMPv6 messages. MLD message types are described in Table 3-5:

Table 3-5. IPv6 Multicast Listener Discovery (MLD) Message Types

Multicast Listener Query: Sent by a multicast router to poll a network segment for group members. Queries can be general, requesting group membership for all groups, or can request group membership for a specific group.

Multicast Listener Report: Sent by a host when it joins a multicast group, or in response to an MLD Multicast Listener Query sent by a router.

Multicast Listener Done: Sent by a host when it leaves a host group and is the last member of that group on the network segment.

Chapter 3 Questions

3-1.


Given a class network with 3 bits of subnetting, what will the mask be? a) 255.255.224.0 b) 255.255.248.0 c) 255.255.255.224 d) All of the above e) None of the above

3-2.

Why is TTL useful? a) Life is short, enjoy it while you can b) To prevent the delivery of packets that have outlived their usefulness c) To keep looping packets from impacting the network d) A mechanism used to keep route tables smaller

3-3.

Which of the following subnet masks is invalid? a) 255.224.0.0 b) 255.255.0.128 c) 224.0.0.0 d) 255.255.255.192 e) 255.255.242.0

3-4.

Which subnet mask gives you 30 hosts per network? a) 255.255.255.128 b) 255.255.255.192 c) 255.255.255.224 d) 255.255.255.240 e) 255.255.255.248

3-5.

What is a valid subnet mask, using 5 bits of a class C? a) 255.255.255.240 b) 255.255.240.0 c) 255.255.255.248 d) 255.255.248.0

3-6.

If you have a host of 192.168.1.35 and a subnet mask of 255.255.255.192, what is the subnet's address? a) 192.168.0.0 b) 192.168.1.0 c) 192.168.1.32 d) 192.168.1.16

3-7. A 32-bit number written in dotted-decimal format where the most significant bits determine the network classification is: a) Classless IP address b) Classful IP address c) IP address with variable subnet mask d) IP address based on subnet zero

3-8. What is ICMP used for? a) Congestion control b) Error messages c) Packet segmentation d) Control messages

3-9. The components of an IPv4 address are: a) 48 bits: 32 bits of network, 16 bits of host b) 32 bits: fixed 24-bit network and 8-bit host c) 32-bit dotted decimal address with 4 octets: area, network, range, host d) 32 bits (4 octets) broken into network and host

3-10. What is true about IP addresses? a) IP addresses are listed in dotted-decimal format b) IP addresses can be supernetted c) IP addresses have four octets d) IP addresses have 32 bits e) All of the above

3-11. What was the TCP/IP protocol designed for? a) To match the OSI Reference Model exactly b) As a standard, non-vendor specific way to create internetworks c) As the most efficient method of packet transport d) As a LAN only protocol at first, but for more uses later e) As a management protocol for WANs

3-12. Given an IP address of 10.22.3.4 and a subnet mask of 255.255.255.240, what would the broadcast address be?

a) 10.22.3.255 b) 10.22.1.255 c) 10.22.1.15 d) 10.22.3.15


3-13. If you have a VLSM network that had been a class B, but you've added 6 bits to the network mask, which netmask would this be?

a) /20 b) /21 c) /22 d) /23 e) /24 f) /25

3-14. What is the standard transport protocol and port used for SYSLOG messages? a) UDP 514 b) TCP 520 c) UDP 530 d) TCP 540 e) UDP 535


3-15. A new Syslog server is being installed in the EnableMode network to accept network management information. What characteristic applies to these Syslog messages? (Select three) a) Its transmission is reliable. b) Its transmission is secure. c) Its transmission is acknowledged. d) Its transmission is not reliable. e) Its transmission is not acknowledged. f) Its transmission is not secure.

3-16. A user is having problems reaching hosts on a remote network. No routing protocol is running on the router and it is using only a default route to reach all remote networks. An extended ping is used on the local router and a remote file server with IP address 10.5.40.1 is pinged. The results of the ping command produce 5 "U" characters. What does the result of this command indicate about the network? a) An upstream router in the path to the destination does not have a route to the destination network. b) The local router does not have a valid route to the destination network. c) The ICMP packet successfully reached the destination, but the reply from the destination failed. d) The ping was successful, but congestion was experienced in the path to the destination. e) The packet lifetime was exceeded on the way to the destination host.

3-17. You are implementing NAT (Network Address Translation) on the BootCamp network. Which of the following are features and functions of NAT? (Choose all that apply)

a) Dynamic network address translation using a pool of IP addresses. b) Destination based address translation using either route maps or extended access-lists. c) NAT overloading for many to one address translations. d) Inside and outside source static network translation that allows overlapping network address spaces on the inside and the outside. e) NAT can be used with HSRP to provide for ISP redundancy. f) All of the above.

3-18. Which attributes should a station receive from a DHCP server? a) IP address, network mask, MAC address and DNS server b) IP address, DNS, default gateway and MAC address c) IP address, network mask, default gateway and host name d) IP address, network mask, default gateway and MAC address e) None of the above

3-19. Which Cisco specific method should be configured on routers to support the need for a single default gateway for LAN hosts when there are two gateway routers providing connectivity to the network? a) DHCP b) RIP c) OSPF d) HSRP e) VRRP

3-20. You are attempting to properly subnet the IP space of one of the BootCamp locations. For the network 200.10.10.0 there is a need for 3 loopback interfaces, 2 point to point links, one Ethernet with 50 stations and one Ethernet with 96 stations. What option below would be the most efficient (for saving IP addresses)? a) 200.10.10.1/32, 200.10.10.2/32, 200.10.10.3/32, 200.10.10.4/30, 200.10.10.8/30, 200.10.10.64/26, 200.10.10.128/25 b) 200.10.10.0/32, 200.10.10.1/32, 200.10.10.2/32, 200.10.10.4/32, 200.10.10.8/31, 200.10.10.64/26, 200.10.10.128/25 c) 200.10.10.1/32, 200.10.10.2/32, 200.10.10.3/32, 200.10.10.4/31, 200.10.10.8/32, 200.10.10.64/27, 200.10.10.128/27 d) 200.10.10.1/31, 200.10.10.2/31, 200.10.10.3/31, 200.10.10.4/30, 200.10.10.8/30, 200.10.10.64/27, 200.10.10.128/26 e) There are not enough addresses available on that network for these subnets.

3-21. What option is the best way to apply CIDR if a service provider wants to summarize the following addresses: 200.1.0.0/16, 200.2.0.0/16, 200.3.0.0/16, 200.5.0.0/16, 200.6.0.0/16, 200.7.0.0/16? a) 200.0.0.0/14, 200.4.0.0/15, 200.6.0.0/16, 200.7.0.0/16


b) 200.0.0.0/16 c) 200.4.0.0/14, 200.2.0.0/15, 200.2.0.0/16, 200.1.0.0/16 d) 200.4.0.0/14, 200.2.0.0/15, 200.1.0.0/16 e) 200.0.0.0/18


3-22. Which Network Address Translation type describes the internal network that uses private network addresses? a) Inside local b) Inside global c) Outside local d) Outside global e) None of the above 3-23. A system administrator of BootCamp.com is using a private IP address space for the company network with many to one NAT to allow the users to have access to the Internet. Shortly after this, a web server is added to the network. What must be done to allow outside users access to the web server via the Internet? a) Use a dynamic mapping with the reverse keyword. b) Place the server's internal IP address in the external NAT records. c) There must be a static one to one NAT entry for the web server's address. d) Nothing more needs to be done as dynamic NAT is automatic. e) Place the server's IP address into the NAT pool. 3-24. A diskless workstation boots up and uses BOOTP to obtain the information it needs from a BOOTP server. How will the diskless client obtain the information it needs from the server? a) The BootP client will use a telnet application to connect to the server, after which the client will use the DHCP server to get hold of the memory image. b) The BootP client will obtain the memory image after which the client will use a second protocol to gather the necessary information. c) The BootP client will use a second protocol to gather the necessary information, and then the BootP server will send memory image. d) The BootP server will gather and provide the client with the information necessary to obtain an image and then the client will use a second protocol to obtain the memory image. e) None of above) 3-25. Which of the following DNS resource records are valid? (Choose all that apply) a) NS b) PTR C) MX

115 Chapter 3: Internet Protocol (IP) d) FQDN e) A f) None of the above 3-26. Select the mode that NTP servers can associate with each other: a) Client and Server b) Peer c) Broadcast/Multicast d) and C e) All the above 3-27. A customer wants to install a new Frame Relay router in their network. One goal is to ensure that the new router has the correct configuration to maintain a consistent time and date, like the other routers in the network. The customer wants to configure the new router to periodically poll a UNIX server that has a very reliable and stable clock for the correct time. This will synchronize the new router's clock with the UNIX server. What command should be configured on the new router to synchronize its clock with a centralized clock service? a) ntp master b) ntp server c) ip ntp clock d) ntp peer e) sntp master f) All of the above 3-28. What should be configured on redundant routers to support the need for a default gateway on LAN network hosts when there are two gateway routers providing connectivity to the rest of the network? a) DHCP b) RIP c) OSPF d) HSRP e) 3-29. With regard to the File Transfer Protocol (FTP), which of the following statements are true? a) FTP always uses one TCP session for both control and data. b) With passive mode FTP, both the control and data TCP sessions are initiated from the client. c) With active mode FTP, the server used the port command to tell the client on which port it wished to send the data.


d) FTP always uses TCP port 20 for the data session and TCP port 21 for the control session. e) FTP always uses TCP port 20 for the control session and TCP port 21 for the data session. 3-30. You use a telnet application to access your Internet router. What statement is true about the telnet application? a) Telnet does not use a reliable transport protocol. b) Telnet is a secure protocol because it encrypts every message sent. c) Telnet sends user names, passwords and every other message in clear text. d) Telnet encrypts user names, passwords but sends every other message in clear text. e) Telnet uses UDP as transport protocol. 3-31. What is the method used by SMTP servers on Internet to validate the email address of the message sender? a) It checks the user address with the MTA sending the message. b) It validates the domain of the sender address with a DNS server. c) It does not check the sender address. d) It checks if the IP address of the MTA sending the message is not spoofed) e) It checks if the domain of the MTA sending the message matches with the domain of the sender of the message) 3-32. Upon which protocol or protocols does TFTP rely on? a) IP and TCP b) NFS c) FTP d) UDP e) ICMPand UDP f)TCP 3-33. Identify the TCP port numbers with their associated programs: 443, 389, 137, 110, and 23 in the proper sequence: a) BGP, POP3, SNMP, TFTP, Telnet b) LDAP, SNMP, TFTP, POP3, Telnet c) HTTPS, SNMP, POP3, DNS, Telnet d) Finger, DHCP Server, NetBios Name Server, POP3, Telnet e) HTTPS, LDAP, NetBios Name Server, POP3, Telnet f) None of the above 3-34. In your network, you want the ability to send some traffic around less congested links. To do this, you want to bypass the normal routed hop-

117 Chapter 3: Internet Protocol (IP) by-hop paths. What technology should you implement? What should you use? a) Traffic engineering b) Traffic tunneling c) Traffic policing d) Traffic shaping e) Traffic routing 3-35. What best describes the IPv6 Solicited-node Multicast address? a) For each unicast and anycast addresses configured on an interface of the node or a router, a corresponding solicited-node multicast addresses is automatically enabled. b) The solicited-node multicast address is scoped at the local link. c) Since ARP is not used the in IPv6, the solicited-node multicast addresses is used by nodes and router to learn the link layer address of the neighbor nodes and routers on the same local link. d) Duplicate Address Detection (DAD) is used to verify if the IPv6 address is already in used on it's local link, before it configure it's own IPv6 address with stateless auto-configuration, Solicited-node multicast addresses probe the local link to make sure. e) All of the above f) None of the above 3-36. What is a main difference between the IPv6 and IPv4 multicast? a) IPv6 has significantly more address space (128 bits), so overlapping addresses are less likely. b) Multicast Listener Discovery (MLD) replaces IGMP in IPv6 multicasts. c) MSDP and dense mode multicast is not part of IPv6 multicast. d) The first 8 bits of Ipv6 Multicast address are always FF (1111 1111). e) All of the above 3-37. What best describes the functionality of the Multicast Listener Discovery (MLD)? a) IPv6 routers use MLD to discover multicast listeners on directly attached links. b) For each Unicast and Anycast addresses configured on an interface of the node or a router, a corresponding entry is automatically enabled. c) The MLD addresses is scoped to the local link. d) Since the ARP is not used in the IPv6, the MLD is used by nodes and routers to learn the link layer address of the neighbor nodes and routers on the same local link. e) MLD is used to verify if the IPv6 address is already in use on it's local link, before it configure it's own IPv6 address with stateless autoconfiguration.

3-38.


What best describes the functionality of the Multicast Listener Discovery (MLD)? a) IPv6 routers use MLD to discover multicast listeners on directly attached links. b) For each Unicast and Anycast addresses configured on an interface of the node or a router, a corresponding entry is automatically enabled. c) The MLD addresses is scoped to the local link. d) Since the ARP is not used in the IPv6, the MLD is used by nodes and routers to learn the link layer address of the neighbor nodes and routers on the same local link. e) MLD is used to verify if the IPv6 address is already in use on it's local link, before it configure it's own IPv6 address with stateless autoconfiguration.

3-39. Which types of SNMPvl messages are sent from the NMS (Network Management Station) using SNMP version 1 to the Agent? a) Trap, Get and Set b) Get, Set and Getnext c) Get, Set, Getnext and GetBulk d) Get, Set and GetBulk e) Trap only 3-40. What is the difference between the community formats of SNMPvl SNPMv2c? a) With SNPMvl, communities are sent as clear text and on SNPMv2c they are encrypted. b) On SNPMvl communities are encrypted and on SNPMv2c they are sent as clear text. c) There is no difference because both versions send encrypted communities. d) There is no difference because both versions send communities as clear text. e) SNMPv2c does not use communities. 3-41. Network management tools use Management Information Base (MIB) information to monitor and manage networks. Which of the following is NOT part of the MIB-2 specification, as defined in RFC 1213? (Choose all that apply) a) The System Group b) The TCP Group c) The Transmission Group d) The Enterprises Group e) The RMON Group f) The ICMP Group

119 Chapter 3: Internet Protocol (IP) 3-42. Which statements are true about the purpose and functionality between SNMP and MIBs? (Select three) a) A Management Information Base (MIB) is a collection of information that is organized hierarchically. b) A Management Information Base (MIB) is a collection of network device information that is organized in a bulk transfer mode to the management station. c) MIBs are accessed using a network-management protocol such as SNMP. d) MIBs are accessed using a network-management protocol such as TCP. e) MIBs are comprised of managed objects and are identified by the object identifiers. f) MIBs are comprised of managed objects and are identified by the Imhosts table. 3-43. Which options are true regarding the privacy capability using cryptography and the authentication method for SNMPvl, SNMPv2c and SNMPv3? (Choose all that apply) a) SNMPvl has no privacy and uses community for authentication. b) SNMPv2c has privacy and uses community for authentication. c) SNMPv2c has privacy and uses usernames for authentication. d) SNMPv3 has privacy and use community for authentication. e) SNMPv3 has privacy and uses usernames for authentication. 3-44. Which security features are defined in SNMPv3? (Select all that apply)

a) Authentication
b) Domain checking c) Accounting d) Privacy 3-45. What SNMP message type reports events to the NMS reliably?

a) Get
b) Response c) Inform d) Trap

e) Get Bulk

Chapter 3: Internet Protocol (IP) Chapter 3 Answers 3-1 3-2 3-3 3-4 3-5 3-6 3-7 3-8 3-9 3-10 3-11 3-12 3-13 3-14 3-15 3-16 3-17 3-18 3-19 3-20 3-21 3-22 3-23 3-24 3-25 3-26 3-27 3-28 3-29 3-30 3-31 3-32 3-33 3-34 3-35 3-36 3-37 a c b, c c c c b, d d b d c a d, e, f a a, b, c, d a d a c d a, b, c b b d b c c d a a


121 Chapter 3: Internet Protocol (IP) 3-38 3-39 3-40 3-41 3-42 3-43 3-44 3-45 a b d d, a, c, a, e a, d c

Chapter 4

IP Routing Protocols
Routing Protocol Concepts
Convergence - The process of bringing the routing tables on all the routers in the network to a consistent state. Different routing protocols will converge at different rates depending on their design. Link State protocols will usually converge faster than Distance Vector protocols. Convergence time is how long it takes for all the routers in a given system to share information.
Load Balancing - Allows the transmission of packets to a specific destination over two or more paths. This can depend on equal or unequal cost, and can be configured per packet or per destination. Quite often it takes some thought and manual configuration to achieve the desired result.
Static Routing - The information in a router's route table can be built manually through static route entries, or dynamically through a routing protocol. Static routes can point to a specific host, a network, a subnet, or a supernet. You can also have floating static routes: routes that have an Administrative Distance (AD) set higher than that of the in-use dynamic routing protocol. If the route learned through the dynamic routing protocol is lost, then the floating static route will come into play. This provides a pre-configured automatic fallback route.

IOS command to add a static route:
ip route 192.168.10.0 255.255.255.0 192.168.1.1
With a floating static route:
ip route 192.168.10.0 255.255.255.0 192.168.1.1 105 *
*The last argument of "105" on the ip route command is the administrative distance assigned to the route.
Metrics - All routing protocols use metrics to calculate the best path. Some protocols use simple metrics, such as RIP, which uses hop count. Others, such as EIGRP, use more meaningful information. Other metrics that you may encounter include load, delay, reliability and cost. Sometimes system administrators will manually configure the metrics on a router to control the routing behavior of their network.
Route Flapping - The frequent changing of preferred routes as an interface or router goes into and out of operation (error condition). This process can create

problems in a network, especially in complex OSPF networks, as this information will cause the routers to constantly recalculate their OSPF database and flood the network with LSAs.
Autonomous Systems (ASs) - A group of routers sharing a single routing policy, which run under a single technical administration, and commonly with a single Interior Gateway Protocol (IGP). Each AS has a unique identifying number between 1 and 65,535 (64,512 through 65,535 are set aside for private use), usually assigned by an outside authority. Passing routing information between ASs is performed through an exterior gateway protocol, such as BGP.
Route Tagging - Provides the capability to have flexible policy controls by attaching a 32-bit tag value, specific to the local routing domain, to external routes. This is particularly useful in transit ASs, where the IGP interacts with BGP.
Periodic Updates - Routing protocols that use this technique send their routing updates at a consistent update interval, typically anywhere from 10 to 90 seconds. If the link speed is slow, or if the link is DDR, this process can present problems in some networks. If the update interval is too high, then the convergence speed of the network may suffer.
Passive-Interface - Prevents interfaces from sending routing updates. They will, however, continue to listen for updates. This command is applied in the router configuration, and specifies a physical interface (a configuration sketch follows the loop-prevention methods below).
Routing Loops
Routing loops occur when the routing tables of some or all of the routers in a given domain route a packet back and forth without ever reaching its final destination. Routing loops often occur during route redistribution, especially in networks with multiple redistribution points. There are several commonly used methods for preventing routing loops, including:
Holddowns - Routes are held for a specified period of time to prevent updates advertising networks that are possibly down. The period of time varies between routing protocols, and is configurable. Holddown timers should be set very carefully - if they are too short, they are ineffective; too long and convergence will be delayed.
Triggered updates - Also known as flash updates, these are sent immediately when a router detects that a metric has changed or a network is no longer available. This helps speed convergence. Instead of waiting for a certain time interval to elapse to update the routing tables, the new information is sent as soon as it is learned.
Split horizon - If a router has received a route advertisement from another router, it will not re-advertise it back out the interface from which it was learned.


Poison reverse - Once you learn of a route through an interface, advertise it as unreachable back through that same interface.
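As described under Passive-Interface above, suppressing updates on an interface takes a single command under the routing process. A minimal, hedged sketch (the RIP process, interface name and network are arbitrary examples, not taken from the text):

Router(config)# router rip
Router(config-router)# network 10.0.0.0
! Ethernet0 still listens for RIP updates, but no longer sends them
Router(config-router)# passive-interface ethernet0

The same command exists under most IGP processes; for OSPF and EIGRP it also suppresses hellos, so no adjacency will form on a passive interface.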

Administrative Distance
When a route is advertised by more than one routing protocol, the router must decide which protocol's routes to use. The predefined Administrative Distances of routing protocols allow the router to make that decision, more or less telling the router the relative trustworthiness of the different protocols. For example, if a router were running two routing protocols such as OSPF and RIP, and both processes presented a route to a specific destination network, then by default the OSPF route would be put into the router's route table, as OSPF's AD has a lower value than RIP's AD (lower value = more trusted). Remember that the values given below are defaults, and the network engineer has the power to change them. Under each routing protocol's configuration, you can set the default administrative distance for the routes created by that protocol. Also, when you create a static route, you can define its administrative distance. You may need to give the static route a lower administrative distance than a route from a particular routing protocol and, thus, force the static route to be chosen over the dynamic route. In the case of configuring dial backup using static routes, the reverse is done: a route using the dial backup interface with a higher administrative distance is created.
Common ADs:
Route Source - AD
Directly Connected - 0
Static - 1
EIGRP Summary route - 5
EBGP - 20
EIGRP (Internal) - 90
IGRP - 100
OSPF - 110
ISIS - 115
RIP - 120
EGP - 140
ODR (On Demand Routing) - 160
EIGRP (External) - 170
IBGP - 200
BGP Local - 200
Unknown - 255
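To make this concrete, here is a hedged sketch of the two techniques just described; the addresses, the AD values and the choice of RIP are illustrative assumptions, not taken from the text:

! Floating static route: AD 250 is higher than any IGP default, so the route
! is used only if the dynamically learned route to 192.168.10.0/24 disappears
Router(config)# ip route 192.168.10.0 255.255.255.0 192.168.1.1 250

! Changing the default AD of a routing protocol (RIP shown here)
Router(config)# router rip
Router(config-router)# distance 89

With distance 89, RIP routes (normally AD 120) would now win over OSPF routes (AD 110) on this router.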

Access Control Lists (ACLs)
The most commonly configured method to filter anything on a Cisco router is an access-list. Access lists provide a network engineer with the tools necessary to control the flow of traffic and routing updates. There is more discussion of ACLs in Chapter 5 and Chapter 8. Different routing protocols offer different ways to allow particular routing updates in, out, or modify the updates in some manner. For instance, RIP and OSPF only support distribute-lists to control the routes that come in or go out. On the other hand, BGP supports distribute-lists, route-maps, prefix-lists, and filter-lists.
Redistribution
Redistribution is the process of sharing routes learned from different sources (usually routing protocols). For instance, you might redistribute the routes learned through OSPF to a RIP domain, in which case you might have problems with VLSM; or you might redistribute routes learned through static entries into EIGRP. With some routing protocols, like EIGRP, you can redistribute routes from one process of EIGRP to another process. In any case, redistribution is just the sharing of information learned from different sources, and it must be manually configured. When it comes to redistribution of routes from one routing protocol to another, route-maps are usually used to filter the redistributed routes. Each of the different methods used to filter routing updates has its own set of pros and cons. These methods are described in the sections below.
For an example of routing protocol redistribution, let's see what it looks like to redistribute RIP routes into OSPF:
First, enter the OSPF routing configuration process.
Router(config)# router ospf 100
As you can see, you can redistribute the RIP routes into OSPF with just the single command redistribute rip, or you can specify the metrics to be used, include subnetted networks, and use a route-map to control the routes to be sent. We will redistribute rip with the bare minimum configuration.
Router(config-router)# redistribute rip ?
  metric       Metric for redistributed routes
  metric-type  OSPF/IS-IS exterior metric type for redistributed routes
  route-map    Route map reference
  subnets      Consider subnets for redistribution into OSPF
  tag          Set tag for routes redistributed into OSPF
  <cr>



Router(config-router)# redistribute rip
% Only classful networks will be redistributed
The IOS tells you that, as you didn't specify that you wanted subnetted networks to be redistributed, only classful networks will be sent from RIP to OSPF. If you ask for help after the redistribute command, under the OSPF process, you can see that there are many different types of routes and routing protocols that can be redistributed into OSPF.
Router(config-router)# redistribute ?
  bgp          Border Gateway Protocol (BGP)
  connected    Connected
  egp          Exterior Gateway Protocol (EGP)
  eigrp        Enhanced Interior Gateway Routing Protocol (EIGRP)
  igrp         Interior Gateway Routing Protocol (IGRP)
  isis         ISO IS-IS
  iso-igrp     IGRP for OSI networks
  metric       Metric for redistributed routes
  metric-type  OSPF/IS-IS exterior metric type for redistributed routes
  mobile       Mobile routes
  odr          On Demand stub Routes
  ospf         Open Shortest Path First (OSPF)
  rip          Routing Information Protocol (RIP)
  route-map    Route map reference
  static       Static routes
  subnets      Consider subnets for redistribution into OSPF
  tag          Set tag for routes redistributed into OSPF
  <cr>

Router(config-router)#
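A slightly fuller form of the same redistribution, shown as a hedged sketch (the seed metric and metric-type values are arbitrary choices for illustration):

Router(config)# router ospf 100
! Carry subnetted RIP networks as well, seeding them with a metric of 100
! as OSPF external type-2 (E2) routes
Router(config-router)# redistribute rip metric 100 metric-type 2 subnets

Without the subnets keyword only classful networks are redistributed, which is exactly what the warning above is telling you.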

127 Chapter 4: IP Routing Protocols Distribution Lists Distribution Lists are used to filter the contents of inbound or outbound routing protocol updates. Standard IP access lists are used to define a list against which the contents of the routing updates are matched. Remember that the access list is applied to the contents of the update, not to the source or destination of the routing update packets themselves. The distribute-list command is entered at the global or router configuration levels, and there is an option to apply the list to specific interfaces. For any given routing protocol, it is possible to define one interface-specific distribute-list per interface, and one protocol-specific distribute-list for each process/autonomous-system pair. Here is an example: access-list 1 permit 10.0.0.0 0.255.255.255 access-list 2 permit 172.16.3.0 0.0.0.255 router rip distribute-list 1 in ethernet 0 distribute-list 2 out Route-maps Route-maps are used for controlling many different router services. For instance, they are used to control redistribution, set tagging, set metrics, send certain traffic out certain interfaces (called policy routing), and have many other uses. Route-maps use sequence numbers for easy editing and work off of matching something and setting some parameter or taking some action. For instance, say that you were receiving routes from BGP peer 200.200.200.1 and you want to modify route 1.1.1.0/24 such that the metric, for that incoming route only, is equal to 1 (or the equivalent of a static route). Here's how you would do it: First, you need to make an access-list to specify the route that you want to modify with your route-map. Let's call that access-list, access-list 1, for simplicity's sake. Router(config)# access-list 1 permit 1.1.1.0 0.0.0.255 Now we will create the route-map. Specify anything that access-list 1 matches. Router(config)# route-map mymap Router(config-route-map)# match ip address 1 Now, set the metric for the incoming route, that you specified in access-list 1, to an administrative distance (metric) of 1. Router(config-route-map)# set metric 1


Next, inside your BGP configuration, you would configure it to use the route-map that you created.
Router(config)# router bgp 100
Router(config-router)# neighbor 200.200.200.1 route-map mymap in
You are specifying in the BGP neighbor statement that you want to filter incoming (the in parameter) routing updates, from this neighbor, with the route-map named "mymap". To see the route-map that you created, you can do the following:
Router# show route-map
route-map mymap, permit, sequence 10
  Match clauses:
    ip address (access-lists): 1
  Set clauses:
    metric 1
  Policy routing matches: 0 packets, 0 bytes
Router#
As you can see, the IOS automatically created the first sequence number of 10. If you attempt to create another statement, and do not specify the sequence number, you will be dropped back into sequence number 10, the first one that is created. Once a route-map has more than one sequence number, you must specify the sequence number when making additional statements or modifications to the map. Each route-map sequence number can also be a permit or deny statement. By default, they will be permit statements (as you can see from the example above, where we did not specify one and the show output marks our statement as "permit"). Note that route-maps have an implicit deny at the end of them, just like an access-list. Thus, if your route-map does not specify a certain type of traffic, that traffic is denied.
Prefix Lists
Prefix-lists have many uses. They can be used with any routing protocol to filter routes in distribute-lists, used with redistribution from one routing protocol to another, or used in route-maps to match routes. Under BGP, prefix-lists can be used directly, without distribute-lists, to filter inbound or outbound routes. In general, prefix-lists filter routes based on the IP prefix and mask-length combinations that match your list. Prefix-lists use sequence numbers that are used to order and modify them. When an entry in the list is matched, the list processing stops and, just like a regular access-list, there is an implicit deny at the end of the list.
Say that you are receiving routing updates from BGP neighbor 200.200.200.1. There are 3 updates coming from the neighbor. They are 1.1.1.0/24, 2.2.2.0/16,

and 3.3.3.0/28. You want to only permit networks with a /24 mask, or less. In other words, you only want to permit networks that have a mask of /1 through /24. In this case, that would deny the /28 network that is being advertised. To do this you would do the following:
Router(config)# ip prefix-list mylist permit 0.0.0.0/0 le 24
This creates the prefix list named "mylist" to permit all routes that have a subnet mask of 255.255.255.0, or less.
Router(config)# router bgp 100
Router(config-router)# neighbor 200.200.200.1 prefix-list mylist in
You are specifying in the BGP neighbor statement that you want to filter incoming (the in parameter) routing updates, from this neighbor, with the prefix-list named "mylist". If you want to see your prefix-list, you can do the following:
Router# show ip prefix-list
ip prefix-list mylist: 1 entries
   seq 5 permit 0.0.0.0/0 le 24
Router#
As you can see, when we did not specify a sequence number for this prefix list, the first entry defaulted to 5. The IOS will automatically create subsequent entries numbered by 5's (10, 15, 20, etc). Like route-maps, prefix-list statements can also be permit or deny statements.
Filter Lists
Filter-lists are only used with BGP. Filter lists don't actually filter networks in a routing update but, instead, they filter BGP AS paths. As you'll learn below, in the BGP section, BGP uses autonomous system numbers (AS numbers) to identify each network group your router would have to traverse to get to the IP network that is being advertised through BGP. Filter-lists use regular expressions to specify the AS path(s) that will be filtered. Regular expressions are used for many things, so you should be familiar with the basics of them. Here are the meanings of some of the regular expression characters:
Beginning of a string -> ^ (caret)
End of a string -> $ (dollar sign)
Zero or more occurrences of the preceding pattern -> * (asterisk)
A single character in a string -> . (period)
Comma (,), left brace ({), right brace (}), left parenthesis, right parenthesis, the beginning of the input string, the end of the input string, or a space -> _ (underscore)


An excellent source on how to properly create regular expressions can be found in the Cisco documentation under the IOS Terminal Server Configuration Guide, Regular Expression Appendix. Thus, a string that is empty would be ^$ (the beginning and the end, with nothing in the middle).
For example, say that you wanted to allow networks that have any AS path ending in AS number 1234, from neighbor 1.1.1.1, using a filter list. Here is what you would do:
Create the AS path access-list:
Router(config)# ip as-path access-list 1 permit _1234$
Go into BGP configuration and specify (using the neighbor statement) to use the as-path access-list that you just created.
Router(config)# router bgp 100
Router(config-router)# neighbor 1.1.1.1 filter-list 1 in
Note that the filter-list is referencing access-list 1, the ip as-path access-list we just created. Also, note the direction of the filter-list. We are filtering updates that are coming IN from this neighbor.
Routing Information Protocol (RIP) & RIPv2
There are two versions of RIP (versions 1 and 2), both of which are Distance Vector routing protocols. RIPv1 (version 1) is classful and must use Fixed Length Subnet Masks (FLSM); RIPv2 adds additional features such as classless routing, variable length subnet masks (VLSM), and authentication. Both versions use hop count as their only metric and are limited to 15 hops. A hop is simply a single pass through a router. By default, RIP routers send their entire routing table out every interface every 30 seconds. Both versions of RIP operate on UDP port 520. A metric of 1 signifies a network directly connected to the advertising router, and a metric of 16 signifies an unreachable network. The timers for update, invalid, holddown, and flush can be manually configured. RIP uses split-horizon with poison reverse, and triggered updates. The split horizon rule reduces the incidence of routing loops. Split horizon prevents two-node loops between neighbors (tight loops) by not advertising the routes on the same interface from which they were learned. Split horizon also eliminates unnecessary updates. Split horizon with poison reverse allows the routing protocol to advertise all routes out an interface, but those learned from earlier updates coming into that interface are marked with infinite distance metrics.
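The RIP version and the timers mentioned above are both set under the RIP process. A hedged sketch (the network statement is an arbitrary example, and the timer values simply restate the RIP defaults):

Router(config)# router rip
! Send and listen to RIPv2 only
Router(config-router)# version 2
Router(config-router)# network 10.0.0.0
! timers basic <update> <invalid> <holddown> <flush> - shown with default values
Router(config-router)# timers basic 30 180 180 240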

Split Horizon in a Hub and Spoke Network
In a hub and spoke network, routes from remote frame relay sites will not be sent to other remote locations. The only way to ensure full connectivity between all locations in a hub and spoke topology using RIPv2 is to use sub-interfaces, or to disable split horizon on the physical serial interface (both approaches are sketched below). If split horizon is enabled, neither auto-summary nor interface summary addresses (those configured with the ip summary-address rip command) are advertised. The split horizon mechanism blocks information about routes from being advertised by a router out of any interface from which that information originated. Split horizon is enabled on all interfaces by default.
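A hedged sketch of both approaches; the interface numbers, addressing and DLCI are invented for illustration and are not taken from the text:

! Option 1: point-to-point sub-interfaces on the hub, so split horizon never applies
Router(config)# interface serial0
Router(config-if)# encapsulation frame-relay
Router(config)# interface serial0.102 point-to-point
Router(config-subif)# ip address 10.1.102.1 255.255.255.0
Router(config-subif)# frame-relay interface-dlci 102

! Option 2: keep the multipoint physical interface and disable split horizon for IP
Router(config)# interface serial0
Router(config-if)# no ip split-horizon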

The ip summary-address rip command causes the router to summarize a given set of routes learned via RIP version 2 or redistributed into RIP version 2. Host routes are especially applicable for summarization. To configure IP summary addressing, use the following commands:
Step 1 - Command: Router(config)# interface ethernet1
         Purpose: Enters interface configuration mode.
Step 2 - Command: Router(config-if)# ip summary-address rip ip_address ip_network_mask
         Purpose: Specifies the IP address and network mask that identify the routes to be summarized.

Configuration Example:
router rip
 network 10.0.0.0
 exit
interface ethernet1
 ip address 10.1.1.1 255.255.255.0
 ip summary-address rip 10.2.0.0 255.255.0.0
 no ip split-horizon
 exit
By default, RIP version 2 summarizes networks automatically. However, the ip summary-address configuration statement takes precedence over automatic network summarization. RIPv2 (version 2) sends the network mask information in its updates. This feature allows RIPv2 to handle discontiguous networks and VLSM. When routes are redistributed into RIP, the default metric applied to the route is 16. Because RIP (both version 1 and version 2) uses hop count as the metric,


the routes will be viewed as unreachable. However, when connected routes are redistributed into RIP, the default seed metric is 0. RIP doesn't require an AS or process ID number. By default, when RIP is configured on a Cisco router it only sends RIPv1 information, but listens to both RIPv1 and RIPv2. Use the version x command under router rip to manipulate this behavior. By default, RIPv2 updates are sent via multicast to address 224.0.0.9.
Basic Configuration Example:
Router(config)# router rip
Router(config-router)# network 10.0.0.0
Interior Gateway Routing Protocol (IGRP)
IGRP is a Cisco proprietary, distance-vector routing protocol designed for routing in an autonomous system with diverse bandwidth and delay characteristics. It uses a combination of user-configurable metrics, including bandwidth, delay, reliability, load and MTU. A good way to remember "Bandwidth, Delay, Reliability, Load and MTU" is "Bob Doesn't Really Like Me". By default the composite metric is built primarily from bandwidth and delay; hop count is carried only to limit the diameter of the network. The default maximum diameter is 100 hops, but this can be changed to up to 255.
There are three types of IGRP routes:
Interior routes - Routes between subnets in the network attached to a router interface. If the network is not subnetted, IGRP does not advertise interior routes.
System routes - Routes to networks within an autonomous system. These routes are derived from directly connected network interfaces and system route information provided by other IGRP-speaking routers. These routes do not include any subnet information.
Exterior routes - Routes to networks outside the autonomous system that are considered when identifying a gateway of last resort.

By default, a router running IGRP sends an update broadcast every 90 seconds. It declares a route "possibly down" if it does not receive an update from the first router in the route within three update periods (270 seconds). After seven update periods (630 seconds), the route is removed from the routing table. If traffic destined for a network learned through IGRP is on a link that is "possibly down", it will be forwarded, even though it might not be successful in reaching its destination. IGRP has several ways to speed convergence:

Flash update - The sending of an update sooner than the standard periodic update interval to notify other routers of a metric change.
Poison reverse - Sent to remove a route and place it in holddown, which keeps new routing information from being used for a certain period of time.
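The chapter gives no IGRP configuration example, so here is a minimal hedged sketch (the AS number and networks are arbitrary; like RIP, IGRP just needs a process and classful network statements):

Router(config)# router igrp 100
Router(config-router)# network 10.0.0.0
Router(config-router)# network 172.16.0.0

Routers exchange IGRP updates only when they are configured with the same autonomous system number.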

Another thing to know about IGRP (and later, EIGRP) is that for IGRP and EIGRP to propagate a "gateway of last resort", the route specified by the ip default-network command must already be known as a network. This means the network must be IGRP- or EIGRP-derived, or a static route redistributed from another routing protocol.
Open Shortest Path First (OSPF)
OSPF is a Link State routing protocol that uses Dijkstra's shortest path first (SPF) algorithm. OSPF is an open standard (OSPF version 2 is defined in RFC 2328) and is often used in multi-vendor environments. Several of OSPF's advantages include fast convergence, classless routing, VLSM support, authentication support, support for much larger inter-networks, the use of areas to minimize routing protocol traffic, and a hierarchical design.
Other OSPF Features:
Equal cost load balancing
Multicast routing updates
Route tagging for tagging of external routing information
Classless behavior, which allows the use of discontiguous networks

OSPF Traffic Types:
Intra-area - Traffic passed between routers within a single area.
Inter-area - Traffic passed between routers in different areas.
External - Traffic passed between an OSPF router and a router in another autonomous system.

OSPF Area Types:
Backbone (transit area) - Must be labeled area "0"; it accepts all LSAs (see below for LSA type definitions) and is used to connect multiple areas. All other areas must connect directly to this area in order to exchange route information. The only exception to this is the use of virtual-links, which allow you to tunnel through an adjacent area to get to area "0". When interconnecting multiple areas, the backbone area is the core area to which all other areas must connect. If you only have one OSPF area, then you can use an area number other than "0".
Standard - Accepts internal, external and summary LSAs.


Stub - Refers to an area that does not accept Type-5 LSAs to learn of external ASs. If routers need to route to networks outside the autonomous system, they must use a default route. The ABR automatically sends this default route as an IA 0.0.0.0 0.0.0.0 route. Because this is a type 3 LSA (summary), the ABR does not send it to areas outside of the stub network. Type 3 LSAs (network summary) from other areas are still allowed into a stub area. Only type 1, 2, and 3 LSAs will be allowed inside of a stub area.

The command to configure a stub area is area X stub.
Not-so-stubby area - Also known as NSSA. It is the same as a stub area except that it accepts LSA Type 7. This is useful if you want to accept redistributed routes from another routing protocol (at your ASBR). Once these routes leave the NSSA, through the ABR, they are converted to Type 5. Type 7 LSAs can only exist in an NSSA. If the P-bit in the LSA is set by the ASBR, the ABR will translate the LSA from type 7 to type 5 and send it outside the area; if the P-bit is zero, the LSA is not translated (see more on the P-bit in the LSA Options Field section, below). In an NSSA, the ABR does not send a default route automatically. To send a default route, you would use area X nssa default-information originate. This would send a default route as type "N2". The command to configure an NSSA area is area X nssa. Some other variants of the NSSA commands are:
area X nssa no-summary - This configures an NSSA totally stubby area. This would be configured on NSSA ABRs only. It automatically sends a default route into the NSSA totally stubby area as an "IA" route / type 3 summary LSA.
area X nssa no-redistribute - You would configure this on an NSSA ASBR that is also an ABR. This prevents the NSSA ASBR/ABR from sending type 7 LSAs into the NSSA.
area X nssa translate type7 suppress-fa - This command would be configured on an NSSA ASBR or NSSA ABR. These are the only options for the command. This would translate a type 7 LSA into a type 5 LSA, thus suppressing the forwarding address of the type 7 LSA.

Totally Stubby - All LSAs except Type 1 and 2 (and the single Type 3 default) are blocked. Intra-area routes and the default route (sent as an IA route) are the only routes passed within a totally stubby area. This is Cisco proprietary. The command to configure a totally stubby area is area X stub no-summary.

Stub and Totally Stubby Area Similarities:
You cannot control the exit point from the area. If there is more than one default route, the router will use the best metric to choose the exit point.
All routers within the stub area must be configured as stub routers (E-bit = 0 in their hellos). If not, they cannot form adjacencies with the other stub routers.
A stub area cannot be used as a transit area for virtual links.
An ASBR cannot be internal to a stub area.
Inter-area routing depends on a default route. This default route is generated automatically as an IA route.
The backbone area cannot be a stub area.
Typically used in a hub and spoke topology, with the spokes being remote sites configured as stub or totally stubby areas.

Stub and Totally Stubby Area Differences:
Totally stubby areas have smaller routing tables, since the only inter-area route they receive is the default IA route.
Neither will accept Type-5 (autonomous system entries). Totally stubby goes even further by not accepting Summary LSAs (Type-3 and Type-4).
Totally stubby is Cisco proprietary and Stub is an OSPF standard.

OSPF Peer Relationships: OSPF enabled routers send hello packets out all OSPF configured interfaces. This is used to discover neighbors and determine if they are alive. In order for peer relationships to form, the OSPF hello packet information must be consistent on all routers in an area. A hello packet contains the following information: Area ID Router ID Address mask of the originating interface Hello/Dead Interval Authentication Information Router Priority DR and BDR info Router IDs of its direct OSPF neighbors

This hello packet information must be consistent before an adjacency can be formed. OSPF routers can have one of three neighbor relationships: Designated Router (DR), Backup Designated Router (BDR), or neither. For neither, the router neighbor relationship will show as 2WAY/DROTHER. All OSPF routers must have a unique router ID. The router ID is the highest IP address on any of its loopback interfaces. If the router doesn't have any loopback interfaces, then it chooses the highest IP address on any of its enabled


interfaces. The interface doesn't have to have OSPF enabled on it. Loopback interfaces are often used because they are always active and there is usually more leeway in their address assignment. Designated Routers (DRs) and Backup Designated Routers (BDRs) are elected on broadcast and non-broadcast multi-access networks, such as Ethernet broadcast domains. You can control the selection of DRs through the use of the ip ospf priority command; the highest priority wins, and a setting of "0" makes the router ineligible to become DR. If a router joins the network with a priority somewhere between the existing DR and BDR, the network does not recalculate until the DR fails; then the BDR becomes the DR, and the new router will become BDR. OSPF contains five network types: point-to-point, broadcast, non-broadcast multi-access (NBMA), point-to-multipoint, and virtual-links. Changes in the OSPF network topology are represented as changes in one or more of the OSPF Link State Advertisements (LSAs). Flooding is the process by which changed or new LSAs are sent throughout the network, and it is used to ensure that the link-state database of every OSPF router stays identical. This flooding makes use of two OSPF packet types: Link State Update packets (type 4), and Link State Acknowledgement packets (type 5). When an OSPF router performs a new SPF calculation, the existing routing table is saved and used as a baseline for changes made to the network topology. When any SPF calculation is made, the calculating router places itself at the root of the SPF routing tree. The Dijkstra shortest path algorithm is run two times. The first time deals with routers and the second always deals with networks. When the Shortest Path First (SPF) algorithm is computed by an OSPF router, the previous routing table is saved before the calculation, and is available for use if there are problems with the new routing table. It then invalidates the present routing table and performs the calculation with itself as the root of the SPF tree. Each OSPF LSA has an age, which indicates whether the LSA is still valid. Once the LSA reaches the maximum age (one hour), it is discarded. During the aging process, the originating router sends a refresh packet every 30 minutes to refresh the LSA. Refresh packets are sent to keep the LSA from expiring, whether there has been a change in the network topology or not. Checksumming is performed on all LSAs every 10 minutes. The router keeps track of LSAs it generates and LSAs it receives from other routers.

137 Chapter 4: IP Routing Protocols The router refreshes the LSAs it generated; it ages the LSAs it received from other routers.

Adjacency formation:
1. Down - initial state of a neighbor conversation.
2. Attempt - indicates that an attempt should be made to contact the neighbor.
3. Init - a hello packet has been received from the neighbor.
4. 2-Way - communication between the two routers is bidirectional.
5. ExStart - first step in creating an adjacency between the two neighboring routers.
6. Exchange - the router is sending database description packets to the neighbor.
7. Loading - link state request packets are sent to the neighbor.
8. Full - the neighboring routers are fully adjacent.
Router Types:
Internal Router (LSA Type 1 or 2) - Routers that have all their interfaces in the same area. They have identical link-state databases and run single copies of the routing algorithm.
Backbone Routers (LSA Type 1 or 2) - Routers that have at least one interface connected to area 0. An internal router whose interfaces all belong to area 0 is also a backbone router.
Area Border Router (LSA Type 3 or 4) - Routers that have interfaces attached to multiple areas. They maintain separate link-state databases for each area. This may require the router to have more memory and CPU power. These routers act as gateways for inter-area traffic. They must have at least one interface in the backbone area, unless a virtual link is configured. These routers will often summarize routes from other areas into the backbone area.
Autonomous System Boundary Router (LSA Type 5 or 7) - Routers that have at least one interface into an external network, such as a non-OSPF network. These routers can redistribute non-OSPF network information to and from an OSPF network. Redistribution into an NSSA area creates a special type of link-state advertisement (LSA) known as type 7. This router will be running another routing protocol beside OSPF, such as EIGRP, IGRP, RIP, IS-IS, etc.

Area 0 - This is the core area for OSPF. One of the basic rules of OSPF is that all areas must connect to area 0 (just as all roads lead to Rome). If there is an area that is not contiguous with area "0", your only option is to use a virtual-link (a configuration sketch follows below). This will provide a tunnel through another area in order to make it appear that the area is directly connected to area 0. ABRs are responsible for maintaining the routing information between areas. Internal routers receive all routes from the ABR except for those routes that are contained within the internal area.
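A virtual link is configured on both ABRs that share the transit area. A hedged sketch (the process ID, area numbers and router IDs are invented for illustration; the argument to area X virtual-link is the remote ABR's router ID, not an interface address):

! On the ABR that touches area 0, using area 1 as the transit area
RouterA(config)# router ospf 1
RouterA(config-router)# area 1 virtual-link 2.2.2.2

! On the remote ABR (router ID 2.2.2.2), pointing back at router ID 1.1.1.1
RouterB(config)# router ospf 1
RouterB(config-router)# area 1 virtual-link 1.1.1.1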


Traffic destined for networks outside of the AS must traverse Area 0 to an Autonomous System Border Router (ASBR). The ASBR is responsible for handling the routing between OSPF and another AS using another routing protocol such as EIGRP. Remember the difference between:
ABR = Area Border Router - connects areas together
ASBR = Autonomous System Border Router - connects ASs together

LSA Types:
Router link entry - Type 1 LSA. Flooded only within a specific area. Contains all the default link state information. Generated by each router for each area to which it belongs. It describes the state of the router's links to the area. The link status and cost are two of the descriptors provided.
Network entry - Type 2 LSA. Multicast to all area routers in a multi-access network by the DR. They describe the set of routers attached to a particular network and are flooded only within the area that contains the network.
Summary entry - Type 3 and 4 LSAs. LSA types 3 and 4 are summary link advertisements generated by ABRs. Type 3 LSAs carry route information for the internal networks and are sent to the backbone routers. Type 3 LSAs are the result of type 1 and type 2 summaries that are created by the area range command. Type 4 LSAs have information about ASBRs. This information is broadcast by the ABR, and it will reach all backbone routers. Type 4 LSAs are ASBR summary LSAs. Type 4 LSAs are only sent by ABRs, and only in two cases:
1. There is an ASBR that the ABR needs to tell the backbone area about.
2. There is a legacy router incapable of demand circuits.
These last two are indicator LSAs and are sent out only by an ABR putting itself in the ASBR position, but it is still not an ASBR. An ASBR would not be responsible for reporting either of these situations.
Autonomous system entry - This is a Type 5 or 7 LSA. It comes from the ASBR and has information relating to the external networks. Type 5 LSAs are AS External LSAs. Type 7 LSAs are only found and flooded in NSSA areas. When an LSA type 7 is generated, only a partial SPF calculation needs to be performed.

LSA Options Field:
The LSA options field is found inside OSPF hello packets. There are six bits in the options field, but we will cover the most important three. They are:
E-Bit - When the E-bit is cleared (set to 0), Type-5 external LSAs are not sent into OSPF stub and NSSA areas. On an OSPF router's interface that is in a stub or NSSA area, the hello packets sent from that interface have the E-bit clear. This tells other routers that this router is configured for a stub or NSSA area. An

adjacency will not form unless both routers agree on the E-bit. The E-bit reflects an associated area's External Routing Capability. AS external link advertisements are not flooded into or through OSPF stub areas. The E-bit ensures all members of a stub area agree on the area configuration.
N-Bit - If the N-bit is on, then the E-bit must be off. The N-bit tells other routers that this router WILL send and receive Type 7 LSAs on that interface. An adjacency will not form unless both routers agree on the N-bit.
P-Bit - The P-bit, when set, tells the NSSA border router (the ABR) to translate type 7 LSAs into type 5 LSAs. The default state of the P-bit is off.

You do not need to configure a router as an ABR or ASBR. The router automatically becomes an ABR or ASBR if its interfaces belong to multiple areas or, in the case of an ASBR, where the router connects to another AS.
OSPF Summarization
When routes are summarized, routes with longer masks are consolidated into routes (or a route) with a shorter mask. Thus, the summary (less specific) route, with the shorter mask, can be a pointer to the routes with the more specific masks. Best practices indicate that you should summarize routes as they are sent back to the core of the network. This makes core network routing more efficient. When using OSPF, two types of summarization can be performed. They are:
Inter-area summarization
This type of summarization is done between areas (on the ABR, or Area Border Router) using the following command:
area {area-id} range address mask
This would be used to summarize more specific networks into less specific networks. The area range command summarizes routes on the boundary between two OSPF areas. The information to be summarized is contained in two types of LSAs: Type 1 and Type 2.
Type 1 LSAs are Router LSAs and are generated by each router in an OSPF network.
Type 2 LSAs are network LSAs, and are generated by the DR.

Both Type 1 and Type 2 LSAs are flooded within the originating area only. Only when the information needs to be conveyed to another area in a summarized form is the area range command used, which acts on the information provided by these two LSAs.
External summarization


This type of summarization is done when bringing routes in from another routing protocol. For instance, say that you are redistributing routes from RIP into OSPF. On the router where the routes are being redistributed (the ASBR, or Autonomous System Boundary Router), the following command would be used:
summary-address ip-address mask
This command would summarize the routes received from RIP (through redistribution) into whatever less-specific OSPF summary route you specify.
OSPF Metrics
Every routing protocol has a metric used to prefer one route over another. For OSPF, the metric that is used is cost. With OSPF, the cost is a number that is inversely proportional to the bandwidth of the link. In other words, the higher the cost, the LESS the link is preferred; the lower the cost, the MORE the link is preferred. By default, OSPF load balances across up to four equal-cost paths. The formula that OSPF uses to calculate the cost of a link is:
Cost = 100,000,000 / bandwidth of the link (in bps), or Cost = 10^8 / bandwidth of the link
For example, a 10Mb 10Base-T Ethernet link's cost would be calculated as:

Cost = 100,000,000 / 10,000,000 = 10

or

Cost = 10^8 / 10^7 = 10


With this formula, the cost of a 64k Frame Relay link would be 1562 and the default cost of a T-1 would be 64. So you may be asking, "what about a 100Mb Ethernet link or a Gigabit Ethernet link?" The cost of a 100Mb Ethernet link, or anything faster, when calculated with this formula, ends up being just 1. Note that the 10^8 in the numerator is the same as the bandwidth of 100Mb Ethernet, or 100,000,000 bps. This value is the default "reference bandwidth". It can be changed, which recalculates all OSPF cost values on that router, with the auto-cost reference-bandwidth command under the OSPF process.

To manually change the cost of a link, you would use the following command on the interface that you wish to change:
ip ospf cost {new cost}
OSPF prefers intra-area paths over inter-area paths.
Passive OSPF Interface
With a passive-interface, no hello packets are sent and therefore an adjacency will never occur with this interface.
OSPF Multicast Addresses
224.0.0.5 is the all-OSPF-routers multicast address.
224.0.0.6 is the Designated Routers multicast address.

Default Routes
An OSPF router will need a default route itself before injecting a default route into an area, unless the keyword always is used in the configuration. For example, default-information originate always.
OSPF Timers
Default timers for a broadcast network (LAN) are: Hello 10 seconds, Dead 40 seconds.
Default timers for an NBMA network (Frame Relay) are: Hello 30 seconds, Dead 120 seconds.
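Both items above are a single command each; a hedged sketch (the process ID and interface are arbitrary, and the timer values simply restate the LAN defaults):

Router(config)# router ospf 1
! Advertise a default route into OSPF even if this router has no default route itself
Router(config-router)# default-information originate always

Router(config)# interface ethernet0
! Hello and dead intervals must match between neighbors on the segment
Router(config-if)# ip ospf hello-interval 10
Router(config-if)# ip ospf dead-interval 40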

OSPF Redistribution
Routes redistributed into OSPF are considered to be external routes. External LSAs are type 5 LSAs. Not-so-stubby areas (NSSAs) allow external routes to be advertised into the OSPF network while retaining the characteristics of a stub area. To do this, the ASBR (the one doing the redistributing) in the NSSA will originate a type 7 LSA to advertise the external destinations.
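A hedged sketch of that arrangement (the process ID, area number and the choice of RIP as the external source are illustrative assumptions):

! On the NSSA ASBR that is redistributing RIP
Router(config)# router ospf 1
Router(config-router)# area 2 nssa
Router(config-router)# redistribute rip subnets
! The redistributed routes appear inside area 2 as type 7 (N1/N2) routes;
! the NSSA ABR translates them to type 5 before flooding them to other areas

Every router in area 2, including the ABR, must also be configured with area 2 nssa, or adjacencies will not form.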

Basic OSPF Configuration:
1. Enable global configuration mode
2. Enable the OSPF process
3. Identify the IP networks and their areas
4. Enable OSPF on the router:


Router(config)# router ospf process-id
Example: Router(config)# router ospf 1
Identify which IP networks on the router are part of the OSPF network:
Router(config-router)# network address wildcard-mask area area-id
Example: Router(config-router)# network 192.168.1.0 0.0.0.255 area 0
or
Example: Router(config-router)# network 192.168.1.1 0.0.0.0 area 0
(0.0.0.0 wild-card mask to place a specific interface into OSPF area 0)
Configuring Stub and Totally Stubby Areas:
Configure a stub network:
Router(config-router)# area area-id stub
Configuring a Totally Stubby Network (ABR only):
Router(config-router)# area area-id stub no-summary

OSPF Authentication OSPF offers two types of authentication to secure routing updates: text and MD5 encryption. OSPF authentication types are set, usually per link, but can also be specified for an entire area. To configure OSPF authentication on an interface you would do: Router(config-if)# ip ospf authentication-key key -> simple text authentication ip ospf authentication message-digest-key key-id md5 key

143 Chapter 4: IP Routing Protocols -> MD5 authentication You can also specify the authentication type that is required over a particular area: Router(config-router)# area area-id authentication - enables simple text authentication for that area area area-id authentication message-digest -> enables MD5 auth. for that area Enhanced Interior Gateway Routing Protocol (EIGRP) EIGRP is a Cisco proprietary protocol that combines the attributes of a Link State and a Distance Vector routing protocol. It is considered a 'hybrid' routing protocol. EIGRP was released as an enhancement to Cisco's other proprietary routing protocol, IGRP. EIGRP supports automatic route summarization, VLSM addressing, multicast updates, non-periodic updates, unequal-cost load balancing, and independent support for IPX and AppleTalk. EIGRP added many features to overcome the limitations of IGRP: The Diffusing Update Algorithm (DUAL) Loop-free networks Incremental updates instead of periodic (only send changes as they occur) Knowledge about neighbors as opposed to the entire network Independent Support for IP, IPX and AppleTalk Classless routing Efficient summarization of networks Efficient use of link bandwidth for routing updates Authentication EIGRP uses the same metrics as IGRP

EIGRP sends hello packets every 5 seconds on high bandwidth links, like PPP and HDLC leased lines, Ethernet, TR, FDDI and Frame Relay point-to-point and ATM. It sends hello's every 60 seconds on low bandwidth multipoint links, like FR multipoint and ATM multipoint links. EIGRP reliable packets are: Update, Query and Reply. EIGRP unreliable packets are: Hello and Ack.
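A minimal EIGRP configuration, shown as a hedged sketch (the AS number, networks and wildcard mask are arbitrary examples):

Router(config)# router eigrp 100
Router(config-router)# network 10.0.0.0
! A wildcard mask can limit EIGRP to specific interfaces
Router(config-router)# network 172.16.1.0 0.0.0.255
! Disable automatic summarization at classful boundaries
Router(config-router)# no auto-summary

As with IGRP, neighbors must be configured with the same autonomous system number to exchange routes.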


Updates are always transmitted reliably. Updates convey reachability of destinations. On discovery of a new neighbor, update packets are sent so the neighbor can build its topology table. These update packets are unicast. In other cases, such as a link cost change, updates are multicast. Both queries and replies are transmitted reliably. When destinations go into active state, queries and replies are sent. Queries are always multicast unless they are sent in response to a received query. In this case, a reply is unicast back to the successor that originated the query. Replies are always sent in response to queries to indicate to the originator that it does not need to go into active state because it has feasible successors. Replies are unicast to the originator of the query. Types of EIGRP Successors SuccessorA route selected as the primary route to reach a destination network specified by the Feasibility Condition. Successors are entries kept in the routing table. Feasible SuccessorA backup route to a specified network. Multiple feasible successors for a destination network can be retained in a topology table. Thus when a route goes down the entire routing table does not have to be recomputed.

Feasibility Condition
When the receiving router already has a Feasible Distance (FD) to a specified network and it receives an update from a neighbor with a lower advertised or Reported Distance (RD) to that network, the Feasibility Condition is met. The neighbor then becomes a Feasible Successor (FS) for that route because it is closer to the destination network. In a meshed network environment, there can be a number of Feasible Successors. The RD advertised by a neighbor for a specified network must always be less than the local router's FD for that network. In this way EIGRP avoids routing loops, and it is why paths whose RD is not less than the FD are never installed as feasible successors.
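A small worked example (the metrics are invented to illustrate the check): suppose the local router's best path to network X has an FD of 2,195,456. Neighbor A advertises X with an RD of 2,169,856; since 2,169,856 is less than 2,195,456, the feasibility condition is met and A is installed as a feasible successor. Neighbor B advertises X with an RD of 2,707,456; since that is not less than the FD, B cannot be a feasible successor, because EIGRP cannot prove that B's path is loop free.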

Attributes of EIGRP
- Neighbor Discovery/Recovery: Routers learn of the other routers on their directly attached networks dynamically, by sending hello packets. A router is assumed to be present by its neighbor through the hello packets it sends.
- DUAL (Diffusing Update Algorithm): Tracks all the routes advertised by all neighbors. DUAL uses the metrics to select the most efficient path, and it selects the routes to be inserted into the routing table based on feasible successors.
- Protocol Dependent Modules: These are individually responsible for IP, IPX, and AppleTalk. The IPX EIGRP module is responsible for sending and receiving EIGRP packets that are encapsulated in IPX. The Apple EIGRP module is responsible for AppleTalk packets. The IP EIGRP module is responsible for IP packets. They route like strangers in the night, except they don't even exchange glances.

EIGRP Tables
- Neighbor table: Lists all of the router's immediately adjacent EIGRP neighbors.
- Topology table: This table is maintained by the protocol dependent modules and is used by DUAL. It holds all the destination networks advertised by the neighboring routers.
- Routing table: EIGRP chooses the best routes to a destination network from the topology table and places these routes in the routing table. The routing table contains:
  - How the route was discovered
  - Destination network address and subnet mask
  - Metric (the cost of the route from this router)
  - Next hop address
  - Route age
  - Outbound interface

Choosing Routes
DUAL selects primary and backup routes using the composite metric and guarantees that the selected routes are loop free. The primary routes are then placed in the routing table; the rest (up to six) are stored in the topology table as feasible successors. EIGRP uses the same composite metric as IGRP to determine the best path. The criteria used are:
- Bandwidth: The smallest bandwidth between source and destination
- Delay: Cumulative interface delay along the path
- Reliability: Worst reliability between source and destination, based on keepalives
- Load: Worst utilization on any link between source and destination
- MTU: The smallest Maximum Transmission Unit along the path

The default for EIGRP is to use only bandwidth and delay when calculating the metric. EIGRP uses the following scaled values to determine the total metric to the network:

EIGRP Metric = 256 * [K1*Bw + (K2*Bw)/(256 - Load) + K3*Delay] * [K5/(Reliability + K4)]

The final [K5/(Reliability + K4)] term is applied only when K5 is non-zero; with the default K5 = 0 it is treated as 1.

Chapter 4: IP Routing Protocols

146

The default values for K are: K1 = 1, K2 = 0, K3 = 1, K4 = 0, K5 = 0. With these defaults, the formula simplifies to: Metric = 256 * (Bandwidth + Delay), using the scaled bandwidth and delay values.
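As a worked illustration with the default K values (the link numbers below are assumed, not from the text): for a path whose slowest link is a T1 (1544 kbps) and whose cumulative delay is 20,000 microseconds, the scaled bandwidth is 10,000,000 / 1544 = 6476 and the scaled delay is 20,000 / 10 = 2000, so the metric is 256 * (6476 + 2000) = 2,169,856.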

After two routers become neighbors, each will send routing updates (and other packets) to the other using a reliable multicast scheme. For example, assume that router 1 has a series of packets, such as a routing table update, which must be transmitted to routers 2, 3, and 4. Router 1 will send the first packet to the EIGRP multicast address, 224.0.0.10, and then will wait for an acknowledgment from each of its neighbors on its Ethernet interface (in this case, routers 2, 3, and 4). Assume that routers 2 and 4 answer the multicast packet, but router 3 does not. Router 1 will wait until the multicast flow timer expires on the Ethernet interface, then send out a special packet, a sequence TLV, telling router 3 not to listen to any further multicast packets from router 1. Router 1 will then continue transmitting the remainder of the update packets as multicast to all other routers on the network; the packets router 3 missed are retransmitted to it as unicast. The sequence TLV indicates an out-of-sequence multicast packet. Those routers not listed in the packet enter Conditional Receive (CR) mode and continue listening to multicast. While there are routers in this mode, the Conditional Receive bit will be set in multicast packets. In this case, router 1 will send out a sequence TLV with router 3 listed, so routers 2 and 4 will continue listening to further multicast updates.

If a router receives an update packet with the init flag set, it implies that this packet is the first after a new neighbor relationship has been established. Clearing the IP EIGRP neighbor relationship causes the EIGRP neighbor relationship to be restarted.

Init Flag
There is a flags field in the EIGRP header; the rightmost (least significant) bit is the init bit. When the init bit is set (0x00000001), the enclosed route entries are treated as the first in a new neighbor relationship. Note that route entries are carried in update packets, not hello packets. This debug output shows the sequence number increasing only with the update packet:

Router# debug eigrp packet
EIGRP: Sending HELLO on Ethernet0/1

  AS 666, Flags 0x0, Seq 0, Ack 0
EIGRP: Sending HELLO on Ethernet0/1
  AS 666, Flags 0x0, Seq 0, Ack 0
EIGRP: Sending HELLO on Ethernet0/1
  AS 666, Flags 0x0, Seq 0, Ack 0
EIGRP: Received UPDATE on Ethernet0/1 from 10.23.23.23,
  AS 666, Flags 0x1, Seq 1, Ack 0
EIGRP: Sending HELLO/ACK on Ethernet0/1 to 10.23.23.23,
  AS 666, Flags 0x0, Seq 0, Ack 1
EIGRP: Sending HELLO/ACK on Ethernet0/1 to 10.23.23.23,
  AS 666, Flags 0x0, Seq 0, Ack 1
EIGRP: Received UPDATE on Ethernet0/1 from 10.23.23.23,
  AS 666, Flags 0x0, Seq 2, Ack 0

EIGRP Stub Routing
Stub routing is frequently used in a hub and spoke network topology. The EIGRP Stub Routing feature is designed to improve network stability, reduce resource utilization, and simplify stub router configuration. In a hub and spoke network, one or more end (stub) networks are connected to a remote router (the spoke) that is connected to one or more distribution routers (the hub). The remote router is adjacent to one or more distribution routers, and the only route for IP traffic to follow to reach the remote router is through a distribution router. This configuration is often used in WAN topologies where the distribution router is directly connected to a WAN. The distribution router can also be connected to many more remote routers, often 100 or more.

In a hub and spoke network topology, the remote router must forward all nonlocal traffic to a distribution router, so the remote router does not need to hold a complete routing table. Generally, the distribution router needs to send nothing more than a default route to the remote router. When you use EIGRP Stub Routing, you configure both the distribution and remote routers to use EIGRP, but configure only the remote router as a stub. Only the specified route types are propagated from the remote (stub) router. The stub router responds with the message "inaccessible" to queries for summaries, connected routes, redistributed static routes, external routes, and internal routes. A router configured as a stub will send a special peer information packet to all neighboring routers to report its status as a stub router. Neighbors receiving a stub status packet will not query the stub router for any routes, and a router that has a stub peer will not query that peer.

Chapter 4: IP Routing Protocols

148

The stub router will depend on the distribution router to send the proper updates to all peers.
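A hedged configuration sketch for the spoke router (the AS number and keyword choices are illustrative):

Remote (spoke) router:
Router(config)# router eigrp 100
Router(config-router)# network 10.0.0.0
Router(config-router)# eigrp stub connected summary

The distribution (hub) router needs no special stub command; it learns the stub status from the peer information packet and stops querying that neighbor.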

Simple Hub and Spoke Network
[Figure: remote router (spoke) connects through a distribution router (hub) to the corporate network and the Internet]
Figure 4-1. Simple hub and spoke network

The stub feature on its own does not prevent routes from being advertised to a remote router. In Figure 4-1, the remote router can access the corporate network and the Internet through the distribution router only. A full route table on the remote router would serve no functional purpose; it would only increase the amount of memory required by the remote router. Route summarization and filtering on the distribution router can further conserve bandwidth and memory, since the remote router doesn't need to receive routes that have already been learned from other networks. If a true stub network is desired, the distribution router should be configured to send only a default route to the remote router.

The EIGRP Stub Routing feature allows a system administrator to prevent queries from being sent to the remote router: an EIGRP router will not query a stub neighbor about any route. In most cases, the system administrator will still need to configure summarization on the distribution routers, because the EIGRP Stub Routing feature does not automatically enable summarization there. Conversely, problems can arise if the stub feature is not used, even when filtered or summarized routes are sent from the distribution router to the remote router. If a route is lost somewhere in the corporate network, EIGRP could send a query to the distribution router, which in turn will send a query to the remote router even if routes are being summarized. A problem communicating over the WAN link between the distribution router and the remote router may then cause an

EIGRP stuck in active (SIA) condition to occur and cause instability elsewhere in the network.

Route Summary
Route summarization is the best way to reduce the number of routes within the routing table. To optimize the network, route summarization should take place at the distribution layer of a three-tiered network design. Proper planning is important to ensure that enough IP address space is allocated at each distribution router, so all remote locations can be summarized into one single network route.

Auto-Summarization
EIGRP will perform auto-summarization of routes, summarizing each time it crosses a border between two different major networks. The command to disable EIGRP's default summarization of addresses at classful network boundaries is no auto-summary.

Process ID for an Autonomous System
The process ID is the number following the router eigrp command. The process ID denotes the Autonomous System (AS) of the network that the router is in. The process ID can be any number between 1 and 65535; a process ID of 0 is not allowed. The process ID can be chosen arbitrarily, as long as it is the same on all routers that are to share EIGRP routing information.
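Putting these pieces together, a hedged sketch (the AS number and addresses are illustrative, not from the text):

Router(config)# router eigrp 100
Router(config-router)# network 172.16.0.0
Router(config-router)# no auto-summary
Router(config-router)# exit
Router(config)# interface Serial0
Router(config-if)# ip summary-address eigrp 100 172.16.0.0 255.255.252.0

The no auto-summary command stops summarization at the classful boundary, while the interface-level ip summary-address eigrp command advertises a manually chosen summary out that interface (and installs a matching route to Null0 locally).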

EIGRP checks the AS number on neighboring routers and will only form a neighbor relationship with routers in the same AS. Since EIGRP always sources its packets from the interface's primary address, you should configure all routers on a particular subnet with primary addresses that belong to the same subnet. Routers do not form EIGRP neighbors over secondary networks. If the primary IP addresses do not agree for all routers, there can be problems with neighbor adjacencies.

Show IP Route EIGRP
An important point to remember with EIGRP is that very old routes are to be expected in a healthy network. Since updates only occur when there is a change, change is bad. Like fine wines, EIGRP routes should be seasoned by time. Here is sample output from the show ip route command on an EIGRP network:

r2#show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP


       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
       U - per-user static route, o - ODR

Gateway of last resort is not set

     172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
D       172.16.0.0/16 is a summary, 4d06h, Null0
C       172.16.1.0/24 is directly connected, TokenRing0
C    192.168.4.0/24 is directly connected, Loopback3
C    192.168.5.0/24 is directly connected, Loopback4
D    10.0.0.0/8 [90/832000] via 172.16.1.3, 4d06h, TokenRing0
C    192.168.1.0/24 is directly connected, Loopback0
C    192.168.2.0/24 is directly connected, Loopback1
C    192.168.3.0/24 is directly connected, Loopback2
r2#

Notice that several routes have a designation of "4d06h", which means those routes are over four days old. Short aging periods in an EIGRP network indicate change, and should be monitored carefully.

Show Ip Eigrp Topology
show ip eigrp topology [autonomous-system-number | [[ip-address] mask]] [active | all-links | pending | summary | zero-successors]

Router# show ip eigrp topology
IP-EIGRP Topology Table for process 77
Codes: P - Passive, A - Active, U - Update, Q - Query, R - Reply, r - Reply status
P 10.16.90.0 255.255.255.0, 2 successors, FD is 0
    via 10.16.80.28 (46251776/46226176), Ethernet0
    via 10.16.81.28 (46251776/46226176), Ethernet1
    via 10.16.80.31 (46277376/46251776), Serial0
P 10.16.81.0 255.255.255.0, 1 successors, FD is 307200
    via Connected, Ethernet1
    via 10.16.81.28 (307200/281600), Ethernet1
    via 10.16.80.28 (307200/281600), Ethernet0
    via 10.16.80.31 (332800/307200), Serial0

Table 4-1. show ip eigrp topology Field Descriptions

Codes - State of this topology table entry. Passive and Active refer to the EIGRP state with respect to this destination; Update, Query, and Reply refer to the type of packet that is being sent.
P (Passive) - No EIGRP computations are being performed for this destination.
A (Active) - EIGRP computations are being performed for this destination.
U (Update) - Indicates that an update packet was sent to this destination.
Q (Query) - Indicates that a query packet was sent to this destination.
R (Reply) - Indicates that a reply packet was sent to this destination.
r (Reply status) - Flag that is set after the software has sent a query and is waiting for a reply.
10.16.90.0 - Destination IP network number.
255.255.255.0 - Destination subnet mask.
successors - Number of successors. This number corresponds to the number of next hops in the IP routing table. If "successors" is capitalized, then the route or next hop is in a transition state.
FD - Feasible distance. The feasible distance is the best metric to reach the destination, or the best metric that was known when the route went active. This value is used in the feasibility condition check. If the reported distance of the router (the metric after the slash) is less than the feasible distance, the feasibility condition is met and that path is a feasible successor. Once the software determines it has a feasible successor, it need not send a query for that destination.
via - IP address of the peer that told the software about this destination. The first n of these entries, where n is the number of successors, are the current successors. The remaining entries on the list are feasible successors.
(46251776/46226176) - The first number is the EIGRP metric that represents the cost to the destination. The second number is the EIGRP metric that this peer advertised.
Ethernet0, Serial0 - Interface from which this information was learned.

Show Ip Eigrp Neighbor

router# show ip eigrp neighbor
IP-EIGRP neighbors for process 1
H   Address      Interface   Hold Uptime    SRTT   RTO  Q    Seq Type
                             (sec)          (ms)        Cnt  Num
1   10.1.1.2     Et1         13   12:00:53  12     300  0    620
0   10.1.2.2     S0          174  12:00:56  17     200  0    645

rp-2514aa# show ip eigrp neighbor
IP-EIGRP neighbors for process 1
H   Address      Interface   Hold Uptime    SRTT   RTO  Q    Seq Type
                             (sec)          (ms)        Cnt  Num
1   10.1.1.2     Et1         12   12:00:55  12     300  0    620
0   10.1.2.2     S0          173  12:00:57  17     200  0    645

rp-2514aa# show ip eigrp neighbor
IP-EIGRP neighbors for process 1
H   Address      Interface   Hold Uptime    SRTT   RTO  Q    Seq Type
                             (sec)          (ms)        Cnt  Num
1   10.1.1.2     Et1         11   12:00:56  12     300  0    620
0   10.1.2.2     S0          172  12:00:58  17     200  0    645

The Hold column value in the command output should always be less than the hold time, and should always be greater than the hold time minus the hello interval (unless you are losing hello packets). If the Hold column value stays between 10 and 15 seconds, the hello interval is 5 seconds and the hold time is 15 seconds. If the Hold column has a wider range, between 120 and 180 seconds, the hello interval is 60 seconds and the hold time is 180 seconds.

If the numbers do not seem to fit one of the default timer settings, check the interface in question on the neighboring router; the hello and hold timers may have been configured manually. Remember that EIGRP sends hello packets

every 5 seconds on high bandwidth links using multicast hellos, and every 60 seconds on low bandwidth multipoint links using unicast hellos.

Border Gateway Protocol (BGP)
BGP version 4 is a path vector routing protocol used to exchange routing information between autonomous systems, and can be considered the routing protocol of the Internet. BGP is used to exchange routing information for the Internet and is the protocol used between Internet service providers (ISPs). BGP carries information as a sequence of AS numbers, which indicate the autonomous systems that must be traversed to get to a destination network. BGP is defined in RFCs 1163, 1267, and 1771. BGP is considered an Exterior Gateway Protocol (EGP) (not to be confused with the obsolete routing protocol also called "EGP"). BGP is designed to prevent loops from forming between systems. There are both internal and external BGP (IBGP and EBGP) configurations. Organizational networks, such as universities and corporations, usually employ an Interior Gateway Protocol (IGP) such as RIP or OSPF for the exchange of routing information within their networks. These networks connect to ISPs, and ISPs use BGP to exchange customer and ISP routes. When BGP is used between autonomous systems (AS), the protocol is referred to as External BGP (EBGP). If a service provider is using BGP to exchange routes within an AS, then the protocol is referred to as Interior BGP (IBGP). BGP neighbors are defined in the configuration, not by their physical location in the network. Even if two routers are physically connected, they are not necessarily neighbors unless they form a TCP connection, which is configured by the network engineer.

BGP's effective use of classless inter-domain routing (CIDR) has been a major factor in slowing the explosive growth of the Internet routing table. CIDR doesn't rely on classes of IP networks such as Class A, B, and C. In CIDR, a prefix and a mask, such as 197.32.0.0/14, represent a network. This would normally be considered an illegal Class C network, but CIDR handles it just fine. A network is called a supernet when the prefix length contains fewer bits than the network's natural mask.

Situations that may require BGP:
- Extremely large networks
- A network that is connected to more than one AS
- Networks that are connected to two or more Internet Service Providers
- When you have a unique routing policy that requires it
- If you are an ISP

Interior Border Gateway Protocol (IBGP)
- Exchanges information within the same AS between routers


- Is more flexible, scalable, and efficient for controlling the exchange of information within an AS
- Shows a consistent view of the AS to external neighbors

Exterior Border Gateway Protocol (EBGP)
- Used when routers belong to different ASs and need to exchange external updates

The BGP synchronization rule: "If an AS provides transit service to another AS, then BGP should not advertise the route until all of the routers within this AS have learned the route through the IGP."

BGP Attributes
- Weight (Cisco-specific attribute)
- Local preference
- Multi-exit discriminator (MED)
- Origin (mandatory attribute)
- AS_path (mandatory attribute)
- Next hop (mandatory attribute)
- Community
- Cluster-List
- Originator ID

The mandatory attributes MUST be included in updates propagated to all peers (both IBGP and EBGP).

Weight Attribute
The weight attribute (Cisco specific) is local to a router. The weight attribute is not advertised to neighboring routers. If the router learns about more than one route to the same destination, the route with the highest weight will be preferred. Since weight is Cisco specific, it is technically NOT a BGP attribute. In Figure 4-2, Router A is receiving an advertisement for network 172.16.1.0 from routers B and C. When Router A receives the advertisement from Router B, the associated weight is set to 50. When Router A receives the advertisement from Router C, the associated weight is set to 100. Both paths for network 172.16.1.0 will be in the BGP routing table, with their respective weights.

The route with the highest weight will be installed in the IP routing table.
Figure 4-2. Weight attribute

Local Preference Attribute
This attribute is used to prefer an exit point from the local autonomous system (AS). Unlike the weight attribute, the local preference attribute is propagated throughout the local AS. If there are multiple exit points from the AS, the local preference attribute is used to select the exit point for a specific route. In Figure 4-3, AS 100 is receiving two advertisements for network 172.16.1.0 from AS 200. When Router A receives the advertisement for network 172.16.1.0, the corresponding local preference is set to 50. When Router B receives the advertisement for network 172.16.1.0, the corresponding local preference is set to 100. These local preference values will be exchanged between routers A and B. Because Router B has a higher local preference than Router A, Router B will be used as the exit point from AS 100 to reach network 172.16.1.0 in AS 200. The local preference attribute is NOT propagated outside the AS.


Figure 4-3. Local preference attribute

Multi-Exit Discriminator Attribute
The MED, or metric, attribute is used as a suggestion to an external AS regarding the preferred route into the AS that is advertising the MED. The word "suggestion" is used because the external AS receiving the MEDs may be using other BGP attributes for route selection. In Figure 4-4, Router C is advertising the route 172.16.1.0 with a metric of 10, while Router D is advertising 172.16.1.0 with a metric of 5. The lower metric value is preferred, so AS 100 will select the route through Router D to reach network 172.16.1.0 in AS 200.

Unlike local preference, the MED is exchanged between autonomous systems; the receiving AS carries it to its internal peers but does not pass it on to any further AS.

Figure 4-4. Multi-exit discriminator attribute

Confusion sometimes exists between the two Border Gateway Protocol (BGP) configuration commands bgp deterministic-med and bgp always-compare-med.

Enabling the bgp deterministic-med command ensures comparison of the MED when choosing among routes advertised by different peers in the same AS. Enabling the bgp always-compare-med command ensures comparison of the MED for paths from neighbors in different ASs.

The bgp always-compare-med command can be useful when multiple service providers or organizations agree on a uniform policy for setting MED. Thus, for network X, if Internet Service Provider A (ISP A) sets the MED to 10 and ISP B sets the MED to 20, both ISPs agree that ISP A has the better performing path to X. Whenever BGP receives multiple routes to a particular destination, it lists them in the reverse order that they were received, from the newest to the oldest. BGP then compares the routes in pairs, starting with the newest entry and moving toward the oldest entry (starting at the top of the list and moving down). For example, entry1 and entry2 are compared; the better of these two is then compared to entry3, and so on. The bgp deterministic-med command reorders the entries by neighbor AS.

Origin Attribute
The origin attribute is used for route selection, and can have one of three values, indicating how BGP learned about a particular route:
- IGP: The route is interior to the originating AS. This value is set when the network router configuration command is used to inject the route into BGP.
- EGP: The route was learned via the Exterior Gateway Protocol (EGP).
- Incomplete: The route origin is unknown or was learned in some other way. An incomplete origin attribute occurs when a route is redistributed into BGP.

AS_path Attribute
A route advertisement passing through an AS has the number of that AS added to an ordered list of AS numbers traversed by the route advertisement. Figure 4-5 shows a route passing through three different autonomous systems.


Figure 4-5. AS_path attribute

AS 1 originates the route to 172.16.1.0 and advertises this route to AS 2 and AS 3 with the AS_path attribute equal to {1}. AS 3 will advertise the route back to AS 1 with AS_path attribute {3, 1}, and AS 2 will advertise it back to AS 1 with AS_path attribute {2, 1}. AS 1 will reject these routes when it detects its own AS number in the route advertisement.

The AS_path attribute is the mechanism that BGP uses to detect routing loops. AS 2 and AS 3 propagate the route to each other with their AS numbers added to the AS_path attribute. These routes will not be installed in the IP routing table because AS 2 and AS 3 are learning a route to 172.16.1.0 from AS 1 which has a shorter AS_path list. Next-Hop Attribute The next-hop attribute in EBGP is the IP address to reach the advertising router. For EBGP peers, the next-hop address is the IP address of the connection between the peers. For IBGP, the EBGP next-hop address is carried into the local AS, as illustrated in Figure 4-6:

Figure 4-6. Next-hop attribute

Router C advertises network 172.16.1.0 with a next hop of 10.1.1.1. When Router A propagates this route within its own AS, the EBGP next-hop information is preserved. If Router B does not have routing information regarding the next hop, the route will be discarded.

Note that it is important to have an IGP running in the AS to propagate next-hop routing information.

Community Attribute
This attribute is a way of grouping destinations, called communities, to which routing decisions (such as acceptance, preference, and redistribution) can be applied. Route maps are used to set the community attribute. The predefined community attributes are:
- no-export: Don't advertise this route to EBGP peers.
- no-advertise: Don't advertise this route to any peer.
- internet: Advertise this route to the Internet community; all routers in the network belong to it.

Cluster-List
The cluster-list is a sequence of cluster IDs that the route has passed. It is an optional, nontransitive BGP attribute. When a route reflector reflects a route from its clients to nonclient peers, and vice versa, it appends the local cluster ID to the cluster list. If the cluster list is empty, a new cluster list is created.

Using the cluster-list attribute, a route reflector can identify whether routing information has looped back to the same cluster due to misconfiguration. If the local cluster ID is found in the cluster list, the advertisement is ignored.


Originator ID
Originator ID is an optional, nontransitive BGP attribute. It is a 4-byte attribute created by a route reflector. The attribute carries the router ID of the originator of the route in the local autonomous system. Therefore, if a misconfiguration causes routing information to come back to the originator, the information is ignored.

BGP Neighbor Connectivity
Specific neighbor commands must be entered to create BGP neighbors. Routers are considered to be neighbors or peers whenever they open a TCP session between each other to exchange routing information. When routers communicate for the first time, they exchange their entire routing tables. From then on, they send only incremental updates. Keepalive messages are periodically sent between BGP peers to maintain the connection. BGP maintains a table version number to track the instance of the BGP routing table. BGP uses TCP as its transport protocol, via port 179. The use of TCP ensures reliability.

Internal BGP (IBGP) neighbors don't need to be directly connected. However, they do require IP connectivity via an Interior Gateway Protocol (IGP) such as OSPF, and all IBGP neighbors must be configured as peers of each other (fully meshed). The number of connections needed for any fully meshed configuration can be found by the formula N(N-1)/2. External BGP (EBGP) neighbors normally need to be directly connected; however, Cisco provides the ebgp-multihop router configuration command to override this behavior. A BGP router receiving a BGP routing update from an EBGP neighbor propagates the update to all IBGP neighbors. The same is not true for routing updates received via an IBGP neighbor, since these updates are not passed on to other IBGP peers. This is why IBGP speakers need to be configured in a full mesh. For EBGP peers, you should use directly connected interfaces as the peering IP addresses; if you do not, then EBGP multihop must be used. Multihop is used only in EBGP, not IBGP. A loopback interface should be used when configuring IBGP peers, since this interface is always up. For IBGP, the peering IP address does not have to be directly connected; it only needs to be reachable via the IGP.
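A hedged peering sketch (the AS numbers, interfaces, and addresses are invented for illustration):

IBGP peering between loopbacks inside AS 65001:
Router(config)# router bgp 65001
Router(config-router)# neighbor 10.1.1.1 remote-as 65001
Router(config-router)# neighbor 10.1.1.1 update-source Loopback0

EBGP peering to AS 65002 when the peer address is not directly connected:
Router(config-router)# neighbor 192.0.2.1 remote-as 65002
Router(config-router)# neighbor 192.0.2.1 ebgp-multihop 2

The update-source option makes the TCP session originate from the loopback address, and ebgp-multihop relaxes the usual one-hop limit for the EBGP session.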

Any time you make changes to the BGP configuration on a router, the affected BGP neighbor connections must be reset. Use the Cisco IOS command clear ip bgp * to perform this task, and use the show ip bgp command to view your BGP table. Note that you should never issue clear ip bgp * on a production network, as this will cause network downtime.

Synchronization/Full Mesh
In order to avoid routing loops inside an AS, BGP doesn't advertise routes learned from one IBGP peer to other IBGP peers. Therefore, one must maintain a full IBGP mesh within an AS or utilize other techniques such as route reflectors. BGP routing information must be in sync with the Interior Gateway Protocol (IGP), such as OSPF, before advertising transit routes to other ASs. This behavior can be turned off using the Cisco IOS command no synchronization. However, this isn't recommended unless all the routers in your BGP AS are running BGP and are fully meshed, or the AS in question isn't a transit AS. The careless use of the no synchronization command could cause non-BGP routers within an autonomous system to receive traffic for destinations that they don't have a route for. With synchronization enabled, BGP waits until the IGP has propagated routing information across the autonomous system before advertising transit routes to other ASs. By default, synchronization is enabled on all BGP routers.

Next-Hop-Self Command
In a non-meshed environment where you know that a path exists from the current router to a specific address, the BGP router command neighbor {ip-address | peer-group-name} next-hop-self can be used to disable next-hop processing. This will cause the current router to advertise itself as the next hop for the specified neighbor, simplifying the network. Other BGP neighbors will then forward packets for that destination to the current router. This would not be useful in a fully meshed environment, since it will result in unnecessary extra hops where there may be a more direct path.

Private AS Numbers
AS numbers from 64512 through 65535 are private AS numbers. These numbers are very similar in fashion to the RFC 1918 IP addresses 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. These AS numbers aren't used anywhere in the core BGP route tables; they are used to keep the demand for public AS numbers down. Smaller BGP users will often use private AS numbers and then have them translated to public AS numbers by routers upstream toward the core of the Internet. Many of the larger ISPs may have multiple public AS numbers, while smaller ISPs will usually have only one.

BGP Path Selection
BGP will select one path as the best path. This path is put into the BGP routing table and then propagated to its neighbors. The criteria for selecting the path to a destination are:
- If the path specifies a next hop that is not accessible, the update is dropped.
- The path with the largest weight is preferred.
- If the weights are the same, the path with the larger local preference is preferred.


- If the local preferences are the same, then prefer the path that originated on this router.
- If no route originated on this router, then prefer the one with the shortest AS_path.
- If the AS_path lengths are the same, then prefer the path with the lowest origin type (IGP is lower than EGP, which is lower than Incomplete).
- If the origin codes are the same, then prefer the path with the lowest MED.
- If the MEDs are the same, then prefer an external path over an internal path.
- If these are the same, then prefer the path through the closest IGP neighbor.
- Lastly, prefer the path with the lowest IP address, as specified by the BGP router ID. If a loopback is configured, its address is used as the router ID.

Scalability Problems with Internal BGP (IBGP)
Autonomous systems consisting of hundreds of routers can create management problems for system administrators. Remember that IBGP must be fully meshed unless you utilize other techniques. This will require BGP neighbor statements to and from every IBGP router in a given AS. Internal ASs with more than three or four IBGP routers are often configured with confederations or route reflectors to make the configuration and management tasks easier.

Peer Groups
Several BGP routers that share the same update policies can be grouped into a peer group to simplify configuration and to make updating more efficient (a configuration sketch follows the list below). The power of this function will be obvious the first time you need to configure hundreds of routers and type the same commands over, and over, and over again. The real benefit of specifying a BGP peer group is the reduced system resources (CPU and memory) used in update generation; it also simplifies the BGP configuration. The load on system resources is reduced by allowing the routing table to be checked only once and updates to be replicated to all peer group members instead of individually for each peer. Depending on the number of peer group members, the number of prefixes in the table, and the number of prefixes advertised, the load reduction can be significant. You should group together peers with identical outbound announcement policies. The members of a peer group will inherit changes made to the peer group, simplifying updates. Peer group members inherit the following:
- Remote-as (if configured)
- Version

- Update-source
- Out-route-map
- Out-filter-list
- Out-dist-list
- Minimum-advertisement-interval
- Next-hop-self
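A hedged peer-group sketch (the group name, AS number, and addresses are invented):

Router(config)# router bgp 65001
Router(config-router)# neighbor IBGP-PEERS peer-group
Router(config-router)# neighbor IBGP-PEERS remote-as 65001
Router(config-router)# neighbor IBGP-PEERS update-source Loopback0
Router(config-router)# neighbor IBGP-PEERS next-hop-self
Router(config-router)# neighbor 10.1.1.1 peer-group IBGP-PEERS
Router(config-router)# neighbor 10.1.1.2 peer-group IBGP-PEERS

Each neighbor assigned to the group inherits the group's settings, so adding another IBGP peer becomes a single line.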

Confederations
Confederations eliminate the need to fully mesh BGP communications in a given AS by splitting a single AS into sub-ASs and using EBGP between them. The sub-ASs will usually use private AS numbers. In most BGP environments it is too cumbersome to have all the BGP routers peered with each other. To routers outside the confederation, the group of sub-ASs looks like a single AS.

Route Reflectors
Route reflectors can also reduce the number of BGP peering statements: some of the IBGP routers are configured as route reflectors, and the route reflector clients peer only with the route reflectors, not with each other. This setup can greatly reduce the number of BGP peering configurations required in an AS.

Route Summary
BGP supports aggregation of specific routes into one route using the aggregate-address address mask command. Aggregation applies to routes that exist in the BGP routing table. This is in contrast to the network command, which applies to routes that exist in the IP routing table. Aggregation can be performed if one or more of the specific routes of the aggregate address exist in the BGP routing table. To use a specific example, a router will advertise the aggregate 172.16.0.0/22 as long as at least one of the more specific 172.16.x.x routes actually exists in the BGP table. Usually aggregate addresses are advertised in addition to the more specific subnets, but a suppress map can be used to filter the more specific routes, advertising only the 172.16.0.0/22 route. When the aggregate-address command is used within BGP routing, the aggregated address is advertised along with the more specific routes. The exception to this rule is the use of the summary-only keyword, which announces only the summarized route and suppresses the more specific routes. Using the as-set argument creates an aggregate address carrying a mathematical set of the AS numbers traversed. This as-set summarizes the AS_path attributes of all of the


individual routes. This can be useful to avoid routing loops while aggregating routes. Unless the summary-only keyword is used along with as-set, the summary route is advertised together with the more specific routes. The as-set information is preserved because it is important in avoiding loops: it maintains an indication of where the route has been.
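A hedged sketch of the 172.16.0.0/22 example above (the AS number is assumed):

Router(config)# router bgp 65001
Router(config-router)# aggregate-address 172.16.0.0 255.255.252.0 summary-only as-set

With summary-only, only the /22 aggregate is advertised and the more specific 172.16.x.x routes are suppressed; dropping the keyword would advertise both. The as-set keyword preserves the combined AS_path information in the aggregate.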

BGP Clusters
You can cluster BGP Route Reflectors to provide redundancy. This prevents the failure of a single router from bringing down your IBGP domain.

Policy Routing
Policy routing is a means of managing routes and path selection with manually configured rules. It makes routing decisions based on parameters such as the source address, or the source and destination address together, rather than the destination address alone. Policy routing can be used to manipulate traffic inside an AS or between ASs. Policy routing has many of the same drawbacks as static routing. Route filtering and attribute manipulation are the basis of implementing BGP policies.

Route Maps
Route maps are used by BGP to control and modify routing information and to define the conditions under which routes are redistributed between routers and routing processes.

No Export
The no-export BGP community attribute means that the route can be advertised to other IBGP peers, but not to EBGP peers. If the no-advertise BGP community attribute were used instead, the route would not be advertised to either IBGP or EBGP peers.

Route Dampening
A network connected to an AS with flapping routes (routes that go up and down) can often cause problems, as the BGP routers must continuously update their routing tables. Route dampening is used to control this route instability. Dampening classifies routes as "well-behaved" or "ill-behaved" depending on their past reliability, and penalties are assigned each time a route flaps. When a set penalty (the suppress limit) is reached, BGP suppresses the route until it is well behaved and trusted again. There is no penalty limit at which a route is permanently barred from returning. Route dampening is not enabled by default.

Backdoor
The backdoor feature makes an IGP-learned route the preferred route. Use the network address backdoor router configuration command to specify a backdoor route to a BGP border router that will provide better information about the network. Use the no form of this command to remove an address from the list.

Enabling BGP Routing
To enable BGP routing and establish a BGP routing process, use the following commands beginning in global configuration mode:

Step 1:
Router(config)# router bgp as-number
Enables a BGP routing process, which places the router in router configuration mode.

Step 2:
Router(config-router)# network network-number [mask network-mask] [route-map route-map-name]

Flags a network as local to this autonomous system and enters it into the BGP table.

Controlling BGP Routes
BGP routes can be controlled in ways similar to other routing protocols. BGP can use distribute lists, prefix lists, filter lists, and route maps to control routing updates. Having multiple options is good, but, as with any configuration with many options, it can become complex. Almost all BGP route control is applied via the neighbor command, like this:

Router(config)# router bgp 1
Router(config-router)# neighbor IP-Address distribute-list list-number in|out
Router(config-router)# neighbor IP-Address prefix-list list-name in|out
Router(config-router)# neighbor IP-Address filter-list AS-path-access-list in|out
Router(config-router)# neighbor IP-Address route-map map-name in|out

Each of these has its particular uses. We covered each of these in more detail in the Controlling Routing Update section, earlier in the chapter.
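A minimal end-to-end sketch combining the two steps above with one of the filtering options (all numbers and names below are illustrative):

Router(config)# ip prefix-list CUSTOMER-IN seq 5 permit 198.51.100.0/24
Router(config)# router bgp 65001
Router(config-router)# network 172.16.0.0 mask 255.255.0.0
Router(config-router)# neighbor 192.0.2.1 remote-as 65002
Router(config-router)# neighbor 192.0.2.1 prefix-list CUSTOMER-IN in

Here the network statement injects 172.16.0.0/16 into BGP (the route must exist in the IP routing table), and the inbound prefix list accepts only the one prefix from the EBGP neighbor.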


Policy-Based Routing
Policy routing provides a mechanism to mark packets so that certain kinds of traffic receive priority service when used in combination with queuing techniques enabled through Cisco IOS software. These queuing techniques offer a powerful and flexible tool for system administrators who implement routing policies for their networks.

Policy-Based Routing Benefits
- Source-Based Transit Provider Selection: Service providers and other organizations can use policy-based routing to send traffic originating from different user groups through different Internet connections across the policy routers.
- Quality of Service (QoS): Organizations can provide QoS to differentiated traffic by setting precedence or type of service (ToS) values in IP packet headers at the network periphery and using queuing mechanisms to prioritize traffic in the backbone or network core.
- Load Sharing: The dynamic load sharing provided by destination-based routing can be extended to include policies that use traffic characteristics to distribute traffic among multiple paths.
- Cost Savings: Organizations can lower costs by distributing interactive and batch traffic between low-cost, low-bandwidth permanent paths and high-cost, high-bandwidth switched paths.

Data Forwarding Using Policy-Based Routing
Routers forward packets to their destination addresses using information from static routes or dynamic routing protocols such as Routing Information Protocol (RIP), Open Shortest Path First (OSPF), or Enhanced Interior Gateway Routing Protocol (EIGRP). PBR complements existing routing protocols by offering more flexibility in routing packets through routers. PBR is a mechanism for expressing and implementing the forwarding and routing of data packets using policies defined by system administrators. Instead of routing solely by destination address, PBR allows system administrators to determine and implement routing policies to allow or deny paths based on the following:
- Identity of the end system
- Application
- Protocol
- Packet size

Policies can be defined as simply as "my network will only carry traffic from the engineering department" or as complex as "traffic originating within my network with the following characteristics will take path A, while all other traffic will take path B."

Tagging Network Traffic
The system administrator can classify traffic under PBR using access control lists (ACLs), which provide packet-filtering capability, and then set IP precedence or ToS values so as to tag packets with the defined classification. PBR traffic classification allows the system administrator to identify traffic for different classes of service at the network perimeter and then to implement a QoS defined for each class of service in the network core. This avoids the need to classify traffic explicitly at each WAN interface in the core or backbone network.

Applying Policy-Based Routing
All incoming packets received on a PBR-enabled interface are considered for PBR. The router passes the packets through enhanced packet filters called route maps. Packets are routed to the next hop using criteria defined in a route map.

Policy Route Maps
Policy routing is specified on the interface that receives the packets, not on the interface from which the packets are sent. Route map statement entries contain a combination of match and set clauses. Match clauses define whether packets meet particular criteria; set clauses then define how packets meeting those criteria will be routed. Within a route map statement, a packet must meet all of the match clauses for the set clauses to be applied, and a single route map statement can contain multiple combinations of match and set commands. Route map statements can be marked as permit or deny. If the statement is marked as deny, packets meeting the match criteria are sent back through the normal forwarding channels (destination-based routing). If the statement is marked as permit and the packets meet all match criteria, the set clauses are applied. If a statement is marked as permit and the packets do not meet all match criteria, then those packets are also forwarded via normal destination-based routing.
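A hedged sketch of a policy route map (the ACL number, map name, addresses, and next hop are invented):

Router(config)# access-list 101 permit ip 10.1.1.0 0.0.0.255 any
Router(config)# route-map ENGINEERING permit 10
Router(config-route-map)# match ip address 101
Router(config-route-map)# set ip next-hop 192.0.2.1
Router(config)# interface Ethernet0
Router(config-if)# ip policy route-map ENGINEERING

Traffic sourced from 10.1.1.0/24 arriving on Ethernet0 is handed to the 192.0.2.1 next hop; everything else falls through to normal destination-based routing.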


Match Clauses Define the Criteria
You can use either standard or extended IP access lists to set match criteria. Standard IP ACLs can be used to specify match criteria based on source address; extended ACLs can be used to specify match criteria by application, protocol type, ToS, or precedence.

The match clause can also match on packet length between specified minimum and maximum values. You can use packet length as a criterion to distinguish between interactive and bulk traffic (given that bulk traffic generally has larger packet sizes). The policy routing process goes through the route map until a match is found. If no match is found in the route map, or the matching route map entry is a deny instead of a permit, normal destination-based routing of the traffic applies. There is an implicit deny at the end of the list of match statements.

Set commands are used in conjunction with each other. The next hop router specified in the set clauses must be adjacent to the policy router, and share the same subnetwork with the policy router. Remember that a packet which does not meet any defined match criteria will be routed through the normal destination-based routing process. If you do not want

to revert to normal forwarding and instead want to drop any packets that do not match the specified criteria, then you should use the set clause to specify interface Null0 as the last interface in the list.

Source-Sensitive and Equal-Access Routing
With policy routing, the system administrator can support equal-access and source-sensitive routing. In Figure 4-7, Organization X has directed that traffic from address range A go through ISP 1 and traffic from address range B go through ISP 2.

Figure 4-7. Source-sensitive routing Quality of Service (QoS) Policy routing allows tagging of packets. System administrators can use this ability to classify the network traffic at the network perimeter for various classes of service, and then implement those classes of service in the network core using priority, custom or weighted fair queuing (Figure 4-8). This improves network performance by eliminating the need to classify the traffic explicitly at each WAN interface in the core or backbone network.

Figure 4-8. Type of service prioritization Load Sharing System administrators have the ability to use policy routing to distribute traffic among multiple paths according to traffic characteristics. Remember that other dynamic load-sharing capabilities are offered by destination-based routing.


Management Implications
Configured routing policies may differ from routes determined by routing protocols, so packets can take different routes depending on source, length, and content. Packet forwarding using configured policies overrides packet forwarding using routing table entries for the same destination. For example, traffic would use a path defined by configured policies, even though a management application might discover a path found through a dynamic routing protocol or specified by a static route. Note also that the traceroute command may report a path that is different from the route that user application traffic actually takes. For example, in Figure 4-9, the best path between X and Y is through the T1 line, but policy routing can be used to send some traffic over the Frame Relay link.

Figure 4-9. Best path and configured path

The ability to route traffic on user-defined paths rather than on paths determined by routing protocols may make the environment more difficult to manage and can cause routing loops. To keep the environment simple and manageable, a good guideline is to define policies in a deterministic manner whenever you can.

PBR Summary
PBR gives you more control over routing by extending the existing mechanisms provided by routing protocols. PBR allows you to set IP precedence. It also allows you to specify traffic paths, such as prioritizing traffic that flows over a high-cost link. You can set up PBR to route packets using configured policies. For example, you can implement routing policies to allow or deny paths depending on the identity of a particular end system, an application protocol, or a packet size. PBR allows you to do the following:
- Classify traffic according to extended access list criteria. Access lists establish the match criteria.

- Set IP precedence bits. This gives the network the ability to accommodate differentiated classes of service.
- Route packets to specific traffic-engineered paths. You might need this to support a specific QoS through the network.

You can create policies defined by IP address, port number, protocol, or packet size. For a simple policy, you can use a single attribute; for a more complex policy, you can use combinations of them or all of them.

Route Filtering
Route filtering should be used at the topological edge of a network to prevent false routing information from being accepted. You might consider filtering routing information coming from remote sites back to a central location (data center), or filtering routing information coming from any open areas in the network, such as lab networks. Two types of route filtering are:
- IP prefix lists
- OSPF ABR Type 3 LSA filtering (uses area filter lists and IP prefix lists)

IP Prefix-list
A prefix list entry is made up of an IP address and a mask length, and is configured as a permit or a deny. The ip prefix-list command is used for IP prefix filtering. The IP address can be a classful network, a subnet, or a single host route, and the mask length is a number from 1 to 32. Prefix lists match either an exact prefix length or a prefix length range. The ge and le keywords are used to specify length ranges to match. A prefix list entry is processed as an exact match when neither the ge nor the le keyword is entered. If only the ge value is entered, the range runs from the ge value up through 32 bits. If only the le value is entered, the range runs from the value entered for the network/length argument up to the le value. If both the ge length and le length keywords and arguments are entered, the range falls between the two values:

network/length < ge-length <= le-length <= 32

Either a name or a sequence number can be used to configure a prefix list; one or the other must be entered when configuring this command. Prefix lists are evaluated starting with the lowest sequence number, and a match is made on the longest, most specific prefix. The first successful match is processed for a given prefix; once a match is made, the permit or deny statement is applied and the remainder of the list is not evaluated.


If a sequence number is not entered in a prefix list, the first entry defaults to sequence number 5 and subsequent prefix list entries increment by 5. If a sequence number is entered for the first prefix list entry but not for subsequent entries, the subsequent entries are also incremented by 5 (if the first configured sequence number is 3, then subsequent entries will be 8, 13, 18, and so on). Default sequence numbers can be suppressed by entering the no form of this command with the seq keyword. Prefix lists are applied to inbound or outbound updates for a specific peer by entering the neighbor prefix-list command. Prefix list information and counters are displayed in the output of the show ip prefix-list command, and prefix-list counters can be reset by entering the clear ip prefix-list command.

ip prefix-list list-name [seq number] {deny network/length | permit network/length} [ge length] [le length]

- list-name: Configures the prefix-list name.
- seq number: (Optional) Applies a sequence number to a prefix-list entry. The range of sequence numbers that can be entered is from 1 to 4294967294.
- deny: Denies access for a matching condition.
- permit: Permits access for a matching condition.
- network/length: Configures the network address and the length of the network mask in bits. The network number can be any valid IP address or prefix. The mask length can be a number from 0 to 32.
- ge length: (Optional) Applies the ge-value to the range specified. The length argument is the lesser ("from") value of the range, the minimum prefix length to be matched.
- le length: (Optional) Applies the le-value to the range specified. The length argument is the greater ("to") value of the range, the maximum prefix length to be matched.
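A hedged example tying the syntax together (the list name and the neighbor it is applied to are invented):

Router(config)# ip prefix-list RFC1918 seq 5 deny 10.0.0.0/8 le 32
Router(config)# ip prefix-list RFC1918 seq 10 deny 172.16.0.0/12 le 32
Router(config)# ip prefix-list RFC1918 seq 15 deny 192.168.0.0/16 le 32
Router(config)# ip prefix-list RFC1918 seq 20 permit 0.0.0.0/0 le 32
Router(config)# router bgp 65001
Router(config-router)# neighbor 192.0.2.1 prefix-list RFC1918 in

The first three entries match the private ranges and any longer prefixes within them (le 32), the last entry permits everything else, and the list is applied inbound on a BGP neighbor with the neighbor prefix-list command mentioned above.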

OSPF ABR Type 3 LSA Filtering Open Shortest Path First (OSPF) ABR Type 3 LSA Filtering allows only those packets with specified prefixes to be sent from one OSPF area to another OSPF area. OSPF ABR Type 3 LSA Filtering extends the ability of an area border router (ABR) running the OSPF protocol to filter type 3 link-state advertisements (LSAs) between different OSPF areas. This area filtering can be applied out of a specific OSPF area, into a specific OSPF area, or into and out of the same OSPF areas, or to combinations of these. OSPF ABR Type 3 LSA Filtering is supported through the area filter-list command in router configuration mode.

Configuring OSPF ABR Type 3 LSA Filtering
To filter inter-area routes into a specified area, use these commands in router configuration mode:

Step 1. Router(config)# router ospf process-id
        Purpose: Configures the router to run an OSPF process.
Step 2. Router(config-router)# area area-id filter-list prefix prefix-list-name in
        Purpose: Configures the router to filter inter-area routes into the specified area.
Step 3. Router(config)# ip prefix-list list-name [seq seq-value] {deny | permit} network/len [ge ge-value] [le le-value]
        Purpose: Creates a prefix list with the name specified for the list-name argument.

To filter inter-area routes out of a specified area, use the following commands in router configuration mode:

Step 1: Router(config)# router ospf process-id
Configures the router to run an OSPF process.

Step 2: Router(config-router)# area area-id filter-list prefix prefix-list-name out
Configures the router to filter inter-area routes out of the specified area.

Step 3: Router(config)# ip prefix-list list-name [seq seq-value] {deny | permit} network/len [ge ge-value] [le le-value]
Creates a prefix list with the name specified for the list-name argument.
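For instance, a minimal sketch of the pieces working together (the process ID, area number, list name, and prefix are arbitrary illustrations, not taken from the text) that stops 172.16.0.0/16 and all of its subnets from being advertised as type 3 LSAs into area 1, while permitting everything else:

router ospf 10
 area 1 filter-list prefix BLOCK-172 in
!
ip prefix-list BLOCK-172 seq 5 deny 172.16.0.0/16 le 32
ip prefix-list BLOCK-172 seq 10 permit 0.0.0.0/0 le 32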

The use of SHOW and DEBUG commands

Show and debug commands are used when troubleshooting any network configuration, and it is important that you understand their output. The best way to learn them is to set up a practice lab and experiment: work through the various show interface commands alongside the protocol-specific show and debug commands and observe how each routing protocol behaves. Does it behave as you expect? Food for thought!

Chapter 4 Questions

4-1. What is the difference between Routing and Routed Protocols?
a) A routed protocol communicates between routers about paths to deliver data to desired destinations. A routing protocol traverses the network passing user data.
b) A routing protocol communicates between switches and routers about paths to deliver data to desired destinations. A routed protocol traverses the network passing user data.
c) A routing protocol communicates between routers about paths to deliver data to desired destinations. A routed protocol traverses the network passing user data.
d) A routed protocol communicates between switches and routers about paths to deliver data to desired destinations. A routing protocol traverses the network passing user data.

4-2. Which of the following is not true of BGP "route dampening"?
a) Penalty is exponentially decayed according to parameters such as half-life-time.
b) A numeric penalty is applied to a route each time it flaps.
c) Route is permanently removed after a configured number of flaps.
d) History of unstable routes is forwarded back to the sender to control future updates.
e) The route is eventually suppressed after the suppress-limit threshold is reached.

4-3. Will the command ip default-network 0.0.0.0 create a valid route in IGRP?
a) Yes, only if you have a static route configured
b) No, IGRP does not understand a default route
c) Yes, you do not need to do anything else
d) No, you need to use a network known to IGRP besides 0.0.0.0

4-4. Which of the following can be used with the RIP Version 2 routing protocol?
a) Authentication password
b) Subnet mask
c) Unlimited hops
d) Metrics including bandwidth, delay, load

4-5. What is the Administrative Distance for RIP?
a) 90
b) 100
c) 110
d) 120


4-6.

The router ospf 10 statement is used to enable OSPF routing on a Cisco router. What is the "10" in this statement? a) Router identifier b) AS identifier c) Process identifier d) Router area

4-7.

Which methods can be used to relieve IBGP of the requirement to maintain fully meshed neighbors?
a) Configure point-to-multipoint networks
b) All neighbors must be fully meshed unless you disable synchronization
c) Route reflectors
d) Confederations
e) All IBGP neighbors must be fully meshed in a default configuration

4-8.

Which of the following protocols has the highest precedence? a) Static routes b) OSPF c) Internal BGP d) External BGP e) EIGRP

4-9.

Which of the following is true about RIPv1?
a) RIP is classful
b) RIP is limited to 15 hops
c) RIP only broadcasts updates when changes are made to the topology
d) RIP broadcasts its routing table every 30 seconds by default

4-10. Which of the following routes could exist in an IP RIP routing table?

a) 192.168.1.0/24
b) 10.1.0.0/16
c) 160.0.0.0/4
d) 10.0.0.0/8

4-11. Which of the following could be used to control update traffic?
a) Passive interfaces
b) Default routes
c) Defined bandwidth for interfaces

d) Static routes
e) Routing update access-lists

f) Redistribution

4-12. Which of the following is part of the EIGRP protocol?
a) Neighbor discovery
b) Full routing table updates of its own networks every 30 seconds
c) Partial updates
d) Variable-length subnet mask
e) Multiple routes held in the topology table
f) Automatic route summarization
g) Protection for DDR links

4-13. Which features of the RIP protocol does the Cisco IOS implement?
a) Poison reverse
b) Split horizons
c) Fixed Length Subnet Masks
d) Variable Length Subnet Masks
e) Holddown

4-14. Which is not true of IGRP?
a) Maximum hop count of 255
b) IGRP packet can carry 104 routes
c) Update interval is 90 seconds
d) Default hop count is 100
e) It can load share up to 6 paths
f) It can support VLSM

4-15. Which of the following is not associated with IP routing?

a) RIP
b) RTMP c) OSPF d) Integrated IS-IS e) Static 4-16. Which is not a Distance Vector routing protocol? a) RIP b) BGP c) OSPF d) IGRP e) IS-IS 4-17. Which is a true statement about Link State routing protocols?


a) Link State routing uses the DUAL algorithm. b) All routers using a Link State protocol will have the same topology map of the network. c) Each router is responsible for learning about its neighbors and their links and flooding this information throughout the network. d) The topology table stores feasible routes and are used as a backup to the routing table. 4-18. Which of the following is not needed in a hello packet in order for OSPF routers to become adjacent? a) Area ID b) Authentication Password c) Hello/Dead Interval Timers d) Stub Area Flag e) All of the above are required 4-19. Which of the following statements concerning Area 0 is not true? a) All areas must be directly connected to Area 0, with the exception of virtual links b) Area 0 is the core of the OSPF network c) Area 0 is not required for a small OSPF network with only 2 areas d) Routers connected to Area 0 and another area are known as ABR's 4-20. Which of the following are key differences between RIP version 1 and RIP version 2? (Choose all that apply) a) RIP version 1 supports authentication while RIP version 2 does not. b) RIP version 2 uses multicasts while RIP version 1 does not. c) RIP version 1 uses hop counts as the metric while RIP version 2 uses bandwidth information. d) RIP version 1 does not support VLSM while RIP version 2 does. e) RIP version 1 is distance vector while RIP version 2 is not. 4-21. A router is configured for OSPF. Under the OSPF process, you type in the area 1 range command. Which LSA types will be acted upon (summarized) as a result? (Choose all that apply) a) Type 1 b) Type 2 c) Type 3 d) Type 4 e) Type 5

4-22. A change in the topology of the BootCamp OSPF network causes the flooding operation. Which OSPF packet types are used in this LSA Flooding?
a) Hello
b) Link State Update
c) Link State Request
d) Database description
e) Link State Acknowledgement

4-23. A router is configured for OSPF and is connected to two areas: area 0 and area 1. You then configure area 1 as a stub area. Which LSAs will now operate inside of area 1?
a) Type 7
b) Type 1 and 2
c) Type 1, 2, and 5
d) Type 3 and 4
e) Type 1, 2 and 3

4-24. The following exhibit is an illustration of the output from an ASBR:

ASBR#show ip ospf database external
OSPF Router with ID (15.33.4.2) (Process ID 10)
Type-5 AS External Link States
LS age: 15
Options: (No ToS-capability, DC)
LS Type: AS External Link
Link State ID: 10.10.1.0 (External Network Number)
Advertising Router: 15.33.4.2
LS Seq Number: 80000002
Checksum: 0x513
Length: 36
Network Mask: /24
Metric Type: 1 (Comparable directly to link state metric)
ToS: 0
Metric: 10
Forward Address: 0.0.0.0
External Route Tag: 0

And this exhibit is an illustration from a router in the network:

Router#show ip ospf border-routers
OSPF Process 10 internal Routing Table
Codes: i - Intra-area route, I - Inter-area route
15.33.4.2 [2] via 30.0.0.1, Serial0/0, ASBR, Area 0, SPF 4

Using this information, what is the total metric for the route to subnet 10.10.1.0/24 on this Router?
a) 1
b) 8


c) 12
d) 20
e) 22

4-25. You are the system administrator at EnableMode. The EnableMode network contains four routers named R1, R2, R3, and R4. All four routers are connected to a hub via Ethernet interfaces. All four routers have a basic OSPF configuration of a network statement for the Ethernet network. During routine maintenance, you issue the show ip ospf neighbor command on Router R2. The output from the show ip ospf neighbor command shows 2WAY/DROTHER for its neighbor, Router R3. What conclusions can you draw from this output? (Choose all that apply)
a) Router R2 is the DR or BDR.
b) Router R3 is not a DR or BDR.
c) The Router R2 to Router R3 adjacency is not yet FULL.
d) Router R2 is not the DR.
e) Router R4 is the DR.

4-26. What is the default seed metric for routes redistributed into RIPv2?
a) 0
b) 1
c) 15
d) 16
e) 120
f) None of the above

4-27. Router R1 is configured for OSPF. Interface serial 0 is configured to be in area 0 and interface serial 1 is configured to be in area 1. Under the OSPF process, area 1 nssa default-information-originate is configured. Which of the following are true? (Choose all that apply)
a) R1 will inject a type 3 default route into area 1.
b) R1 will inject a type 7 default route into area 1.
c) R1 will inject a type 7 default route into area 0.
d) R1 needs a default route in its routing table to inject a default into area 1.
e) R1 does not need a default route in its routing table to inject a default into area 1.

4-28. Which of the following OSPF routers can generate a type 4 ASBR-summary LSA? (Choose all that apply)
a) ABRs
b) DR
c) BDR

d) ASBRs

4-29. Routers R1 and R2 are in the same LAN and both are running OSPF. Which multicast IP address will R1 and R2 use for sending routing updates to each other? (Choose all that apply)
a) 224.0.0.10

b) 224.0.0.1 c) 224.0.0.13
d) 224.0.0.5

e) 224.0.0.9
f) 224.0.0.6

4-30. The BootCamp router NL2 is experiencing OSPF problems with a neighbor across a frame relay network. During troubleshooting, OSPF event debugging was issued as shown below:

NL2#debug ip ospf events
OSPF events debugging is on
NL2#
00:16:22: OSPF: Rcv hello from 192.168.0.6 area 4 from Ethernet0/0 116.16.26.6
00:16:22: OSPF: End of hello processing
00:16:22: OSPF: Send hello to 224.0.0.5 area 4 on Ethernet0/0 from 116.16.26.2
NL2#
00:16:28: OSPF: Rcv hello from 192.168.0.3 area 3 from Serial1/0 116.16.32.1
00:16:28: OSPF: Mismatched hello parameters from 116.16.32.1
00:16:28: OSPF: Dead R 40 C 120, Hello R 10 C 30 Mask R 255.255.255.252 C 255.255.255.252
NL2#
00:16:32: OSPF: Rcv hello from 192.168.0.6 area 4 from Ethernet0/0 116.16.26.6
00:16:32: OSPF: End of hello processing
00:16:32: OSPF: Send hello to 224.0.0.5 area 4 on Ethernet0/0 from 116.16.26.2

Using the information above, what is the most likely reason for the OSPF problems across the frame relay link?
a) This router is in area 4 while its neighbor is configured to be in area 3.
b) There is a mismatch between the OSPF Frame Relay parameters configured on this router and those configured on its neighbor.
c) The OSPF network mode configured on this router is not the same as the mode configured on its neighbor.
d) This router has a Frame Relay interface DLCI statement that is using the broadcast mode, while its neighbor is using a point-to-point mode.
e) None of the above


4-31. Which of the following statements are true regarding the SPF calculation? (Select three)
a) The Dijkstra algorithm is run two times.
b) The previous routing table is saved.
c) The present routing table is invalidated.
d) A router calculates the shortest-path cost using their neighbor(s) as the root for the SPF tree.
e) Cisco routers use a default OSPF cost of 10^7/BW.

4-32. What statement is correct regarding OSPF adjacencies and link-state database synchronization?
a) Full adjacency occurs when OSPF routers reach the LOADING state.
b) Adjacency relationship begins in the EXSTART state.
c) All OSPF neighbors establish adjacencies in the FULL state with all other routers on the broadcast network.
d) The INIT state indicates that a router has received a Hello packet from a neighbor and has seen their own Router ID in the Hello packet.

4-33. OSPF is running on the BootCamp network. In OSPF, what LSA type would only cause a partial SPF calculation?
a) Type 1
b) Type 2
c) Type 4
d) Type 7
e) Type 9

4-34. OSPF is being used as the routing protocol in the BootCamp network. Which two statements regarding the SPF calculation on these OSPF routers are true? (Select two)
a) The existing routing table is saved so that changes in routing table entries can be identified.
b) The present routing table is invalidated and is built again from scratch.
c) A router calculates the shortest-path cost using their neighbor(s) as the root for the SPF tree.
d) Cisco routers use a default OSPF cost of 10^7/BW.

4-35. What statement is accurate regarding OSPF areas?
a) Redistribution is allowed into all types of OSPF areas.
b) When routes are redistributed into an OSPF stub area, they enter as type-5 LSAs.
c) Redistribution is allowed into an OSPF stub area, but not into an OSPF not-so-stubby area.

d) When routes are redistributed into an OSPF not-so-stubby area, they enter as type-5 LSAs.
e) When routes are redistributed into an OSPF not-so-stubby area, they enter as type-7 LSAs.

4-36. Within the BootCamp OSPF network, which statement is true regarding the LSAs contained in the link state database? (Choose all that apply)
a) The LSRefreshTime is 30 minutes.
b) LSAs can only be reflooded by the router that originated the LSA.
c) When an LSA reaches its MaxAge, the router will send out a purge message to the other routers within its area.
d) All LSAs contained in the LSDB expire at the same time unless they are refreshed.
e) The MaxAge of an LSA is 3600 seconds.

4-37. Which of the following are considered to be attributes of BGP routes? (Choose all that apply)
a) Origin
b) Weight
c) Local Preference
d) Community
e) Cluster List

4-38. You are the system administrator at BootCamp. You want to advertise the network 190.72.27.0/27 to an EBGP peer. What command should you use?
a) network 190.72.27.0
b) network 190.72.27.0 mask 255.255.255.224
c) network 190.72.27.0 mask 255.255.225.240
d) network 190.72.27.0 mask 0.0.0.31

4-39. Routers NL1 and NL2 are configured for BGP. Both routers reside in AS 65234. Routes from Router NL2 show up in the BGP table on Router NL1, but not in the IP routing table. What could be the cause of this problem?
a) Synchronization is off.
b) The BGP peers are down.
c) BGP multi-hop is disabled on Router NL1.
d) Router NL1 is not receiving the same routes via an internal protocol.

4-40. You have a router running BGP for the Internet connections as well as IGRP for use internally. You configure the network backdoor command on this router under the BGP process. What will this do?
a) It will change the distance of an iBGP route to 20.
b) It will change the distance of an eBGP route to 200.


c) It will change the distance of an IGRP route to 20.
d) It will not change the distance of the route.

4-41. You have two routers running BGP to two different ISPs. You wish to influence the way that traffic comes into your network from the Internet, but your company policy prohibits the use of BGP communities. What is the best way to influence this traffic?
a) Adjust the cost of your routers.
b) Use MED values.
c) Increase the weight value on one of your routers.
d) Decrease the local preference value on one of your routers.
e) Use AS-path prepending.
f) Use Metrics.

4-42. Your router is multi-homed to three different ISPs for Internet access. You then configure bgp deterministic-med under the BGP routing process configuration of your router. What effect does this change have on your network?
a) It configures BGP to compare MEDs between different ASs.
b) It makes the default metric count the worst possible metric.
c) It makes the default metric count the best possible metric.
d) It configures BGP to reorder the entries by neighbor AS.
e) It configures BGP to reorder the entries by MED.

4-43. Which of the following attributes are "well known" BGP attributes? (Choose all that apply)
a) Atomic-aggregate
b) MED
c) Next-hop
d) AS-path
e) Origin
f) Weight
g) Aggregator

4-44. In BGP routing, what does the rule of synchronization mean?
a) It means that a BGP router can only advertise an iBGP-learned route provided that the route is only in the BGP table.
b) It means that a BGP router can only advertise an eBGP-learned route provided that the route is an IGP route in the routing table.
c) It means that a BGP router can only advertise an iBGP-learned route provided that the route is in the routing table of all its iBGP neighbors.
d) It means that a BGP router can only advertise an eBGP-learned route provided that the route is metric 0 in the BGP table.
e) It means that a BGP router can only advertise an iBGP-learned route provided that the route is an IGP route in the routing table.

4-45. What is the correct sequence order that BGP routers use when determining the best route to any given destination?
a) MED, Local Preference, AS-path, Weight, Origin Code
b) Origin Code, MED, Weight, AS-path, Local Preference
c) Weight, Local Preference, AS-path, Origin Code, MED
d) Weight, Local Preference, MED, AS-path, Origin Code
e) MED, Weight, Local Preference, Origin Code, AS-path

4-46. You are setting up BGP on router NL1 and you wish to simplify the configuration file through the use of BGP peer groups. Which of the following best describes the proper use of BGP peer groups?
a) They should be used for peers with common community values
b) They should be used for peers with common inbound announcement policies
c) They should be used for peers with common outbound announcement policies
d) They should be used to combine MED inbound policies
e) They should be used for peers with common transitive AS policies

4-47. Assume the following routes are in the BGP routing table.

172.16.0.0/24 172.16.1.0/24 172.16.2.0/24 172.16.3.0/24


Also assume the following commands have been configured:

router bgp 1
 neighbor 10.1.1.1 remote-as 2
 aggregate-address 172.16.0.0 255.255.252.0 suppress-map specific
!
access-list 1 permit 172.16.2.0 0.0.1.255
!
route-map specific permit 10
 match ip address 1

Which routes will BGP advertise?

a) 172.16.0.0/22 b) 172.16.0.0/22, 172.16.2.0/24, 172.16.3.0/24 c) 172.16.0.0/22, 172.16.0.0/24, 172.16.1.0/24


d) 172.16.2.0/24 and 172.16.3.0/24

e) 172.16.0.0/22 and 172.16.1.0/24


4-48. A BGP router in the BootCamp network called P1R3 is configured as shown below:


hostname P1R3 ! ! Output omitted


j

router bgp 50001 synchronization bgp log-neighbor-changes neighbor 10.200.200.11 remote-as 50001 neighbor 10.200.200.11 update-source loopbackO neighbor 10.200.200.12 remote-as 20001 neighbor 10.200.200.12 update-source LoopbackO neighbor 10.200.200.14 remote-as 50001 neighbor 10.200.200.14 update-source LoopbackO no auto-summary PIR3#show ip bgp summary BGP router identifier 10.200.200.13, local As number 50001 BGP table version is 1, main routing table version 1 6 network entries using 606 bytes of memory 7 path entires using 336 bytes of memory 4 BGP path attribute entries using 240 bytes of memory 3 BGP AS-PATH entries using 72 bytes of memory 0 BGP route-map cache entries using 0 bytes of memory 0 BGP filter-list cache entries using 0 bytes of memory BGP using 1254 total bytes of memory BGP activity 6/0 prefixes, 7/2 paths, scan interval 60 sees Neighbor V AS MsgRcvd MsqSent TbIVer InO OutO Up/Down State/Pfxrcd 10.200.200.11 4 50001 9 4 10 0 00:00: 14 6 10.200.200.12 4 50001 9 4 10 0 00:00: 14 6 10.200.200.14 4 50001 4 4 10 0 00:00: 14 0 PIR#show ip bgp BGP table version is 1, local router: ID is 10.200.200.13 Status Codes: s suppressed, d damped, h history, * valid, > best, I internal Origin codes: i - IGP, - EGP, ? - incomplete Network Next Hop Metric LocPrf Weight Path 10.0.0.0 10.200.200.12 0 100 0 i 10.200.200.11 0 100 0 i 192.168.11.0 10.200.200.12 0 100 0 50998 50222 50223 i 10.200.200.11 0 100 0 50998 50222 50223 i 192.168.12.0 10.200.200.12 0 100 0 50998 50222 50223 i 10.200.200.11 0 100 0 50998 50222 50223 i 192.168.13.0 10.200.200.12 0 100 0 50998 50222 50223 i 10.200.200.11 0 100 0 50998 50222 50223 i 192.168.14.0 10.200.200.11 0 100 0 50998 50222 50223 i 10.200.200.11 0 100 0 50998 50222 50223 i <output omitted> PIR#show ip route Codes: C - connected, s - static, I IGRP, R - RIP, M - mobile, - BGP

187 Chapter 4: IP Routing Protocols D - EIGRP, EX - EIGRP external, 0 - OSPF< IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 El - OSPF external type 1, E2 - OSPF external type 2, - EGP I - is-is, su - IS-IS summary, LI - IS-IS leval-1, L2 - IS-IS level - 2 la - IS-IS inter area, * - candidate default, U - per-user static route 0 - ODR, P - periodic downloaded static route Gateway of last resort is not set 10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks o 10.200.200.11/32 [110/11] via 10.1.1.1, 00:06:38, Ethernet0/0 o 10.200.200.14/32 [110/65] via 10.1.3.4, 00:06:38, Seriall/0 o 10.200.200.12/32 [110/75] via 10.1.1.1, 00:06:38, Ethernet0/0 o 10.200.200.13/32 is directly connected, LoopbackO o 10.1.3.0/24 is directly connected, Seriall/0 o 10.1.2.0/72 [110/74] via 10.1.3.4, 00:06:38, Seriall/0 o 10.1.1.0/24 is directly connected, Ethernet0/0 o 10.1.0.0/24 [110/74] via 10.1.1.1, 00:06:38, Ethernet 0/0 Router PIR3 is running an IBGP full-mesh with its IBGP neighbors (10.200.200.11, 10.200.200.12, and 10.200.200.14). Using the BGP configuration and the show command outputs above, why are BGP routes not being selected in the BGP table and placed into the IP routing table? a) Because the 10.200.200.11 and 10.200.200.12 neighbors are setting the Weight to 0 b) Because the 10.200.200.11 and 10.200.200.12 neighbors are setting the MED to 0 c) Because the 10.200.200.11 and 10.200.200.12 neighbors are not using next-hopself d) Because synchronization is enabled on PIR3 e) Because there are no routes to reach the next-hops 4-49. With regards to BGP and the administrative distance in a routed environment, which statement is correct? a) The administrative distance of all BGP routes is 20, which explains why BGP routes are preferred over any IGP (such as OSPF). b) BGP is a path vector protocol, and thus does not employ the concept of administrative distance. c) BGP dynamically adjusts its administrative distance to match that of the IGP within the AS to eliminate routing confusion. d) BGP actually employs two different administrative distance values: IBGP is 20, while EBGP is 200. e) BGP actually employs two different administrative distance values: IBGP is 200, while EBGP is 20. 4-50. You are configuring the BootCamp Internet router as a BGP peer to your ISP's router. After doing this, which BGP attributes will be carried in every BGP update (both IBGP and EBGP)? a) Origin, AS-Path, Next Hop b) Origin, local preference, AS-Path


c) Router-ID, Origin, AS-Path
d) Router-ID, Local-Preference, Next-Hop
e) AS-Path, Local Preference, Next-Hop

4-51. Router NL1 is used as the BootCamp Internet router and is configured for BGP. The IP BGP information of this router is displayed below:

NL1#show ip bgp
BGP table version is 12, local router ID is 172.16.1.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network          Next Hop     Metric LocPrf Weight Path
*> 192.168.0.0/16   172.16.1.1        0             0 50103 {50101,50102} i

Given the above information, why does the 192.168.0.0/16 prefix contain an AS-path of 50103 {50101, 50102}?
a) Because AS 50101 and AS 50102 are Transit ASs
b) Because AS 50103 is using BGP confederations with two sub-ASs (sub-AS 50101 and sub-AS 50102)
c) Because it is an aggregate route and the more specific routes have passed through AS 50101 and AS 50102
d) Because AS 50103 is using AS-path prepending to influence the return traffic
e) Because AS 50103 is performing route summarization using the network 192.168.0.0 mask 255.255.0.0 command

4-52. The IP BGP information for a specific network on router NL1 is displayed below:

NL1#show ip bgp 10.254.0.0
BGP routing table entry for 10.254.0.0/24, version 8
Paths: (2 available, best #1, table Default-IP-Routing-Table, not advertised)
  Advertised to non peer-group peers:
  10.1.0.2 10.200.200.13 10.200.200.14
  50998
    172.31.1.3 from 172.31.1.3 (172.31.1.3)
      Origin IGP, metric 0, localpref 100, valid, external, best
      Community: 50998:1 no-export
  50998
    172.31.1.3 from 10.1.0.2 (10.200.200.12)
      Origin IGP, metric 0, localpref 100, valid, internal

Router NL1, which is in Transit AS 50001, is not propagating the 10.254.0.0/24 prefix to its neighboring ASs. Based on the show ip bgp 10.254.0.0 output shown, determine a possible cause of this problem.
a) Because the 10.254.0.0/24 prefix is tagged with the no-export community
b) Because the best path chosen by BGP is the IBGP learned path
c) Because the best path chosen by BGP is the EBGP learned path
d) Because the 10.254.0.0/24 prefix has a MED of 0

e) Because of the EBGP split horizon rule

4-53. The BootCamp network is using BGP for Internet routing, and part of the router configuration is shown below:

router bgp 50101
 neighbor 10.1.1.1 remote-as 50102
 neighbor 10.2.2.2 remote-as 50103
 neighbor 10.2.2.2 route-map test2 out
 neighbor 10.1.1.1 route-map test out
!
ip as-path access-list 1 permit _50104$
ip as-path access-list 2 permit .*
!
route-map test permit 10
 match as-path 1
 set metric 140
!
route-map test permit 20
 match as-path 2
!
route-map test2 permit 10
 set metric 100

Using the configuration above, which statement is correct?
a) All prefixes originating in AS 50104 will be advertised to the 10.1.1.1 neighbor with a MED of 150.
b) All prefixes not originating in AS 50104 will be advertised to the 10.1.1.1 neighbor with a MED of 0.
c) All prefixes not originating in AS 50104 will be advertised to the 10.1.1.1 neighbor.
d) All prefixes will be advertised to the neighbor with a MED of 100.
e) All prefixes originating in AS 50104 will be advertised to the 10.2.2.2 and the 10.1.1.1 neighbor with a MED of 100.

4-54. Assume that a BGP router has learned prefix 63.0.0.0/8 from two different BGP neighbors. Which statement regarding the BGP route selection process and how this route will be installed is correct?
a) The update from the neighbor that has the highest weight and the highest local preference becomes the preferred path.
b) The update from the neighbor that has the shortest AS path becomes the preferred path.
c) The update from the neighbor that has the highest local preference and the highest MED becomes the preferred path.
d) The update from the neighbor that has the lowest local preference becomes the preferred path.
e) The update from the neighbor that has the highest MED becomes the preferred path.


4-55. The BootCamp network is using BGP for external routing. If a BGP router has more than one route to the same IP prefix, in what order are BGP attributes examined in making a best path route selection? a) LOCAL_PREF, MED, AS_PATH, WEIGHT, ORIGIN b) WEIGHT, LOCAL_PREF, ORIGIN, AS_PATH; MED c) WEIGHT, LOCAL_PREF, AS_PATH, ORIGIN, MED d) WEIGHT; LOCAL_PREF, AS_PATH, MED, ORIGIN e) MED, LOCAL_PREF, WEIGHT, ORIGIN, AS_PATH 4-56 Router NLl is running RIP Version II and has 2 interfaces. NLl has received RIP routing updates from its neighbors on both interfaces. The first interface receives a routing update for network 10.1.1.0/24 with a metric of 3 while the second interface also receives a routing update for network 10.1.1.0/24 with a metric of 5. Which interface(s) will router NLl select to forward packets to network 10.1.1.0/24? a) The router will choose the first interface because it has the lowest metric. b) The router will load share across both interfaces in a weighted fashion, sending the first 3 packets out of the first interface, and the next 5 packets out of the second interface. c) The router will choose the second interface because it has the highest metric) d) The router will equally load share packets across both interfaces in a round robin fashion, because both are valid RIP Version II routes. e) The router will ignore the RIP metrics and compare the administrative distance of each route, and choose the interface with the lowest administrative distance) 4-57. Router NLl and NL2 are IBGP peers. Which BGP attributes are carried in all IBGP routing updates? (Select 3) a) MED b) Local Preference c) Weight d) Community e) AS-path f) Cost g) Origin 4-58. Many of the BootCamp BGP routers are configured using peer groups. Which of the following correctly display the common properties of BGP peer groups? a) Community values b) Inbound policies

c) Outbound policies
d) MED inbound policies
e) Transitive AS policies
f) None of the above

4-59. An EIGRP multicast flow timer is defined as which of the following?
a) The timeout timer after which EIGRP retransmits to the neighbor in non-CR mode, through unicasts.
b) The time interval that EIGRP hello packets are sent.
c) The timer after which EIGRP will not forward multicast data traffic.
d) The timer interval between consecutive transmitted EIGRP hello intervals.
e) The timeout timer after which EIGRP retransmits to the neighbor in CR mode, through unicasts.
f) None of the above.

4-60. Which components are factored in by default when an EIGRP metric is calculated? (Choose all that apply)
a) MTU
b) Delay
c) Load
d) Bandwidth
e) Reliability

4-61. The topology of a network changes, causing an EIGRP router to go into the active state. The DUAL process shows a new route that meets the EIGRP Feasibility Condition. In regards to this specific route, which of the following is true?
a) The Feasible Distance of the new route must be equal to one.
b) The Feasible Distance of the new route must be higher than one.
c) The Reported Distance of the new route must be equal to the Feasible Distance.
d) The Reported Distance of the new route must be higher than the Feasible Distance.
e) The Reported Distance of the new route must be lower than the Feasible Distance.

4-62. Which of the following EIGRP packets require an acknowledgement? (Choose all that apply)
a) Hello
b) Query
c) Reply
d) Update


e) Ack f) None of the above 4-63. Which of the following types of EIGRP packets contain the Init flag? a) Hello/Ack

b) Query
c) Reply d) Update e) None of the above 4-64. In your EIGRP network you notice that the neighbor relationship between two of your routers was recently restarted. Which of the following could have occurred to have caused this? (Choose all that apply) a) The clear ip route command was issued. b) The ARP cache was cleared. c) The IP cache was cleared. d) An update packet with Init flag set from a known, already established neighbor relationship was received by one of the routers. e) The IP EIGRP neighbor relationship was cleared manually. 4-65. The BootCamp EIGRP network has a router named Router NL2. Router NL2 is connected to an EIGRP neighbor, NL1. NL1 is defined as a stub. With regard to this network, which of the following are true? a) Router NL1 will not advertise any network routes to NL2. b) Router NL2 will send only summary routes to NL1. c) Router NL2 will not query NL1 about any internal route. d) Router NL2 will not query NL1 about any external route. e) Router NL2 will not query NL1 about any route) f) None of the above. 4-66. How is the metric for a summarized route derived when the interface summary command for EIGRP is used? a) It is derived from the route that has the biggest metric. b) It is derived from the route that has the smallest metric. c) It is derived from the interface that has the summary command configured on it. d) It is derived from the route that has the shortest matching mask. e) It is derived from the default-metric. 4-67. EnableMode.com is designing a large network with core, distribution, and access layers. EIGRP is the routing protocol that will be used throughout the network. Each distribution router has WAN connectivity to at least 20

access routers. Every router in the network has an explicit route to every possible subnet. All hosts in the network should be able to reach any other host, anywhere within the network. What should be done to optimize the routing configuration?
a) Ensure IP address space is allocated so that routes can be summarized at the core routers.
b) Filter routes in the distribution layer so that every access router doesn't have an explicit route to every subnet.
c) Filter routes in the access layer so that every access router doesn't have an explicit route to every subnet.
d) Ensure IP address space is allocated so that routes can be summarized at each distribution router.

4-68. A router is being configured to override the normal routed behavior of certain traffic types. To do this, policy-based routing is used. Which of the following statements is FALSE with regards to the application of policy-based routing (PBR)?
a) PBR cannot be used to set the IP precedence.
b) PBR cannot set the DSCP in one statement.
c) PBR can be used to set the next hop IP address.
d) PBR can be used to match on the length of a packet.
e) All of the above are true

4-69. Routers NL1 and NL2 are in the same LAN, and both are running RIP version 2. During a troubleshooting session you place a sniffer on the LAN network. Using the sniffer you see routers NL1 and NL2 sending routing updates to each other every 30 seconds. Which IP address should you expect to see these updates destined to? (Choose all that apply)
a) 224.0.0.10
b) 255.255.255.255

c) 224.0.0.13
d) 224.0.0.5

e) 224.0.0.9 f) 224.0.0.6
4-70. What is the destination IP address of routing update packets used by RIPv2? What would your reply be? a) 224.0.0.1 b) 224.0.0.10 c) 224.0.0.5 d) 224.0.0.9 e) 255.255.255.255


4-71. The router NL1 is using RIPv2 as the routing protocol, and the partial configuration file is displayed below:

interface Ethernet1
 ip address 10.1.1.1 255.255.255.0
 ip summary-address rip 10.2.0.0 255.255.0.0
 ip split-horizon
!
router rip
 network 10.0.0.0

What is a result of the configuration shown for router NL1?
a) The 10.2.0.0 network overrides the auto summary address of 10.1.1.1.
b) The 10.2.0.0 network is advertised out interface E1, and the auto summary address is not advertised.
c) The auto summary address of 10.1.1.1 will be advertised out interface E1, and the interface summary-address is not advertised.
d) Neither the auto summary address nor the interface summary-address is advertised because split horizon is enabled.
e) Both the auto summary address and the interface summary-address are advertised out of interface E1.

4-72. A customer has a Frame Relay network with two sites (a headquarters site and a remote site) and a single PVC connecting the two sites. The network is running RIP version 2. The company is now expanding and adding another remote site in the frame relay network, and has ordered a second PVC between the new remote site and the headquarters site. All Frame Relay interface IP addresses are in a single subnet. The customer configured Frame Relay DLCI mappings and can successfully ping from the new remote to the headquarters site as well as the other remote site. However, the new router does not have a route in its route table to the other remote site's LAN, and cannot ping the LAN interface or any hosts on that LAN. What is most likely causing the problem?
a) Neighbor statements are not configured on the two remote sites, pointing to all other sites.
b) The headquarters site router has split-horizon enabled on the Frame Relay interface.
c) The Frame Relay IP to DLCI mappings are incorrectly configured.
d) RIP cannot propagate routing updates over a partial mesh Frame Relay configuration, so another routing protocol should be selected.
e) Triggered updates should be configured on the headquarters router, to directly forward routing updates between the two remote sites.

4-73. A RIP Version 2 router is sending RIP updates to its neighbor that include several contiguous IP subnet routes in the 10.1.1.0/24 space. What command should be configured to aggregate the routes into a single route in the update to the RIP neighbor?

a) summary-address rip 10.1.1.0 255.255.255.0, configured under the RIP process or the interface
b) summary-address 10.1.1.0 255.255.255.0, configured under the RIP process
c) ip summary-address rip 10.1.1.0 255.255.255.0, configured under the interface
d) rip summary-address 10.1.1.0 255.255.255.0, configured under the interface
e) ip rip summary-address 10.1.1.0 255.255.255.0, configured under the interface
f) None of the above

Chapter 4 Answers

Chapter 5

Quality of Service (QoS)


With today's latency-sensitive applications like transaction processing, voice, or video, it isn't good enough just to deliver the data where it needs to arrive; it also must arrive there within the time requirements of the application. Even if you do not have delay-sensitive applications, it is always in your best interest to deliver the data that needs to be delivered first and to use your costly Wide-Area Network links in the most efficient method possible. So, prepare to learn about Cisco's abundant Quality of Service (QoS) options.

Cisco breaks Quality of Service into six categories. They are:

1. Traffic Classification: includes Policy Routing, CAR, and NBAR
2. Congestion Management: includes Weighted Fair Queuing (WFQ), Custom Queuing, and Priority Queuing. These are also known as Fancy Queuing.
3. Congestion Avoidance: includes WRED
4. Policing and Shaping: includes Generic Traffic Shaping (GTS), Class-Based Shaping, and Distributed Traffic Shaping
5. Signaling: includes RSVP
6. Link Efficiency Mechanisms: includes RTP

With IOS 12.2(15)T and above, Cisco has released "AutoQoS VoIP". This feature will automatically configure your router to provide quality service for Voice over IP traffic (a brief configuration sketch appears after the list below).

QoS is defined over various network technologies, including Frame Relay, Asynchronous Transfer Mode (ATM), Ethernet and 802.1 networks, SONET, and IP-routed networks. Cisco IOS QoS software is designed to predictably service many applications and traffic types. It provides:

Enhanced control over, and more efficient use of, network resources
Better network analysis, management, and accounting tools
The ability to consistently service the most important traffic while still providing access for less time-sensitive applications
The ability for ISPs to offer tailored grades of service to their customers
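As a rough illustration only (the interface and bandwidth values are arbitrary and not taken from the text), enabling AutoQoS VoIP on a low-speed serial link can be as simple as the following; a bandwidth statement is needed on serial interfaces so AutoQoS can size its policies:

interface Serial0/0
 bandwidth 768
 ip address 10.1.1.1 255.255.255.252
 auto qos voip

The router then generates the class maps, policy maps, and (on slow links) fragmentation and compression settings on your behalf; the generated configuration can be reviewed with show auto qos.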

There are three fundamental pieces for a QoS implementation:

Queuing, scheduling, and traffic shaping tools
Signaling techniques for coordinating QoS from end to end
QoS policy, management, and accounting functions

There are three basic levels of end-to-end service:

Best-effort service: Basic connectivity with no guarantees
Differentiated service or Soft QoS: Some traffic is treated better than the rest via statistical preference
Guaranteed service or Hard QoS: Absolute reservation of network resources for specific traffic

QoS Overview

Five Benefits for Implementing QoS in the Enterprise Networks

Cisco states that there are five major aspects of good QoS deployments:
1. Application Classification
2. Policy Generation
3. Configuration
4. Monitoring & Reporting
5. Consistency

How a Converged Network Behaves Without QoS

Without QoS, a converged network (a network that transports both voice packets and data packets) treats all packets equally. A packet containing an email attachment is given the same treatment and preference as packets containing real-time voice conversations. This is why QoS is so critical: so that time-sensitive voice packets can be treated preferentially. They can be moved to the front of the buffer, they can interrupt a large packet that is traversing a slow link, and they can be given the highest priority the network can provide.

QoS framework

QoS can be broken into three aspects:
The ability of a device to perform QoS (classification of packets, queuing, prioritization, etc.)
The ability of a network to signal between components to establish QoS from end to end within a data path (RSVP, etc.)
The ability to administer policies and management functions to control end-to-end traffic in a network

Edge devices and core devices typically perform different QoS functions. Edge routers are the first devices to accept traffic into the network, so they typically perform packet classification and admission control (marking, policing, shaping, etc.). Core routers assume the traffic has already been classified and policed. Core routers are more likely to encounter links that are heavily used, thus they focus on congestion management and congestion avoidance.

Call Admission Control Functionality

Call Admission Control Functionality is the ability to admit or deny a call across a portion of the network depending on some standard, such as whether enough bandwidth is available to support the call. Call admission control can be deployed in many ways. However, all call admission control serves the same purpose: to determine whether a particular call's voice traffic will be admitted across a network link depending on the status of the network.

Integrated Services vs. Differentiated Services

Integrated services involve an application requesting a specific quality of service from the network before it begins transmitting data. This requires the application knowing what type of service it requires (including requirements of bandwidth, latency, etc.) and explicitly signaling this request to the network. With integrated services the application only sends traffic after (and if) it has received confirmation that the network has provisioned the end-to-end QoS that it requires. If the network accepts the request, the application's traffic is expected to adhere to the agreed-upon traffic profile (amount of bandwidth, etc.).

With integrated services the network must perform admission control. That is, it can only accept requests for a particular quality of service if it has the resources to provision the network to support that request. For example, suppose a desktop video conferencing system is requesting 256 Kb/s of bandwidth and less than 40 ms of latency to a particular endpoint (presumably another video conferencing device). The network can only accept this request if it has 256 Kb/s of bandwidth free on every leg between the two endpoints (including the WAN, if any). Obviously the network cannot reserve all the bandwidth of a given connection, since many applications (email, etc.) do not request a particular QoS; they simply accept best effort. In order to meet its QoS requirement the network not only has to reserve appropriate bandwidth, it also needs to meet latency requirements. This can often mean classifying (identifying) the appropriate traffic and queuing it with a high priority.

The Resource Reservation Protocol (RSVP) is the best-known form of integrated services. RSVP reserves the necessary network resources (bandwidth, latency, etc.) between two endpoints before starting a connection. Although RSVP holds great promise, at this time it is very rarely used in practice.

Differentiated services are easier to implement because the application does not reserve resources from end to end, as with integrated services. In fact, differentiated services does not signal the network at all. Instead, differentiated services rely on the network to apply QoS on a packet-by-packet basis, depending on criteria such as IP precedence, source or destination address, or TCP port. With differentiated services the network applies classification at the point at which the packets enter the network, so the "edge" router or switch can identify and sort packets. This device can also set IP Precedence or the Differentiated Services Code Point (DSCP) within each packet to allow other network devices to apply similar QoS without having to repeat the classification exercise.

Differentiated services replace the Type of Service (ToS) field of an IP packet with its own field, the Differentiated Services field (DS field). The first six bits of the DS field are composed of the DSCP. The last two bits of this field are used for explicit congestion notification. The DSCP defines the Per Hop Behavior (PHB) that each device should provide for that packet. PHB refers to the packet scheduling, queuing, policing, or shaping behavior of a device on any given packet. There are four defined PHBs:

Default PHB
Class-Selector PHB
Assured Forwarding (AF) PHB
Expedited Forwarding (EF) PHB

The default PHB is best effort service: the device will forward the packet as fast as resources allow. The Class Selector PHB provides backward compatibility with devices that only understand IP Precedence values (and not DSCPs). The Assured Forwarding PHB defines four priority values and three drop precedence values, for a total of twelve classes. The Expedited Forwarding PHB is the best service provided by differentiated services. It is a guaranteed bandwidth service well suited to voice or video traffic. It provides low latency, low jitter, low loss, and assured bandwidth.

Configure QoS Policy using Modular QoS CLI

The Modular QoS CLI (MQC) is nothing more than using the command-level interface of the switch, router, gateway or gatekeeper to configure QoS. Although MQC is discussed throughout this guide, here is a quick overview, with minimal explanation:

FIFO: default, no config necessary

WFQ:
interface serial 0
 fair-queue

CBWFQ:
class-map Sarasota-traffic
 match access-group 101
 (or match input-interface serial 1, or match protocol ip)
!
policy-map sarasota
 class Sarasota-traffic
  bandwidth 384      (in kb/s, or you can specify percent bandwidth)
  queue-limit 20
  random-detect      (if using WRED rather than the default, tail drop)
!
interface serial 4/1
 service-policy output sarasota

CQ:
interface serial 1/1
 custom-queue-list 1
!
queue-list 1 protocol ip 1 tcp 23      (TCP port 23 to queue 1)
queue-list 1 protocol ip 2 tcp 80      (TCP port 80 to queue 2)
queue-list 1 protocol ip 3 list 100    (ACL 100 to queue 3)
queue-list 1 queue 1 limit 20          (max 20 packets in queue 1)
queue-list 1 queue 2 byte-count 1000   (byte count 1000 in queue 2)

PQ:
priority-list 2 protocol ip high list 5
priority-list 2 interface ethernet 1/0 low
(or priority-list 2 protocol ip medium udp 161)
!
interface serial 1/0
 priority-group 2
!
access-list 5 permit 192.168.1.0 0.0.0.255

LLQ:
class-map voice-enabled
 match access-group 150
policy-map phoenix
 class voice-enabled
  priority 128
!
interface serial 1
 service-policy output phoenix
!
access-list 150 permit udp any range 16384 32768 any
access-list 150 permit udp any any range 16384 32768

IP RTP Priority:
interface serial 0/1
 ip rtp priority starting-port-number port-number-range bandwidth

CAR:
interface serial 0/1
 rate-limit input 128000 10000 20000 conform-action transmit exceed-action drop

Marking:
class-map telnet-class
 match access-group name telnet
class-map web-class
 match access-group name www
!
policy-map salem
 class telnet-class
  set ip precedence 5
 class web-class
  set ip dscp ef
!
interface serial 1/1
 service-policy input salem
!
ip access-list extended telnet
 permit tcp any any eq telnet
ip access-list extended www
 permit tcp any any eq www

Policing:
class-map ftp
 match access-group 101
!
policy-map limit-ftp
 class ftp
  police 256000 32000 32000 conform-action transmit exceed-action drop violate-action drop
!
interface serial 0/0
 service-policy input limit-ftp
!
access-list 101 permit tcp any any eq ftp-data

WRED:
interface serial 0
 random-detect

FRED:
interface serial 1
 random-detect
 random-detect flow
 random-detect flow count 16
 random-detect flow average-depth-factor 8

Compressed IP RTP (CRTP):
interface serial 0/0
 ip rtp header-compression

interface serial 0/0
 encapsulation frame-relay
 frame-relay map ip 10.10.0.1 17 broadcast rtp header-compression

Link Fragmentation and Interleaving (LFI) for Multilink PPP (MLP):
interface virtual-template 1
 ip unnumbered loopback 0
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment-delay 30
!
multilink virtual-template 1

LFI for Frame Relay:
interface Serial3/0
 no ip address
 encapsulation frame-relay
 frame-relay traffic-shaping
!
interface Serial3/0.1 point-to-point
 frame-relay interface-dlci 83 ppp Virtual-Template1
  class frame-pvc
!
interface Virtual-Template1
 bandwidth 100
 ip address 10.10.10.51 255.255.255.252
 ppp authentication chap
 ppp chap hostname rtr3
 ppp multilink
 ppp multilink fragment-delay 20
 ppp multilink interleave
!
map-class frame-relay frame-pvc
 frame-relay cir 96000
 frame-relay bc 1000
 frame-relay be 0
 no frame-relay adaptive-shaping

LFI using FRF.12:
interface serial 1/1
 frame-relay traffic-shaping
 frame-relay interface-dlci 105
  class FR-fragment
!
map-class frame-relay FR-fragment
 frame-relay cir 64000
 frame-relay fragment 25
 frame-relay fair-queue

Generic Traffic Shaping:
interface serial 0/0
 traffic-shape rate 128000 32000

Frame Relay Traffic Shaping:
interface Serial1
 encapsulation frame-relay
 frame-relay class 384K_VCs
 frame-relay traffic-shaping
!
interface Serial1.1 point-to-point
 frame-relay class 512K_VCs
 frame-relay interface-dlci 101
!
interface Serial1.2 point-to-point
 frame-relay interface-dlci 102
!
interface Serial1.3 point-to-point
 frame-relay interface-dlci 103
!
interface Serial1.4 point-to-point
 frame-relay interface-dlci 104
  class 256K_VCs
!
map-class frame-relay 384K_VCs
 frame-relay traffic-rate 384000 384000
 frame-relay adaptive-shaping becn
!
map-class frame-relay 512K_VCs
 frame-relay traffic-rate 512000 512000
 frame-relay adaptive-shaping becn
!
map-class frame-relay 256K_VCs
 frame-relay traffic-rate 256000 256000
 frame-relay adaptive-shaping becn

Classification and Marking

Purposes of Classification and Marking

Classification is identifying a packet based on some criteria and assigning it to a group that can be given a particular class of service. Classification can be based on many factors, such as receiving interface, ACLs, protocol, etc. Marking is changing or setting part of the packet, allowing it to be easily recognized throughout the network so that a class of service can be applied. Marking can be done by:

Setting the IP Precedence or DSCP bits in the IP header
Setting the Class of Service (CoS) bits in the Layer 2 802.1p header
Associating a local QoS group value with the packet
Setting the cell loss priority (CLP) bit in the ATM header from 0 to 1

Setting the IP Precedence or DSCP bits is very effective since once these values are set, by default, Layer 3 devices adhere to them but do not alter them. These values also determine how Weighted Random Early Detection (WRED) treats packets in a congestion situation. Setting the CoS value is not as useful since it is not maintained end-to-end through the network: the CoS value is removed when the 802.1p header is stripped. However, it is very useful for marking (and thus prioritizing) traffic on a trunk link, switch-to-switch link, or router-to-switch link. DSCP values can also be mapped from CoS values, making for a more permanent marking. Setting a local QoS group is useful to classify packets within a router (not between routers). This classification can be based on parameters such as IP prefix, autonomous system, and BGP community values. Setting the CLP bit provides a way of controlling how cells are discarded in an ATM network. Cells with a CLP of 1 are discarded before cells with a CLP of 0. This bit is very similar to the Discard Eligible (DE) bit in Frame Relay.
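Where CoS-to-DSCP mapping is performed on a Catalyst switch, it is typically configured as a global map; the following is only a minimal sketch (the mapping values are arbitrary and the exact command set varies by platform and software version):

mls qos
mls qos map cos-dscp 0 8 16 24 32 46 48 56
!
interface FastEthernet0/1
 mls qos trust cos

Here the switch trusts the incoming 802.1p CoS value on the trunk port and rewrites the DSCP according to the global cos-dscp table, preserving the marking at Layer 3.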

Difference Between Classification and Marking

Classification is the process of identifying and grouping particular packets. Classification examines one or more attributes of a packet: source or destination IP address, source or destination TCP/UDP ports, receiving interface, etc. Classification allows action to be taken on packets. Devices can use classification to mark packets, to provide a particular class of service on the packet, or both. Marking is the process of identifying packets that have been classified so that other devices in the network can also apply a class of service. The most common forms of marking are setting IP Precedence bits, setting DSCP bits, and/or setting 802.1p CoS bits.

Class of Service, IP Precedence and DiffServ Code Points

The phrase "Class of Service" is used in many ways. One common use of the phrase is to describe a particular level of service, such as the classes of service defined by the IP Precedence bits (see below). Another use is to define a level of priority in a Layer 2 header. The 802.1p standard dictates bits for use in defining Class of Service (CoS). The 802.1p and 802.1Q standards are commonly used on Layer 2 trunk links. 802.1p defines priority of packets using the CoS bits. 802.1Q defines a tagging standard, allowing more than one VLAN (or subnet) to be carried separately across one physical link. When the CoS bits are set in an 802.1p header, a Layer 2-only device (such as a switch) can still apply priority to certain packets since it understands and adheres to the CoS value (whereas it likely does not always understand the Layer 3 IP Precedence or DSCP field).

IPv4 contains an 8-bit Type of Service (ToS) field in its header. Three of these eight bits form the IP Precedence bits, providing six different classes of service (two levels are reserved), as shown in Table 5-1. Once these bits are set, other devices throughout the network can assign a level of service (low latency, etc.) depending on the three IP Precedence bits. For example, Weighted Fair Queuing (WFQ) and Weighted Random Early Detection (WRED) can both use IP Precedence bits to determine how to treat packets.

Table 5-1. IP Precedence Classes

IP Precedence Bits    Name of Service Class
0 (000)               routine
1 (001)               priority
2 (010)               immediate
3 (011)               flash
4 (100)               flash-override
5 (101)               critical
6 (110)               internet control (reserved)
7 (111)               network control (reserved)

The DiffServ Code Point (DSCP) uses (and replaces) the Type of Service (ToS) field in the IPv4 header. The eight bits of the IPv4 Type of Service field are:

the three IP Precedence bits (discussed earlier in this section)
one bit for 'low delay'
one bit for 'high throughput'
one bit for 'high reliability'

The last two bits of the ToS field are not used. DSCP uses these first six bits to define its levels of service, also known as forwarding classes.

Table 5-2. DSCP Classes

Forwarding Class                    DSCP Value
Default Forwarding                  000000
Assured Forwarding - Class 1 - 4    001010 through 100110
Expedited Forwarding                101110
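As a quick worked example of how the two marking schemes line up: the Expedited Forwarding code point is the six-bit pattern 101110, which is decimal 46. Its first three bits, 101, are the same bits that a precedence-only device reads as IP precedence 5 (critical), which is why EF-marked voice traffic is still treated as high priority by older equipment that only understands IP Precedence.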

IPv6 (for those of you who are really forward looking!) has a 'Traffic Class' field that is very similar to the IPv4 Type of Service field. This field also contains bits used to prioritize traffic, and DSCP also replaces this field with its own information.

Network Based Application Recognition (NBAR)

NBAR adds greater intelligence to network infrastructures by allowing them to recognize applications, not just TCP and UDP ports. This includes applications that use dynamic TCP and UDP ports and applications that use web-based protocols, by identifying URL, host, or MIME type. NBAR can apply QoS features such as guaranteeing bandwidth, limiting bandwidth, shaping traffic, etc. NBAR allows administrators to define their own protocols (or define new protocols that emerge) by using a Packet Description Language (PDL). NBAR also has the ability to learn what applications are running on a network, allowing the administrator to then apply the appropriate QoS. The ability of NBAR to distinguish HTTP traffic depending on host or URL allows the administrator to prioritize certain (critical) HTTP traffic, while allowing the remaining, "normal" web traffic to receive best effort service. This is known as 'subport' service since it goes beyond (or "below") just the TCP port. NBAR also has the ability to perform subport classification on Citrix ICA traffic, depending on the published application to which the user is connecting.
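A minimal sketch of NBAR-based classification and marking (the class names, URL pattern, and DSCP value are arbitrary illustrations, not taken from the text): HTTP requests whose URL matches a given pattern are matched with NBAR and given a higher DSCP, while everything else is left alone:

class-map match-all critical-web
 match protocol http url "*orders*"
!
policy-map mark-web
 class critical-web
  set ip dscp af31
!
interface serial 0/0
 service-policy input mark-web

Protocol discovery (ip nbar protocol-discovery on an interface) can be enabled first to see which applications are actually present before writing the policy.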

Classify and Mark Traffic

The Committed Access Rate (CAR) and Distributed CAR (DCAR) services limit the input or output transmission rate on an interface or subinterface using a flexible set of criteria, such as incoming interface, IP precedence, or IP access list. Every interface can have multiple CAR policies, matching and taking different actions on various types of traffic. For example, CAR can allow more bandwidth for more important protocols and less bandwidth for less important protocols.

To configure CAR on traffic entering an interface:

interface serial 0/1


 rate-limit input 128000 10000 20000 conform-action transmit exceed-action drop

In this example CAR limits all traffic to an average rate of 128 Kb/s. Traffic within the 128 Kb/s limit is always conforming. "Normal" bursts of 10,000 bytes and "excess" bursts of 20,000 bytes are allowed. CAR propagates bursts; it does not smooth or shape traffic. In this example traffic that conforms is transmitted normally and traffic that exceeds is dropped. CAR has the options shown in Table 5-3 for both conform (meets the rate limit) and exceed (higher than the rate limit) traffic:

Table 5-3. CAR Actions for Conform and Exceed

Action                           Description
continue                         Evaluates the next rate-limit command
drop                             Drops the packet
set-prec-continue (precedence)   Sets the IP precedence of the packet and evaluates the next rate-limit command
set-prec-transmit (precedence)   Sets the IP precedence of the packet and transmits the packet
transmit                         Transmits the packet normally

CAR using multiple rate-limit commands and access lists:

interface Hssi0/0
 rate-limit output access-group 101 4000000 32000 36000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0
 rate-limit output access-group 102 6000000 32000 36000 conform-action set-prec-transmit 4 exceed-action drop
 rate-limit output 12000000 16000 24000 conform-action set-prec-transmit 0 exceed-action drop
 ip address 10.71.10.90 255.255.255.0
!
access-list 101 permit tcp any any eq www
access-list 102 permit tcp any any eq ftp

In this example access lists are used to closely define interesting traffic. The last rate-limit command has no access list and thus acts as the 'default' CAR class. The default rate-limit command evaluates all traffic that does not match any other rate-limit command.

In this example web traffic is allotted 4 Mb/s. Traffic within this limit gets its IP precedence set to 5; all other web traffic has its IP precedence set to 0 (lowest priority) and is transmitted. FTP traffic is allotted 6 Mb/s. Traffic within this rate gets its IP precedence set to 4 and is transmitted. All FTP traffic beyond 6 Mb/s is dropped. All remaining traffic is allowed 12 Mb/s. Traffic within this rate is transmitted, though with an IP precedence of 0. All other traffic is dropped.

To configure marking, create a class of traffic using some criteria. Once the class has been defined, create a policy that will set one or more of:

IP Precedence
IP DSCP Value
QoS Group Value
CoS Value
ATM CLP Value

class-map telnet-class
 match access-group name telnet
class-map web-class
 match access-group name www
!
policy-map salem
 class telnet-class
  set ip precedence 5
 class web-class
  set ip dscp ef
!

interface serial 1/1
 service-policy input salem
!
ip access-list extended telnet
 permit tcp any any eq telnet
ip access-list extended www
 permit tcp any any eq www

Instead of the set ip precedence 5 command, you can also use:

set ip dscp 56
set qos-group 50
set cos 7
set atm-clp

Congestion Management

Congestion management techniques allow you to control congestion by determining the order in which packets are sent out an interface depending on priorities assigned to each packet. Congestion management includes:

Creating queues
Using classification to assign packets to queues
Scheduling packets in a queue for transmission


Identify and Differentiate Between IOS Queuing Techniques

IOS provides four broad types of queuing (congestion management) techniques:

First In, First Out (FIFO)
Weighted Fair Queuing (WFQ)
Custom Queuing (CQ)
Priority Queuing (PQ)

First In, First Out is by far the simplest queuing mechanism. Packets are sent out an interface in the order they are received. No preferential treatment is administered and no bandwidth is reserved. Although FIFO sounds almost overly simplistic, on higher speed interfaces (above T1/E1) it is the default. This is mostly because it is extremely easy for the router to process packets and the average wait or delay on these interfaces is usually very low.

Weighted Fair Queuing actually includes four different variations:

Flow-Based Weighted Fair Queuing (WFQ)
Class-Based Weighted Fair Queuing (CBWFQ)
Distributed Weighted Fair Queuing (DWFQ)
Distributed Class-Based Weighted Fair Queuing (DCBWFQ)

The first two types are designed for standard IOS routers. The distributed types are simply the same implementations as the non-distributed types, but designed for the Route Switch Processor (7000 series) or the Versatile Interface Processor (VIP) (7500 series). In this guide, unless otherwise noted, WFQ will denote both WFQ and DWFQ. Similarly, CBWFQ will be used to mean both CBWFQ and DCBWFQ.

With WFQ the router identifies and sorts traffic into flows, or conversations. Each flow is assigned a weight, which effectively acts as that flow's priority. The weight can be set by:

IP Precedence of the packet
RSVP
Traffic rate of the flow (lower traffic rates get higher weight)
Frame Relay BECN, FECN and DE

The router cycles through all flows, servicing them in proportion to their weight. The router automatically sorts traffic based on many attributes, such as source and destination network or MAC address, protocol, source and destination port and socket numbers of the session, Frame Relay data-link connection identifier (DLCI) value, and ToS value. IP Precedence is part of the ToS value, and WFQ will adhere to this setting. It will give higher weights to flows with higher IP Precedence values. The router identifies traffic as either low-volume flows (usually interactive traffic) or high-volume flows (usually file transfers or database operations). WFQ gives preferential treatment to low-volume traffic since this is usually the traffic users are waiting for when using their applications. WFQ will adapt to changing network conditions, since it is constantly evaluating and sorting flows.

CBWFQ is similar to WFQ, but instead of the router automatically identifying and sorting flows, the administrator can manually configure classes of traffic based on protocols, access control lists, and input interfaces. CBWFQ allows the administrator to customize each class, such as defining the guaranteed bandwidth or maximum packet limit. Once a queue has reached its maximum packet limit, any additional packets for that queue will be dropped. The user can decide whether the default, tail drop, will be used (packets at the end, or tail, of the queue that won't fit get dropped). The alternative is to use Weighted Random Early Detection (WRED) to drop excess packets.

Custom Queuing provides 16 queues for traffic (if you are using anywhere near 16 queues your configuration is more complicated than it needs to be!). Each queue is serviced in a round-robin fashion, with the router moving from one queue to the next. The administrator specifies how many bytes are sent from each queue before the router should move on to the next queue. If any queue is empty, the router immediately moves on to the next queue. The router maintains one queue for system traffic (keepalives, etc.) that is emptied before any other queue. Custom Queuing and Priority Queuing are statically configured; they do not adapt to changing conditions the way WFQ and CBWFQ do.

With CQ you do not specify a percentage of bandwidth, you specify a byte count for each queue. However this can indirectly become a percentage of bandwidth. For example, suppose you define three queues. If you specify byte counts of 2000 bytes, 1000 bytes and 1000 bytes, the queue with 2000 bytes will get 50% of the bandwidth (2000 out of every 2000+1000+1000=4000 bytes) and the other queues will get 25% each.

Setting the byte count too high can result in delays for the other queues. For example, setting a byte count to 7500 bytes could result in five 1500-byte packets being sent. As we discuss in the section The Need for Link Efficiency Tools on page 220, this could require 94 ms per packet (on a 128 Kb/s link). That queue would then send 5 packets at 94 ms each, taking almost half a second of time just to service that queue!
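A minimal sketch of the 50/25/25 split described above (the list number, protocols and interface are placeholders only) might look like this:

queue-list 1 protocol ip 1 tcp 80
queue-list 1 protocol ip 2 tcp 21
queue-list 1 default 3
queue-list 1 queue 1 byte-count 2000
queue-list 1 queue 2 byte-count 1000
queue-list 1 queue 3 byte-count 1000
!
interface serial 0/0
 custom-queue-list 1

With these byte counts, queue 1 drains roughly twice as many bytes per cycle as queues 2 and 3, matching the 50/25/25 ratio.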



Priority Queuing uses just four queues: high, medium, normal and low. This is the priority order of the four queues. The highest queue with traffic to send is always serviced first. Thus if the high priority queue has enough traffic to fill a link, the other three queues will never send a packet! The administrator uses filters to place packets into one of the four queues. Packets that do not match any list are placed in the normal queue. Packets can be classified by the following criteria:

Protocol or subprotocol type
Incoming interface
Packet size
Fragments
Access list

Packets are filtered and sorted by the router's processor. This causes a slight delay in the handling of each packet. On low speed interfaces this small delay is usually acceptable (especially compared to the benefit PQ provides). On higher speed interfaces this delay may be unacceptable. Each queue does have a limit on the number of packets that can be in the queue. This is especially helpful for the lower queues, as packets may build up there, waiting for transmission. In the case of long delays in the lower queues, the application is probably resending the data anyway, so the router is better off dropping the packets. Keepalives are always placed in the high priority queue. Other important system traffic (OSPF hellos, CDP, etc.) needs to be manually configured. Custom Queuing and Priority Queuing are statically configured; they do not adapt to changing conditions the way WFQ and CBWFQ do.

Apply Each Queuing Technique to the Appropriate Application

WFQ, the default on serial interfaces at speeds of E1 and below, does a good job of fairly allocating bandwidth to all protocols and applications. Just as appealing, WFQ requires little or no configuration. The router automatically sorts traffic into conversations, or flows, and assigns bandwidth to each flow. A low-traffic interactive program will receive the bandwidth it needs, even with bandwidth-hungry file transfers occurring.

CBWFQ should be used when the general characteristics of WFQ are desired (every flow gets some traffic, etc.) but the administrator wants more granularity over the sorting of traffic and bandwidth allocation. With CBWFQ the administrator manually defines how traffic is sorted into classes. This is a fairly easy way to ensure certain applications get preferred treatment, while all applications still receive some bandwidth.

PQ is the enforcer of queuing, assigning bandwidth to the queues you designate, even if that means some traffic never gets sent. PQ is perfect for a remote office

where there is a critical business application competing with non-business or unimportant applications for bandwidth. In this case PQ would ensure that the business application always gets priority and all other (unimportant) traffic could use what was left.

CQ is somewhere in between WFQ and PQ. CQ allocates some bandwidth to all queues (like WFQ). However CQ allows you to specify what traffic is placed in what queue, and how each queue should be treated (like PQ). CQ is appropriate if you have somewhat unusual traffic patterns (like perhaps several critical applications that need varying amounts of bandwidth). You might put one or two applications into a queue and give it the byte count that it needs. You might put another application into its own queue, giving it the byte count that it needs. You might lump all other traffic into a queue that receives a small byte count (but can use all the bandwidth if your critical applications are not using it). In this case WFQ may not service each application's needs, yet PQ is too rigid at the expense of some types of traffic.

FIFO queuing is strictly first come, first served, offering no prioritization at all. The only time FIFO would be a good choice would be if a link had little or no congestion. In this case FIFO is the fastest queuing method, so delay would be minimized.

IP RTP Priority and Low Latency Queuing (LLQ) Differences

RTP uses a well-defined range of UDP port numbers and is used by real-time traffic, such as voice. IP RTP Priority sorts this traffic into its own queue and provides strict priority queuing, giving this traffic the best possible treatment. IP RTP Priority does not require that you know the voice port of a call. Instead it allows all RTP ports (UDP ports 16384 to 32767) to be prioritized. The lower the speed of a link, the more useful the strict queuing of IP RTP Priority is. IP RTP Priority can be used with WFQ or CBWFQ. In either case IP RTP Priority is given strict priority service and the remaining bandwidth is divided fairly among the other queues. Often IP RTP Priority is used in conjunction with Link Fragmentation and Interleaving (LFI). This breaks large packets up into fragments so they do not cause large delays on slow links. Time-sensitive traffic such as voice can be transmitted in between the two (or more) fragments of a large packet.

The ip rtp priority command reserves a configurable amount of bandwidth for voice traffic, but it performs no call admission control. Thus while it will provide a set bandwidth strictly for voice traffic, if there are more calls than can be supported by that bandwidth (and if other bandwidth is being used), call quality will be poor. Use call admission control to prevent this. IP RTP Priority can be applied on most serial interfaces. To enable it on a Frame Relay PVC, use the frame-relay ip rtp priority command.
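As a minimal sketch (the interface, map-class name and bandwidth figure are illustrative only), IP RTP Priority could be enabled on a serial interface and, for Frame Relay, inside a map class:

interface serial 0/1
 ip rtp priority 16384 16383 128
!
map-class frame-relay voice-pvc
 frame-relay ip rtp priority 16384 16383 128

The starting port plus the range covers UDP ports 16384 through 32767, and the final value is the bandwidth in Kb/s reserved for the strict priority queue.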



Low Latency Queuing (LLQ) provides CBWFQ with a strict priority queuing feature (much like regular PQ). Without LLQ, CBWFQ provides WFQ based on the defined classes. The weight assigned to each class depends on the bandwidth the administrator has assigned to that class. All queues receive fair treatment, but there is no strict priority queuing. Without LLQ voice traffic can be subject to delay and variation in delay ("jitter").

A key difference between LLQ and IP RTP Priority is that IP RTP Priority only allows a range of UDP ports to be prioritized. LLQ is much more flexible since it allows the definition of prioritized traffic by access lists, protocols, and input interfaces. Another difference is that LLQ supports ATM VCs, whereas IP RTP Priority does not. Similar to IP RTP Priority, LLQ uses a bandwidth value that both reserves that bandwidth for voice and restricts voice to only that bandwidth in the event of congestion. LLQ includes the Layer 2 header (PPP, HDLC, etc.) in its bandwidth calculations whereas IP RTP Priority does not include any Layer 2 headers in its bandwidth calculations. LLQ is Cisco's preferred method for prioritizing voice traffic.

Configure WFQ, CBWFQ, and LLQ

To configure WFQ on an interface, use the fair-queue interface command. Note that for T1/E1 speeds and below this is the default queuing mechanism. You can optionally set the congestion threshold, after which packets are dropped from the queue.

Configuring CBWFQ requires first defining classes, then creating policies, then attaching policies to interfaces.

First define traffic classes to specify the classification policy (class maps):

class-map Sarasota-traffic
 match access-group 101
   or
 match input-interface serial 1
   or
 match protocol ip

Next associate policies, that is, class characteristics, with each traffic class (policy maps):

policy-map sarasota
 class Sarasota-traffic
  bandwidth 384        (in kb/s, or you can specify a percentage of bandwidth)
  queue-limit 20
  random-detect        (if using WRED rather than the default, tail drop)

Last, attach policies to interfaces (service policies):

interface serial 4/1

 service-policy output sarasota

To configure LLQ on an interface, create the class that defines the traffic (such as RTP packets with a UDP range of 16384-32768), create a policy map, then apply the priority command to the class that requires the LLQ (the voice class). Finally apply the policy-map to the interface:

class-map voice-enabled
 match access-group 150
!
policy-map phoenix
 class voice-enabled
  priority 128
!
interface serial 1
 service-policy output phoenix
!
access-list 150 permit udp any range 16384 32768 any
access-list 150 permit udp any any range 16384 32768

To configure IP RTP Priority on an interface, use the interface command:

interface serial 0/1
 ip rtp priority starting-port-number port-number-range bandwidth

The combination of the starting port number and the range (how many ports) completely defines the starting and ending ports that will have IP RTP Priority applied to them.

To configure custom queuing, assign a custom-queue-list to an interface, then define the characteristics of that custom queue:

interface serial 1/1
 custom-queue-list 1
!
queue-list 1 protocol ip 1 tcp 23        (TCP port 23 to queue 1)
queue-list 1 protocol ip 2 tcp 80        (TCP port 80 to queue 2)
queue-list 1 protocol ip 3 list 100      (ACL 100 to queue 3)
queue-list 1 queue 1 limit 20            (max 20 packets in queue 1)
queue-list 1 queue 2 byte-count 1000     (byte count 1000 in queue 2)

The byte count determines how many bytes are serviced from that queue before the next queue is serviced. The default byte count is 1500.

To configure Priority Queuing, assign packets to a priority list, then assign the priority group to an interface:

priority-list 2 protocol ip high list 5
   or

priority-list 2 interface ethernet 1/0 low
   or
priority-list 2 protocol ip medium udp 161


interface serial 1/0
 priority-group 2

access-list 5 permit 192.168.1.0 0.0.0.255

Congestion Avoidance

Congestion management provides solutions for dealing with congestion that exists in a network. Congestion avoidance provides solutions for attempting to prevent congestion from ever occurring.

Explain How TCP Responds to Congestion

TCP, which responds appropriately, even robustly, to dropped traffic by slowing down its transmission, effectively allows the traffic-drop behavior of Random Early Detection (RED) to work as a congestion-avoidance signaling mechanism. When a TCP sender determines (from the sequence numbers acknowledged by the receiver) that a data segment has been lost (such as dropped by a router), it resends that data and cuts its transmission rate in half. During the course of a TCP transmission the sender will continue to adapt its transmission rate, increasing it or decreasing it depending on the results from the network (dropped packets or no dropped packets, etc.).

Explain Tail Drop and Global Synchronization

Once a network becomes congested, a switch or router will start queuing traffic for that network. If congestion continues, the buffer that queues the traffic may well begin to fill up. Tail drop is the concept that if the buffers fill up, the device will drop packets that will not fit in the buffer, i.e. drop them from the rear or tail of the queue. This is an extremely easy behavior for a device to implement, though it may not always be the most efficient.

Global synchronization occurs as waves of congestion peak at once, then are followed by dramatic reductions in congestion during which the transmission link is not fully utilized. Global synchronization of TCP hosts, for example, can occur because packets are dropped all at once, forcing all hosts to slow down (and thus underutilizing the link). Global synchronization manifests when many TCP hosts reduce their transmission rates simultaneously in response to packet drops, then increase their transmission rates once again when the congestion is reduced, again causing congestion.

Random Early Detection (WRED, for example) helps prevent global synchronization. WRED starts to drop packets as the buffer starts experiencing congestion. Thus WRED helps avoid a situation where large numbers of packets are dropped over a very short time (such as with tail drop). Instead it starts dropping packets at an earlier stage, slowing down some flows earlier than others.

Identify and Differentiate Between: RED, WRED, FRED

Random Early Detection (RED) attempts to reduce router queue sizes by indicating to end hosts when they should slow their packet transmissions. TCP inherently adapts to network conditions: if the network is performing very well it will increase its traffic flow. Conversely, if retransmits are needed TCP will slow down its traffic flow. By randomly dropping packets before a router is overloaded, RED forces the TCP sessions running on end hosts to adapt and slow their traffic flows. This, in turn, helps prevent congestion at the router. If there is no congestion at a router, RED does not drop any packets and the TCP sessions on the hosts will increase their traffic flows.

WRED has most of the same characteristics as RED, but also takes into account the IP Precedence bits of a packet. This is the weighted portion: packets with a higher precedence value receive a higher weight and are less likely to be dropped. This has a doubly good effect: it drops packets that have a lower precedence (importance) and it tends to slow down those same (less important) sessions. WRED also tends to drop packets from sources that generate a lot of traffic. Traffic from light users is less likely to be dropped, providing a degree of fairness. Because WRED understands IP Precedence, it also understands DiffServ. WRED also gives preferred treatment to packets with a higher differentiated services code point (DSCP).

FRED is flow-based WRED, employing WRED but also being aware of traffic flows. FRED uses the classification and state information of each flow to determine whether it is hogging buffer resources. Flows that take more than their share of resources are severely penalized with drops. To determine which flows are taking too many resources, FRED determines how many flows are active on an interface. It then divides the interface's buffer resources by that number of flows to get an average number of buffer resources per flow. Flows using much more than that number are impacted the most. FRED allows for some traffic burstiness, but not continued over-utilization of buffers.

Configure IOS Congestion Avoidance Features

Cisco does not use plain RED. Cisco's implementation is either WRED (weighted random early detection) or FRED (flow-based random early detection). To enable WRED on an interface, configure the interface:

interface serial 0
 random-detect
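WRED's drop behavior can also be tuned per IP Precedence value. The following is a minimal sketch (the interface and threshold values are illustrative, not recommendations) that makes precedence 5 traffic much less likely to be dropped than precedence 0 traffic:

interface serial 0
 random-detect
 random-detect precedence 0 20 40 10
 random-detect precedence 5 35 40 10

The three values after the precedence are the minimum threshold, maximum threshold, and mark probability denominator; a higher minimum threshold means drops begin later for that precedence.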

To configure FRED on an interface, configure the interface:

interface serial 1
 random-detect
 random-detect flow
 random-detect flow count 16
 random-detect flow average-depth-factor 8


The random-detect flow count command sets the maximum flow count for FRED. The random-detect flow average-depth-factor command determines the number of packets from each flow that can accumulate in the interface's buffers.

Link Efficiency Tools

The Need for Link Efficiency Tools

According to Cisco's hierarchical network model, there are three network layers: the Core layer, the Distribution layer, and the Access layer. The core or backbone of the network should not be involved in processor-intensive tasks. Tasks such as packet classification and access control are limited to the access layer and in some cases to the distribution layer. It is the edge routers that classify the QoS traffic, as well as control the QoS admissions into the network.

Link efficiency tools fall into two categories:

1. Actually reducing the bandwidth needed on a particular link (using techniques like compression).
2. Changing the way packets are sent on links without changing the actual bandwidth used (such as fragmentation and interleaving).

Both techniques try to achieve the same goal: making slower WAN links more suitable for voice traffic. "Slower" is a relative term depending on the environment. If you are trying to support four simultaneous calls, a 128 Kb/s link is slow. If you are trying to support 50 simultaneous calls, a T1 is a slow link. Both of these environments can benefit equally from compression, since that will reduce the need for bandwidth across the link.

Fragmentation and interleaving become more important as the link speed decreases. The primary focus of these technologies is to prevent voice packets from getting "stuck" behind large (such as 1500-byte) packets that may consume large amounts of time on a slow link. For example, to calculate how long a 1500-byte packet requires to be transmitted on a 128 Kb/s link:

1500 bytes x 8 bits/byte x 1 sec / 128,000 bits = 94 milliseconds

Thus it requires 94 ms just for the transmitting interface to actually send the electrical signal. This doesn't even include the actual propagation delay (such as if you are transmitting from Memphis to Denver). Since "best practices" suggest you should not have more than 150 ms of total end-to-end delay for voice, you had better hope you don't have any more slow links! Yet to see that fragmentation and interleaving are not as important on a T1, even if a voice packet does get "stuck" behind a 1500-byte packet:

1500 bytes x 8 bits/byte x 1 sec / 1,544,000 bits = 8 milliseconds

In this case it probably doesn't make sense to deploy fragmentation since in the worst case scenario you'll only incur 7 or 8 ms of delay waiting for a 1500-byte packet to be sent across a T1 link. Now you might say, "Yes, but what if the voice packet on the T1 gets stuck behind several 1500-byte packets?" That is not the job of fragmentation; that is the job of queuing, to ensure that the voice packets get moved to the front of the line.

Available LFI Techniques Including MLP Interleaving and FR Fragmentation Using FRF.11 Annex C or FRF.12

There are three primary methods of fragmentation and interleaving employed by Cisco:

Link Fragmentation and Interleaving (LFI) for Multilink PPP (MLP)
Link Fragmentation and Interleaving for Frame Relay and ATM VCs
Frame Relay Fragmentation

Each form of fragmentation and interleaving operates in about the same way: it takes large packets, breaks them up into smaller fragments and transports each fragment across the link. At the opposite end of the link the fragments are recombined into the original, large packet and transported normally. For example, LFI can fragment a 1500-byte packet into five 300-byte packets. Each 300-byte packet requires only 20% of the transmission time that the 1500-byte packet would require. Voice packets can be sent over the link in between the various 300-byte fragments.

LFI for Multilink PPP involves splitting large packets into multiple smaller packets to be sequenced and sent over two or more links. Each smaller packet gets its own MLP header. LFI for MLP is common with multiple dialer links (such as ISDN, etc.).

LFI for Frame Relay and ATM VCs segments large packets into smaller fragments before sending the packets over an ATM or Frame Relay circuit. As with LFI for MLP, this allows smaller, time-sensitive voice packets to be sent with small delays. LFI for Frame Relay and ATM works in conjunction with Frame Relay Traffic Shaping and LLQ.



Frame Relay fragmentation can be implemented with either end-to-end FRF.12 fragmentation or with FRF.11 Annex C fragmentation. FRF.12 is recommended for VoIP packets, whereas FRF.11 Annex C is used when fragmentation is used with VoFR implementations (FRF.11 defines VoFR).

Real Time Protocol Header Compression (cRTP)

RTP is composed of a data portion and a header portion. The data portion is concerned with real-time properties of applications, such as timing reconstruction, loss detection, and content identification. The header portion of RTP is usually 12 bytes. Those 12 bytes, combined with the 20 bytes of IP header and the 8 bytes of UDP header, make for 40 total bytes of headers, without a single byte of actual data! The voice payloads of RTP packets vary between 20 bytes and 160 bytes, according to the codec used. It is very inefficient to have 40 bytes of header and only 20 bytes of real data!

cRTP compresses the IP, UDP and RTP headers from 40 bytes to between 2 and 5 bytes. This is a significant savings when the payload is small (20-40 bytes). cRTP is less useful as the payload gets larger. G.711 codecs use 160-byte voice payloads. Here cRTP is not as useful, since reducing each packet by about 35-38 bytes on a 200-byte packet is a fairly small gain.

cRTP can achieve such large compression because many of the fields in the headers do not change (IP addresses, UDP ports, etc.). Most of the fields that are not constant change by the exact same amount from packet to packet, so this is predictable. The only time this high level of compression cannot be achieved is when several fields change by a different amount in every packet, which is fairly unusual.

cRTP is enabled on a link-by-link basis (not on an end-to-end basis). It can be enabled on any serial interface running Frame Relay, HDLC or PPP. It can also be used on ISDN interfaces. cRTP should not be used on interfaces above T1/E1 (T3, etc.); the gain in bandwidth is usually not worth the slight delay caused by the compression and decompression.

Configure and Monitor Various LFI Methods and cRTP

To configure LFI for MLP, use the ppp multilink interleave command to enable fragmentation and interleaving. The ppp multilink fragment-delay 30 command forces a maximum delay of 30 ms; thus no fragment will be transmitted that would cause a delay of greater than 30 ms.

interface virtual-template 1
 ip unnumbered loopback 0
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment-delay 30
!

multilink virtual-template 1


To configure LFI for MLP on ISDN interfaces, use the following configuration:

interface BRI0/0
 no ip address
 encapsulation ppp
 dialer rotary-group 0
!
interface BRI0/1
 no ip address
 encapsulation ppp
 dialer rotary-group 0
!
interface Dialer0
 description Dialer group for the BRIs
 ip address 192.168.1.1 255.255.255.0
 encapsulation ppp
 dialer map ip 192.168.1.2 name frome 12123331212
 dialer-group 1
 ppp authentication chap
 ! Enables Multilink PPP interleaving on the dialer interface
 ! and reserves a special queue for RTP traffic:
 ppp multilink
 ppp multilink interleave
 ip rtp reserve 16384 16384 200
 ! Keeps fragments of large packets small enough to ensure a delay
 ! of 25 ms or less:
 ppp multilink fragment-delay 25
!

dialer-list 1 protocol ip permit

To configure LFI for Frame Relay VCs, use the following configuration. The ppp multilink fragment-delay command specifies a maximum delay of 20 ms. The ppp multilink interleave command allows other (voice) packets to be transmitted between fragments:

interface Serial 3/0
 no ip address
 encapsulation frame-relay
 frame-relay traffic-shaping
!
interface Serial 3/0.1 point-to-point
 frame-relay interface-dlci 83 ppp Virtual-Template1
  class frame-pvc
!

interface Virtual-Template1
 bandwidth 100
 ip address 10.10.10.51 255.255.255.252
 ppp authentication chap

 ppp chap hostname rtr3
 ppp multilink
 ppp multilink fragment-delay 20
 ppp multilink interleave
!

map-class frame-relay frame-pvc
 frame-relay cir 96000
 frame-relay bc 1000
 frame-relay be 0
 no frame-relay adaptive-shaping

To configure LFI using FRF.12:

interface serial 1/1
 frame-relay traffic-shaping
 frame-relay interface-dlci 105
  class FR-fragment
!
map-class frame-relay FR-fragment
 frame-relay cir 64000
 frame-relay fragment 25
 frame-relay fair-queue

To configure cRTP on a serial interface:

interface serial 0/0
 ip rtp header-compression

To configure cRTP on a Frame Relay PVC:

interface serial 0/0
 encapsulation frame-relay
 frame-relay map ip 10.10.0.1 17 broadcast rtp header-compression

Policing and Shaping

The Difference Between Policing and Shaping and How Each Relates to QoS

Policing and shaping are both traffic regulation techniques that typically evaluate traffic in the same way. They treat violations of traffic policy differently: policing tends to drop the packets, whereas shaping tends to delay the packets by holding them in a buffer before queuing them. This slows down the flow of traffic.

Policing benefits QoS by preventing certain sources or applications from taking too much bandwidth (or more bandwidth than was agreed upon). Shaping helps QoS by slowing traffic when an interface gets congested. Slowing traffic is often more efficient than dropping it. Dropped packets usually get retransmitted (adding to the interface congestion, though possibly at a slower rate). Delaying

packets prevents the retransmission problem but also tends to slow down traffic, easing congestion.

When to Apply and How to Configure Policing Mechanisms

Policing is applied when a strict limit needs to be set on traffic (or on certain types of traffic) on a particular interface. Policing is configured by:

Creating a traffic class to define the type of traffic to be policed (class-map)
Creating a traffic policy to define the actions to be taken on the traffic (policy-map)
Applying the policy to an interface (service-policy)

class-map pop-email
 match access-group 100
!

policy-map limit-email
 class pop-email
  police 128000 16000 32000 conform-action transmit exceed-action set-qos-transmit 3 violate-action drop
!

interface serial 0/0
 service-policy input limit-email


!

access-list 100 permit tcp any any eq pop3

The class-map uses an access list to define the type of traffic to be policed (use match any in the class-map or a permit ip any any in the access list to police all traffic). The policy-map takes the traffic defined by the class-map and defines how that traffic will be treated (transmitted, dropped, or having its QoS group, precedence or DSCP value changed). The service-policy applies the policing policy to a given interface.

Different Types of Traffic Shaping and How to Apply Them

Traffic shaping allows you to control traffic going out an interface. This can be done to match the speed of a remote connection or remote portion of the network, to adhere to a policy, or to restrict certain types of traffic. Traffic shaping can be more useful than policing, since it shapes traffic by delaying it, whereas policing drops excess traffic. Dropped traffic often is simply retransmitted, creating inefficiency. There are four types of traffic shaping:

1. Generic Traffic Shaping (GTS)
2. Class-based Traffic Shaping
3. Distributed Traffic Shaping (DTS)
4. Frame Relay Traffic Shaping (FRTS)



DTS is similar to GTS but is primarily targeted at distributed architectures, such as the VIP processors used on 7500 routers. All four methods use similar logic to determine whether a packet can be forwarded or whether it must be delayed. If a packet must be delayed, GTS and Class-based Shaping use a weighted fair queue to delay the traffic. DTS and FRTS use either a weighted fair queue, a custom queue or a priority queue to hold delayed traffic, depending on how they are configured.

GTS applies traffic shaping to an entire interface or to traffic matching access control lists (ACLs). Class-based shaping applies shaping to classes. Classes can be defined by ACL, input interface, protocol, etc. Shaping can be defined uniquely for each class. FRTS can apply shaping to individual VCs (PVCs or SVCs) that are assigned to a subinterface. In this case if a subinterface does not have any shaping configured, it will inherit the shaping on the main interface (if any is configured there). Any shaping configuration on the subinterface will override the shaping configured on the main interface.

GTS and DTS are applied to interfaces (or subinterfaces). FRTS can be applied on a per-DLCI basis. Class-based shaping is applied to a class (or, occasionally, to an interface).

A variable that you should be familiar with for traffic shaping is Bc. This is known as the "committed burst" (thus the Bc) of traffic a router can send. That is, this is the burst of traffic that a router transmits that the network (such as a Frame Relay network) is committed to accept and deliver. This is directly related to the Committed Information Rate (CIR); the CIR is simply Bc divided by time. For example, if the CIR is 128 Kb/s and the router's sampling period is 1 second, then Bc = 128,000 bits.

Another variable used in traffic shaping is Be. This is the excess burst (thus the "Be") that the router can send that the network will accept but is not committed to deliver. The network will mark this traffic discard eligible (set the DE bit) and will give a best effort to deliver it, but may drop this traffic upon congestion. The total amount of traffic the router can transmit in any given sampling period is the committed burst plus the excess burst (Bc plus Be).

Configure the Different Types of Traffic Shaping

To configure Generic Traffic Shaping (GTS) on all traffic on an interface for 128 Kb/s (with a burst size of 32,000 bits):

interface serial 0/0
 traffic-shape rate 128000 32000

To configure GTS on an interface to limit the traffic caused by POP3 email to 500 Kb/s (all other, non-POP3, traffic will not be restricted at all):

interface serial 1/0
 traffic-shape group 105 500000
!

access-list 105 permit tcp any any eq pop3
access-list 105 permit tcp any eq pop3 any

To configure Class-based Traffic Shaping:

class-map 256k
 match any
!
policy-map houston
 class 256k
  shape average 256000
!

interface serial 1
 service-policy output houston

To configure Frame Relay Traffic Shaping, create a class with the map-class command. Apply the map-class to an interface, subinterface or DLCI using the class command (or the frame-relay class command, depending on the configuration mode you are in). For example, subinterfaces S1.2 and S1.3 below do not have any shaping configured and inherit the main interface shaping (configured for a 384K PVC). S1.1 has shaping configured on the subinterface for a 512K PVC. S1.4 has individual shaping configured on the DLCI, for a 256K PVC:

interface Serial1
 encapsulation frame-relay
 frame-relay class 384K_VCs
 frame-relay traffic-shaping
!

interface Serial1.1 point-to-point
 frame-relay class 512K_VCs
 frame-relay interface-dlci 101


!

interface Serial1.2 point-to-point
 frame-relay interface-dlci 102
!
interface Serial1.3 point-to-point
 frame-relay interface-dlci 103
!

interface Serial1.4 point-to-point
 frame-relay interface-dlci 104
  class 256K_VCs
!
map-class frame-relay 384K_VCs
 frame-relay traffic-rate 384000 384000
 frame-relay adaptive-shaping becn

map-class frame-relay 512K_VCs
 frame-relay traffic-rate 512000 512000
 frame-relay adaptive-shaping becn
!
map-class frame-relay 256K_VCs
 frame-relay traffic-rate 256000 256000
 frame-relay adaptive-shaping becn
!
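Instead of frame-relay traffic-rate, a map class can also set the CIR, committed burst and excess burst explicitly. The following is a minimal sketch (the class name and values are illustrative only):

map-class frame-relay 384K_explicit
 frame-relay cir 384000
 frame-relay bc 38400
 frame-relay be 0
 frame-relay adaptive-shaping becn

With a Bc of 38,400 bits the shaping interval works out to Bc/CIR = 0.1 seconds, which is a common way to reason about the Bc and Be values discussed earlier.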


The map classes also let you configure many other characteristics, such as custom queuing, priority queuing, weighted fair queuing, committed and excess burst sizes, etc.

Fancy Queuing

Fancy queuing is Cisco's collective term for custom, priority, or weighted fair queuing. Often if you call the TAC for help on a problem, they will ask you to remove all the fancy queuing as a way to make sure nothing critical is being blocked.

First-In, First-Out (FIFO)

FIFO does not use priority or classes of traffic and does not make decisions about packet priority. There is only one queue and all packets are treated equally. Packets are sent out an interface in the order in which they arrive. If no other queuing methods are configured, all interfaces except serial interfaces at E1 (2.048 Mbps) and below use FIFO by default. (Serial interfaces at E1 and below use WFQ by default.) FIFO, which is the fastest method of queuing, is effective for large links that have little delay and minimal congestion. FIFO may be sufficient for links that do not experience congestion and do not need advanced queuing such as Custom or Priority.

Weighted Fair Queuing (WFQ)

WFQ gives high-volume traffic a lower priority than lower-volume traffic. For example, a time-sensitive SNA conversation would have a higher priority than a file transfer, where latencies are less of an issue. WFQ is enabled by default on all Cisco router links with speeds of E1 and below. Since WFQ is a default method, it normally does not require any configuration. To adjust the way that WFQ operates, you can use the fair-queue <congestion threshold> command. This command only allows you to change the number of messages that can be held in a congested conversation queue. The default is 64 messages and it can be configured from 1 to 512.

Router1(config-if)# fair-queue 128

Priority Queuing

Priority queuing uses four levels of queues, defined as high, medium, normal, and low. The administrator defines which traffic types belong in which queue, usually based on the protocol type or the interface the packets came in on. Any protocols supported by Cisco are allowed, and the command line arguments include TCP and UDP port designations.

The major thing to remember with priority queuing is that the "high" queue is serviced first; the "medium" queue will be ignored until its superior is finished. The same goes for the "normal" queue; it won't see any bandwidth until both the "high" and "medium" queues are empty, and so on. Think of it as the kitchen table in a house full of siblings; the big ones eat first. This can result in a situation where higher level queues monopolize the link, lower level queue packets are dropped, and the small ones go to bed hungry.

Just like access lists, the router reads the priority-list commands in order of appearance. When trying to classify a packet, the system searches the rule list for a matching criterion. When a match is made the packet is assigned to the appropriate queue, and the search ends. Packets that do not match any of the rules are assigned to the default queue. The default queue is "normal" by default, but it can be changed.

The priority-list global command is used to create the priority list, which defines the criteria, selects the queues packets belong to, and, optionally, sets the maximum length of the different queues. The default maximum lengths (in packets) are:

high     20
medium   40
normal   60
low      80

Once the list is defined, it must be applied to the interface using the priority-group command.

Example 5-1 Priority Queuing Example #1

1. Router(config)# priority-list 1 protocol ip high
2. Router(config)# priority-list 1 protocol rsrb medium
3. Router(config)# priority-list 1 protocol dlsw low
4. Router(config)# priority-list 1 default normal
5. Router(config)# interface serial 0
6. Router(config-if)# priority-group 1

Line #1

All IP packets are assigned to a high priority queue level.

Line #2
All RSRB packets are assigned to a medium priority queue level.

Line #3
All DLSW packets are assigned to a low priority queue level.

Line #4
All remaining traffic is assigned to a normal priority queue level.

Line #6
Assigns priority queue list number 1 to serial interface 0.

Example 5-2 Priority Queuing Example #2

1. Router(config)# priority-list 4 protocol dlsw medium lt 200
2. Router(config)# priority-list 4 protocol ip medium tcp 23
3. Router(config)# priority-list 4 protocol ip medium udp 53
4. Router(config)# priority-list 4 protocol ip high
5. Router(config)# interface serial 0
6. Router(config-if)# priority-group 4

Line #1
DLSW packets with a byte count less than 200 are assigned a medium priority queue level.

Line #2


IP packets originating from or destined to TCP port 23 (telnet) are assigned a medium priority queue level.

Line #3
IP packets originating from or destined to UDP port 53 (DNS) are assigned a medium priority queue level.

Line #4
All remaining IP packets are assigned a high priority queue level.

Line #6
Assigns priority queue list number 4 to serial interface 0.

Custom Queuing

The primary advantage of custom queuing is that it will never completely ignore any one queue. You can define up to 16 queues, and while some pass more data than others, because they are addressed in a round-robin fashion, none is ever completely ignored.

Associated with each output queue is a configurable byte count, which specifies how much data should be delivered from one queue before the system moves on to the next. When a queue is processed, packets are sent until the number of bytes sent exceeds the queue byte count for that queue or until the queue is empty. Once the appropriate number of bytes has been transmitted, the router moves on to the next queue. If the byte count has been reached and a packet has not been completely sent, it will continue to be sent; the packet will not be fragmented.

Just like access lists, the router reads the queue-list commands in order of appearance. When trying to classify a packet, the system searches the queue-list rules for a matching protocol or interface type. When a match is found, the packet is assigned to the appropriate queue. The list is searched in the order it is specified, and the first matching rule terminates the search. By default, each queue is allocated 1,500 bytes, although the queue size is configurable. In this way, it is possible to allocate a percentage of the bandwidth to a specific protocol.

The steps for configuring a custom queue are:

Define the custom queue list using the queue-list global command. The list number needs to be in the 1 to 16 range, and there is no default assigned.

Also using the queue-list global commands, specify the queue parameters. You can specify:
* The maximum number of packets allowed in each of the custom queues (the default is 20 entries).
* The approximate number of bytes to be forwarded from each queue during its turn in the cycle. The number will vary because the queue will finish the last packet, even if it runs over the limit. Packets will not be truncated.
* What kinds of packets are assigned to each custom queue, based on the protocol type or the interface where the packets come into the router. Any protocols supported by Cisco are allowed, and the options include TCP and UDP port numbers as well as access list definitions.
* The default queue for any leftover packets that do not match any of the other assignment rules.



Apply the custom list to an interface using the custom-queue-list interface command (required). Only one queue list can be assigned per interface.

* Remember that the queue length limit is the maximum number of packets that can be queued at any one time, while the byte count is the number of bytes the system will allow to be delivered from a given queue during a particular cycle.

Example 5-3 Custom Queuing Example #1

1. Router1(config)# queue-list 1 protocol ip 1
2. Router1(config)# queue-list 1 protocol ipx 2
3. Router1(config)# queue-list 1 default 3
4. Router1(config)# queue-list 1 queue 1 byte-count 3000
5. Router1(config)# queue-list 1 queue 2 byte-count 1500
6. Router1(config)# interface serial 0
7. Router1(config-if)# custom-queue-list 1

Line #1
Assigns IP traffic to queue number 1.

Line #2
Assigns Novell (IPX) traffic to queue number 2.

Line #3
All remaining traffic will be sent to queue number 3.

Line #4
Increases the byte count for queue number 1 from the default of 1500 to 3000 bytes.

Line #5
Sets the byte count for queue number 2 to 1500 (the default value).

Line #7
Assigns custom queue list number 1 to serial interface 0.

Unstated above is that queue number 3 is also set for 1500 bytes, the default value. In this example the total byte count is 6,000 bytes. IP is set for 50% of the traffic, 25% is allocated for IPX, and 25% for everything else.

Example 5-4 Custom Queuing Example #2

1. Router(config)# queue-list 1 protocol ip 1 list 10
2. Router(config)# queue-list 1 protocol ip 2 tcp 23
3. Router(config)# queue-list 1 protocol ip 3 udp 53
4. Router(config)# queue-list 1 interface serial 0 4
5. Router(config)# queue-list 1 default 5
6. Router(config)# queue-list 1 queue 5 limit 40
7. Router(config)# queue-list 1 queue 1 byte-count 7400
8. Router(config)# queue-list 1 queue 2 byte-count 1400
9. Router(config)# queue-list 1 queue 5 byte-count 3000
10. Router(config)# access-list 10 permit 239.1.1.0 0.0.0.255
11. Router(config)# interface serial 0
12. Router(config-if)# custom-queue-list 1

Line #1

Assigns traffic matching IP access list 10 to queue number 1.

Line #2
Assigns Telnet packets to queue number 2.

Line #3
Assigns UDP Domain Name Service (DNS) packets to queue number 3.

Line #4
Assigns packets entering on serial interface 0 to queue number 4.

Line #5
All remaining traffic will be sent to queue number 5.

Line #6
Increases the length of queue 5 from the default 20 packets to 40 packets.

Line #7
Increases the queue number 1 byte count from the default of 1500 to 7400 bytes, or 50% of the available bandwidth:



(2 x 1500) + (1 x 3000) + (1 x 1400) = 7400

(queues 3 and 4 at the default 1500 bytes, queue 5 at 3000 bytes, and queue 2 at 1400 bytes)


Line #8
Decreases the byte count for queue number 2 from the default of 1500 to 1400 bytes.

Line #9
Increases the byte count for queue number 5 from the default of 1500 to 3000 bytes.

Line #10
Defines access list 10.

Line #12
Assigns custom queue list number 1 to serial interface 0.

Unstated above is that queues 3 and 4 are set for 1500 bytes, the default value. In this example the total byte count is 14,800 bytes. IP is set for 50% of the traffic, with the rest shared among the other traffic in proportion to their byte counts.

Class-Based Weighted Fair Queuing


CBWFQ expands the WFQ functionality using the Modular QoS CLI (MQC). There are three steps for CBWFQ (a minimal sketch follows this list):

Define a class map, with matches defined by protocol, interface or ACL
Define a policy map to set QoS parameters for the class map
Configure a service policy to apply the policy map to an interface
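The sketch below simply mirrors the three steps above; the class, policy and interface names, the ACL number and the bandwidth value are placeholders:

class-map branch-apps
 match access-group 110
!
policy-map wan-policy
 class branch-apps
  bandwidth 256
!
interface serial 0/0
 service-policy output wan-policy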

CBWFQ also allows for low latency queuing, using the 'priority' command in the policy map. This enables critical traffic to be priority-queued to minimize delay.

Packet over SONET/SDH (PoS) and IP Precedence

Cisco PoS has the IP layer riding directly above the SONET layer, eliminating the overhead usually required to run IP over ATM and SONET, while still offering strong quality-of-service (QoS) guarantees. PoS was designed to overcome some of the limitations of IP that restricted its direct use on very high-speed links, and to address some of the QoS issues inherent in IP. The three IP precedence bits in the IP header make it possible to provide differentiated classes of service by utilizing Random Early Detection (RED) and Weighted RED (WRED). As packets enter the network, the edge routers set their precedence, which is then used to determine the queuing of

packets through the network. PoS can facilitate reliable deployment of voice, video, and other time-dependent services on large, very high-speed (OC-3, OC-48 and OC-192 speed) provider networks.

IP Precedence

The header of an IP packet contains a Type of Service (ToS) field, which signifies the precedence of the packet. Routers use the ToS field to determine the priority level at which the packet should be routed. The three most significant bits of the ToS byte make up the IP Precedence. In decimal form, these make a value from 0 to 7; the higher the number, the higher the priority. Network control packets and routing updates are typically the only traffic that use levels 6 and 7. User data traffic usually uses 0 to 5. IP precedence can be configured by several methods, including through CAR or a police statement with the 'set-prec-transmit' keyword, or using the MQC or a route-map with the 'set ip precedence' command.

Example 5-5 IP Precedence Route-Map Example:

1. Router(config)# access-list 101 permit ip 10.1.1.0 0.0.0.255 192.168.1.0 0.0.0.255


2. Router(config)# route-map SET_IP_PRECEDENCE permit 10
3. Router(config-route-map)# match ip address 101
4. Router(config-route-map)# set ip precedence 5
5. Router(config-route-map)# exit
6. Router(config)# interface serial 0
7. Router(config-if)# ip policy route-map SET_IP_PRECEDENCE

Line #1
Defines the traffic to be included.

Line #4
Sets the defined traffic to a precedence level of 5, or "critical".

See the table below for a description of each precedence type:

Value   Precedence Type
0       Routine
1       Priority
2       Immediate
3       Flash
4       Flash-override
5       Critical
6       Internetwork Control
7       Network Control

Random Early Detection (RED)

Random Early Detection (RED) is a congestion-avoidance mechanism that uses the flow control features of TCP to avoid congestion. It is typically found at the core of the network to control packet flow before congestion occurs by manipulating the TCP sessions. In order to understand how RED works, you need to understand Tail Drop and TCP Slow Start.

Tail Drop: Occurs when a transmit queue on an interface is filled and the router has more incoming packets than it can handle. The router drops all packets until the queue is below the maximum level. The problem with this is that all flows of traffic are dropped, including TCP and UDP. Since TCP is a reliable protocol, lost packets will be retransmitted. UDP and other unreliable protocols will either not be retransmitted, or will have to rely on upper layer protocols for retransmission.

TCP Slow Start: Packets are sent only a few at a time so as to avoid retransmission. As packets are sent successfully without retransmission, the sender will gradually increase the rate at which it sends packets until it experiences lost packets again.

RED works by randomly dropping packets, based on the number of packets that are in the queue for an interface. As the queue gets close to its maximum capacity, it speeds up the rate at which it drops packets to avoid the Tail Drop condition. Remember that RED drops some packets randomly, whereas Tail Drop drops all the packets. RED relies on TCP Slow Start to throttle back traffic flows. By avoiding the Tail Drop behavior of dropping all packets at once, and by slowing down some traffic flows, a router interface using RED can typically keep its queues from reaching their maximum.

Weighted Random Early Detection (WRED)

WRED provides separate thresholds and weights for different IP precedences, allowing different QoS levels for different traffic. This means that during periods of congestion, standard traffic will be dropped in favor of premium traffic. Using WRED optimizes the transmission rates of individual flows and prevents congestion collapse and synchronization problems. WRED provides preferential treatment to voice traffic.

Weighted Round-Robin (WRR)/Queue Scheduling

WRR scheduling is used on the egress ports of a Layer 3 switch to manage the queuing and sending of packets. WRR sorts the packets into four queues, based on IP precedence. Devices that use WRR automatically create the four queues with a default weight for each interface. The system administrator can then assign different weights to each of the queues; the higher the WRR weight, the higher the effective bandwidth for that particular queue (a configuration sketch appears at the end of this section). This provides bandwidth to higher priority applications (using IP precedence), while still allowing access to lower priority queues. The four queues on any destination interface are configured to be part of the same service class. Bandwidth is not explicitly reserved for these four queues. Each of them is assigned a different WRR-scheduling weight, which determines the way they share the interface bandwidth.

Class of Service (CoS)

CoS manages network traffic by grouping similar types of traffic (like e-mail, streaming video or voice) together and treating each type as a class with its own level of service priority. CoS technologies do not guarantee a level of service in terms of bandwidth and delivery time; they offer a "best effort." On the other hand, CoS technology is simpler to manage than QoS, and provides more scalability as a network grows. You can think of CoS as "coarsely ground" traffic control, while QoS is "finely ground" traffic control. There are three main CoS technologies:

802.1p Layer 2 Tagging
Type of Service (ToS)
Differentiated Services (DiffServ)
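The WRR configuration sketch referred to above might look like the following; the exact commands vary by switch platform, so the interface, the four weights, and the CoS-to-queue mapping here are assumptions for illustration only:

interface GigabitEthernet0/1
 wrr-queue bandwidth 10 20 30 40
 wrr-queue cos-map 4 5

Here the fourth queue would receive the largest share of the scheduler's attention, and CoS 5 traffic is mapped into it.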

Shaping vs. Policing
Cisco IOS QoS offers two kinds of traffic regulation mechanisms:



Policing: The rate-limiting features of committed access rate (CAR) and the Traffic Policing feature provide the functionality for policing traffic. A policer typically drops excess traffic.

Shaping: The features of Generic Traffic Shaping (GTS), Class-Based Shaping, Distributed Traffic Shaping (DTS), and Frame Relay Traffic Shaping (FRTS) provide the functionality for shaping traffic. A shaper typically delays excess traffic using a buffer, or queuing mechanism, to hold packets and shape the flow when the data rate of the source is higher than expected.
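As a rough sketch of the difference (the class name, policy names, access list, and 128 kbps rate are illustrative assumptions, not taken from the text above), the same traffic class can be either policed or shaped using the MQC:

Router(config)# class-map match-all BULK
Router(config-cmap)# match access-group 101
Router(config)# policy-map POLICE-BULK
Router(config-pmap)# class BULK
Router(config-pmap-c)# police 128000 8000 8000 conform-action transmit exceed-action drop
Router(config)# policy-map SHAPE-BULK
Router(config-pmap)# class BULK
Router(config-pmap-c)# shape average 128000

The policer discards excess traffic immediately, while the shaper buffers it and releases it at the configured rate.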

Both policing and shaping mechanisms use the traffic descriptor for a packet to ensure adherence and service. Policers and shapers usually identify traffic descriptor violations in an identical manner, but as shown above, they usually differ in how they respond to violations. Traffic shaping and policing can work in tandem. For example, a good traffic shaping scheme should make it easy for nodes inside the network to detect misbehaving flows (sometimes called policing the traffic of the flow).

Traffic Shaping
As discussed above, traffic shaping allows you to control access to the available bandwidth, to make sure that traffic follows the established policies, and to prevent packet loss. It smoothes out network flows by queuing excess traffic. Data transfer can be limited to a specifically configured rate, or to a dynamically calculated rate that depends on the actual level of congestion. Traffic shaping is not supported with optimum, distributed, or flow switching. If you enable traffic shaping, all interfaces will revert to fast switching. There are two traffic-shaping tools:

Generic traffic shaping (GTS): GTS reduces outbound traffic flow by constraining specified traffic to a particular bit rate while queuing bursts of the specified traffic. It uses a weighted fair queue to hold the delayed traffic, and is configured per interface or subinterface. It is not supported over ISDN, tunnel, or dialup interfaces.

Frame Relay traffic shaping (FRTS): FRTS is useful for managing network traffic congestion issues specific to Frame Relay links. It can use weighted fair, priority, or custom queues, depending on how it is configured. FRTS is applied to individual DLCIs, and the parameters are defined using map-class definitions.

Example 5-6 Traffic Shaping Example #1: GTS is configured on an interface or subinterface, using the traffic-shape interface command.

1. Router(config)# access-list 101 permit udp any any

2. Router(config)# interface Ethernet0
3. Router(config-if)# traffic-shape rate 5000000 625000 625000
4. Router(config-if)# exit
5. Router(config)# interface Ethernet1
6. Router(config-if)# traffic-shape group 101 1000000 125000 125000

Line #3: Ethernet 0 is configured to limit all output to 5 Mbps.
Line #6: Ethernet 1 is configured to use access list 101 to limit User Datagram Protocol (UDP) traffic to 1 Mbps.

Example 5-7 Traffic Shaping Example #2: FRTS is configured by creating an appropriate map class, enabling traffic shaping through the frame-relay traffic-shaping interface command on the main Frame Relay interface, and applying the map class to the specific DLCI. Here is an example of a valid FRTS configuration:

interface Serial1
 ip address 192.168.1.1 255.255.255.252
 no ip directed-broadcast
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay interface-dlci 100
  class cciebrad

map-class frame-relay cciebrad
 no frame-relay adaptive-shaping
 frame-relay cir 56000
 frame-relay bc 8000
 frame-relay be 16000
 frame-relay mincir 32000

frame-relay cir: Committed Information Rate (CIR); the average rate at which you want to send traffic, in bps.
frame-relay mincir: The minimum rate guaranteed by the service provider.

Tc: A time interval. It cannot be configured directly, but it can be calculated as Tc = Bc/CIR.
frame-relay bc: Committed burst (Bc); the bits of user data transferred during the Tc period at the CIR.
frame-relay be: Excess burst (Be); the maximum number of bits in excess of Bc that can be transferred during the Tc.
frame-relay adaptive-shaping becn: If BECNs are received, the PVC will decrease its transmit rate until the mincir level is reached.

Committed Access Rate (CAR)


CAR is used on interfaces to rate-limit traffic by IP address or by protocol. The first step in using CAR is setting your rate policy, which determines what is to be done with traffic that exceeds a set bandwidth threshold. For example, you can configure an interface to drop all Telnet traffic that exceeds 64 kbps. The rate limit consists of three values: average rate (in bits per second), normal burst size (in bytes), and excess burst size (in bytes). Note that the average rate is specified in bits per second, while the two burst values are specified in bytes. If the bandwidth being utilized is below the average rate, the traffic is said to conform to the rate policy. Once the traffic exceeds this defined threshold, it is said to exceed the rate policy. Once traffic exceeds the average rate, it is allowed to continue being sent only if the policy allows for a burst; this depends on the values you choose. The normal burst size is the amount of traffic that can be sent before the next threshold is exceeded. Once traffic exceeds the normal burst value, it is subject to RED. RED drops only some of the packets in order to bring the traffic rate below the limit. If the traffic is not slowed enough by RED and exceeds the excess burst size, then all excess traffic is dropped, or is subject to whatever rate policy you decide. To configure CAR, you first define the access list for the traffic you want to limit, then create a rate-limit and apply it to an interface. In the following example, we limit outbound web traffic to 512 kbps with a burst to 72 KB. If the traffic exceeds 72 KB, it is dropped. Remember, the average rate is in bits, and the burst sizes are in bytes.

Router(config)# access-list 101 permit tcp any any eq http
Router(config)# interface hssi 0/0/1

Router(config-if)# rate-limit output access-group 101 512000 72000 72000 conform-action transmit exceed-action drop

Network-Based Application Recognition (NBAR)
Network-Based Application Recognition (NBAR) classifies application-level protocols so that QoS policies can be applied to traffic. This intelligent classification handles many applications, including web-based and other difficult-to-classify protocols that use dynamic TCP/UDP port assignments. NBAR can also determine which protocols and applications are currently running on a network, so that the right QoS policy can be applied. NBAR can also classify subport HTTP traffic by Host name as well as by MIME type or URL. This gives users the ability to classify HTTP traffic by web server name. NBAR provides a special Protocol Discovery feature that determines which application protocols are traversing a network at any given time. The Protocol Discovery feature captures key statistics associated with each protocol in a network. These statistics can be used to define traffic classes and QoS policies for each traffic class. NBAR can also classify static port protocols. Although Access Control Lists (ACLs) can also be used for this purpose, NBAR is easier to configure and can provide classification statistics that are not available when using ACLs. Once an application is recognized and classified by NBAR, a network can invoke services specific to that application. In this way, NBAR ensures that network bandwidth is used efficiently by working with QoS features to provide:

Guaranteed bandwidth
Bandwidth limits
Traffic shaping
Packet coloring

NBAR introduces several new classification features:

Classification of applications that dynamically assign TCP/UDP port numbers. NBAR can classify application traffic by looking beyond the TCP/UDP port numbers of a packet into the TCP/UDP payload itself, classifying packets on content within the payload such as transaction identifier, message type, or other similar data. This is called subport classification; an example would be classification of HTTP by URL, Host, or Multipurpose Internet Mail Extension (MIME) type.
NBAR can classify Citrix Independent Computing Architecture (ICA) traffic and perform subport classification of that traffic using Citrix published applications.

NBAR is capable of classifying the following three types of protocols:



Non-UDP and non-TCP IP protocols
TCP and UDP protocols that use statically assigned port numbers
TCP and UDP protocols that dynamically assign port numbers and therefore require stateful inspection

Configuring NBAR
Cisco Express Forwarding (CEF) must be enabled before NBAR can be configured. NBAR is configured by using the MQC to define traffic classes, to define the traffic policies that will be applied to those traffic classes, and to attach the policies to interfaces:

Class-map: Defines one or more traffic classes by specifying the criteria by which traffic is classified.
Policy-map: Defines one or more QoS policies (such as shaping, policing, and so on) to apply to traffic defined by a class map.
Service-policy: Attaches a policy map to an interface on the router.

NBAR configuration example:

class-map match-any dropthis
 match protocol fasttrack file-transfer "*"
 match protocol gnutella file-transfer "*"
 match protocol kazaa2
 match protocol napster
policy-map not4u
 class dropthis
  drop

interface fa0
 ip nbar protocol-discovery
 service-policy input not4u

(Note: The 'drop' keyword was introduced in Cisco IOS Software Release 12.2(13)T.)

Differentiated Services Code Point (DSCP)
Differentiated Services (DiffServ) is a QoS model that allows intermediate systems to treat traffic according to relative priorities based on what was called the Type of Service (ToS) field. This is done by reallocating bits of the ToS byte to increase the number of definable priority levels.

The altered packet structure results in the DiffServ field taking over the IPv4 ToS field, which is one entire byte (eight bits) of the IP packet, the last two bits of which had been unused. The six most significant bits of the former ToS byte now become the DiffServ field. IP precedence used the three most significant bits, while DSCP, an extension of IP precedence, uses the whole six bits to select the per-hop behavior for the packet at each network node. The last two bits in the DiffServ field, which are not defined within the DiffServ architecture, are now used as Explicit Congestion Notification (ECN) bits.

ToS Byte:       P2  P1  P0  T2  T1  T0  CU1 CU0
DiffServ Field: DS5 DS4 DS3 DS2 DS1 DS0 ECN ECN

The first three bits are still used for precedence, making DSCP backwards compatible with IP Precedence. DSCP increases the number of priority levels using the next three bits (DS2, DS1, DS0):

DS2 is the delay bit (0 = normal, 1 = low)
DS1 is the throughput bit (0 = normal, 1 = high)
DS0 is the reliability bit (0 = normal, 1 = high)

Assured Forwarding and Expedited Forwarding
There are four AF classes, AF1x through AF4x, where x is the drop probability (1 = low, 2 = medium, 3 = high). The drop probabilities use the DS2 and DS1 bits. Example: the binary DSCP 011100 has a decimal value of 28. The first three bits denote a precedence value of 3, and the next two bits show a value of 10 (decimal 2), so the value is AF32. Expedited Forwarding is considered premium, and codepoint 101110 is recommended. AF is defined in RFC 2597; EF is defined in RFC 2598. Cisco uses queuing techniques to control the per-hop behavior, using the IP precedence or DSCP values in the IP header of the packet to define traffic as belonging to a particular service class. Packets are first prioritized by class, then differentiated and prioritized by considering the drop percentage. It is important to note that DSCP does not specify a precise definition of "low," "medium," and "high" drop percentages. Also remember that DiffServ is designed to allow a finer granularity of priority setting for the applications and devices that can make use of it; it does not specify interpretation (that is, the action to be taken) once the differentiation is made. Per-hop packet behavior decisions can depend on traffic conditions and how packets are classified. There are three ways you can use the DSCP field:



Classifier: Using a traffic descriptor (either an ACL or a map-class definition) to categorize packets within a specific group, making them available for QoS handling by the network according to the service characteristics defined by the DSCP value. Network traffic can be partitioned into multiple priority levels or classes of service.
Marker: Setting the DSCP field depending on actual traffic conditions defined in a traffic profile.
Metering: Using Committed Access Rate, Class-Based Policing, or DSCP-compliant WRED to check compliance with the defined traffic profile, using either a shaper or a dropper function.

Resource Reservation Protocol (RSVP)
RSVP is a protocol that creates a reservation for a certain amount of bandwidth through a network. A client makes a request from its router to a specific destination. The router then checks with the rest of the routers in the path to ensure that bandwidth is available. If it is available, the routers will guarantee the session the requested bandwidth. If it is not available, the router will notify the client. The client has the option of reducing the request and trying again, or simply communicating with the desired server or device without using QoS. Configuring RSVP requires setting the bandwidth for the interface(s) and defining the amount of bandwidth available to RSVP.

Router(config)# interface s0
Router(config-if)# bandwidth 64
Router(config-if)# ip rsvp bandwidth 32 1

In this example, we set the bandwidth of the serial interface to 64 kbps. The first number (32) in the ip rsvp bandwidth command is the maximum amount of bandwidth that can be used by RSVP. The second number is the maximum amount that can be reserved by a single session. If you do not specify these numbers when you type the ip rsvp bandwidth command, both default to 75% of the interface bandwidth; in this example, the result would be the same as typing the command ip rsvp bandwidth 48 48. If the default 75% is sufficient for your network, you do not need to specify these numbers.

Load Balancing
Load balancing allows a router to take advantage of multiple best paths to a given destination. The paths are derived either statically or with dynamic protocols, such as RIP, EIGRP, OSPF, and IGRP. Sometimes the router must select a route from among many learned via the same routing protocol. The router usually chooses the path with the lowest cost (or best metric) to the destination. Each routing process calculates its cost differently, and the costs may need to be manipulated in order to achieve load balancing.

If the router receives and installs multiple paths with the same administrative distance and cost to a destination, load balancing can occur. The IGRP and EIGRP routing processes also support unequal-cost load balancing; you can use the variance command with IGRP and EIGRP to accomplish this. You can have up to sixteen equal-cost paths with Cisco IOS, but some IGPs have their own limitations. All IGPs by default will load-share over four equal paths. With IOS commands, it is possible to allow more equal paths when desired, as in the sketch below. Load balancing can be configured to work per destination or per packet. Per-destination load balancing has the router distribute packets based on the destination address. Given two paths to the same network, all packets for destination X on that network go over the first path, all packets for destination Y on that network go over the second path, and so on. Per-packet load balancing means that the router sends one packet for destination X over the first path, the second packet for (the same) destination X over the second path, and so on. The router's CPU looks at every single packet and load balances over the number of routes in the routing table for the destination. This can crash some routers, because the CPU may become overloaded.
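A brief sketch (the EIGRP AS number, network statement, and values are illustrative assumptions): the number of parallel paths and the unequal-cost behavior are controlled per routing process.

Router(config)# router eigrp 100
Router(config-router)# network 10.0.0.0
Router(config-router)# maximum-paths 6
Router(config-router)# variance 2

Here maximum-paths raises the default of four equal-cost paths, and variance 2 lets EIGRP install routes whose metric is up to twice the best metric, enabling unequal-cost load balancing.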

802.1x and QoS
Consider this scenario: a branch-office wireless deployment using LEAP sends authentication requests back to the authentication server at the central site. Since this traffic crosses a WAN link, network congestion could cause UDP RADIUS packets to be dropped or delayed. The recommended workaround is to classify the RADIUS traffic, mark it with a DSCP or IP precedence value, and place it in a priority transmit queue.

The value following the shape peak command is the committed information rate (CIR). In the configuration referenced here (see question 5-16 later in this chapter), DSCP value 10 falls under the class gold, which has a CIR value of 512000. To shape traffic to the indicated bit rate according to the algorithm specified, use the shape policy-map class configuration command:

shape [average | peak] mean-rate [[burst-size] [excess-burst-size]]

Syntax:
average: (Optional) Committed burst (Bc) is the maximum number of bits sent out in each interval.



peak: (Optional) Bc + excess burst (Be) is the maximum number of bits sent out in each interval.
mean-rate: Also called the committed information rate (CIR). Indicates the bit rate used to shape the traffic, in bits per second. When this command is used with backward explicit congestion notification (BECN) approximation, the bit rate is the upper bound of the range of bit rates that will be permitted.
burst-size: (Optional) The number of bits in a measurement interval (Bc).
excess-burst-size: (Optional) The acceptable number of bits permitted to go over the Bc.

The main reasons to use traffic shaping are to:

Control access to available bandwidth
Ensure that traffic conforms to specific policies
Regulate the flow of traffic in order to avoid congestion

Reference: http://www.cisco.com/univercd/cc/td/doc/product/lan/cat4224/sw_confg/traffic.htm

Here is sample output of the show traffic-shape command, with the derived values:

Target Rate = CIR = 100000 bits/s
Mincir = CIR/2 = 100000/2 = 50000 bits/s
Sustain = Bc = 8000 bits/int
Excess = Be = 8000 bits/int
Interval = Bc/CIR = 8000/100000 = 80 ms
Increment = Bc/8 = 8000/8 = 1000 bytes
Byte Limit = Increment + Be/8 = 1000 + 8000/8 = 2000 bytes

Reference: http://www.cisco.com/warp/public/125/framerelay_ts_cmd.html

The ip rtp priority command is used to create a strict priority queue for voice packets while providing WFQ for non-voice traffic. Strict priority means that if packets exist in the priority queue, they are dequeued and sent first; that is, before packets in other queues are dequeued. To reserve a strict priority queue for a set of Real-Time Transport Protocol (RTP) packet flows to a range of User Datagram Protocol (UDP) destination ports, use the ip rtp priority command in interface configuration mode. This command is useful for voice and other delay-sensitive applications.
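A minimal sketch (the interface, port range, and the 128 kbps figure are assumptions for illustration):

Router(config)# interface Serial0
Router(config-if)# fair-queue
Router(config-if)# ip rtp priority 16384 16383 128

Here UDP ports 16384 through 32767 (a common RTP range) receive strict priority service up to 128 kbps, while all other traffic is handled by WFQ.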

The ip rtp priority command extends and improves on the ip rtp reserve command by allowing you to specify a range of UDP/RTP ports whose voice traffic is guaranteed strict priority service over any other queues or classes using the same output interface.

Custom Queuing (CQ)
With custom queuing there is no preemptive queue. Bandwidth is statically serviced according to the configuration, and the queue that is being serviced at any given time will be finished before the next queue is serviced. With CQ, bandwidth is allocated proportionally for each different class of traffic. CQ allows you to specify the number of bytes or packets to be drawn from each queue, which is especially useful on slow interfaces.

Why Use CQ?
Cisco IOS QoS CQ allows you to provide specific traffic with guaranteed bandwidth at a potential congestion point, assuring the traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic. For example, you could reserve half of the bandwidth for a specific data protocol, allowing the other half to be used by other protocols. If the specified type of traffic is not using the bandwidth reserved for it, the unused bandwidth can be dynamically allocated to other traffic types.

Restrictions
With CQ enabled, packet switching takes longer than with FIFO, since packets are classified by the processor card. CQ is statically configured and does not adapt to changing network conditions.

You should use the class-map global configuration command to configure a traffic class and the match criteria that will be used to identify traffic as belonging to that class. In the following procedure, traffic matching a specified protocol will be classified as belonging to the traffic class. This classifies the traffic; the traffic policy configuration then determines how to treat the traffic.

To define the match criteria, use the following commands, starting in global configuration mode:

Step 1: Router(config)# class-map [match-all | match-any] class-name
Purpose: Specifies the user-defined name of the class map. The match-all option specifies that all match criteria in the class map must be matched. The match-any option specifies that one or more match criteria must match.

Step 2: Router(config-cmap)# match protocol protocol-name
Purpose: Specifies a protocol supported by NBAR as a matching criterion.

For example, if all FTP traffic needs to be marked with a QoS group value of 1, you would use the match protocol ftp command in class-map configuration mode, and the set qos-group 1 command in policy-map class configuration mode (assuming that the traffic policy uses the specified class). Thus, classifying FTP traffic is handled in the traffic class, while marking the QoS group value to 1 is handled in the traffic policy.

Configuring a Traffic Policy
To specify the QoS policies to apply to traffic classes defined by a traffic class, use the following commands, beginning in global configuration mode:

Step 1: Router(config)# policy-map policy-name
Purpose: Specifies the traffic policy name entered by the user.

Step 2: Router(config-pmap)# class class-name
Purpose: Specifies the name of a previously defined traffic class.

Step 3: Router(config-pmap-c)#
Purpose: Enters policy-map class configuration mode, a prerequisite for entering QoS policies.

Attaching a Traffic Policy to an Interface
To attach a traffic policy to an interface and to specify the direction in which the traffic policy should be applied (to either packets coming into the interface or packets leaving the interface), use the following commands in interface configuration mode, as needed:

Router(config-if)# service-policy output policy-map-name
Purpose: Specifies the name of the traffic policy to be attached; it is applied to all packets leaving the interface.

Router(config-if)# service-policy input policy-map-name
Purpose: Specifies the name of the traffic policy to be attached; it is applied to all packets entering the interface.

To detach a policy map from an interface, use the no service-policy [input | output] policy-map-name command.

Configuring a Traffic Class with NBAR Example
The class-map class1 command uses the NBAR classification of SQL*Net as the matching criterion:

Router(config)# class-map class1
Router(config-cmap)# match protocol sqlnet

Class of Service (CoS) is the ability of a network to differentiate service to specified network traffic over packet networks and cell networks. The Cisco IOS software default is to leave the IP precedence value in the header unchanged, allowing all internal network devices to use the IP precedence setting. This policy follows the standard approach: network traffic should be classified by type of service at the network perimeter, and the service types should be implemented in the network core. Network core routers can then use the precedence bits to determine transmission order, likelihood of packet drop, and so on. You can use any of these to set IP precedence in packets:

Policy-Based Routing
QoS Policy Propagation via Border Gateway Protocol
Committed Access Rate
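As one hedged illustration of setting IP precedence at the network perimeter with policy-based routing (the access list, route-map name, and interface are illustrative assumptions):

Router(config)# access-list 110 permit tcp any any eq www
Router(config)# route-map SET-PREC permit 10
Router(config-route-map)# match ip address 110
Router(config-route-map)# set ip precedence critical
Router(config)# interface Ethernet0
Router(config-if)# ip policy route-map SET-PREC

Traffic entering Ethernet0 that matches access list 110 is marked with precedence 5 (critical); all other traffic is forwarded normally.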

DiffServ introduces the DiffServ Code Point (DSCP) concept, using the first 6 bits of the ToS field, giving 2^6 = 64 different values. RFC 2474 describes the Differentiated Services (DS) field and the DiffServ Code Point (DSCP). A comparison of the two is displayed below:

ToS Byte:       1 0 1 T2 T1 T0 CU1 CU0
DiffServ Field: 1 0 1 0  0  0  ECN ECN

As you can see from the two packet formats, the two definitions do not conflict: the three IP precedence bits occupy the three high-order bits of the DiffServ field.

The Modular QoS CLI (MQC) separates:

A classification policy
The other parameters that act on the results of the classification policy

MQC is configured and implemented as follows:

Define a traffic class using the class-map command.
Create a service policy by associating the traffic class with one or more QoS features (using the policy-map command).
Attach the service policy to the interface using the service-policy command.

Priority queuing is configured using the priority-list command, as in the sketch below. Four levels can be defined in a priority list:

High
Medium
Normal
Low
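A short sketch of the syntax (the protocol keyword, access list, and interface are illustrative assumptions only):

Router(config)# priority-list 1 protocol ip high list 101
Router(config)# priority-list 1 default normal
Router(config)# interface Serial0
Router(config-if)# priority-group 1

Traffic matching access list 101 is placed in the high queue; everything else falls into the default queue.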

The default queue is the normal queue.

The DSCP field definition allows up to 64 (0 through 63) distinct service-level classifications on IP frames. IP Precedence is the 3 most significant bits of the ToS field. The last two bits represent the Explicit Congestion Notification (ECN) bits. Consequently, IP Precedence maps to DSCP by using IP Precedence as the 3 high-order bits and padding the lower-order bits with 0.

To configure QoS class maps in configuration mode, use the class-map command. Use the no form of this command to delete a class map:

class-map name [match-all | match-any]
no class-map name [match-all | match-any]

Syntax description:
name: Class map name.

251 Chapter 5: Quality of Service (QoS) match-all (Optional) Matches all match criteria in the class map. match-any (Optional) Matches one or more match criteria. The default is match-all when if do not specify match-any. Quantum values are calculated as MTU + (weight-l)*512 per queue Differences Between Traffic-Shaping Mechanisms The different traffic-shaping mechanisms are similar in their implementation, share the same code and data structures, but differ with respect to their CLIs and queue types used: Generic traffic shaping (GTS) Class-based shaping Distributed traffic shaping (DTS) Frame Relay traffic shaping (FRTS)

For GTS, the shaping queue is a weighted fair queue (WFQ). For class-based shaping, GTS can be configured on a class rather than just on an access control list (ACL), but you must first define traffic classes using match criteria, including protocols, access control lists (ACLs), and input interfaces. Traffic shaping can then be applied to each defined class. For FRTS, the queue can be:

A weighted fair queue (WFQ), configured by the frame-relay fair-queue command
A strict priority queue with WFQ, configured by the frame-relay ip rtp priority command in addition to the frame-relay fair-queue command
Custom queuing (CQ)
Priority queuing (PQ)
First-in, first-out (FIFO)

A service provider uses Backward Explicit Congestion Notification (BECN) messages to notify a Frame Relay customer that there is congestion on the network. The traffic-shape adaptive command enables the router to react to this. The traffic-shape adaptive [bit-rate] command configures the minimum bit rate to which traffic is shaped when BECNs are received on an interface. Under adaptive GTS, the router uses BECNs to estimate the available bandwidth and adjust the transmission rate accordingly. The actual maximum transmission rate will be between the rate specified in the traffic-shape adaptive command and the rate specified in the traffic-shape rate command.
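A minimal sketch of adaptive GTS (the interface and the two rates are illustrative assumptions):

Router(config)# interface Serial0
Router(config-if)# traffic-shape rate 128000
Router(config-if)# traffic-shape adaptive 64000

The interface normally shapes to 128 kbps but can throttle down toward 64 kbps as BECNs are received.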

Traffic shaping controls outgoing traffic from an interface to match its transmission to the speed of the remote target interface, as well as to ensure that the traffic conforms to the policies that apply to it.


Traffic meeting a particular profile can be shaped to meet downstream requirements, thus eliminating bottlenecks caused by data-rate mismatches. Buffers are used to do this, providing temporary storage for traffic that is queued. An optional class-based shaping command allows adjustment of the maximum number of buffers. Committed burst (Bc) is defined as the maximum number of bits the Frame Relay network commits to transfer during a Committed Rate Measurement Interval (Tc). Tc is defined as Tc = Bc / CIR. The default Bc is 7000 bits. The Be can be set to 0, which means that no traffic will be able to burst above the CIR.
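As a hedged worked example (assuming a CIR of 56000 bps and the default Bc of 7000 bits mentioned above): Tc = Bc / CIR = 7000 / 56000 = 0.125 s, or 125 ms per interval.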

Using WFQ allows traffic priority management that automatically sorts among individual traffic streams, without requiring that access lists be defined. WFQ can manage duplex data streams between pairs of applications, as well as simplex data streams such as voice or video. There are two categories of WFQ sessions:

High bandwidth, which shares the transmission service proportionally according to assigned weights. For a WFQ-enabled interface, new messages for high-bandwidth traffic streams are discarded after the configured or default congestive-messages threshold has been met.
Low bandwidth, which has effective priority over high-bandwidth traffic. Low-bandwidth conversations, which include control message conversations, continue to queue data. As a result, the fair queue may occasionally contain more messages than specified in its configured threshold.

This example requests a fair queue with a congestive discard threshold of 64 messages, 512 dynamic queues, and 18 RSVP queues:

interface Serial 3/0
 ip unnumbered Ethernet 0/0

fair-queue 64 512 18
To configure CBWFQ, three steps are required:

Define the class maps.
Configure the policy map with the per-class policies.
Attach the service policy to the interface, enabling CBWFQ.

This is done using the Modular QoS CLI (MQC) syntax. You must use the MQC to configure class-based marking.
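A minimal CBWFQ sketch using the MQC (the class name, match criterion, and 256 kbps guarantee are illustrative assumptions):

Router(config)# class-map match-all CRITICAL
Router(config-cmap)# match ip dscp af31
Router(config)# policy-map CBWFQ-POLICY
Router(config-pmap)# class CRITICAL
Router(config-pmap-c)# bandwidth 256
Router(config-pmap)# class class-default
Router(config-pmap-c)# fair-queue
Router(config)# interface Serial0
Router(config-if)# service-policy output CBWFQ-POLICY

The CRITICAL class is guaranteed 256 kbps during congestion, while unclassified traffic in class-default is flow-based fair-queued.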

The Committed Access Rate (CAR) and Distributed CAR (DCAR) services limit the input or output transmission rate on an interface or subinterface based on flexible criteria. The rate-limiting feature of CAR gives a network operator the means to:

Define Layer 3 aggregate or granular access and egress bandwidth rate limits
Specify traffic-handling policies for traffic that either conforms to or exceeds the specified rate limits

CQ provides a fairness that is not available with priority queuing (PQ). With priority queuing, queues are serviced strictly according to priority, so bandwidth hogs in the higher-priority queues can dominate the link. There are some things to remember about CAR rate limiting:

Aggregate access or egress matches all packets on an interface or subinterface.
Granular access or egress matches a particular type of traffic based on precedence.
You can designate CAR rate-limiting policies by physical port, packet classification, IP address, MAC address, application flow, and other criteria specifiable in access lists or extended access lists.
CAR rate limits can be implemented on input or output interfaces or subinterfaces, including Frame Relay and ATM subinterfaces.

An example of the use of CAR's rate-limiting capability would be an application-based rate that limits Web HTTP traffic to 50 percent of link bandwidth (which reserves capacity for non-Web traffic including, perhaps, mission-critical applications). The command that enables RSVP is ip rsvp bandwidth.

With CQ, you can control the available bandwidth on an interface when it is unable to accommodate the total traffic demand in the queue. Associated with each output queue is a configurable byte count, which specifies how many bytes of data will be delivered from the current queue before the system moves on to the next queue. When a queue is processed, packets are sent until the queue is empty or the number of bytes sent exceeds the queue byte count (defined by the queue-list queue byte-count command).

CAR and Extended Burst Capability
Here is how the extended burst capability works. If a packet arrives and needs to borrow n tokens because the token bucket contains fewer tokens than the packet size requires, then CAR compares two values:

Extended burst parameter value
Compounded debt

Compounded debt is computed as the sum over all a_i, where:



i indicates the ith packet that attempts to borrow tokens since the last time a packet was dropped.
a_i indicates the actual debt value of the flow after packet i is sent. Actual debt is a count of how many tokens the flow has currently borrowed.

When the compounded debt is higher than the extended burst value, CAR's exceed action takes effect. After a packet is dropped, the compounded debt is effectively set to 0. CAR will then compute a new compounded debt value, equal to the actual debt, for the next packet that needs to borrow tokens. When the actual debt is greater than the extended limit, all packets will be dropped until the actual debt is reduced through tokens accumulating in the token bucket.

Reference: http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cgcr/qos_c/qcpart4/qcpolts.htm

Committed Access Rate (CAR) Definition
Committed Access Rate (CAR) is used to rate-limit traffic. For example, all ICMP traffic that exceeds the defined level will be dropped, which prevents an ICMP flood attack from saturating the link. Rate limiting is a way to allow a network to run in a degraded manner when it needs to remain up while receiving a stream of Denial of Service (DoS) attack packets in addition to legitimate network traffic. Rate limiting can be achieved in a number of ways using Cisco IOS software, including:

Committed Access Rate (CAR)
Traffic shaping
Shaping and policing through the Modular Quality of Service Command-Line Interface (MQC)

Reference: http://www.cisco.com/en/US/products/sw/iosswrel/ps1830/products_feature_guide09186a0080087a84.html

There are two different approaches to QoS:

Congestion management, which sets up queues to ensure that higher-priority traffic gets serviced in times of congestion.
Congestion avoidance, which works by dropping packets before congestion occurs on a link.

Random Early Detection (RED) is a congestion-avoidance mechanism that takes advantage of TCP's congestion control mechanism. RED takes a proactive approach to congestion: it does not wait for a queue to become completely filled. Instead, after the average queue size exceeds a minimum threshold, RED starts dropping packets with a non-zero drop probability. Using a packet drop probability ensures that RED randomly drops packets from only a few flows, avoiding global synchronization.

A packet drop is a signal to the TCP source to slow down. Responsive TCP flows slow down after packet loss by going into slow start mode.

To have a router simulate receiving and forwarding Resource Reservation Protocol (RSVP) PATH messages, use the ip rsvp sender global configuration command. To disable this feature, use the no form of this command:

ip rsvp sender session-ip-address sender-ip-address {tcp | udp | ip-protocol} session-dport sender-sport previous-hop-ip-address previous-hop-interface bandwidth burst-size

Use the mask keyword to assign multiple IP precedences to the same rate-limit list. To determine the mask value, use these four steps:

1. Decide which precedences you want to assign to this rate-limit access list.
2. Convert the precedences into 8-bit numbers, with each bit corresponding to one precedence. An IP precedence of 0 corresponds to 00000001, 1 corresponds to 00000010, 6 corresponds to 01000000, and 7 corresponds to 10000000.
3. Add the 8-bit numbers for the selected precedences together. For example, the mask for precedences 1 and 6 is 01000010.
4. Convert the binary mask into the corresponding hexadecimal number. For instance, 01000010 becomes 0x42. This value is used in the access-list rate-limit command. Any packet with an IP precedence of 1 or 6 will match this access list. A mask of FF matches any precedence; a mask of 00 does not match any precedence.
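A hedged sketch tying these steps together (the rate-limit ACL number, interface, and rates are illustrative assumptions):

Router(config)# access-list rate-limit 25 mask 42
Router(config)# interface Serial0
Router(config-if)# rate-limit input access-group rate-limit 25 256000 8000 8000 conform-action transmit exceed-action drop

Packets with IP precedence 1 or 6 (mask 0x42) are limited to 256 kbps on input; traffic with other precedence values is not affected by this statement.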

For a class configured with the priority command, the bandwidth argument sets the maximum bandwidth allocated for its packets. The bandwidth parameter both guarantees bandwidth to the priority class and restrains the flow of packets from the priority class. When the device is congested, priority-class traffic above the allocated bandwidth is discarded. When the device is not congested, the priority-class traffic is allowed to exceed its allocated bandwidth.

Reference: http://www.cisco.com/en/US/products/sw/iosswrel/ps1834/products_feature_guide09186a0080080232.html#47832

WRED selectively drops packets before congestion occurs, and is considered to be a congestion-avoidance feature rather than a queuing feature. The WRED algorithm supports congestion avoidance on network interfaces by providing buffer management and by allowing TCP traffic to throttle back before buffers are exhausted. This helps avoid tail drops and global synchronization problems, improving network and TCP-based application performance.



Reference: http://www.cisco.com/en/US/products/sw/iosswrel/ps1829/products_feature_guide09186a00801b2406.html

The split horizon rule prohibits a router from advertising a route to a destination through any interface that the router itself uses to reach that destination. The split horizon rule is a problem with distance vector protocols, such as IGRP. Without subinterfaces, split horizon goes into effect, and routes learned on a serial interface will not be advertised back out of that interface.

In a Frame Relay configuration, the router interface always makes an initial assumption that the connection is up. Only after missing three consecutive LMI status messages will the interface go down. This explains why an interface shows an "up" status for a short time before going back down. In such a case, the counter for LMI sent increases while the counter for LMI rcvd remains 0, which clearly indicates a misconfigured LMI type. For a discussion on troubleshooting serial lines, refer to:
http://www.cisco.com/univercd/cc/td/doc/cisintwk/itg_v1/tr1915.htm#xtocid195571

The formula for the maximum number of DLCIs for ANSI is (1500 - 13) / 5, giving max DLCIs = 297.4. See below for a discussion of how this formula is derived.

Analysis
The PVC information in a PVC packet for the ANSI and Q933a LMIs is 3 bytes long, whereas for the Cisco LMI it is 6 bytes long, due to the additional "bw" (bandwidth) value. The Report Type (RT) portion of the PVC information is one byte long and the KeepAlive (KA) portion is two bytes long. The bw value represents the Committed Information Rate (CIR); the actual bw value can only be seen if the Frame Relay switch is configured to forward this information. The static overhead in each case is 13 bytes: the entire LMI packet minus IEs (10 bytes) + RT (1 byte) + KA (2 bytes).

Subtract this number from the Maximum Transmission Unit (MTU) to get the total bytes available for DLCI information. We then divide that number by the length of the PVC IE (5 bytes for ANSI and Q933a, 8 bytes for Cisco) to get the maximum theoretical number of DLCIs for the interface:

For ANSI or Q933a LMIs, the formula is: (MTU - 13) / 5 = max DLCIs.
For Cisco LMIs, the formula is: (MTU - 13) / 8 = max DLCIs.
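As a worked check with the common default MTU of 1500 bytes: ANSI or Q933a gives (1500 - 13) / 5 = 297.4, so at most 297 DLCIs; the Cisco LMI gives (1500 - 13) / 8 = 185.875, so at most 185 DLCIs.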

The frame-relay map command is used to map Layer 3 addresses to Layer 2 DLCI information.

The frame-relay mincir command specifies the minimum CIR value. This command is optional, and if it is omitted from the configuration, the default value is found by dividing the specified CIR value by two.

Frame Relay traffic shaping can use Adaptive Frame Relay Traffic Shaping for Interface Congestion, which adjusts permanent virtual circuit (PVC) sending rates based on interface congestion. When enabled, the traffic-shaping mechanism monitors interface congestion. When the congestion level exceeds a specified queue depth, all PVC sending rates are reduced to the minimum committed information rate (minCIR). As soon as interface congestion drops below the queue depth, the traffic-shaping mechanism reverts to the committed information rate (CIR). This process guarantees the minCIR for PVCs under conditions of interface congestion. Note that the sum of the minCIR values for the PVCs on the interface must be less than the usable interface bandwidth. Adaptive Frame Relay Traffic Shaping for Interface Congestion works with backward explicit congestion notification (BECN) and Foresight. If interface congestion exceeds the queue depth and adaptive shaping for interface congestion is enabled along with BECN or Foresight, then the PVC sending rate is reduced to the minCIR. When interface congestion drops below the queue depth, the sending rate adjusts in response to BECN or Foresight.

Reference: http://www.cisco.com/en/US/products/sw/iosswrel/ps1839/products_feature_guide09186a0080087b91.html

To keep everyone from sending more data than the network can hold, frames sent above the contracted rate may be marked as Discard Eligible (DE). DE bits are set by the carrier network. They are an indication of congestion within the Frame Relay network, so the DE bits are set in the interior of the carrier network, not at the provider-customer edge boundary. If your equipment receives DE-marked frames, data sent in the future may get dropped. DE-marked frames may be an early indicator of traffic rates that you did not plan for in the design of your Frame Relay WAN. Frame Relay equipment also notices congestion when it sees frames marked with the Forward Explicit Congestion Notification (FECN) and Backward Explicit Congestion Notification (BECN) bits. These merely indicate an overload within the carrier network, and are only of value in monitoring the carrier's health. You might think that your equipment would notify end stations to stop sending data, to keep frames from being discarded or from reaching a congested network. In practice your network equipment does not do this; most routers, bridges, and Frame Relay access devices (FRADs) do absolutely nothing when these bits are set. They expect the higher-layer protocols, such as TCP/IP, to handle the packet loss.



When one device sends data to another device across a Frame Relay infrastructure and an intermediate Frame Relay switch encounters congestion (full buffers, oversubscribed ports, overloaded resources, and so on), the intermediate Frame Relay switch sets the BECN bit on packets being returned to the sending device and the FECN bit on packets being sent to the receiving device. The BECN tells the sending router to back off and apply flow control, such as traffic shaping. The FECN informs the receiving device that the flow is congested and that it should inform the upper-layer protocols, which can then throttle mechanisms such as windowing, if possible, and tell the sending application to slow down.

FECN (Forward Explicit Congestion Notification) tells the receiving device that the path is congested and that the upper-layer protocols should expect some delay.
BECN (Backward Explicit Congestion Notification) tells the transmitting device that the Frame Relay network is congested and that it should back off for better throughput.

Reference: http://www.sins.com.au/network/frame-relay-fecn-becn.html

Connecting from Spoke to Spoke
In a hub-and-spoke configuration, no mapping exists for the IP addresses of the other spokes. You cannot ping from one spoke to another spoke in a hub-and-spoke configuration using multipoint interfaces, because only the hub address can be learned via the Inverse Address Resolution Protocol (IARP). However, you can ping the addresses of the other spokes if you configure a static map, using the frame-relay map command, for the IP address of a remote spoke with the local data link connection identifier (DLCI). The local DLCI toward the hub must be specified in the frame-relay map command; in the sketch below it is assumed to be 102.
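A minimal spoke-side sketch (the addresses and DLCI are illustrative assumptions; 102 is assumed to be this spoke's local DLCI toward the hub):

Router(config)# interface Serial0.1 multipoint
Router(config-subif)# ip address 10.1.1.2 255.255.255.0
Router(config-subif)# frame-relay map ip 10.1.1.1 102 broadcast
Router(config-subif)# frame-relay map ip 10.1.1.3 102 broadcast

Here 10.1.1.1 is the hub and 10.1.1.3 is another spoke; mapping the remote spoke's address to the local DLCI 102 lets spoke-to-spoke pings transit the hub.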

Chapter 5 Questions

5-1. Which of the following conditions is caused when a system is attempting to hand a packet to the retransmit buffers when no buffers are available?
a) Buffer drops
b) Input drops
c) Output drops
d) Route drops

5-2. Priority queuing supports how many queues?
a) 1
b) 2
c) 4
d) 8
e) 16
f) 32

5-3. If you wanted to make sure that each queue receives a fixed percentage of the bandwidth, which queuing method would you use?
a) Weighted Fair Queuing
b) FIFO
c) Priority Queuing
d) Fair Weight Queuing
e) Link-state Queuing
f) Custom Queuing

5-4. Which of the following is most likely to lose packets because a higher-level queue is hogging resources?
a) Weighted Fair Queuing
b) FIFO
c) Priority Queuing
d) Fair Weight Queuing
e) Link-state Queuing
f) Custom Queuing

5-5. When a packet is blocked because it fails to match a filter or access list, what type of ICMP message is sent to the destination?
a) Access Denied
b) Source Quench
c) Destination Unreachable
d) Time Exceeded

e) None

5-6. Custom queuing supports how many queues?
a) 1
b) 2
c) 4
d) 8
e) 16
f) 32

5-7. What is not true of RED?
a) Uses Tail Drop to help prevent the queue from reaching its maximum
b) Drops all packets once the queue reaches its maximum
c) Uses TCP slow start to throttle back clients sending too much data
d) RED randomly drops packets when the queue gets close to its maximum

5-8. Which are true of WFQ?
a) Gives high-volume traffic a lower priority than low-volume traffic
b) WFQ is enabled by default on links with speeds less than E1
c) WFQ does not require configuration
d) WFQ requires configuration with the fair-queue command

5-9. Which of the following Priority Queuing configurations are correct?
a) priority-list 1 protocol ipx high
   priority-list 1 protocol ip medium
   priority-list 1 protocol default normal
   interface serial 0
   priority-group 1
b) priority list 1 protocol ipx high
   priority list 1 protocol ip medium
   priority list 1 protocol default normal
   interface serial 0
   priority group 1
c) priority-list 1 protocol ipx high
   priority-list 1 protocol ip medium
   priority-list 1 protocol default normal
   interface serial 0
   priority group 1

d) priority-list 1 ipx protocol high
   priority-list 1 ip protocol medium
   priority-list 1 default normal
   interface serial 0
   priority-group 1

5-10. When setting IP precedence, which of the following is a 5 in the ToS field?
a) Priority
b) Immediate
c) Critical
d) Network

5-11. In the command ip rsvp bandwidth 128 64, what do the 128 and 64 represent?
a) bandwidth of link and burstable speed for rsvp
b) rsvp cir of link and burstable speed
c) rate-limits for rsvp
d) limit for rsvp and limit for single rsvp session

5-12. What are the 3 types of actions that CAR can be configured for when traffic exceeds the average rate?
a) Drop
b) Transmit
c) Exceed
d) Set precedence and transmit
e) Conform

5-13. How many multiple IP routing paths can OSPF use?
a) 2
b) 4
c) 6
d) 8

5-14. How many multiple IP routing paths will EIGRP use by default?
a) 2
b) 4
c) 6
d) 8

5-15. Looking at the following configuration, which of the statements would be true?

Router(config)# Queue-list 1 protocol ip 1
Router(config)# Queue-list 1 protocol ipx 2
Router(config)# Queue-list 1 default 3
Router(config)# Queue-list 1 queue 1 byte count 3000
Router(config)# Queue-list 1 queue 2 byte count 5000
Router(config)# interface serial 0
Router(config-if)# custom-queue-list 1

a) The router will never get to queue 2
b) The router will send approximately 3000 bytes of IP, approximately 5000 bytes of Novell and then empty queue 3
c) The router will send approximately 3000 bytes of IP, approximately 5000 bytes of Novell and 0 from queue 3
d) The router will send approximately 3000 bytes of IP, approximately 5000 bytes of Novell and approximately 1500 from queue 3
e) Nothing, the configuration is invalid


5-16. You want traffic on your frame relay link to conform to specific policies. Because of this, you configure traffic shaping as follows:

Router configuration:
ip cef
class-map match-all gold
 match ip dscp 10 12 14
class-map match-all bronze
 match ip dscp 26 28 30
class-map match-all silver
 match ip dscp 18 20 22
policy-map SHAPE
 class gold
  shape peak 512000
  bandwidth percent 50
 class bronze
  shape average 384000
  bandwidth percent 20
 class silver
  bandwidth percent 30
  shape peak 448000
interface Serial4/0
 encapsulation frame-relay
 ip address 14.34.34.51 255.255.255.0
 service-policy output SHAPE
end

You verify your configuration using the show policy-map command as shown below:

Router#sh policy-map inter s4/0
 Serial4/0

  Service-policy output: SHAPE (1865)
   Class-map: gold (match-all) (1866/2)
    0 packets, 0 bytes
    1 minute offered rate 0 bps, drop rate 0 bps
    Match: ip dscp 10 12 15 (1868)
    Traffic Shaping
      Target    Byte    Sustain    Excess     Interval   Increment  Adapt
      Rate      Limit   bits/int   bits/int   (ms)       (bytes)    (active)

      1024000   3200    12800      12800      25         3299
      Queue Depth   Packets   Bytes   Packets Delayed   Bytes Delayed   Active
      0             0         0       0                 no
     Weighted Fair Queueing
       Output Queue: Conversation 265
       Bandwidth 50% Max Threshold 64 (packets)
       (pkts matched/bytes matched) 0/0
       (pkts discards/bytes discards/tail drops) 0/0/0

Using this information, what is the CIR value for all the traffic marked with DSCP value 10?
a) 128000
b) 256000
c) 512000
d) 1024000
e) Cannot be determined

5-17. What are the primary reasons to implement traffic shaping on a network? (Choose all that apply.)
a) To regulate and thus control the average queue size by indicating when transmission of packets should be halted temporarily.
b) To control access to available bandwidth on the network.
c) To define Layer 3 aggregate or granular bandwidth rate limits.
d) To control the maximum rate of traffic on an interface.
e) To ensure that traffic conforms to the policies established for it.
f) To prevent denial of service attacks.
g) To drop high levels of unwanted traffic.

5-18. You have set up priority queuing on the serial interface of your router as follows:

priority-list 1 protocol ip high list 101
priority-list 1 protocol ip medium list 102
priority-list 1 protocol ip normal list 103
priority-list 1 protocol ip low list 104
priority-list 1 default low
access-list 101 permit ip any any precedence critical
access-list 102 permit ip any any precedence flash
access-list 103 permit ip any any precedence priority

access-list 104 permit ip any any precedence network

A packet reaches the router with an IP Precedence value of 4. What priority will this packet be assigned by the router?
a) Low
b) Normal
c) Medium
d) High
e) Critical
f) Flash


5-19. A BootCamp router's interface is configured for traffic shaping as follows:

interface Serial 1.1 point-to-point
 ip address 10.16.1.1 255.255.255.252
 frame-relay class BootCamp
 frame-relay interface-dlci 220
!
!
map-class frame-relay BootCamp
 frame-relay cir 128000
 frame-relay bc 8000
 frame-relay be 8000
 no frame-relay adaptive-shaping

In what units are the bc and be parameters measured in the above configuration?
a) Bits per millisecond.
b) Bits per interval.
c) Bytes per interval.
d) Bytes per second.
e) Bits per second.
f) Bytes per millisecond.

5-20. Consider the following scenario: An interface has been configured for custom queuing across a DS3 interface. Bandwidth has been allocated for three application flows: A, B and C. The average packet size for each application is as follows:

Application A = 2000
Application B = 1000
Application C = 500

You wish to configure the router to allow 20% of the bandwidth to be allocated to flow A, 50% to flow B, and the remaining 30% to flow C. If only one packet is serviced for flow A per pass, how many packets need to be allowed for flow C to maintain the 20:50:30 ratio?
a) 3

b) 4
c) 5
d) 6
e) 500
f) More information needed

5-21. Your VOIP network needs to give priority to the VOIP traffic across the serial interface of a router. You wish to support this by implementing a solution that enables the router to service the voice traffic in a strict priority queue. All other non-voice traffic should be serviced using the weighted fair queuing mechanism. Which command should you enable on this serial interface?
a) fair-queue
b) ip cef
c) priority-group
d) ip rtp priority
e) priority-queuing

5-22. A serial interface with flow-based WFQ is carrying 25 flows in the following fashion:

Twelve flows are marked as IP Precedence 0.
Ten flows are marked as IP Precedence 1.
Three flows are marked as IP Precedence 5.

Using the above information, how much interface bandwidth is allocated to one of the flows that are marked as IP Precedence 5?
a) 1%
b) 4%
c) 12%
d) 15%
e) 33%
f) Cannot tell from the information given

5-23. Using the 3-layer hierarchical approach to a network, what QoS functions are performed at the access layer? (Choose 2)
a) Packet classification
b) Congestion management
c) Classification preservation
d) Congestion avoidance
e) Admission control

5-24. You need to give your new VOIP traffic priority over other traffic types in your network. To do this, you plan to implement custom queuing. What statement is FALSE about custom queuing?

a) Custom queuing defines up to 16 queues.
b) Custom queuing has one preemptive priority queue. This can be extended to multiple priority queues by configuring the 'lowest-custom' queue in the 'queue-list'.
c) In custom queuing there is a weight assigned to each queue which specifies how each queue is treated.
d) With custom queuing you cannot specify a minimum bandwidth guarantee per queue.
e) In custom queuing you can classify based on the incoming interface.

5-25. Which of the following is a required configuration parameter for setting up NBAR?
a) match protocol IP
b) match nbar type 1
c) match ftp session passive
d) match protocol http
e) match url www.cisco.com


5-26. The BootCamp network is using Class of Service to prioritize the traffic throughout the network. Setting the CoS IP Precedence bits can be done in what situation?
a) For ATM CLP traffic only
b) To set the frame-relay DE bit
c) When we receive an HDLC frame with the DEADBEEF pattern
d) On a router on ISL or DOT1Q trunks in the output direction only
e) None of the above
f) All of the above

5-27. The BootCamp network plans to implement some method of quality of service using DSCP information. In comparing the different options, which of the following statements is TRUE?
a) The IP precedence and DSCP have no overlapping fields.
b) The DSCP contains class selectors for backward compatibility with the IP precedence.
c) The DSCP is exactly the same as IP precedence; the name change is merely a marketing naming convention.
d) The last 2 bits of the DSCP overlap with IP precedence.
e) DSCP is only for TCP; IP precedence is for UDP.
f) None of the above.

5-28. The BootCamp network is using QoS to prioritize the critical traffic over busy links. What command would be used to configure the Modular QoS CLI (MQC) to allow for a maximum bandwidth of 64 kb/s during times of network congestion, and, when there is no congestion, to allow the use of more bandwidth?


a) bandwidth 64
b) priority 64
c) police 64000 conform-action transmit exceed-action drop
d) shape average 64000
e) all of the above

5-29. Priority queuing is being configured on router NL1 to give mission-critical traffic priority over the WAN link. What statement is true with regard to priority queuing?
a) There are 4 priority queues: high, medium, normal, low.
b) The high and medium queues have precedence over the default queue.
c) The classification is configurable via the command 'priority-list'.
d) The default queue is the normal queue, by default.
e) All of the above.
f) None of the above.

5-30. The IP precedence of a packet can be determined from:
a) All 8 bits of the ToS byte.
b) Bits 3, 4 and 6 of the ToS byte.
c) The three most significant bits of the ToS byte.
d) The three least significant bits of the ToS byte.

5-31. Router NL1 is configured for QoS as shown below:
ip cef
!
class-map match-all cos34
 match cos 3 4
class-map match-all transp
 match protocol http
 match protocol telnet
class-map match-all naughty
 match protocol napster
 match cos 1
!
policy-map bootcamp
 class cos34
  set dscp af33
 class transp
  set dscp af21
 class naughty
  set dscp cs1
!
interface FastEthernet0/0/0
 ip address 192.168.1.1 255.255.0.0
 service-policy input bootcamp

Using the configuration displayed above, what statement is correct about ingress traffic to the fa0/0/0 interface on NL1? a) All ingress frames marked as COS 0 will be marked as DSCP 0.

b) All ingress frames marked as COS 1 will be marked as DSCP cs1. c) All ingress HTTP traffic will be marked as DSCP af21. d) All ingress Napster traffic will be marked as DSCP cs1.


e) All ingress frames marked as COS 3 or COS 4 will be marked as DSCP af33. f) None of the above. 5-32. Which of the following is FALSE regarding differences between Generic Traffic Shaping (GTS) and Frame Relay Traffic Shaping (FRTS)? a) GTS supports the traffic group command while FRTS does not. b) For GTS, the shaping queue is weighted fair queue (WFQ). FRTS does not support WFQ. With FRTS, the queue can be a CQ, PQ or FIFO. c) FRTS supports shaping on a per-DLCI basis, while GTS is configurable per interface or subinterface. d) GTS works with many Layer 2 technologies, including Frame Relay, ATM, Switched Multimegabit Data Service, and Ethernet. FRTS is supported only on Frame Relay interfaces. 5-33. In the BootCamp Frame Relay network, Class Based Shaping is being used to increase network performance. Which of the following is a true statement regarding Class Based Shaping? a) CB shaping allows to rate-limit traffic in both incoming and outgoing directions. b) CB shaping provides a rate-limiting functionality with an associated amount of buffers, to store temporary out of profile traffic. c) CB shaping can only be configured in a child policy in a hierarchical policy map. d) CB shaping is a versatile feature which allows to both queue and remark traffic in input. e) None of the above f) All of the above 5-34. The BootCamp network is using FRTS to optimize the data flows within the network. In Frame Relay traffic shaping (FRTS), what is the Committed burst (Be) parameter? a) The Be is optional, and can be 0. It tells IOS how much extra bandwidth can be used on top of the CIR. b) The Be is a parameter which needs to be negotiated with the provider of the Frame Relay circuit. It defines the percentage of the Frame Relay circuit IOS will use to send bursty traffic. c) Be is a mandatory parameter when configuring FRTS. It defines a traffic rate up to which IOS will send traffic) d) Be defines the amount of token added to the token bucket at each interval. The token bucket algorithm is used in FRTS. If not configured, it defaults to 56000 bits.

e) Be is the total size of the token bucket. This includes the excess burst and conform burst. f) None of the above are true. 5-35. In weighted fair queuing (WFQ), one can configure a 'congestive-discard-threshold' (CDT). What is the CDT value used for? a) This threshold specifies from which point on IOS should start using WFQ. b) The CDT specifies the number of messages allowed in each queue. c) The CDT specifies the maximum amount of messages to be used by WFQ for high bandwidth traffic. d) The CDT defines a value from when IOS starts to account all messages in the WFQ system in conjunction with Netflow. e) CDT means the maximum amount of dynamic flows IOS will allow for WFQ. f) None of the above 5-36. What is true about Class Based Weighted Fair Queuing (CBWFQ)? a) CBWFQ provides delay, jitter and bandwidth guarantees to traffic. b) CBWFQ can be configured on any interface in either input or output. c) CBWFQ has to be configured with the Modular QoS CLI. The resulting service-policy has to be applied on output. d) CBWFQ can only be configured in a hierarchical policy-map. The parent policy-map does policing and the child policy-map does CBWFQ. e) All of the above f) None of the above 5-37. CAR has been configured on router NL1. What best defines Committed Access Rate (CAR)? a) CAR allows metering of traffic for traffic shaping. b) CAR is a feature that allows the rate limiting of traffic in either the incoming or outgoing direction. c) CAR is part of a set of features to be used in conjunction with queuing to form a hierarchical policy. CAR must always be applied in a parent policy-map, whereas CBWFQ should be applied in a child policy-map. d) CAR is a queuing feature. e) CAR matches only on UDP port range {16384 - 32767}. 5-38. You wish to enable the Resource Reservation Protocol on one of the interfaces of a router. Which of the following commands will accomplish this? a) ip rsvp sender b) ip rsvp enable c) ip rsvp bandwidth d) rsvp enable e) ip rsvp reservation

f) RSVP is enabled in global configuration mode, not in interface configuration mode. 5-39. Which of the following statements is valid regarding Custom Queuing?


a) Custom queuing always services the highest priority traffic first before servicing the lower priority traffic. b) Custom queuing looks at groups of packets from similar source-destination pairs. c) Custom queuing processes the queue depending on the number of packets sent. d) Custom queuing will not proceed to the next queue unless the current queue is empty. e) Custom queuing can prevent one type of traffic from saturating the entire link. 5-40. Due to intermittent congestion issues on a link, Committed Access Rate (CAR) has been configured on an interface. During a period of congestion, a packet arrives that causes the compounded debt to be greater than the value set for the extended burst. Which of the following will occur due to this? (Choose all that apply). a) CAR's exceed action takes effect, dropping the packet. b) A token is removed from the bucket. c) The packet will be queued and eventually serviced. d) The compounded debt value is effectively set to zero (0). e) The packet is buffered by the CAR process. 5-41. In an effort to minimize the risks associated with DoS and ICMP flooding attacks, the following is configured on the serial interface of a router:

interface serial 0
 rate-limit input access-group 199 128000 4000 4000 conform-action transmit exceed-action drop
access-list 199 permit icmp any any

What QoS feature is this an example of? a) CBWFQ b) LLQ c) RSVP d) CAR e) WFQ f) FRTS 5-42. Which of the following are functions of Random Early Discard (RED)? (Choose all that apply) a) To avoid global synchronization for TCP traffic. b) To provide unbiased support for bursty traffic.

c) To minimize packet delay jitter. d) To ensure that high priority traffic gets sent first. e) To prevent the starvation of the lower priority queues. 5-43. You issue the following configuration change on router NL1: ip rsvp sender 225.1.1.1 192.1.2.1 UDP 3030 192.1.2.1 serial020 1 What is the effect of this change? a) The router will simulate receiving RSVP PATH messages destined to multicast address 225.1.1.1 from source 192.1.2.1. The previous hop of the PATH message is 192.1.2.1, and the message was received on interface serial 0. b) The router will simulate generating RSVP RESV messages destined to multicast address 225.1.1.1 from source 192.1.2.1. The next hop of the PATH message is 192.1.2.1, and the message was received on interface serial 0. c) The router will act as if it was sending RSVP PATH messages destined to multicast address 225.1.1.1 from source 192.1.2.1. The next hop of the PATH message is 192.1.2.1, and the message was received on interface serial 0. d) The router will act as if it was receiving RSVP RESV messages destined to multicast address 225.1.1.1 from source 192.1.2.1. The previous hop of the PATH message is 192.1.2.1, and the message was received on interface serial 0. 5-44. Rate limiting is configured on the Ethernet interface of a router as follows:

interface Ethernet 0
 rate-limit input access-group rate-limit 1 1000000 10000 10000 conform-action
access-list rate-limit 1 mask 07

What effect will this configuration have? a) The command access rate policing limits all TCP traffic to 10 Mbps. b) Traffic matching access-list 7 is rate limited. c) Voice traffic with DiffServ code point 43 is guaranteed. d) Traffic with IP Precedence values of 0, 1, and 2 will be policed. 5-45. When configuring Low Latency Queuing (LLQ), a bandwidth parameter is needed. What does this parameter specify? a) It provides a built-in policer to limit the priority traffic in the LLQ during congestion. b) This parameter is optional, since the LLQ will always have precedence over other queues. c) This parameter should be as low as possible. It represents bandwidth which will always be reserved. It reduces the amount of bandwidth on the interface, even if it is not used by any LLQ traffic.



d) It represents the reference CIR to calculate the burst size of the token bucket of the built-in policer. e) None of the above 5-46. What statement is FALSE with regards to Weighted RED (WRED)? a) WRED is a congestion avoidance mechanism, based on the adaptive nature of TCP traffic for congestion. b) WRED is a queuing feature. c) WRED allows for differentiated dropping behavior depending on either IP precedence or DSCP. d) WRED is configurable in a CBWFQ policy-map. e) All of the above are false statements.

Chapter 5 Answers 5-1 5-2 5-3 5-4 5-5 5-6 5-7 5-8 5-9 5-10 5-11 5-12 5-13 5-14 5-15 5-16 5-17 5-18 5-19 5-20 5-21 5-22 5-23 5-24 5-25 5-26 5-27 5-28 5-29 5-30 5-31 5-32 5-33 5-34 5-35 5-36 5-37 c c f c c b b, c a c d a, b, d c b d c b, a b d d c a, e b d b a c b b a c c b

5-38 5-39 5-40 5-41 5-42 5-43 5-44 5-45 5-46 c a,d d a, b a d a b


Chapter 6: Wide Area Networking (WAN)


Leased Line Protocols
The two common leased line protocols are HDLC and PPP. HDLC is the default encapsulation on serial interfaces.

High-Level Data Link Control (HDLC)
HDLC, as implemented by Cisco, is a proprietary WAN encapsulation method that allows leased-line point-to-point connections between two sites. It is a connectionless protocol that relies on upper layers to recover any frames that have encountered errors across a WAN link.
Station Types: HDLC has three station types: primary, secondary, and combined. With primary and secondary stations, the primary station manages the connections to the secondary stations, and the secondary stations respond to commands from the primary station (example: mainframe and terminal). Combined stations have features of both primary and secondary, and are usually used in point-to-point links.
Communication Types: HDLC has two communication types: balanced and unbalanced. Balanced is used for communication between two combined systems, and unbalanced is used in communication between a primary and a secondary station.
Data Transfer Modes: HDLC has three data transfer modes: NRM, ABM, and ARM. Balanced communication uses Asynchronous Balanced Mode (ABM), and either combined station can initiate communication. Unbalanced communication uses either Normal Response Mode (NRM) or Asynchronous Response Mode (ARM). In Normal Response Mode, the secondary station can only send data after receiving a command from the primary station. In Asynchronous Response Mode, the secondary can respond at will, and the primary is responsible for error detection and flow control.
Frame Types: There are three types of frames: I, S, and U. Information (I) frames carry the data, including sequence numbers. Supervisory (S) frames carry commands and responses, acknowledge frames, and request retransmission. Unnumbered (U) frames are used for link initiation and disconnection.
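Because HDLC is the default on Cisco serial interfaces, a leased-line link normally needs very little explicit configuration. The sketch below is only an illustration (the interface number and addressing are invented); the encapsulation command is shown even though it is the default:

interface Serial0
 ip address 10.1.1.1 255.255.255.252
 encapsulation hdlc
 ! hdlc is the default encapsulation, so this line is normally implicit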

Point-to-Point Protocol (PPP)


PPP is a standard method of transporting multi-protocol datagrams over point-to-point links. On ISDN, PPP runs only over the B channels, while the Q.921/Q.931 protocols run only over the D channel. PPP provides:
A means of encapsulating multi-protocol datagrams
A Link Control Protocol (LCP) for establishing, configuring and testing the data-link connection
A set of Network Control Protocols (NCPs) for establishing and configuring network layer protocols

PPP traffic can be compressed, if compression is enabled. This compression is controlled by the Compression Control Protocol (CCP). There are three types of PPP compression available today:
MPPC (Microsoft Point-to-Point Compression)
Predictor
Stacker

PPP provides two methods of authentication, PAP and CHAP. CHAP is preferred because PAP transmits passwords in clear text over the network. CHAP and PAP authentication require the router to have a user/password database that is created by issuing the global username command. For example, this command would add the username "RouterA" with password "ciscorocks" to the local database (remember that everything is case sensitive):
Router (config)# username RouterA password ciscorocks
By default the router identifies itself in CHAP with its own hostname, but it can be configured to send a different username by using this command on the interface (the new username is "ciscoland"):
Router (config-if)# ppp chap hostname ciscoland
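Putting these pieces together, a minimal two-way CHAP sketch might look like the following; the hostnames, password and addressing are invented for illustration, the username configured on each router is its peer's hostname, and the password must match on both sides:

hostname RouterA
username RouterB password ciscorocks
!
interface Serial0
 ip address 10.1.1.1 255.255.255.252
 encapsulation ppp
 ppp authentication chap

RouterB would mirror this with "hostname RouterB" and "username RouterA password ciscorocks".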

Modems and Async
Cisco routers have a built-in modem compatibility database (modemcap) to issue the correct initialization strings. There is no clock (hence the term asynchronous), and something must maintain in-band timing. Use DTE lock to avoid speed mismatches; modems often try to match the inbound transfer rate of the modem to the DTE. Set the speed under the TTY line, and at the modem with AT commands.
Chat Scripts and System Scripts can be triggered for DDR on startup, on connection, at line activation, and to reset the modems. Chat scripts are especially useful because they can reset modem configurations, dial, and remotely log in to a host and detect line failure. They can be used to initialize a modem attached to a router, automatically dial out on a modem, and log in and execute commands on another system or router.

Frame Relay Basic Facts
Frame Relay is a Layer 2 protocol.
Serial interfaces use DB-60 connectors.
It is connection-oriented, transporting data between a DTE device and a Frame Relay switch.
Simple error checking is provided by appending a Frame Check Sequence (FCS) to each frame (similar to a CRC).
There is no error correction (error checking, but no correction; that is left to the host).
Frame Relay uses HDLC, PPP, or ISDN/LAPD encapsulations.
The maximum speed of Frame Relay is 45 Mbps.

Types of Circuits
Permanent Virtual Circuits (PVCs) are used for frequent and long connection times. As the name implies, they are brought up to be permanent connections, and are always available (except during an outage).
Switched Virtual Circuits (SVCs) are for sporadic or infrequent traffic. They are set up when needed and torn down when not.

In a Frame Relay configuration, the router interface initially assumes that the connection is up. After missing three consecutive LMI status messages, the interface will go down. This explains why the interface will show an "up" status for a short time before going back down. In this case the count for LMI sent will increase while the count for LMI received remains at zero. This indicates a misconfigured LMI type.

Data Link Connection Identifier (DLCI)
DLCIs are assigned by the Frame Relay circuit provider, and have local significance only. They provide an identifier for the connection between the router at your site and the Frame Relay switch at the provider. There is often confusion about this, so to make it clear: the DLCI is used only between your site and the provider's point-of-presence; it has no significance beyond that. DLCI states are:
Deleted: No LMI signal is being received from the switch, or no service is available from the switch.



Active: Lines are up; connections are active. Routers are exchanging data.
Inactive: The Frame Relay switch-to-local connection is working. The remote router's connection to the frame switch is not working.

Local Management Interface (LMI)
LMI provides the control protocol for PVC setup and management. There are three types available: Cisco, ANSI and Q.933a (the default is Cisco). The service provider will specify the LMI in use. LMI messages control keepalives and verify the dataflow. The LMI type must be identical between the local device (router) and the local Frame Relay switch; it does not have to be identical for the end devices.
LMI Autoconfigure: A router with IOS 11.2 or newer does not need to be configured for the LMI. The newer IOS will send all three LMI types to the Frame Relay switch until the switch responds.

DLCI Capacity Calculation
To recapitulate what was covered in Chapter 5: The Report Type (RT) portion of a PVC information packet is one byte long and the KeepAlive (KA) portion is two bytes long. For the ANSI and Q.933a LMIs, the PVC information is 3 bytes long. For the Cisco LMI it is 6 bytes long, with 3 additional bytes for the bw (bandwidth) value. The bw value represents the Committed Information Rate (CIR); the actual bw value will only be seen if the Frame Relay switch is configured to forward this information. The static overhead in each case is 13 bytes: the entire LMI packet minus IEs (10 bytes) + RT (1 byte) + KA (2 bytes). We can subtract this number from the Maximum Transmission Unit (MTU) to get the total available bytes for DLCI information. We then divide that number by the length of the PVC IE (5 bytes for ANSI and Q.933a, 8 bytes for Cisco) to get the maximum theoretical number of DLCIs for the interface:
For ANSI or Q.933a, the formula is: (MTU - 13)/5 = max DLCIs.
For Cisco, the formula is: (MTU - 13)/8 = max DLCIs.
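As a quick worked example (assuming the default serial MTU of 1500 bytes): for ANSI or Q.933a LMI, (1500 - 13)/5 = 1487/5 = 297 DLCIs once the fraction is dropped; with the 8-byte Cisco PVC IE, 1487/8 works out to roughly 186 DLCIs.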

Encapsulation
The encapsulation choices are Cisco and IETF, with Cisco being the default. The encapsulation can also be designated per DLCI. The encapsulation type must be identical at both end devices. If Cisco devices are used across the entire network, Cisco encapsulation will likely be the encapsulation type; however, since the Cisco encapsulation type is proprietary, if another manufacturer's devices are used at the Frame Relay endpoints, then the IETF encapsulation type will be required. Remember, encapsulation can be set per interface or per destination.
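As an illustration only (the addresses and DLCI below are invented), IETF encapsulation can be set for the whole interface or overridden for a single map entry:

interface Serial0
 encapsulation frame-relay ietf
!
! or, keeping Cisco encapsulation on the interface and overriding one DLCI:
interface Serial0
 encapsulation frame-relay
 frame-relay map ip 192.168.1.5 105 broadcast ietf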

Split Horizon and Frame Relay Interfaces
Split horizon dictates that if a router has received a route advertisement from another router, it will not re-advertise it back out the interface on which it was learned. The default condition for Frame Relay interfaces is:
Physical interfaces: split horizon is disabled by default
Multipoint subinterfaces: split horizon is enabled by default
Point-to-point subinterfaces: split horizon is enabled by default

With distance vector protocols such as IGRP, the split horizon rule can be a problem. The split horizon rule prohibits a router from advertising a route through any interface that the router itself uses to reach a destination. Without subinterfaces, split horizon goes into effect, and all routes learned from the serial interface will not be advertised from that interface.
Subinterfaces under Frame Relay provide a mechanism for supporting partially meshed Frame Relay networks. Most protocols assume transitivity on the logical network: if station A can communicate with station B, and station B can communicate with station C, then station A should be able to communicate with station C. This is true on LANs, but is not true on Frame Relay networks unless station A is directly connected to station C. By using Frame Relay subinterfaces, a single physical interface can be treated as multiple virtual interfaces. This overcomes split horizon rules. Packets received on one virtual interface can then be forwarded on another virtual interface, even if they are received via the same physical interface. The problem is that certain protocols (such as AppleTalk and transparent bridging) cannot be supported on partially meshed networks because they require split horizon, and packets received on an interface cannot be transmitted out the same interface even in cases where the packet is received and transmitted on different virtual circuits. These limitations are addressed in subinterfaces through a means to subdivide a partially meshed Frame Relay network into a number of smaller, fully meshed (or point-to-point) subnetworks. Each subnetwork has its own network number assigned, and appears to the protocols to be reachable through a separate interface. One thing that helps is that point-to-point subinterfaces can be unnumbered for use with IP, reducing the addressing burden that might otherwise result.
IP RIP allows split horizon to be disabled. This can be accomplished through subinterfaces, or with the no ip split-horizon interface command. This will disable split horizon for IP traffic, including RIP. Split horizon cannot be disabled for IPX RIP, however.
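Two common ways of working around split horizon on a Frame Relay hub router are sketched below; the interface numbers, addresses and DLCI are invented for illustration. Either disable IP split horizon on the multipoint physical interface, or break the cloud into point-to-point subinterfaces so that each PVC is its own logical interface:

interface Serial0
 encapsulation frame-relay
 ip address 192.168.1.1 255.255.255.0
 no ip split-horizon
!
! alternative: point-to-point subinterfaces (one shown)
interface Serial0.1 point-to-point
 ip address 10.1.1.1 255.255.255.252
 frame-relay interface-dlci 102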

Speed Elements


Committed Information Rate (CIR): The maximum transmission rate you've negotiated in your contract with the provider to transfer information under normal circumstances. This is what you define as the peak level of traffic you will send and still be guaranteed service. Be careful when reviewing the contracts; some vendors will attempt to slip in a CIR of 0, meaning they will do their best to provide service, but they're not guaranteeing anything.
Local Port Speed: The maximum speed at which your local interface can send information.
Committed Burst Rate: The maximum amount of data that a Frame Relay internetwork is committed to accept and transmit at the CIR. Providers oversubscribe their networks because the vast majority of the time their customers are not using the entire CIR. This is how long you can stay at the speed limit.
Excess Burst Rate: The maximum number of bits a Frame Relay node will attempt to transmit after the committed burst rate is exceeded. You can negotiate in the contract that you can occasionally exceed your CIR for short periods of time for some additional charge, and that the provider will attempt to cover you, but with no guarantees.

Congestion
The Backward Explicit Congestion Notification (BECN) bit, the Forward Explicit Congestion Notification (FECN) bit and the Discard Eligible (DE) bit provide notification of congestion in the Frame Relay cloud. The BECN bit is set by the Frame Relay network in frames traveling in the opposite direction of the frame encountering the congested path (it goes back to the sender). The DTE end receiving the frames with the BECN bit set can request higher-level protocols to take flow control action to alleviate some of the congestion. The Frame Relay network sets the FECN bit to provide notification that congestion has been encountered along the transmission path of the packet from the source to the destination (it goes to the receiver). The DTE receiving these frames can request higher-level protocols to take flow control action. The DE bit can be set by any Frame Relay switch or DTE device. If congestion is encountered, frames with the DE bit set will be discarded first.

Frame Relay Compression
There are two types of Frame Relay compression: payload and header. With payload compression, the actual data contained in each Frame Relay packet is compressed. With header compression, only the Frame Relay header is compressed. Each type of compression has its advantages. For example, say there are many small packets traversing the Frame Relay network.

The Frame Relay header takes up a lot of overhead compared to the small payload. In this case, header compression is preferable. On the other hand, with large text documents transmitted over the network, payload compression is the better choice, since the packets are large and compressible. There are three types of Frame Relay payload compression:
FRF.9
Cisco data-stream
Cisco packet-by-packet

Frame-Relay Mapping
Frame Relay is a Layer 2 protocol. To be used for Layer 3 communication, you must map Layer 2 to Layer 3 addresses. This mapping can be done dynamically or statically. Dynamic Frame Relay mapping is done with Inverse ARP. Static Frame Relay mapping is done with either the frame-relay interface-dlci command or the frame-relay map command.
A Frame Relay network can be fully meshed, partially meshed, or a combination of these. With a fully meshed network, every router has a PVC to every other router. A partially meshed Frame Relay network is usually designed around a hub and spoke topology in which one router has a PVC to every other router, but all remote routers must go through the hub router to reach other remote routers. A Frame Relay network can also be a combination of a partially and fully meshed network. In that case, you might have a hub and spoke partial mesh configured, but some remote routers have direct connections to other remote routers. Design of a Frame Relay network is important since the design dictates how Frame Relay mapping statements are configured.
There are different ways to configure different network topologies. When configuring Frame Relay, you can configure only the physical interfaces, use point-to-point subinterfaces, or use multipoint subinterfaces. When using a physical interface, the mapping statements apply to that interface (example: Serial0/0). When using any kind of subinterface, the mapping statements apply to the subinterface (example: Serial0/0.1). Some representative configurations are below:

Frame Relay interface with multiple subinterfaces, using the interface-dlci command:

interface Serial1/0:0
 description Frame Relay T1-1536K
 no ip address
 encapsulation frame-relay IETF
!
interface Serial1/0:0.1 point-to-point
 ip address 10.1.100.1 255.255.255.252
 frame-relay interface-dlci 80
!
interface Serial1/0:0.2 point-to-point
 ip address 10.2.100.1 255.255.255.252
 frame-relay interface-dlci 106
!
interface Serial1/0:0.5 point-to-point
 ip address 10.5.100.1 255.255.255.252
 frame-relay interface-dlci 91

Frame Relay interface using map statements on the physical interface:

interface Serial0
 ip address 192.168.1.1 255.255.255.0
 encapsulation frame-relay
 frame-relay map ip 192.168.1.5 105 broadcast
 frame-relay map ip 192.168.1.6 106 broadcast

Other Frame Relay Issues
Traffic Shaping: Since the speed of the Frame Relay circuits can vary, it is important to control how much and which traffic is sent or received on an interface.
Queuing: Priority, weighted fair, and custom queuing allow for specialized control of the traffic.
Rate Enforcement: You can configure the maximum amount of traffic to pass out the interface by setting the transmission rate.

Frame Relay Adaptive Traffic Shaping
Frame Relay adaptive traffic shaping improves standard Frame Relay traffic shaping by adjusting permanent virtual circuit (PVC) sending rates depending on interface congestion. With adaptive traffic shaping enabled, interface congestion is monitored. When the congestion level exceeds a defined value called queue depth, the sending rate for all PVCs is reduced to the minimum committed information rate (minCIR). As soon as interface congestion falls below the defined queue depth, the PVC sending rate reverts to the committed information rate (CIR). This process guarantees minCIR for PVCs under interface congestion.
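A minimal configuration sketch for adaptive Frame Relay traffic shaping follows; the interface, map-class name, CIR and minCIR values are assumptions chosen purely for illustration:

interface Serial0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay class SHAPE-64K
!
map-class frame-relay SHAPE-64K
 frame-relay cir 64000
 frame-relay mincir 32000
 frame-relay adaptive-shaping becn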

The sum of minCIR values for all PVCs on the interface must be less than the usable interface bandwidth. Adaptive traffic shaping works in conjunction with backward explicit congestion notification (BECN) and ForeSight. With BECN and ForeSight, when adaptive shaping for interface congestion is enabled and interface congestion exceeds the queue depth, the PVC sending rate is reduced to the minCIR. When interface congestion drops below the queue depth, the sending rate is adjusted in response to BECN or ForeSight. The minimum CIR value is specified by the frame-relay mincir command. This command is optional; if it is omitted from the configuration, the default value is half the specified CIR value.

Physical Layer
Serial Interface Abbreviations
CSU   Channel Service Unit
CTS   Clear To Send [DCE --> DTE]
DCD   Data Carrier Detected (tone from a modem) [DCE --> DTE]
DCE   Data Communications Equipment (modems, DSU, etc.)
DSR   Data Set Ready [DCE --> DTE]
DSRS  Data Signal Rate Selector [DCE --> DTE] (not commonly used)
DSU   Data Service Unit
DTE   Data Terminal Equipment (computer, printer, etc.)
DTR   Data Terminal Ready [DTE --> DCE]
FG    Frame Ground (screen or chassis)
NC    No Connection
RCk   Receiver (external) Clock input
RI    Ring Indicator (ringing tone detected)
RTS   Ready To Send [DTE --> DCE]
RxD   Received Data [DCE --> DTE]
SG    Signal Ground
SCTS  Secondary Clear To Send [DCE --> DTE]
SDCD  Secondary Data Carrier Detected (tone from a modem) [DCE --> DTE]
SRTS  Secondary Ready To Send [DTE --> DCE]
SRxD  Secondary Received Data [DCE --> DTE]
STxD  Secondary Transmitted Data [DTE --> DCE]
TxD   Transmitted Data [DTE --> DCE]

Is Your Interface a DTE or a DCE?


Generally a DTE provides a voltage on TD, RTS, and DTR. A DCE provides voltage on RD, CTS, DSR, and CD. You can figure out what you have in front of you by following these steps:
Measure the DC voltages between (DB25) pins 2 & 7 and between pins 3 & 7. Be sure the black lead is connected to pin 7 (Signal Ground) and the red lead to whichever pin you are measuring.
If the voltage on pin 2 (TD) is more negative than -3 volts, then it is a DTE; otherwise it should be near zero volts.
If the voltage on pin 3 (RD) is more negative than -3 volts, then it is a DCE.
If both pins 2 & 3 have a voltage of at least 3 volts, then either you are measuring incorrectly, or your device is not a standard EIA-232 device.

RS-232/EIA-232
The RS-232/EIA-232 standard has been around for decades, providing an interface between DTE and DCE devices. It is simple, universal, and well understood, although it does have many shortcomings. It has had various designations, including RS-232C, RS-232D, V.24, V.28 and V.10, but all these interfaces are essentially interoperable. EIA-232 is used for asynchronous data transfer as well as synchronous links, such as SDLC, HDLC, X.25 and Frame Relay. The standard provided connectivity at up to 256 kbps with line lengths of 15 m (49 ft); however, high-speed ports and high-quality cable have allowed these limitations to be exceeded. The length of cable and the speed it supports depend on the quality of the cable.
Clock signals are only used for synchronous communications. The modem or DSU extracts the clock from the data stream and provides a steady clock signal to the DTE. The transmit and receive clock signals do not have to be the same.
Some of the shortcomings of EIA-232 include:
The interface uses a common ground between DTE and DCE, which is acceptable when you are using a short cable that connects DTE and DCE devices in the same room. But with longer links between devices, this may not hold true.
It is not possible to effectively screen noise for a signal on a single line. By screening the entire cable, you can reduce the influence of outside noise, but internally generated noise is still a problem.
As the baud rate and line length increase, capacitance between the cables introduces crosstalk, until the data becomes unreadable.

V.35 Interface
V.35 is a high-speed serial interface standard designed to support DTE and DCE connectivity over digital lines. It was originally specified by the CCITT as an interface for 48 kbps line transmissions and has since been adopted for higher speeds. It was discontinued by the CCITT in 1988, and replaced by recommendations V.10 and V.11. V.35 combines the bandwidth of several telephone circuits to provide an interface between a DTE or DCE and a CSU/DSU. Cable distances can theoretically reach 4000 feet (1200 m) at speeds up to 100 kbps, depending on the equipment used and the cable quality. To achieve such speeds and distances, V.35 combines balanced and unbalanced voltage signals on the same interface. V.35 control signals are common-earth single-wire interfaces, because these signal levels are mostly constant or vary at low frequencies. The high-frequency data and clock signals are carried by balanced lines (this means each signal has its own ground).
Most 56 kbps DSUs are supplied with both V.35 and EIA-232 ports, because EIA-232 is perfectly adequate at speeds up to 200 kbps and generally provides significant cost savings.

Troubleshooting Serial Links
An important diagnostic tool for serial links is the show interfaces serial privileged exec command, which displays serial interface statistics and information. Presented below is a sample output, with some of the more important data described:

router#show interface s0
Line 01   Serial0 is up, line protocol is down
Line 02   Hardware is HD64570
Line 03   Internet address is 192.168.1.1/24
Line 04   MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 1/255
Line 05   Encapsulation HDLC, loopback not set, keepalive set (10 sec)
Line 06   Last input never, output 00:00:05, output hang never
Line 07   Last clearing of show interface counters never
Line 08   Input queue: 0/75/0 (size/max/drops); Total output drops: 0
Line 09   Queueing strategy: weighted fair
Line 10   Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Line 11   Conversations 0/1/256 (active/max active/max total)
Line 12   Reserved Conversations 0/0 (allocated/max allocated)
Line 13   5 minute input rate 0 bits/sec, 0 packets/sec
Line 14   5 minute output rate 0 bits/sec, 0 packets/sec
Line 15   0 packets input, 0 bytes, 0 no buffer
Line 16   Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
Line 17   0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
Line 18   114 packets output, 3343 bytes, 0 underruns
Line 19   0 output errors, 0 collisions, 39 interface resets
Line 20   0 output buffer failures, 0 output buffers swapped out
Line 21   74 carrier transitions
Line 22   DCD=up DSR=up DTR=up RTS=up CTS=up
router#

Line 1: This tells you whether the physical interface and line protocol for the interface are active. The physical interface can be up (Carrier Detect (CD) is present), down (CD not present), or administratively disabled (meaning someone has turned the interface off by issuing a shut command). The line protocol (the router Layer 2 process) considers the interface to be up if keepalives are being received. Below are descriptions of possible conditions for these two entries.
Line 4: Provides information about bandwidth, delay and reliability of the link.
Line 5: Shows the Layer 2 encapsulation type (Frame Relay, HDLC, X.25, etc.).
Line 8: Shows the number of input drops.
Line 9: Shows packet queue information (weighted fair queuing in this example).
Line 10: Shows the number of output drops.
Line 17: This line gives you significant troubleshooting information, including the number of input, CRC, frame and abort errors. Keep in mind these counters are cumulative, so when working a problem, run the show interface serial command multiple times to check whether the numbers are incrementing.
Line 19: Shows the number of interface resets.
Line 21: The number of carrier transitions shows how many times the CD signal of a serial interface has changed state. Usually this is a problem with the interface, or a problem with the carrier.

Show Controllers Command
One of the most important diagnostic tools for serial links is the show controllers exec command, which displays statistics and information about a serial interface. Below is sample output, with some of the more important data described. While there are variations on this command for other platforms, most routers will provide output similar to this:

Router#show controllers serial
HD unit 0, idb = 0xDC0BC, driver structure at 0xE1548        [Serial0]
buffer size 1524  HD unit 0, V.35 DCE cable, clockrate 19200
cpb = 0x1, eda = 0x4940, cda = 0x4800
RX ring with 16 entries at 0x4014800
00 bd_ptr=0x4800 pak=0x0E45DC ds=0x401ECC8 status=80 pak_size=0
. [Section omitted]
16 bd_ptr=0x4940 pak=0x0E259C ds=0x4018108 status=80 pak_size=0
cpb = 0x1, eda = 0x5000, cda = 0x5000
TX ring with 1 entries at 0x4015000
00 bd_ptr=0x5000 pak=0x000000 ds=0x000000 status=80 pak_size=0
01 bd_ptr=0x5014 pak=0x000000 ds=0x000000 status=80 pak_size=0
0 missed datagrams, 0 overruns
0 bad datagram encapsulations, 0 memory errors
0 transmitter underruns
0 residual bit errors
HD unit 1, idb = 0xE584C, driver structure at 0xEACD8        [Serial1]
buffer size 1524  HD unit 1, No cable, clockrate 19200
cpb = 0x2, eda = 0x3140, cda = 0x3000
RX ring with 16 entries at 0x4023000
00 bd_ptr=0x3000 pak=0x0EDD6C ds=0x402CE0C status=80 pak_size=0
. [Section omitted]
16 bd_ptr=0x3140 pak=0x0EBD2C ds=0x402624C status=80 pak_size=0
cpb = 0x2, eda = 0x3800, cda = 0x3800
TX ring with 1 entries at 0x4023800
00 bd_ptr=0x3800 pak=0x000000 ds=0x000000 status=80 pak_size=0
01 bd_ptr=0x3814 pak=0x000000 ds=0x000000 status=80 pak_size=0
0 missed datagrams, 0 overruns
0 bad datagram encapsulations, 0 memory errors
0 transmitter underruns
0 residual bit errors


From the above output you can see that S0 is connected via a V.35 cable, while S1 does not have a cable connected.

Debug Commands
There are a number of debug commands useful for diagnosing problems on serial links, including:
debug serial interface: Verifies whether HDLC keepalive packets are incrementing. If they are not, a timing problem may exist on the interface card or in the network.
debug arp: Shows whether the router is sending information about or learning about routers (with ARP packets) on the other side of the WAN cloud. Use this command when some nodes on a TCP/IP network are responding, but others are not.
debug frame-relay lmi: Gets Local Management Interface (LMI) information useful for determining whether a Frame Relay switch and a router are sending and receiving LMI packets.
debug frame-relay events: Finds out whether exchanges are occurring between a router and a Frame Relay switch.
debug ppp negotiation: Shows Point-to-Point Protocol (PPP) packets transmitted during PPP startup, where PPP options are negotiated.
debug ppp packet: Shows PPP packets being sent and received. This command displays low-level packet dumps.
debug ppp errors: Shows PPP errors (e.g. illegal or malformed frames) associated with PPP connection negotiation and operation.
debug ppp chap: Shows PPP Challenge Handshake Authentication Protocol (CHAP) and Password Authentication Protocol (PAP) packet exchanges.
debug serial packet: Shows Switched Multimegabit Data Service (SMDS) packets being sent and received. Also prints error messages indicating why a packet was not sent or was received erroneously. When an SMDS packet is transmitted or received, this command dumps the entire SMDS header and some payload data.
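One possible workflow for running these debugs from a remote session is sketched below; it only illustrates the general approach and is not a prescribed procedure:

Router# terminal monitor
Router# debug frame-relay lmi
(watch whether LMI status messages are both sent and received)
Router# undebug all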

Increasing Output Drops
Output drops occur when the router attempts to hand off a packet to a transmit buffer when none is available. By reviewing the output of repeated show interfaces serial privileged exec commands, you can determine if the output drop count is incrementing. In most cases this would be a problem, but if the link is understood to be oversubscribed, it might be preferable to drop packets in cases where the protocol provides flow support and can retransmit. There are several ways to address this problem:
Increase the bandwidth. This is the quick-and-dirty, throw-money-at-the-situation answer, but it should be considered. If you are dropping packets because there's too much traffic, widen the road.
Reduce periodic broadcast traffic through access lists and other means.
Turn off fast switching on the impacted interfaces for heavily used protocols.
Increase the output hold queue size using the hold-queue out interface configuration command. This will prevent packet drops, but should be done carefully and in small (25 percent) increments.
Introduce priority queuing on slower serial links by configuring priority lists. One of the main features of priority queuing is that lesser priority traffic will be dropped in favor of more important traffic. Again, this should be done with caution.
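A rough sketch of the last two options is shown below; the queue depth, protocol and port in the priority list are invented values used only for illustration:

! option 1: enlarge the output hold queue
interface Serial0
 hold-queue 60 out
!
! option 2 (instead): a simple priority list sending Telnet to the high queue
priority-list 1 protocol ip high tcp 23
priority-list 1 default normal
interface Serial0
 priority-group 1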

Increasing Input Drops
If you review the output of repeated show interfaces serial privileged exec commands and determine the input drop count is incrementing, this may be caused by one of several conditions: oversubscribing the line, hardware problems, or provider issues, including framing errors, aborts and CRC errors. A common cause of excessive input drops is that more packets are being received by the interface than can be processed by the router. This is typically seen when traffic is routed between higher speed LAN interfaces and serial interfaces. Backups can occur, forcing the router to drop packets during periods of congestion. There are several ways to address this:
Once again, increase the bandwidth. If you are dropping packets because there's too much traffic, increase the size of the pipe.
Use the hold-queue number out interface configuration command to increase output queue size on the interface that is dropping packets.
Reduce the input queue size from its default of 75 packets, using the hold-queue number in interface configuration command. This forces input drops to become output drops, which are less impactful.
Consistent high levels of input drops (exceeding one percent of total interface traffic) can be symptoms of:

Serial line noise
Defective or misconfigured CSU or DSU
Defective or misconfigured router


Faulty provider network equipment
Clocking misconfiguration
Bad or incorrectly configured cable, or a cable that exceeds maximum specification length
Problems caused by a data converter or other device in use between the router and DSU

These problems can be addressed in several ways:
You can use a serial analyzer to isolate the source of the input errors, looking at traffic before it reaches the router. If errors are detected, the problem is probably external to the router, or there may be a clock mismatch or a hardware problem on the external network. Use caution in doing this, as Cisco recommends against the use of data converters when connecting a router to a WAN or a serial network.
Use a combination of loopback configurations and ping tests to isolate specific problem sources.
Look for error patterns. Do errors occur at uniform intervals? Are they sporadic? Could they be related to some periodic function, such as routing updates?
Cyclic redundancy check (CRC) errors, framing errors, or aborts above one percent of the total interface traffic can indicate a significant link problem that should be diagnosed and repaired immediately.

Excessive Aborts
Aborts indicate an illegal sequence of 1 bits (more than seven in a row). This condition can be caused by any of the following:
Line clocking is improperly configured.
SCTE mode is not enabled on the DSU.
The serial cable is too long, or improperly shielded.
A packet terminated in mid-transmission (often because an interface was reset, or a framing error occurred).
A hardware problem (a bad circuit, bad CSU/DSU, or bad remote router sending interface).
A "ones" density problem on the T1 link (incorrect framing or coding specification).

Proper steps to resolve abort problems: Ensure all devices are configured to use a common line clock. If capable, set SCTE on both the local and remote CSU/DSUs.

Make sure the cable is properly shielded and shorter than the recommended length.
Check hardware at both ends of the link. Swap suspected faulty equipment, and ensure all connections are properly seated.
Lower the data transmission rates, and monitor the situation to determine if the rate of aborts decreases.
Use local and remote loopback tests to find out where the aborts are occurring.
Contact the provider and ask that they perform line integrity testing.

Clocking Problems
Serial connection clocking conflicts can lead to degraded performance and even chronic loss of connection service. In general, clocking problems in serial WAN interconnections can be attributed to one of the following causes:
Incorrect CSU or DSU configuration
Noisy or poor patch panel connections
Nonstandard cables that are too long or not properly shielded

Clocking Problems Serial connection clocking conflicts can lead to degraded performance and even chronic loss of connection service. In general, clocking problems in serial WAN interconnections can be attributed to one of the following causes: Incorrect CSU or DSL) configuration Noisy or poor patch panel connections Nonstandard cables that are too long or not properly unshielded Several cables connected in a row In the lab, the failure of a network engineer to apply the clock rate interface configuration command to the DCE side of the link

To find out whether you have a clocking problem, review output from the show interface serial exec command on both end routers. CRC, framing errors, and aborts are clocking problem indications. If the errors are in the range of 0.5 percent to 2.0 percent of traffic on the interface, there are probably clocking somewhere in the WAN. After you've decided that clocking conflicts are the most likely cause of input errors, use ping and loopback tests (both local and remote) to determine if the problem is in the line or one of the connections. Depending on these results, and the output of the show interfaces serial exec commands on the various routers, you can usually determine where the errors are accumulating: If only one end is experiencing input errors, there is probably a DSL) clocking or cabling problem. Aborts on one end suggest that the other end is sending bad information or that there is a line problem. If input errors are accumulating on both ends of the connection, clocking of the CSU is the most likely problem.

Clocking can be internal, looped, or line. The default is line clocking: the router receives clocking from the carrier network line.
Long cables that are not transmitting the TxC signal (the transmit echoed clock line, also known as TXCE or the SCTE clock) can create high error rates at faster transmission speeds. For example, when a PA-8T synchronous serial port adapter reports many error packets, the problem might be a phase shift. Inverting the clock may correct the shift.
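Two clocking-related commands worth knowing are sketched below; the interface number and clock rate are arbitrary examples. In a back-to-back lab setup the DCE side must supply the clock, and invert txclock can be tried when a long cable or a transmit-clock phase shift is suspected:

! on the router holding the DCE end of a back-to-back cable
interface Serial0
 clock rate 64000
!
! on a DTE interface suspected of a transmit-clock phase shift
interface Serial0
 invert txclock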



Increasing Interface Resets on a Serial Link
You can determine whether the interface reset count is incrementing by reviewing output from repeated show interfaces serial privileged exec commands. These errors indicate missed keepalive packets. This condition can be caused by:
Congestion on the link (typically associated with output drops).
Possible hardware problems at the CSU, DSU, or switch.
A bad line causing CD transitions.

When interface resets occur, you should examine other fields of the show interfaces serial command output to determine the problem source. Assuming that an increase in interface resets is recorded, examine the following fields:
If carrier transitions or input errors are high while interface resets are being registered, the problem may be a bad link or a bad CSU or DSU. Swap out any suspected faulty equipment.
If there are many output drops, address this problem as described earlier.

Increasing Carrier Transitions Count on a Serial Link
By reviewing the output of repeated show interfaces serial privileged exec commands you can determine if the carrier transitions count is incrementing. This occurs whenever there is an interruption in the carrier signal (such as an interface reset at the remote end of a link). This condition can be created by any of the following:
Line interruptions due to an external source, such as a break in the cabling, CSU/DSU alarms, or lightning striking somewhere along the network.
Equipment failure, such as a faulty switch, DSU, or router.

The proper steps to resolve carrier transition problems are:
Use a breakout box or a serial analyzer to check hardware at both ends of the link.
Check the router.
Swap out any suspected faulty equipment.

CRC and Framing Errors
CRC and framing errors occur when a CRC calculation fails (indicating that data is corrupted), or when a packet does not end on an 8-bit byte boundary, for one of the following reasons:
The serial line is too noisy.
The serial cable is too long or improperly shielded.
Clocking is incorrectly configured.
A "ones" density problem has occurred on a T1 link (indicating an incorrect framing or coding specification).

To resolve CRC and framing error problems:
Make sure that the cable is properly shielded and within the recommended length.
Double-check that all devices are properly configured with common line clocking, and that the local and remote CSU/DSUs are configured for the same framing and coding scheme as that used by the serial link provider in between (for example, ESF/B8ZS).
Ensure that the line is clean enough for transmission requirements.
Contact the provider and request that they perform integrity tests on the line.
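On a router with an integrated channelized T1 interface, the framing and line coding are matched to the provider on the controller; the controller number and timeslot assignment below are assumptions chosen for illustration only:

controller T1 0/0
 framing esf
 linecode b8zs
 clock source line
 channel-group 0 timeslots 1-24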

Alarms

Receive Alarm Indication Signal (Blue) A receive (Rx) alarm indication signal (AIS) means there is an alarm occurring on the line upstream from the equipment connected to the port. An AIS failure is declared when an AIS defect is detected at the input and still exists after the loss of frame failure is declared (caused by the unframed nature of the "all-ones" signal). The AIS failure is cleared when the loss of frame failure is cleared. Receive Remote Alarm Indication (Yellow) A receive remote alarm indication (RAI) means the far end equipment has a problem with the signal received from the upstream equipment. Transmit Sending Remote Alarm (Red) A Red alarm is declared when the channel service unit (CSU) cannot synchronize with the Tl line framing pattern. Transmit Remote Alarm Indication (Yellow) A transmit (Tx) remote alarm indication (RAI) at a DS1 interface means that the interface has a problem with the signal received from the far end equipment.

Transmit Alarm Indication Signal (Blue)


A transmit (Tx) alarm indication signal (AIS) means there is an alarm occurring on the line downstream from the equipment connected to the port.

Dynamic Packet Transport/Spatial Reuse Protocol (DPT/SRP)
DPT is a Cisco technology that uses dual, counter-rotating rings (inner and outer rings). DPT sends data packets in one direction on one fiber ring, and the corresponding control packets in the opposite direction on the other fiber ring. DPT takes advantage of a new MAC layer protocol, Spatial Reuse Protocol (SRP), which gets its name from allowing the destination node to strip packets from the ring, making bandwidth on the ring available to other packets.

Data Compression
Data compression can be classified as either hardware or software. Software compression can be further classified as CPU-intensive or memory-intensive. There are two main compression algorithms: Stacker compression and predictor compression. The Stacker method is more versatile, because it runs on any supported point-to-point Layer 2 encapsulation. Predictor only supports PPP and LAPB.

Stacker Compression
Stacker compression uses the Lempel-Ziv compression algorithm. The Stacker algorithm uses an encoded dictionary to replace a continuous sequence of characters with codes. This stores the symbols represented by the codes in memory in a dictionary-style list. Since the relationship between a code and the original symbol varies as the data varies, this approach is responsive to patterns in the data. This flexibility is especially important for LAN data, because many different applications can be transmitting over the WAN at any one time. As the data varies, the dictionary changes to adapt to the varying needs of the traffic. Stacker compression is more CPU-intensive than memory-intensive. To configure Stacker compression, issue the command compress stac from the interface configuration mode.

Predictor Compression
The algorithm used in predictor compression attempts to predict the next sequence of characters in a data stream, using an index to look up a sequence in a compression dictionary. It then examines the next sequence in the data stream to see if it has a match. If there is a match, that sequence replaces the looked-up sequence in the dictionary. If there is no match, the algorithm locates the next character sequence in the index and the process starts over.

The index updates itself by hashing the most recent character sequences from the input stream. No resources are used to compress already-compressed data. The compression ratio obtained using predictor is not as good as with other compression algorithms, but predictor compression remains one of the fastest algorithms available. Predictor compression is more memory-intensive than CPU-intensive. To configure predictor compression, issue the command compress predictor from the interface configuration mode. Cisco devices use both the Stacker and predictor data compression algorithms. The Compression Service Adapter (CSA) only supports the Stacker algorithm.
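As a brief illustration (the interface and encapsulation below are arbitrary, and both ends of the link must be configured for the same algorithm):

interface Serial0
 encapsulation ppp
 compress stac
!
! or, where memory is plentiful and the link runs PPP or LAPB:
! compress predictor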

Chapter 6 Questions
6-1. Which part of the frame relay network sends BECN? a) The frame switch b) The DTE device c) The DCE device d) The end host 6-2. Which of the following are true about "4B/5B" encoding? a) 4-bit values sent as 5-bit codewords b) 4-byte values sent as 5-byte codewords c) Has a 30% overhead d) Has a 20% overhead 6-3. What is the maximum speed of Frame Relay? a) 2 Mbps b) 10 Mbps c) 45 Mbps d) 52 Mbps e) 100 Mbps 6-4.


If you are establishing, configuring, maintaining, or terminating the data link layer of a PPP connection, which protocol are you running? a) TCP b) LCP c) NCP d) RDP

6-5.

Which steps take place when PPP is establishing a link? a) First, PPP sends LCP frames to establish, test, and configure the link b) First, PPP sends NCP frames to establish, test, and configure the link c) Second, PPP chooses and configures network protocols using LCP d) Second, PPP chooses and configures network protocols using NCP

6-6.

When a PPP link is coming up, what is the sequence of events? a) Authentication, protocol negotiation, link establishment, link termination

b) Authentication, link establishment, protocol negotiation, link termination c) Link establishment, protocol negotiation, authentication, link termination d) Link establishment, authentication, protocol negotiation, link termination 6-7. Which of the following does PPP provide? a) A means of encapsulating multi-protocol datagrams b) A Link Control Protocol (LCP) for establishing, configuring and testing the data-link connection c) Different forms of authentication d) A set of Network Control Protocols (NCPs) for establishing and configuring network layer protocols 6-8. Routers EM1, EM2, and EM3 are configured in a hub and spoke frame relay environment, with router EM1 as the hub. You have configured Router EM1, Router EM2, and Router EM3 to run IGRP over the frame relay connections. No sub-interfaces are used. You have configured a single IP subnet on all the Frame Relay interfaces. Router EM1 can reach both router EM2 and EM3, but EM2 and EM3 cannot reach each other. What is the probable cause of this problem? a) Router EM1 is missing frame maps. b) Split-horizon is enabled on Router EM1. c) Router EM2 and Router EM3 are not performing frame map updates. d) LMI mismatches between routers EM2 and EM3. e) Split-horizon is disabled on Router EM1. 6-9. You are troubleshooting a frame relay problem with the serial0 interface on one of your EnableMode routers. When the interface is brought up, it stays up for a short time before it goes back down. You issue the show interface command, and from this you can see that your interface shows LMI status messages sent, but none received. What could be the problem? a) The Frame Relay lmi-type is set incorrectly. b) Keepalives are not set correctly on both ends. c) The DCD is not set correctly for a Frame Relay circuit. d) Too many sub-interfaces are exceeding IDB limits. e) There are too many input errors on the line. 6-10. What is the maximum theoretical number of DLCI's that can be advertised on a Frame Relay interface with an MTU of 1500 bytes when using ANSI LMI?



a)1024 b) 297 c) 796 d) 992 e) 1023

f) 186
6-11. Which of the following are standards for physical WAN interfaces? (Choose all that apply) a) RFC 1711 b) HSSI c) V.35 d) 802.3 e) 802.5

f) 802.11
g) EIA/TIA 232 h) ISO 8648 6-12. What is the maximum theoretical number of DLCI's that can be advertised on a Frame Relay interface with an MTU of 1500 bytes when using ANSI LMI? a) 1023 b) 1024 c) 186 d) 297 e) 796

f) 992
6-13. What is a reason for the invert txclock command being configured?
a) It inverts the phase of the local clock used for timing incoming data on the serial line.
b) It corrects systems that use long cables that experience high error rates when operating at the higher transmission speeds.
c) It is used for adjusting the transmit clock properties of the PPP negotiation process.
d) It synchronizes TXD and RXD clocks.
e) It is used to allow the interface to provide clocking, rather than receiving clocking from the line.

6-14. You are troubleshooting a frame relay problem with the serial0 interface on one of your EnableMode routers. When the interface is brought up, it stays up for a short time before it goes back down. You issue the show interface command, and from this you can see that your interface shows LMI status messages sent, but none received. What could be the problem?
a) Keepalives are not set correctly on both ends.
b) The Frame Relay lmi-type is set incorrectly.
c) Too many sub-interfaces are exceeding IDB limits.
d) The DCD is not set correctly for a Frame Relay circuit.
e) There are too many input errors on the line.

6-15. Split horizon is often used with Poison Reverse to prevent routing loops. Of the following choices, which statement is FALSE regarding the rule of split horizon?
a) It aids in preventing routing loops.
b) It is enabled by default on multipoint Frame Relay subinterfaces.
c) It can be disabled for IP/RIP and IPX/RIP.
d) It can cause problems on certain Frame Relay hub-and-spoke configurations.
e) None of the above.

6-16. On router EM1, you want to view the status of a frame relay connection. Which show commands will display the status of a Frame Relay PVC? (Select all that apply)
a) show frame relay pvc
b) show frame relay interface
c) show frame-relay map
d) show frame-relay lmi
e) show frame-relay pvc
f) show frame-relay interface

6-17. Which of the following statements is true regarding clocking for a Cisco T1 interface?
a) Routers are DTEs and NEVER supply clocking to the T1/E1 line.
b) The clock source identifies the stratum level associated with the router T1/E1. The default is Stratum 1.
c) The clock source command specifies the location of the NTP server for timing.
d) The clock source command selects a source for the interface to clock outgoing data. The default is clock source line, which specifies that the T1/E1 link uses the recovered clock from the line.
e) The clock source command selects a source for the interface to clock received data. By default, it is clock source loop-timed (specifies that the T1/E1 interface takes the clock from the Tx (line) and uses it for Rx).

6-18. You are seeing a large number of clocking problems on the serial interface of one of your routers. Which of the following would NOT cause this? (Choose all that apply.)
a) Improper DSU configuration.
b) Impedance mismatching.
c) Several cables connected together in a row.
d) Mismatching encapsulations on each end.
e) Improper CSU configuration.

6-19. A serial interface on a Cisco router is being connected to an external CSU/DSU. The CSU/DSU has an RS-232 interface with a DB-25 connection. Which cables would be used to connect the router to the external CSU/DSU?
a) DB-60 female to DB-25 male (DTE)
b) DB-60 female to DB-25 female (DTE)
c) DB-60 male to DB-25 female (DTE)
d) DB-60 male to DB-25 female (DCE)
e) None of the above

6-20. A router has a T1 private line connection, with the encapsulation type set to HDLC. Which of the following are transfer modes that could be supported over this HDLC circuit? (Choose all that apply)
a) ARB
b) ABM
c) ARM
d) LAPB
e) LAPD
f) NRM


6-21. Routers EM1, EM2, and EM3 are configured in a hub and spoke frame relay environment, with router EM1 as the hub. You have configured Router EM1, Router EM2, and Router EM3 to run IGRP over the frame relay connections. No sub-interfaces are used. You have configured a single IP subnet on all the Frame Relay interfaces. Router EM1 can reach both router EM2 and EM3, but EM2 and EM3 cannot reach each other. What is the probable cause of this problem?
a) Split-horizon is enabled on Router EM1.
b) Split-horizon is disabled on Router EM1.
c) Router EM1 is missing frame maps.
d) Router EM2 and Router EM3 are not performing frame map updates.
e) LMI mismatches between routers EM2 and EM3.

6-22. You are troubleshooting connection problems from router EM1. In doing so, you issue the show interface serial 0 command and see: "serial 0 is up, line protocol down (disabled)." What can you conclude from this?
a) The Serial0 interface is using the wrong protocol.
b) The Serial0 interface needs to be enabled with the no shutdown command.
c) The Serial0 interface is not working properly due to telco service problems.
d) The Serial0 interface is operating properly.
e) A loop exists in the circuit.

6-23. When troubleshooting a T1 problem on your network, you discover that a number of RED alarms are being generated. What does this red alarm on a T1 indicate?
a) The CSU cannot synchronize with the framing pattern on the T1 line.
b) There is an alarm from the equipment connected to the port generating the alarm.
c) There is an alarm on the line upstream from the equipment connected to the port generating the alarm.
d) The far end equipment has a problem with the signal it is receiving from the upstream equipment.
e) The CSU is in a loopback.

6-24. The EM1 frame relay router is configured for frame relay traffic shaping as shown below:

hostname EM1
!
interface Serial0/0
 bandwidth 384
 encapsulation frame-relay
!
interface Serial0/0.101
 bandwidth 128
 ip address 192.168.1.1 255.255.255.0
 frame-relay interface-dlci 101
  class ccie
!
map-class frame-relay ccie
 frame-relay cir 128000
 frame-relay bc 16000
 frame-relay be 0
 frame-relay adaptive-shaping becn

Router EM1 is receiving BECNs. What is the lowest rate EM1 will shape its output traffic to?
a) 0 kbps

b) 16 kbps
c) 64 kbps
d) 128 kbps
e) 384 kbps

6-25. The EnableMode network is implementing VoIP on a frame relay/ATM internetworking network. Voice quality on the network is being affected by FTP traffic. What is required to enable fragmentation of the large FTP packets?
a) Fragmentation is not supported with frame relay to ATM service internetworking.
b) Fragmentation is already provided by default from the ATM network.
c) Configure FRF.12 fragmentation on the Frame Relay interface.
d) Configure MLPPP on the Frame Relay and ATM interfaces.
e) Configure PPP link fragmentation and interleaving on the EM1 and EM2 routers.

6-26. Two routers, EM1 and EM2, are configured for OSPF. Router EM2 is the HQ router with an ATM DS3, while router EM1 is a remote router connected via Frame Relay. These two locations are connected via Frame Relay to ATM internetworking. Router EM1 shows the EXSTART state for neighbor Router EM2. Router EM2 shows the EXCHANGE state for neighbor Router EM1. What would be the most probable reason for this?
a) This is the normal OSPF operation.
b) Multicast address 224.0.0.5 is being filtered at router EM1.
c) Multicast address 224.0.0.6 is being filtered at router EM2.
d) There is an OSPF network type mismatch.
e) There is an MTU mismatch.

6-27. With regard to PPPoA, which of the following statements are true? (Choose all that apply.)
a) PPPoA is not a standards-based protocol.
b) PPPoA contains information about NCP and LCP and supports all AAL types.
c) PPPoA uses ATM adaptation layer 5 (AAL5) as the framed protocol and is used primarily in xDSL.
d) In PPPoA architecture, IP address allocation for the subscriber CPE uses IPCP negotiation.
e) PPPoA supports all PPP features except PAP and CHAP password authentication.

6-28. An EnableMode branch office uses Telnet and FTP to access an application at the main office over a point-to-point T1 HDLC link. You wish to increase the performance over this link through the use of a compression algorithm. What compression type will provide the best performance improvement?
a) Predictor compression
b) Stacker compression
c) TCP header compression
d) Compressed Real-time Transport Protocol

6-29. Under one of the serial interfaces of your router you see the following configured:

interface serial 0/0
 encapsulation ppp
 ip address 10.1.1.1 255.255.255.252
 invert txclock

What is a reason for the invert txclock command being configured?
a) It inverts the phase of the local clock used for timing incoming data on the serial line.
b) It synchronizes TXD and RXD clocks.
c) It is used for adjusting the transmit clock properties of the PPP negotiation process.
d) It corrects systems that use long cables that experience high error rates when operating at the higher transmission speeds.
e) It is used to allow the interface to provide clocking, rather than receiving clocking from the line.


Chapter 6 Answers

6-1. a, c
6-2. a, d
6-3. c
6-4. b
6-5. a, d
6-6. d
6-7. a, b, c, d
6-8. b
6-9. a
6-10. b
6-11. b, c, g
6-12. d
6-13. b
6-14. b
6-15. c
6-16. c,
6-17. d
6-18. b, d
6-19. a
6-20. b, c, f
6-21. a
6-22. c
6-23. a
6-24. c
6-25. c
6-26.
6-27. c, d
6-28. b
6-29. d

Chapter 7: IP Multicast
Multicast technologies are located in routers and switches in the network infrastructure. IP Multicast relies on a single data stream. This data stream is replicated by routers at branch points in the network. IP multicast can thus use bandwidth efficiently, with reduced load on content servers, supporting greater reach at a lower cost per user. Three elements are required for multicast deployment:

The application
The network infrastructure
Client devices

Benefits of IP Multicast

IP Multicast technologies allow enterprises and service providers to leverage network resources for massively scalable distribution of data, voice, and video streams to a large number of users. Multicast allows organizations and service providers to:

Deploy and scale distributed applications across the network
Create a standard, enterprise-wide content distribution model
Resolve traffic congestion issues
Leverage existing infrastructure in deploying and supporting value-added services

Multicast

Unicast: A frame that is processed by the destination host (a single machine to a single machine).
Broadcast: A frame that is processed by every host on the broadcast domain (one machine to all machines).
Multicast: A frame that is processed by multicast members in the broadcast domain (a single machine to a select list of machines).

Using Multicast, an application sends a single stream of packets to a defined list or group of computers, instead of sending the stream individually to each recipient or flooding the network with broadcasts. Class-D addresses are allocated dynamically and are reserved for multicast traffic.


Figure 7-1. IP Multicast Network

IGMP and CGMP Multicast Protocols

The way to manage multicasts is to allow directed switching of multicast traffic, and also to configure switch ports dynamically. This ensures that IP multicast traffic is forwarded to the appropriate ports and to those ports only. Cisco switches use the following protocols (a brief configuration sketch follows this list):

Internet Group Management Protocol (IGMP): IGMP is a standard protocol for managing multicast transmissions to routed ports. A problem with this protocol is that when a VLAN on a switch is set to receive, all workstations on the VLAN will receive the multicast stream.
Cisco Group Management Protocol (CGMP): CGMP is a Cisco proprietary protocol for controlling the flow of multicast streams to individual VLAN port members. This solves the problem cited above, but the router must be running IGMP.
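As a rough illustration (not part of the exam blueprint itself), the sketch below shows how these features are typically switched on; the interface and VLAN numbers are hypothetical, and exact commands vary by platform and IOS release.

! Router side: CGMP runs per interface and requires PIM on that interface
interface FastEthernet0/0
 ip pim sparse-dense-mode
 ip cgmp
!
! Switch side (Catalyst IOS): IGMP snooping is usually on by default,
! but can be enabled globally and per VLAN
ip igmp snooping
ip igmp snooping vlan 10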

CGMP and IGMP software components run on Cisco routers and switches. When a CGMP/IGMP-capable router receives an IGMP control packet, it responds by creating the appropriate CGMP or IGMP packet containing the request type, the multicast group address, and the host MAC address. These request types can be "join" or "leave" messages. The router sends the packet on to a well-known

address to which all switches listen in order for the supervisor engine module to interpret the packet and automatically modify the forwarding table. When a spanning-tree VLAN topology changes, the CGMP/IGMP-capable router purges the old information and generates new multicast group information. When a CGMP/IGMP-learned port link is disabled, the corresponding port is removed from any multicast group. Routers running CGMP/IGMP transmit periodic multicast group queries, so when a host desires to remain in a multicast group, it must respond to the query. When the router receives no reports from any host in a multicast group after several queries, the router sends the switch a CGMP/IGMP command to remove the group from the forwarding tables. CGMP has a fast-leave-processing capability that allows the switch to detect IGMP version-2 leave messages sent by hosts on any of the supervisor engine module ports to the all-routers multicast address.

CGMP and IGMP snooping are used to limit multicast traffic in a switched network. A LAN switch floods multicast traffic within the broadcast domain by default. This consumes bandwidth when many multicast servers are simultaneously sending streams to the segment. RGMP stops multicast traffic from exiting the switch through ports to which only disinterested multicast routers are connected. RGMP thus reduces network congestion by forwarding multicast traffic only to those routers that are configured to receive it. Use of RGMP requires IGMP snooping to be enabled. IGMP snooping allows the switch to "listen to" IGMP conversations between hosts and routers. When a switch detects an IGMP report from a host for a given multicast group, the switch adds that host's port number to the GDA list for that group. And, when the switch detects an IGMP leave message, the switch removes the host's port from the CAM table entry. The result is that IGMP snooping stops multicast traffic from exiting through LAN ports to which no interested hosts are connected, while allowing multicast traffic to exit through LAN ports to which one or more multicast routers are connected.

Designated Querier

Under IGMP, the designated querier is the router or Layer 3 switch that transmits IGMP host query messages. IGMP versions 1 and 2 use different methods for selecting it. Under IGMPv1, the designated querier is elected according to the multicast routing protocol on the LAN (for example, PIM). With IGMPv2, the designated querier is the multicast router that has the lowest IP address.

IGMP Versions 1, 2, and 3

IGMP version 2 is similar to version 1, but with enhancements, including support for IGMP leave messages and group-specific queries. The main enhancement is the IGMP leave group message. With this message, a multicast host signals the

multicast router that it is leaving the multicast group and no longer requires multicast traffic. The multicast router then queries the group to determine whether any other group member still requires the multicast. The leave group message frees bandwidth and reduces network usage by cutting the delay in stopping unneeded multicast traffic.
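The query and leave behavior described above can be tuned per interface. A minimal sketch, assuming a hypothetical LAN interface and the default-like timer values shown (in seconds):

interface FastEthernet0/1
 ip igmp version 2
 ip igmp query-interval 60
 ip igmp query-max-response-time 10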


IGMPv2 is the current industry-standard protocol for managing multicast group membership. The leave group message is a new type introduced in IGMPv2. The membership report message is issued by a host in order to join a specific multicast group (GDA). An IGMP router, on receiving a membership report message, will add the GDA to the multicast routing table and begin forwarding multicast traffic for this group. Membership queries are issued by the router at regular intervals to check whether any hosts in that segment are interested in the GDA. Host membership report messages are sent when the host wants to receive GDA traffic, and also as a response to a membership query from the IGMP router. A leave group message is sent when a host no longer needs the group's traffic. The multicast router removes the GDA from the multicast routing table when it receives this leave group message. IGMP multicast routers periodically send host membership query messages to determine which host groups have members on their local networks. If no membership report is returned for a particular group after some number of membership query messages, the routers assume that that group has no local members and the routers stop forwarding remotely-originated multicasts for that particular group. In IGMP version 2, a leave group message generates a group-specific query from the router to check for additional hosts participating in the multicast session. The destination for a group-specific query is always the multicast address in use.

IGMPv3 is today available on Cisco routers, Cisco switches, and under Unix and Windows operating systems. IGMPv3 is slowly replacing IGMPv2 as the standard. Under IGMPv3, a host can designate which groups it wants to receive multicast traffic from and also the IP address the host believes will transmit the traffic stream. IGMPv3 allows hosts to specify the list of sources from which they will receive traffic. Traffic from other sources is blocked inside the network. Hosts can also block packets that come from sources sending unwanted traffic.

Here are the IGMP message types by the version that introduced them:

IGMP Version 1: Membership Query, Version 1 Membership Report
IGMP Version 2: Version 2 Membership Report, Leave Group (in addition to the Version 1 messages)
IGMP Version 3: Version 3 Membership Query, Version 3 Membership Report
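The IGMP version a router runs is set per interface (IGMPv2 is the default on most IOS releases of this era). A brief sketch, with the commands shown in the book's usual style and the interface purely illustrative:

Set the IGMP version on an interface:
Router(config-if)# ip igmp version 3
Verify which groups and reporters the router has learned:
Router# show ip igmp groups
Router# show ip igmp interface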


Multicast Addressing

Many organizations are deploying IP multicast in their networks. Some common problems encountered during IP multicast deployment are:

Linking to the Internet for multicast
Controlling distribution and use of IP multicast group addresses within an organization
Controlling distribution and scope of multicast application data within an organization
Controlling IP multicast traffic on WAN links to prevent high-rate groups from saturating low-speed links
Controlling who can send and who can receive IP multicast for security
Locating Rendezvous Points with PIM sparse mode and determining which IP multicast groups each will serve
Supporting deployment of next-generation IP multicast protocols (i.e., bidirectional PIM and Source Specific Multicast [SSM])
Simplifying IP multicast administration and troubleshooting
Planning ahead so that future re-addressing will not be required

Without a well-defined and planned IP multicast addressing scheme, organizations can potentially lose control of IP multicasts over the network and increase their administration and support overhead. A good addressing policy is the key to resolving many of these issues. Under IPv4, the unicast and multicast address spaces are handled differently. The multicast address space is open, unlike unicast addresses, which are uniquely assigned to organizations by IANA. This openness creates a potential for address collision problems, so steps are needed to minimize this possibility. To guide network operators and to facilitate application deployment, the multicast address space has been divided into some commonly-used ranges, which include:

Multicast Address range: 224.0.0.0/4 (RFC 1112)
Local Scoped range: 224.0.0.0/24 (http://www.iana.org/assignments/multicast-addresses)
IANA Assigned range: 224.0.1.0/24 (http://www.iana.org/assignments/multicast-addresses)
SSM range: 232.0.0.0/8 (IETF draft-ietf-ssm-arch-01.txt)
GLOP range: 233.0.0.0/8 (RFC 2770)
Administrative Scoped range: 239.0.0.0/8 (RFC 2365)


RFC 2365 has guidelines on how the multicast address space can be divided and used by organizations.

Administrative Scoped Addresses

The term "administratively scoped IPv4 multicast space" covers the group address range 239.0.0.0 to 239.255.255.255. The key properties of administratively scoped IP multicast are:

Administratively scoped multicast addresses are locally assigned, so they need not be unique across administrative boundaries.
Packets addressed to administratively scoped multicast addresses do not cross configured administrative boundaries.

These address ranges give autonomous networks a set of dedicated multicast addresses for use inside their networks, without fear of address collision from outside entities. This is the equivalent of RFC 1918 unicast private addresses. The network edge must be defined as the local scope boundary in order to maintain the integrity of this address space and stop leaks of control or data traffic across the boundary. Further information on IP multicast addressing and scoping can be found at http://ftp-eng.cisco.com.

Cisco IOS Software Releases 12.2.12, 12.2.12S and 12.1.13E introduced the filter-autorp option of the ip multicast boundary interface command. This command filters RP-announce and RP-discovery messages according to the multicast groups the boundary access list allows to pass. The command syntax is:

ip multicast boundary <acl> [filter-autorp]

After the command is issued, filtering for Auto-RP messages will occur in the following three cases (a minimal example follows this list):

An RP-announce or RP-discovery packet is received on the interface
An RP-announce or RP-discovery packet received on another interface is forwarded out this interface
An internally generated RP-announce or RP-discovery packet is sent out the interface
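A minimal boundary sketch, assuming a hypothetical standard ACL 10 and interface Serial0/0; the ACL number, group ranges, and interface are illustrative only:

access-list 10 deny 239.0.0.0 0.255.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
!
interface Serial0/0
 ! groups denied by ACL 10 (the administratively scoped range) stop here,
 ! and Auto-RP messages for those groups are filtered as well
 ip multicast boundary 10 filter-autorp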

The ip multicast boundary command is not enabled by default.

Default PIM Interface Configuration Mode

The recommended PIM interface configuration mode is sparse mode. There is a global configuration command that allows Auto-RP to keep functioning in this design: the groups 224.0.1.39 and 224.0.1.40, which Auto-RP uses for announcement and discovery of dynamic Group-RP mappings between all routers, continue to be flooded in dense mode while everything else runs in sparse mode. This command is:

ip pim autorp listener

This allows all router interfaces to be configured in sparse mode, eliminating the potential for dense-mode flooding across the network, while still allowing the two groups necessary for Auto-RP to operate in dense mode for dissemination of Group-RP mapping information and announcements.
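A sketch of this recommended baseline, assuming hypothetical interface names and an Auto-RP candidate RP and mapping agent configured elsewhere in the network:

ip multicast-routing
ip pim autorp listener
!
interface GigabitEthernet0/0
 ip pim sparse-mode
!
interface Serial0/0
 ip pim sparse-mode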

Distribution Trees (Shared Trees, Source Trees)

Multicast-capable routers create distribution trees to control the path of traffic through the network. There are two basic types of multicast distribution trees:

Source Trees: The simplest multicast distribution tree form, where the root is the source of the multicast tree and the branches form a spanning tree through the network to receivers. This tree uses the shortest path through the network, and is also called a shortest path tree (SPT).
Shared Trees: Unlike source trees, which have their root at the source, shared trees use a single common root placed at a selected point on the network. This shared root is called the rendezvous point (RP).

Recommendations for PIM-SM Distribution Trees

Two recommendations are valid:

Shared Tree (RPT)
Shortest Path Tree (SPT)

RPT is used in trading networks and other mission-critical networks where a high degree of deterministic operation is required and the network is fairly static and rigid in its configuration. SPT is the recommendation for general enterprise networks where network traffic is less demanding and a less rigorous configuration is required.

SPT Versus RPT

RPT is recommended for financial trading networks that surpass the 5,000-6,000 (S,G) point and for which high availability and deterministic network behavior are critical. RPT networks are generally deterministic, and considerable amounts of system and network testing occur before release. SPT is the multicast distribution tree based on the shortest unicast path between an interested receiver and a source for the same group. As a default for enterprise networks, SPT is the simplest to configure. SPT requires no additional configuration beyond assigning each multicast-enabled interface a PIM mode (Dense, Sparse, or Sparse-Dense). SPTs scale fairly well up to around 5,000-6,000 (S,G) entries on a typical routing platform with either hardware or software forwarding support. As hardware limits increase and PIM is further developed, platform limits may increase. It remains a good idea to limit the size of a multicast routing table, and 5,000-6,000 (S,G) entries seems to be an appropriate limit. This is by no means a hard limit. With


networks above this level, some diligence will be needed to certify significantly higher state levels, and to assure that steady-state CPU levels and convergence targets are characterized accurately and that overall network availability standards are met.

Rendezvous Points (Auto-RP, BSR)

The most significant difference between PIM sparse and dense mode configurations is the requirement for Rendezvous Points (RPs) to be defined in sparse networks. The RP acts as the meeting place for sources and receivers of multicast data. The sources send their traffic to the RP, and it is then forwarded to receivers down a shared distribution tree. By default, when the first-hop router of the receiver learns about the source, it will send a join message directly to the source, creating a source-based distribution tree from the source to the receiver. Since by default the RP is only needed to start new sessions with sources and receivers, it experiences little additional overhead from traffic flow or processing.

In PIM-SM version 1, all routers directly connected to sources or receivers (leaf routers) are manually configured with the IP address of the RP; for this reason this type of configuration is also known as a "static RP" configuration. This isn't much of a problem in a small network (like a lab exam), but it can create obvious problems in a large, complex network. PIM-SM version 2 has an Auto-RP feature that automates the distribution of group-to-RP mappings in a PIM network. The advantages are:

Not having to configure a static RP address on every router. Changes need only be configured on the RP routers, not on all the leaf routers.
The ability to "scope" the RP address within a domain, giving it an area of the network to cover. Scoping can be achieved by defining the time-to-live (TTL) value allowed for the Auto-RP advertisements.

For Auto-RP to work, you must configure a router as an RP mapping agent. The RP mapping agent sends reliable group-to-RP mappings to the other routers, using dense mode flooding. This way, all routers automatically find out what rendezvous point to use for the groups they care about. The two IP addresses used with Auto-RP are 224.0.1.39 and 224.0.1.40.

Recommended Rendezvous Point Placement

ASM-modeled applications and networks assume that sources can exist anywhere in the network or topology, and that an optimal RP location is logically in the center or core of the network. For one-to-many or Source Specific Multicast (SSM) applications, information flows from a single source to a number of receivers. The system administrator can choose an SSM forwarding model that uses IGMPv3 and requires no RP, or the RP can be located close to the source or, as in the ASM model, in the core.

Group-RP Mapping Mechanism Recommendation

Static RP configurations are best suited for networks in which all multicast services are well known and fairly static, and where there is a willingness to modify network configurations to accommodate changes. Examples of these would include multicast transit ISP networks and trading floor networks. Auto-RP is recommended when:

New services are expected to be defined over time
The types and scope of multicast services are unknown
The network environment is constantly changing
Organizational changes constantly combine and divide network infrastructures
There is an identified need for a dynamic Group-RP mapping mechanism in order to avoid unnecessary network reconfiguration

Comments on Auto-RP

Rendezvous Points (RPs) are used by senders to a multicast group to announce their existence and by receivers of multicast packets to learn about new senders. Auto-RP is a feature that automates the distribution of group-to-RP mappings in a PIM network. Auto-RP has the following benefits (a configuration sketch follows this list):

Configuring the use of multiple RPs within a network to serve different group ranges is easy.
Auto-RP allows load splitting among different RPs and arrangement of RPs according to the location of group participants.
Auto-RP avoids inconsistent, manual RP configurations that can cause connectivity problems.
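A minimal Auto-RP sketch, assuming Loopback0 carries the candidate-RP and mapping-agent address and that access-list 10 defines the groups served; the interface, ACL number, group range, and scope value are all illustrative:

! On the candidate RP
ip pim send-rp-announce Loopback0 scope 16 group-list 10
access-list 10 permit 239.1.0.0 0.0.255.255
!
! On the RP mapping agent (which may be the same router)
ip pim send-rp-discovery Loopback0 scope 16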

Historically, all Auto-RP interfaces were configured in sparse-dense mode. This potentially allowed an interface to fall into dense-mode operation whenever communication with an RP was lost for more than three minutes, with the potential for negative impacts from dense-mode flooding between routers. A workaround was to configure a default RP (an RP of last resort) on each router to cover the potential loss of Group-RP mapping information and avoid dense-mode flooding. With the introduction of the Auto-RP listener feature, Cisco moved to eliminate the need for the default-RP configuration as the only failsafe mechanism capable of precluding fail-over into dense mode. You can expect Cisco to introduce additional functionality in the future to completely eliminate any need for default-RP configuration.

Comments on Static RP

There are no known technical drawbacks to using static RP definitions rather than some other dynamic method. An issue could arise if the network configuration needs to change and this change involves configuration changes to the Group-RP mapping information. Static RP configurations are recommended for well-defined networks in which the potential for change is very low, or where manual changes are an acceptable option.

If several candidate RPs are announced for a multicast group range, the RP with the highest IP address is selected by the mapping agent. There is no election process for selecting the mapping agent that will source an Auto-RP discovery message. The Rendezvous Point must send a register stop in order to clear the registration process, and all routers between the RP and the multicast source must be multicast enabled.

Example of configuration using Auto-RP:

ip multicast-routing
interface ethernet 0/0
 ip pim sparse-dense-mode
ip pim send-rp-announce ethernet 0 scope 16 group-list 1
ip pim rp-address 10.1.0.23 1

Calculating a Multicast Address

Multicast IP addresses are Class-D addresses in the range 224.0.0.0 to 239.255.255.255 (first octet equal to binary 11100000 through 11101111). They are referred to as Group Destination Addresses (GDA). Different protocols typically reserve addresses in the range 224.0.0.1 to 224.0.0.255 for IP multicast. Each GDA has an associated MAC address. The MAC address is defined by appending the last 23 bits of the GDA, translated into hex, to 01-00-5e. Since only the last 23 bits of the GDA address are used, the second octet of the address can have either of two values and still be correct. For example: GDA 224.10.11.12 translates to MAC 01-00-5e-0a-0b-0c. Step-by-step:

Decimal IP address = 224.10.11.12
Binary equivalent = 11100000.00001010.00001011.00001100
Last 23 bits = 0001010.00001011.00001100
Hex equivalent of last 23 bits = 0a-0b-0c
Append to 01-00-5e = 01-00-5e-0a-0b-0c

In practice:

224.0.0.1 is reserved for all systems on the subnet.
224.0.0.5 is the all-OSPF-routers multicast address.
224.0.0.6 is the OSPF Designated Routers multicast address.
224.0.0.9 is reserved for RIP version 2 announcements.
224.0.0.10 is used for IGRP.
224.0.0.13 is used by PIM.

Protocol Independent Multicast (PIM)

Protocol-independent multicast (PIM) allows multicast packets to be forwarded through the network. PIM is independent of any IP routing protocol, as its name suggests. PIM must be enabled for a Cisco interface to support IP multicast routing. Enabling PIM on an interface also serves to enable IGMP operation on the interface. PIM can leverage the unicast routing protocols that are used to populate the unicast routing table, including BGP, EIGRP, OSPF, or static routes. PIM uses the unicast routing information to perform the multicast forwarding function. Although PIM is termed a multicast routing protocol, for the reverse path forwarding (RPF) check function it does not build an independent multicast routing table but instead depends on the unicast routing table. PIM, unlike other routing protocols, does not send and receive multicast routing updates between routers.

Interfaces can be configured in dense mode, sparse mode, or sparse-dense mode; the mode determines the way in which the router populates its multicast routing table, as well as how the router forwards the multicast packets it receives from directly connected LANs. An interface must be in a single mode in order for PIM to work. There is no default mode setting, since multicast routing is disabled on an interface by default.

Dense-mode interfaces are always included in the table. Dense mode is used when there is adequate bandwidth and multicast group members are densely distributed in the network. Dense mode PIM floods multicast packets to all routers and prunes routers that do not support members of a particular multicast group. PIM-DM sends out multicast packets to all devices downstream. If no devices respond, it uses RPF to inform the upstream multicast device not to send future multicast packets.

Sparse-mode interfaces are added to the table only when downstream routers send periodic "join" messages, or when a directly connected group member is on the interface. Sparse mode is used when limited bandwidth exists and members are more dispersed. Sparse mode PIM relies on RPs. For this purpose the PIM neighbor with the highest IP address is elected ("designated") as the Designated Router (DR). If no PIM queries are received from this DR after a certain period of time, the election mechanism will run again and a new DR will be designated.

Sparse-dense mode interfaces are treated as dense mode if the group is in dense mode, or in sparse mode if the group is in sparse mode.


Understand that an important difference between dense and sparse modes is that under dense mode, a router assumes all other routers are willing and able to forward multicast packets for a group, while under sparse mode the router requires an explicit request before forwarding the traffic.

PIM Commands

Global command to enable multicast routing:
Router(config)# ip multicast-routing
Enable dense-mode PIM on the interface:
Router(config-if)# ip pim dense-mode
Enable sparse-mode PIM on the interface:
Router(config-if)# ip pim sparse-mode
Enable sparse-dense-mode PIM on the interface:
Router(config-if)# ip pim sparse-dense-mode

Reverse Path Forwarding (RPF)

As mentioned above, PIM uses unicast routing information to create a distribution tree along the reverse path from receivers back to the source. The multicast routers then use this distribution tree to forward packets from the source to the receivers. RPF is a key concept in multicast forwarding that helps guarantee that the distribution tree will be loop-free. RPF enables routers to forward multicast traffic through the distribution tree. RPF makes use of the existing unicast routing table to determine the upstream and downstream neighbors. A router will forward a multicast packet only if it is received on the upstream interface.

PIM and Distance Vector Multicast Routing Protocol (DVMRP)

Distance Vector Multicast Routing Protocol (DVMRP) is a common protocol built on the public-domain mrouted program. Cisco IOS supports the dynamic discovery of DVMRP routers, and can interoperate with them, but IOS does not natively support DVMRP. Cisco routers run Protocol Independent Multicast (PIM), and can successfully forward multicast packets to, and receive packets from, a DVMRP neighbor. Cisco routers can also propagate these DVMRP routes into and through a PIM cloud. DVMRP also uses RPF. Under DVMRP, when a router receives a packet, the router floods the packet through all paths except the path that leads back to the packet's source. Doing so allows a data stream to reach all LANs, often multiple times. A router attached to a set of LANs that do not want to receive a particular multicast group can send a "prune" message back through the distribution tree to stop subsequent packets from traveling to where there are no members.

Dense-mode PIM (PIM-DM) uses RPF in a somewhat similar way to DVMRP. The most significant difference between DVMRP and PIM-DM is that PIM-DM works with whatever unicast protocol is being used; PIM-DM does not require any particular unicast protocol.

PIM-SM Mechanics (Joining, Pruning, PIM State, Mroute Table)

PIM-SM uses these PIMv2 messages:

Hello
Bootstrap
Candidate-RP-Advertisement
Join/Prune
Assert
Register
Register-Stop

In sparse networks, traffic will be forwarded to only those segments with active receivers that have explicitly requested multicast data. A designated router's job is to receive join requests, in the form of IGMP join requests, from multicast devices on its network segment. If the DR is not already receiving the requested group, it requests that the RP forward the requested group to it. If the DR is already receiving the requested group, then the DR includes the new device on its list. When the DR has no more devices on its network segment needing to receive a particular multicast group, it notifies the RP that it no longer needs that group's traffic.

The multicast routing table shows the sources and groups that the router has cached. It shows their incoming and outgoing interfaces, multicast mode, and their RP. The address of the upstream neighbor in any PIMv2 sparse mode network is always calculated via the neighbor closest to the Rendezvous Point (RP). Rendezvous points (RPs) (more fully described below) provide a mechanism for supporting multiple distribution points. The source feeds a single stream to the RP, which then redistributes the stream to destinations within the various RP domains. An example of the possible output of the show ip mroute command is:

r2#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       M - MSDP created entry, X - Proxy Join Timer Running
       A - Candidate for MSDP Advertisement
Outgoing interface flags: H - Hardware switched

Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.40), 00:00:42/00:00:00, RP 0.0.0.0, flags: DJCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0, Forward/Sparse, 00:00:42/00:00:00
r2#

PIM-DM


PIM-DM is no longer in general use as a protocol because PIM-SM has proven to be the more efficient multicast protocol. Dense mode multicast traffic starts off being flooded to all segments of the network. Routers with no downstream neighbors or directly connected receivers prune back the unwanted traffic. PIM-DM uses the following PIMv2 messages:

Hello
Join/Prune
Graft
Graft-Ack
Assert

Bidirectional PIM (bidir-PIM)

Bidir-PIM is a PIM variant, under which packet traffic is routed according to rules applying to multicast groups. In bidir-PIM, packet forwarding rules have been improved over PIM-SM, allowing traffic to be passed up the shared tree toward the RP. To avoid multicast packet looping, bidir-PIM has a mechanism called the designated forwarder (DF) election, which establishes a loop-free SPT rooted at the RP. The Cisco implementation of PIM supports these three multicast group modes (a minimal bidir-PIM configuration sketch follows this list):

Bidirectional mode
Dense mode
Sparse mode
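A minimal bidir-PIM sketch, assuming sparse-mode interfaces are already configured, that 10.0.0.1 is the (possibly phantom) RP address, and that access-list 20 defines the bidirectional group range; the address and ACL number are illustrative only:

ip multicast-routing
ip pim bidir-enable
ip pim rp-address 10.0.0.1 20 bidir
access-list 20 permit 239.195.0.0 0.0.255.255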

A router can simultaneously support all three modes or any combination of them for different multicast groups. In bidirectional mode, traffic is routed along a bidirectional shared tree that is rooted at the rendezvous point (RP) for the group. In bidir-PIM, the IP address of the RP acts as the key for all routers to establish a loop-free spanning tree topology rooted in that IP address. This IP can be any unassigned IP address on a network that is reachable throughout the

PIM domain. Using this technique is the way to establish a redundant RP configuration for bidir-PIM. Explicit join messages are used to signal membership in a bidirectional group. Traffic from sources is sent up the shared tree toward the RP and passed down the tree toward receivers on each branch of the tree. Bidir-PIM is designed for use in many-to-many applications within individual PIM domains. Multicast groups in bidirectional mode can scale to any number of sources.

Since bidir-PIM is derived from PIM sparse mode (PIM-SM), the two modes share many shortest path tree (SPT) operations. There are no fundamental differences between bidir-PIM and PIM-SM in forwarding packets downstream from the RP toward receivers. Bidir-PIM differs substantially from PIM-SM, however, when passing traffic from sources upstream toward the RP. Bidir-PIM, unlike PIM-SM, has unconditional forwarding of source traffic toward the RP upstream on the shared tree, and it lacks PIM-SM's registering process for sources. These bidir-PIM attributes are sufficient to allow all routers to forward traffic based solely on (*,G) multicast routing entries. This eliminates any requirement for source-specific state and allows scaling to any number of sources. Figure 7-2 shows the difference in state created per router for a unidirectional shared tree and a source tree.

Figure 7-2. Unidirectional Shared Tree and Source Tree

Figure 7-3 shows a bidirectional shared tree.



Figure 7-3. Bidirectional Shared Tree

Traffic under PIM-SM cannot be forwarded in the upstream direction of a tree, because PIM-SM only accepts traffic from one reverse path forwarding (RPF) interface. This interface (for the shared tree) points toward the RP, and consequently allows only downstream traffic flow. PIM-SM upstream traffic is first encapsulated into unicast register messages, which are then passed from the designated router (DR) of the source toward the RP. In a second step, the RP joins an SPT that is rooted at the source.

The result under PIM-SM is that traffic from sources traveling toward the RP does not flow upstream in the shared tree, but downstream along the SPT of the source until it reaches the RP. From the RP, traffic flows along the shared tree toward all receivers.

Designated Forwarder (DF) Election

On each network segment and point-to-point link, every PIM router participates in a procedure called DF election. The DF election procedure assigns one router as the designated forwarder for every RP of bidirectional groups. This router then becomes responsible for forwarding upstream to the RP all multicast packets that are received on that network. DF election uses the PIM assert process tie-breaking rules, and depends on unicast routing metrics. The router with the preferred unicast routing metric to the RP is elected as the DF. This approach ensures that multiple copies of every packet will not be sent to the RP, even when parallel equal-cost paths to the RP exist.

Each RP has an elected DF. Multiple routers may be elected as DF on any network segment, one for each RP. Any particular router may be elected as DF for more than one interface.

Bidirectional Group Tree Building

Joining the shared tree of a bidirectional group is almost identical to the process used in PIM-SM. One difference is that, for bidirectional groups, the role of the DR is assumed by the DF for the RP. On a network with local receivers, only the router elected as the DF populates the outgoing interface list (olist) upon receiving Internet Group Management Protocol (IGMP) join messages, and sends (*,G) join or leave messages upstream toward the RP. When a downstream router joins the shared tree, the RPF neighbor in the PIM join/leave message is always the DF elected for the interface leading to the RP. A join or leave message received by any router is always ignored unless the router is the DF for the receiving interface. The receiving interface DF router updates the shared tree in the same way as in sparse mode. When all routers in a shared network support bidirectional shared trees, (S,G) join/leave messages are ignored. There is no need to send PIM assert messages, because the DF election procedure has eliminated parallel downstream paths from any RP. An RP never joins a path back to the source, nor will it send any register stops.

Packet Forwarding

A router creates (*,G) entries only for bidirectional groups. The outgoing interface list (olist) of a (*,G) entry includes all interfaces for which a router has been elected DF and which have received either an IGMP or PIM join message. If a router is on a sender-only branch, it will also create (*,G) state, but the olist will not include any interfaces. When a packet is received from the RPF interface toward the RP, the packet is forwarded downstream according to the outgoing interface list (olist) of the (*,G) entry. Only a router that is a DF for the receiving interface forwards the packet upstream toward the RP; all other routers simply discard the packet.

Memory, Bandwidth, and CPU Requirements

In PIM dense mode (PIM-DM), PIM-SM, and most other multicast routing protocols such as Distance Vector Multicast Routing Protocol (DVMRP) and Multicast Open Shortest Path First (MOSPF), protocol operations and maintenance of packet forwarding state depend on signaling the presence or expiration of traffic.

Chapter 7: IP Multicast "Signaling" refers to:


Signaling from the packet forwarding engine to the routing protocol process within the routers
The packet exchange part of the routing protocol

Examples of traffic signaling are triggering PIM assert messages, PIM register messages, and source tree forwarding state. There are advantages to traffic signaling, but problems develop for applications with a large number of sources. The more sources in an application, the less frequently each sender sends traffic. Whenever a source begins to send packets, protocol operations take place and the forwarding state is established. For applications with a large number of sources, the forwarding state can time out before the source sends again, resulting in "bursty sources." This means that applications with a large number of sources can not only create a large amount of forwarding state (requiring memory), but they can also require high CPU usage on the Route Processor due to the accounting required to handle frequently changing state. Moreover, signaling within the router between the Route Processor and forwarding hardware can become a bottleneck if traffic signaling to the Route Processor becomes excessive and equally excessive forwarding state changes must go to the forwarding engines.

Bidir-PIM solves these problems. Bidir-PIM avoids maintaining source-specific forwarding state, thus reducing the amount of memory needed by the number of sources per multicast group. Bidir-PIM also does not require traffic signaling. The result is that using bidir-PIM avoids the "bursty source" problem and lowers the demand on CPUs and routers.

Benefits and Drawbacks of PIM

Debugging bidir-PIM is easier than PIM-SM

Since routers operating in bidirectional mode maintain a single (*,G) entry per group, debugging multicast connectivity for bidir-PIM is easier than for PIM in sparse mode.

RP Tree Delivery for All Packets

The first few packets of a new source in PIM-SM are passed encapsulated as register packets before the RP joins a path to the source. These register packets make PIM-SM unsuitable for applications such as expanding ring searches, because the expanding ring search for those register packets will start at the RP. Bidir-PIM does not have this limitation because there is no registering process.

Bidir-PIM Partial Upgrades Not Allowed

Whenever bidir-PIM is enabled in a domain, bidir-PIM must be supported by all IP multicast routers in that domain. You cannot enable groups for bidir-PIM operation in a partially upgraded network.

Bidir-PIM Network Redundancy Not Supported

Shared tree and SPT forwarding cannot be mixed as in PIM-SM. A group in bidirectional mode has only shared tree forwarding capabilities. Consequently, traffic for a bidirectional group will flow only along the one shared tree and can never simultaneously use multiple paths in a redundant network topology. You need to be careful not to overload parts of the network close to the RP, because all traffic could end up traveling through this area.

Bidir-PIM Nonbroadcast Multiaccess Mode Not Supported

Bidir-PIM cannot support nonbroadcast multiaccess (NBMA) mode.

Bidir-PIM Traffic Forwarding Restrictions

Traffic in a bidirectional group is always forwarded to the group RP along the path to the RP. If no receivers are on the path to the RP, traffic will be dropped off only at the RP. Traffic will be forwarded to the RP even when no receivers exist. This restriction results from bidir-PIM's non-use of traffic signaling. Without traffic signaling, the RP cannot identify currently active sources, and therefore does not know where to signal traffic restrictions. In practice, these traffic forwarding restrictions are not limitations for intended bidir-PIM applications. For groups with many sources, the likelihood of having no receivers is small. Different bidirectional groups can use different RPs and thus different shared trees, using different parts of the network topology. Using a few general rules for placing the RP within a network can avoid potential traffic forwarding shortcomings:

Where a large number of active sources and a similar number of receivers exist, the RP should be placed near the center of the network. This allows bandwidth into the RP to be equalized on paths to all sources and receivers, and the aggregate bandwidth into the RP is manageable.
It is often useful to place the RP close to or directly neighboring the critical stations in situations where one or several colocated critical stations and a large number of other stations exist. This arrangement will produce the shortest paths between the critical stations and other stations, giving low latency on these paths and good use of potential redundant paths.
Because bidir-PIM always forwards traffic from sources toward the RP, it is often helpful to place the RP close to high data rate sources. An example of such an application is a corporate education network, where a classroom is the

critical sender station, or redundant data collector, and multiple collectors are the crucial receivers.

Source Specific Multicast (SSM)

SSM uses only a source tree, without flooding of data, because learning the source occurs out of band. SSM is useful for applications such as Internet broadcasting or corporate communications.
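A minimal SSM sketch, assuming the default 232.0.0.0/8 SSM range and IGMPv3-capable receivers on a hypothetical interface:

ip multicast-routing
ip pim ssm default
!
interface GigabitEthernet0/1
 ip pim sparse-mode
 ip igmp version 3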


Internet Standard Multicast (ISM)

RFC 1112, Host Extensions for IP Multicasting, describes the Internet Standard Multicast (ISM) service. This service handles delivery of IP datagrams from any source to a group of receivers called the multicast host group. Traffic for the multicast host group is made up of datagrams with an arbitrary IP unicast source address S and the multicast group address G as the IP destination address. Systems receive this traffic by becoming members of the host group. Membership in a host group requires signalling the host group using IGMP version 1, 2, or 3.

SSM datagram delivery depends on (S,G) channels. Traffic for one (S,G) channel has datagrams with an IP unicast source address S and the multicast group address G as the IP destination address. Systems receive this traffic by becoming members of the (S,G) channel. In both SSM and ISM, no signalling is required to become a source. Receivers in SSM must, however, subscribe or unsubscribe to (S,G) channels in order to receive or not receive traffic from specific sources. This means that receivers operating under SSM can receive traffic only from (S,G) channels to which they are subscribed. Contrast this with ISM, in which receivers need not know the IP addresses of sources from which they receive their traffic. The proposed standard approach for channel subscription signalling uses IGMPv3 include mode membership reports (supported only in IGMPv3).

Multicast Forwarding

In multicast forwarding, source traffic is sent to the group of hosts represented by a multicast group address. The multicast router has to determine which direction is the upstream direction (toward the source) and which direction is the downstream direction (or directions). If there are multiple downstream paths, the router replicates the packet and forwards it along the appropriate downstream paths (best unicast route metric), not necessarily all paths. Multicast forwarding traffic is handled differently than unicast forwarding. Unicast traffic is routed through the network along a single path from source to destination host. A unicast router does not use a source address; it considers only the destination address and how to forward the traffic toward that destination. A unicast router scans its routing table for the destination address

and then forwards a single copy of the unicast packet via the correct interface in the direction of the destination.

RPF Check

A router performs a reverse path forwarding (RPF) check on an arriving multicast packet. If the RPF check succeeds, the packet is forwarded. Otherwise, it is dropped. For traffic flowing down a source tree, the RPF check procedure works in this way:

The router looks up the source address in the unicast routing table to determine if the packet has arrived on the interface that is on the reverse path back to the source.
If the packet has arrived on the interface leading back to the source, the RPF check has succeeded and the packet is forwarded.
If the RPF check fails, the packet is dropped.
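The RPF decision can be checked from the CLI; the source address 10.1.1.1 below is purely illustrative:

Verify the RPF interface and RPF neighbor toward a given source:
Router# show ip rpf 10.1.1.1
Review the incoming (RPF) and outgoing interfaces per (*,G) or (S,G) entry:
Router# show ip mroute
Confirm that PIM neighbors are seen on the expected interfaces:
Router# show ip pim neighbor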

Multicast Source Discovery Protocol (MSDP)

The Multicast Source Discovery Protocol (MSDP) supports connection of multiple PIM sparse-mode (PIM-SM) domains. MSDP allows multicast sources for a group to be identified to all rendezvous points in different domains. Each PIM-SM domain uses its own RPs and does not depend on RPs in other domains. An RP runs MSDP over TCP to discover multicast sources in other domains. MSDP is also used to announce sources sending to a group. These announcements originate at the domain's RP. MSDP relies on MP-BGP for interdomain operation.
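A minimal MSDP peering sketch on an RP, assuming a hypothetical peer RP at 192.0.2.1 and Loopback0 as the local RP and peering address:

ip msdp peer 192.0.2.1 connect-source Loopback0
ip msdp originator-id Loopback0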

IP Multicast Terms Basic MulticastSupports multicast applications within an enterprise campus, and together with a reliable multicast transport, PGM, provides additional network integrity. Bi-dir PIMExtension of the PIM protocols that supports shared sparse trees with bi-directional flow of data. In contrast to PIM-SM, bidir-PIM avoids keeping source specific state in routers and thus allows trees to scale to any number of sources. BroadcastOne-to-all transmission where the source sends a single copy of the message to all nodes, regardless of whether they need to receive it. Cisco Group Management ProtocolCisco protocol that allows Layer 2 switches to leverage IGMP information on Cisco routers in order to make Layer 2 multicast control forwarding decisions.

Chapter 7: IP Multicast Designated RouterRouter in a PIM-SM tree that initiates a Join or Prune message cascade upstream to the RP in response to membership information received from IGMP hosts.

326

Distribution Tree: Multicast traffic flows from a source to the multicast group over a distribution tree connecting all sources to all receivers in the group. This tree may be shared by all sources (a shared tree), or there may be a separate distribution tree built for each source (a source tree). The shared tree may be one-way or bi-directional.
Enhanced Multicast: Supports inter-domain routing and source discovery across the Internet or across multiple domains in an enterprise.
Internet Group Management Protocol v2 (IGMPv2): Protocol used by IP routers and their immediately connected hosts to communicate multicast group membership states.
Query: IGMP messages originating from a router to obtain multicast group membership information from its connected hosts.
Report: IGMP messages originating from hosts that are joining, maintaining or leaving membership in a multicast group.
IGMP Snooping: Occurs when a LAN switch examines, or "snoops," some Layer 3 information in the IGMP packet sent from the host to the router. When the switch detects an IGMP Report from a host for a particular multicast group, the switch adds the host's port number to the associated multicast table entry. When it detects an IGMP leave group message from a host, it removes the host's port from the table entry.
Internet Group Management Protocol version 3 (IGMPv3): The protocol used by IPv4 systems to report IP multicast group memberships to neighboring multicast routers. Version 3 of IGMP adds support for "source filtering", the ability of a system to report a particular multicast address's interest in receiving packets only from specific source addresses, or from all but specific source addresses.
IGMP Messages: IGMP messages are encapsulated in standard IP datagrams with an IP protocol number of 2 and the IP Router Alert option (RFC 2113).
Multicast Routing Monitor (MRM): A management diagnostic tool for network fault detection and isolation in a large multicast routing infrastructure. MRM is designed to allow a system administrator to identify multicast routing problems in near real time.
Multicast: A routing technique that allows IP traffic to be sent from a single source or multiple sources and delivered to multiple destinations. Rather than sending individual packets to each destination, a single packet is sent to a group of destinations known as a multicast group, identified by a single IP destination group address. Multicast addressing supports transmission of a single IP datagram to multiple hosts.
Multicast-Lite: Minimum implementation level supporting one-to-many multicasting across the network, such as a non-interactive one-way broadcast to many listeners.
Multicast Source Discovery Protocol (MSDP): Protocol for connecting multiple PIM sparse-mode (SM) domains. MSDP enables multicast sources for a group to be known to all RPs in different domains. Each PIM-SM domain uses its own RPs and does not depend on RPs in other domains. An RP runs MSDP over TCP to discover multicast sources in other domains. MSDP is also used to announce sources sending to a group. These announcements originate at the domain's RP. MSDP relies on MP-BGP for interdomain operation.
Multi-protocol Extensions for Border Gateway Protocol (MP-BGP): Also known as BGP+, MP-BGP consists of multicast extensions to the BGP unicast inter-domain protocol. MP-BGP adds capabilities to BGP to support multicast routing policy through the Internet and for connecting multicast topologies within and between BGP autonomous systems. MP-BGP is essentially an enhanced BGP that carries IP multicast routes. MP-BGP carries two sets of routes, one set for unicast routing and one set for multicast routing. Multicast routes are used by PIM to build multicast data distribution trees.
Pragmatic General Multicast (PGM): PGM is a reliable multicast transport protocol for applications requiring ordered, duplicate-free, multicast data delivery from multiple sources to multiple receivers. PGM guarantees that a receiver in a multicast group will receive all data packets from transmissions and retransmissions, or will be able to detect unrecoverable data packet loss. PGM is used for multicast applications with high basic reliability requirements.
Protocol Independent Multicast (PIM): An IETF-defined multicast routing architecture that supports IP multicast routing on existing IP networks. PIM's key feature is its independence from any underlying unicast protocol such as BGP or OSPF.
SM = Sparse Mode (RFC 2362): Relies upon explicit joining before multicast data is transmitted to receivers in a multicast group.
DM = Dense Mode (Internet Draft Spec): Active transmission of multicast data to all potential receivers (flooding), relying on their self-pruning (removal from the group) to limit undesired distribution.
Prune: The act of a multicast-enabled router sending the appropriate multicast messages to remove itself from the multicast tree for a particular multicast group. The router will stop receiving multicast data addressed to that group, and thus will stop delivering the multicast stream to any connected hosts until the router rejoins the group.

Rendezvous Point: A multicast router that is designated or selected as the root of a PIM-SM shared multicast distribution tree.


Source Tree: A multicast distribution path directly connecting the source and receiver designated routers (or the rendezvous point) to obtain the shortest path through the network. The source tree efficiently routes data between source and receivers, but may produce unnecessary data duplication in the network if it is built by other than the RP.
Unicast: A point-to-point transmission method in which the source sends an individual copy of a message to each requester.
Unidirectional Link Routing Protocol (UDLR): A routing protocol that supports forwarding of multicast packets over a physical unidirectional interface (such as a high bandwidth satellite link) to stub networks with back channels.
UDLR Tunnel: Uses a back channel (a separate link) so that routing protocols treat the one-way link as bi-directional. The back channel is a special unidirectional, generic route encapsulation (GRE) tunnel through which control traffic flows in the opposite direction to user data flow. A UDLR tunnel allows IP and associated unicast and multicast routing protocols to treat the unidirectional link as logically bi-directional. The purpose of a unidirectional GRE tunnel is to move control packets to an upstream node from a downstream node. This solution serves to accommodate all IP unicast and multicast routing protocols. However, it does not scale well, and 20 tunnels is a practical limit on tunnels feeding into an upstream router.
IGMP Unidirectional Link Routing: Uses IP multicast routing with IGMP, which has been enhanced to accommodate UDLR. This solution scales very well for use in satellite links.
URL Rendezvous Directory (URD): URD is a multicast-lite solution which provides the network with information about the specific source of a content stream. URD allows rapid network establishment of the most direct distribution path from a source to the receiver, reducing the time for receiving streaming media. With URD, an application can identify the source of the content stream through a Web page link or through the Web directly. When this information is sent back to the application, it is subsequently conveyed back to the network using URD. Here's how this works: a URD-capable Web page provides information about the source, the group, and the application (via media-type). An interested host will pull information from the Web page via an HTTP transaction. The last-hop router to the receiver will intercept this transaction, sent to a special port allocated by IANA. A URD-capable last-hop router uses this information to initiate the PIM source, group (S,G) join on behalf of the host.
Reference: http://www.cisco.com/en/US/tech/tk828/tech_brief0900aecd801bca26.html

Chapter 7 Questions

7-1. In what class do multicast addresses fall?
a) A
b) B
c) C
d) D
e) E

7-2. Which of the following is a multicast address?
a) 224.0.0.1
b) 192.168.1.1
c) 224.0.20.144
d) 240.24.53.107

7-3. Which version of IGMP would send a "leave group" message?
a) V1
b) V2
c) V3
d) V4

7-4. What is CGMP?
a) Maintains router multicast groups.
b) Communicates with multicast devices connected to the switch
c) Maintains and controls multicast members on a Cisco switch
d) Cisco Group Multicast Protocol

7-5. Which of these multicast addresses are used by OSPF?
a) 224.0.0.5
b) 224.0.0.40
c) 224.0.0.2
d) 224.0.0.10
e) 224.0.0.6

7-6. PIM-DM uses what?
a) PIM-SM
b) DVMRP
c) RPF
d) CGMP

7-7. What is the designated querier?
a) The router that sends CGMP messages
b) The host that sends IGMP messages
c) The switch that sends CGMP messages
d) The switch that sends IGMP host query messages

7-8. What do all multicast MAC addresses begin with?
a) 01-00-6E
b) 01-01-6E
c) 01-00-5E
d) 10-00-5E

7-9. The first command in enabling multicast on a router should be what?
a) ip routing multicast
b) multicast routing
c) ip multicast routing
d) ip pim sparse-mode
e) ip igmp

7-10. What uses Auto-RP?
a) DVMRP
b) RPF
c) PIM-SM version 2
d) PIM-SM version 1
e) IGMP
f) CGMP

7-11. IGMP is used to dynamically register individual hosts in a multicast group on a particular LAN. Hosts identify group memberships by sending IGMP messages to their local multicast router. Under IGMP, routers listen to IGMP messages and periodically send out queries to discover which groups are active or inactive on a particular subnet. Hosts need to actively communicate to the local multicast router that they intend to leave a group. If there are no replies, the router times out the group and stops forwarding the traffic. In order for this to work, what needs to be implemented?
a) IGMPv1
b) IGMPv2
c) IGMPv3
d) IGMPv4
e) CGMP

7-12. In the EnableMode network, hosts need to actively communicate to the local multicast router that they intend to leave a group. The router then sends out a group-specific query and determines if any remaining hosts are interested in receiving the traffic. If there are no replies, the router times out the group and stops forwarding the traffic. In order for this to work, what needs to be implemented?
a) IGMPv1
b) IGMPv2
c) IGMP snooping
d) DVMRP

e) CGMP
f) RGMP

7-13. Choose the correct protocols to handle IP multicast efficiently in the EnableMode Layer 2 switched IP network. (Select the best choice.)
a) Use Router-Port Group Management Protocol (RGMP) on subnets that include end users or receiver clients. Use Cisco Group Management Protocol (CGMP) or IGMP Snooping on routed segments that contain only routers, such as in a collapsed backbone.
b) Use Router-Port Group Management Protocol (RGMP) on subnets that include end users or receiver clients and on routed segments that contain only routers, such as in a collapsed backbone.
c) Use Cisco Group Management Protocol (CGMP) or IGMP Snooping on subnets that include end users or receiver clients. Use Router-Port Group Management Protocol (RGMP) on routed segments that contain only routers, such as in a collapsed backbone.
d) Use Cisco Group Management Protocol (CGMP) on subnets that include end users or receiver clients and on routed segments that contain only routers, such as in a collapsed backbone.
e) Use IGMP Snooping on subnets that include end users or receiver clients and on routed segments that contain only routers, such as in a collapsed backbone.

7-14. How are Layer 3 multicast IP addresses mapped to Token Ring MAC addresses? (Choose all that apply.)
a) All IP multicast addresses are mapped to the broadcast MAC address FFFF.FFFF.FFFF.
b) All IP multicast addresses are mapped to the network MAC address 0000.0000.0000.
c) All IP multicast addresses are mapped to the Functional Address C000.0004.0000.
d) In the same method as is used in Ethernet networks.
e) Token Ring MAC addresses are not mapped to IP multicast addresses.

7-15. The IANA owns a block of Ethernet MAC addresses that start with 01:00:5E in hexadecimal format. Half of this block is allocated for multicast addresses. The range from 0100.5e00.0000 through 0100.5e7f.ffff is the available range of Ethernet MAC addresses for IP multicast. This allocation allows for 23 bits in the Ethernet address to correspond to the IP multicast group address. The mapping places the lower 23 bits of the IP multicast group address into these available 23 bits in the Ethernet address. Because the upper five bits of the IP multicast address are dropped in this mapping, the resulting address is not unique. In fact, 32 different multicast group IDs map to the same Ethernet address. 225.1.1.1 and 237.1.1.1 have been assigned to map to the same multicast MAC address on a Layer 2 switch. What will occur?
a) If one user is subscribed to Group A (as designated by 225.1.1.1) and the other user is subscribed to Group B (as designated by 237.1.1.1), they would both receive only the streams meant for them. Group A would go to 225.1.1.1 and Group B would go to 237.1.1.1.
b) If one user is subscribed to Group A (as designated by 237.1.1.1) and the other user is subscribed to Group B (as designated by 225.1.1.1), they would both receive only the first stream that reached the network.
c) If one user is subscribed to Group A (as designated by 225.1.1.1) and the other user is subscribed to Group B (as designated by 237.1.1.1), they would both receive both streams, A and B.
d) If one user is subscribed to Group A (as designated by 225.1.1.1) and the other user is subscribed to Group B (as designated by 237.1.1.1), both of them would not receive the A and B streams.
e) None of the above

7-16. Which IP address maps to the Ethernet multicast MAC address of 01-00-5e-10-20-02? (Choose all that apply.)
a) 224.128.10.2
b) 225.128.10.2
c) 224.10.20.2
d) 225.10.20.2
e) 239.144.32.2
f) 224.16.32.2
g) All of the above
h) None of the above

7-17. What is the class D IP address range 239.0.0.0-239.255.255.255 used for?
a) Administratively Scoped multicast traffic meant for internal use.
b) Link-local multicast traffic made up of network control messages meant to stay in the local subnet.
c) Global Internet multicast traffic meant to travel throughout the Internet.
d) Any valid multicast data stream for use with multicast applications.
e) Routing protocol use.

7-18. You wish to implement a multicast video application over your private, internal network. To do this, you need to use a private multicast range of IP addresses across your network. Which IP range should you use?
a) 224.0.0.0 - 224.255.255.255
b) 226.0.0.0 - 226.255.255.255
c) 241.0.0.0 - 241.255.255.255
d) 239.0.0.0 - 239.255.255.255
e) 240.0.0.0 - 254.255.255.255

7-19. The EnableMode network is using IP multicast to conserve bandwidth during the training video seminars. In this IP multicast network, which of the following correctly describes scoping?
a) Scoping is the restriction of multicast data transport to certain limited regions of the network. There are two types: TTL scoping and administrative scoping.
b) Scoping is used by SSM to locate the sources and receivers in certain limited regions of the network. There are two types: TTL scoping and administrative scoping.
c) Scoping is a process used in MSDP to locate the sources and receivers in different AS.
d) PIM dense mode uses scoping to locate the sources and receivers in order to build shared trees.

7-20. The EnableMode network is implementing IP multicast and they want to ensure that the IP addresses they use are contained within the EnableMode autonomous system. What is the range of limited scope/administrative scope addresses that should be used?
a) Addresses in the 232.0.0.0/8 range
b) Addresses in the 239.0.0.0/8 range
c) Addresses in the 224.0.0.0/8 range
d) Addresses in the 229.0.0.0/8 range
e) Addresses in the 234.0.0.0/8 range
f) None of the above

7-21. In an IP multicast network, the more sources an application has, the less frequently traffic is sent from each end. Each time a source starts to send packets, protocol operations take place and a forwarding state is established. For applications with a large number of sources, this state can time out before the source sends again. A large number of such sources would not only create a large amount of forwarding state (requiring memory), but they could also require high CPU usage on the routing processor due to the accounting of frequently changing state. In addition, the signaling within the router between the routing processor and forwarding hardware can become another potential bottleneck if continuously large amounts of traffic signaling must go to the routing processor and equally large amounts of forwarding state changes must go to the forwarding engine(s). The EnableMode network is implementing IP multicast, and they wish to avoid the problems described above. Using this information, what IP multicast technology would you recommend? Caution: This protocol should avoid maintaining source-specific forwarding state, thereby reducing the amount of memory needed by the number of sources per multicast group, requiring much less traffic signaling in the protocol, preventing the "bursty source" problem, saving on CPU requirements for protocol operations and avoiding potential internal performance limits.
a) PIM Dense Mode (PIM DM)

b) PIM Sparse Mode (PIM SM)
c) Distance Vector Multicast Routing Protocol (DVMRP)
d) Multicast Open Shortest Path First (MOSPF)
e) Bi-directional PIM

7-22. You are a technician at EnableMode. Your newly appointed EnableMode trainee wants to know which IP protocol is used to send PIMv2 control messages. What would your reply be?
a) UDP
b) TCP
c) BGP
d) Protocol number 107


e) Protocol number 103

7-23. IP multicast addresses in the range of 224.0.0.0 through 224.0.0.255 are reserved for what purpose?
a) It is reserved for Administratively Scoped multicast traffic intended to remain inside a private network.
b) It is reserved for Administratively Scoped multicast traffic that is not supposed to be transmitted onto the Internet.
c) It is reserved for link-local multicast traffic consisting of network control messages that is not supposed to leave the local subnet.
d) Any valid multicast data stream used by multicast applications.
e) Global Internet multicast traffic intended to travel throughout the Internet.

7-24. Which of the following PIMv2 Sparse mode control messages are also used in PIM Dense mode? (Choose all that apply.)
a) Graft
b) Join
c) Prune
d) Register
e) Assert
f) Hello
g) Register

7-25. What best describes the Source Specific Multicast (SSM) functionality?
a) SSM is an extension of the DVMRP protocol that allows for an efficient data delivery mechanism in one-to-many communications.
b) SSM requires MSDP to discover the active sources in other PIM domains.
c) In SSM, routing of multicast traffic is entirely accomplished with source trees. The RP is used to direct receivers to the appropriate source tree.
d) Using SSM, the receiver application can signal its intention to join a particular source by using the INCLUDE mode in IGMPv3.
e) None of the above

7-26. The EnableMode network is setting up a VPN for the IP multicast traffic. What best describes the MDT role in MVPN operations?
a) Only PE routers that have CE routers which are intended recipients of the data join the data MDT. PE routers signal use of the data-MDT via a UDP packet on port 3232, which is sent via the default MDT. This packet contains an all-PIM-routers message, indicating the group is joined if required.
b) CE routers do not have a PIM adjacency across the provider network with remote CE routers, but rather have an adjacency with their local PE router. When the PE router receives an MDT packet, it performs an RPF check. During the transmission of the packet through the provider network, the normal RPF rules apply. However, at the remote PE, the router needs to ensure that the originating PE router was the correct one for that CE. It does this by checking the BGP next hop address of the customer's packet's source address. This next hop address should be the source address of the MDT packet. The PE also checks that there is a PIM neighbor relationship with the remote PE.
c) A unique group address is required to be used as the MDT for each particular customer. A unique source address for the multicast packet in the provider network is also required. This source address is recommended to be the address of the loopback interface, which is used as the source for the IBGP, as this address is used for the RPF check at the remote PE.
d) PE routers are the only routers that need to be MVPN aware and able to signal to remote PEs information regarding the MVPN. It is therefore fundamental that all PE routers have a BGP relationship with each other, either directly or via a Route Reflector. The source address of the Default-MDT will be the same address used to source the IBGP sessions with the remote PE routers that belong to the same VPN and MVRF.
e) All of the above.

7-27. An enterprise customer runs their core network as an ISP network where they have different Autonomous Systems (AS). The BGP core runs OSPF for intra-connection only. Data center A is in AS 1, data center B is in AS 2, and data center C is in AS 3. The remote locations will be running an IGP and redistribute their routes into the BGP core. They would like to enable multicast throughout their network to support multicast applications. Based upon the scenario, what would be the LEAST EFFECTIVE way to implement IP multicast?
a) This network runs essentially as an ISP's network with a BGP core and different AS. To implement multicast in this network they can enable MBGP over the BGP backbone.
b) This is the customer's internal network and not a transit provider in the inter-domain SP routing. As long as there is no incongruence (between multicast and unicast topologies), there is no need to run MBGP. They simply run PIM-SM and MSDP for redundancy.
c) Running MBGP, besides BGP, should present negligible overhead and if done together with the introduction of IP multicast will help to avoid problems later on when the network has grown and some incongruence needs to be supported. At that point, the customer may need to upgrade to MBGP throughout the network to have the transitive nature of incongruence supported correctly, and this may then become an obstacle in deployment. Therefore, MBGP should be implemented.
d) It should be determined what IP multicast applications the customer is intending to run. Source Specific Multicast (SSM) should be recommended to the customer, since it would allow them to forgo MSDP and thus reduce the complexity of IP multicast in their deployment.
e) PIM uses the unicast routing information to perform the multicast forwarding function. They can simply implement Inter AS PIM (IAPIM) to exchange the multicast routing information. This would be the easiest way to implement multicast in the current network where they leverage all the current unicast routing protocol information to populate the multicast routing table, including Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), and static routes. This approach would also cause less processing on the routers as PIM does not send and receive routing updates between routers.

7-28. Which of the following is used to calculate the upstream neighbor interface for a multicast route entry in a PIMv2 Sparse Mode network?
a) The address of the Mapping Agent.
b) The address of a directly connected member of the multicast group.
c) The address of the currently active Rendezvous Point for the multicast group.
d) The address of the PIM neighbor that sent the PIM Join message.
e) The address of the PIM neighbor that sent the PIM Hello message.

7-29. What best describes PIM functionality?
a) PIM uses the multicast routing information to perform the multicast forwarding function. PIM is a multicast routing protocol, and uses the multicast routing table to perform the RPF check. Like other routing protocols, PIM sends and receives routing updates between routers.
b) PIM uses unicast routing protocol information that populates the unicast routing table, including EIGRP, OSPF, BGP, and static routes.
c) PIM uses the multicast and unicast routing information to perform the multicast forwarding function. PIM uses the multicast routing table to perform the RPF check. Like other routing protocols, PIM does not send and receive routing updates between routers.
d) PIM uses multicast routing protocols to populate the multicast routing table, including Distance Vector Multicast Routing Protocol (DVMRP), Multicast OSPF (MOSPF), and Multicast BGP.

7-30. What interface command must be configured for auto-rp to function properly?
a) ip pim dense-mode

b) ip pim sparse-dense-mode
c) ip pim sparse-mode
d) ip multicast helper

7-31. The EnableMode network is using multicasting for corporate video training sessions. All routers in the EnableMode network are enabled for IP multicast. How are these video streaming multicast packets forwarded by these routers? (Choose all that apply.)
a) When a multicast packet arrives at a router, the router performs a Reverse Path Forwarding (RPF) check on the packet. If the RPF check succeeds, the packet is forwarded; otherwise, it is dropped.
b) When traffic is flowing down the source tree, the router looks up the source address in the unicast routing table to determine if the packet has arrived on the interface that is on the reverse path back to the source. If the packet has arrived on the interface leading back to the source, the RPF check succeeds and the packet is forwarded. Otherwise, it is dropped.
c) When traffic is flowing down the source tree, the router looks up the source address in the multicast routing table to determine if the packet has arrived on the interface that is on the reverse path back to the source. If the packet has arrived on the interface leading back to the source, the RPF check succeeds and the packet is forwarded. Otherwise, it is dropped.
d) When traffic is flowing down the source tree, the router looks up the source address in the multicast routing table to determine if the packet has arrived on the interface that is on the reverse path back to the source and the forward path to the receiver. If the reverse path and forward path are found successfully, the packet is forwarded. Otherwise, it is dropped.
e) When a multicast packet arrives at a router, the router does not have to perform an RPF check on the packet. The router looks up the source address in the unicast routing table to determine if the destination path is present. If this succeeds, the packet is forwarded. Otherwise, it is dropped.

7-32. While troubleshooting an IP multicast issue, you issue the show ip mroute command:

Router# show ip mroute 236.2.3.23
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       X - Proxy Join Timer Running
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 236.2.3.23), 00:09:49/00:04:23, RP 10.1.24.1, flags: SC
  Incoming interface: Serial1.708, RPF nbr 10.1.20.2
  Outgoing interface list:
    Ethernet0, Forward/Sparse, 00:09:50/00:04:12

You are trying to trace this multicast address back to the source of this multicast shared tree. Using the information above, what is the IP address of the upstream neighbor?


a) 10.1.20.2
b) 10.1.20.3
c) 10.1.24.1
d) 10.1.24.2
e) 236.2.3.23

7-33. What is the primary purpose for the RPF check in IP multicast networks?
a) To prevent the movement of unauthorized multicast traffic.
b) To prevent multicast traffic looping through the network.
c) To determine an interface's inclusion in the outgoing interface list.
d) To establish the reverse flow path of multicast traffic from the receiver to the source.

7-34. Which multicast protocols use Reverse Path Forwarding (RPF) information when sending multicast traffic streams to the receivers within the EnableMode network? (Select two.)
a) DVMRP
b) PIM Sparse Mode
c) PIM Dense Mode
d) Multicast OSPF
e) PIM Sparse-Dense Mode

7-35. The EnableMode network is utilizing IP multicast technology. Along with this, router EM1 is configured as an anycast Rendezvous Point (RP). What best describes the functionality of Anycast RP?
a) Anycast RP is a useful application of MSDP, MBGP and SSM that configures a multicast sparse mode network to provide for fault tolerance and load sharing within a single multicast domain.
b) Only a maximum of two RPs are configured with the same IP address (for example, 10.0.0.10) on loopback interfaces. The loopback address should not be configured as a host address (with a 32-bit mask). All the downstream routers are configured so that they know that 10.0.0.10 is the IP address of their local RP.
c) IP routing automatically selects the topologically closest RP for each source and receiver. Because some sources use only one RP and some receivers a different RP, MBGP enables RPs to exchange information about active sources. All RPs are configured to be MSDP peers of each other.
d) Each RP will know about the active sources in its own area. If an RP fails, IP routing converges and a backup RP would become the active RP for this area using HSRP.
e) Anycast RP is an implementation strategy that allows load sharing and redundancy in PIM sparse mode (PIM-SM) networks by configuring two or more RPs that have the same IP address and are using Multicast Source Discovery Protocol to share active source information.



Chapter 7 Answers
7-1: d
7-2: a, c
7-3: b
7-4: c
7-5: a, e
7-6: c
7-7: d
7-8: c
7-9: c
7-10: c
7-11: b
7-12: b
7-13: c
7-14: a, c
7-15: c
7-16: e, f
7-17: a
7-18: d
7-19: a
7-20: b
7-24: b, c, e, f
7-25: d
7-26: b
7-28: c
7-29: b
7-30: b
7-31: a, b
7-32: a
7-33: b
7-34: a, c
7-35: e

Chapter 8

Security
Bridge/Switch Security
Even if you are only bridging, you should have the knowledge to control who can and cannot communicate with your device. This is what bridge security is about. We will cover two methods of bridging security: MAC address access-lists (including vendor-code filtering) and protocol-type access-lists.
Understanding How Port Security Works
Port security works in two ways. You can use port security to block input to an Ethernet, Fast Ethernet, or Gigabit Ethernet port when the MAC address of the station attempting to access the port is different from any of the MAC addresses specified for that port. Alternatively, you can use port security to filter traffic destined to or received from a specific host using the host MAC address. This section describes the following traffic filtering methods to allow and restrict traffic, based on the host MAC address:
- Allowing traffic
- Restricting traffic

Allowing Traffic Based on Host MAC Address
Total MAC addresses per port cannot exceed the global resource limit of 1024 plus one default MAC address. Thus the total number of MAC addresses on any port cannot exceed 1025. How you allocate the maximum number of MAC addresses to individual ports depends on your network configuration. The following combinations are valid allocations:
- 513 (1 + 512) each on 2 ports in a system and 1 address each on the rest of the ports.
- 901 (1 + 900) on one port, 101 (1 + 100) on another port, 25 (1 + 24) on a third port, and 1 address each on the rest of the ports.
- 1025 (1 + 1024) addresses on 1 port and 1 address each on the rest of the ports.

After allocating the maximum number of MAC addresses for a port, you can:
- Manually specify the secure MAC address for the port
- Let the port dynamically configure the MAC address for connected devices.


You can manually configure all MAC addresses, allow all to be autoconfigured (learned), or configure some manually and the rest to be autoconfigured. Once you manually configure or autoconfigure addresses, they are stored in NVRAM and maintained after a reset. After you allocate a maximum number of MAC addresses on a port, you have the option of specifying how long addresses on the port will remain secure. By default, all addresses on a port are secured permanently. After the age time expires, MAC addresses on the port become insecure. When a security violation occurs, you can configure the port to go into restrictive mode or shutdown mode:
- In restrictive mode, the port remains enabled during a security violation and drops only packets coming in from insecure hosts.
- In shutdown mode, you can specify whether the port is to be permanently disabled or disabled only for a specified time. The default is permanent shutdown.

The source MAC address of a packet received by a secure port is compared to the list of secure source addresses on the port that were manually configured or autoconfigured. Where the MAC address of a device attached to the port differs from the list of secure addresses, the port either shuts down permanently (the default mode), shuts down for the time you have specified, or drops incoming packets from the insecure host. The port's response depends on how it has been configured to respond to the security violation. After a security violation occurs, the Link LED for the port turns orange, and a link-down trap is sent to the SNMP manager. An SNMP trap is sent only if the port is configured to shut down during a security violation. A trap is not sent if the port is configured in the restrictive violation mode.
Restricting Traffic Based on Host MAC Address
Any packets tagged with a specific source MAC address can be discarded. Thus you can filter traffic received from a specific host MAC address.
Guidelines for Port Security Configuration
These guidelines apply to port security configuration:
- Port security cannot be configured on a trunk port.
- Port security cannot be enabled on a SPAN destination port.
- Dynamic, static, or permanent CAM entries cannot be configured on a secure port.
- Port security, when enabled, clears all static or dynamic CAM entries associated with the port; all currently configured permanent CAM entries are treated as secure.
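As a minimal sketch of the behavior described above on an IOS-based switch (the interface, MAC address, and maximum count are arbitrary, and exact syntax varies by platform and software version):
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 2
Switch(config-if)# switchport port-security mac-address 0800.0123.4567
Switch(config-if)# switchport port-security violation restrict
The violation restrict keyword corresponds to the restrictive mode described above; violation shutdown (the default) corresponds to shutdown mode.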

MAC Address Access-Lists (ACL)
You can configure a MAC address ACL using either of the following:
- Access-list 700-799: the standard 48-bit MAC address access-list
- Access-list 1100-1199: the extended version of the 48-bit MAC address access-list

To filter using the MAC address access-list, first you would define your access-list. Say that you wanted to allow only a host with the MAC address of 0800.0123.4567 to access the Ethernet0/0 interface. You would define the access-list like this:
Router(config)# access-list 700 permit 0800.0123.4567
You can also include a mask after the MAC address but this is optional. If you do not specify the mask, it is assumed that the mask is 0000.0000.0000. This would be the equivalent of an IP access-list that specifies a host address, like this:
Router(config)# access-list 1 permit 199.199.199.199 0.0.0.0
Or
Router(config)# access-list 1 permit host 199.199.199.199
To see the access-list you have created, do:
Router# show access-lists
Bridge address access list 700
    permit 0800.0123.4567   0000.0000.0000
Just like a regular access-list, there is an assumed "deny any" at the end of the list, so our list above is only permitting the host with the MAC address specified. Now, to apply the access-list to the interface Ethernet0/0:
Router(config)# interface ethernet0/0
Router(config-if)# bridge-group 5
Router(config-if)# bridge-group 5 input-address-list 700
While the conventional way to do this is the method shown above, interestingly enough, this can also be done at the global config mode with this command:
Router(config)# bridge 5 address 0800.0123.4567 discard Ethernet0/0



You can use these same methods to filter by "vendor code". All companies who create Ethernet devices are assigned a block of MAC addresses, and all of these blocks begin with a specific string. This prefix for each vendor is known as the "vendor code". For instance, all Cisco 7500 series routers use the Ethernet vendor code of 001011. Thus, if you wanted to make an access-list to only permit Cisco 7500 series routers, you could do this:
Router(config)# access-list 700 permit 0010.1100.0000 0000.00ff.ffff
Here is what it looks like:
Router# show access-lists
Bridge address access list 700
    permit 0010.1100.0000   0000.00ff.ffff
When permitting only certain traffic, you must make sure that you allow for other necessary routing protocols and administrative protocols. To be sure that you get this traffic, you may want to deny what you do not want and permit everything else. You will notice that there is no "permit any" or "deny any" on a 700 series access-list. To permit any, use the following:
Router(config)# access-list 700 permit 0000.0000.0000 ffff.ffff.ffff
When creating MAC address access-lists, keep in mind that Ethernet uses canonical addresses and Token Ring uses non-canonical addresses. If you do a show bridge, these addresses are canonical. If you do a show dlsw reachability, these addresses are in non-canonical format. The Ethernet canonical format is used on serial links as well as Ethernet. When creating a 700 series bridge access-list, you should always use the canonical format. When using the 1100-1199 extended version of the 48-bit MAC address access-list, you will apply it to the bridge-group using the "input-pattern-list" parameter instead of the "input-address-list" that was used with the standard access-list. Using extended bridging access-lists can be tricky, so you should use care with your configuration.
Protocol Type-Code Access-Lists (ACL)
The Cisco IOS supports many different options for filtering traffic. In this section, we will cover how to filter traffic by packet type. Cisco calls these "administrative filters". Protocol type-code access-lists are also known as Service Access Point (SAP) access control lists. These access-lists are created in the 200-299 range and can be used to filter NetBIOS, DLSw, and Source Route Bridge (SRB) traffic. Just like other access-lists, the format of a protocol type-code access-list is:

access-list 200 permit/deny (address) (wildcard mask)
What you need to understand is what you would use as the SAP address (protocol-type code) and its mask. SAP values are particular to each type of traffic. For instance, IPX traffic is represented by the SAP 0xE0E0 and a mask of 0x0101. IBM SNA command and response traffic is represented by the SAP 0x0404 with a mask of 0x0001. NETBIOS command and response traffic is represented by 0xF0F0 with a mask of 0x0001. Say that you were asked to create an access-list that would permit all IPX, deny all NETBIOS, and permit everything else. This access-list would fulfill those requirements:
access-list 200 permit 0xE0E0 0x0101   ! permit IPX
access-list 200 deny 0xF0F0 0x0101     ! deny NETBIOS
access-list 200 permit 0x0000 0xFFFF   ! permit any
To apply this list to an interface, you would do:
Router(config-if)# bridge-group 5 input-lsap-list 200
VLAN Access-list (VACL)
A very useful switch feature is VLAN access-lists. These are essentially access-lists, on a switch, that can control traffic between switch ports. Thus, you could filter traffic between two hosts without that traffic ever going through a router. VACLs work like a route-map. You can filter either on MAC address or IP traffic. Assuming you are going to filter IP traffic, you 1) create an access-list that defines your traffic, 2) create a VLAN access-map that tells the switch what to do with that traffic (forward it or drop it), and 3) apply it to the VLAN (or list of VLANs) in which you want to filter your traffic. Here is an example: say that you wanted to allow only TCP traffic from Host A (at IP address 192.168.1.1) to Host B (at IP address 192.168.1.2). Here are the steps you would take:
1. Create the access-list that defines the traffic:
Switch(config)# access-list 101 permit tcp host 192.168.1.1 host 192.168.1.2
Switch(config)# access-list 101 permit tcp host 192.168.1.2 host 192.168.1.1
Note that this access-list does not have an "inbound" or "outbound" direction; it is bidirectional. Because it is bidirectional, if you want two-way traffic, you must define both directions.
2. Create the VLAN access-map:
Switch(config)# vlan access-map getccie
Switch(config-access-map)# match ip address 101

Switch(config-access-map)# action forward
3. Apply the filter (access-map) to the intended VLAN(s):
Switch(config)# vlan filter getccie vlan-list 1


This applies the access-map called "getccie" to only VLAN 1. On this command line, you could list other VLANs or enter a range of VLANs. To see what our VLAN access-map looks like, do a show vlan access-map. Here is an example:
Vlan access-map "getccie" 10
  Match clauses:
    ip address: 101
  Action:
    forward
Note the line number 10. This is just like in a route-map. Using line numbers, you could go back and add another statement to the VLAN access-map by doing vlan access-map getccie 20. Another important note is that, just like in an access-list, by default all traffic that does not match the access-map is denied (dropped). You can do numerous creative things using VLAN access-maps (filters), as the filtering occurs between two Ethernet devices.
IP Receive Access-list (rACL)
Receive access-lists are currently only available on Cisco 7500 and 12000 platforms. These access-lists are used, primarily, as a security measure to make sure that traffic that is destined for the router is given the highest priority and arrives at its destination. The important traffic that is destined for the router is usually routing traffic (routing protocols). This filtering happens after the input access-list on the ingress interface. To configure and apply an IP rACL, you first configure a regular access-list and then apply it with the special command:
Router(config)# ip receive acl {access-list number}
At the time of this writing, using a named access-list with a rACL is not permitted.

Private VLANs
Private VLAN is a feature that is not available on all models of Cisco switches or routers. This feature allows devices on a switch to be isolated into their own Layer 2 networks while still having Layer 3 IP addresses on the same subnets. This can be configured such that certain ports could be allowed to reach a default gateway, if desired. There are three types of Private VLAN ports:
1) Community ports: can communicate within their community and with a promiscuous port.
2) Isolated ports: are completely isolated at Layer 2 from all other isolated ports (and all other ports on the switch). Broadcasts from isolated ports are forwarded to all promiscuous ports.
3) Promiscuous ports: communicate with all other private VLAN ports on the same switch.
You cannot configure a Private VLAN using the numbers 1 or 1002-1005. Configuration of private VLANs is simple. Designing the intended configuration of Private VLANs is more difficult.
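On switches that support the feature, a bare-bones IOS-style sketch might look like the following (the VLAN numbers and interfaces are invented for illustration, and many platforms also require VTP transparent mode before private VLANs can be created):
Switch(config)# vlan 101
Switch(config-vlan)# private-vlan isolated
Switch(config-vlan)# vlan 102
Switch(config-vlan)# private-vlan community
Switch(config-vlan)# vlan 100
Switch(config-vlan)# private-vlan primary
Switch(config-vlan)# private-vlan association 101,102
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode private-vlan host
Switch(config-if)# switchport private-vlan host-association 100 101
Switch(config-if)# interface FastEthernet0/24
Switch(config-if)# switchport mode private-vlan promiscuous
Switch(config-if)# switchport private-vlan mapping 100 101,102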

802.1x
The IEEE standard 802.1x performs port-based authentication. What this means is that the switch can actually request authentication of the user connected to the switch before providing connectivity to the network. Just like a network access server (NAS) would do to a dial-up user, the switch requests the user's credentials, relays those to an authentication server, and verifies their validity before granting permission to access the network. The device/user connected to the switch must use 802.1x client software for this authentication to work. This type of client is included in the Windows XP operating system. Prior to successful authentication, the only traffic that can communicate across the port on the switch is the Extensible Authentication Protocol (EAP) over LAN (or EAPOL). The switch acts as an authentication proxy for the client as it is just passing the authentication credentials along to the authentication server by encapsulating and de-encapsulating the EAP packets. The switch uses the RADIUS protocol to communicate with the authentication server by passing the EAP packets in RADIUS packets. To configure the switch for this process to work, you must configure the following on the switch:
- AAA
- RADIUS
- dot1x port-control auto (on each interface)
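A hedged sketch of these three pieces on an IOS switch (the RADIUS server address and key are placeholders) could look like this:
Switch(config)# aaa new-model
Switch(config)# aaa authentication dot1x default group radius
Switch(config)# radius-server host 10.1.1.50 key MySharedSecret
Switch(config)# dot1x system-auth-control
Switch(config)# interface FastEthernet0/2
Switch(config-if)# switchport mode access
Switch(config-if)# dot1x port-control auto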

Access Lists


An Access List is an ordered set of statements that permit or deny the flow of packets through an interface. These statements define the criteria on which decisions are made with regard to information contained within the packets. Decisions rely on the source and/or destination network/subnet/host addresses of the packets. They are used for security purposes, to provide QoS, or to define types of traffic for purposes of filtering, queuing or prioritizing. The basic concept of the access list wildcard mask is that any "0" in the wildcard mask means the corresponding bit in the address has to match, and any "1" in the wildcard mask means the value isn't checked. You can only append to an access list; you cannot add lines to the middle of it. To make changes, copy your access list to notepad, and make your changes there. Then from the Cisco router console type no access-list and the number, then paste the updated access list into the configuration.
Things to know:
- The wildcard mask, which looks like a reversed subnet mask, defines which bits of the address are used for the access list decision-making process.
- Lists are processed top-down. In other words, the first matching rule preempts further processing.
- Only one access list is allowed per port/per direction/per protocol.
- Remember that there is an implicit deny at the end of all access lists. The last configured line should always be a permit statement.
- If you apply an access number that does not exist, all traffic will be passed.
- Standard ACLs (access-control lists) will most likely be placed close to the destination while extended ACLs will most likely be placed close to the source.
- If the access-group command is configured on an interface and there is no corresponding access-list created, the command will be executed and permit all traffic in and out.
- An Access Class limits VTY (telnet) access (see the short example after this list).
- A Distribution List filters incoming or outgoing routing updates.
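For example, the Access Class mentioned above could be applied to the VTY lines like this (the permitted subnet is arbitrary):
Router(config)# access-list 5 permit 192.168.10.0 0.0.0.255
Router(config)# line vty 0 4
Router(config-line)# access-class 5 in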

Different access list types are designated by the list numbers:
1-99: IP standard
100-199: Extended IP
200-299: Protocol type-code
300-399: DECnet
400-499: XNS standard
500-599: XNS extended
600-699: AppleTalk
700-799: 48-bit MAC address
800-899: IPX standard
900-999: IPX extended
1000-1099: IPX SAP
1100-1199: Extended 48-bit MAC address
1200-1299: IPX summary address
1300-1999: IP standard, additional range
2000-2699: Extended IP, additional range

Sample Standard Access List
Block the range of host addresses of 100 and above on network 192.168.75.0/24:
access-list 10 permit 192.168.75.255 0.0.0.0 <-- allow broadcast
access-list 10 deny 192.168.75.128 0.0.0.127 <-- denies 128 - 254
access-list 10 deny 192.168.75.112 0.0.0.15 <-- denies 112 - 127
access-list 10 deny 192.168.75.104 0.0.0.7 <-- denies 104 - 111
access-list 10 deny 192.168.75.100 0.0.0.3 <-- denies 100 - 103
access-list 10 permit any <-- allow everything else (implicit deny)
Then you apply the access list to the interface using the access-group command:
interface Ethernet0
 ip access-group 10 <-- apply access-list 10 (outbound by default)
Here's another way to do the same list:



access-list 10 permit 192.168.75.255 0.0.0.0 <-- allow broadcast
access-list 10 permit 192.168.75.0 0.0.0.63 <-- permits 0 - 63
access-list 10 permit 192.168.75.64 0.0.0.31 <-- permits 64 - 95
access-list 10 permit 192.168.75.96 0.0.0.3 <-- permits 96 - 99
access-list 10 deny 192.168.75.0 0.0.0.255 <-- denies 100 - 255 (1-99 are not blocked because of earlier permit statements)
access-list 10 permit any <-- allow everything else (implicit deny)

Sample Extended Access-list
Extended access-lists allow you to match on both the source and destination. This is in comparison to a standard access-list that can only match on the source of the packet. IP standard access-lists also limit filtering to Layer 3 (Network), the IP protocol. Extended IP access-lists allow more granular filtering at Layer 4 (Transport), or TCP and UDP. Also, with extended access-lists, you can filter traffic depending on the application layer protocol (like HTTP or telnet), certain bits of the packet (like the TCP SYN bit), and a long list of other criteria. For a sample of what an extended access-list would look like, let's challenge ourselves to create an access-list that will fulfill the following requirements:
- Permit traffic from host 14.13.12.11 to host 11.12.13.14 only if it is FTP traffic.
- Permit any OSPF from network 14.13.12.0/24 but log all this traffic.
- Permit Generic Route Encapsulation (GRE) traffic inbound to network 11.12.13.0/24 from any host.
- Permit ICMP echo reply from any host to our webserver, host 11.12.13.2.
- Deny all other traffic and log it.

From the Router(config)# prompt:
access-list 101 permit tcp host 14.13.12.11 host 11.12.13.14 eq ftp
access-list 101 permit tcp host 14.13.12.11 host 11.12.13.14 eq ftp-data
! FTP uses one port for control traffic and one for data transmission
access-list 101 permit ospf 14.13.12.0 0.0.0.255 any log
access-list 101 permit gre any 11.12.13.0 0.0.0.255
access-list 101 permit icmp any host 11.12.13.2 echo-reply
! Note that ICMP statements do not use the "equals" (eq) keyword to specify the type of ICMP traffic
access-list 101 deny ip any any log

Sample Named Access-list
When you begin to have many access-lists, all identified by their number, it can become quite confusing to new system administrators, or even for an experienced, but rusty, admin who needs to make a change. Named access-lists allow you to identify an access-list with a name, instead of a number. This can only be done with either IP standard access-lists or extended access-lists. An example of an IP standard named access-list is below:
Router(config)# ip access-list standard myaccess-list
Router(config-std-nacl)# permit host host-IP-address
Router(config-std-nacl)# permit host-network-IP-address wildcard-mask
Router(config-std-nacl)# exit
Showing a configured named access-list might look like this:
Router# show ip access-lists
Standard IP access list myaccess-list
    permit 1.1.1.1
Extended IP access list 101
    permit icmp any host 11.12.13.2 echo
Security Criteria for Deploying Wireless VLANs
Use these criteria for deploying wireless VLANs:
- Use IEEE 802.1x to manage user access to VLANs, using RADIUS-based VLAN assignment or SSID-based access control.
- Use separate VLANs to support different classes of service.
- Use policy groups (a set of filters) to map wired policies to the wireless VLAN.
- Conform to other criteria specific to the organization's network infrastructure.

Using these criteria, wireless VLANs can be deployed using the following strategies:
- Segmentation by user group type: The WLAN user community can be segmented, and different security policies enforced for different user groups. You could implement separate wired and wireless VLANs in an enterprise network for different departments and for guest access.
- Segmentation by device type: The WLAN can be compartmentalized to support access to the network by different devices with different security levels. For example, handheld devices that support 40- or 128-bit static WEP could coexist with devices using IEEE 802.1x and dynamic WEP in one ESS. Each class of device would have a separate VLAN.

Guest VLAN traffic should be segmented at the network edge, and should not be allowed to reach the core of the network. This segmentation is typically done at the VLAN level, before the guest traffic even reaches a router access list or firewall.



RADIUS and TACACS+
TCP and UDP


TACACS+ employs TCP while RADIUS employs UDP. TCP offers a number of advantages over UDP. TCP is a connection-oriented transport, while UDP offers only best-effort delivery. RADIUS depends on additional variables to be defined, such as re-transmit attempts and time-outs, in order to compensate for best-effort transport. RADIUS lacks the level of built-in support that TACACS+, with its TCP transport, offers. The advantages of using TCP are:
- TCP is more scalable and adapts better to both congested and growing networks.
- TCP supports a separate acknowledgment that a request has been received, within (approximately) the network round-trip time (RTT), regardless of how loaded and slow the backend authentication mechanism is.
- UDP cannot tell the difference between a server that is down, a slow server, and a non-existent server. With TCP you can determine when a server crashes and returns to service if you use long-lived TCP connections.
- With TCP, connections to multiple servers can be maintained simultaneously, and you only need to send messages to the ones that are known to be up and running.
- Using TCP keepalives, server crashes can be detected out-of-band from actual requests.

Packet Encryption Differences
TACACS+ encrypts the body of the packet but leaves a standard TACACS+ header, which indicates whether the body of the packet is encrypted. During debugging, the body of the packets may be left unencrypted. During secure communication in normal operation, the body of the packet is fully encrypted. RADIUS encrypts only the password in the access-request packet sent from the client to the server. The rest of the packet is unencrypted. Information such as username, authorized services, and accounting is transmitted in the clear and can be captured by a third party.
Authentication and Authorization
TACACS+ uses the AAA architecture. This allows separate authentication for applications that can use TACACS+ for authorization and accounting. For example, with TACACS+ you can use Kerberos authentication and TACACS+ authorization and accounting. After a NAS authenticates on a Kerberos server, it requests authorization information from the TACACS+ server without having to re-authenticate. The NAS informs the TACACS+ server that it has successfully authenticated on the Kerberos server, and the TACACS+ server then provides the authorization information.

During a session, whenever additional authorization checking is needed, the access server checks with a TACACS+ server to determine if the user has been granted permission to use a particular command. This provides greater control over commands that can be executed on the access server, while at the same time decoupling from the authentication mechanism. RADIUS combines authentication and authorization. The access-accept packets sent from the RADIUS server to the client contain authorization information. This makes it difficult to decouple authentication and authorization under RADIUS.
Multiprotocol Support
TACACS+ offers multiprotocol support. RADIUS does not support these protocols:
- AppleTalk Remote Access (ARA) protocol
- NetBIOS Frame Protocol Control protocol
- Novell Asynchronous Services Interface (NASI)
- X.25 PAD connection

Router Management
TACACS+ has two methods for controlling the authorization of router commands on a per-user or per-group basis. The first method is to assign privilege levels to commands and have the router verify with the TACACS+ server whether or not the user is authorized at the specified privilege level. The second method is to explicitly specify the commands that are allowed in the TACACS+ server, either on a per-user or on a per-group basis.
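As a hedged sketch of the router side of this (the server address and key are placeholders; the per-user or per-group command sets themselves are defined on the TACACS+ server):
Router(config)# aaa new-model
Router(config)# tacacs-server host 10.1.1.10 key MyTacacsKey
Router(config)# aaa authentication login default group tacacs+ local
Router(config)# aaa authorization commands 15 default group tacacs+ local
Router(config)# aaa accounting commands 15 default start-stop group tacacs+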

Under RADIUS, system administrators cannot control which commands can be executed on a router. Consequently RADIUS is not as useful for router management or as flexible for terminal services. Interoperability Compliance with the IETF RADIUS RFCs does not guarantee interoperability, since there are different interpretations of the RADIUS RFCs. Cisco implements most RADIUS attributes and has added some extensions. Other vendors implement RADIUS clients, but these implementations are not always interoperable. Organizations using standard RADIUS attributes in their servers can often interoperate as long as the different vendors implement the same attributes.

Traffic


The traffic generated between the client and server differs between TACACS+ and RADIUS. Several examples illustrate traffic differences between the client and server for TACACS+ and RADIUS when used for router management with authentication, exec authorization, exec accounting, command authorization (which only TACACS+ supports) and command accounting (also supported only by TACACS+).
TACACS+ Traffic Example
Figure 8-1 illustrates a user who telnets to a router, performs a command, and exits the router. TACACS+ implements these steps:
- Login authentication
- Exec authorization
- Command authorization
- Start-stop exec accounting
- Command accounting


The exchange between the client and the TACACS+ server proceeds as follows:
- START (authentication): user trying to connect
- REPLY (authentication) to ask client to get username
- CONTINUE (authentication) to give server username
- REPLY (authentication) to ask client to get password
- CONTINUE (authentication) to give server password
- REPLY (authentication) to indicate pass/fail status
- REQUEST (authorization) for service=shell
- RESPONSE (authorization) to indicate pass/fail status
- REQUEST (accounting) for start-exec
- RESPONSE (accounting) that record was received
- REQUEST (authorization) for command and command-argument
- RESPONSE (authorization) to indicate pass/fail status
- REQUEST (accounting) for command
- RESPONSE (accounting) that record was received
- REQUEST (accounting) for stop-exec
- RESPONSE (accounting) that record was received

Figure 8-1. TACACS+ traffic example
RADIUS Traffic Example
This example assumes that when a RADIUS user telnets to a router, performs a command, and exits the router, login authentication, exec authorization, and start-stop exec accounting are implemented (other management services are not available):



The exchange between the client and the RADIUS server proceeds as follows:
- access-request
- access-accept (with exec authorization in attributes)
- accounting-request (start)
- accounting-response to client
- accounting-request (stop)
- accounting-response to client

Figure 8-2. RADIUS traffic example
AAA Security Services
Authentication, Authorization, and Accounting (AAA) is a modular architectural framework for configuring a set of three independent security functions in a consistent manner. AAA supports these services:
Authentication: A method for identifying users, including login and password dialog, challenge and response, messaging support, and (depending on the security protocol selected) encryption. Authentication identifies a user prior to the user being granted access to the network and network services. All authentication methods, except for local, line password, and enable authentication, must be defined through AAA. AAA authentication is configured by defining a named list of authentication methods, and then applying that list to various interfaces. The method list defines the types of authentication to be performed and the sequence in which they will be performed; it must be applied to a specific interface before any of the defined authentication methods will be performed. The only exception is the default method list (which, by coincidence, is named "default"). The default method list is automatically applied to all interfaces if no other method list is defined. A defined method list overrides the default method list.
Authorization: Supports remote access control, including one-time authorization or authorization for each service, per-user account list and profile, user group support, and support of IP, IPX, ARA, and Telnet. AAA authorization works by assembling a set of attributes describing what the user is authorized to perform. These attributes are compared to the information for a given user in a database.

The result is returned to AAA to assign the user's actual capabilities and restrictions. The database can either be located locally on the access server or router, or can be hosted remotely on a RADIUS or TACACS+ security server. Remote security servers, such as RADIUS and TACACS+, authorize specific user rights by associating attribute-value (AV) pairs defining those rights for each individual user. All authorization methods are defined through AAA. As with authentication, AAA authorization is configured by defining a named list of authorization methods, and then applying the list to different interfaces.

Accounting - Provides a method for collecting and sending security server information necessary for billing, auditing, and reporting, such as user identity, start and stop time, executed commands (such as PPP), number of packets, and number of bytes. Accounting allows tracking of the services users are accessing as well as the amount of network resources consumed. After AAA accounting is activated, the network access server reports user activity to the TACACS+ or RADIUS security server (depending on the security method in use) in the form of accounting records. Each accounting record is made up of accounting AV pairs and is stored on the access control server. The information can then be analyzed for network management, client billing, and/or auditing. All accounting methods are defined through AAA. Similar to authentication and authorization, AAA accounting is configured by defining a named list of accounting methods, and subsequently applying the list to various interfaces.

In many circumstances, AAA uses protocols such as TACACS+, RADIUS, and Kerberos to administer security functions. If a router or access server is acting as a network access server, AAA is the means for communication between the network access server and the TACACS+, RADIUS, or Kerberos security server. Although AAA is the recommended primary access control method, Cisco IOS supports some features for simple access control that are outside the scope of AAA, such as local username authentication, enable password authentication, and line password authentication. These features do not provide the same degree of access control as AAA.

AAA Philosophy
AAA allows dynamic configuration of the type of authentication and authorization on a per-line or per-service basis. Method lists are used to define the type of authentication and authorization, and are applied to specific services or interfaces.

Benefits of Using AAA
AAA provides the following benefits:

Scalability
Increased flexibility and control
Standardized authentication methods, such as TACACS+, RADIUS, and Kerberos
Multiple backup systems


Method Lists
A method list defines the sequence of methods used to authenticate a user. Cisco IOS software uses the first method listed to authenticate a user; if that method does not respond, Cisco IOS software selects the next authentication method in the method list. This process continues until there is successful communication with a listed authentication method or the authentication method list is exhausted, in which case authentication fails. Method lists thus allow designation of one or more security protocols to be used for authentication, providing a backup system for authentication in case the initial method fails. Figure 8-3 shows a representative AAA network configuration with four security servers: R1 and R2 are RADIUS servers, and T1 and T2 are TACACS+ servers.


Figure 8-3. Typical AAA network configuration

Assume that the system administrator has defined a method list where R1 will be contacted first for authentication information, then R2, T1, T2, and finally the local username database on the access server. After a remote user attempts to dial into the network, the network access server queries R1 for authentication information. If R1 authenticates the user, it issues a PASS response to the network access server and the user is allowed access to the network. If R1 returns a FAIL response, the user is denied access and the session is terminated. If R1 does not respond, the network access server assumes an ERROR and queries R2 for authentication information.

This pattern continues through each of the remaining designated methods until the user is either authenticated or rejected, or until the session is terminated. If all authentication methods return errors, the network access server treats this as a failure and the session is terminated.

The First Step, or Where to Begin
The first step is to decide on the security solution to be implemented. You need to assess the security risks in the network and decide on an appropriate method to prevent unauthorized entry and attack. As a general recommendation, you should begin by considering AAA.

Overview of the AAA Configuration Process
To configure security using AAA on a Cisco router or access server:
Enable AAA through the aaa new-model global configuration command.
For a separate security server, configure security protocol parameters, such as TACACS+, RADIUS, or Kerberos.
Use an AAA authentication command to define the method lists for authentication.
If required, apply the method lists to a particular interface or line.
Optionally configure authorization using the aaa authorization command.
Optionally configure accounting using the aaa accounting command.
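As a minimal configuration sketch of this process (the server address, shared key, local username, and the list name ADMIN are illustrative examples, not taken from the text, and exact syntax varies by IOS release), a router that authenticates management sessions against a TACACS+ server, with the local database as a backup, might look like this:

! Step 1: enable AAA and define the TACACS+ server parameters
aaa new-model
tacacs-server host 10.1.1.10
tacacs-server key MySharedSecret
!
! Local fallback account in case the TACACS+ server is unreachable
username backup privilege 15 password BackupPass
!
! Named method lists for authentication, exec authorization, and exec accounting
aaa authentication login ADMIN group tacacs+ local
aaa authorization exec ADMIN group tacacs+ local
aaa accounting exec ADMIN start-stop group tacacs+
!
! Apply the named method lists to the vty lines
line vty 0 4
 login authentication ADMIN
 authorization exec ADMIN
 accounting exec ADMIN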

Unicast RPF
When Unicast RPF is enabled on an interface, the router examines all packets received on the interface. The router checks to ensure that the source address appears in the routing table and matches the interface on which the packet was received. This "look backwards" capability is available only when Cisco Express Forwarding (CEF) is enabled on the router, since the lookup relies on the presence of the Forwarding Information Base (FIB). As part of its operation, CEF generates the FIB. RPF helps avoid problems caused by the introduction of malformed or forged (spoofed) IP source addresses into a network, by discarding IP packets without verifiable IP source addresses. Unicast RPF checks whether a packet received at a router interface arrives on one of the best return paths to the source of the packet. Unicast RPF does this by doing a reverse lookup in the CEF table. When Unicast RPF does not find a reverse path for the packet, Unicast RPF can drop or forward the packet, depending on whether an ACL has been specified in the Unicast RPF command. If an ACL has been specified in the command, when (and only when) a packet fails the Unicast RPF check, the ACL is checked to see if the packet should be forwarded (using a permit statement in the ACL), or dropped


(using a deny statement in the ACL). Regardless of whether a packet is forwarded or dropped, the packet is counted in the global IP traffic statistics for Unicast RPF drops and in the interface statistics for Unicast RPF. If no ACL has been specified in the Unicast Reverse Path Forwarding command, the router drops the forged or malformed packet immediately, no ACL logging occurs, and the router and interface Unicast RPF counters are updated.
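As a brief sketch (the interface name and ACL number are examples), Unicast RPF is applied per interface once CEF is enabled globally; the optional ACL is consulted only when a packet fails the RPF check:

! CEF must be enabled for Unicast RPF to work
ip cef
!
! Optional ACL consulted only on RPF failures: log and drop spoofed packets
access-list 190 deny ip any any log
!
interface Serial0
 ip verify unicast reverse-path 190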

Unicast RPF events can be logged by specifying the logging option for the ACL entries used by the Unicast RPF command. Log information can be used to analyze information about the attack, such as source address, time, and so on.
To use Unicast RPF, you must have enabled CEF switching or distributed CEF (dCEF) switching in the router. You do not need to configure the input interface for CEF switching. While CEF is running on the router, individual interfaces can be configured with other switching modes. Unicast RPF should not be used on interfaces internal to the network. Internal interfaces will likely have routing asymmetry (multiple routes to the source of a packet). Unicast RPF should be used only where there is natural or configured symmetry. Routers at the edge of the network of an Internet service provider (ISP) are more likely to have symmetrical reverse paths than routers that are in the core of the ISP network. Core routers have no guarantee that the best forwarding path out of the router will be the path selected for packets returning to the router. In this situation there is a likelihood of asymmetric routing, and Unicast RPF is not recommended. The most straightforward approach is to place Unicast RPF at the edge of a network (or, for an ISP, at the customer edge of the network).

SMURF Attack
In a SMURF attack (and the similar UDP "fraggle" attack, named after the programs used to perform the attacks) an attacker sends a moderate amount of traffic and causes an explosion of traffic at the intended target. The method used is this: The attacker sends ICMP echo request packets (UDP echo request packets in the case of a fraggle attack) in which the source IP address has been forged to be that of the target of the attack. The attacker sends these ICMP datagrams to remote LAN broadcast addresses, using so-called directed broadcast addresses. These datagrams are broadcast out on the LANs by the connected router. Hosts on the LAN individually pick up a copy of the ICMP echo request datagram (as they should), and send an ICMP echo reply datagram back to what they identify as the source. If many hosts are alive on the LAN, the amplification factor can be considerable (a factor of 100 is not uncommon).

The attacker can use large packets (typically up to the Ethernet maximum) to increase the impact of the attack. The faster the network connection the attacker has, the more damage the attacker can inflict on the target network. In addition to problems caused for the target host, the influx of traffic can in fact be so great as to have a serious negative effect on networks upstream of the target. The institutions being abused as amplifier networks can also be similarly affected, as their network connections can be overloaded by the echo reply packets destined for the target. The Cisco IOS interface command no ip directed-broadcast can serve as an effective way to prevent smurf and fraggle attacks on the network.

Internet Key Exchange (IKE)
The IETF IP Security Protocol (IPSec) standard includes IKE as a key management protocol standard. The term IPsec refers to a set of protocols designed to protect traffic at the IP level (IPv4 or IPv6). The services provided by IPsec are connectionless integrity, data origin authentication, protection against replays, and confidentiality (data confidentiality and partial protection against traffic analysis). These are provided at the IP layer, thus offering protection for IP and upper layer protocols. Optional in IPv4, IPsec is mandatory in any implementation of IPv6. Once IPv6 is generally adopted, it will be possible for any user wishing to use security functions to use IPsec. In the meantime, IPsec is a Cisco-supported standard under IPv4.
IKE is a hybrid protocol, which implements the Oakley key exchange and Skeme key exchange inside the ISAKMP framework. (ISAKMP, Oakley, and Skeme are security protocols implemented by IKE.) IKE automatically negotiates IPSec security associations and enables IPSec secure communications without manual preconfiguration. IPSec can be configured without IKE, but IKE enhances IPSec through additional features, flexibility, and ease of configuration. IKE provides these benefits:
Eliminates the need to manually specify all IPSec security parameters in the crypto maps at both peers.
Allows encryption keys to change during IPSec sessions.
Allows specification of a lifetime for the IPSec security association.
Allows IPSec to provide anti-replay services.
Permits CA support for a manageable, scalable IPSec implementation.
Allows dynamic authentication of peers.


IKE policies must be created for each peer. An IKE policy defines a combination of security parameters applied during IKE negotiation. IKE negotiations must be protected, so each IKE negotiation begins with each peer agreeing on a common (shared) IKE policy. This policy states the security parameters to be used to protect subsequent IKE negotiations. After two peers agree upon a policy, the security parameters of the policy are identified by a security association established for each peer. These security associations apply to all subsequent IKE traffic during the negotiation. Multiple, prioritized policies can be defined for each peer to ensure that at least one policy will match a remote peer's policy. Five parameters define an IKE policy. These parameters apply to IKE negotiations when IKE security associations are established. Table 8-1 shows the five IKE policy parameters and their accepted values.

Table 8-1. IKE Policy Parameters
Encryption algorithm - Accepted values: 56-bit DES-CBC (keyword des) or 168-bit Triple DES (keyword 3des). Default: 56-bit DES-CBC.
Hash algorithm - Accepted values: SHA-1, HMAC variant (keyword sha) or MD5, HMAC variant (keyword md5). Default: SHA-1.
Authentication method - Accepted values: RSA signatures (keyword rsa-sig) or pre-shared keys (keyword pre-share). Default: RSA signatures.
Diffie-Hellman group identifier - Accepted values: 768-bit Diffie-Hellman (keyword 1) or 1024-bit Diffie-Hellman (keyword 2). Default: 768-bit Diffie-Hellman.
Security association lifetime - Accepted values: any number of seconds. Default: 86,400 seconds (one day).

At the start of the negotiation, IKE looks for a policy that is identical on both peers. The peer that initiates the negotiation sends its policies to the remote peer, and the remote peer tries to find a match. The remote peer looks for a match by comparing its highest priority policy against the other peer's received policies. The remote peer checks each policy in order of its priority (highest priority first) until a match is found.

A match is made when the policies from the two peers contain the same encryption, hash, authentication, and Diffie-Hellman parameter values, and when the remote peer's policy specifies a lifetime less than or equal to the lifetime in the compared policy. If the lifetimes are not identical, the shorter lifetime will be used. If a match is found, IKE will complete negotiation, and IPSec security associations will be created. If an acceptable match is not found, IKE refuses negotiation and IPSec will not be established.
The IKE standard allows selection of parameter values. If you are interoperating with a peer that supports only one of the values for a parameter, your choice is limited to the other peer's supported value. There is often a trade-off between security and performance, and many of the IKE parameter values represent such a trade-off. The following suggestions may help in the selection of parameter values:
The encryption algorithm has two options: 56-bit DES and 168-bit Triple DES.
The hash algorithm has two options: SHA-1 and MD5. MD5 has a smaller digest and is considered to be slightly faster than SHA-1. There has been a demonstrated successful (but extremely difficult) attack against MD5; however, the HMAC variant used by IKE prevents this attack.
The authentication method has two options, RSA signatures and pre-shared keys:
o RSA signatures provide non-repudiation for the IKE negotiation (you can prove after the fact that you had an IKE negotiation with a specific peer). RSA signatures require the use of a CA. Using a CA can significantly improve the manageability and scalability of your IPSec network.
o Pre-shared keys do not scale well in a growing network and are awkward to use when a secured network is large. However, pre-shared keys do not require use of a CA, as RSA signatures do, and can be simpler to set up in a small network with fewer than 10 nodes.
The Diffie-Hellman group identifier has two options: 768-bit or 1024-bit Diffie-Hellman. 1024-bit Diffie-Hellman is more secure than the 768-bit option, but requires more CPU time.
The SA's lifetime can be set to any value. As a general rule, the shorter the lifetime, the more secure the IKE negotiations will be. With longer lifetimes, however, future IPSec security associations can be set up more rapidly.

Multiple IKE policies are supported, each with a different combination of parameter values. For each policy, you assign a unique priority (1 through 65,534, with 1 being the highest priority). Each authentication method requires additional configuration. Depending on the authentication method you specify in your IKE policies, you need to do certain additional configuration before IKE and IPSec can successfully use the IKE policies:


You can configure multiple policies on each peer, but at least one of these policies must contain exactly the same encryption, hash, authentication, and Diffie-Hellman parameter values as one of the policies on the remote peer. The lifetime parameter does not need to be the same, since the lower of the two compared lifetimes will be used. If no policies are configured, your PIX Firewall will use the default policy, which is always set to the lowest priority, and which contains each parameter's default value. As you would expect, if you do not specify a value for a parameter, the default value is assigned.
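On a Cisco IOS router the same idea is expressed with prioritized crypto isakmp policy entries; the following is an illustrative sketch (priorities and parameter values are examples), where policy 10 is offered first and any parameter left unspecified falls back to its default:

! Policy 10: Triple DES, SHA-1, pre-shared keys, 1024-bit Diffie-Hellman, 12-hour lifetime
crypto isakmp policy 10
 encryption 3des
 hash sha
 authentication pre-share
 group 2
 lifetime 43200
!
! Policy 20: only the authentication method is changed; other parameters use defaults
crypto isakmp policy 20
 authentication pre-share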

RSA Signatures Method
If RSA signatures are the authentication method in a policy, you must configure the peers to obtain certificates from a certification authority (CA). The CA must be properly configured to issue the certificates. RSA signatures require that each peer have the remote peer's public signature key. The certificates are used by each peer to securely exchange public keys. When each peer has a valid certificate, the two peers will automatically exchange public keys with each other as part of any IKE negotiation in which RSA signatures are used.

IKE Pre-Shared (Authentication) Keys
If pre-shared keys are specified as the authentication method in a policy, these pre-shared keys must be configured. Each peer's Internet Security Association and Key Management Protocol (ISAKMP) identity is set either to its host name or to its IP address. When two peers use IKE to establish IPSec security associations, each peer sends its identity to the other peer. The default peer identity is its IP address. You could set the identity to be the peer's host name instead. As a general rule, set all peer identities in the same way: all peers should use either IP addresses or host names. If some peers use host names and some peers use IP addresses to identify themselves, IKE negotiations could fail if a peer's identity is not recognized and a DNS lookup cannot resolve the identity.
You should specify the shared keys at each peer. A pre-shared key is shared between two peers. For a given peer you could specify the same key to be shared with multiple peers, but a more secure approach is to specify different keys to be shared between different pairs of peers.
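A minimal sketch of the pre-shared key configuration (the peer address and key string are examples):

! Identify this router by IP address (the default) rather than host name
crypto isakmp identity address
!
! Key shared only with the peer at 192.168.12.2
crypto isakmp key Secret123 address 192.168.12.2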

IP Security Protocol (IPSec)
The IP Security Protocol (IPSec) is a framework of open standards developed by the Internet Engineering Task Force (IETF). IPSec provides security for transmission of sensitive information over unprotected networks such as the Internet. IPSec acts at the network layer, protecting and authenticating IP packets between participating IPSec devices ("peers"), such as Cisco routers. IPSec is a framework that provides data confidentiality, data integrity, and data authentication. IPsec can be applied either to a terminal host or on a security gateway. IPSec can be used to protect one or more data flows between a pair of hosts, between a pair of security gateways, or between a security gateway and a host. Thus IPSec allows for both link-by-link and end-to-end security. IPsec can thus be used in virtual private networks (VPNs) or for remote access protection.
IPSec is a network-layer authentication and encryption security protocol which uses:
An encryption key exchange to build a secure connection
Authentication and encryption protocols that the two peers negotiate and then use throughout the lifetime of the encrypted connection.

IPsec relies on two protocols:
Authentication Header (AH)
Encapsulating Security Payload (ESP)

The parameters necessary under these protocols are managed through security associations (SAs), which represent the parameters used to protect a given part of the traffic. SAs are stored in the Security Association Database (SAD) and are managed using the IKE protocol. The protection offered by IPsec relies on parameters defined in the Security Policy Database (SPD). This database allows a decision to be made for individual packets, as to whether they will be afforded some security services, will be authorized to pass through, or will be rejected. IPsec has two modes:
Transport mode, which protects only the transported data
Tunnel mode, which also protects the IP header.

IPSec provides security services at the IP layer. It uses IKE to handle negotiation of protocols and algorithms based on local policy, and to generate the encryption and authentication keys used by IPSec. Under IPSec, Internet Security Association and Key Management Protocol (ISAKMP) negotiates encryption policy and provides a common framework to generate the keys shared by IPSec peers. The result of ISAKMP negotiations is a security association (SA).
IPSec provides a number of network security services. These services are optional. In general, local security policy will dictate the use of one or more of these services:
Data Confidentiality - An IPSec sender can encrypt packets before transmitting them across a network.
Data Integrity - An IPSec receiver can authenticate packets sent by the IPSec sender to ensure that the data has not been altered during transmission.
Data Origin Authentication - An IPSec receiver can authenticate the source of the IPSec packets sent. This service is dependent upon the data integrity service.
Anti-Replay - An IPSec receiver can detect and reject replayed packets.

IPSec enables data to be transmitted across a public network without observation, modification, or spoofing. This enables virtual private networks (VPNs), including intranets, extranets, and remote user access. For example, under IPSec, a mobile user can establish a secure connection to the office:
A user can establish an IPSec "tunnel" with a corporate firewall, requesting authentication services to gain access to the corporate network
All traffic between the user and the firewall will then be authenticated
The user can then establish an additional IPSec tunnel, requesting data privacy services, with an internal router or network.

IPSec services are similar to those provided by Cisco Encryption Technology (CET), a proprietary Cisco security solution introduced in Cisco IOS Software Release 11.2. (The IPSec standard was not available at the time of Release 11.2.) The advantage of IPSec over CET is that IPSec offers a more robust security solution and is standards-based. IPSec provides data authentication and anti-replay services in addition to data confidentiality services, while CET provides only data confidentiality services.

IPSec Benefits
IPSec security services are supported at the network layer, so you do not have to configure individual workstations, PCs, or applications. This can provide significant cost savings. Instead of supporting security services you do not need and having to coordinate security on a per-application, per-computer basis, you allow the network infrastructure to provide the needed security services under IPSec. IPSec provides support for the IKE protocol and for digital certificates. IKE provides negotiation services and key derivation services for IPSec. Digital certificates allow devices to be automatically authenticated to each other without the manual key exchanges that CET requires. Because IPSec is standards-based, Cisco devices can interoperate with other IPSec-compliant networking devices. IPSec-compliant devices include both Cisco devices and non-Cisco devices. IPSec solutions scale better than CET solutions, making IPSec preferable where secure connections between many devices are required.

The component technologies implemented for IPSec include:
DES - The Data Encryption Standard (DES) is used to encrypt packet data. Cisco IOS implements the mandatory 56-bit DES-CBC with Explicit IV. Cipher Block Chaining (CBC) requires an initialization vector (IV) to start encryption. The IV is explicitly given in the IPSec packet. For backwards compatibility, Cisco IOS IPSec also implements the RFC 1829 version of ESP DES-CBC.
MD5 (HMAC variant) - MD5 (Message Digest 5) is a hash algorithm. HMAC is a keyed hash variant used to authenticate data.
SHA (HMAC variant) - SHA (Secure Hash Algorithm) is a hash algorithm. HMAC is a keyed hash variant used to authenticate data.
AH - Authentication Header. A security protocol which provides data authentication and optional anti-replay services. AH is embedded in the data to be protected (a full IP datagram). Both the older RFC 1828 AH and the updated AH protocol are implemented in IPSec. The updated AH protocol is per the latest version of the "IP Authentication Header" Internet Draft (draft-ietf-ipsec-auth-header-xx.txt).

RFC 1828 specifies the Keyed MD5 authentication algorithm; it does not provide anti-replay services. The updated AH protocol allows for the use of various authentication algorithms; Cisco IOS has implemented the mandatory MD5 and SHA (HMAC variants) authentication algorithms. The updated AH protocol provides anti-replay services.

Terms associated with IPSec
anti-replay - A security service under which the receiver can reject old or duplicate packets in order to protect itself against replay attacks. IPSec provides this optional service through a sequence number combined with data authentication. Cisco IOS IPSec supports this service whenever it provides the data authentication service, except where RFC 1828 does not support the service, and where the service is not available for manually established security associations (that is, security associations established by configuration and not by IKE).
data authentication - Includes two concepts:
Data integrity (verification that data has not been altered).
Data origin authentication (verification that the data was actually sent by the claimed sender).

Data authentication can refer to integrity alone or to both of these concepts (although data origin authentication depends on data integrity).


data confidentiality - A security service under which protected data cannot be observed.
data flow - A grouping of traffic, identified by a combination of source address/mask, destination address/mask, IP next protocol field, and source and destination ports, where the protocol and port fields can have the value of any. All traffic matching a specific combination of these values is logically grouped into a data flow. A data flow can represent a single TCP connection between two hosts, or can represent all of the traffic between two subnets. IPSec protection applies to data flows.
ESP - Encapsulating Security Payload (ESP) is a security protocol which provides data privacy services and optional data authentication, as well as anti-replay services. ESP encapsulates data to be protected.
peer - A peer refers to a router or other device that participates in IPSec.
perfect forward secrecy (PFS) - A cryptographic characteristic associated with a derived shared secret value. With PFS, subsequent keys are not derived from previous keys, so if one key is compromised, previous and subsequent keys are not compromised.
security association - An IPSec security association (SA) describes how two or more entities will use security services in the context of a particular security protocol (AH or ESP) to communicate securely on behalf of a particular data flow. It includes the transform and the shared secret keys used for protecting the traffic. The IPSec security association is established by IKE or by manual user configuration. Security associations are unidirectional, and are unique to a security protocol. When security associations are established for IPSec, the security associations (for each protocol) are established at the same time for both directions. When using IKE to establish the security associations for the data flow, security associations are established as needed and expire after a period of time (or volume of traffic). If the security associations are manually established, they are established as soon as the necessary configuration has been completed and do not expire.
security parameter index (SPI) - A number which, together with an IP address and security protocol, uniquely identifies a particular security association. When using IKE to establish the security associations, the SPI for each security association is a pseudo-randomly derived number. Without IKE, the SPI is manually specified for each security association.
transform - A transform lists a security protocol (AH or ESP) with its corresponding algorithms. One transform is the AH protocol with the HMAC-MD5 authentication algorithm; another transform is the ESP protocol with the 56-bit DES encryption algorithm and the HMAC-SHA authentication algorithm.
tunnel - A secure communication path between two peers, such as two routers. It does not refer to the use of IPSec in tunnel mode.

A Few Points to Note About IPSec
IPSec works with these serial encapsulations: High-Level Data-Link Control (HDLC), Point-to-Point Protocol (PPP), and Frame Relay. IPSec also works with the GRE and IPinIP Layer 3 tunneling protocols; however, multipoint tunnels are not supported. Other Layer 3 tunneling protocols (DLSw, SRB, etc.) are not currently supported for use with IPSec.
The IPSec Working Group has not yet addressed the issue of group key distribution, so IPSec cannot currently be used to protect group traffic (such as broadcast or multicast traffic).
IPSec works with both process switching and fast switching. IPSec does not work with optimum or flow switching.
For now, IPSec can be applied to unicast IP datagrams only. Because the IPSec Working Group has not yet addressed the issue of group key distribution, IPSec does not currently work with multicast or broadcast IP datagrams.
Under Network Address Translation (NAT), you should configure static NAT translations in order for IPSec to work properly. In general, NAT translation needs to occur before the router performs IPSec encapsulation; this means that IPSec should be working with global addresses.
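For the NAT point above, a static translation keeps the inside host's global address constant so that crypto access lists and remote peers can reference it; all addresses and interface names below are examples:

! Inside host 10.1.1.5 always appears as the global address 192.0.2.5
ip nat inside source static 10.1.1.5 192.0.2.5
!
interface Ethernet0
 ip nat inside
interface Serial0
 ip nat outside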

Overview of How IPSec Works
At the risk of oversimplifying, the primary function of IPSec is to provide secure tunnels between two peers, such as two routers.
You define the packets that are considered sensitive and should be sent through these secure tunnels.
You define the parameters used to protect these sensitive packets by specifying tunnel characteristics.

When the IPSec peer recognizes a sensitive packet, it sets up the secure tunnel and sends the packet through the tunnel to the remote peer. The tunnels are sets of security associations established between two IPSec peers. These SAs define the protocols and algorithms to be applied to sensitive packets, and also specify the keying information to be used by the peers. Security associations are unidirectional and are established according to the security protocol in use (AH or ESP). You define traffic to be protected between two IPSec peers by configuring access lists and applying those access lists to interfaces using crypto map sets. Traffic may be selected depending on source and destination address and port, and optionally the Layer 4 protocol.


The access lists used for IPSec only determine which traffic should be protected by IPSec, not which traffic should be blocked or permitted through the interface. Separate access lists define blocking and permitting at the interface. Each crypto map set can contain multiple entries, and each entry can have a different access list. The crypto map entries are searched in order; the router attempts to match the packet to the access list specified in each entry. When a crypto map entry is tagged as ipsec-isakmp, IPSec is triggered. If no SA exists that IPSec can use to protect this traffic to the peer, IPSec uses IKE to negotiate with the remote peer to set up the necessary IPSec SAs on behalf of the data flow. The negotiation uses information specified in the crypto map entry as well as the data flow information from the specific access list entry. When a crypto map entry is tagged as ipsec-manual, IPSec is triggered. If no security association exists that IPSec can use to protect this traffic to the peer, the traffic is dropped. (In this case, the SAs are installed via the configuration, without the intervention of IKE. If the SAs did not exist, IPSec did not have all of the necessary pieces configured.)

SAs established by IKE will have defined lifetimes, so they will periodically expire and require renegotiation. This offers an additional level of security. When a packet matches a permit entry in a particular access list, and the corresponding crypto map entry is tagged as cisco, then CET is triggered, and connections are established when necessary. Under either IPSec or CET, the router will discard packets if no connection or security association exists. Once established, the set of SAs (outbound to the peer) is then applied to the triggering packet as well as to subsequent applicable packets, as those packets exit the router. "Applicable" packets are packets that match the same access list criteria that the original packet matched. To use an example, all applicable packets could be encrypted before being forwarded to the remote peer. The corresponding inbound security associations would be used when processing the incoming traffic from that peer. Access lists associated with IPSec crypto map entries identify the traffic the router requires to be protected by IPSec. Inbound traffic is also processed against the crypto map entries; if a packet matches a permit entry in an access list associated with an IPSec crypto map entry but was not sent as an IPSec-protected packet, the packet is dropped. Multiple IPSec tunnels can exist between two peers to secure different data streams. Each tunnel uses a separate set of SAs. For example, some data streams might be encrypted and authenticated, while other data streams might only be authenticated.

Nesting of IPSec Traffic to Multiple Peers
IPSec traffic can be nested to a series of IPSec peers. For example, in order for traffic to traverse multiple firewalls which have a policy of not permitting traffic that they have not authenticated, the router needs to establish IPSec tunnels with each firewall in turn. The "nearest" firewall becomes the "outermost" IPSec peer. As illustrated in Figure 8-4:
Router A encapsulates traffic destined for Router C in IPSec. In this case, Router C is the IPSec peer.
Before Router A can send this traffic, it must first reencapsulate it in IPSec to send it to Router B. Here Router B is the "outermost" IPSec peer.

Figure 8-4 shows Router A reaching Router C through Router B, with data authentication between Router A and Router B, and data authentication and encryption between Router A and Router C.

Figure 8-4. Nesting example of IPSec peers

Traffic between the "outer" peers can have one kind of protection (such as data authentication) and the traffic between the "inner" peers can have a different protection scheme (such as both data authentication and encryption).

Configuration Tasks
After completing IKE configuration, configure IPSec by completing the following steps at each participating IPSec peer:
Ensure IPSec-Compatible Access Lists
Set Global Lifetimes for IPSec Security Associations
Create Crypto Access Lists
Define Transform Sets
Create Crypto Map Entries
Apply Crypto Map Sets to Interfaces
Monitor and Maintain IPSec

Ensure IPSec-Compatible Access Lists


The IPSec ESP and AH protocols use protocol numbers 50 and 51. IKE uses UDP port 500. Ensure that your access lists are configured so that protocol 50, protocol 51, and UDP port 500 traffic will not be blocked at interfaces used by IPSec. You may need to add an access list statement to explicitly permit this traffic.

Set Global Lifetimes for IPSec Security Associations
You can modify the global lifetime values used when negotiating new IPSec SAs. These global lifetime values can be overridden for a specified crypto map entry. These lifetimes only apply to SAs established via IKE. Manual security associations do not expire. There are two lifetimes:
A "timed" lifetime
A "traffic-volume" lifetime
An SA expires after the first of these lifetimes is reached. The default lifetimes are 3,600 seconds (one hour) and 4,608,000 kilobytes (10 Mbytes per second for one hour). When you modify a global lifetime, the new lifetime value will not be applied to currently existing security associations, but will be used in the negotiation of subsequently established security associations. If you need to use the new values immediately, you can clear all or part of the security association database. Refer to the clear crypto sa command for more details. IPSec security associations use one or more shared secret keys. These keys and their security associations time out together.

Create Crypto Access Lists
Crypto access lists define which IP traffic will be protected by crypto and which traffic will not. Note that crypto access lists are different from regular access lists, which determine the traffic that will be forwarded or blocked at an interface. For example, access lists can be created to protect all IP traffic between subnet A and subnet Y, or all telnet traffic between host A and host B. CET and IPSec use the same access lists; it is the crypto map entry referencing a specific access list that defines whether CET or IPSec processing is applied to the traffic matching a permit in the access list.
Crypto access lists associated with IPSec crypto map entries have four main functions:
Selection of outbound traffic to be protected by IPSec (permit = protect).
Indication of the data flow to be protected by the new SAs (specified by a single permit entry) when initiating negotiations for IPSec security associations.
Processing inbound traffic to filter out and discard traffic that should have been protected by IPSec.
Determination of whether to accept requests for IPSec SAs on behalf of the requested data flows when processing negotiation with an IPSec peer. (Negotiation is only done for ipsec-isakmp crypto map entries.) In order to be accepted, when a peer initiates an IPSec negotiation, the peer must specify a data flow that is "permitted" by a crypto access list associated with an ipsec-isakmp crypto map entry.
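A sketch of mirror-image crypto access lists on two peers (subnets and list numbers are examples); each peer permits the traffic it sends toward the other, and the two lists are mirror images of each other:

! On Router A: protect traffic from 10.1.1.0/24 to 10.2.2.0/24
access-list 110 permit ip 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255
!
! On Router B: the mirror image of Router A's list
access-list 110 permit ip 10.2.2.0 0.0.0.255 10.1.1.0 0.0.0.255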

If you want different traffic to receive different levels of IPSec protection, you need to create different crypto access lists to define the different types of traffic. For example, you might:
Want some traffic to be authenticated only
And other traffic to be both authenticated and encrypted

To do this, you would define different access lists in different crypto map entries specifying separate IPSec policies.

Define Transform Sets
During the IPSec SA negotiation, the peers agree to use a particular transform set for protecting a particular data flow. A transform set is a combination of security protocols and algorithms. With manually established SAs, there is no negotiation with the peer, so both sides must specify the identical transform set. During IPSec security association negotiations using IKE, the peers search for a transform set that is identical at both peers. When a matching transform set is found, it is applied to the protected traffic as part of both peers' IPSec SAs. You can specify multiple transform sets, and then specify one or more of these transform sets in a crypto map entry. The transform set defined in the crypto map entry is then used in the IPSec security association negotiation to protect the data flows specified by that crypto map entry's access list. When you modify a transform set definition, the change is only made to crypto map entries that reference that transform set. The change will not be applied to existing SAs, but will be used subsequently in negotiations to establish new SAs. To make the new settings take immediate effect, you would need to clear all or part of the SA database by using the clear crypto sa command.

Create Crypto Map Entries
IPSec crypto map entries consolidate the various parts used to set up IPSec SAs, including:


Traffic which needs to be protected by IPSec (using a crypto access list)
The IPSec security which should be applied to this traffic (selecting from a list of one or more transform sets)
Whether SAs are manually established or are established via IKE
The granularity of the flow to be protected by a set of SAs
The local address to be used for IPSec traffic

Where IPSec-protected traffic should be sent (the remote IPSec peer)
Other parameters necessary to define an IPSec SA
Under IKE, the IPSec peers can negotiate the settings they will use for the new SAs. This means that you can specify lists (such as lists of acceptable transforms) within the crypto map entry. The use of crypto maps is as follows:
Crypto map entries with the same crypto map name (but different map sequence numbers) are grouped into a crypto map set.
These crypto map sets are later applied to interfaces; all IP traffic passing through the interface is then evaluated against the applied crypto map set.
If a static crypto map entry detects outbound IP traffic that should be protected and the crypto map specifies the use of IKE, an SA is negotiated with the remote peer according to parameters included in the crypto map entry.
Otherwise, if the crypto map entry specifies the use of manual SAs, a security association should have already been established via configuration.
If a dynamic crypto map entry sees outbound traffic that should be protected and no SA exists, the packet is dropped.
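Pulling these pieces together, a transform set, a static ipsec-isakmp crypto map entry, and its application to an interface might look like the following sketch (names, addresses, and numbers are examples):

! Transform set: ESP with 56-bit DES encryption and SHA-HMAC authentication
crypto ipsec transform-set BASIC esp-des esp-sha-hmac
!
! Optional: shorten the global SA lifetimes used in new negotiations
crypto ipsec security-association lifetime seconds 1800
!
! Static crypto map entry using IKE; access list 110 selects the protected traffic
crypto map VPNMAP 10 ipsec-isakmp
 set peer 192.168.12.2
 set transform-set BASIC
 match address 110
!
interface Serial0
 crypto map VPNMAP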

Crypto maps define the policies used during negotiation of SAs. If the local router initiates the negotiation, it will use the policy specified in the static crypto map entries to create the offer to be sent to its IPSec peer. If the IPSec peer initiates the negotiation, the local router will check the policy from the static crypto map entries, as well as any referenced dynamic crypto map entries, to decide whether to accept or reject the peer's request. When peers try to establish an SA, for IPSec to succeed they must each have at least one crypto map entry compatible with one of the other peer's crypto map entries. To be compatible, two crypto map entries must, at a minimum:
Contain compatible crypto access lists (for example, mirror image access lists). In a case where the responding peer is using dynamic crypto maps, the entries in the local crypto access list must be "permitted" by the peer's crypto access list.
Identify the other peer (unless the responding peer is using dynamic crypto maps).

Have at least one transform set in common.

You should definitely consider configuring dynamic crypto maps, particularly if you are not sure how to configure each crypto map parameter to guarantee compatibility with peers:
Dynamic crypto maps are useful when an IPSec peer initiates the establishment of the IPSec tunnels (this would be the case for an IPSec router fronting a server)
Dynamic crypto maps are not useful if the establishment of the IPSec tunnels is locally initiated, because dynamic crypto maps are only policy templates, not complete statements of policy

The access lists in any referenced dynamic crypto map entry are used for crypto packet filtering. You can define multiple remote peers using crypto maps to allow for load sharing:
If one peer fails, a protected path will still exist
The peer that receives packets is determined by the last peer that the router heard from (received either traffic or a negotiation request) for a given data flow
If the attempt fails with the first peer, IKE tries the next peer on the crypto map list

CET crypto maps, used with Cisco Encryption Technology (effective with Cisco IOS Release 11.2), are now expanded to specify IPSec policy.

Crypto Maps: How Many Should You Create?
You can build multiple crypto map entries for a given interface if you assign the same map-name to all the crypto map entries. Crypto map entries with different map-numbers but the same map-name are considered part of a single set, and you can apply only one crypto map set to a single interface. A crypto map set can include any combination of CET, IPSec/manual, and IPSec/IKE entries. You must have, in any crypto map set, at least one crypto map entry for each interface which will be sending or receiving IPSec-protected traffic. Multiple interfaces can share the same crypto map set if you choose to apply the same policy to multiple interfaces. When you create more than one crypto map entry for a given interface, use the map-number of each map entry to rank map entries. Lower map-numbers have higher priorities. At the crypto map set's interface, traffic will be evaluated against higher priority map entries first.
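For example, one crypto map set can carry two entries for two different peers and traffic types, evaluated in order of sequence number; the values below are illustrative, and AUTH-ONLY is assumed to be a second transform set defined elsewhere:

! Entry 10 (higher priority): traffic matching list 110 goes to the first peer
crypto map VPNMAP 10 ipsec-isakmp
 set peer 192.168.12.2
 set transform-set BASIC
 match address 110
!
! Entry 20: traffic matching list 120 goes to a second peer with a different policy
crypto map VPNMAP 20 ipsec-isakmp
 set peer 192.168.13.2
 set transform-set AUTH-ONLY
 match address 120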

Multiple crypto map entries are required for a given interface when any of the following conditions exist:


Different data flows are to be handled by separate IPSec peers.
When you are not using IKE to establish a particular set of security associations, and need to specify multiple access list entries, you must create separate access lists (one per permit entry) and specify a separate crypto map entry for each access list.
When a need exists for different IPSec security for different types of traffic (to the same or separate IPSec peers). To use an earlier example, you may need traffic between one set of subnets to be authenticated, and traffic between another set of subnets to be both authenticated and encrypted. The different types of traffic should be defined in separate crypto access lists, and you will need to create a separate crypto map entry for each crypto access list.

Crypto Map Entries for Manual Security Associations
For manual SAs, the configuration information in both systems must be identical for traffic to be processed successfully by IPSec. Manual SAs are used by prior arrangement between the users of the local router and its IPSec peer. The remote system may not currently support IKE, or the two parties might desire to begin with manual SAs and move later to establish SAs via IKE. When IKE is not used for establishing SAs, there is no negotiation of SAs. The local router can simultaneously support manual and IKE-established security associations within a single crypto map set. In nearly all cases there is very little reason to disable IKE on the local router, except in the unlikely case that the router supports only manual SAs.

Creating Dynamic Crypto Maps With IKE
Dynamic crypto maps are available only under IKE. A dynamic crypto map entry is basically a crypto map entry without all the parameters configured. A dynamic crypto map is a policy template where the missing parameters are later dynamically configured (as the result of an IPSec negotiation) to match a remote peer's requirements.

Dynamic crypto maps are used when a remote peer tries to initiate an IPSec security association with the router. Dynamic crypto maps are also used in evaluating traffic. Dynamic crypto maps are not used by the router to initiate new IPSec security associations with remote peers. Thus remote peers can exchange IPSec traffic with the router, in a situation where the router does not have a prior crypto map entry specifically configured to meet the remote peer's requirements. Dynamic crypto maps can simplify IPSec configuration, and you should consider using them with networks where the peers are not always fixed. This is the case for mobile users, who obtain dynamically-assigned IP addresses:

First, mobile clients need to authenticate themselves to the local router's IKE by a means other than an IP address, such as a fully qualified domain name.
Once authenticated, the SA request can be processed against a dynamic crypto map which has been set up following local policy to accept requests from previously unknown peers.

A dynamic crypto map set is incorporated by reference as part of a crypto map set. Any crypto map entries that reference dynamic crypto map sets should be the lowest priority crypto map entries in the crypto map set (that is, they should be assigned the highest sequence numbers) so that other crypto map entries are evaluated first. In that way, the dynamic crypto map set is examined only when the earlier static map entries are not successfully matched. When the router accepts a request from a peer, at the same time it installs the new IPSec SAs, the router also installs a temporary crypto map entry. This entry is completed with the results of the negotiation. After the negotiation is complete:
The router resumes normal processing
Uses this temporary crypto map entry as a normal entry
Requests new SAs if the current ones are expiring, using the policy specified in the temporary crypto map entry
Once the flow expires and all the corresponding SAs expire, the temporary crypto map entry is removed.

For static crypto map entries, when outbound traffic matches a permit statement in an access list and the corresponding SA has not yet been established, the router will initiate new SAs with the remote peer. For dynamic crypto map entries, if no SA existed, the traffic would simply be dropped (since dynamic crypto maps are not used for initiating new SAs). For both static and dynamic crypto maps, when unprotected inbound traffic matches a permit statement in an access list, and the corresponding crypto map entry is tagged as "IPSec," the traffic is dropped, since it is not IPSec-protected. The security policy as specified by the crypto map entry states that this traffic must be IPSec-protected.

Creating a Dynamic Crypto Map Set
A dynamic crypto map set has a single dynamic crypto map entry which specifies the acceptable transform sets, and nothing else. Dynamic crypto map entries should specify crypto access lists that limit the traffic for which IPSec SAs can be established. A dynamic crypto map entry which does not specify an access list will be ignored during traffic filtering. A dynamic crypto map entry with an empty access list will cause traffic to be dropped.
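A sketch of a dynamic crypto map (names and numbers are examples): the dynamic entry specifies little more than the transform set, and it is referenced from the static crypto map set at the highest sequence number so it is evaluated last:

! Dynamic entry: only the transform set (and optionally an access list) is specified
crypto dynamic-map DYNMAP 10
 set transform-set BASIC
!
! Reference the dynamic set from the crypto map set at the lowest priority
crypto map VPNMAP 65000 ipsec-isakmp dynamic DYNMAP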


Dynamic crypto map entries, like regular static crypto map entries, are grouped into sets. A set is a group of dynamic crypto map entries with the same dynamic-map-name but each with an individual dynamic-map-number. You can add one or more dynamic crypto map sets into a crypto map set, via crypto map entries which reference the dynamic crypto map sets. As stated above, you should set the crypto map entries referencing dynamic maps to be the lowest priority entries in a crypto map set (that is, the entries with the highest sequence numbers).

Apply Crypto Map Sets to Interfaces
A crypto map set must be applied to each interface through which CET or IPSec traffic will flow. Applying the crypto map set to an interface instructs the router to evaluate all interface traffic against the crypto map set and to use the specified policy during connection or SA negotiation on behalf of traffic to be crypto protected. You can apply the same crypto map set to more than one interface to provide redundancy:
Each interface will have its own piece of the SA database.
The IP address of the local interface will be used as the default local address for IPSec traffic originating from or destined to that interface.

If, for redundancy purposes, you decide to apply the same crypto map set to multiple interfaces, you will need to specify an identifying interface. This does the following:
The per-interface portion of the IPSec SA database will be established one time and shared for traffic through all the interfaces that share the same crypto map.
The IP address of the identifying interface will be used as the local address for IPSec traffic originating from or destined to those interfaces sharing the same crypto map set.
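As a sketch of this redundancy case (interface names and the address are examples), the same crypto map set is applied to two interfaces and a loopback is designated as the identifying interface, as suggested below, so both interfaces share one local address and one SA database:

interface Loopback0
 ip address 10.255.255.1 255.255.255.255
!
crypto map VPNMAP local-address Loopback0
!
interface Serial0
 crypto map VPNMAP
interface Serial1
 crypto map VPNMAP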

One suggestion would be to use a loopback interface as the identifying interface.

Monitor and Maintain IPSec
Some configuration changes will become effective only when negotiating subsequent SAs. For the new settings to take immediate effect, you must clear the existing SAs so that they can be re-established with the changed configuration. For manually established SAs, you must clear and reinitialize the SAs or the changes will never take effect. If the router is actively processing IPSec traffic, it is desirable to clear only the SAs established by a given crypto map set.

Clearing the full SA database should be reserved for large-scale changes, or for situations in which the router is processing very little other IPSec traffic.
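Typical commands for monitoring IPSec and for clearing SAs after a configuration change (the crypto map name is an example) include:

show crypto isakmp sa
show crypto ipsec sa
!
! Clear only the SAs created by one crypto map set, or the entire SA database
clear crypto sa map VPNMAP
clear crypto sa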

IP Spoofing
There are a few variations on the types of attacks that successfully employ IP spoofing. Although some are relatively dated, others are very pertinent to current security concerns.

Non-Blind Spoofing
This type of attack takes place when the attacker is on the same subnet as the victim. The sequence and acknowledgement numbers can be sniffed, eliminating the potential difficulty of calculating them accurately. The biggest threat of spoofing in this instance is session hijacking. This is accomplished by corrupting the datastream of an established connection, then re-establishing it with the attack machine based on correct sequence and acknowledgement numbers. Using this technique, an attacker could effectively bypass any authentication measures taken to build the connection.

Blind Spoofing
This is a more sophisticated attack, because the sequence and acknowledgement numbers are unreachable. In order to circumvent this, several packets are sent to the target machine in order to sample sequence numbers. While not the case today, machines in the past used basic techniques for generating sequence numbers. It was relatively easy to discover the exact formula by studying packets and TCP sessions. Today, most operating systems implement random sequence number generation, making it difficult to predict them accurately. If, however, the sequence number is compromised, data can be sent to the target. Several years ago, many machines used host-based authentication services (for example, Rlogin). A properly crafted attack could blindly add the requisite data to a system (for example, a new user account), enabling full access for the attacker who was impersonating a trusted host.

Man in the Middle Attack
Both types of spoofing are forms of a common security violation known as a man in the middle (MITM) attack. In these attacks, a malicious party intercepts a legitimate communication between two friendly parties. The malicious host then controls the flow of communication and can eliminate or alter the information sent by one of the original participants without the knowledge of either the original sender or the recipient. In this way, an attacker can fool a victim into disclosing confidential information by "spoofing" the identity of the original sender, who is presumably trusted by the recipient.

Denial of Service Attack


IP spoofing is almost always used in what is currently one of the most difficult attacks to defend against - denial of service attacks, or DoS. Since crackers are concerned only with consuming bandwidth and resources, they need not worry about properly completing handshakes and transactions. Rather, they wish to flood the victim with as many packets as possible in a short amount of time. In order to prolong the effectiveness of the attack, they spoof source IP addresses to make tracing and stopping the DoS as difficult as possible. When multiple compromised hosts are participating in the attack, all sending spoofed traffic, it is very challenging to quickly block traffic.

Chapter 8 Questions
For questions 1-7, your actual access list may vary somewhat, but please make sure you have the same result.

8-1. Write a one-line access list that will meet the following criteria:
First octet must be 10
Second octet can be anything
Third octet must allow the odd values, starting with 193
Fourth octet must be in the .32 subnet, assuming a /27 mask length.

8-2. Create an access list to be applied to an outbound interface that will deny the user on external host 173.3.8.41 from accessing telnet service on your internal host with the address of 155.66.75.1, while allowing all other traffic to pass.
Router(config)# access-list 110 permit ip any any

8-3. Write an access list that will allow users to access web pages on server 172.16.2.12, but nothing else.

8-4. Create and apply an access list that will allow only host 192.168.1.52 access to a VTY console session.
Router(config)# line vty 0 4
Router(config-line)# access-class 1 in

8-5. Your branch offices connect to the company's HQ through leased lines. These remote facilities only have Internet access through HQ, which has one leased line to Qwest Communications, a tier-one provider. HQ hosts all the company's e-mail on its MS Exchange servers. You want to block Internet web access from the branch offices, but allow e-mail and other network access. Write and apply an outbound access list on interface E0 to block Internet access to the web, and allow other traffic.
Router(config)# access-list 101 permit ip any any
Router(config)# interface ethernet 0
Router(config-if)# ip access-group 101 out

8-6. What happens if you apply a non-existent access list?
a) Nothing will be forwarded
b) Everything will be forwarded
c) The default access list in IOS versions 11.2 and newer will kick in
d) The router will freeze
e) The command will not be accepted and an error will be given at the console
f) Smoke, huge billowing clouds of smoke

8-7. Which of the following statements best describes what this access list is meant to accomplish?
Router(config)# access-list 111 permit tcp any 192.168.0.0 0.0.255.255 established
a) Telnet sessions will be denied to the 172.16.0.0 network only
b) Telnet sessions will be permitted, regardless of the source address
c) Telnet sessions will be permitted to the 172.16.0.0 network only
d) Telnet sessions will be denied, regardless of the source address
e) Telnet sessions will be denied, if initiated from any address other than from the 172.16.0.0 network
f) Telnet sessions will be permitted, if initiated from any address other than from the 172.16.0.0 network

8-8. Which of the wireless options provides encryption for data sent over a wireless network?
a) WAP
b) IPSEC/IKE
c) AP
d) WEP

8-9. You try to perform a traceroute to an Internet destination from your PC, but the traceroute hangs when it reaches the router. Currently, there is an inbound access-list applied to the serial interface on the Internet router with a single line: access-list 101 permit tcp any any. What access-list entries may need to be added to the access-list in order to get traceroute to work?
a) access-list 101 permit tcp any any
b) access-list 101 permit icmp any any time-exceeded
   access-list 101 permit icmp any any port-unreachable
c) access-list 101 permit icmp any any time-exceeded
   access-list 101 permit icmp any any echo-reply
d) access-list 101 permit icmp any any echo
   access-list 101 permit icmp any any net-unreachable
e) access-list 101 permit udp any any
   access-list 101 permit icmp any any protocol-unreachable

8-10. You are writing an access list on a router to prevent users on the Ethernet LAN connected to Ethernet interface 0 from accessing a TFTP server (10.1.1.5) located on the LAN connected to Ethernet interface 1. Which of the following would be the correct configuration change if applying the ACL inbound on the Ethernet 0 interface?
a) access-list 1 deny tcp 0.0.0.0 255.255.255.255 10.1.1.5 0.0.0.0 eq 69
b) access-list 100 deny tcp 0.0.0.0 255.255.255.255 10.1.1.5 0.0.0.0 eq 69
c) access-list 100 deny tcp 0.0.0.0 255.255.255.255 10.1.1.5 0.0.0.0 eq 68
d) access-list 100 deny tcp 10.1.1.5 0.0.0.0 0.0.0.0 255.255.255.255 eq 69
e) access-list 100 deny tcp 0.0.0.0 255.255.255.255 10.1.1.5 0.0.0.0 eq port 68
f) None of the above
8-11. You wish to allow only telnet traffic to a server with an IP address of 10.1.1.100. You add the following access list on the router:
access-list 101 permit tcp any host 10.1.1.100 eq telnet
access-list 101 deny ip any any
You then apply this access list to the inbound direction of the serial interface. Which types of packets will be permitted through the router after this change? (Choose all that apply)
a) A non-initial fragment packet passing through to another host that's not 10.1.1.100.
b) A non-initial fragment packet en route to the server on port 23.
c) A non-fragment packet en route to the server on port 21.
d) A non-initial fragment packet going to the server on port 21.
e) An initial-fragment or non-fragment packet en route to the server on port 23.
8-12. Unauthorized access to Cisco devices can be prevented through different privilege level settings. How many of these privilege levels exist?
a) 0
b) 4
c) 5
d) 15
e) 16
8-13. The following access list is configured on router EMI:
access-list 100 deny udp 0.0.0.0 255.255.255.255 10.1.1.5 0.0.0.0 eq 69
What does the access-list accomplish? Note: Assume that all other traffic is permitted with a permit all statement at the end of the access list.
a) It blocks all incoming UDP traffic.
b) It blocks all incoming traffic arriving on E0 from accessing any FTP server.
c) It blocks all incoming traffic, except traffic addressed to 10.1.1.5, from accessing any FTP servers.
d) It blocks all incoming traffic arriving on E0 from accessing the FTP server with an address of 10.1.1.5.
e) This access list is trying to block traffic from accessing a TFTP server. You would also need the following to block the traffic:

access-list deny tcp 0.0.0.0 255.255.255.255 10.1.1.5 0.0.0.0 eq 69
8-14. Private VLANs are set up in a Cisco switch for 3 ports as shown below:
tamer (enable) show pvlan
Primary Secondary Secondary-Type Port
500     501       community      5/37
500     502       isolated       5/38-39
tamer (enable) show pvlan mapping
Port Primary Secondary
15/1 500     501-502


interface vlan 500
 ip address 10.10.10.2 255.255.255.0
 ip proxy-arp
A PC called EMHost1 is plugged in to port 5/38, using IP address 10.10.10.137/24. Using the above information, EMHost1 has which of the following?
a) Layer 2 connectivity with port 5/37 and port 5/39.
b) Layer 3 connectivity with port 5/37 and port 5/39.
c) Layer 3 connectivity with port 5/39 but not with port 5/37.
d) Layer 2 connectivity with port 5/39 but not with port 5/37.
e) None of the above.
8-15. The EnableMode system administrator wants to authenticate LAN users attached to ports on the existing Catalyst 6509 switch. In order to do this, the following is configured:
aaa new-model
username myname password abc123
aaa authentication ppp access-dot1x local
aaa authentication login access1 local
aaa authentication dot1x default radius
dot1x system-auth-control
tacacs-server host 192.168.1.15 key qwert123
radius-server host 192.168.2.27 key poiuy098
!
interface fastethernet 5/1
 dot1x port-control auto
What is the effect of the configuration on users attempting to access FastEthernet 5/1?
a) They will be authenticated via 802.1x using the local database.
b) They will be authenticated via 802.1x using the server at IP address 192.168.1.150.
c) They will be authenticated via 802.1x using the server at IP address 192.168.2.27.
d) They will be authenticated via ppp using the local database.
e) They will be authenticated via ppp using the server at IP address 192.168.2.27.
f) They will be authenticated via ppp using the server at IP address 192.168.1.15.
8-16. For security reasons, you wish to maintain a degree of logical separation between your servers and the rest of the LAN. The servers should be able to see broadcasts and multicasts only from each other and the default gateway. They should not see this type of traffic from other LAN devices. What kind of ports should be configured for these servers on the Catalyst switch?
a) Access Ports.
b) Community Ports.
c) Isolated Ports.
d) Promiscuous Ports.
e) Private Ports.
f) Span Ports.
8-17. Using the VLAN Access Control List (VACL) configuration below, how many total mask entries are required in the Ternary Content Addressable Memory (TCAM) table?
set security acl ip Control_Access permit host 100.1.1.100
set security acl ip Control_Access deny 100.14.11.0 255.255.255.0
set security acl ip Control_Access permit host 172.16.84.99
set security acl ip Control_Access deny 177.163.4.0 255.255.255.128
set security acl ip Control_Access permit host 72.16.82.3
set security acl ip Control_Access deny host 175.17.1.4
set security acl ip Control_Access permit host 191.169.99.150
set security acl ip Control_Access deny host 191.169.230.1
a) 2
b) 3
c) 4
d) 6
e) 8
8-18. With regard to the use of VLAN Access Control Lists (VACL) on a Catalyst 6500 series switch, which of the following are true statements? (Choose all that apply.)
a) VACLs can be used to forward, drop, and redirect traffic depending on Layer 2 and Layer 3 information.
b) VACLs cannot be used when using QoS on the switch.
c) VACLs can be used together with router interface access lists.
d) VACLs can be used for traffic that is being Layer 3 switched.
e) VACLs cause extra latency for traffic passing through the switch.
8-19. A new Catalyst 6500 running Cat OS was recently installed in the EnableMode network. In order to increase the security of your LAN, you configure this Catalyst switch using port security. What statement is true about port security?
a) Port security can be configured on a trunk port.
b) Port security can be configured on a SPAN destination port.
c) If a security violation occurs, the Link LED for that port turns orange, and a link-down trap is sent to the Simple Network Management Protocol (SNMP) manager.
d) Port security can be configured on a SPAN source port.
e) Static CAM entries can be configured on a port configured with port security.
f) Ports that were disabled due to security violations will be automatically re-enabled when the host with the valid MAC address is re-connected.
8-20. After properly configuring multiple VLANs, the EnableMode network has decided to increase the security of its VLAN environment. Which of the following can be done on a switched network to enhance security measures? (Choose all that apply.)
a) Enable the rootguard feature to prevent a directly or indirectly connected STP-capable device from affecting the location of the root bridge.
b) If a port is connected to a foreign device, make sure to disable CDP, DTP, PAgP, UDLD, and any other unnecessary protocol, and to enable UplinkFast/BPDU guard on it.
c) Configure the VTP domains appropriately or turn off VTP altogether if you want to limit or prevent possible undesirable protocol interactions with regard to network-wide VLAN configuration.
d) Disable all unused ports and place them in an unused VLAN to avoid unauthorized access.
e) Set the native VLAN ID to match the port VLAN ID (PVID) of any 802.1Q trunks to prevent spoofing from one VLAN to another.
8-21. Passwords for Enterprise guests should normally be:
a) Easy to remember
b) The same as the username
c) At least 10 characters
d) Contain uppercase letters
e) Time limited to the guest visit
8-22. When segmenting guest traffic across the enterprise wireless network you should take which of the following approaches?
a) Use a firewall
b) Always give guest traffic higher priority
c) Always give guest traffic lower priority
d) Separate guest traffic as close to the edge as possible
e) Use Access Lists
f) None of the above

8-23. What are the differences between TACACS+ and RADIUS? (Choose all that apply)
a) TACACS+ uses UDP while RADIUS uses TCP for transport.
b) RADIUS and TACACS+ encrypt the entire body of the packet.
c) RADIUS is an IETF standard, while TACACS+ is not.
d) TACACS+ sends a separate request for authorization, while RADIUS uses the same request for authentication and authorization.
e) RADIUS offers multi-protocol support while TACACS+ does not.
8-24. You want to prevent all telnet access to your Cisco router. In doing so, you type in the following:
line vty 0 4
 no login
 password Cisco
Will this prevent all telnet access to the router as desired?
a) Yes. The no login command disables all telnet access, even though the password is Cisco.
b) Yes. The VTY password is needed but not set, so all access will be denied.
c) No. The VTY password is Cisco.
d) No. No password is needed for VTY access.
e) No. The password is login.
8-25. A new TACACS+ server is configured to provide authentication to a NAS for remote access users. A user tries to connect to the network and fails. The NAS reports a FAIL message. What could be the problem? (Choose all that apply.)
a) The TACACS+ server is down.
b) The password for this user is incorrect.
c) The username does not exist in the TACACS+ user database.
d) The TACACS+ service is not running on the server.
e) The NAS server lost its route to the TACACS+ server.
8-26. While setting up remote access for your network, you type in the aaa new-model configuration line in your Cisco router. Which authentication methods have you disabled as a result of this change? (Choose all that apply.)
a) TACACS
b) TACACS+
c) Extended TACACS (XTACACS)
d) RADIUS
e) RADIUS+
f) Kerberos
8-27. With regard to IPSec, which of the following are true?
a) IPSec supports Multicast only in combination with GRE tunnels.
b) IPSec does not support Multicast.
c) IPSec supports Multicast in IOS 12.x or later.
d) IPSec supports Multicast in IOS 10.x or earlier.
e) IPSec supports Multicast.
8-28. You are setting up a secure connection to another company's device. You are not certain that they are using Cisco equipment, so you want your router to manually exchange RSA public keys with the other device. How should you configure your router?
a) Use IPSec with manual keying
b) Use IPSec with RSA signatures
c) Use IPSec with RSA encrypted nonces
d) Use Cisco Encryption Technology
e) Use IPSec using preshared keys
f) Use IPSec using RSA authentication
8-29. Which of the following are security services provided by IPSec?
a) Data confidentiality
b) Data integrity
c) Data origin authentication
d) Protection for multicast/broadcast traffic
e) Anti-replay
8-30. You wish to change the IKE policies of your IPSec configuration in your site-to-site router VPN. Which of the following are valid ISAKMP policy parameters that can be changed in the configuration?
a) Hash algorithm
b) Authentication method
c) Diffie-Hellman group identifier
d) Security Association's lifetime
e) Encryption algorithm
f) All of the above
g) None of the above
8-31. Router EMI has been configured for authentication as shown in the following display:
aaa new-model
username myname password abc123
aaa authentication login default enable
aaa authentication login access1 local
aaa authentication login access2 radius tacacs+
aaa authentication login access3 tacacs+ local
tacacs-server host 192.168.1.15 key qwert123
radius-server host 192.168.2.27 key poiuy098

line console 0
 login authentication access3

line vty 0 4
 password dfgh456
 login

What method is being used to secure the console port of this router?
a) Authentication is being done using the local database.
b) Authentication is being done using the login password dfgh456.
c) Authentication is being done using the enable password as a default.
d) Authentication is being done using the server at IP address 192.168.2.27.
e) Authentication is being done using the server at IP address 192.168.1.15. If a connection to that server fails, the local database will be used.



Chapter 8 Answers
8-1. Router(config)# access-list 10 permit 10.0.193.32 0.255.54.31
8-2. Router(config)# access-list 110 deny tcp host 173.3.8.41 host 155.66.75.1 eq telnet
8-3. Router(config)# access-list 110 permit tcp any host 172.16.2.12 eq 80
8-4. Router(config)# access-list 1 permit 192.168.1.52 0.0.0.0
8-5. Router(config)# access-list 101 deny tcp any any eq www
8-6 8-7 8-8 8-9 8-10 8-11 8-12 8-13 8-14 8-15 8-16 8-17 8-18 8-20 8-21 8-22 8-22 8-23 8-24 8-25 8-26 8-27 8-28 8-29 8-30 8-31
b d b f b, d, b c b b a, c, d c e, c, d d c, d d b, b a, c b a a, b, c, and f

Chapter 9

Enterprise Wireless Mobility


Wireless Standards
The IEEE 802.11 family of standards is a group of specifications developed by a committee of the Institute of Electrical and Electronics Engineers Inc. (IEEE) for wireless local area networks (WLANs). These specifications define an over-the-air interface between a wireless client and a base station, or access point, or between two or more wireless clients. The IEEE 802.11 committee was set up in 1990 and published the first of the 802.11 family of standards in the late 1990s.
802.11 came into existence after a Federal Communications Commission (FCC) decision in 1985 to open several bands of the wireless spectrum for general use without an FCC license. These bands were previously reserved for equipment such as microwave ovens. To operate in these bands, however, devices needed to use "spread spectrum" technology. This technology spreads a radio signal over a range of frequencies, making the signal less susceptible to interference and more difficult to intercept.
The IEEE committee defined two initial variants of 802.11: 802.11b, operating in the Industrial, Scientific and Medical (ISM) band of 2.4 GHz, and 802.11a, operating in the Unlicensed National Information Infrastructure (U-NII) bands of 5.3 GHz and 5.8 GHz. A more recent variant is 802.11g, which uses an advanced modulation method, orthogonal frequency-division multiplexing (OFDM), and operates in the 2.4 GHz band.
Wireless/802.11b
Although the first wireless networks appeared over two decades ago, adoption was initially slow because of low data rates, proprietary solutions, limited interoperability and cost. In 1999, the IEEE ratified the 802.11b standard with data rates up to 11 Mbps, and interest in WLANs exploded. The popularity of wireless/802.11b increased with the growth of home broadband Internet access. Wireless remains the most convenient way to share a broadband link between several PCs spread over a home. The growth of hotspots, free and fee-based public access points, has added to WLAN's popularity.
Vendor interoperability is ensured by the Wi-Fi Alliance (formerly known as the Wireless Ethernet Compatibility Alliance, or WECA), an independent international nonprofit association that identifies compliant products from more than 140 companies, including component manufacturers, equipment vendors, and service providers, under the "Wi-Fi" brand.
As with any new technology, wireless is continually evolving. Multiple standards that offer advancements in speed, bandwidth and security either exist, or are being developed to compete for dominance in the high-bandwidth WLAN market. These include:
802.11b -- This is the wireless standard in general use today, and can be found in both corporate and home wireless markets, with wireless "hot spots" popping up in hotels, airports, convention centers, and coffee shops worldwide. It operates using 11 channels that are 22 MHz wide in the 2.4 GHz unlicensed radio band. 802.11b delivers a maximum data rate of 11 Mbps using Direct Sequence Spread Spectrum (DSSS) for signal modulation.
802.11a -- Operates in the unlicensed portions of the 5.1 to 5.8 GHz frequency band, making 802.11a immune to interference from devices that operate in the 2.4 GHz band, such as microwave ovens, cordless phones, and Bluetooth (a short-range, low-speed, point-to-point, personal-area-network wireless standard) devices. 802.11a has a top data rate of 54 Mbps, nearly five times the bandwidth of 802.11b. 802.11a was the first of the higher-speed wireless standards to hit the market, but has a major drawback in that it does not provide interoperability with existing 802.11b equipment. 802.11a uses the 8 channels present in the lowest two U-NII bands, providing for a 200 MHz spectrum. 802.11a defines 8 channels in the 5 GHz spectrum at 25 MHz centers.
802.11g -- A later entry, this standard has a top data rate of 54 Mbps, but operates in the same unlicensed portion of the 2.4 GHz spectrum as 802.11b, making it backward compatible with 802.11b devices. This new standard is limited to the same three channels and crowded 2.4 GHz band as 802.11b, creating possible scalability and interference issues.

Wireless Networking Terms
Access Point (AP) -- A wireless LAN transceiver that acts as a center point of an all-wireless network or as a connection point between wireless and wired networks.
Antenna -- A device for transmitting or receiving a radio frequency (RF). Antennas are designed for specific and relatively tightly defined frequencies, and are quite varied in design. An antenna designed for 2.4-GHz 802.11b devices will not work with 2.5-GHz devices.
Beamwidth -- The angle of signal coverage provided by an antenna. Beamwidth typically decreases as antenna gain increases.
Broadband -- In general, an RF system is deemed "broadband" if it has a constant data rate at or in excess of 1.5 Mbps. Its corresponding opposite is "narrowband."
Fresnel Effect -- A phenomenon related to line of sight whereby an object that does not obstruct the visual line of sight obstructs the line of transmission for radio frequencies.
Microcell -- A bounded physical space in which numerous wireless devices can communicate. Because it is possible to have overlapping cells as well as isolated cells, the boundaries of the cell are established by some rule or convention.
Multipath -- The echoes created as a radio signal bounces off of physical objects.
Occlusion -- An obstruction to a propagation path. If there is no line of sight to a wireless access point, then there are physical obstructions (boxes, walls, warehouse shelving, etc.) blocking the view. This can lead to poor wireless signal transmission.
Roaming -- Movement of a wireless node between two microcells. Roaming usually occurs in infrastructure networks built around multiple access points.
Spread Spectrum -- A radio transmission technology that "spreads" the user information over a much wider bandwidth than otherwise required in order to gain benefits such as improved interference tolerance and unlicensed operation.
Wireless Access Protocol -- A language used for writing Web pages that uses far less overhead, making it more preferable for wireless access to the Internet by personal digital assistants (PDAs) and Web-enabled cellular phones.
Wireless Bridges -- Wireless bridges are used to establish a direct link between two sites. The network traffic between the two sites is bridged or forwarded to the other bridge as if it were on one network. This is called a point-to-point link. A point-to-multipoint wireless link is an expansion of the point-to-point link in which a central bridge establishes multiple point-to-point links. Using point-to-multipoint connections, multiple remote sites, such as buildings, can be linked as a single logical network. In point-to-multipoint architecture, remote sites are linked to a single root bridge at a centralized site.
Radio Frequency (RF) Terms
Hz -- The international unit for measuring frequency is the Hertz (Hz), equivalent to the older measure of cycles per second.
MHz -- one million Hertz.
GHz -- one billion Hertz, or one thousand MHz.

To understand these in context:
Standard U.S. electrical power frequency is 60 Hz
AM broadcast band is 0.55-1.6 MHz
FM broadcast is 88-108 MHz
TDMA mobile phones use bands in the 800 MHz and 1.9 GHz ranges
GSM mobile telephones use the 900 MHz and 1.8 GHz bands (850 MHz and 1.9 GHz in North America)
Microwave ovens operate at 2.45 GHz
Cordless home phones typically run in the 900 MHz, 2.4 GHz or 5.8 GHz bands

Wireless Deployment Issues
Interference sources -- 802.11a (5.1 to 5.8 GHz band) may be the better choice for "noisy" wireless environments with interference sources, such as Bluetooth devices or non-802.11b wireless phones in the 2.4 GHz frequency band.
Need for channels -- 802.11b offers only three nonoverlapping frequency channels; 802.11a offers eight for more flexibility in structuring coverage areas.
Installed base -- The more 802.11b clients installed, the greater the need to have access points that support 802.11b.
Types of applications -- 802.11b is better for transaction-intensive applications; 802.11a is better for data-hungry applications.
Cost -- 802.11a systems could cost 20 to 30 percent more than current 802.11b products and may have a higher deployment cost due to the different RF characteristics of the 5-GHz frequency.
Limited range -- The farthest a device can currently receive an adequate signal from a standard 802.11 access point is about 300 feet (92 m). Performance decreases significantly with distance from the access point. Generally, 802.11b and 802.11g wireless access points have a range of up to 150 feet (46 m) indoors and 300 feet (92 m) outdoors, and 802.11a has about one third of that range. However, actual range depends on several factors (such as walls, room arrangement, antenna positions, antenna gain, and RF interference).

802.11b Framing Information
The IEEE 802.11b MAC frame format includes:
Frame Control (FC) -- protocol version and frame type (management, data and control).
Duration/ID (ID) -- the Station ID is used for the Power-Save poll message frame type; the Duration value is used for the Network Allocation Vector (NAV) calculation.
Address fields (1-4) -- up to 4 addresses (source, destination, sender and receiver addresses), depending on the frame control field (the ToDS and FromDS bits).
Sequence Control -- consists of a fragment number and a sequence number. Sequence control is used to represent the order of different fragments belonging to the same frame and to recognize packet duplications.
Data -- the information that is transmitted or received.
CRC -- contains a 32-bit Cyclic Redundancy Check (CRC).

The Frame Control format contains the following:
Protocol Version -- indicates the version of the IEEE 802.11 standard.
Type and Subtype -- Type: Management, Control and Data; Subtype: RTS, CTS, ACK, etc.
To DS -- set to 1 when the frame is sent to the Distribution System (DS).
From DS -- set to 1 when the frame is received from the Distribution System (DS).
More Fragment -- set to 1 when there are more fragments belonging to the same frame following the current fragment.
Retry -- indicates that this fragment is a retransmission of a previously transmitted fragment. (Allows the receiver to recognize duplicate transmissions of frames.)
Power Management -- indicates the power management mode that the station will be in after the transmission of the frame.
More Data -- indicates that there are more frames buffered for this station.
WEP -- indicates that the frame body is encrypted according to the WEP (wired equivalent privacy) algorithm.
Order -- indicates that the frame is being sent using the Strictly-Ordered service class.

802.1x Authentication
There are three main components of 802.1x authentication for wireless LANs:
Supplicant (client software)
Authenticator (access point)
Authentication Server (often a RADIUS server)

The main steps in authentication are:
A client device initiates a connection to the access point.
The access point detects the client and enables the client port. This forces the client port into an unauthorized state, so only 802.1x traffic can be forwarded. Other traffic such as DHCP, HTTP, FTP, SMTP and POP3 is blocked.
The client then sends an Extensible Authentication Protocol (EAP) start message.
The access point replies with an EAP-request identity message to obtain the client's identity.
The client's EAP-response packet with the client's identity is forwarded to the authentication server.
The authentication server is configured with a specific authentication algorithm to authenticate clients. The result is an accept or reject packet from the authentication server to the access point.
When the access point receives the accept packet, it transitions the client's port to an authorized state, and traffic will then be forwarded.
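To relate these steps to a configuration, the following is a minimal sketch of 802.1x port authentication on a Cisco IOS Catalyst switch. The RADIUS server address, key, and interface are assumptions, and exact command syntax varies by platform and IOS release:

aaa new-model
aaa authentication dot1x default group radius
radius-server host 192.168.50.10 key s3cr3tkey    ! hypothetical RADIUS server and key
dot1x system-auth-control                         ! enable 802.1x globally
!
interface FastEthernet0/1
 switchport mode access
 dot1x port-control auto    ! port forwards only EAPOL until authentication succeeds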

The IEEE 802.1x standard defines 802.1x port-based authentication as a client-server based access control and authentication protocol that restricts unauthorized clients from connecting to a LAN through publicly accessible ports. The authentication server must validate any client connected to a switch port before making any LAN or switch services available. Prior to client authentication, 802.1x access control allows only Extensible Authentication Protocol over LAN (EAPOL) traffic through the port to which the client is connected. After successful authentication, normal traffic can pass through the port.
Cisco Compatible Extensions (CCX)
The Cisco Compatible Extensions (CCX) Program for WLAN devices is a Cisco licensing and testing program to certify compatibility with Cisco wireless hardware and software products. In the program, Cisco licenses a specification with WLAN standards and Cisco proprietary innovations. A program participant, such as a maker of a WLAN client adapter or client device, implements support for all features and then submits the product to an independent testing laboratory. After passing all Cisco-required tests, a device is certified as "Cisco Compatible." The most recent CCX specification is CCX version 3 (CCX-V3).
Cisco CCX specifications cover the company's implementations of strong user authentication and encryption. The CCX specification complies with the Cisco Wireless Security Suite, is compatible with Cisco's mechanism for assigning WLAN clients to virtual LANs, and is fully compliant with Wi-Fi and 802.11 standards. CCX supports IEEE 802.1x PEAP (Protected Extensible Authentication Protocol) and the AES advanced encryption standard.
Related Cisco Products
The Cisco Catalyst 6500 Series Wireless LAN Services Module (WLSM) integrates wired and wireless network services in large enterprises. It also enables fast secure inter-subnet roaming, important for latency-sensitive applications such as wireless voice. The WLSM module, together with the CiscoWorks Wireless LAN Solution Engine (WLSE), which manages and secures the radio-frequency (RF) airspace, delivers scalable management, security, and RF control for very large wireless networks.
Cisco SWAN Wireless Domain Services (WDS) are Cisco IOS software services that expand WLAN client mobility, simplify WLAN deployment and management, and enhance WLAN security. These services are supported on access points, Cisco and Cisco Compatible client devices, the Cisco Catalyst 6500 Series WLSM, and other Cisco LAN switches and routers. WDS includes radio management aggregation, fast secure roaming, client tracking, and WAN link remote site survivability. Radio management aggregation under WDS supports RF managed services such as rogue access point detection for WLAN threat defense, interference detection, assisted site surveys, and self-healing WLANs.
Tables 9-1, 9-2, 9-3 and 9-4 cover features of Cisco's Aironet access points.
Table 9-1. Operational Capabilities of Cisco Aironet Access Points
Cisco Series     Autonomous Operation   Lightweight Operation
1000 Series      No                     Yes
1100 Series      Yes                    No
1130AG Series    Yes                    Yes
1200 Series      Yes                    Yes
1230AG Series    Yes                    Yes
1240AG Series    Yes                    Yes
1300 Series      Yes                    No

Table 9-2. Cisco Aironet Access Point Support for 802.11a/b/g
Cisco Series     802.11b   802.11g   802.11a
1000 Series      Yes       Yes       Yes
1100 Series      Yes       Yes       No
1130AG Series    Yes       Yes       Yes
1200 Series      Yes       Yes       Optional*
1230AG Series    Yes       Yes       Yes
1240AG Series    Yes       Yes       Yes
1300 Series      Yes       Yes       No

Table 9-3. Cisco Aironet Access Points for Different Environments
(Offices and similar environments / Challenging indoor RF environments / Outdoors)
1000 Series model 1010*: Recommended* / Not recommended / Not recommended
1000 Series model 1020*: Recommended* (Model 1030 for branch offices) / Recommended (AP1020 or AP1030 for remote offices) / Not recommended
1100 Series: Recommended** / Not recommended / Not recommended
1130AG Series: Ideal / Not recommended / Not recommended
1200 Series: Recommended*** / Recommended / Recommended****
1230AG Series: Recommended*** / Recommended / Recommended****
1240AG Series: Recommended*** / Ideal / Recommended****
1300 Series: Not recommended / Not recommended / Ideal**

* For lightweight deployment only
** For autonomous deployment only
*** Particularly for deployments above suspended ceilings
**** Can be deployed outdoors when deployed in a weatherproof NEMA-rated enclosure

Table 9-4. Cisco Aironet Access Points

Access Points for Offices and Similar Environments

Cisco Aironet 1130AG Series Access Point -- Dual-band lightweight or autonomous access point with integrated antennas for easy deployment in offices and similar RF environments. Features/Benefits:
Two high-performance IEEE 802.11a and 802.11g radios offering 108 Mbps of capacity
2.4 and 5 GHz integrated diversity omnidirectional antennas for easy deployment without external antennas
Available in either a lightweight version, or an autonomous version that may be field-upgraded to lightweight operation
Low-profile plastic case
32 MB of memory with 16 MB of storage
Operating temperature range of 32 to 104°F (0 to 40°C)
Inline power support (Cisco prestandard and 802.3af)
Console port for management
Support for WPA and 802.11i/WPA2
Integrated and secure mounting system
UL2043-rated for placement in plenum areas

Cisco Aironet 1100 Series Access Point -- Single-band autonomous access point with integrated antennas for easy deployment in offices and similar environments. Features/Benefits:
Single 802.11g radio offering 54 Mbps of capacity
2.4 GHz integrated diversity dipole antennas
Available in an autonomous version only
16 MB of memory with 8 MB of storage
Operating temperature range of 32 to 104°F (0 to 40°C)
Inline power support (Cisco prestandard)
Support for WPA and 802.11i/WPA2
Integrated and secure mounting system
UL2043-rated for placement in plenum areas

Cisco Aironet 1000 Series Lightweight Access Point Model 1010 -- Dual-band lightweight access point with integrated antennas for easy deployment in offices and similar RF environments. Features/Benefits:
Two IEEE 802.11a and 802.11g radios offering 108 Mbps of capacity
2.4 and 5 GHz integrated antennas for easy deployment without external antennas
Available in a lightweight version only
16 MB of memory with 8 MB of storage
Plastic case
Operating temperature range of 32 to 104°F (0 to 40°C)
Inline power support (802.3af)
Support for WPA and 802.11i/WPA2
UL2043-rated for placement in plenum areas

Access Points for Challenging Indoor RF Environments

Cisco Aironet 1240AG Series Access Point -- Second-generation dual-band lightweight or autonomous access point with dual-diversity antenna connectors for challenging RF environments. Features/Benefits:
Two high-performance IEEE 802.11a and 802.11g radios offering 108 Mbps of capacity
2.4 and 5 GHz dual-diversity RP-TNC connectors for external antenna support
Available in either a lightweight version, or an autonomous version that may be field-upgraded to lightweight operation
Rugged metal case
32 MB of memory with 16 MB of storage
Operating temperature range of -4 to 131°F (-20 to 55°C)
Inline power support (Cisco prestandard and 802.3af)
Console port for management
Support for WPA and 802.11i/WPA2
Complete with integrated and secure mounting system
UL2043-rated for placement in plenum areas

Cisco Aironet 1230AG Series Access Point -- First-generation dual-band lightweight or autonomous access point with dual-diversity antenna connectors for challenging RF environments. Features/Benefits:
Two high-performance IEEE 802.11a and 802.11g radios offering 108 Mbps of capacity
2.4 and 5 GHz dual-diversity RP-TNC connectors for external antenna support
Available in either a lightweight version, or an autonomous version that may be field-upgraded to lightweight operation
Rugged metal case
16 MB of memory with 8 MB of storage
Operating temperature range of -4 to 131°F (-20 to 55°C)
Inline power support (Cisco prestandard)
Console port for management
Support for WPA and 802.11i/WPA2
Complete with integrated and secure mounting system
UL2043-rated for placement in plenum areas

Cisco Aironet 1200 Series Access Point -- Single-band lightweight or autonomous access point with dual-diversity antenna connectors for challenging RF environments. Features/Benefits:
Single high-performance 802.11g radio offering 54 Mbps of capacity
Field-upgradable to support 802.11a with a hardware upgrade module
2.4 GHz dual-diversity RP-TNC connectors for external antenna support
Available in either a lightweight version, or an autonomous version that may be field-upgraded to lightweight operation
Rugged metal case
16 MB of memory with 8 MB of storage
Operating temperature range of -4 to 131°F (-20 to 55°C)
Inline power support (Cisco prestandard)
Console port for management
Support for WPA and 802.11i/WPA2
Complete with integrated and secure mounting system
UL2043-rated for placement in plenum areas

Cisco Aironet 1000 Series Lightweight Access Point Model 1020 -- Dual-band lightweight access point with antenna connectors for challenging RF environments. Features/Benefits:
Two IEEE 802.11a and 802.11g radios offering 108 Mbps of capacity
2.4 GHz dual-diversity RP-TNC connectors for external antenna support
5 GHz non-diversity RP-TNC connector for external antenna support
Available in a lightweight version only
Metal and plastic case
16 MB of memory with 8 MB of storage
Operating temperature range of 32 to 104°F (0 to 40°C)
Inline power support (802.3af)
Support for WPA and 802.11i/WPA2
UL2043-rated for placement in plenum areas

Cisco Aironet 1300 Series Outdoor Access Point/Bridge -- Single-band autonomous access point and wireless bridge with a NEMA-4 compliant case for mounting in outdoor areas. Features/Benefits:
Single 802.11g radio offering 54 Mbps of capacity
2.4 GHz dual-diversity RP-TNC connectors for external antenna support
Configurable as an autonomous access point, wireless bridge, or as a workgroup bridge
Support for both point-to-point and point-to-multipoint configurations
Weather-resistant NEMA-4 compliant case
Integrated or optional external antennas for flexibility in deployment
16 MB of memory with 8 MB of storage
Operating temperature range of -22 to 131°F (-30 to 55°C)
Inline power support (Cisco prestandard)
Console port for management
Support for WPA and 802.11i/WPA2
Complete with integrated and secure mounting system
UL2043-rated for placement in plenum areas
Antenna Types
Omnidirectional Antennas
An omnidirectional antenna has a full 360-degree radiation pattern and is used for communication with wireless devices when coverage in all directions is required. Omnidirectional antennas have gain ratings ranging from 2.2 to 12 dBi.
Directional Antennas
Directional antennas focus or direct radiation energy and reception sensitivity in a specific direction. Directional antennas offer greater range than omnidirectional antennas, but must be aimed at another antenna. Directional antennas include the parabolic dish antenna, the patch antenna, and the Yagi antenna. Parabolic dishes have very high gain (typically 21 dBi) and a very narrow radiation angle (typically 12.5 degrees) and must be precisely aimed at the other antenna. A patch antenna is more tolerant of orientation, but needs to be positioned to face the direction of the other antenna. Yagi antennas have high gain (typically around 13.5 dBi) and a fairly wide radiation angle (typically 25 to 30 degrees).
Table 9-5. Antenna Maximum Range (Antenna Type -- Distance)
Omnidirectional 2.2 dBi antenna -- Indoor: 350 ft at 1 Mbps; Outdoor: 2000 ft at 1 Mbps


Omnidirectional 5.2 dBi antenna -- 5000 ft at 2 Mbps data rate
Directional high-gain Yagi antenna -- 6.5 miles at 2 Mbps data rate
Directional parabolic dish antenna -- 25 miles at 2 Mbps data rate

Site Surveys

Every wireless network application is unique. Differences exist in component configuration, placement, and physical environment. To establish the feasibility of any WLAN or radio frequency (RF) project, a site survey, with complete supporting documentation, should be performed. A WLAN site survey takes into account the radius around one or more access points and the structural components of the facility to determine coverage. This survey involves verifying a clear line of sight between points, consulting topographical maps, and using global positioning systems (GPS) to pinpoint locations and to evaluate needs relating to mounting equipment and towers. A number of documents are generated from this survey, including a bill of materials, requirements for tower and antenna placement, drawings and site-related documents including construction materials, and an implementation plan.
In setting up a WLAN for corporate guest access, a consideration is to ensure that the radio frequency coverage is limited to within the building. Leaking of RF coverage to the outside opens a potential for unauthorized access from unknown outside users.
Keep the following guidelines in mind when preparing for a site survey:
Perform the site survey when the RF link is functioning with all other systems and noise sources operational.
Execute the site survey entirely from the mobile station.
When using the active mode, conduct the site survey with all variables set to operational values.

Additional Information
Also consider the following site survey operating and environmental considerations:
Physical environment -- Clear or open areas provide more radio range than closed or filled areas. Also, the less cluttered the work environment, the greater the range.
Antenna type and placement -- Proper antenna configuration is critical to increasing radio range. As a general rule, range increases in proportion to antenna height.
Obstructions -- Physical obstructions such as metal shelving or a steel pillar can hinder performance of the client adapter. Avoid locating the workstation where there is a metal barrier between the sending and receiving antennas.
Building materials -- Radio penetration is influenced by building construction material. Drywall construction allows more range than concrete block walls. Metal or steel construction is a barrier to radio signals.
Data rates -- Sensitivity and range are inversely proportional to data transmission rates. Maximum radio range is achieved at the lowest workable data rate, and a decrease in receiver threshold sensitivity occurs as the radio data rate increases.

Wireless Handsets (Other Than Cisco)
SpectraLink is the main vendor today in the VoWi-Fi industry. SpectraLink products range from small, stripped-down models to ruggedized devices with push-to-talk capabilities. Current choices are to use SpectraLink handsets or Cisco handsets. SpectraLink offers 802.11 handsets with a docking station that includes an integrated speakerphone and charging cradle. SpectraLink also offers a Voice Priority protocol, which is supported both by established vendors and WLAN start-ups.
Cisco Structured Wireless-Aware Network (SWAN)
The Cisco Structured Wireless-Aware Network (SWAN) is a framework for integrating and extending wired and wireless networks to organizations deploying WLANs. SWAN extends "wireless awareness" into the network infrastructure, providing levels of security, scalability, reliability, ease of deployment, and management for wireless LANs that organizations have come to expect from their wired LANs.
Cisco access points support all available 802.1x/EAP methods, in addition to supporting WPA for air link encryption. Two chief security concerns for public wireless LANs (PWLAN) are user authentication, which is addressed through the 802.1x/EAP suite of protocols, and encryption of individual user data. The Cisco PWLAN solution has implemented features in the IOS Software for Cisco's access zone routers (AZR) that reduce the risk of session hijacking associated with malicious IP spoofing. Operators can use the features available in Cisco access points to prevent local peer attacks as well as to guard against man-in-the-middle spoofing of infrastructure addresses.
802.11 On Its Own Is Inherently Insecure
Conventional 802.11 WLAN security includes the use of open or shared-key authentication and static wired equivalent privacy (WEP) keys. This combination offers a rudimentary level of access control and privacy, but each element can be compromised. The following sections describe problems with wireless network security and various security elements and the challenges involved in their use in enterprise environments.

Prevention
Physical security
Organizational policy
Supported WLAN infrastructure
802.1x port-based security on edge switches

Detection
Physically observing WLAN access point placement and usage
Using wireless analyzers or sniffers
Using scripted tools on the wired infrastructure

Wireless Networks Are Targets for Intruders


Wireless networks have become a target for malicious intruders. Many organizations still deploy wireless technology without fully considering security. This is due to the wide availability of low cost devices, ease of deployment, and productivity gains. Because WLAN devices ship with security features disabled, many WLAN installations have attracted the attention of the intruder community. Although many intruders are using these connections as a means to get free Internet access or to hide their identity, a smaller, malicious group sees an opportunity to break into networks which are difficult to attack from the Internet. When WLAN data is not encrypted, the packets can be viewed by anyone within radio frequency range. For example, a person using a Linux laptop with a WLAN adapter and a program such as TCPDUMP can receive, view, and store packets circulating on a target WLAN.

Interference and Jamming


Interference is any unwanted RF signal that prevents a system from receiving information reliably. WLAN systems are not protected from interference under Federal Communications Commission (FCC) or International Telecommunication Union (ITU) regulations, nor are WLANs allowed to create interference in other systems. All wireless signals are subject to interference. RF jamming can block signals. Repetitively hammering an access point with successful or unsuccessful access requests will exhaust its available RF channels and knock it off the network. Other wireless services using the same RF bands as a WLAN can interfere with the WLAN, reducing range and usable bandwidth. Bluetooth technology, used to communicate between handsets and other information appliances, uses the same 2.4 GHz radio frequency as WLAN devices and can interfere with WLAN communications. Common WLAN interference sources include airport radar, amateur radio, other wireless devices, and other wireless bridge or WLAN networks. An indicator of possible interference is a high level of packet errors with a strong signal. Before you can resolve any interference problem, you must first isolate the cause of the interference.
MAC Authentication
Wireless cards have a unique Media Access Control (MAC) address burned into, and printed on, every card. This means that WLAN access points can, through a card's MAC address, identify any wireless card ever manufactured. Some WLANs require that cards be registered before any wireless services can be used. The access point can identify a card by user, but this is difficult to manage since every access point will need to refer to the list of users. Also, this cannot exclude intruders who use WLAN cards loaded with firmware which simulates a randomly chosen, or deliberately spoofed, address. Using such a fictitious address, an intruder can attempt to inject network traffic or spoof legitimate users.
Ad Hoc Versus Infrastructure Modes
The most common WLAN mode is infrastructure mode. In infrastructure mode, all wireless clients connect through an access point for all their communications. But WLANs can also be set up as independent peer-to-peer networks, commonly called "ad hoc WLANs". In an ad hoc WLAN, client devices equipped with compatible WLAN adapters within range of one another can share files directly. Ad hoc WLAN range varies, depending on the type of WLAN system and antenna. Laptop and desktop computers equipped with 802.11b or 802.11a WLAN cards can create ad hoc networks if they are within a short unobstructed distance of one another.
Ad hoc WLANs have a significant security impact. Many wireless cards, including some shipped by PC manufacturers, support ad hoc mode. When adapters use ad hoc mode, any intruder with an adapter configured for ad hoc mode and using the same settings as other adapters may obtain unauthorized access to clients.
Service Denial or Degradation
One shortcoming of basic 802.11 is that management messages are not authenticated. Management messages include beacon, probe request or response, association request or response, re-association request or response, disassociation, and de-authentication messages. Denial-of-service (DoS) attacks are made possible through lack of authentication of these management messages. This type of DoS attack has been demonstrated using open source tools such as wlan-jack.

Wireless Networks Are Weapons
An access point which is accessible to an organization's users but is not managed as a part of the organization's trusted network is called a rogue access point. Most rogue APs are installed by employees without the knowledge of the network administrator. The threat posed by rogue access points can be reduced by preventing their deployment and by detecting those which are deployed. A typical rogue access point is an inexpensive AP that an employee purchases and plugs into an available switch port, with no security measures enabled. An intruder, even one located outside the organization's facilities, can gain access to the trusted network simply by associating with a rogue access point. Another type of rogue AP is one which appears to WLAN users as a trusted access point and tricks users into associating with it, allowing a knowledgeable intruder to manipulate wireless frames as they cross the access point.
Authentication
Two means of client authentication are supported in the 802.11 standard: open authentication and shared-key authentication.

Open authentication requires supplying the correct service set identifier (SSID). With open authentication, the use of WEP allows a client with the correct WEP keys to send data to and receive data from the access point. With shared-key authentication, the access point sends a client device a challenge text packet that the client must then encrypt with the correct WEP key and return to the access point. If the client has the wrong key or no key, authentication will fail and the client cannot associate with the access point. Shared-key authentication is not secure, since any intruder who detects both the clear-text challenge and the same challenge encrypted with a WEP key can decipher the WEP key.
Key Management
A key that is often used, but not considered secure, is a "static" WEP key. A static WEP key is composed of either 40 or 128 bits statically defined by the system administrator for all clients that communicate with an access point. When static WEP keys are used, a system administrator must go through the time-consuming task of entering the same keys on every device in the WLAN. If a device using static WEP keys is lost or stolen, anyone with access to the device can gain access to the WLAN. The WLAN administrator will not be able to detect that an unauthorized user has infiltrated the WLAN unless the loss is reported. The administrator must then change the WEP key on every device that uses the same WEP key. Also, if a static WEP key is deciphered through a tool like AirSnort, an administrator has no way of knowing that the key has been compromised by an intruder.
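For reference, a static WEP key on a Cisco Aironet IOS access point is entered roughly along the following lines. This is a hedged sketch only; the key value is a placeholder and the radio interface name depends on the hardware:

interface Dot11Radio0
 ! key slot 1, 40-bit key (10 hex digits) entered in clear text, used as the transmit key
 encryption key 1 size 40bit 0 1234567890 transmit-key
 encryption mode wep mandatory

Every client must then be configured with the same key value, which is exactly the administrative burden described above.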

802.11 Wired Equivalent Privacy (WEP)
WEP is defined in the 802.11 standards as a mechanism to protect over-the-air transmission between WLAN access points and network interface cards (NICs). Working at the data link layer, WEP requires that communicating parties share the same secret key. To avoid conflicting with U.S. export controls in effect at the time the standard was developed, 802.11b required 40-bit encryption keys. Many vendors now support a 128-bit standard. WEP can be cracked, using available off-the-shelf tools, in both 40- and 128-bit versions. On a busy network, 128-bit static WEP keys can be obtained in as short a time as 15 minutes.
Security Extensions to WEP Are Required
Three technologies are recommended as alternatives to basic WEP. These include:
A network layer encryption approach relying on IP security (IPsec)
A mutual authentication-based key distribution method using 802.1x
Proprietary improvements to WEP recently implemented by Cisco

The IEEE 802.11i task group and the Wi-Fi Alliance compliance testing committee are standard-setting bodies for WLAN authentication and encryption improvements to WEP.
IPsec in a WLAN Environment
IPsec was developed as a framework of open standards for remote access protection and secure private communications over IP VPN networks to assure confidentiality, integrity, and authentication of data communications across public networks, such as the Internet. IPsec acts at the network layer, protecting and authenticating IP packets between participating IPsec devices. IPsec is a network-layer authentication and encryption security protocol which uses:
An encryption key exchange to build a secure connection
Authentication and encryption protocols that two peers negotiate and then use throughout the lifetime of the encrypted connection

As covered in Chapter 8, IPsec is a security protocol which can protect multiple data flows between a pair of devices. IPsec supports both link-by-link and end-to-end security. IPsec can thus be used in virtual private networks (VPNs) or for remote access protection. IPsec can secure WLANs by overlaying IPsec on 802.11 wireless traffic:
When deploying IPsec in a WLAN environment, an IPsec client must be placed on every PC connected to the wireless network.
The client device must establish an IPsec tunnel to route any traffic to the wired network. Filters are used to prevent wireless traffic from reaching any destination other than the VPN gateway and Dynamic Host Configuration Protocol (DHCP) or Domain Name System (DNS) server.
IPsec supports confidentiality of IP traffic, as well as authentication and anti-replay capabilities. Confidentiality is achieved through encryption either by using a Data Encryption Standard (DES) variant called Triple DES (3DES), or by using the newer Advanced Encryption Standard (AES).
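To illustrate the pieces involved, a minimal IOS site-to-site IPsec sketch is shown below. The peer address, key, networks, and names are assumptions rather than values from this text:

crypto isakmp policy 10
 encryption aes              ! IKE phase 1 encryption
 hash sha
 authentication pre-share
 group 2
crypto isakmp key MyPreSharedKey address 203.0.113.10
!
crypto ipsec transform-set WLAN-SET esp-aes esp-sha-hmac    ! phase 2: AES with SHA HMAC
!
access-list 150 permit ip 10.20.0.0 0.0.255.255 10.30.0.0 0.0.255.255   ! traffic to protect
!
crypto map WLAN-MAP 10 ipsec-isakmp
 set peer 203.0.113.10
 set transform-set WLAN-SET
 match address 150
!
interface FastEthernet0/0
 crypto map WLAN-MAP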

802.1X/EAP
Another security approach for WLANs focuses on centralized authentication and dynamic key distribution. This approach uses the IEEE 802.11 Task Group "i" end-to-end framework, combining 802.1x and the Extensible Authentication Protocol (EAP) to provide this enhanced functionality. Cisco has incorporated 802.1x and EAP into its Wireless Security Suite. The three main elements of a combined 802.1x and EAP approach are:
Mutual authentication between client and RADIUS authentication server
Encryption keys dynamically derived after authentication
Centralized policy control, where a session time-out triggers reauthentication and new encryption key generation

Combined 802.1x and EAP follow these rules:
A wireless client which associates with an access point cannot access the network before the user successfully logs onto the network.
After association, the client and the network (access point or RADIUS server) are required to exchange EAP messages for mutual authentication. The client verifies the RADIUS server credentials, and vice versa.
An EAP supplicant is used on the client to obtain the user credentials (user ID and password, user ID and one-time password [OTP], or digital certificate).
After the client and server mutually authenticate, the RADIUS server and client then create a client-specific WEP key for use by the client during the current logon session.
User passwords and session keys are never transmitted unencrypted over the wireless link.

The step-by-step procedure is illustrated in Figure 9-1 and shown below:



Figure 9-1. 802.1x/EAP Authentication Process (steps 4 and 5 may be reversed depending upon the EAP type)

The 802.1x/EAP authentication process step-by-step: A wireless client accesses (associates with) an access point. The access point blocks client access to the network's resources until a client logs on to the network. The client supplies network login credentials (user ID and password, user ID and OTP, or user ID and digital certificate) via an EAP supplicant. The wireless client and a RADIUS server on the wired LAN perform a mutual authentication, using 802.lx and EAP. In the first phase of EAP authentication, the RADIUS server verifies the client credentials, or vice versa. In the second phase of EAP authentication, mutual authentication is completed by the client verifying the RADIUS server credential, or vice versa.

5. When mutual authentication is successfully completed, the RADIUS server and the client determine a WEP key that is distinct to the client. The client loads this key and prepares to use it for the logon session.
6. The RADIUS server sends the WEP key, called a session key, over the wired LAN to the access point.
7. The access point encrypts its broadcast key with the session key and sends the encrypted key to the client, which uses the session key to decrypt it.
8. The client and access point activate WEP and use the session and broadcast WEP keys for all communications during the remainder of the session, or until a time-out is reached and new WEP keys are generated.

Both the session key and broadcast key are changed at regular intervals. At the end of EAP authentication, the RADIUS server specifies the session key timeout to the access point; the broadcast key rotation interval can be configured on the access point.

EAP provides three major benefits over basic 802.11 WEP security:
- First, a mutual authentication scheme. This scheme effectively eliminates man-in-the-middle (MITM) attacks introduced by rogue access points and RADIUS servers.
- Second, centralized management and distribution of encryption keys. Even if the WEP implementation of RC4 had no flaws, there would still be the administrative difficulty of distributing static keys to all access points and clients in the network. Each time a wireless device was lost, the network would need to be rekeyed to prevent anyone using the lost system from gaining unauthorized access.
- Third, the ability to define centralized policy control, in which a session time-out triggers reauthentication and generation of a new key.
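As a concrete illustration, the following is a minimal, hedged sketch of how combined 802.1X/EAP with dynamic WEP keys might be enabled on a Cisco Aironet IOS access point. The server address, shared secret, SSID, and method-list names are placeholders, and exact syntax varies by IOS release (12.3JA-class syntax is assumed; older releases configure the SSID under the radio interface).

! Hypothetical RADIUS server and method list (names and addresses are placeholders)
aaa new-model
aaa group server radius rad_eap
 server 10.10.10.5 auth-port 1812 acct-port 1813
aaa authentication login eap_methods group rad_eap
radius-server host 10.10.10.5 auth-port 1812 acct-port 1813 key s3cr3tkey
!
! SSID that requires EAP authentication (Network-EAP for Cisco clients,
! open-with-EAP for third-party 802.1X supplicants)
dot11 ssid SECURE-WLAN
   authentication open eap eap_methods
   authentication network-eap eap_methods
!
interface Dot11Radio0
 encryption mode ciphers wep128
 ssid SECURE-WLAN

With a configuration along these lines, the RADIUS server derives and delivers the per-session unicast WEP key described in the steps above.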

EAP Authentication Protocols
Several types of EAP are available:
- EAP-Cisco Wireless (LEAP)
- EAP-Transport Layer Security (EAP-TLS)
- Protected EAP (PEAP)
- EAP-Tunneled TLS (EAP-TTLS)
- EAP-Subscriber Identity Module (EAP-SIM)

In the Cisco SAFE wireless architecture, LEAP, EAP-TLS, and PEAP are all viable EAP protocols for mutual authentication in WLAN deployments.

Lightweight Extensible Authentication Protocol (LEAP)
Cisco LEAP is the most common EAP type in WLAN use today. LEAP supports the three 802.1X and EAP elements mentioned above: mutual authentication, encrypted keys, and central policy control. With LEAP, mutual authentication relies on the user's logon password, which is known by the client and the network.

As illustrated in Figure 9-2, a RADIUS server sends an authentication challenge to the client.

Figure 9-2. LEAP Authentication Process

Extensible Authentication Protocol-Transport Layer Security (EAP-TLS)
EAP-TLS is an IETF standard (RFC 2716) using the TLS protocol (RFC 2246). EAP-TLS uses digital certificates for both user and server authentication. EAP-TLS supports the three key elements of 802.1x/EAP mentioned above. The steps in EAP-TLS authentication are illustrated in Figure 9-3:

Figure 9-3. EAP-TLS Authentication Process

1. The RADIUS server sends its certificate to the client in phase one of the authentication sequence (server-side TLS).
2. The client validates the server certificate by verifying the issuer of the certificate (a certificate authority server entity) as well as the contents of the digital certificate.
3. When this is complete, the client sends its certificate back to the RADIUS server in phase two of the authentication sequence (client-side TLS).
4. The RADIUS server then validates the client's certificate by verifying the issuer of the certificate (certificate authority server entity) and the contents of the digital certificate.
5. When this is complete, an EAP-Success message is sent to the client, and both the client and the RADIUS server generate the dynamic WEP key.

Protected Extensible Authentication Protocol (PEAP)
PEAP, now in general use, originated as a joint proposal for an open standard by Cisco, Microsoft, and RSA Security. PEAP is covered in a 2004 IETF EAP Working Group Internet Draft. PEAP uses only server-side public key certificates to authenticate clients, creating an encrypted SSL/TLS tunnel between the client and the authentication server. This protects the subsequent exchange of authentication information. For user authentication, PEAP supports various EAP-encapsulated methods within the protected TLS tunnel. PEAP supports the three main elements of 802.1x/EAP, as mentioned previously. Figure 9-4 illustrates that phase one of the authentication sequence is the same as that for EAP-TLS (server-side TLS). At the end of phase one, an encrypted TLS tunnel is created between the user and the RADIUS server for transporting EAP authentication messages. In phase two, the RADIUS server authenticates the client through the encrypted TLS tunnel via another EAP type. In the Figure 9-4 example, a user can be authenticated using a one-time password (OTP) with the EAP-GTC (generic token card) subtype. In this case, the RADIUS server relays the user credentials (user ID and OTP) to an OTP server to validate the user login. When this exchange is complete, an EAP-Success message is sent to the client, and the client and the RADIUS server each generate the dynamic WEP key.



Figure 9-4. PEAP Authentication Process

WEP Enhancements
After the original 802.11 standard was ratified, it became clear that enhancements were needed to reduce the WEP vulnerabilities discussed in the "802.11 On Its Own is Inherently Insecure" section. IEEE 802.11i includes two encryption improvements in its draft standard for 802.11 security:
- Temporal Key Integrity Protocol (TKIP), a set of software enhancements to RC4-based WEP
- Advanced Encryption Standard (AES), a stronger alternative to RC4 encryption

Cisco introduced support for TKIP as a component of the Cisco Wireless Security Suite in December 2001. Because the standard for TKIP was not final at that time, the Cisco implementation was prestandard, sometimes referred to as Cisco TKIP.

In 2002, 802.11i finalized the specification for TKIP, and the Wi-Fi Alliance announced that it was making TKIP a component of Wi-Fi Protected Access (WPA). Support for WPA is a requirement for Wi-Fi certification. The second generation of WPA (WPA2) is based on the final IEEE 802.11i security standard. Both Cisco TKIP and WPA TKIP include per-packet keying (PPK) and message integrity check (MIC). WPA TKIP introduces a third element: extension of the initialization vector from 24 bits to 48 bits. This section discusses the Cisco implementation of TKIP.

Cisco TKIP: Per-Packet Keying
Since the most common attack on WEP relies on exploiting multiple weak initialization vectors in a stream of encrypted traffic using the same key, requiring a different key for each packet is a potential approach to reducing the threat. As illustrated in Figure 9-5, the initialization vector and the base WEP key can be hashed to produce a unique per-packet key (called a temporal key); this packet key and the initialization vector are then fed into the RC4 cipher to produce a keystream, which is XORed with the plaintext to produce the ciphertext. This is the basis for Cisco TKIP.


Figure 9-5. Per-Packet WEP Key Hashing

This method prevents use of the weak initialization vectors to derive the base WEP key, since the weak initialization vectors only allow the per-packet WEP key to be derived. In order to prevent attacks due to initialization-vector collisions, the base key needs to be changed before the initialization vectors repeat. Because initialization vectors on a busy network can repeat in a matter of hours, methods such as EAP authentication protocols should be used to perform the rekey operation. Similar to a unicast key, the WLAN broadcast key (used by access points and clients for Layer 2 broadcast and multicast communication) is susceptible to attacks which take advantage of initialization vector collisions. To reduce this vulnerability, Cisco APs support broadcast key rotation. The AP dynamically calculates the broadcast WEP key (from a random number), and a new broadcast WEP key is delivered to clients using EAPOL-Key messages. Thus, broadcast WEP key rotation can be enabled using those EAP protocols which support dynamic derivation of encryption keys, such as LEAP, EAP-TLS, and PEAP.
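On an Aironet IOS access point, the pre-standard Cisco TKIP features described here are enabled per radio (or per VLAN). The sketch below is illustrative only; the static key value and rotation interval are placeholders, and the MIC keyword refers to the message integrity check discussed in the next section.

interface Dot11Radio0
 ! Static 128-bit transmit key (placeholder hex value)
 encryption key 1 size 128bit 0 1234567890ABCDEF1234567890 transmit-key
 ! Cisco TKIP: per-packet key hashing (key-hash) plus message integrity check (mic)
 encryption mode wep mandatory mic key-hash
 ! Rotate the broadcast WEP key every 10 minutes (requires an EAP type that
 ! supports dynamic key derivation, such as LEAP, EAP-TLS, or PEAP)
 broadcast-key change 600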

Cisco TKIP: Message Integrity Check


Another WEP concern is vulnerability to tampering and replay attacks. The MIC protects WEP frames from tampering. The MIC adds several bytes to each packet to make the packets tamper-proof. The MIC uses the source MAC address, the destination MAC address, a seed value, and the payload (changes to any of these will affect the MIC value). The MIC is included in the WEP-encrypted payload. MIC uses a hashing algorithm to derive its value. This is an improvement over the cyclic redundancy check (CRC-32) checksum function performed by standards-based WEP. A weakness of CRC-32 is that the bit difference between two CRCs can be computed from the bit difference of the messages over which they are taken. In other words, flipping bit n in the message generates a deterministic set of bits in the CRC that must be flipped to produce the correct checksum for the modified message. Because bit flipping carries through CRC-32 after an RC4 decryption, this can allow an attacker to flip arbitrary bits in an encrypted message and correctly adjust the checksum to make the resulting message appear valid. MIC prevents such "bit-flip" attacks on encrypted packets, in which an intruder intercepts an encrypted message, alters it slightly, and retransmits it; you do not want the receiver to accept such a retransmitted message as legitimate. It is the MIC-added bytes in each packet that make the packets tamper-proof. The purpose of MIC is similar to that of the CRC in offering a way to detect whether a packet has been intercepted and changed between its source and destination.

EAP Authentication Summary
Organizations can choose to deploy IPsec or, alternatively, 802.1x/EAP with either TKIP or Cisco TKIP. One or the other should be considered, but generally not both. Basic WEP enhancements can be used anywhere WEP is implemented. 802.1x/EAP with TKIP should be used when an organization wants reasonable assurance of confidentiality and a transparent user security experience. IPsec should be used when an organization has extreme concern for protection of the transported data. However, an IPsec solution is more complex to deploy and manage than 802.1x/EAP with TKIP.

For the majority of networks, the security provided by 802.1x/EAP with TKIP is sufficient. Table 9-6 gives a detailed view of the pros and cons of IPsec and EAP authentication protocols in WLAN designs:

Table 9-6. Wireless Encryption Technology Comparison
(Values are listed in the order: Cisco LEAP with TKIP / EAP-TLS with TKIP / EAP-PEAP with TKIP / IPsec-based VPN.)
- Key length (in bits): 128 / 128 / 128 / 168 (3DES) or 128, 192, 256 (AES)
- Encryption algorithm: RC4 / RC4 / RC4 / 3DES or AES
- Packet integrity: CRC-32 and MIC / CRC-32 and MIC / CRC-32 and MIC / MD5-HMAC or SHA-HMAC
- Device authentication: No / Certificate / No / Pre-shared secret or certificates
- User authentication: Username and password / Certificate / Username and password or OTP / Username and password or OTP
- Certificate requirements: None / RADIUS server and WLAN client / RADIUS server / Optional
- User differentiation: Group / Group / Group / User
- Single sign-on support: Yes / Yes / No / No
- ACL requirements: Optional / Optional / Optional / Required
- Additional hardware: No / Certificate server / Certificate server / IPsec concentrator
- Per-user keying: Yes / Yes / Yes / Yes
- Protocol support: Any / Any / Any / IP unicast only
- Client OS support: Wide range / Wide range / Wide range / Wide range
- Open standard: No / Yes / IETF draft RFC / Yes

RF Troubleshooting
Transmission range in a system is determined by link margin calculations. The overall system link margin includes transmission power output, antenna gain, receiver sensitivity, and path loss. Path loss is due to cable and antenna attenuation, air content, and obstacles preventing clear lines of sight (occlusion). Obtaining long ranges with wireless transceiver modules requires a combination of output power, antenna gain, and receiver sensitivity. Each of these parameters can have dramatic effects on the link margin of a wireless link path.

Table 9-7. RF Troubleshooting
Connectivity tools:
- Ping. Tests device reachability. If successful, statistics are displayed on the packets transmitted and received.
- Traceroute. Detects routing errors between the WLSE and a device. If successful, the routes to the device are displayed.
- NSLookup. Looks up hostname or IP address information via the name server. If successful, displays the name server name and IP address and the device name and IP address.
- TCP Port Scan. Finds the active ports on a device. Displays the active ports.
- SNMP Reachable. Tries to reach a device by using SNMP; the device's credentials must be in the WLSE database. To check credentials, select Administration > Devices > Discover > Device Credentials > SNMP Communities. If the device is reachable, its sysObjID is displayed. If no sysObjID is returned, the query may be timing out because the device is busy or is remotely located, or the SNMP agent in the device may not be functioning.

The RF environment is dynamic, and can deteriorate over time. Addition or removal of structures, other RF signals in the same band, or separate sources of electromagnetic interference (EMI) such as radar or welding equipment: all these affect the RF environment. RF signals are subject to diffraction ("bending"), refraction, reflection, multipath fading, and absorption. Undesirable signals at or near the frequency of a wireless network can affect its performance. Interference on a frequency acts in the same way as overlapping signals, reducing 802.11 signals and producing outages or intermittent low throughput.

Reference: http://www.cisco.com/en/US/netsol/ns473/networking_solutions_white_paper0900aecd801016cf.shtml

VoWLAN VLANs
Virtual LANs (VLANs) are important for IP Telephony networks, where the standard recommendation is to separate voice and data traffic into different Layer 2 domains. VLANs allow networks to be segmented into one or more broadcast domains, in order to:
- Segment traffic into distinct broadcast domains or IP subnets
- Create separate security domains for various security models (Open, WEP, EAP-TLS, LEAP, and PEAP).

Voice and Data VLANs
You should configure separate VLANs for data and voice traffic: a native VLAN for data and a voice or auxiliary VLAN for voice traffic. Deploying a separate voice VLAN allows the network to use Layer 2 marking and provides priority queuing at the Layer 2 access switch port. This helps to ensure that the appropriate QoS is maintained for various classes of traffic, and also helps resolve issues such as IP addressing, security, and network dimensioning.

Cisco Support for VLANs
- Cisco Aironet 350, 1100, and 1200 Series access points support up to 16 VLANs.
- Cisco APs can be connected via 802.1Q trunks to Cisco Catalyst switches.
- In hybrid mode, the native VLAN's Port VLAN ID (PVID) is not tagged.
- Each VLAN is mapped to a unique Service Set Identifier (SSID) on an access point.
- Users (or IP Phones) can be assigned to VLANs either statically using the SSID or dynamically using RADIUS authentication.
- Each VLAN can employ a different security mechanism, although only one of these can be unencrypted or open.
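The following is a hedged sketch of how separate voice and data VLANs might be mapped to SSIDs on an Aironet IOS access point and trunked to the switch. The VLAN IDs, SSID names, and authentication method list are placeholders, and the data VLAN is carried as the native VLAN (its subinterfaces use bridge-group 1 by convention).

dot11 ssid VOICE
   vlan 110
   authentication network-eap eap_methods
!
dot11 ssid DATA
   vlan 120
   authentication open eap eap_methods
!
interface Dot11Radio0
 encryption vlan 110 mode ciphers tkip
 encryption vlan 120 mode ciphers tkip
 ssid VOICE
 ssid DATA
!
! Voice VLAN subinterfaces (tagged on the 802.1Q trunk)
interface Dot11Radio0.110
 encapsulation dot1Q 110
 bridge-group 110
interface FastEthernet0.110
 encapsulation dot1Q 110
 bridge-group 110
!
! Data VLAN carried as the native (untagged) VLAN
interface Dot11Radio0.120
 encapsulation dot1Q 120 native
 bridge-group 1
interface FastEthernet0.120
 encapsulation dot1Q 120 native
 bridge-group 1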

IP Telephony Network Sizing
A network which supports IP Telephony should be designed with enough bandwidth and resources to support mission-critical voice traffic. The usual IP telephony design guidelines for sizing components such as PSTN gateway ports, transcoders, and WAN bandwidth need to be followed. The following 802.11b issues also need to be considered when sizing a wireless IP telephony network:

Number of 802.11b Devices per AP
Cisco recommends that you keep the number of 802.11b devices per AP below 15 to 25.

Number of 802.11b Phones per AP
It helps to cover the basics of network capacity before covering network planning:
- The maximum number of phones per AP varies according to the calling patterns of individual users (using Erlang ratios).
- Cisco recommends no more than seven concurrent calls using G.711 or eight concurrent calls using G.729. Beyond that number of calls, when excessive background data is present, the voice quality becomes unacceptable for all calls.
- A calling ratio is used to determine the number of active and non-active calls. This ratio is often determined using Erlang calculators.
- Using these factors and normal business-class Erlang ratios (between 3:1 and 5:1), the Cisco recommendation is that you deploy no more than 450 to 600 Cisco 7920 phones per Layer 2 subnet or VLAN.
- These recommendations are based on packetization using 20 ms sample rates with VAD disabled. This rate generates 50 packets per second (pps) in each direction. Using a larger sample size (such as 40 ms) might allow a larger number of simultaneous calls, but will also increase the end-to-end delay of the VoIP calls.

Beyond the bandwidth needed for an 802.11b VoIP call, you also need to consider overall radio contention for RF channels. A general rule is to maintain the number of 802.11b endpoints per access point below 20 to 25. The more endpoints per AP, the more the effective bandwidth is reduced and the greater the potential transmission delays.

Multicast and Wireless Voice
Although 802.11b WLANs can handle multicast IP packets, there are technical limitations that make multicast a poor choice for voice networks and real-time applications, like multicast music on hold (MoH). There are a number of multicast traffic issues on wireless networks:
- The unacknowledged nature of WLAN multicast transport might not seem relevant for voice traffic using UDP datagrams. However, an Ethernet connection has bit-error rates (BERs) of about 10^-10, while WLANs typically have BERs of about 10^-5. For unicast traffic, WLANs resolve this issue by using Link Layer acknowledgements to ensure reliable delivery; WLAN multicast traffic does not have this kind of reliable delivery.
- Multicast packet transmission over a WLAN occurs at the lowest rate of any client device associated with an AP, whether that device wants multicast packets or not. Thus, an AP supporting multiple bit rates will send multicast traffic at the lowest-common-denominator bit rate. This behavior degrades overall WLAN performance.
- For devices operating in power-save mode (for example, when a Cisco 7920 phone enters power-save mode to extend battery life), the access point buffers multicast packets and does not send them until the devices are no longer in power-save mode. This practice ensures that all clients receive the multicast traffic.

The Cisco 7920 Wireless IP Phone does not support multicast traffic, for some of the above reasons. Cisco recommends that unicast-only traffic be employed in wireless telephony environments. Current Cisco CallManager software does not have the ability to differentiate automatically between endpoint devices that are enabled for multicast and endpoint devices capable of unicast only. Although it would be useful to send traffic such as music on hold to all voice endpoint devices via multicast, multicast is not possible for a Cisco 7920 phone, since only unicast MoH is supported. Cisco CallManager can be provided with this information by configuring separate media resource groups (MRG) and media resource group lists (MRGL) for multicast and unicast resources.

Server and Switch Recommendations
Some basic recommendations can be made about server and switch types in the wireless network infrastructure for Cisco 7920 Wireless IP Phones. These are covered in this section.

Server Recommendations
There must be at least one Cisco CallManager server in a voice-enabled network:
- This server can be located on-site or be connected remotely over a WAN link.
- However, servers located over a WAN link can cause delays in phone registration, roaming, and call set-up.
- If problems arise, test the scenario with wired phones connected to the same Cisco CallManager to test the WAN speeds. The wired phone and the AP must be on the same VLAN and switch port in order to check the entire path of the packet, as if it came from the AP to the Cisco CallManager server.
- It may be necessary to decrease delay times on the WAN link or move the remote CallManager server on-site.

A central authentication, authorization, and accounting (AAA) server can be used to perform LEAP or MAC authentication:
- This server can also be placed on-site or over a WAN link.
- A WAN link can add considerable delays in authentication, so the general recommendation is that you deploy a local AAA server to expedite the authentication process.
- AAA functions can also be performed by a dedicated Cisco IOS AP running local authentication. However, an AP running in this mode can only support 50 users and should be considered only for small offices, retail stores, or specialty locations.
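For the small-office case mentioned above, an Aironet IOS access point can act as its own local authenticator. A minimal sketch follows, assuming (hypothetically) that the AP's BVI address is 10.10.20.1; all usernames, passwords, and keys are placeholders.

aaa new-model
aaa group server radius rad_local
 server 10.10.20.1 auth-port 1812 acct-port 1813
aaa authentication login eap_methods group rad_local
radius-server host 10.10.20.1 auth-port 1812 acct-port 1813 key local-secret
!
! Local RADIUS server running on the AP itself (roughly a 50-user limit)
radius-server local
 nas 10.10.20.1 key 0 local-secret
 user 7920-user1 password 0 phonepass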

If two APs terminate on the same network appliance, the strong recommendation is that you not use a hub. A hub will add delays on the Ethernet interface as well as on the RF interface. Instead, use a switch, which has multiple collision domains. Be careful that you do not use hubs on devices connected to an AP, since the hub will send unnecessary data to the AP.

Switch Recommendations
You can create a switch port template when configuring any switch port for connection to an AP:
- Your template should include all baseline security and resiliency features of the standard desktop template.
- When attaching an AP to a Cisco Catalyst 3550 switch, you can improve AP performance by using Multilayer Switching (MLS) QoS commands to limit the port rate and to map Class of Service (CoS) to Differentiated Services Code Point (DSCP) settings.

While Ethernet switch ports can transmit and receive at 100 Mbps, APs have a lower throughput rate, since 802.11 standards allow for a maximum data rate of 54 Mbps. Wireless LANs are a shared medium and, with contention for bandwidth, actual throughput is substantially lower. This throughput mismatch means that the AP will drop packets when presented with a burst of traffic, which creates additional processor burden for the AP and degrades its performance. You can use Catalyst 3550 policing and rate-limiting capabilities to eliminate excessive AP packet drops. An AP switch port template will limit port rates to a practical throughput of 7 Mbps for 802.11b and also guarantee 1 Mbps for high-priority voice and control traffic.

The template in Table 9-8 applies to the Cisco 7920 Wireless IP Phones.

Table 9-8. Switch Port Throughput for Different Radio Types
- 802.11a: 42 Mbps
- 802.11b: 7 Mbps
- 802.11g: 36 Mbps
- 802.11a + 802.11b: 49 Mbps
- 802.11a + 802.11g: 78 Mbps

This template will support a secure and resilient network connection with these features:
- Return port configurations to "default": prevents configuration conflicts by clearing preexisting port configurations.
- Disable Dynamic Trunking Protocol (DTP): disables dynamic trunking, which is not needed for AP connection.
- Disable Port Aggregation Protocol (PAgP): PAgP is enabled by default but is not needed for user-facing ports.
- Enable PortFast: allows a switch to rapidly resume forwarding traffic when a spanning tree link goes down.

- Configure a wireless VLAN: creates a unique wireless VLAN that isolates wireless traffic from other data, voice, and management VLANs, thereby separating traffic and ensuring greater control of traffic.
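A hedged Catalyst 3550 sketch of such a template for an 802.11b access point attached by an 802.1Q trunk is shown below. The interface, VLAN numbers, and ACL are placeholders; the 7 Mbps policing rate follows Table 9-8, and the guaranteed 1 Mbps for voice and control traffic would be layered on top with additional queuing policy. Verify exact policing ranges against the switch software in use.

mls qos
! Map CoS to DSCP (voice CoS 5 mapped to DSCP EF/46)
mls qos map cos-dscp 0 8 16 26 32 46 48 56
!
access-list 111 permit ip any any
class-map match-all AP-ALL-TRAFFIC
 match access-group 111
policy-map AP-PORT-LIMIT
 class AP-ALL-TRAFFIC
  police 7000000 8000 exceed-action drop
!
interface FastEthernet0/24
 description 802.1Q trunk to 802.11b access point
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 120
 switchport trunk allowed vlan 110,120
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 mls qos trust cos
 service-policy input AP-PORT-LIMIT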

Enhanced Distributed Channel Access (EDCA)
EDCA, also known as prioritized DCF (distributed coordination function), improves on DCF by giving higher-priority traffic an advantage during contention. EDCA was originally specified in the draft IEEE 802.11e specification covering enhancements to 802.11 with QoS features. In EDCA, instead of waiting before transmitting after the back-off period expires, higher-priority traffic will attempt to transmit only after a PIFS (point coordination function interframe space) period and associated back-off time. Using the EDCA scheme, nodes can be designated as supporting high-priority traffic, such as VoIP. These nodes will have a higher probability of gaining channel access than nodes supporting lower-priority traffic, such as PC downloads.

Cisco 7920 Wireless IP Phone
You should consider these guidelines when setting up environments for Cisco 7920 Wireless IP Phones:
- Deploy at least two APs on non-overlapping channels, with a received signal strength indicator (RSSI) greater than 35 at all times in the site survey utility.
- Deploy no more than one AP per overlapping channel set, with an RSSI greater than 35. Multiple APs that show on a phone with an RSSI of less than 35 (on overlapping APs) can still cause interference and should be avoided to the extent possible. This interference or noise will degrade voice quality.
- Noise is additive. Setting up three additional APs on the same channel, all with low RSSI, can be less effective than a single additional AP with a higher RSSI.

Figure 9-6 shows a typical deployment, with a 15% to 20% overlap between each AP's cell and its neighbors. This configuration meets the above requirements and supports redundancy throughout the cell.


(Figure 9-6 shows adjacent cells on non-overlapping channels. The radius of each cell should extend to the -67 dBm boundary, and same-channel cells should be separated by about 19 dB, that is, to roughly the -86 dBm boundary, where the Cisco 7920 reports an RSSI of about 20.)

Figure 9-6. Cell Overlap Guidelines

Two of the APs (including the one with which the wireless phone is associated) must meet the following two requirements:
- An RSSI greater than 35 (equivalent to a receiver threshold of -67 decibels per milliwatt)
- A channel utilization QoS Basic Service Set (QBSS) load that is less than 45.

Meeting these requirements provides for smoother roaming and a backup AP if one of the APs is busy, fails, or otherwise becomes unavailable. The QBSS load represents the percentage of time that the channel is in use by the AP:
- The channel load can be much higher on an overall basis than the QBSS load, since several APs might share an RF channel, with background or environmental noise adding to the load.
- The Cisco 7920 Wireless IP Phone uses the QBSS load in its roaming algorithm.
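Two small Aironet IOS knobs relate to these requirements; this is a hedged sketch only, and the 20 mW value is merely an example that should be chosen to match the phones' transmit power as discussed later in this section.

! Advertise the QBSS channel-utilization element in beacons for 7920 phones
dot11 phone
!
interface Dot11Radio0
 ! Example only: align AP transmit power with the phones' transmit power
 power local 20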

The measured QBSS load will vary, depending on the time of day when you perform the site survey. For example, at night a network is largely idle, and its QBSS load would typically be low. Site surveys should therefore be performed during peak hours. Adding APs will reduce the QBSS load. More guidelines:
- Maintain a packet error rate (PER) no higher than 1% (or a success rate of 99%).
- Maintain at least 11 Mbps of available link speed at all times for data clients as well as voice clients.
- Maintain a minimum signal-to-noise ratio (SNR) of 25 dB (see Figure 9-7).
- Maintain an AP coverage overlap of at least 15% to 20%, as in Figure 9-6.
(Figure 9-7 plots received power in dBm over time, showing the RSSI (signal strength), the noise level, and the signal-to-noise ratio as the gap between them.)

Figure 9-7. Signal-to-Noise Ratio

- Try to use the same transmit power on the AP and on the phones. If the transmit power of the APs varies, set the transmit power of the phones to the highest transmit power of the APs.
- All AP antennas should use diversity to eliminate multipath signal distortion. This is a case where two antennas are better than one, and it is common for APs to select one of the antennas as the active antenna, ignoring the antenna with the lower signal-to-noise ratio (SNR).
- In an optimal setting, APs can handle seven G.711 or eight G.729 concurrent phone calls. If more concurrent phone calls are needed in a high-usage area, plan to have load-balancing APs available during the site survey. Overlapped basic service sets (BSSs, or APs sharing the same RF channel) reduce the number of concurrent phone calls per AP.

Wireless Domain Services (WDS)
WDS provides a new core capability for access points in Cisco IOS Software:

- WDS enables other features such as Fast Secure Roaming, Wireless LAN Solution Engine (WLSE) interaction, and Radio Management.
- Relationships between access points that participate in WDS must be established before other WDS-based features can work.

A main purpose of WDS is to eliminate the need for the authentication server to validate user credentials every time, which reduces the time required for client authentications. WDS supports a fast client rekey as part of a central authentication entity:
- Access points and clients in an 802.1X Layer 2 domain authenticate via the WDS to a RADIUS server. The WDS performs the role of 802.1X authenticator, with the RADIUS server acting as the authentication server.
- WDS does not require RADIUS to perform a full reauthentication each time the client roams.
- Because all clients and access points authenticate via the WDS, the WDS is able to establish shared keys between itself and every other entity in the Layer 2 domain. These shared keys enable CCKM fast secure roaming.

Figure 9-8 illustrates access points and clients authenticating to WDS. Client authentication is defined by one or more client server groups on the WDS access points:
- Whenever a client makes an attempt to associate with an infrastructure access point, the infrastructure AP passes the user's credentials to the WDS AP for evaluation.
- If this is the first time that the WDS access point has seen that user's credentials, it requests the authentication server to validate the user's credentials.
- The WDS access point then caches the user's credentials, so it does not have to return to the authentication server when that user attempts authentication again (for example, reauthentication for rekeying, for roaming, or when the user turns on the client device).
(Figure 9-8 shows the client as the 802.1X supplicant, the infrastructure access point as an 802.1X authenticator/supplicant, the WDS access point as the 802.1X authenticator, and the RADIUS server as the authentication server. Clients authenticate via the WDS, and access points also authenticate via the WDS.)

Figure 9-8. Access Points and Clients Authenticating to WDS

The WDS function is written in Cisco IOS software and runs on Cisco Aironet access points. Cisco planned to make WDS available in 2005 for Cisco router and switch infrastructure products. At least one WDS is needed for each Layer 2 domain. The CCKM architecture supports WDS redundancy via a MAC-layer multicast primary WDS election process:
- If redundant WDS are configured, the WDS with the highest priority becomes the primary WDS. If equal or no priorities are configured, a primary WDS is dynamically determined.
- Redundancy also provides a cold backup. If the primary WDS fails, all authenticated clients continue to operate until a roaming event occurs. When a roaming event occurs, the client completes a full initial authentication to the RADIUS server, via the backup WDS.
- All access points in a Layer 2 domain dynamically learn the address of the active WDS via a Layer 2 multicast. The address of the WDS is not configured in any access point.

WDS supports a single Layer 2 domain with up to 30 access points. The 30 access point limit is not a physical limit, but is the maximum recommended by Cisco, and is also the maximum number supported by the Cisco Technical Assistance Center (TAC).

Understanding WDS
When you configure an access point to provide WDS, other APs (or your AP or bridge, if it is configured as an AP) on your wireless LAN use the WDS AP to:
- Provide fast, secure roaming for client devices
- Participate in radio management.

Fast, secure roaming allows rapid reauthentication when a client device roams between APs, avoiding delays in voice and other time-sensitive applications. If your APs are configured to participate in radio management:
- APs forward information about the radio environment (for example, suspected rogue access points, and client associations and disassociations) to the WDS AP.
- The WDS AP aggregates this information and forwards it to the Wireless LAN Solution Engine (WLSE) device on your network.

Role of the WDS Access Point
The WDS AP performs these tasks on your wireless LAN:

- Advertises its WDS capability and participates in selecting the optimal WDS AP for your WLAN. When you configure your WLAN for WDS, you set up one AP as the primary WDS AP candidate and one (or more) additional APs as backup WDS access point candidates.
- Authenticates all subnet APs and sets up a secure communication channel with each subnet AP.
- Collects radio data from APs in the subnet, aggregates the data, and forwards it to your network's WLSE device.
- Registers all client devices in the subnet, establishes session keys for them, and caches their security credentials. When a client roams to another AP, the WDS AP forwards the client's security credentials to the new access point.

Role of Access Points Using the WDS Access Point
The APs on a WLAN interact with the WDS AP to do the following:
- Discover and track the current WDS AP and relay WDS advertisements to the WLAN
- Authenticate with the WDS AP and establish a secure communication channel to the WDS AP
- Register associated client devices with the WDS AP
- Report radio data to the WDS AP
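As an illustration of how these two roles might be configured on Aironet IOS access points, a hedged sketch follows. The priority, usernames, passwords, and method-list names are placeholders, and the infrastructure AP credentials must also exist on the authentication server.

! On the WDS candidate access point (highest priority wins the election)
aaa new-model
aaa authentication login method_wds group rad_eap
wlccp wds priority 200 interface BVI1
wlccp authentication-server infrastructure method_wds
wlccp authentication-server client eap method_wds
!
! On every infrastructure access point that registers with the WDS
wlccp ap username wds-ap-user password 0 wds-ap-pass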

Understanding Fast Secure Roaming
APs in many WLANs serve mobile client devices that may roam from AP to AP throughout the installation. Some client device applications need fast reassociation when they roam to a different AP. Voice applications require transparent roaming to prevent delays and conversation gaps. LEAP-enabled client devices mutually authenticate with a new AP by performing a complete LEAP authentication, including communication with the main RADIUS server, as in Figure 9-9 below.


(Figure 9-9 shows the full LEAP exchange between the client device, the access point or bridge, and the RADIUS server: 1. authentication request; 2. identity request; 3. username; 4. authentication challenge; 5. authentication response; 6. authentication success; 7. authentication challenge; 8. authentication response; 9. successful authentication. Each message is relayed between the client and the RADIUS server by the access point or bridge.)

Figure 9-9. Client Authentication Using a RADIUS Server

When you configure a wireless LAN for fast, secure roaming, however, LEAP-enabled client devices roam from one AP to another without involving the main server. Using Cisco Centralized Key Management (CCKM), an AP configured for Wireless Domain Services (WDS) takes the place of the RADIUS server and authenticates the client without any perceptible delay in voice or other time-sensitive applications. Figure 9-10 shows client authentication using CCKM.
(Figure 9-10 shows the roaming client device sending a reassociation request to the new access point. The access point exchanges a pre-registration request and pre-registration reply with the WDS device (a router, switch, or AP) and then returns the reassociation response to the client; the authentication server is not involved.)

Figure 9-10. Client Reassociation Using CCKM and a WDS Access Point

When a CCKM-capable client roams from one AP to another, the client sends a reassociation request to the new AP, and the new AP relays the request to the WDS AP. The WDS AP maintains a cache of credentials for CCKM-capable client devices on your WLAN. The WDS AP forwards the client's credentials to the new AP, and the new AP sends the reassociation response to the client. Only two packets pass between the client and the new AP, significantly shortening the reassociation time. The client device uses the reassociation response from the new AP to generate the unicast key.

Understanding Radio Management
WDS access points can participate in radio management:
- Participating APs scan the radio environment and send reports to the WDS AP on such radio information as suspect rogue APs, associated clients, client signal strengths, and the radio signals from other APs.
- The WDS AP forwards the aggregated radio data to the WLSE device on your network.
- Participating APs also assist with self-healing the WLAN, automatically adjusting settings to provide coverage in case a nearby AP fails.

Guidelines for WDS
These guidelines apply to WDS APs or bridges:
- You cannot configure your AP or bridge as a WDS AP. However, when you configure your access point or bridge as an access point, you can also configure it to use the WDS AP.
- Repeater APs do not support WDS.

Requirements for WDS and Fast Secure Roaming
The WDS WLAN must meet these requirements for AP or bridge operation:
- At least one AP available to be configured as the WDS AP
- An authentication server (or an AP configured as a local authenticator)
- Cisco Aironet devices running Cisco client firmware version 5.20.17 or later
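With WDS in place and the requirements above met, CCKM fast secure roaming is then enabled per SSID. A minimal, hedged sketch on an Aironet IOS access point follows; the SSID, VLAN, cipher choice, and method-list names are placeholders.

dot11 ssid VOICE
   vlan 110
   authentication network-eap eap_methods
   ! CCKM key management lets the WDS hand off credentials when the client roams
   authentication key-management cckm
!
interface Dot11Radio0
 encryption vlan 110 mode ciphers tkip
 ssid VOICE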

Cisco Unified Wireless Network
The Cisco Unified Wireless Network is a unified wired and wireless solution to address the WLAN security, deployment, management, and control issues facing enterprises:
- It combines wired and wireless networking to deliver scalable, manageable, and secure WLANs.
- It delivers the security, scalability, reliability, ease of deployment, and WLAN management that organizations have come to expect from their wired LANs.
- It includes RF capabilities that enable real-time access to business applications and provides enterprise-class secure connectivity.

The Cisco Unified Wireless Network is designed to reduce overall operational expenses by simplifying network deployment, operations, and management. With this solution, several hundred or even thousands of central or remotely located wireless access points can be managed from a centralized management console. The flexibility of the Cisco Unified Wireless Network allows network managers to design and deploy networks to meet their specific needs, whether these involve integrated networks or simple overlay networks. Cisco Unified Wireless Network management features include:
- Support for high-capacity, versatile deployments in campus, branch offices, remote sites, and outdoor locations with challenging environments, over a wide range of operating temperatures
- Simplified WLAN management and operations support that removes the complexity of managing the RF environment
- Real-time RF scanning, monitoring, and control of the WLAN infrastructure, delivering a self-configuring, self-optimizing, and self-healing wireless network
- Support for deployment of several, hundreds, or thousands of central or remotely located access points
- Ability to simultaneously track thousands of devices from directly within the WLAN infrastructure using Cisco's RF fingerprinting technology
- Wireless without boundaries, both indoors and outdoors; wireless devices need uninterrupted network access when roaming across access points (within and between subnets)
- Scalability to meet current and future business requirements
- Advanced WLAN planning, deployment, and management tools
- WLAN resiliency, redundancy, and fault tolerance
- Troubleshooting and diagnostic tools for proactive performance and fault monitoring, including graphical heat maps for analysis
- Centralized policy engines that allow system-level security and QoS policies to be easily configured and enforced

Building Enterprise-Class Wireless LANs
The Cisco Unified Wireless Network can be deployed in corporate offices, hospitals, retail stores, manufacturing plants, warehouse environments, educational institutions, financial institutions, local and national government organizations, and other locations worldwide. It supports Wi-Fi enabled business applications, including mobile healthcare, inventory management, retail point-of-sale, video surveillance, real-time data access, asset tracking, and network visibility. The Cisco Unified Wireless Network enables mobile access in venues such as public hotspots, hotels, convention centers, and airports for mobile users and traveling executives. It delivers real-time access to business applications, while providing secure mobility and guest access for campus and branch offices.


The Cisco Unified Wireless Network consists of five elements that work together to deliver a unified wireless system:
- Client devices
- A mobility platform
- Network unification
- Enterprise-class network management
- Unified advanced services

Beginning with a base of client devices, each element can incorporate additional capabilities as your network needs evolve, connecting with other elements to create an overall WLAN solution (Figure 9-11).
(Figure 9-11 stacks the five elements: unified advanced services (unified Wi-Fi VoIP, advanced threat detection, identity networking, location-based security, asset tracking, and guest access); world-class network management (the same level of security, scalability, reliability, ease of deployment, and management for wireless LANs as for wired LANs); network unification (secure, innovative WLAN controllers, with future integration into selected switching and routing platforms); the mobility platform (ubiquitous network access in indoor and outdoor environments, plug and play, with a proven platform, a large installed base, and 61% market share); and client devices (90 percent of Wi-Fi silicon is Cisco Compatible certified, with the proven Aironet platform and out-of-the-box wireless security).)

Figure 9-11. Cisco Unified Wireless Network

Cisco Unified Wireless Network Products
Cisco offers a range of WLAN products to support the five elements of the Cisco Unified Wireless Network (Figure 9-12).

(Figure 9-12 maps products to the five elements: unified advanced services, built in with leading-edge applications rather than added as an afterthought, via the Cisco Wireless Location Appliance, Cisco WCS, the Cisco Self-Defending Network, NAC, and Wi-Fi phones; world-class network management through Cisco WCS; network unification through Cisco 4400 and 2000 Series wireless LAN controllers, with future router and switch integration; a mobility platform of Cisco Aironet access points (1300, 1240AG, 1230AG, 1130AG, and 1000) and bridges (1400 and 1300) dynamically configured and managed through LWAPP; and client devices, Cisco Compatible and Cisco Aironet clients that work securely out of the box.)

Figure 9-12. Cisco Unified Wireless Network Product Portfolio

Cisco Unified Wireless Network Deployment
The five elements of the Cisco Unified Wireless Network are fundamental to building secure enterprise-class WLANs. The Cisco solution is services-oriented. Organizations can begin with client devices and a mobility platform and then add additional elements to meet their growing wireless networking requirements. In addition to full 802.11a/b/g wireless support, the Cisco Unified Wireless Network includes immediate support for advanced services like voice, intrusion prevention system (IPS), Network Admission Control (NAC), the Cisco Self-Defending Network, dynamic RF management, high-resolution location services, and guest access. These advanced services are built in and ready for immediate implementation, or they can be deployed over time in a phased implementation. Organizations have the flexibility to decide when and how to implement these advanced services.

Client Devices
More than 95 percent of today's notebook computers are Wi-Fi enabled. Many specialized Wi-Fi client devices are also available for specialized wireless applications. To address enterprise WLAN needs, client devices need to interoperate securely with WLAN infrastructures. Cisco Compatible client devices and Cisco Aironet client devices, as you might expect, work with the Cisco Unified Wireless Network:

- By providing third-party tested compatibility, the Cisco Compatible Extensions (CCX) program helps ensure that devices from different suppliers are interoperable with a Cisco WLAN infrastructure.
- More than 90 percent of today's notebooks are Cisco Compatible certified.
- More than 300 other wireless devices have been certified as Cisco Compatible. In summary, most other client devices on the market will support Cisco's advanced features.

Cisco has in the past introduced a number of pre-standard features through the Cisco Compatible Extensions program, in order to meet customer enterprise application requirements. Cisco supports and enables software upgrades for Cisco Compatible mobile devices from third-party partners to help ensure a migration path:
- To future industry standards
- To Cisco's own future WLAN infrastructure features.

Mobility Platform
It is important to have secure 802.11a/b/g connectivity for WLAN clients via APs that adhere to standards, while also delivering advanced air and RF deployment, management, and performance features. Organizations also need reliable WLAN solutions for wide-area networking for outdoor areas, campuses, or building-to-building connectivity.
- Cisco Aironet APs support a wide array of deployment options, such as single or dual radios, integrated or external antennas, and rugged metal enclosures.
- Aironet access points deliver the high capacity, security, and enterprise-class features demanded by WLAN users.
- Cisco Aironet access points come standard with plug-and-play wireless features for "zero-touch" configuration.

Some of the capabilities designed into Cisco Aironet access points include:
- Cisco Aironet 1000 or 1130AG Series APs are designed for offices and similar indoor environments. These APs have integrated antennas with omnidirectional coverage patterns.
- Cisco Aironet 1240AG Series APs are designed for more challenging RF environments such as factories and warehouses, or for installation above suspended ceilings, which tend to require flexible external antennas and rugged metal cases.

- Most Cisco Aironet APs either support the Lightweight Access Point Protocol (LWAPP) or are able to operate autonomously (without WLAN controllers).
- Cisco Aironet lightweight APs are dynamically configured and managed through LWAPP. All Cisco Aironet lightweight APs connect to Cisco WLAN controllers. This assures system administrators that they can deploy a variety of different APs within their networks and still use the full range of Cisco Unified Wireless Network capabilities.
- Autonomously operating Cisco Aironet devices (operating without wireless LAN controllers) can be deployed as part of the Cisco Unified Wireless Network. These autonomous devices can be managed using the CiscoWorks Wireless LAN Solution Engine (WLSE) or CiscoWorks WLSE Express.

WLSE is a centralized, systems-level application for managing and controlling Cisco Aironet APs and bridges operating autonomously. To make full use of the Cisco Unified Wireless Network, you will need to make sure that Cisco Aironet autonomous APs have been upgraded to run LWAPP and operate with a wireless LAN controller. Cisco Aironet wireless bridges connect multiple LANs in a metropolitan area or public access environment. These bridges provide system administrators with a solution that meets the security requirements of wide-area networking professionals. They support both point-to-point and point-to-multipoint configurations with excellent range and support for data rates up to 54 Mbps:
- You can configure Cisco Aironet 1300 Series outdoor APs/bridges as autonomous APs, bridges, or workgroup bridges. These devices have a ruggedized enclosure and support wireless connectivity between multiple fixed or mobile networks and clients.
- Cisco Aironet 1400 Series wireless bridges offer autonomous, high-speed, high-performance outdoor bridging for line-of-sight applications. They have a ruggedized enclosure designed for harsh outdoor environments and a wide range of operating temperatures.

The Cisco Unified Wireless Network's advanced WLAN security protects the network from security breaches and unsecured WLAN connections that can put the entire network at risk:
- Customizable attack signature files can be used to rapidly detect and contain common RF-related attacks, such as NetStumbler, FakeAP, and Void11.
- Advanced RF fingerprinting technology supports high-accuracy device tracking.
- The Cisco Self-Defending Network and NAC limit damage from emerging security threats such as viruses, worms, and spyware.
- Wired and wireless rogue AP and client containment maintain network security and prevent unauthorized users from accessing enterprise resources.
- Cisco Compatible client devices extend air/RF rogue detection capabilities.

Cisco Unified Wireless Network Security

A fundamental best practice in wireless LAN security is the ability to secure and control the RF environment. Cisco Unified Wireless Network security features include:
- Controlling access to the WLAN using different authentication and encryption policies, including 802.11i, Wi-Fi Protected Access (WPA), WPA2, and mobile VPNs
- Support for different types of authentication, authorization, and accounting (AAA) servers
- WLAN intrusion prevention that identifies and detects suspect rogue APs, unassociated client devices, and ad-hoc networks, and that provides RF attack signatures to protect against common wireless threats
- Secure management of the wireless infrastructure and RF-layer security boundaries

Cisco simplifies WLAN management by providing visibility and control of the RF environment. This increases network scalability, simplifies troubleshooting, and improves productivity for system administrators, resulting in lower operational expenditures.

Cisco Network Admission Control (NAC)
Cisco NAC is the first step of the multiphased Cisco Self-Defending Network initiative to identify, prevent, and adapt to security threats. Using Cisco NAC, organizations can provide network access to endpoint devices, such as PCs, PDAs, and servers, that fully comply with established security policy. The Cisco Self-Defending Network is a strategy for integrated network security. It is designed to help organizations identify, prevent, and adapt to known and unknown security threats. Cisco WLANs integrate with the Cisco Self-Defending Network to provide end-to-end network security and identity-based networking. Damage caused by worms, viruses, and spyware has demonstrated the inadequacy of existing safeguards. Cisco NAC is intended as a comprehensive solution that allows organizations to enforce host patch policies and to relegate noncompliant and potentially vulnerable systems to quarantined environments with limited or no network access. By combining information about endpoint security status with network admission enforcement, Cisco NAC enables organizations to dramatically improve the security of their computing infrastructures. Cisco NAC allows noncompliant devices to be denied access, placed in a quarantined area, or given restricted access to computing resources. The access decision can rely on information such as the endpoint's anti-virus state and operating system patch level.

Network Unification
Integration of wired and wireless networks is essential for unified network control, scalability, security, and reliability. System-wide wireless LAN functions, such as security policies, intrusion prevention, RF management, QoS, and mobility, must be available to support enterprise-class wireless applications. Smooth integration into existing enterprise networks must be readily supported. Cisco WLAN controllers manage system-wide wireless LAN functions, such as integrated IPS, real-time RF management, zero-touch deployment, and N+1 redundancy. These controllers work with lightweight access points and a management device to deliver enhanced performance and advanced management capabilities. Cisco WLAN controllers are designed to support a range of platforms, with the same level of security, scalability, reliability, ease of deployment, and management for WLANs as for wired LANs. There is a migration path to other Cisco switching and routing platforms. Wired and wireless unification most commonly relies today on Cisco 4400 and 2000 Series wireless LAN controllers. The capacity of these controllers ranges from six access points for the 2000 Series up to 100 access points for the 4400 Series. In the future, selected Cisco routers and switches will operate as Cisco WLAN controllers supporting large-scale, branch office, or small and medium-sized business deployments.

World-Class Network Management
System managers need a reliable, cost-effective centralized management tool for wireless LAN planning, configuration, and management. This tool needs to be centrally available and support simplified operations and easy-to-use graphical interfaces. Cisco's WLAN management interface is the Cisco Wireless Control System (WCS). WCS is a foundation that allows IT managers to design, control, and monitor enterprise wireless networks from a central location.
- Cisco WCS supports WLAN planning and design, RF management, location tracking, IPS, and WLAN systems configuration, monitoring, and management.
- It manages multiple controllers and their associated lightweight APs.
- It supports zero-touch deployment and robust graphical interfaces to make WLAN deployment and operations simple and cost-effective.
- Detailed trending and analysis reports support ongoing network operations.
- System administrators have a single solution for RF prediction, policy provisioning, network optimization, troubleshooting, user tracking, security monitoring, and WLAN systems management.

Unified Advanced Services

A WLAN must also be able to support new mobility applications, emerging Wi-Fi technologies, and advanced threat detection and prevention capabilities. Cisco Unified Wireless Network Advanced Services delivers unified support for applications:
- This support is built into Cisco's end-to-end solution.
- Cisco's solution includes advanced services that are ready for immediate implementation or can be deployed over time through a phased implementation.
- Organizations can selectively deploy the services and applications they need, depending on their individual requirements.

High-resolution location services, delivered through the Cisco Wireless Location Appliance, support critical applications such as item tracking, IT management, and location-based security:
- This appliance brings the power of a high-resolution location solution to critical applications such as item tracking, IT management, and location-based security.
- This appliance provides the ability to integrate tightly with a spectrum of technology and application partners through a rich and open application programming interface (API) to support deployment of new business applications.
- In addition, Cisco's location services can be combined with wireless VoIP to make e911 emergency response capabilities available.

WLAN coverage must be reliable and RF bandwidth must be managed operationally to ensure optimal and secure WLAN performance. Cisco achieves this via the following capabilities:
A unified wireless and wired infrastructure, delivering a single point of control for all WLAN traffic
Extension of intelligent network infrastructure device features, such as QoS and management policies, to wireless traffic
QoS for voice and delay-sensitive applications
Real-time capacity management with load balancing
Self-healing WLANs for high availability, including coverage hole detection and correction
Secure Layer 2 and Layer 3 roaming, with context-sensitive transfer of security and QoS policies, with "follow-me VPNs" that enable clients to maintain VPN tunnels when roaming, and with Proactive Key Caching (PKC), helping to ensure fast, scalable roaming in 802.11i environments

Cisco Unified Advanced Services Benefits

Enhanced Productivity, Collaboration, and Responsiveness:
Healthcare environments can improve patient care
Universities and educational institutions can connect students and teachers
Financial institutions can have real-time access to client data
Government agencies can deliver faster access to information, thereby enhancing public safety
Manufacturers can share real-time data from the manufacturing floor and support "just-in-time" manufacturing and assembly
Retail environments can provide data mobility throughout the entire store and warehouse, allowing sales staff to serve customers more effectively
Public access WLANs can provide access to corporate networks while employees are on the road
Corporations can better track assets, access critical business information, and enhance employee productivity through real-time information exchange

Enhanced WLAN Visibility and Control

Cisco provides enhanced visibility and control of the wireless LAN:
Thousands of authorized and unauthorized active Wi-Fi devices can be tracked simultaneously, to within a few meters, within the WLAN infrastructure with the Cisco 2700 Series Wireless Location Appliance
System design delivers built-in resiliency and centralized control and management
Plug-and-play wireless devices support zero-touch configuration

Dynamic RF Management

The Cisco Unified Wireless Network supports dynamic RF management in order to:
Detect changes in the RF environment and dynamically adapt in real time
Provide intelligent RF control for self-configuration, self-healing, and self-optimization

Radio Management

The Radio Manager features simplify the deployment, expansion, and day-to-day management of the WLAN by:
Automatically configuring network-wide radio parameters during initial deployment and network expansion
Continuously monitoring the radio environment, detecting interference and rogue APs, and alerting the WLAN administrator to radio network changes
Providing information to help visualize the network radio topology, including the path loss between APs and RF coverage

The Radio Manager provides these features:
Rogue AP detection
Interference detection
Automatic radio parameter generation


The Radio Manager can generate optimal values for the radio parameters of a given group of APs. Each set of radio parameters can modify the following:
AP frequency
AP transmit power
AP beacon interval
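These are the same parameters an administrator would otherwise set by hand on each access point. As a rough illustration only, the following Cisco IOS sketch shows where such values live on an individual Aironet AP radio interface; the channel, power, and beacon values are arbitrary placeholders, and exact command options vary by radio type and IOS release.

interface Dot11Radio0
 ! 2.4 GHz channel expressed as a center frequency in MHz (2437 corresponds to channel 6)
 channel 2437
 ! Transmit power in milliwatts
 power local 30
 ! Beacon interval in Kilomicroseconds (roughly milliseconds)
 beacon period 100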

You can also choose to run these procedures manually. Table 9-9 summarizes which procedures produce the data required by the different Radio Manager features:

Table 9-9. Radio Manager Features

Feature: Rogue AP detection
Run these procedures: Radio Monitoring, AP Radio Scan
Results are used in: Location Manager, Faults

Feature: Interference detection
Run these procedures: Radio Monitoring, AP Radio Scan
Results are used in: Faults

Feature: Automatic radio parameter generation
Run these procedures: Client Walkabout (recommended)
Results are used in: RM Assisted Configuration, Location Manager, Radio Manager Reports

The results of these procedures make up the radio knowledge base. This knowledge base is saved in the WLSE database and accessed by other Radio Manager features. The Radio Manager tab displays information to manage your WLAN radio environment. All device information shown in this tab is polled from the managed devices in the network. The Radio Manager tab has these options:
Radio Monitoring
AP Radio Scan
Client Walkabout
Location Manager
RM Assisted Configuration
Manage RM Measurements

Public WLAN (PWLAN)

Numerous new Internet service providers (ISPs) have emerged to support public access WLAN, or public WLAN (PWLAN). These providers became known as wireless Internet service providers (WISPs) or virtual network operators (VNOs). Their business model is to provide Internet access at public locations using the unlicensed 802.11b wireless spectrum.

Public Wireless LAN Market

Wireless LAN (WLAN) started in the enterprise in the 1990s. Early adopters deployed WLANs to gain business advantage, streamline processes, and enable workforce and application mobility. PWLAN gained momentum in the late 1990s, especially among mobile operators, who understood that PWLAN was complementary to their existing mobile data networks. Demand for PWLAN services continues to grow as WLAN technology has become widely accepted in enterprise and home networking markets, and as 802.11-enabled user devices have become more common. The PWLAN market is now evolving very rapidly. Many service providers have plans for wider deployment of PWLANs. Although analyst projections for PWLAN growth vary, the rate of hot-spot deployments is expected to continue climbing.

Figure 9-13 shows growth projections made by research firms as to the number of hot spots that are likely to be deployed worldwide through 2006.
The chart compiles actual and planned hot-spot counts from Gartner, Ovum, IDC, In-Stat/MDR, and TeleAnalytics covering 2002 through 2006. WLAN hotspot operators have publicly stated intentions to build more than 65,000 additional hotspots by the end of 2004.

Figure 9-13. Cisco Worldwide Hot-Spot Deployment Actual and Forecasts

Seizing the Opportunity


Offering PWLAN services is not limited to any service provider market segment. Several categories of PWLAN operators are emerging in the marketplace:
Mobile operators: Cellular operators, both Global System for Mobile Communications (GSM) and Code Division Multiple Access (CDMA), are looking at PWLAN as a cost-effective, complementary service to their existing mobile data services.
Local exchange carriers: These include incumbent LECs, competitive LECs, and international PTTs, many of whom have existing network infrastructure that can be used in the deployment of a PWLAN service. This group is looking to bundle PWLAN services with their other offerings.
Virtual network operators (VNOs) or WISPs: These represent another category of service provider that either owns its own hot-spot locations or establishes partner relationships with other site owners or providers to offer PWLAN services.
Cable/multiple system operators (MSOs): Cable operators are becoming interested in PWLAN as a potential complementary offering to their broadband services. Conversely, other PWLAN operators are interested in partnering with cable operators. By doing so, the PWLAN operator can use the MSO's network to provide WAN connectivity to hot-spot venues served by that MSO.
Independent site owners: This type of (non-service provider) operator ranges from airports, hotels, and convention centers to small retail establishments. Site owners typically deploy a PWLAN offering in order to attract patrons to their establishment or to enhance the experience patrons have while visiting.
Educational institutions: A growing category, typically deploying campus PWLAN networks to support their "customers", both students and partners. This category shares requirements of both PWLANs and enterprise WLANs.

Hot-Spot Venues

Hot spots are emerging everywhere as operators race to capture venue locations. Examples include:
Airports
Hotels
Convention centers
College campuses
Coffee houses
Fast food restaurants
Book stores
Phone booths
Aircraft
And many others...

There is a debate among industry analysts about the viability and profitability of various locations, but most agree that today, larger venues, or wherever mobile professionals tend to congregate, represent opportunities for PWLAN operators.

What PWLAN Operators Should Care About

Almost anyone willing to deploy an access point and connect it to the Internet can offer PWLAN service. But to establish a revenue-generating presence in the market and compete for prime public locations, operators need to consider a number of factors in planning for or deploying PWLAN service:

PWLAN Ease of Use
Easy access: Services need to be easily accessible and available to the broadest possible group of subscribers. Easy access implies that no special configuration setup or changes are required for a laptop or mobile computing device.
Roaming: There is value for customers in being able to roam across hot spots that are owned by different operators and be billed as if their own operator provided the service. This is the way cellular telephone services commonly work today.
Easy sign-up: Subscription to the service must be easy. A prospective customer should be able to register and be granted credentials immediately.
Flexible subscription and payment options: Different mobile users have different access requirements. Thus different subscriber offerings, such as day-pass, usage-based, and unlimited monthly plans, are needed in order to attract and retain the broadest range of potential customers.

PWLAN Security
Authentication, identifying and validating users before granting them access to network services, is a necessary function in any PWLAN. In PWLAN today, two main authentication methods are being deployed:
Universal access method (UAM): This Layer 3 authentication method is common today. It uses a client Web browser to access the service for signing up or submitting login credentials. Custom client software, often used by virtual network operators (VNOs), can perform the same function without user intervention.
802.1x/Extensible Authentication Protocol (EAP): This Layer 2 authentication technique, based upon the 802.1x framework and using EAP authentication, is gaining global acceptance. EAP authentication is enforced at the edge of the network through the access point. Several EAP types are available. There are implementation complexities associated with using certain EAP types in the public space, so only two EAP types are currently being considered: EAP Subscriber Identity Module (EAP-SIM) and Protected EAP (PEAP). EAP-SIM is a method that is starting to emerge. EAP-SIM uses SIM-based authentication, which is the standard method of authentication used by GSM-based mobile operators worldwide. EAP-SIM requires that a client device be equipped with a SIM card and SIM card reader, together with a network that is enabled for this capability.

Both the UAM and EAP frameworks have the ability to use one-time passwords (OTPs) as a method for providing stronger password authentication. Some form of generic token card or soft token program can generate these OTPs, or they can be sent to a subscriber via Short Message Service (SMS).
User privacy and protection: Security weaknesses exist in PWLAN because of the need for broad interoperability between clients and the public network. Most deployments today do not implement any encryption. This deficiency has been addressed through emerging standards such as Wi-Fi Protected Access (WPA). Some predictions based on current trends are:
Until there is wider support for the new standards across client platforms, operating systems, and network infrastructure components, PWLAN operators will likely continue to offer "open," unencrypted access at the access point while at the same time moving to more secure authentication methods.
PWLAN operators will still need to use available security measures to reduce vulnerabilities such as sniffing, IP spoofing, and denial-of-service attacks, all of which are drawbacks associated with open access.
Corporate users will continue to use remote access VPN technology to secure their connections over PWLAN services. This is the same technology used in mobile data services such as General Packet Radio Service (GPRS) or CDMA 1xRTT (Radio Transmission Technology) and usually consists of IP Security (IPSec) or Secure Sockets Layer (SSL).
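To make the UAM/802.1x coexistence concrete, the following is a minimal, hypothetical Cisco Aironet IOS sketch for a hot-spot AP that offers an open SSID for Web-based (UAM) login alongside an 802.1x/EAP SSID on a separate VLAN. SSID names, VLAN IDs, addresses, and keys are placeholders, access control for the open SSID is assumed to be enforced upstream (for example, by an SSG), and exact syntax varies by IOS release.

! AAA definitions used by the EAP SSID (RADIUS address and key are placeholders)
aaa new-model
aaa group server radius rad_eap
 server 192.0.2.10 auth-port 1812 acct-port 1813
aaa authentication login eap_methods group rad_eap
radius-server host 192.0.2.10 auth-port 1812 acct-port 1813 key radiuskey
!
interface Dot11Radio0
 ! Open SSID for UAM/Web login; the portal and SSG upstream gate access
 ssid hotspot-open
  vlan 20
  authentication open
  guest-mode
 ! 802.1x/EAP SSID for clients that support Layer 2 authentication
 ssid hotspot-eap
  vlan 30
  authentication open eap eap_methods
  authentication network-eap eap_methods
 encryption vlan 30 mode wep mandatory
!
! Each SSID's VLAN is bridged to a matching 802.1Q subinterface on the wired side
interface Dot11Radio0.20
 encapsulation dot1Q 20
 bridge-group 20
interface Dot11Radio0.30
 encapsulation dot1Q 30
 bridge-group 30
interface FastEthernet0.20
 encapsulation dot1Q 20
 bridge-group 20
interface FastEthernet0.30
 encapsulation dot1Q 30
 bridge-group 30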

Access control and service enablement: Controls access to services before and after user authentication and authorization occurs:
Captive redirection to a Web server portal for subscription to services and service log-on
Accounting record consolidation and generation
Free and unauthenticated access to specific content (often referred to as white list URLs) for advertising, providing venue-specific information, or linking to sign-up subscription services

Infrastructure sharing: Only three non-overlapping channels are available when deploying 802.11b/g wireless technology, so it is often not practical for more than one operator to offer a service at a given location. Many large airports require a single PWLAN infrastructure within the facility, with a network capable of being shared by multiple service providers. In some cases the network may be shared with nonpublic agencies or enterprises at the airport.
Location-aware portal branding: PWLAN operators need to be able to customize what the user sees, varying by location. This is especially important where an operator provides access at different venues and venue types. Often an operator will negotiate a revenue-sharing contract with a venue owner that will require the portal to contain co-branded elements.
Manageability: Management of a PWLAN solution is essential for the viability and profitability of this growing service. The potential for any new provider-operated service is governed not only by its revenue stream and rate of adoption, but by the capital and operating costs of deploying the service and maintaining and supporting its operation and user base. PWLAN is no exception. PWLAN service must be efficiently deployable and effectively manageable.
Operational support: Network management and operational support requirements are part of overall service requirements. Support metrics need to be in place to measure and track cost of services, return on investment, operational costs, and profit contribution. Functionality must be in place to assist with the automation of deploying new service locations, managing and billing subscribers and subscriber services, tracking and locating faults and service degradation points, and ensuring stability and security in the system.

The Cisco PWLAN Solution

The Cisco PWLAN solution builds on enterprise Cisco platforms. PWLAN-specific features were developed for these platforms as a complete end-to-end solution (Figure 9-14).


Figure 9-14. Cisco PWLAN Solution Architecture Overview

Access points: The Cisco PWLAN solution builds on Cisco 1100 Series and Cisco 1200 Series access points.
Access control and service enablement: Access control relies on flexible Cisco IOS Service Selection Gateway (SSG) technology, now available across a broad range of Cisco platforms, including the Cisco 2651XM Router, Cisco 2691 Router, Cisco 3725 Router, Cisco 3745 Router, Cisco 7200 Series, and Cisco 7301 Router. Working with the Cisco CNS Subscriber Edge Services Manager (SESM), the Cisco SSG provides subscriber authentication, service selection, service connection, and accounting to subscribers of Internet and intranet services.
Captive portal and branding server: The Cisco CNS SESM works with the Cisco SSG to manage and control the subscriber experience, supporting customization and personalization depending on device, client, location, service, and other criteria, to offer both value to end users and maximized service and advertising revenue for the PWLAN operator.
Access control server: The Cisco CNS Access Registrar is a RADIUS-compliant access policy server used to support Internet and 802.1x/EAP user authentication. When used with the Cisco IP Transfer Point (ITP) MAP gateway, the Cisco CNS Access Registrar performs home location register (HLR) proxy services, supporting EAP-SIM authentication for mobile operator networks. Cisco CNS Access Registrar provides the carrier-class performance and scalability required for integration with service management systems.
Access zone router (AZR): Introduced on the Cisco 1700 platform, with features now available on Cisco 2600 and Cisco 3700 platforms, the AZR provides connectivity, client address management, security services, and routing across a WAN from each access point to an operator's point of presence (POP) or data center.
Mobile operator Signaling System 7 (SS7) interconnect: The Cisco ITP transports SS7 traffic over IP (SS7oIP) networks. When deployed in a PWLAN network, the Cisco ITP acts as a gateway, taking SIM authentication credentials from 802.1x/EAP-SIM and formatting them into standard SS7 MAP messages for routing to the operator's HLR/AuC (Authentication Center).
Network management: Cisco has an element management system combined with a service management layer for PWLAN applications. This includes the CiscoWorks Wireless LAN Solution Engine (WLSE), CiscoWorks LAN Management Solution (LMS), Cisco Distributed Administration Tool (DAT), Cisco Signaling Gateway Manager (SGM), Cisco Information Center (Cisco Info Center), Cisco Networking Services Configuration Engine, and the Cisco CNS Performance Engine (CNS-PE).

Cisco PWLAN Solution Differentiation

Flexibility and Feature Richness
The Cisco PWLAN solution continues to evolve, adding flexibility and features to meet changing requirements:
Shared infrastructure support: The Cisco PWLAN solution can address user group separation and equal access with Layer 2 and Layer 3 methods, supporting different services for different situations, including outsourced or wholesale PWLAN, guest-access WLAN services, and PWLAN services, using a common infrastructure.
Enabling services: PWLAN services that are merely simple gateways to the Internet will inevitably become commoditized. Operators need to generate increased revenue at lower cost and in shorter timeframes. The Cisco PWLAN solution supports the introduction of new value-added services with incremental revenue streams. Examples of such services include music, movies, sports, gaming, or ring tones for the consumer, and business-class, WLAN-optimized services such as voice over IP (VoIP) for the enterprise or hosted VPN services that offer security for all users. Cisco IOS Software offers operators opportunities for bundling of solutions to meet emerging requirements.
Comprehensive branding support: Large users with prime hot-spot locations such as hotel chains, convention centers, and airports require co-branded, customized portals. These portals also must include localized information such as area guides and points of interest. The Cisco PWLAN solution enables operators to meet these demands. In addition to providing differentiated branding across venue locations or types, the solution can support differentiated branding within a venue based upon access-point location.
Flexible service billing options: The Cisco PWLAN solution supports flexible billing models, including postpaid, prepaid (time or volume), tariff, and billing depending on subscription content. The network can detect when a user has left the service area without logging out and automatically close the session. In doing so, billing accuracy is preserved and network resources are freed.
Ready-to-use support for ease of use: Features developed for the Cisco PWLAN solution allow clients with static host client configurations to access the service without changes to their laptop or mobile device. Cisco supports clients with static IP and Domain Name System (DNS) entries in addition to supporting clients with static HTTP proxy configurations.
Authentication transparency: Not all PWLAN subscribers will use UAM Web-based authentication. As service providers begin to implement emerging 802.1x/EAP authentication methods in their networks, there will be scenarios where both authentication methods exist. The Cisco SSG access control platform can proxy EAP authentication messages from hot-spot access points and automatically create user sessions upon successful EAP authentication, thus eliminating the need for "double authentication," first at Layer 2 with 802.1x/EAP and then at Layer 3 through the Web portal. You can use Cisco SSG for centralized accounting record generation for both 802.1x/EAP and Web-authenticated users.
Open architecture with broad platform feature support: Operators can implement centralized, distributed, or hybrid deployment models with flexible platform choices to configure and scale their networks.
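Because the SSG sits at the center of access control and accounting in this architecture, a rough, hypothetical configuration sketch may help fix the idea. All addresses, interface names, and keys below are placeholders, and command availability and exact syntax vary by platform and IOS release; this is not a complete or validated SSG deployment.

! Enable SSG and point it at the SESM portal network (placeholder addressing)
ssg enable
ssg default-network 192.0.2.100 255.255.255.255
ssg service-password servicekey
!
! RADIUS helper parameters used for SESM/SSG communication
ssg radius-helper auth-port 1812
ssg radius-helper acct-port 1813
ssg radius-helper key sesm-key
!
! Traffic direction: downlink faces the hot-spot access points, uplink faces the services
ssg bind direction downlink FastEthernet0/0
ssg bind direction uplink FastEthernet0/1
!
! AAA server used for subscriber authentication and accounting
radius-server host 192.0.2.10 auth-port 1812 acct-port 1813 key radiuskey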

PWLAN Security
Security compromises exist in today's "open" PWLAN networks. The Cisco PWLAN solution extends Cisco IOS Software support on Cisco access zone routers (AZRs) to reduce the risk of session hijacking associated with malicious IP spoofing activity. Operators can use Cisco access point features to block local peer attacks as well as to prevent man-in-the-middle infrastructure address spoofing. Cisco access points support all 802.1x/EAP methods available today, in addition to supporting WPA for air link encryption.

CiscoWorks Wireless LAN Solution Engine (WLSE)

The CiscoWorks Wireless LAN Solution Engine (WLSE) is a hardware and software solution for managing the wireless LAN (WLAN) infrastructure. As the management component of the Cisco SWAN framework, WLSE uses intelligent WLAN capabilities to automate advanced radio frequency (RF) and device management in ways that simplify deployment, reduce operational complexity, and provide system administrators with visibility into the WLAN. This automation reduces the cost and time needed for WLAN deployment, management, and security.

WLSE is a GUI-based CiscoWorks tool used to centrally maintain and manage Cisco wireless networks. It identifies and configures access points (APs) in customer-defined groups and reports on throughput and client associations, contributing to optimized wireless network performance and overall operational efficiency. WLSE's centralized management capabilities include an integrated template-based configuration tool. You can think of a configuration template as a configuration update file for an access point. This file might contain the update for a single parameter or a complete AP configuration. WLSE enables administrators to detect, locate, and disable unauthorized (rogue) access points and RF interference. The assisted site survey feature automates the process of determining optimal access point settings, including transmit power and channel selection. WLSE automatically configures APs and bridges, assures the consistent application of security policies, and actively monitors faults and performance. WLSE is a core component of the Cisco Structured Wireless-Aware Network (SWAN). WLSE detects, locates, and disables rogue access points in order to ensure that network security policies are applied consistently. WLSE also detects unauthorized WLAN client networks. These capabilities can benefit any organization, including those that simply want to guard against intruders. The Location Manager (Figure 9-15) is a WLSE GUI that displays wireless APs and bridges on a facility floor plan. The locations of rogue APs and RF interference are represented visually on the floor plan, as are the individual AP coverage areas.
Figure 9-15. WLSE "Location View" Displays Rogue Access Point Location

WLSE automates bulk firmware updates and mass configuration of APs and bridges. WLSE may be integrated with other network management systems, operations support systems, and other Cisco applications through syslog messages, Simple Network Management Protocol (SNMP) traps, and an Extensible Markup Language (XML) interface. The secure HTML-based WLSE user interface provides access anywhere, even through firewalls.


WLSE dynamic RF management supports self-healing, which enables a Cisco Aironet AP to adjust its cell coverage area when an adjacent AP becomes disabled or fails. It also helps optimize performance by detecting and locating RF interference while at the same time monitoring utilization and faults.

WLSE Key Features and Benefits

WLSE Deployment
WLSE supports deployment by automating configuration and setup:
Simple AP configuration: Newly deployed APs can be automatically discovered and configured using Dynamic Host Configuration Protocol (DHCP), with the flexibility to assign different configurations depending on the access point device type, source subnet, and software version. You can automate deployment and maintain control in rapidly growing environments. Cisco Aironet access points, bridges, and the switches to which they are connected are automatically discovered with the Cisco Discovery Protocol.
Assisted site surveys: Site surveys are essential during deployment, and they should be performed regularly thereafter to address changes in the environment. Complete and reliable WLAN coverage relies on a detailed site survey. Site surveys were once expensive and time-consuming, and they required special expertise to perform. WLSE enables system managers to conduct cost-effective site surveys in-house without needing to be experts in RF propagation and measurement. The assisted site survey tool automatically determines optimal frequency selection, transmit power, and other settings, which an administrator can then apply. The desired coverage areas can be defined to include only specified areas.
Mass configuration: Configuring a group with hundreds of devices requires little more effort than configuring a single device. Configuration tasks may be scheduled or executed on demand. WLSE supports all the configuration settings available on access points, including Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access 2 (WPA2) security settings. Configuration updates are done using Secure Shell (SSH) Protocol.

Table 9-10 summarizes the features and benefits of WLSE.

Table 9-10. WLSE Features and Benefits

Feature: Wireless LAN IDS with rogue AP detection, automatic switch port shutdown, and unauthorized WLAN detection
Benefit: Eliminates security threats posed by malicious intruders and by employee-installed unauthorized APs

Feature: Interference detection
Benefit: Notifies administrators quickly about conditions that may affect network performance

Feature: Self-healing adjusts cell coverage area to compensate for disabled or failed APs
Benefit: Increases WLAN availability and optimizes WLAN performance

Feature: Assisted site survey tool
Benefit: Assisted site surveys performed by IT personnel reduce the costs, skills, and time required to make optimal radio settings for best network performance

Feature: Automated resite surveys
Benefit: Maintains peak WLAN performance and reliable WLAN coverage by periodically reassessing the performance of optimal settings in the network

Feature: Automated configuration and bulk firmware updates
Benefit: Simplifies daily operations and management

Feature: Access point and bridge security policy misconfiguration detection and alerts
Benefit: Enhances security by monitoring consistency throughout the network

Feature: Out-of-box AP deployment
Benefit: Allows for rapid deployment and expansion

Feature: Proactive fault and performance monitoring
Benefit: Increases WLAN availability

Feature: Access point group usage reports
Benefit: Fast troubleshooting improves user satisfaction

Feature: XML data export
Benefit: Facilitates integration with third-party applications

WLSE Operations

WLSE automates a wide range of repetitive, time-consuming tasks, simplifying the management of Cisco Aironet access points and bridges to enhance productivity for system administrators. Access point and bridge firmware may be updated in bulk through centralized firmware updates. Updates may be assigned to a specific device or to groups. Tasks may be scheduled or executed on demand. WLSE can perform mass upgrades of older Cisco Aironet 1200 Series and 350 Series access points running VxWorks to newer Cisco IOS Software versions. Many RF management and Cisco SWAN features require that APs run Cisco IOS Software.

WLSE operational features include:


VLAN configuration: VLANs on access points may be configured and monitored, allowing differentiation of LAN policies and services, such as security and quality of service, for different users on enterprise and public-access VLANs.
Configuration archive: WLSE is able to store the last four configuration versions for each managed AP, allowing configuration tasks to be undone.
Automated discovery: WLSE automatically discovers Cisco Aironet access points, bridges, and switches connected to access points using Cisco Discovery Protocol. Discovery may be scheduled or run on demand.
Dynamic grouping: A "Device Groups" feature makes administering a WLAN easier and more intuitive. The system administrator can organize devices into hierarchical groups. These groups may span multiple subnets.
Customizable thresholds: Administrators may define different fault and performance thresholds for different sites and groups, accompanied by specific actions and fault priorities. A centralized fault screen simplifies quick problem resolution. Network load, RF usage, errors, and client associations can all be monitored.
Fault status: WLSE provides a centralized tree view of all APs and device groups. Color coding and group icons indicate fault status. Faults can be filtered and sorted by priority to simplify viewing and resolution of problems.
Fault notification: Fault notification and forwarding are implemented through syslog messages, SNMP traps, and e-mail.
Switch monitoring: Switches connected to APs are monitored for availability and utilization of ports, CPU, and memory.

WLSE Security and Wireless LAN Intrusion Detection

Wireless LAN threat defense is provided through the Cisco SWAN Wireless LAN Intrusion Detection System (IDS). Unauthorized (rogue) APs installed by employees or intruders can create security breaches that put the entire network at risk. Cisco SWAN quickly detects, locates, and automatically disables rogue APs. WLSE also detects unauthorized APs and WLAN networks, locating them and identifying which wireless clients are participating. It also monitors WPA message integrity failures, which may indicate man-in-the-middle attacks. WLAN IDS protection can be tailored to meet individual needs:
Integrated WLAN IDS: Standard Cisco Aironet APs can be deployed in multifunction mode to service client devices and to provide WLAN intrusion monitoring. Intrusion detection information is gathered from the APs that scan the RF environment. Optionally, Cisco client cards and Cisco Compatible client devices provide additional information about the RF environment.

Dedicated WLAN IDS: A dedicated, access point-only WLAN is deployed with the AP (802.11a, b, or g) placed in radio scan mode to support WLAN intrusion monitoring. This provides continuous monitoring of the RF environment. Active-but-unassociated device monitoring is supported to minimize the risk of clients associating to rogue APs, as well as to protect the network from intruders probing the RF environment for weaknesses.

Other security features of WLSE include:
Monitoring of IEEE 802.1x server availability: IEEE 802.1x Extensible Authentication Protocol (EAP) servers, including Cisco Secure access control servers (ACSs), are monitored for response time. Various authentication types are supported: Cisco LEAP, EAP-Flexible Authentication via Secure Tunneling (EAP-FAST), Protected EAP (PEAP), TACACS+, and generic RADIUS.
Secure user interface: WLSE uses a secure HTML-based user interface that may be accessed anywhere, even through firewalls. In addition to the Web-based GUI, a command-line interface similar to that in Cisco IOS Software provides direct console, Telnet, or SSH access for basic configuration and troubleshooting. WLSE communicates with access points using HTTP Secure Sockets Layer (SSL) sessions for management.
Role-based access model: WLSE uses a flexible, role-based user access model. For example, help desks can be limited to viewing reports and faults.
Security policy monitoring: All APs on the network are monitored for consistent application of security policies. Alerts are generated for violations. These can be delivered by e-mail, syslog, or SNMP trap notifications.

Performance Optimization and High Availability

Cisco Aironet APs have built-in RF scanning and measurement capabilities. WLSE analyzes these RF measurements, provides notification if performance degrades, and displays air/RF coverage (Figure 9-16). It also analyzes RF measurements from Cisco Aironet and Cisco Compatible client devices. Interference detection and location is critical to a reliable WLAN. RF measurements transmitted to WLSE include those for both 802.11 and non-802.11 interference. If RF interference exceeds an administrator-defined threshold, a fault indication is generated so that the administrator can quickly locate and suppress the source of the interference. Client air scanning and monitoring provides ten to twenty times more RF measurement data than access-point RF measurements alone. Because WLAN clients can freely move about all areas of a building, using client scanning and monitoring extends RF monitoring into areas most likely to contain rogue APs.

Figure 9-16. Assisted Site Survey, "Access Point Scan Mode"

Interference detection: WLSE stores the physical location of all managed APs and creates a site map for a WLAN installation. A wireless-aware network can detect points of RF interference that impact network performance. The source of this interference could be a rogue AP or a device that operates in the same frequency range, such as a cordless telephone or a leaky microwave oven. Interference detection and location is critical to a reliable WLAN. Administrators can define thresholds to generate fault notifications when preset interference levels are exceeded.
Automated resite surveys: WLSE automatically monitors radio throughput and performance to provide notification if performance falls below administrator-defined thresholds. Optimal settings can be found on the fly by running the site survey wizard and then applied to the network.
Warm standby redundancy: WLSE supports fault tolerance and redundancy through a primary and backup mechanism. If the primary fails, the backup automatically takes over. Data such as performance data, fault messages, and radio scans is synchronized between the primary and backup on a user-defined interval to minimize data loss when the backup takes over.
Self-healing WLANs: WLSE can detect and compensate for an access point that has failed by automatically increasing the power and cell coverage of surrounding APs. The self-healing process assures contiguous coverage in the WLAN and minimizes client impact.

WLSE Reporting, Trending, Planning, and Troubleshooting

Real-time device tracking and flexible reporting make WLSE a tool for troubleshooting and capacity planning. Using only a client name, user name (supported for Cisco LEAP and PEAP), or MAC address, you can determine in real time which AP a client device is associated with. To aid in troubleshooting, the previous ten associations for the client device and the associated APs can also be accessed. Network utilization, client association and utilization, historical and current client usage statistics, access point Ethernet and radio interface status, and error details can be displayed in both graphical and tabular form. Reports can be generated at the individual device level and at the group level. All reports may be scheduled, delivered by e-mail, or exported in CSV, XML, and PDF formats.

WLSE provides a comprehensive coverage display overlaid on floor maps to provide visibility into the RF environment. The Location Manager feature provides coverage by data rate and signal strength. WLSE supports RF management for directional antennas. Details about device settings, including channel and power, can be overlaid on the coverage display.

WLSE Integration

To manage a converged wired and wireless network, integration with third-party network management systems is performed through syslog messages, SNMP traps, and an XML interface. WLSE integrates with the CiscoWorks LAN Management Solution and other CiscoWorks applications. Device inventory and credentials, for example, can be imported or exported between WLSE and CiscoWorks Resource Manager Essentials (RME), an application that provides broad network management for a wide range of Cisco devices. Device discovery may be turned off to allow automatic inventory synchronization with CiscoWorks RME. WLSE uses the same default user roles as CiscoWorks RME, but it allows customization. WLSE can be launched from the CiscoWorks Cisco Management Connection desktop.

Cisco Fast Secure Roaming

Wireless domain services (WDS), introduced with the Cisco Structured Wireless-Aware Network (SWAN), is a collection of Cisco IOS Software features that enhance WLAN client mobility and simplify WLAN deployment and management. These services, supported today on APs and client devices, and on specific Cisco LAN switches and routers in 2004, include fast secure roaming and 802.1x local authentication. With fast secure roaming, authenticated client devices can roam securely at Layer 2 from one AP to another without any perceptible delay during reassociation. Fast secure roaming supports latency-sensitive applications such as wireless voice over IP (VoIP), enterprise resource planning (ERP), or Citrix-based solutions (Figure 9-17). WDS provides fast, secure handoff services to APs, without dropping connections, for sub-150 ms roaming within a subnet.
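As a rough, hedged sketch of how WDS is typically enabled on Aironet IOS access points, the example below designates one AP as the WDS device and registers another AP as an infrastructure member. Server addresses, usernames, method-list names, and keys are placeholders, and exact syntax depends on the IOS release.

! --- On the AP acting as the WDS device ---
aaa new-model
aaa group server radius rad_wds
 server 192.0.2.10 auth-port 1812 acct-port 1813
aaa authentication login method_infra group rad_wds
aaa authentication login method_client group rad_wds
radius-server host 192.0.2.10 auth-port 1812 acct-port 1813 key radiuskey
!
wlccp authentication-server infrastructure method_infra
wlccp authentication-server client leap method_client
wlccp wds priority 200 interface BVI1
!
! --- On each infrastructure AP that registers with the WDS ---
! (the same username/password must also exist on the AAA server)
wlccp ap username wds-user password 0 wds-pass
! Optionally point statically at the WDS instead of relying on discovery
wlccp ap wds ip address 10.1.21.5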

In the topology shown, an AP-based WDS and a Cisco ACS AAA server sit across a WAN from the hot-spot APs. Each access point first 802.1x-authenticates with the WDS access point (AP1) to establish a secure session, and the initial client 802.1x authentication goes to the central AAA server (roughly 500 ms). When the client roams, it signals the WDS, which sends the client's base key to the new access point (AP2), reducing the overall roam time to under 150 ms. Because the local WDS device handles roaming and reauthentication, the WAN link is not used during the roam.

Figure 9-17. Fast Secure Roaming

Fast secure roaming, implemented under Cisco IOS Software release 12.2(11)JA, comprises two main enhancements:
Improved 802.11 channel scanning during physical roaming
Improved reauthentication using advanced key management

Improved reauthentication, using advanced key management enhancements, speeds up Cisco LEAP authentication. Improved 802.11 channel scanning speeds up all Layer 2 roaming, regardless of the security method used.

Improved 802.11 Channel Scanning
Channel scanning is enabled by default on Cisco clients and APs and is not configurable. The fast secure roaming enhancements to channel scanning require communication between a client device and an AP. Channel scanning requires the following software:
Cisco IOS Software release 12.2(11)JA or greater
Cisco Aironet Client Utility, firmware, and driver software, included in Cisco Aironet Client Adapter Installation Wizard version 1.1 or greater

Channel Scanning Prior to Introduction of Fast Secure Roaming
Prior to the release of Cisco IOS Software 12.2(11)JA, Cisco Aironet clients took 37 ms to scan each of the 11 802.11 channels in the United States, for a total scan time of around 400 ms. (Eleven channels are used in the United States; other countries use different channel sets.) For each of the 802.11 channels valid in a specific regulatory domain, the client performs the following steps:
The client device selects a specific WLAN radio channel
The client listens to avoid a collision
The client transmits a probe frame
The client waits for probe responses or beacon frames

Fast Secure Roaming Channel Scanning Improvements
Improvements to Cisco channel scanning in Cisco IOS Software release 12.2(11)JA and later include:
Re-associating clients communicate information to the new AP such as the length of time since they lost association with the prior AP, the channel number, and the SSID.
Using client association information, an AP builds a list of adjacent APs and the channels these APs are using. If the client reporting an adjacent AP was disassociated from its previous AP for more than 10 seconds, its information is not added to the new access point's list.
Access points store a list of at most 30 adjacent APs. This list is aged out after a one-day period.
When a client associates to an AP, the associated AP sends the adjacent AP list to the client as a directed unicast packet.

The communication between client and access point is shown in Figure 9-18.

Figure 9-18. Client and Access Point Communication During Association


When a client roams, it uses the adjacent access point list received from its current access point to reduce the number of channels it will need to scan. How a client uses the adjacent access point list depends upon how busy the client is. There are three types of client roaming:
Normal Roam: The client has not sent or received a unicast packet in the last 500 ms. The client does not use the adjacent access point list obtained from the previous AP. Instead it scans all valid channels.
Fast Roam: The client has sent or received a unicast packet in the last 500 ms. The client scans channels on which it has been informed there is an adjacent AP. If no new APs are found after scanning the adjacent AP list, the client reverts to scanning all channels. The client limits its scan time to 75 ms if it is able to find at least one better AP.
Very Fast Roam: The client has sent or received a unicast packet in the last 500 ms and the client is contributing a non-zero percentage to the load of the cell. This is identical to a Fast Roam, except that the scan terminates when a better AP is found.

When the client does not receive an adjacent AP list from its previous AP, and it wants to fast roam or very fast roam, it will use the list of channels on which APs were found during its last full scan.

Improved Cisco LEAP Authentication
Fast secure roaming provides a fast rekey capability for clients using Cisco LEAP as their 802.1x authentication protocol. Improved Cisco LEAP authentication introduces the CCKM protocol, a component of the Cisco Wireless Security Suite.

Cisco LEAP Authentication Prior to Fast Secure Roaming
Prior to Cisco IOS Software version 12.2(8)JA, a Cisco LEAP client needed to perform a full Cisco LEAP reauthentication each time it roamed. A Cisco LEAP reauthentication requires:
A minimum of 100 ms
An average of roughly 600 ms
Up to 1.2 seconds or more

These times are in addition to the channel-scanning portion of the Layer 2 roam. Cisco LEAP authentication takes time because it requires three round trips to a RADIUS server using the following process:
The client sends its identity, and the Cisco Secure Access Control Server (ACS) or RADIUS server sends a challenge
The client sends the challenge response, and Cisco Secure ACS sends a success message
The client sends a challenge, and Cisco Secure ACS sends the challenge response

Each of these round-trip transactions requires time-consuming cryptographic calculations.

Fast Secure Roaming Advanced Key Management
Fast secure roaming speeds the reauthentication process. Cisco fast secure roaming requires 802.1x authentication of APs and clients to a RADIUS server. This authentication uses a dedicated RADIUS server or a local authentication service running on a Cisco Aironet access point. Wi-Fi Protected Access (WPA) and 802.11i both require 802.1x. They introduce a new key hierarchy to WLAN security. Cisco fast secure roaming is a WDS capability using this new key hierarchy. Currently, 802.11i and WPA have no equivalent fast secure roaming capability.
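On the AP side, CCKM key management is enabled per SSID once WDS is in place. The following is a minimal, hypothetical Aironet IOS sketch only; the SSID name, cipher choice, and method-list name are placeholders, and the available cipher and key-management combinations differ across IOS releases.

interface Dot11Radio0
 ! A Cisco-centric cipher suite commonly paired with CCKM (one possible choice)
 encryption mode ciphers ckip-cmic
 ssid voice
  ! Cisco LEAP (network-EAP) authentication against the configured method list
  authentication network-eap eap_methods
  ! Enable CCKM so the WDS can rekey roaming clients without a RADIUS round trip
  authentication key-management cckm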


The figure contrasts the WPA/802.11i roles (supplicant, authenticator, authentication server) with the CCKM roles (supplicant, authenticator/supplicant, WDS authenticator, authentication server) across the discovery, authentication, key management, and data protection phases.
Figure 9-19. WPA/802.11i Key Management Compared to CCKM Initial Key Establishment

Fast Secure Roaming Stages
There are three fast secure roaming stages:
Infrastructure authentication: All APs in a Layer 2 802.1x domain authenticate, via the WDS, to a RADIUS server.
Initial authentication: A WLAN client first associates to an AP in a new Layer 2 domain by performing full 802.1x authentication with the RADIUS server, via the WDS. This initial authentication has the same latency characteristics as non-CCKM (Cisco LEAP) authentication. Fast secure roaming starts when a client moves to a new AP in the same Layer 2 domain.
Fast secure roaming: When a client roams to a different AP in the same Layer 2 domain, CCKM performs fast rekeying, without the client needing to contact the RADIUS server.


Infrastructure Authentication
During infrastructure authentication, all Cisco Aironet access points, including any running WDS, authenticate to a RADIUS server using Cisco LEAP via the WDS. This process is illustrated in Figure 9-20.

Figure 9-20. Infrastructure Authentication Phase

During the infrastructure authentication phase, all Cisco infrastructure devices in a Layer 2 domain must authenticate to the WDS. Each AP establishes a shared key with the WDS. This shared key is the context transfer key (CTK) and is used to pass key material from the WDS to the new AP during a fast secure roam.
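If WDS is configured as sketched earlier, registration can be checked from the CLI. The commands below are believed to be the relevant show commands on Aironet IOS, but their exact names and output vary by release, so treat this as an assumption rather than a definitive reference.

! On an infrastructure AP: shows the WDS it has registered with and its state
show wlccp ap
! On the AP running WDS: lists the access points that have authenticated to the WDS
show wlccp wds ap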

Initial Authentication
A WLAN client first associating to an AP in a new Layer 2 domain performs full 802.1x authentication with the RADIUS server, via the WDS. This initial authentication has the same latency characteristics as a non-CCKM Cisco LEAP authentication. Initial authentication consists of the following three sub-stages:
Discovery stage
Authentication stage
Key management stage

Fast secure roaming occurs after initial authentication, when the client device moves to subsequent APs in the same Layer 2 domain.

Initial Authentication: Discovery Stage
The discovery phase is the same under WPA/802.11i or CCKM client authentication (Figure 9-21).

Figure 9-21. Initial Authentication: Discovery Stage

The AP advertises its security capabilities via the Robust Security Network Information Element (RSNIE) in the AP's beacons and probe responses. CCKM capability is communicated by a MAC organizationally unique identifier (OUI) value of 00:40:96 and a type value of 0 in the Authenticated Key Management (AKM) suite selector in the RSNIE.

Initial Authentication: Authentication Stage
The 802.1x Cisco LEAP authenticator in CCKM is split between the AP with which the client is associated and the WDS:
The AP the client is authenticating to blocks all client data traffic until Cisco LEAP authentication is complete, following the standard authentication process.
Instead of communicating directly with the RADIUS server to perform the Cisco LEAP authentication, the AP places a wireless LAN context control protocol (WLCCP) header on the packets and sends them to the WDS.
The WDS communicates with the RADIUS server to complete the Cisco LEAP authentication.

A network session key (NSK) is mutually derived on the RADIUS server and the client following successful authentication (Figure 9-22).

Figure 9-22. Initial Authentication: Authentication Stage

Initial Authentication: Key Management Stage
CCKM authentication in the key management stage differs significantly from WPA/802.11i authentication. An additional key, the base transient key (BTK), is established on the WDS. In the CCKM scheme, the BTK is used for fast secure roaming. For WPA/802.11i, the BTK does not exist and a full reauthentication is required for roaming clients (Figure 9-23).

Figure 9-23. Initial Authentication: Key Management Stage

The RADIUS server forwards the NSK it derived after the Cisco LEAP authentication process to the WDS (from the RADIUS server's viewpoint, the WDS was the 802.1x authenticator).
The NSK is used as the basis for all subsequent keys for the lifetime of the client's association with this extended basic service set (EBSS), or until the RADIUS server's rekey interval changes it.
The WDS and the client derive a BTK and a key request key (KRK) by combining the NSK with random numbers (nonces) obtained via a process known as a four-way handshake. The four-way handshake appears to the client to take place between the client and the AP it is authenticating to, but the AP puts a WLCCP header on the frames that set up the four-way handshake and forwards them to the WDS.
After the four-way handshake is complete, the WDS forwards the BTK, as well as a rekey number (RN), to the AP to which the client is authenticating (since this is the initial authentication, the WDS sets the RN to one).
The AP the client is authenticating to uses the BTK, RN, and basic service set identifier (BSSID) to derive a pairwise transient key (PTK), which includes a shared session key for unicast traffic.


After the PTK is derived, the AP sends the group transient key (GTK), used for multicast and broadcast traffic, to the client, encrypted by an element of the PTK. The process of sending the GTK to a client is called the two-way handshake. The BTK and KRK are used when a client roams to quickly establish a new PTK.

Fast Secure Roaming
Fast secure roaming occurs after the client has performed its initial LEAP authentication. Any subsequent roam to an AP in the same Layer 2 domain will use the preexisting key hierarchy to perform a very fast rekey.

Comparing a WPA/802.11i Roam with a CCKM Roam
CCKM offers a real advantage when the WLAN client roams. In Figure 9-24, the WPA client is shown completing a full reauthentication when it roams (including 802.1x reauthentication to a central RADIUS server). In contrast, the CCKM client sends a single reassociate-request frame to the AP, and the AP sends a single frame to a local WDS and receives a single frame in reply.

Figure 9-24. Comparing a CCKM Roam Establishment with Industry Standard WPA/802.11i Key Management

Table 9-11 compares a CCKM roam re-establishment with industry standard key management.

Table 9-11. Comparing a CCKM Roam Establishment with Standard Key Management

WPA/802.11i: When a WPA/802.11i client roams, it completes a full reauthentication, as it did for initial authentication:
A full Cisco LEAP reauthentication with a central RADIUS server
The complete four-way handshake to derive the PTK
The complete two-way handshake to determine the GTK

Cisco CCKM: When a CCKM client roams, it sends a reassociate request to its new AP:
The new AP forwards the reassociate request to the WDS
The WDS sends the new AP the client's BTK
The new AP and the client mutually derive a new PTK
The GTK, encrypted by the PTK, is sent to the client

When a CCKM client roams, the reassociation request message it sends to the new AP includes:
A message integrity check (MIC) using the KRK
A sequentially incrementing RN

After sending the reassociation request, the client can calculate its next PTK. It does this by performing a cryptographic hash of the BTK, the RN, and the BSSID. Figure 9-25 shows the CCKM key management phase in more detail.

Figure 9-25. CCKM Fast Rekey


The AP encapsulates the reassociation request using the WLCCP protocol and passes it to the WDS. The WDS verifies the MIC. The WDS then encrypts the BTK and the RN with the CTK shared by the WDS and the new AP, and passes the encrypted message to the new AP. The new AP then hashes the BTK, RN, and BSSID and calculates the same new PTK as the client. After the PTK has been mutually derived by the AP and the client, the AP uses an element of the PTK to encrypt the GTK. The AP then passes the GTK to the client.

Cisco AVVID Design

Cisco AVVID provides comprehensive campus network architecture guidance. For existing networks with a wireless overlay or for freestanding all-wireless networks, any existing Cisco AVVID Layer 3 architecture should be maintained where possible, with the WLAN deployed as additional, dedicated wireless subnets associated with wiring closets. Figure 9-26 shows a representative Cisco AVVID architecture in which a WLAN subnet has been added to each access layer switch.

The topology shows distribution switches acting as HSRP active gateways for the VLANs on each access layer switch, with each wiring closet carrying a data VLAN, a wireless VLAN, and a voice VLAN (for example, 10.1.20.0 for VLAN 20 data, 10.1.21.0 for VLAN 21, and 10.1.120.0 for VLAN 120 voice on one closet, and 10.1.40.0, 10.1.41.0, and 10.1.140.0 for VLANs 40, 41, and 140 on another).
Figure 9-26. Adding WLAN to Cisco AVVID Architecture

Sizing the Layer 2 Domain
Each access layer switch is considered to be a separate wiring closet. A dedicated VLAN for each wireless LAN AP is added to each switch. Access points are connected to a dedicated VLAN to minimize broadcast domains, since WLANs are a shared half-duplex medium and broadcasts have a greater impact on APs than on most devices connected to switch ports. Some organizations may decide to forgo a Layer 3 architecture and instead extend the Layer 2 network to provide Layer 2 mobility across a larger section of the enterprise. For these organizations, advanced spanning tree features such as Rapid Per VLAN Spanning Tree Plus (Rapid PVST+) should be considered.
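To give a feel for the distribution-layer side of this design, the following is a minimal, hypothetical Catalyst IOS sketch for one wireless VLAN. The VLAN number and addresses simply echo the sample addressing above, while the HSRP group number, priority, and spanning-tree choice are placeholders.

! Rapid PVST+ if Layer 2 is extended for wireless mobility
spanning-tree mode rapid-pvst
!
! SVI and HSRP gateway for the dedicated wireless VLAN (illustrative values)
interface Vlan21
 description WLAN subnet for wiring closet 1
 ip address 10.1.21.2 255.255.255.0
 standby 21 ip 10.1.21.1
 standby 21 priority 110
 standby 21 preempt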

Roaming

Cisco Aironet IAPP supports seamless mobility only within a single subnet. In the absence of Mobile IP, when a WLAN client moves to an AP on a different subnet, the IP address must be renewed. Renewing the IP address will break application sessions that are tied to the IP address:
- Windows 2000 and Windows XP renew IP addresses automatically.
- Some applications, such as e-mail and Web-based applications, may recover and continue to operate normally when their IP address is changed (either automatically, as in Windows 2000 or XP, or manually under a different operating system).
- Other applications, such as Telnet, File Transfer Protocol (FTP), and other connection-oriented applications, will fail when their IP address is changed and will need to be restarted manually.

Mobile IP or proxy mobile IP (PMIP) is a solution for this application problem, as it maintains a fixed IP address for host applications across L3 subnet boundaries.
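Mobile IP is configured on the routed infrastructure rather than on the APs. The fragment below is a heavily simplified home agent sketch for a Cisco router; the virtual network, host range, SPI, and key are placeholders, the exact command syntax varies by IOS release, and a working deployment also needs foreign agent (or proxy mobile IP) support on the visited subnets.

    ! Home agent sketch (addresses, SPI, and key are placeholders)
    router mobile                                           ! start the Mobile IP process
    !
    ip mobile home-agent
    ip mobile virtual-network 10.1.200.0 255.255.255.0      ! home network the roaming hosts keep
    ip mobile host 10.1.200.10 10.1.200.50 virtual-network 10.1.200.0 255.255.255.0
    ip mobile secure host 10.1.200.10 10.1.200.50 spi 100 key hex 00112233445566778899aabbccddeeff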


Chapter 9 Questions
9-1. At what frequency does 802.11b operate?
a) 900 MHz
b) 2.4 GHz
c) 5 GHz
d) 54 GHz

9-2. Which of the following are true about EtherChannel?
a) Cisco proprietary
b) Aggregates the bandwidth of Ethernet channels to be one logical connection
c) Aggregates the bandwidth of Fast Ethernet channels to be one logical connection
d) If there is a partial failure, the remaining links will continue to pass data as part of the channel

9-3. Which of the 802.11 standards operates in the 5 GHz range?
a) 802.11a
b) 802.11b
c) 802.11g
d) 802.11x

9-4. Which of the 802.11 standards can transfer data at 54 Mbps and is backward compatible with 802.11b?
a) 802.11a
b) 802.11b
c) 802.11g
d) 802.11x

9-5. Which of the wireless options provides encryption for data sent over a wireless network?
a) WAP
b) IPSEC/IKE
c) AP
d) WEP

9-6. Which of these are wireless deployment concerns that you should be familiar with (choose four)?
a) MAC
b) Cost
c) Impedance
d) Interference
e) Overloading of an AP
f) 802.11 wireless standards

9-7. The new Cisco WDS features are being implemented in the EnableMode wireless network. What is true of the Wireless Domain Service (WDS)?
a) It runs only on an AP, implements CCKM, and implements QoS for the wireless traffic.
b) It runs only on an AP, connects to the WLSE, and is responsible for all authentications from other APs on the subnet.
c) It connects the WLSE to the other APs on the subnet and delegates RM jobs from the WLSE.
d) It often runs on the AP, implements CCKM, securely connects to other APs on the subnet, connects to the WLSE, and delegates Radio Management jobs from the WLSE to all other APs.
e) It often runs on the AP, securely connects to other APs on the subnet, connects to the WLSE, and delegates Radio Management jobs from the WLSE to all other APs.

9-8. When comparing wireless point-to-point (p2p) and point-to-multipoint (p2mp) networks, which of the following statements are true?
a) There are more bridges in a p2p network.
b) There are more root bridges in a p2mp network.
c) There is higher throughput in a p2mp network.
d) There is one root bridge and one or more non-root bridges in a p2mp network.
e) P2p networks are more secure.

9-9. A wireless system using the 802.1X standard is being implemented on the EnableMode network. What are the three main components of an 802.1X architecture?
a) Client, Authenticator, Certificate Server
b) Authenticator, Certificate Server, Authentication Server
c) Authenticator, Authentication Server, Supplicant
d) Certificate Server, Supplicant, Authenticator
e) Client, Authentication Server, Supplicant

9-10. CCX version 1 and version 2 require support for:
a) WEP, Wi-Fi compliance, Cisco pre-standard TKIP
b) AES encryption
c) Cisco LEAP, support for multiple SSIDs/VLANs, pre-standard eDCF
d) WPA compliance and WPA2 compliance
e) All of the above

9-11. The EnableMode network is replacing its 802.11a/b devices with 802.11g devices. What statement is FALSE about the 802.11g standard?
a) It operates in the same frequency spectrum as 802.11b.
b) It has the same number of non-overlapping channels as 802.11a.
c) It requires antennas specific to the 2.4 GHz band.
d) All statements above are true about the 802.11g standard.


e) None of the above statements are correct.

9-12. The IEEE standard controlling client network access in WPA authentication is:
a) EAP-TLS
b) EAP
c) 802.1X
d) 802.1Q
e) All of the above

9-13. In a wireless network environment, why do point-to-multipoint links usually have less maximum range than a point-to-point link?
a) The total sum of the energy is distributed across numerous radios in a point-to-multipoint architecture, versus most of the RF energy being distributed between only two points in a point-to-point architecture.
b) Point-to-point links usually employ higher-gain antennas at both link ends than point-to-multipoint links.
c) Point-to-multipoint architectures require lower power settings than point-to-point links.
d) On a statistical basis, a point-to-point link is more likely to be a greater distance than point-to-multipoint.
e) All of the above.

9-14. What IEEE 802.x standard supports eight adjacent channels in the UNII-1 and UNII-2 bands designated for indoor use?
a) 802.11g
b) 802.11a
c) 802.11b
d) 802.11i
e) None of the above

9-15. What device can function as a Wireless Domain Server capable of RF aggregation?
a) BR1300
b) AP1200
c) WLSM
d) AP1100
e) All of the above

9-16. You are in the planning stages for the new EnableMode wireless network, and are determining the types of antennas that should be utilized. Which are the four basic antenna types that can be used?
a) Dipole, non-pole, ground effect, bipole
b) Omnidirectional, patch, yagi, parabola
c) High gain, omni, point to point, point to multi-point
d) Wall mount, mast mount, pole mount, window mount
e) Directional, omni-directional, dipole, distributed

9-17. You have been given the task of setting up access points within a building for wireless users. The best position for an access point in a corporate wireless network is (select the best answer):
a) The center of the building
b) In a position determined by a site survey
c) At the edges of the building
d) At the edge of the coverage area shown by the site survey, in the ceiling or the floor
e) Away from any metal or glass

9-18. A new Cisco wireless network is being installed in an EnableMode location, and you need to determine the best antenna to be used. What is the fundamental difference between an omni-directional and a directional antenna?
a) Cisco omni-directional antennas always have the letter "O" in their part number.
b) Omni-directional antennas always look like straight rods.
c) Directional antennas always look like a dish.
d) Omni-directional antennas distribute RF energy in a relatively even manner in most directions, while directional antennas use most of the available RF energy in a specific direction with a specific RF coverage shape.
e) There is no real technical difference; omni-directional and directional antennas are both dipoles.

9-19. You need to purchase a number of Wi-Fi handsets for the EnableMode network and need to compare and contrast the different options. Identify the main Wi-Fi voice handset vendors other than Cisco:
a) Symbol
b) Nortel
c) Avaya
d) Spectralink

9-20. The EnableMode system administrator is performing location site surveys in order to plan for installation of wireless networking devices. In terms of wireless networking, what are leading indicators of links with excessive occlusion (blockage by physical elements)?
a) Coverage area less than 10 square meters at signals greater than -65 dBm
b) Drops in RF signal in excess of 20 dBm over distances of less than two meters
c) Inability to physically see the infrastructure device
d) Distance in excess of 20 meters from an infrastructure device in a carpeted environment
e) All of the above.

9-21. What are the main advantages of the Cisco SWAN architecture?
a) Security
b) Layer 3 mobility
c) Visibility and management of the wireless network
d) Centralized management

e) None of the above

9-22. As the system administrator of an EnableMode network, you are considering the benefits and disadvantages of installing a wireless LAN at your headquarters office. In weighing these considerations, what is NOT a reason for deploying wireless in a corporate environment?
a) There is a need to eliminate rogue access points in the organization and increase LAN security.
b) The organization needs to provide greater mobility to users.
c) Wireless is cheaper to deploy than a wired network.
d) The employer wishes to obtain greater productivity from the employees.
e) All of the above are wireless networking features.

9-23. The WLSE is being configured for managing the EnableMode WLAN. When the WLSE generates an alarm, what actions can the device take?
a) Send an e-mail to an administrator
b) Disable the switch port that the rogue access point is connected to
c) Send a message to a syslog server
d) Generate an SNMP trap
e) All of the above

9-24. The EnableMode network plans to implement the use of public wireless hot spots, and security issues are a concern. Which of the following are primary requirements in PWLAN security? (Choose all that apply)
a) Encryption of user data
b) IPSec encryption
c) Accounting of time and throughput
d) 802.1X
e) Broadcast SSIDs

9-25. A wireless LAN needs to be implemented in a new EnableMode location. How is a baseline RF environment established?
a) With a carefully detailed RF site survey and supporting documentation.
b) With a carefully detailed RF site survey only.
c) By using WLSE's Assisted Site Survey feature.
d) Usually with a spectrum analyzer

9-26. When implementing corporate guest access, an important consideration of the RF coverage is:
a) That the area of RF coverage should offer high data rates only.
b) That the RF coverage offers low-latency roaming.
c) That the area of RF coverage avoids leakage outside the building as much as possible.
d) That the RF coverage is as secure as possible.
e) That the area of RF coverage is as large as possible.

9-27. A site survey needs to be completed at one of the remote EnableMode locations. How does a site survey confirm a deployment plan?
a) By auditing the signal strengths in selected physical areas.
b) By auditing the channel selections in selected physical areas.
c) By auditing the various directional antenna performances in specific physical areas.
d) By ensuring an optimal number of RF infrastructure devices are deployed.
e) All of the above

9-28. You are trying to determine if a Cisco SWAN environment would fit your network needs. SWAN deployments are most often deployed in what types of networks?
a) Large enterprises
b) Branch offices
c) Hot spots
d) Campus environments
e) All of the above

9-29. EnableMode is using the WLSE to manage their Cisco wireless network. What network connectivity tools are available on the WLSE administration page?
a) Ping and traceroute only
b) SNMP reachable, ping, and traceroute only
c) Ping, traceroute, and SNMP reachable only
d) Ping, traceroute, nslookup, TCP port scan, and SNMP reachable only
e) Ping, traceroute, L2 traceroute, nslookup, and SNMP reachable only

9-30. What is NOT an optimal method for detecting co-channel interference?
a) A properly planned and documented site survey, with continued monitoring of the radiating environment.
b) Well-enforced policies on the deployment of rogue APs.
c) Deploying WLSE
d) Deploying SWAN

9-31. The EnableMode WLAN is experiencing problems associated with poor link margins. What are the leading indicators of insufficient link margin?
a) Difference of less than 10 dBm from signal to noise
b) Link works fine initially but flaps or goes down shortly after being turned up
c) Competing sources of 802.11 arrive in the radiating area
d) Link initially deployed at full power settings on infrastructure and client devices, but the link still goes down shortly after being turned up.
e) All of the above

9-32. An EnableMode user has an 802.11g/a capable client card. They are able to associate to a Cisco BR1300 without any trouble; however, they are not able to associate to a Cisco BR1400, although all the wireless settings appear to be correctly configured. What is the most likely explanation?
a) The BR1300 is hard-coded not to accept client associations, while the 1400 is capable of this feature.
b) 802.11a bridging uses the UNII-3 frequency band, which is a different frequency band than what 802.11a clients use.
c) The BR1400 can only accept one associated connection, which is already taken up by the radio on the other end of the bridge link.
d) The BR1400 uses a unique MAC layer protocol implementation that prevents any clients from associating.
e) The user is trying to associate to the root bridge of the 802.11a bridging link. Only non-root bridges can accept a client association.
f) None of the above

9-33. The EnableMode wireless network appears to be having some problems related to co-channel interference. Which are good indicators that interference problems are from a co-channel or adjacent channel source?
a) WLSE indicates levels of 2.4 GHz RF in excess of -45 dBm from non-AP sources within 5 meters of 802.11 clients.
b) Non-native radios with signals within 10 dB of the closest 802.11 infrastructure device.
c) Rogue APs operating on the same channel near approved infrastructure devices.
d) Non-native radios with higher-gain antennas than the closest approved 802.11 infrastructure device.

9-34. A new WLSE is being installed at the EnableMode NOC. Which are the primary functions of the Wireless LAN Solutions Engine 1130? (Select three)
a) Fault monitoring
b) Authentication server for 802.1X clients
c) Configuration Management
d) Wireless client management
e) Radio Management

9-35. You are experiencing some 802.1X issues at one of the EnableMode locations. When troubleshooting 802.1X authentications, what command is most useful?
a) debug dot11 aaa authenticator all
b) debug aaa authenticator all
c) debug dot11 aaa radius all
d) debug dot11 802.1x all
e) debug 802.1x all

9-36. EnableMode is utilizing VOIP on a wireless LAN. How many simultaneous WLAN VOIP calls can be supported by an AP with Quality of Service enabled, assuming that the G.711 codec is used?
a) 7
b) 8
c) 12
d) 64
e) None without proxy ARP enabled

9-37. What is eDCA?
a) The difference in the delay used by 802.11 management frames and data frames
b) The time between when a channel becomes free and when a radio tries to send a frame
c) The standard 802.11 contention mechanism
d) A mechanism for adjusting the random backoff of WLAN traffic based on traffic classification
e) An authentication type for handheld devices

9-38. The EnableMode network is utilizing voice over wireless LAN to provide for a mobile workforce. How would you design frequency overlap for voice over WLAN versus 802.11 for data only?
a) You would ensure that all areas where an 802.11 voice call could be initiated are covered by at least two RF infrastructure devices.
b) You would configure all the RF infrastructure devices to select optimal channels as required.
c) You would ensure each cell is overlapped at least 20% by a second RF infrastructure device.
d) You would ensure all infrastructure RF devices were set to maximum power.
e) None of the above.

9-39. The EnableMode network plans on using VOIP phones over the wireless data network. When deploying a low-latency wireless network, what are the key guidelines that should be maintained?


a) The access points requirements.
b) Use fixed channels, static WEP keys, all APs on the same channel.
c) Dynamic channels, diversity antennas, overlapping channels with more than 20% RSSI
d) Use fixed channels, diversity antennas, same transmit power on the phone as the AP, overlapping channels have less than 20% RSSI.
e) Use fixed channels, CCKM, all APs on the channel, diversity antennas.

9-40. Within the EnableMode WLAN, fast secure roaming needs to be implemented to support wireless VOIP. What components are necessary when implementing fast secure L3 roaming?
a) AP, clients, WLSE
b) , clients
c) , clients WLSE
d) AP and clients
e) AP, CCX clients, WLSM

9-41. The EnableMode network has recently installed a CiscoWorks Wireless LAN Solution Engine (WLSE) to aid in the maintenance and management of the wireless LAN devices. When upgrading the firmware on access points, the WLSE can perform which of the following functions? (Choose the best option)
a) Upgrade firmware of the access point only
b) Upgrade firmware, validate the target AP type, and convert configurations from VxWorks to IOS, all at a scheduled time/date
c) Upgrade firmware and convert configurations from VxWorks to IOS at a scheduled time/date
d) Update firmware and convert configurations from VxWorks to IOS immediately

9-42. The EnableMode network is utilizing the Cisco Wireless LAN Solution Engine (WLSE) to manage the structured WLAN. The WLSE Location Manager performs which of the following functions:
a) Discovers the location of APs and then links them with imported site survey data
b) Is a separate module in the Catalyst 6500 providing location-based services for mobile applications
c) Builds a database of AP locations for use in device grouping and radio management
d) Contains the location of AP management devices, allowing them to correlate GPS data
e) None of the above.

9-43. What is the primary purpose of a template in the WLSE?
a) A template is used to model the RF distribution pattern from access points in Location Manager.
b) Templates are used to set up a model for setting alarm levels in the WLSE.
c) A template is used to create a configuration model for access points in the network.
d) Templates push out configuration files to the access points.
e) Templates are used to generate firmware upgrades to the WLAN components.

9-44. The Cisco Compatible Extensions Program (CCX) provides which of the following with respect to wireless networking?
a) A way for Cisco to avoid joining the standards bodies for wireless LAN
b) A cheaper wireless network.
c) A more secure wireless network.
d) A faster wireless network with faster L3 roaming times.
e) A way for Cisco to accelerate deployment of wireless features.

9-45. While troubleshooting some intermittent 802.11b wireless LAN problems, you use a protocol analyzer. While looking at the wireless LAN packets, which of the following should you find as part of the Frame Control field? (Choose all that apply.)
a) Duration
b) Power Management
c) Order
d) Wired Equivalent Privacy
e) Retry
f) More Fragment

9-46. As part of the new EnableMode wireless network implementation, the use of the WLSE Radio Manager is planned. What are the main functions of the Radio Manager in the WLSE?
a) Rogue access point detection, interference detection, and client walkabout
b) Client walkabout, AP scanning, RM-assisted configuration, self-healing, and auto re-site survey
c) Client walkabout, interference detection, rogue access point detection, location-based services.
d) RM-assisted configuration, rogue access point detection, interference detection, location-based services.
e) None of the above.

9-47. A new CiscoWorks Wireless LAN Solution Engine (WLSE) is being implemented into the EnableMode network. This WLSE does NOT perform what network management function?

a) Aggregating SNMP and syslog messages from its managed APs.
b) SNMP queries of the APs
c) The aggregation of Radio Management data
d) CDP discovery
e) The WLSE performs none of the above functions.

9-48. An SSG is being utilized within the EnableMode Public Wireless LAN. What best describes the function of an SSG in a Public Wireless LAN (PWLAN)?
a) The SSG provides connectivity, client address management, security services, and routing across a WAN from each wireless access point to the service provider data center.
b) The SSG provides subscriber authentication and maintains the state of all users in the hotspot.
c) The SSG is an HTTP proxy that provides captive portal capabilities to the service provider hot spot network.
d) The SSG is a central device that allows wireless clients to cross Layer 3 subnets with sub-second roam times.
e) The SSG provides central management for the PWLAN hotspot network.

9-49. In the EnableMode wireless network, Layer 2 Fast Secure Roaming technology has been implemented. Layer 2 Fast Secure Roaming is enabled by what type of device?
a) An ACS or other AAA server
b) A device running as a WDS
c) The Ethernet switch
d) The WLSE
e) A firewall

9-50. The WLSE and the WLSM perform which roles in the wireless network?
a) WLSM is responsible for management and the WLSE is responsible for mobility.
b) WLSE is responsible for management and the WLSM is responsible for mobility.
c) WLSM is responsible for security and the WLSE is responsible for management.
d) WLSE is responsible for security and the WLSM is responsible for management.
e) WLSE is responsible for security and the WLSM is responsible for mobility.


Chapter 9 Answers

9-1 9-2 9-3 9-4 9-5 9-6 9-7 9-8 9-9 9-10 9-11 9-12 9-13 9-14 9-15 9-16 9-17 9-18 9-19 9-20 9-21 9-22 9-23 9-24 9-25 9-26 9-27 9-28 9-29 9-30 9-31 9-32 9-33 9-34 9-35 9-36
b; a, c, d (remember that EtherChannel only works with Fast Ethernet interfaces, not regular Ethernet interfaces); a; c; d; b, d, e, f; b; d; c; a; b; c; b; b; b; d; d; c; d; a; a, d; d; d; d; b; c; a, c; a; a; a

9-37 9-38 9-39 9-40 9-41 9-42 9-43 9-44 9-45 9-46 9-47 9-48 9-49 9-50
d; c; d; b; b; b; c; c; b; c; a; b; b; b
